source
stringlengths
30
57
title
stringlengths
33
123
transcript
listlengths
27
1.54k
extra_urls
listlengths
0
299
https://www.dwarkesh.com/p/adam-brown
Adam Brown – How Future Civilizations Could Change The Laws of Physics
[ "This transcription was autogenerated and edited with the help of LLMs. Please be aware that there may be hallucinations or typos.", "(00:00:00) - Changing the laws of physics", "Dwarkesh Patel 00:00:00", "Today I'm chatting with Adam Brown, who is a founder and lead of the Blueshift team, which is cracking math and reasoning at Google DeepMind, and a theoretical physicist at Stanford. Adam, welcome.", "Adam Brown 00:00:12", "Delighted to be here. Let's do this.", "Dwarkesh Patel 00:00:13", "Okay, we'll talk about AI in a second, but first, let's talk about physics. First question: What is going to be the ultimate fate of the universe, and how much confidence should we have?", "Adam Brown 00:00:23", "The ultimate fate is a really long time in the future, so you probably shouldn't be that confident about the answer to that question. In fact, our idea of the answer to what the ultimate fate is has changed a lot in the last hundred years. About 100 years ago, we thought that the universe was just static, wasn't growing or shrinking, was just sitting there statically. And then in the late 20s, Hubble and friends looked up at massive telescopes in the sky and noticed that distant galaxies were moving away from us and the universe is expanding.", "So that's like big telescope discovery number one. There was then a learned debate for many years about, you know, the universe is expanding, but is it expanding sufficiently slowly that it'll then recollapse in a big crunch, like a time reverse of the Big Bang, and that'll be super bad for us? Or is it going to keep expanding forever, but just sort of ever more slowly as gravity pulls it back, but it's fast enough that it keeps expanding?", "And there was a big debate around this question, and it turns out the answer to that question is neither. Neither of them is correct. In possibly the worst day in human history, sometime in the 1990s, we discovered that in fact, not only is the universe expanding, it's expanding faster and faster and faster.", "It's what we call dark energy, or the cosmological constant. This is just a word for uncertainty. Is making the universe expand at an ever faster rate, accelerated expansion as the universe grows?", "So that's a radical change in our understanding of the fate of the universe, and if true, is super duper bad news. It's really bad news because the accelerated expansion of the universe is dragging away from us lots of distant galaxies. And we really want to use those galaxies.", "We have big plans to go and grab them and turn them into vacation destinations or computronium or in any other ways extract utility from them. And we can't, if the cosmological constant is really constant, if this picture is correct, because anything close enough, we can go out and grab it, obviously. But if it's further away than about a dozen billion light years, the expansion of the universe is dragging it away sufficiently rapidly that even if we send probes out at almost the speed of light, they will never make it.", "They will never make it there and make it back. They'll never even make it there if it's sufficiently far away. And that means that there's a finite amount of free energy in our future.", "And that's bad. I mean, that means we're doomed to a heat death if that's true. But is it true?", "I mean, that was the second half of your question. And first of all, we keep changing our minds about these things over the last century or so. So on first principles grounds, you may be somewhat suspicious that we'll change our minds again.", "And none of this is settled physics. And indeed, it may be that the cosmological constant is not constant and you should hope with all your heart that it's not. It may be that it naturally bleeds away.", "It may be, in fact, that our fate is in our hands and that our distant descendants will go and bleed the cosmological content away, will force it to go to zero. They will be strongly incentivized to do it if they can, because otherwise we're doomed to a heat death.", "Dwarkesh Patel 00:03:25", "How would they bleed this away?", "Adam Brown 00:03:27", "This obviously depends on physics that we're not totally sure about yet, but it seems pretty consistent with the known laws of physics that the cosmological constant, what we perceive it as being a constant, this dark energy quantity that's pushing the universe apart from each other in many very natural extensions of the known laws of physics. That is something that we have the ability to change. In fact, it can change, can take different values.", "It is not just totally fixed once and for all that in fact, you have what's called different vacuum, different regions of parameter space that you can transition between, in which the cosmological constant can take different values. And if that's true, then, well, you can either sort of wait around and hope to get lucky, hope that the universe just sort of spontaneously moves from one of these vacuums to another one with a lower cosmological constant tending towards zero asymptotically, or you could take matters into your own hand, or you could imagine our descendants deciding that they're not going to just suffer the heat death, that they're going to try and trigger a vacuum decay event to get us from the vacuum we're in to another vacuum with a lower cosmological constant, and our distance descendants will be forced basically to do that if they don't want to suffer a heat death.", "Dwarkesh Patel 00:04:49", "Yeah, Proceed with caution, but definitely proceed.", "Adam Brown 00:04:52", "With caution in these theories where there's lots and lots of vacuums out there. And most of those vacuums are incredibly inhospitable to life as we know it. In fact, seemingly they're just completely inhospitable to all forms of intelligence.", "So you really, really don't want to end up in them. However, again, if our best theories are correct, it seems as though there should be some of them that are much like our own in many ways, but have a lower value of the cosmological constant. And so what we'd want to do is engineer that we end up in one of those vacuums.", "Dwarkesh Patel 00:05:26", "Sorry, what is a vacuum?", "Adam Brown 00:05:28", "Ah, great question. A vacuum is like a possible, well, what we would perceive as a possible set of laws of physics as we see them. So what it really is, is a minima in some higher dimensional abstract laws of physics, space in which you can find yourself in a minima.", "But these minima may just be local minima. In fact, according to our understanding, the minima which we live today, that gives us all of the laws of physics that we see around us, is in fact just a local minimum. And there's a lower minimum.", "In fact, there's many lower minima out there to which we can transition spontaneously or because of our own deliberate action.", "Dwarkesh Patel 00:06:09", "Okay, I'm just going to throw all my confusion at you and you figure out which one is worth dealing with first. What is the nature of the loss function that makes one value a minimum and one higher up? You know, what is exactly the ball rolling up on when it gets out or into a valley here?", "And then you're hinting at the possibility that there are other places in. I'm not sure if you're suggesting in the physical universe or in some hypothetical universe where the vacuum could be different, as in, in reality there are other pockets with different vacuums, or that hypothetically they could exist, or that our universe counterfactually could have been one of these? I don't know.", "This is the kind of thing I'd like throw into. Put everything I can into a Claude prompt and see what comes out the other end.", "Adam Brown 00:00:00", "Good. Well, I'm happy to be your Claude. The loss function is the energy density, and so maybe a good analogy would be water.", "Water can exist in many phases. It can be steam, it can be water, it can be ice. And even if it's in a cloud, let's say it would rather be water than be water vapor, but it's having a tough time getting there because in the middle there's a barrier.", "And so, you know, spontaneously, it can eventually, due to a sort of thermal process, turn from steam into water. These will be like the two minima in this loss landscape. Or you can go and do cloud seeding to turn it from water vapor into water, and so those would be the equivalent of the minima here.", "The existence of different minima in general is a very well-established part of physics. The possibility that we could engineer going from one minimum to another in a controlled way is a more speculative branch of physics speculation, but it seems totally consistent with everything we know that our distant descendants would try to attempt it.", "Dwarkesh Patel 00:01:13", "What would it take to do this?", "Adam Brown 00:01:15", "Probably you'd want something that would look a bit like a particle accelerator, but it would be considerably more controlled. You'd need a very controlled way to collapse a field and make a bubble of this new vacuum that was big enough that it would continue to expand, rather than just recollapse under its own surface tension.", "You would have to do that in a very careful way, both to make sure that you didn't accidentally make a black hole instead by the time you've concentrated all those energies, and also worse than making a black hole, would be ending up in a vacuum that you didn't want to end up in. It would be ending up in a vacuum in which you had not only bled off the cosmological constant in some way, but that you had changed, let's say, the electromagnetic constant or the strong nuclear force or any of these other forces, which would be seriously bad news.", "Because if you did that, your life, as you know it, is extremely well attuned to the value of the electromagnetic constant in your evolutionary environment. It will be very, very bad indeed if we changed those constants as well. We'd really just try and target the cosmological constant and nothing else, and that would require a lot of engineering prowess.", "Dwarkesh Patel 00:02:24", "So, it sounds like you're saying that changing the laws of physics is not even Dyson sphere level crazy. Somebody could do it on some planet in the middle of nowhere.", "Adam Brown 00:02:38", "I'd say it's definitely substantially harder than Dyson spheres as far as the tech tree goes, but it's not magic. We're not actually changing the laws of physics. We're just changing the low energy laws of physics as they present to us in this scenario.", "Again, this is speculative, but it's not super duper crazy. It's a natural consequence of our best theories, or at least some of our best theories of quantum gravity, that they allow for this possibility.", "And there is a meta law of physics, the true laws of physics, be it string theory or whatever else, that you're not changing. That's just the rules of the game. What I'm describing is changing the way that the universe looks around you, changing the cosmological constant.", "So I think again, changing water into water vapor into water is a great analogy here. There's nothing actually -- the laws of physics are still the laws of physics, but the way it feels to live in that universe, the value of the electromagnetic constant is perhaps not an absolute fixed value. It can vary in different places.", "Similarly, the density of water around you, the viscosity would change. It'll be an environmental variable like that.", "Dwarkesh Patel 00:04:01", "One question you might have is if this is the thing that could, maybe spontaneous is the right way to describe it, if this is the thing that can just kind of happen, there's something really interesting where if a thing can happen, you kind of see examples of it happening before. Even with nuclear weapons, I don't remember the exact phrase, but wasn't it the case that early in Earth's history, when there was a higher fraction of 238 isotopes, that there were spontaneous nuclear explosions?", "Adam Brown 00:04:42", "There probably was spontaneous nuclear reactors. They've discovered a seam in Africa where it looks like there was a fission reaction that naturally happened. It didn't explode, but it did do the same thing that happens in our nuclear power plants.", "Dwarkesh Patel 00:05:00", "One way you can look at nukes is like, this thing just would not have been possible if some intelligent beings hadn't tried to make it happen. But something like this happened before because the laws of physics allow it. Is there any story you can tell here where this vacuum decay, maybe it takes a super intelligent species to coordinate to make it happen, but also because it is the thing that the laws of physics can manufacture or can allow for, it has happened before or is happening?", "Adam Brown 00:05:32", "Yeah, absolutely. Almost certainly anything that humans can do can happen without humans. It's interesting to reflect on what aspects of human behavior nature has a tough time doing without us and what it just does on its own.", "For example, we make colder things in our laboratories than really exist naturally in the universe. But the universe certainly could make anything colder just by chance.", "Vacuum decay is something that if it is possible, will in our future definitely happen. That's just a feature of the world that eventually in our distant future, if it's possible at all, it will happen due to a quantum fluctuation.", "Our descendants may not wish to wait around for a quantum fluctuation to happen. They may wish to take the fate into their own hands, since a quantum fluctuations can take exponentially long times to happen. And if they even happened, you'd end up in an unfavorable vacuum, not hospitable for life, rather than trying to steer the cosmological constant in a happy direction.", "But they certainly can happen, and in our future, and indeed definitely will happen, if they're permitted. According to our understanding of quantum mechanics, if they're permitted, they must eventually happen.", "Furthermore, there are again speculative, but not wild theories of the early universe in which this happened in our past, in which we transitioned far, far in the past, maybe into what's called a bubble universe. So we started off in some other much higher vacuum long in the past. And then what we see as the Big Bang was in fact just a sort of local vacuum decay that then gave rise to our the bubble in which we live, everything we see around us.", "Dwarkesh Patel 00:07:19", "Who would be in a position to seed these bubbles?", "Adam Brown 00:07:22", "Usually people are thinking that something just spontaneously happens, in the same way that rain spontaneously happens in a cloud, that somebody didn't go and seed it deliberately to make it happen. But you could more than free to speculate that somebody seeded it to make it happen as well.", "Dwarkesh Patel 00:07:40", "How does this respect the conservation of energy?", "Adam Brown 00:14:42", "Energy, or the conservation of energy, is not conserved in general relativity. Energy is not conserved. It's conserved locally, at things you can do at a local level.", "But in an expanding universe, energy is not conserved globally. This is one of the big surprises.", "That is not a speculative statement. That is a statement that goes all the way back to Einstein and general relativity: energy is simply not conserved at the global level. It's conserved at the local level.", "You can't do something in your lab that will generate free energy. But if you can participate in the expansion of the entire universe, then energy is not conserved.", "Dwarkesh Patel 00:15:18", "So if you were to spawn a bubble universe in your lab, you've theoretically created a lot more matter and energy. What would be the thing that offsets this or that makes this viable?", "Adam Brown 00:15:32", "Energy is conserved in a universe that's not expanding, a static universe. A universe that is expanding, energy is not conserved. It can just appear.", "General relativity is quite clear on that. General relativity, Einstein's theory of space and time, one of our most beautiful and best tested theories, is quite clear on that point.", "Energy is not conserved. To ask what happened to the energy, you can ask at a local level what happened to the energy density. But at a global level, energy is simply not conserved.", "Dwarkesh Patel 00:16:02", "Then do our future descendants have any constraints in terms of... Because earlier we were mentioning, as a catastrophe, we found out about the cosmological constant because it limits our cosmic horizon and thus limits the free energy that our descendants would have access to. But if you can just make entire universes...", "Adam Brown 00:16:23", "Then this is a matter of extreme interest, I would say, to us. It won't be relevant for tens of billions of years, probably, because that's the timescale on which the cosmological constant operates.", "But if the cosmological constant is truly constant, and we've only known about it for 25 years, and there are astronomical observations that seem to be in tension with that, but if it is truly constant, then there is a finite amount of free energy in our universe. If it's not constant, if we can manipulate it, or even if it naturally decays on its own, then there is the possibility of an unbounded amount of free energy in our future, and we would avoid a heat death scenario.", "Dwarkesh Patel 00:17:04", "The situation you mentioned earlier, where somebody seeded our universe, they've created a bunch of energy. That's related to them having something equivalent to a positive cosmological constant in there.", "Adam Brown 00:17:18", "Yes. In any of these scenarios in which our universe is a bubble that formed in a sort of bigger, what's called a multiverse, or that's a loaded term, but a sort of larger universe in which our universe is just one bubble, the higher meta universe also has a cosmological constant, and it is higher than the value in our universe. That is the one sense in which there's some version of energy conservation: you can go down from high to low. It is considerably harder to go from low to high.", "Dwarkesh Patel 00:17:57", "So the idea is that you'd recursively have universes in which the bottommost one would immediately implode because of a negative cosmological constant, and the biggest one is exponentially increasing.", "Adam Brown 00:18:09", "Correct. The rate at which the universe is exponentially increasing is set by the cosmological constant, in which the volume of the universe is exponentially increasing. So you can imagine a scenario in which there was a high cosmological constant.", "You have a bubble universe that has a lower value of the cosmological constant. It continues to expand.", "You could make new bubble universes or new regions in that universe that have a lower cosmological constant, either naturally and spontaneously or due to action that we might take. And as long as that cosmological constant is non-negative, is zero or positive, that universe will not implode. If it goes negative, that universe will eventually implode.", "So you could imagine a cascade in which you go to lower and lower values of the cosmological constant. There are a lot of engineering details to be worked out, but what I'm describing is a scenario that is not inconsistent with the known laws of physics.", "Dwarkesh Patel 00:19:02", "How likely do you think this is?", "Adam Brown 00:19:04", "If the laws of physics are as we believe them to be, and if we do not blow ourselves up in some other way, this is an issue that our distant descendants will eventually have to confront.", "Dwarkesh Patel 00:19:18", "No, no, no. As in the whole, like, there's like other bubbles. Not about something our descendants might do, but the fact that the big bang was the result of a bubble within some other metastable state.", "Adam Brown 00:19:31", "That's a tricky question. But since you asked it, I'd say probably 50%.", "There's a lot we don't understand about any of these questions. They're all super speculative. It's an active area of research how to combine quantum mechanics and expanding universes.", "On the other hand, it seems pretty natural when you do combine quantum mechanics and gravity and try and fit them all together in a consistent picture. If universes can expand a lot, then at all, according to the gravitational theory, then quantum mechanics will naturally populate those bits that can expand a lot, and so you'll naturally end up with an expanding universe. So I would say probably in my heart, slightly higher than 50%, but I'm going to round it down to 50 out of epistemic humility.", "Dwarkesh Patel 00:20:21", "It's funny because this is often the way people talk about their AI timelines of, like, really, I think it's like 2027, but if I'm taking the outside of you, I'm going to say 2030. Okay.", "And is there any way, given our current understanding, of using bubble universes to do useful work for the people outside of it? So to do some computation within it or to get some sort of actual energy out of it for the people.", "Adam Brown 00:20:50", "Outside of the bubble? So the thing about these bubbles is that they tend to expand at the speed of light. So even if you start off outside, you're probably going to end up inside them in short order unless you run away very quickly.", "So this isn't something that we make in the lab and then just remains in a box in the lab. And then we use use to do things. This would be something that we would do or maybe would just happen to us because of spontaneous vacuum decay and it would engulf all of our future light cone.", "And so we wouldn't. It's not a it's not a box that you're using to do things. It's a new place that you live. You better hope that you've engineered the stuff so that that new place is still hospitable for life.", "Dwarkesh Patel 00:21:28", "So look, if it's the case that you can set up some apparatus, not now, but not in this room, but eventually, that if some individual wants to change the constants of nature, they can not only do this, but then the repercussions will extend literally as far as light can expand. You might have some hope that future civilizations, individuals or AIs have tons of freedom, so they can do all kinds of cool things. You can have your own galactic cluster over there, and if you want to go do whatever you want, right, go live your life, and there's going to be some libertarian utopia. But if you can literally destroy the universe, maybe it's a different story.", "Adam Brown 00:22:10", "That is a big negative externality, destroying your future light cone. And in a world with big negative externalities, libertarian fantasies can't really happen. It has pretty good big government governance implications, is that if it is possible for people just to wipe out their entire future light cone — not only themselves, but everybody else who wishes to participate in that future light cone — then we're going to need a government structure that prevents them from doing so.", "The worst-case scenario is even worse than that. Not just that they could do it, but that they in some sense be incentivized to do it. You could imagine really adverse laws of physics in which maybe you could speculatively build some power plant that just really makes use of sitting on that edge of instability. And then each person individually might say, \"Oh, I'm quite happy to bear one in a trillion chance that I wipe out the future light cone because I get so much benefit from this power plant.\"", "But obviously the negative externality means that people really shouldn't do that. So I hope the laws of physics don't turn out that way, otherwise, we're going to have to have some super-arching control.", "Dwarkesh Patel 00:23:21", "I've done a couple of these interviews, and these end up being my favorite interviews, where a normal person who has just had grade school education can think, \"Of course, I understand this, right?\" Or if you've just seen enough YouTube videos about pop sci. Give you a concrete example. When I interviewed David Reich, the geneticist of ancient DNA, I feel like we have a sense that we understand the basics of how humans came to be.", "What is the story of human evolution? And just like the episode revealed to me that the main questions you might have about how humans came to be — where did it happen? When did it happen? Who did it happen with? In fact, the last few decades of insights have totally revolutionized our understanding.", "We have the sense that we understand what basically cosmology implies, but this idea that, in fact, there's this underlying field which not only implies very interesting things about the distant past, about the Big Bang, but also what our future descendants, you know, what kinds of civilizations they'll be able to set up, both from a governance and a practical energy perspective, it totally changes your understanding.", "Adam Brown 00:24:32", "It just keeps changing. Not just your idea, our idea, everybody's idea has changed a lot in my lifetime and may continue to change. In some sense, it's because you have the lever arm, the long lever arm of asking about the very, very distant future that makes even small uncertainties pan out to absolutely ginormous distances in the distant future.", "Dwarkesh Patel 00:24:53", "I think earlier you said, \"I wouldn't be that crazy.\" But also, \"It's not as easy as a Dyson sphere.\" What are we talking about here? How much energy would it take to...", "Adam Brown 00:25:04", "The energy requirements are probably pretty small, much more than we can currently make in our particle colliders, but much smaller just in terms of MC squared than the energy in your body. For example, the energy is not going to be the hard bit. The hard bit is going to be concentrating it together in a really small little bubble that's shaped exactly right in order that it doesn't form a black hole, expands in just the way that you want it to expand, and lands in the vacuum that you're aiming for.", "So it's more going to be a control issue than just a pure energy issue.", "Dwarkesh Patel 00:25:40", "But you think this is just table stakes for distant descendants who are colonizing the stars?", "Adam Brown 00:25:45", "It's not inconsistent with the known laws of physics, which means that it's just engineering.", "Dwarkesh Patel 00:25:53", "I feel like that the most sort of a churchy phrase physics can occur is \"your proposition is not inconsistent with the known laws of physics.\"", "Adam Brown 00:26:03", "Not this.", "(00:26:05) - Why is our universe the way it is", "Dwarkesh Patel 00:26:05", "If we lived in a world of intelligent design, and these were the laws we found ourselves with, at a high level, what is the creator trying to maximize? Other than maybe us existing, does there seem like something that is being optimized for? What's going on here?", "Adam Brown 00:26:26", "If you just throw a dart in laws of physics space, in some sense, you would not. There are some properties of our universe that would be somewhat surprising, including the fact that our life seems to be incredibly hospitable for complexity and interestingness and the possibility of intelligent life, which is an interesting fact. Everything is just tuned just so that chemistry is possible.", "Perhaps in most places you would throw the dart in possibility space, chemistry would be impossible. The universe as we look around us is incredibly rich. There's structure at the scale of viruses all the way to structure at the scale of galaxies.", "There's interesting structure at all levels. This is a very interesting fact. Now, some people think that actually interesting structure is a very generic property, and if we threw a dart somewhere in possibility space, there would be interesting structure no matter where it hit.", "Maybe it wouldn't look like ours, but there'd be some different structure. But really, if you look at the laws of physics, it does seem like they're very well attuned for life. So in your scenario where there's an intelligent creator, then they would probably be — you'd have to say they'd optimized for that.", "It's also the case that you can imagine explanations for why it's so well tuned for life that don't involve an intelligent creator.", "Dwarkesh Patel 00:27:52", "Is there any explanation other than the anthropic principle for why we find ourselves in such a universe?", "Adam Brown 00:27:57", "Well, you suggested one with an intelligent creator, but the usual one that people like to talk about is the anthropic principle.", "Dwarkesh Patel 00:28:02", "So is it 99% that basically the reason we find ourselves in a universe like this is the anthropic principle?", "Adam Brown 00:28:09", "What probability do you put high?", "Dwarkesh Patel 00:28:11", "Well, what probability do you put on, anthropic principle is key to explaining why we find ourselves in the kind of universe we find ourselves in?", "Adam Brown 00:28:18", "I think it's going to depend on what quantity you're asking me about. So if you ask me, you know, 99% of the matter in the solar system lives in the sun or on Jupiter, and yet we live seems to live in this really weird corner of the solar system, why is that? I'm pretty confident that the answer to that is anthropic.", "If we lived in the center of the sun, we'd be dead. And so one should expect intelligent life to live in this weird place in parameter space. So that's perhaps my most confident answer to that question.", "Why do we live where we live? Then if we start talking about different constants of nature, we start getting different answers to that question. Why is the universe tuned such that the proton is just a tiny bit more stable than the neutron?", "That seems like that's begging for an anthropic answer. Of course, if that's true, that demands that there be different places somewhere in the multiverse where in fact the neutron is slightly heavier than the protons, decay to neutrons rather than vice versa, and people just don't live there. So if you want to go down that road, you end up being naturally drawn to the existence of these variables scanning over space.", "Dwarkesh Patel 00:29:32", "Is there some way for the anthropic principle to exist that doesn't involve these bubble universes?", "Adam Brown 00:29:37", "Yes, all you need is that there are different places in some larger possibility space where these quantities scan, where they take different values. Bubble universes are just one way to do that. We could just be different experiments, simulations in some meta-universe somewhere.", "Dwarkesh Patel 00:29:54", "What part of this is the least logically inevitable? Some theories seem to have this feeling of like, \"It had to be this way.\" And then some are just like, \"Why are there these 16 fields and hundreds of particles?\" What part of our understanding of physics?", "Adam Brown 00:30:12", "I would say that there are three categories. There are things like quantum mechanics and general relativity that are not logically inevitable but do seem to be attractors in some sense. Then there are things like: the standard model has 20 fields, and it has a mass of the neutrino. Why do those masses of the neutrino have the values that they have? The standard model was just fine before we discovered that the neutrinos have mass in the 1990s. And those just seem to be totally out of nowhere. A famous Nobel Prize-winning physicist said about the muon, in fact, longer ago than that: \"Who ordered that?\" They just seem to be there but without any particular reason.", "And then there are these quantities that are somewhere in the middle, that are not logically necessary but do seem to be necessary for life as we know it to exist.", "Dwarkesh Patel 00:31:03", "How confident are we that these different properties of different universes would actually be inconsistent with intelligent life?", "Adam Brown 00:31:12", "That's a great question. This line of thought starts to be a skeptical response to the anthropic principle. An example that sometimes people use is a puddle that's sitting in some depression in the ground reflects on how wonderful the universe is, that this depression in the ground seemed to have been made the perfect shape for the puddle to exist. And our view would have said, \"No, the reason the puddle has that shape is because it isn't self-adapted to the hole in the ground.\"", "Maybe no matter what the laws of physics, there would be something that emerged there. Certainly, if you go to the bottom of the sea, or in nuclear reactors, or in various other places, this kind of life will find a way. Philosophy seems to be adapted at least there, where it's very different from the surface of the Earth where we find ourselves. And yet they're able to be, certain life is able to live in undersea vents and is able to adapt itself to those environments.", "I think I basically buy that life is quite adaptable, but whether life is adaptable enough that a universe with a cosmological constant that ripped it apart every microsecond, that seems implausible to me. Or even closer to home, the center of the sun. It's not clear exactly whether we can get intelligent life living at the center of the sun, even though that has the same laws of physics as us. It just has a different environmental variable.", "Dwarkesh Patel 00:32:39", "What is the most underappreciated discovery in cosmology in our lifetime?", "Adam Brown 00:32:44", "In the 2000s and before, we very carefully studied the cosmic microwave background. This, what's sometimes called the echo of the Big Bang, and the inhomogeneity to it, the fact that it's not quite the same in every direction.", "Doing that discovered a super interesting fact that was definitely not known in my lifetime, which is the quantum origin of all of the structure we see in the universe. So if you look out in the universe, the density is not the same everywhere. The density on Earth is much more than in interplanetary space, which is itself much more than in intergalactic space, and the center of the sun is all the more denser. It is inhomogeneous; it is not the same.", "If you look back to the early universe, it was considerably more homogeneous. It was homogeneous to 1 part in 10^5 or 10^6. Almost everywhere, every point had almost exactly the same density.", "So then there's an easy part and a hard part. The easy part is understanding how if you have very small inhomogeneities, how they grow into large inhomogeneities. That's already quite well understood by classical physics. Basically, the idea is this: if you have a place that's denser and a place that's less dense, then the gravitational force pulls stuff towards the high-density stuff. So if you have a small inhomogeneity, they naturally grow under that effect where they just gravitationally fall towards the denser thing. If you start seeded with small inhomogeneities, that will grow large inhomogeneities, and that's well understood.", "The thing that we now understand much better than we did is where those small inhomogeneities come from. Why, just after the Big Bang, was the universe not perfectly homogeneous? Because if it was perfectly homogeneous, there's no opportunity for anything to grow.", "We now understand with a high degree of confidence something that we didn't understand, which is that those inhomogeneities were seeded by quantum fluctuations. When the universe, just after the Big Bang, was considerably smaller than it is today, the effects of quantum mechanics were correspondingly more important. Those quantum fluctuations produced tiny little fluctuations in the density of matter in the universe. And all of those tiny little one-part-in-a-million fluctuations grew into all of the structures you see in the universe: all the galaxies, you, me, everything else.", "Dwarkesh Patel 00:35:12", "Is it a meaningful question to ask what level of structure each individual discrepancy corresponds to, each individual 1 in 10^5 part? Is it a galactic supercluster? Is it a galaxy?", "Adam Brown 00:35:27", "It depends. We believe that these were generated during a period we call inflation, very poorly understood, very early in the universe. And there were fluctuations made not just at one scale in those days, but at all scales, or many, many scales.", "There were fluctuations made at a scale that nowadays corresponds to 10% of the distance across the visible universe, all the way down to structures that were inhomogeneities that were much, much smaller scale that correspond to a galaxy today, all the way down to--now, this is speculation, but in some models of inflation, there were tiny inhomogeneities, very small-scale inhomogeneities that would give rise to primordial black holes, like tiny little black holes left over from the Big Bang. There's no actual evidence in terms of observational evidence, no strong observational evidence for those, but those are a possibility. That's allowed by our theory, and people think about them and look for them.", "(00:37:30) - Making Einstein level AGI", "Dwarkesh Patel 00:37:30", "What makes General Relativity so beautiful?", "Adam Brown 00:37:36", "I think general relativity is really an extraordinary story. It's pretty unusual in the history of physics that you, to first approximation, just have one guy who sits down and thinks really, really hard with lots of thought experiments about jumping up and down in elevators and beetles moving on the surface of planets and all the rest of it, and at the end of that time writes down a theory that completely reconceptualizes nature's most familiar force and also speaks not just to that, but speaks to the origin and fate of the universe and almost immediately achieves decisive experimental confirmation in the orbits of astronomical observations or the orbits of planets and the deflections of lights during eclipses and stuff like that. It's a pretty beautiful theory, and it completely changed our idea of gravity from being a force to just being an artifact of the curvature of spacetime, actually.", "Dwarkesh Patel 00:38:39", "So this is actually a good point to chat about your actual day job. So there's these open debates about the kind of reasoning that these LLMs do. Does it correspond to true reasoning, or is it something more procedural? And it sometimes gets into a definition game.", "But this is maybe a good way to test our intuitions here. The kind of thing that Einstein was doing, where you start off with some thought experiments, you start off with some seeming conceptual inconsistencies in existing models, and you trace them through to some beautiful unified theory at the end, and you make incredibly productive use of these intuition pumps, that kind of reasoning. How far are our AIs from that?", "Adam Brown 00:39:30", "I have heard it said, and I kind of agree with this, that maybe the very last thing that these systems will be able to do, these LLMs will be able to do, is, given the laws of physics as we understood them at the turn of the last century, invent general relativity from that. So I think that's probably the terminal step. And then once it can do that, if it can do that, then there won't be much else to do as far as humans are concerned.", "It's pretty extraordinary, I mean, particularly coming from a physics background in which progress is pretty slow, to come to the AI field and see progress being so extraordinarily rapid day by day, week by week, year by year. Looking at it, it certainly looks like these LLMs and these AI systems in some sense are just interpolators, but the level of abstraction at which they're interpolating keeps going up and up and up and we keep sort of riding up that chain of abstractions. And then, presumably, from a sufficiently elevated point of view, the invention of general relativity from Newtonian physics is just interpolation at some sufficiently grandiose level of abstraction.", "That perhaps tells us something about the nature of intelligence, human intelligence, as well as about these large language models. If you ask me how many years until we can do that, that is not totally clear, but in some sense, general relativity was the greatest leap that humanity ever made. And once we can do that, perhaps in 10 years, then we will have fully encompassed human intelligence.", "Will it be of the same character as what Einstein did? Clearly, there are many disanalogies between human intelligence in these large language models, but I think at the right level of abstraction, it may be the same.", "Dwarkesh Patel 00:41:36", "Do you see early examples of the kind of thing it was? Obviously not at that level of difficulty, but you just start off with, hey, here's something funny, go think about it for a while. Is there something especially impressive you see when you kind of run that kind of experiment at the moment?", "Adam Brown 00:41:55", "These systems tend to be doing more elementary material than that. They tend to be doing undergraduate-level material. Yes. I haven't seen anything that jumps out to me like inventing generative relativity or even a toy version of that.", "But there is, in some sense, creativity or interpolation required to answer any of these problems. Where you start with some science problem, you need to recognize that it's analogous to some other thing that you know, and then sort of combine them and then make a mathematical problem out of it and solve that problem.", "Dwarkesh Patel 00:42:31", "Do you think AI mathematicians, AI physicists will have advantages over humans just because they can by default think in terms of weird dimensions and manifolds in a way that doesn't natively come to humans?", "Adam Brown 00:42:46", "Ah, you know, I think maybe we need to back up to in what sense the humans do or don't think natively in higher dimensions. Obviously, it's not our natural space. There was a technology that was invented to think about these things, which was, you know, notation, tensor notation, various other things that allows you to much using just even writing, as Einstein did 100 years ago, allows you to sort of naturally move between dimensions. And then you're thinking more about manipulating these mathematical objects than you are about thinking in higher dimensions.", "I don't think there's any sense, I mean, in which large language models naturally think in higher dimensions more than humans do. You could say, well, this large language models have billions of parameters. That's like a billion-dimensional space. But you could say the same about the human brain, that it has all of these billions of parameters and is therefore billion-dimensional.", "Whether that fact translates into thinking in billions of spatial dimensions, I don't really see that in the human. And I don't think that applies to an LLM either.", "Dwarkesh Patel 00:43:48", "Yeah, I guess you could imagine that if you were just seeing like a million different problems that rely on doing this weird tensor math, then in the same way that maybe even a human gets trained up through that to build better intuitions, the same thing would happen with AI. It just sees more problems. It can develop better representations of these kinds of weird geometries or something.", "Adam Brown 00:44:13", "I think that's certainly true, that it is definitely seeing more examples than any of us will ever see in our life. And it is perhaps going to build more sophisticated representations than we have. Often in the history of physics, a breakthrough is just how you think about it, what representation you do.", "It is sometimes jokingly said that Einstein's greatest contribution to physics was a certain notation he invented called the Einstein summation convention, which allowed you to more easily express and think about these things in a more compact way that strips away some of the other things. Penrose, one of his great contributions, was just inventing a new notation for thinking about some of these space times and how they work that made certain other things clear.", "So clearly coming up with the right representation has been an incredibly powerful tool in the history of physics and many incredibly large developments, somewhat analogous to coming up with a new experimental technique in some of the more applied scientific domains. And one would hope that as these large language models get better, they come up with better representations, at least better representations for them that may not be the same as a good representation for us.", "Dwarkesh Patel 00:45:27", "We'll be getting somewhere when you ask Gemini a question and it says, \"Ah, good question. In order to better think about this, let me come up with this new notation.\"", "So we've been talking about what AI physicists could do. What could physicists with AI do? That is to say, are your physicist colleagues now starting to use LLMs? Are you yourself using LLMs to help you with your physics research? What are they especially good at? What are they especially bad at?", "Adam Brown 00:45:57", "What physicists don't do, or don't productively do, is just say, \"LLM, please quantize gravity for me. Go.\" That doesn't get you anywhere. But physicists are starting to use them in a big way, just not for that.", "More of an assistant rather than agent. Three years ago, they were totally useless. No value whatsoever in them. Low-hanging fruit uses include doing literature search. So if you just say, \"I have this idea, what are some relevant papers?\" They're great at that, and semantically greater than any other kind of search.", "The other thing that they're extremely useful for now that they were useless for before is just as a tutor. There is a huge amount of physics that a physicist would be expected to know that has already been done. No human has ever read the whole literature or understands everything, or maybe there isn't even something that you feel you should understand, or you once understood that you don't understand.", "I think the very best thing in the world for that would be to phone up a colleague, if you knew exactly who to phone, they'd probably be able to answer your question the best. But certainly, if you just ask a large language model, you get great answers, probably better than all but the very best person you could phone. They know about a huge amount, they're non-judgmental, they will not only tell you what the right answer is, but debug your understanding on the wrong answer.", "So I think a lot of physics professors are using them just as personal tutors. And it fills a hole because there are personal... If you want to know how to do something basic, it's typically very well documented. If you want to know quite advanced topics, there are not often good resources for them.", "Talking to these language models will often help you debug and understand your understanding. It'll explain to you not only what the right answer is, but what you thought was wrong. I think it'll be a pretty big deal, sort of analogous to the way that chess players today are much better even when they're playing across the board without the benefit of a computer, just having been able to be tutored by chess machines off the board. This is the same: you want to understand this thing about group theory, go and ask the machine and it'll explain it to you and it won't judge you while it's doing it.", "Dwarkesh Patel 00:48:26", "So there's an interesting question here. Clearly, these models know a lot, and that's evidenced by the fact that even professional physicists can ask and learn about fields that they're less familiar with. But doesn't this raise the question of... We think these things are smart and getting smarter.", "If a human that is reasonably smart had memorized basically every single field, and knew about the open problems, knew about the open problems in other fields and how they might connect to this field, knew about potential discrepancies and connections, what you might expect them to be able to do is not like Einstein-level conceptual leaps, but there are a lot of things where just like, \"Hey, magnesium correlates with this kind of phenomenon of the brain. This kind of phenomenon correlates with headaches. Therefore, maybe magnesium supplements cure headaches.\" These kinds of basic connections...", "Does this suggest that LLMs are, as far as intelligence goes, even weaker than we might expect given the fact that, given their overwhelming advantages in terms of knowledge, they're not able to already translate that into new discoveries?", "Adam Brown 00:49:42", "Yes, they definitely have different strengths and weaknesses than humans. And obviously one of their strengths is that they have read way more than any human will ever read in their entire life.", "I think maybe again the analogy with chess programs is a good one here. They will often consider way more possible positions, Monte Carlo research, than any human chess player ever would. And yet, even at human-level strength, if you fix human-level strength, they're still doing way more search. So their ability to evaluate is maybe not quite as natural as a human.", "The same I think would be true of physics. If you had a human who had read as much and retained as much as they had, you might expect them to be even stronger.", "Dwarkesh Patel 00:50:23", "Do you remember what the last physics query that you asked an LLM was?", "Adam Brown 00:50:29", "Well, a recent one was I asked it to explain to me the use of squeezed light at LIGO, which is a topic that I always felt like I should understand, and then tried to explain it to somebody else and realized that I didn't understand it. And went and asked the LLM.", "That blew me away, that it was able to exactly explain to me why what I was thinking was incorrect. So, why do we use this particular form of quantum light in interferometers used to discover gravitational waves?", "The reason that's a good topic is perhaps because it's an advanced topic, not many people know that, but it's not a super advanced topic. There are, out of a physics literature of millions of papers, there have got to be at least a thousand on that topic. If there was just a handful of papers on a topic, it's typically not that strong at it.", "Dwarkesh Patel 00:51:25", "Do you reckon that among those thousand papers is one that explains why the initial understanding or thought you had about it was wrong? Because if it just intuited that, that is actually quite like, that's pretty fucking cool.", "Adam Brown 00:51:40", "I don't know the answer. That is an interesting question. I think it might be able to debug even without that. If you do much simpler things like give these language models code, it will successfully debug your code even though presumably no one has made that exact bug in your code before. This is at a higher level of abstraction than that, but it wouldn't surprise me if it's able to debug what you say in that way.", "Dwarkesh Patel 00:52:02", "It does falsify a lot of stories about them just being fuzzy search or whatever. Scott Aaronson recently posted about the fact that GPT-4 got a B or an A- on his intro to quantum computing class, which is definitely a higher grade than I got. And so I'm already below the waterline.", "But you teach a bunch of subjects, including general relativity at Stanford. I assume you've been querying these models with questions from these exams. How has their performance changed over time?", "Adam Brown 00:52:38", "Yeah, I take an exam I gave years ago in my graduate general relativity class at Stanford and give it to these models, and it's pretty extraordinary. Three years ago, zero. A year ago, they were doing pretty well, maybe a weak student, but in the distribution. And now they essentially ace the test.", "In fact, I'm retiring that. That's just my own little private eval. It's not published anywhere, but I just give them this thing to follow along how they're doing, and it's pretty strong.", "They, you know, maybe it's easy by the standards of graduate courses, but it's a graduate course in general relativity, and they get pretty much everything right on the final exam. That's just in the last couple of months that these have been doing that.", "Dwarkesh Patel 00:53:29", "What is required to ace a test? Obviously, they probably have read about all the generality textbooks, but I assume to ace a test, you need something beyond that. Is there some way you'd characterize physics?", "Adam Brown 00:53:43", "Physics problems, compared to math problems, tend to have two components. One is to sort of take this word question and turn it, using your physics knowledge, into a math question and then solve the math question. That tends to be the typical structure of these problems. So you need to be able to do both.", "The bit that maybe only LLMs can do, and wouldn't be so easy for other things, is step one of that: turning it into a math problem. I think if you ask them hard research problems, you certainly can come up with problems that they can't solve. That's for sure.", "But it's pretty noticeable, as we have tried to develop evaluations for these models, that as recently as a couple of years ago, certainly three years ago, you could just scrape from the internet any number of problems that are standard, totally standard high school math problems that they couldn't do. And now we need to hire PhDs in whatever field, and, you know, they come up with one great problem a day or something. The difficulty, as these LLMs have got stronger, the difficulty of evaluating their performance has increased.", "Dwarkesh Patel 00:54:49", "How much do they generalize from these difficult problems to not only that domain of physics but just generally becoming a better reasoner overall? If they just see a super hard GR problem, are they better at coding now?", "Adam Brown 00:55:03", "Generally, you see positive transfer between domains. So if you make them better at one thing, they become better at another thing across all domains.", "It is possible to make a model that is really, really, really good at one very particular thing that you care about. And then at some stage, there is some Pareto frontier and you start degrading performance on other metrics. But generally speaking, there's positive transfer between abilities across all domains.", "Dwarkesh Patel 00:55:31", "We've got these literally exabytes of data that we collected from satellites, telescopes, and other kinds of astronomical observations. Typically in AI, when you have lots of data and you have lots of compute, something something large model, great discoveries. Is there any hope of using these exabytes of astronomical data to do something cool?", "Adam Brown 00:55:59", "Yeah, great question. People are trying that. There's an effort, Shirley Ho and Flatiron, which is basically that exact plan.", "They take the pipeline of all of the data that comes out of these astronomical observatories, they plug them into a transformer, and see what happens. You can come up with all sorts of reasons in advance why that might not be something that will work, but you could also come up with reasons in advance why large language models wouldn't work, and they do. So I'm very curious to see what happens.", "The dream there would be that there are lots of things hidden in the data that no human would ever be able to tease out. And that by doing this, you could just revolutionize the amount of... These astronomical observatories are incredibly expensive. If we can just have a computer better parse all of the data from them in a way that no human ever could, that would be a tremendous improvement.", "These things are very good at finding patterns, and maybe they'll find patterns that are not particularly interesting to a human.", "Dwarkesh Patel 00:57:00", "Okay, so going on the GR thread again, maybe one advantage these models have is, obviously, you can run a lot of them in parallel, and they don't get fatigued or dazed. And you could imagine, again, naively, you would imagine some sort of setup. I assume you're doing much more sophisticated things, but naively, you could imagine a setup where, look, it seems like special relativity, which is something that maybe is easy to understand, is just like you start off with, let's just randomly select a couple of observations. Obviously, they were randomly selected, but you know, and let's just think about what's going on here for a while. Let's just do a bunch of chain of thought for a year or so.", "And you could just imagine doing this and doing some sort of best of n across a thousand different randomly selected parts of the current model of the universe and just seeing at the end of it which one comes up with some especially productive line of thought.", "Adam Brown 00:58:01", "Yeah, I think that could be productive. One challenge in that would be, how do you evaluate whether you had a good theory at the end? That's going to be the tricky bit.", "For things that are most easily parallelized are things in which, if you get the right answer, it's clear you got the right answer. You know, perhaps things in NP, one might say. Whereas in this case, is special relativity... How would your computer know when, if it generated special relativity, that it was onto a winner?", "There are various ways in which it could know. It could check that it was mathematically self-consistent and various other facts. But the evaluation is going to be a tricky part of this pipeline that you might wish to set up.", "Dwarkesh Patel 00:58:45", "Is there no experimental way that you could detect time dilation or something?", "Adam Brown 00:58:49", "There is an experimental way that you could detect time dilation, but that would involve sending out probes or doing something in the real world. Whereas I thought you were just trying to run this in a data center.", "Dwarkesh Patel 00:59:00", "But now, today, we have these exabytes of information, so you could just have some sort of ability to search or query, like, I've come up with this theory.", "Adam Brown 00:59:08", "I think maybe this is a philosophical difference, where you maybe think that the way that a theory is good is that it best matches the data with some loss minimization. That's not always how new theories, particularly revolutionary theories, come up.", "There's this famous fact: even when they were moving from a geocentric worldview to a heliocentric worldview, it was so beautiful, the theory, by the time they were finished with the epicycles. I mean, not beautiful. It was so ornate by the time where these planets were moving around the sun but moving on epicycles, that actually the data didn't any better fit the heliocentric worldview than the geocentric worldview, especially since they didn't properly understand the ellipticity of the Earth's orbit around the sun.", "So it wasn't. Why does one theory replace another? One reason is obviously that it's more consistent with the data, but that's by no means the only theory.", "And if you just optimize for being consistent with the data, you're going to end up with—if you optimize only for being consistent with the data—you're going to end up with epicycles. You're not going to end up with some beautiful new conceptual thing. Part of the reason people like these new theories is that even though they're maybe not better at matching the data, they are more beautiful, and we'd have to teach—and that's been a reliable guide in the history of science—and we'd have to teach these LLMs beauty.", "(01:00:31) - Physics stagnation and particle colliders", "Dwarkesh Patel 01:00:31", "This actually raises an interesting question, which is, in some sense, we have the same problem with human scientists, right? And there's all these people who claim to have a new theory of everything. I guess there's not an easy verifier that everybody agrees to, because some people call them cranks, other people think they're geniuses. But somehow we've solved this problem, right?", "Adam Brown 01:00:53", "Well, we've sort of solved it. We haven't solved it in the same way that, if you have some new sort algorithm that you claim is faster than everybody else's, a sort algorithm doesn't need to be any dispute about that. You can just run it and see.", "Physics is not the same way. It is definitely the case that there's a number of people who think they have great theories, and there are even perfectly respectable people who are professors at prestigious universities who have very different opinions about what is and isn't a worthwhile direction to be exploring. Eventually, you hope that this gets grounded in experiment and various other things.", "But the distance between starting the research program and the community reaching consensus based on data and other considerations can be a long time. So, yeah, we definitely don't have a good verifier in physics.", "Dwarkesh Patel 01:01:43", "Even if we did someday get superhuman intelligence that could try to find all the remaining sort of high-level conceptual breakthroughs, how much more room is there for that? Basically, was it just like 50 years of \"here's all the really advanced-grade physics,\" and now we just bog through additions to the Standard Model?", "If you look at Nobel Prizes, year after year, they get less and less—at least in physics, they tend to get less and less significant. And in fact, this year, the Nobel Prize in physics was awarded to Hopfield and Hinton for their work in AI. So apparently, maybe a taste of things to come.", "Adam Brown 01:02:22", "I don't think there's reason—I don't think we should be pessimistic about that. I think there could easily be room for completely new conceptualizations that change things. I don't think it's just turning the crank going forward.", "I think new ways to think about things have always been extremely powerful. Sometimes they're fundamental breakthroughs; sometimes they are breakthroughs in which you even take regular physics. This is a story to do with renormalization that maybe is a little too technical to get into, but there was a sort of amazing understanding in the 1970s about the nature of theories that had been around for forever—or for years at that stage—that allowed us to sort of better understand and conceptualize them.", "So I think there's good reason to think that there's still room for new ideas and completely new ways of understanding the universe.", "Dwarkesh Patel 01:03:15", "Do you have some hot take about why the current physics community hasn't—I mean, cosmology is maybe a very notable exception, where it does seem like the expected value of the light cone is switching back and forth.", "Adam Brown 01:03:28", "Well, if you take particle physics, I think it's because we were a victim of our own success, is that we wrote down theories in the 1970s, and those theories were—it's called the Standard Model—and those theories were too good, in the sense that we won.", "In the sense that we could predict everything that would come out of a particle accelerator, and every particle accelerator that's ever been built, and every particle accelerator that's likely to be built given our current budgetary constraints. So particle physics—I mean, there were some questions around the edges, but this model that we wrote down in the '70s and into the '80s basically completely cleaned up that field.", "We wish to build bigger, more powerful particle accelerators to find stuff that goes beyond that, but basically, we won, and that makes it difficult to immediately—if you get too good, then it's hard to know where to push from there. That's as far as particle physics is concerned.", "Dwarkesh Patel 01:04:34", "Is there some—so it sounds like the problem with these colliders is that the expected entropy is not that high of like—the reason it's not that useful is because we kind of have some sense of what we'd get on the other side. Is there some experimental apparatus that we should build where we, in fact, do have great uncertainty about what would happen, and so we would learn a lot by what the result ends up being?", "Adam Brown 01:04:56", "Well, the problem with particle colliders is, in some sense, that they got too expensive. And CERN is tens of billions of dollars—a small number of tens of billions of dollars—to run this thing.", "Dwarkesh Patel 01:05:08", "You could build AGI with that money, right?", "Adam Brown 01:05:10", "It's super interesting how everybody talks about how academics can't possibly compete with the big labs, but the cost of CERN is larger than the cost of big model training runs by a lot. So that's just academics pooling their money. That's an interesting fact.", "They got so expensive that it's difficult to persuade people to buy a new one for us that's even bigger. It's a very natural thing to do, to build an atom smasher that just smashes things together to higher energy. It's a very natural thing to see what comes out.", "People were perhaps somewhat disappointed with the output of the LHC, where it made the Higgs, which was great, and we found it, but we also expected it to be there. And it didn't make anything else, any of these more fanciful scenarios or anything basically unexpected. People had speculated we'd see supersymmetry there, or we'd see extra dimensions, and basically that was a null result.", "We didn't see anything like that. I would say we should definitely build another one if it was cheap to do so, and we should build another one once AGI has made us all so rich that it's cheap to do so. But it's not the obvious place to spend $50 billion if you had $50 billion to spend on science.", "Often it's these smaller experiments that can look for things in unexpected places. A decade ago, there was BICEP, which is a reasonably cheap, tens of millions of dollars, experiment at the South Pole that thought it had seen some hints in the cosmic microwave background of gravitational waves. That would have been revolutionary if true.", "Not worth doing BICEP if it cost $10 billion. Definitely worth doing BICEP if it costs $10 million. So there's all sorts of experiments like that, often observational.", "Dwarkesh Patel 01:07:01", "What is the value of seeing these primordial gravitational waves?", "Adam Brown 01:07:04", "Oh, it gives you hints. You're just examining the night sky very closely and seeing hints of what happened at the Big Bang. This is a sort of different approach to doing high energy physics.", "Why do you want to build a big collider? You want to build a big collider because the bigger the collider, the more high energy you can smash things together with. And Heisenberg's uncertainty principle says that high energy means short resolution.", "You can see things on very small scales. That's great, except the cost to build them, there's some scaling laws, and the scaling laws are not particularly friendly.", "There is another sort of approach that one might say, which is, you know, there was a ginormous explosion that happened, which was the Big Bang. If you imagine, if we look out in the universe, it's expanding. If you sort of play the tape backwards, it's contracting.", "Eventually it all contracts at 13.8 billion years ago in the Big Bang. And so that's a very big particle collider indeed. And so by just examining very closely the Big Bang and its aftermath, we're able to hopefully probe some of these quantities that are very difficult to probe with particle colliders.", "The disadvantage is that you can't keep running it and adjust the parameters as you see fit. It's just like one thing that happened once, and now we're having to peer backwards with our telescopes to see what happened. But it can give us hints about things that would be inaccessible with any future collider.", "Dwarkesh Patel 01:08:41", "Is there any information about the distant past that is, in principle, inaccessible?", "Adam Brown 01:08:45", "Probably not in principle. Something happened to the universe in its evolution, which is that the very early universe, just after the Big Bang, was opaque to light. We can only see light past about 300,000 years after the putative Big Bang.", "Before that, everything's so dense. It's like just a dense plasma that light just gets absorbed by. It's like trying to look through the sun.", "And so we cannot see directly anything from before 300,000 years. Nevertheless, we can infer lots of stuff that happened from before 300,000 years. In fact, looking at that light, what's called the cosmic microwave background that was emitted at that time, we infer lots of stuff about just due to the patterns of anisotropies that we see in the sky.", "We can infer a great deal about what was happening earlier. Most of our confidence about modern cosmology comes from a number of experiments that, starting in the '80s but accelerating in the 2000s, really very carefully measured that anisotropy and allowed us to infer stuff before that. At the information theoretic level, there's nothing inaccessible.", "Dwarkesh Patel 01:09:55", "I guess that makes sense. Conservation of information. Maybe you will tell me that that also isn't true.", "Adam Brown 01:10:01", "Well, that's a great question. I mean, there's been a lot of debate in the black hole context about whether information is conserved by black holes, but the modern consensus is that it is.", "(01:11:10) - Hitchhiking", "Dwarkesh Patel 01:11:10", "All right, Adam, what are your tips for hitchhiking?", "Adam Brown 01:11:21", "Oh, good question. So I hitchhiked a bunch around America and Europe. I've done Oxford to Morocco, when I moved from Princeton out to Stanford, I hitchhiked a bunch of other times, down to New Orleans, various other places.", "I think probably the biggest tip for hitchhiking is to stand in a good place. Some counterparty modeling. Imagine the person who's picking you up.", "They need time to see you, to evaluate you and to decide they're going to pick you up and then to safely stop. And that all needs to happen. So stand somewhere where people can see you, possibly at a stoplight, and where there's a place for them to safely pull over.", "Dwarkesh Patel 01:12:02", "How do you model the motivations of people who pick you up? What are they getting out of this?", "Adam Brown 01:12:08", "I think it's different for different people. I think about 20% of people will just always pick up hitchhikers no matter what. Even if I was dressed very differently and presented very different, I think some people would just pick people up no matter what.", "I basically fall into that category now. It's hard coded into my brain that I will 100% pick up hitchhikers always under all circumstances. Just because enough people have generously picked me up down the years that I just feel as though it's my duty and sort of not subject to a cost benefit analysis.", "Just it's in there. Many other people are evaluating you and just trying to decide what you're in for. Some people are lonely and want somebody to talk to.", "Some people have just a spirit of adventure and find it exciting to pick people up. Certainly it's not a representative cross section of people. I would say there's definitely a selection bias in who picks you up.", "They tend to be more open and more risk tolerant.", "Dwarkesh Patel 01:13:04", "And what was your motivation for that? Were you just in need of a car or what was going on?", "Adam Brown 01:13:10", "No, I enjoy meeting people. I enjoy the experience of meeting people and the weird episodic sense of which, just, you never know what's going to happen. I think I have a very high tolerance for ambiguity, and I enjoy that.", "Dwarkesh Patel 01:13:27", "What was the percentage of, \"We just had a normal conversation, they went in the general direction I was going, and that was that,\" versus, \"I've got a crazy story to tell about X incident\"? What percentage is each?", "Adam Brown 01:13:40", "I think some people are just totally normal people. Families moving their child to college, and you get there and you help them move some stuff into the dorm room just to, just to thank you, all the way through to absolutely wild cases. Probably 20% are just like, this is one of the craziest things that ever happened in one way or another.", "Dwarkesh Patel 01:14:04", "Any particular examples of the wildest things?", "Adam Brown 01:14:07", "Oh, yeah, huge. I mean, it's just absolutely a fire hose of wild things happening. I could tell so many stories.", "I remember once there was a trucker who picked me up in the desert outside Salt Lake City and who drove me to Battle Station, Nevada. And who, as we were talking—the truckers are always, in fact, the most interesting of all. It's typically illegal or anyway in violation of their employment contract for them to pick people up, so those guys are really, and it's always guys, are really pushing the envelope in terms of picking you up. The truckers often will say, \"You're the first person I've had in my cab in 20 years of trucking or something,\" and then they tell you about 20 years' worth of things that have been on their mind. So, I'd say that those are often the really interesting ones.", "As I said, there was this one in Utah who just talked from the moment I got into the cab until we got to Nevada. I kind of got the feeling that he had sort of excess mental capacity and that this was his, you know, he was now just gonna dump it on me. And he was telling me all about his life, and I remember this very well, how his brother-in-law thought he was a loser, his sister's husband. But like, now he had the hot fiance, so who was the loser?", "And then just sort of gradually over the course of the six hours, it just suddenly occurred to me that his fiance was doing advanced fee fraud on him. The whole thing was some ginormous, and he was being scammed by his fiance. And very unfortunately for them, they tried to execute the scam while he had me in the cab, and he never had anyone in his cab.", "So now he had me in his cab, and they were trying to do some fraud on him. And I was able to, they had some wheat factory in Wales, United Kingdom, that they had some British High Court document saying that he was entitled to if he paid off the lien on it. There was some long, complicated story that was totally flagrantly false.", "I kind of felt like I had a moral obligation to him to break the news to him. On the other hand, we were in the middle of nowhere in Nevada, and it was clearly a very important part of his personality that this was so. So I kind of waited until we got close and said, is it possible that your fiance is being scammed by these people? You know, sort of raised the notion of scamming.", "And he was willing to intellectually entertain the possibility. And then we got a bit closer. Is it possible that you were yourself being scammed by your fiance? And then he was like, \"No, no, no, it can't be.\" And he had all these documents to show that it was all legit, and they were just sort of, to somebody from a British legal background, transparent forgeries.", "He did eventually accept it and was just crying on my shoulder in some truck stop. It was quite a high pathos moment. And then said, uh, this happened before.", "And it turned out he'd previously been scammed in the same way or a similar way through somebody he'd met through the same match.com profile. That was his lucky profile because, you know, people kept messaging him through it. So we, you know, we talked through that and worked through that and, like, I felt in some ways I'd been his guardian angel and, uh, but he, you know, he'd also been my guardian angel and picked me up in the middle of the desert, so there was some, there was some great exchange there.", "Dwarkesh Patel 01:17:38", "That's crazy. I hope he closed down that profile.", "Adam Brown 01:17:44", "I hope so. I mean, we, you know, I did chat to him about that possibility and he wasn't fully bought in on it, but, uh, yeah.", "Dwarkesh Patel 01:17:52", "What's the longest you've been stranded somewhere?", "Adam Brown 01:17:55", "That would probably be one time in Richmond, Virginia, in some not particularly good neighborhood, trying to hitch out of there. I think that was about a day, which is really bad. That's really bad.", "Sometimes, if you get a good spot, that's worth a thousand miles. Just don't, don't give it up just for a short hop anywhere. If you get a bad spot, get out of there on any means necessary because there's, because there's probably a thousand X variance in how high quality hitchhiking spots are, I would say.", "Dwarkesh Patel 01:18:25", "How did you find the time to like get stranded for a day at an end?", "Adam Brown 01:18:29", "In terms of intensity, it doesn't really take that much wall clock time, as we say. Coast to coast is, um, you know, like a week or so. It's pretty fast because you don't, you're not yourself driving, in that sense, it's easier.", "You do have to wait, and, you know, there is definitely high variance how long you can be. But in terms of sort of incidents per minute, it's, it's a pretty good way to see the world. And you see such a cross section of people who I might never, never otherwise meet, and such a sort of high variance cross section.", "Everything from sort of idle millionaires cruising around the country looking for adventure to people who just got out of prison to, in one memorable incident, well, it eventually transpired as we were going along that they were, uh, they had, they were actually just teenagers and I didn't, somehow didn't clock that when getting in the car. And they, they had stolen the family car and were, were driving west, um, without a plan. And that, yeah, there I gave him a talk, I was talking to, and, uh, bought them dinner and some, some life advice. So that was some fun stuff I got.", "Dwarkesh Patel 01:19:35", "Did you make them call their parents?", "Adam Brown 01:19:36", "I did make them call their parents, yes. Or, you know, heavily encouraged them to call their parents.", "Dwarkesh Patel 01:19:40", "Is there a luck to get the professor?", "Adam Brown 01:19:44", "Yeah, none of these people typically realize that, uh, you know, your academic background never really comes up in conversation, typically. I mean, sometimes it does, but typically that's, that's not the nature of the conversations.", "Dwarkesh Patel 01:19:55", "Was there any time you felt particularly unsafe?", "Adam Brown 01:19:58", "I have definitely felt more unsafe picking up hitchhikers than I have hitchhiking. Maybe I just got lucky, but picking up hitchhikers, there it tends to be, um, you know, no one really picks up hitchhikers, uh, anymore, and there's definitely a selection effect on who's hitchhiking.", "Dwarkesh Patel 01:20:18", "Right.", "Adam Brown 01:20:19", "I've definitely felt more in risk of my life with hitchhikers I picked up than I ever did hitchhiking. But, you know, it's possible I just got lucky. You don't see the other branches of the wave function.", "Dwarkesh Patel 01:20:29", "What are the other interesting insights from just getting this random cross section?", "Adam Brown 01:20:34", "Yeah, all sorts of facts. A lot of people just like to talk. There's a lot of, a lot of people out there, and I like to talk too, so it's mutually beneficial.", "Dwarkesh Patel 01:20:41", "Well, the truckers, I imagine are especially so.", "Adam Brown 01:20:43", "Those guys are interesting. They're all cheating their logs. They have certain logs about how long they can travel for, and at least every single one who's ever picked me up has all been, in some way or another, gaming the system of their logs about how long they're allowed to drive for and playing games with time zones.", "They're smart people, and they just have a lot to say and don't really have anybody to say it to. So they're very grateful.", "Dwarkesh Patel 01:21:14", "What are they especially insightful about?", "Adam Brown 01:21:17", "They tend to have listened to a huge number of audiobooks. They have a ginormous amount of information stored in their brain, but nobody to tell it to. Also, many of them tend to have had unlucky romances at some stage in their past that they've never really gotten over or spoken to.", "I really feel as though many of them would do well to speak to a therapist. But you are the therapist in that case. In many ways, people will tell you things.", "Frequently people will say things like, \"I've never told anybody else this in my life before.\" That's common, not just the truckers, other people as well. Sometimes it's families picking you up, and so they're not going to say that.", "Often it's just single people picking you up, and they'll say, \"I've never said this before to anyone else in my life.\" And they'll tell you some story of their life. I do think it's an exchange, and they're also getting quite a lot out of the conversation.", "I remember one case going to New Orleans. Somebody just meant to only take us, I think it was just some state trooper had come along in South Carolina and was going to arrest us because it's illegal in some states to hitchhike in North Carolina. And so I was like, \"Okay, just take the next ride.\"", "And it was just 10 miles down the road. He ended up getting so into it that we ended up driving maybe 1,000 miles out of his way by the time we'd gone. He'd had this, we were having great conversations, just absolutely wonderful time, and he just wanted to keep going and going and drive us through the night.", "Then we ended up going through the Deep South in the middle of the night and arriving near New Orleans around dawn. He'd had a father who had been in the military, but he'd kind of had a difficult relationship with. He ended up going and visiting his father's grave in Baton Rouge, never having done that in the 20 years since his father died.", "But just as this sort of turned, I mean, he just was driving along expecting to go home, and then it just turned into this sort of spiritual quest for him. Stuff like that can be pretty gratifying. It's also sort of cheating.", "You're not, in my way of thinking about it, meant to be taking people out of their way. They're meant to be going where they're going, and you go with them, and they take you no further. But in this case, I think he needed to go there, so that was good for him.", "Dwarkesh Patel 01:23:47", "Did you stay in contact with any of the people you hitchhiked with?", "Adam Brown 01:23:51", "Typically, no. I would almost consider it poor form to do so. But actually, there was one lady who came to stay in New York later.", "She was going down to Haiti to sort of be a doctor there. She was a doctor, and so I stayed in contact with her a bit. But typically it's just the nature of the interaction is that you have this sort of beautiful moment in time together, and then that's it.", "Dwarkesh Patel 01:24:21", "Any other tips that somebody should know? I mean, should they do this anymore, given that it's largely uncommon and so uncommon types of people might pick you up?", "Adam Brown 01:24:32", "I think it used to be very common in the United States. It's still reasonably common in Europe. It used to be very common in the United States, and then there were some mass murderers who drove the popularity down by targeting hitchhikers.", "Maybe this is just pure cope. In my mind, you need to worry about that less because if you are a mass murderer, it's really not a high expected value strategy to cruise around looking for hitchhikers since there are so few of them. But that just might be pure cope in my head.", "I've never refused a ride for safety grounds, but I would, I hope I would if necessary. Sometimes you would refuse a ride because somebody is only going a short distance, and you're in a good hitchhiking spot. It's kind of bad karma to refuse a ride, but sometimes you should do that.", "Other tips: don't write your exact destination on your sign. Write the sort of direction in which you're going. The reason is maybe twofold.", "One, a lot of people, if they're heading towards that place but not going to that place, will not stop because they think, \"Oh, I'm not going to wherever it is, I better not. I'm not going there, so I won't pick you up,\" even though you'd very much appreciate a partial ride there. The other reason is if you do want to decline a ride, it's certainly a lot easier to do so.", "Dwarkesh Patel 01:25:58", "If the person says, \"Oh, I'm going to that city.\"", "Adam Brown 01:26:01", "Right, that's hard. Where if they say they go into that city and you've written something more vague on your sign, then it's maybe easier to decline a ride. If you want to get out of the car, the classic, if you get in and you feel unsafe, is to say that you're carsick because even serial killers don't want vomit in their car.", "So that's a good reason to get out, and then you just say, \"Okay, I'll just stay here.\" That's another trick. I've never had to deploy that.", "Dwarkesh Patel 01:26:26", "Oh, I was just about to ask.", "Adam Brown 01:26:27", "No, I've never had to deploy that. Typically, it's pretty, there's a moment of anxiety in the first minute. But then after a minute, it's clear that everybody is, they're also anxious about you.", "In many ways, you can tell that they're quite nervous about you. And then after a minute, it's clear that everybody's, if not a sensible human being, then at least a safe human being. And everything's super relaxed for the rest of the ride, typically.", "Dwarkesh Patel 01:26:52", "Any other strange people who picked you up that come to mind? Not necessarily strange, but just memorable.", "Adam Brown 01:26:59", "So many different kinds of people. I remember there was one seemingly very successful cowboy, driving some fancy truck in Wyoming. He had a big herd of cattle and all the rest of it, and was asking me what I do.", "At that time, I was doing cosmology, so I tried to explain to him. It had no connection with anything. He just didn't understand a word I was saying all the way through.", "Eventually, we landed on the fact that the stars in the sky are just like the sun, only much further away. This was a fact that, in his life up to that stage, he had just never encountered.", "That was extremely gratifying because he was blown away by that fact. He wasn't intellectually incapable of understanding it. He just never, in his 50 years of existence up to that moment, ever heard that fact.", "His mind was just totally racing. This was reorienting his picture of his place in the universe. \"The universe must be so big if there are stars out there!\"", "He phoned his wife, who I think was somewhat less excited, and then took me to a gun store and bought me lunch. He was a rancher, seemingly a very successful rancher based on everything about him. He had some prize, high-quality bulls that were some rare kind of high-quality bulls. I can't exactly remember the details, but he just never really contemplated what the night sky meant for him.", "Dwarkesh Patel 01:28:37", "There's a Sherlock Holmes story where Holmes learns that actually the sun is the center of the solar system.", "Adam Brown 01:28:44", "Oh, interesting.", "Dwarkesh Patel 01:28:45", "Watson tells him this, and Holmes is like, \"Why did you tell me this? I try to reserve mental space for things that are actually relevant to my work. Now I have to forget this.\"", "Adam Brown 01:28:58", "A Hitchhiker's Guide to the Galaxy.", "(01:29:00) - Nagasaki", "Dwarkesh Patel 01:29:00", "What did you learn from studying the first-hand accounts of the Nagasaki bombers?", "Adam Brown 01:29:08", "During the pandemic, my landlord had a big library, and I just started reading some books in the library during deep lockdown. There was some sort of enigmatic statement in some book about the history of Japan.", "Dwarkesh Patel 01:29:18", "Where do you stay that your landlord has a library?", "Adam Brown 01:29:21", "I live in a house that used to belong to the chair of the English department at Stanford, and then it was inherited by his grandson who rents it to me. He has a very extensive library.", "I was going through it during the first lockdown and came across this super enigmatic statement in a book about the history of Japan. I was super fascinated by it and started, for reasons that I'll explain in a moment, then just became obsessed for a few months on reading absolutely everything I could about the bombing of Nagasaki.", "It's the most recent nuclear weapon ever to be set off during wartime. It was reasonably controversial because people questioned whether we should have done it or not. That wasn't the question I was looking at. The question I was looking at wasn't, \"Should they have ordered it to be done?\" but, \"Were the people who did it even following orders?\"", "It's a pretty wild story that I certainly didn't know before any of this happened. It was never meant to be a mission to Nagasaki. It was meant to be a mission to bomb Kokura, a different Japanese city, but they got there, and it was clouded over.", "They had very strict instructions: \"Do not bomb unless you can see the target.\" That was the order.", "They got to this other city, passed over a bunch of times, and they couldn't see the target because it was covered in clouds. Then they went to their secondary target, Nagasaki, and it was again covered in clouds, and they did a whole bunch of passes.", "They'd made various mess-ups beforehand, including getting lost. They'd made a number of personal flying mistakes on their part that meant that they didn't have enough fuel once they got to Nagasaki to carry the bomb back to base. They probably would have ended up in the ocean had they tried.", "They were extremely motivated at the time. This was the only nuclear weapon that existed in the world. We'd had two, and then it went down to one. Now there was one, and they were just about to drop it in the ocean and lose it.", "According to the official account, after having done all this, on the third and final pass over Nagasaki, there was a miraculous hole in the cloud that suddenly opened up, and then they dropped it. That story is a bit suspect, if for no other reason than that they actually missed. Little-known fact, they missed Nagasaki.", "They were aiming for one point, and they hit another point that was on the other side of the hill, such that the original thing they were aiming for was reasonably untouched by comparison, considering the fact that a nuclear weapon had been dropped. They missed by much more than you would miss if you were doing visual bombing, and they had been told to do visual bombing.", "There's this kind of suspicion that they were doing a little bit of radar bombing against direct orders. So is it possible that 50% of all of the nuclear weapons ever dropped in combat were, in fact, dropped against direct orders? If true, that's a pretty striking fact about nuclear war, since people are somewhat worried with nuclear war that someone will launch nuclear weapons without being ordered to do so.", "It does kind of look like 50% of all the nuclear weapons ever dropped in combat were dropped against direct orders. When they got back, Curtis LeMay was going to court-martial them and was super mad, but then the war ended, and they didn't want to do it for PR reasons.", "I just ordered and found every account ever written by every person. It was super fascinating to do that because all these different people had completely non-overlapping lives. Some of them were on the Manhattan Project and were there as observers and later won Nobel Prizes for physics. Some of them were just people who were just there for one moment.", "Dwarkesh Patel 01:33:39", "So Louis Alvarez was on the plane?", "Adam Brown 01:33:44", "There was typically a physicist, a representative of the Manhattan Project, on the plane just in case. So Louis Alvarez was someone there. He actually wasn't on the Nagasaki mission. He was on the Hiroshima mission.", "But in his biography, he's like, \"They said they saw a hole in the clouds. I don't think I believed them.\" So that was, I think, one of the hints. It was maybe reading his autobiography at some stage, that was one of the big hints.", "The other people insist there was. But what's super clear is that, whether or not there was a hole in the clouds, and probably there was a hole in the clouds, just because of some of the technical things to do with their discussion, though it's definitely not obvious.", "What's clear is that whether or not there was a hole in the clouds, they certainly had decided in the cockpit on that final run that, no matter what, they were going to drop it. So even if there wasn't a hole in the clouds, they had decided to drop the nuclear weapons against direct orders.", "Dwarkesh Patel 01:34:42", "And had they written, basically, \"Oh, we totally saw a hole in the clouds, but even if we hadn't, we would have dropped it.\"", "Adam Brown 01:34:48", "That basically is. Yeah. So different people write different things.", "Dwarkesh Patel 01:34:50", "How did you end up on the plane?", "Adam Brown 01:34:51", "There are about ten people on these planes. Did any of them say? Not all of them were, you know, some of them were some ways away from where the action is happening. There's the bombardier who says that he saw a hole in the clouds. There's the pilot who says something.", "But everyone has their own different perspective, and some of the perspectives are just totally, this is something that I guess I'd always been told by my history teachers but never really appreciated until I'd done this 360-degree view of history, that people can describe the same events and just, they have flatly inconsistent memories of each other.", "Nobody who was on the plane said that they faked the hole in the cloud story, but some people who were on the plane said they were determined to drop the bomb no matter what, and they were highly incentivized to do this, because if had they not done it, they'd have probably, as it was, they only barely made it back to their emergency landing spot in Okinawa. They would have definitely ended up in the drink, and certainly the bomb would have ended up in the drink had they not done it.", "So I don't know. I'm not a professional historian, and maybe there'll be a difference of opinions, but it's clear there was something highly sus about at least 50% of all the nuclear weapons dropped in combat.", "Dwarkesh Patel 01:35:57", "The interesting thing is that the reason nuclear war was averted in other cases is also because they refused to follow direct orders, right? So in this case, or in the case of Petrov, he didn't report the seeming sighting of Nuke Storm America, and that obviously contradicts orders.", "Adam Brown 01:36:15", "Yeah, there's nuclear insubordination in both directions.", "Dwarkesh Patel 01:36:18", "That's right.", "Adam Brown 01:36:19", "There's the good kind, where they maybe should drop the bomb according to their orders and refuse to, and then there's the other kind.", "(01:36:19) - Adam’s career", "Dwarkesh Patel 01:36:27", "I also want to ask, so you've had not only one remarkable career but two remarkable careers. In physics, you're a close collaborator of people like Leonard Susskind, and you've done all this interesting research. Now you're helping do the reasoning work that Google DeepMind's working on in AI. Is there some chronology you have in your head about how your career has transpired?", "Adam Brown 01:36:58", "Oh, I don't impose narratives on it like that. It's certainly a very big contrast between doing physics and writing retail papers, as it were.", "Dwarkesh Patel 01:37:13", "Retail.", "Adam Brown 01:37:14", "Doing one-by-one writing physics papers and then doing AI, which moves just tremendously faster, and trying to contribute to wholesale production of knowledge in that way.", "They have very different impacts in terms of counterfactual impact. In physics, you write some papers, and you're like, had I not written that paper, no one would have written that paper for years or ever, perhaps. Computer science doesn't feel like that. It feels like if you didn't do it, someone else would do it pretty soon thereafter. On the other hand, the impact, even a few days of impact in computer science, these things are going to change the world, hopefully for the better, to such a large degree that that's much bigger than potentially all the physics papers you ever wrote.", "Dwarkesh Patel 01:38:02", "That's interesting you say that about, you feel that physicists are not fungible in the same way. The story about why physics has slowed down is usually that, in fact, there isn't any low-hanging fruit. The idea that you would discover something that somebody wouldn't have written about for many years to come. I had a couple of double negatives there, but basically, we found all the things that you can just write a paper about. You're not just going to think about something and find something that somebody else wouldn't have written about otherwise.", "But here, you're saying the field that's moving way faster, which is computer science, that's the one where all these people are going to come up with your algorithms if you hadn't come up with them yourself. And it's physics where if you had more Leonard Susskinds and Adam Browns, you would have much faster progress, potentially.", "Adam Brown 01:38:55", "Well, partly there's just so many more people working on the problems in computer science than there are in physics. Just the number of people is part of what makes the counterfactual impact.", "Dwarkesh Patel 01:39:06", "How many theoretical physicists are there versus how many people are working on AI research?", "Adam Brown 01:39:11", "AI research around the world? I don't know how many people are in research, but it's like thousands and thousands and thousands. The amount of matter is 100, 200, 300.", "Dwarkesh Patel 01:39:22", "Really?", "Adam Brown 01:39:22", "Well, in the narrow domain of high-energy theoretical physics. There are many more physicists than that if you include people more generally, but they're sufficiently specialized. I mean, that's partly part of the reason is that it's a much more specialized field. So in a very specialized field, the number of people who would actually write that paper is a much smaller number.", "Dwarkesh Patel 01:39:40", "How much do you ascribe the slowness of physics to these kinds of things that are just intrinsic to any field that is as specialized and as mature versus to any particular dysfunctions of physics as a field?", "Adam Brown 01:39:54", "Yeah, we look back on the golden era of physics from the 1900 through 1970s or something as a period when things happened. I do think there is a low-hanging fruit aspect to it. We already talked about how the Standard Model is so successful in terms of particle colliders that it's just hard to make rapid progress thereafter. So I don't really see it as a dysfunction of the field so much as being a victim of our own success.", "Having said that, does physics have fads? Does physics have fashions? Does physics have any of these other things? Absolutely, it does. But quite how much counterfactual progress we'd make if that weren't true, I don't know.", "Dwarkesh Patel 01:40:37", "How well-calibrated are the best physicists?", "Adam Brown 01:40:40", "It doesn't necessarily pay to be well-calibrated, and that incentive structure is perhaps reflected in the poor calibration of many of the best physicists. First of all, because physics is a sufficiently mature field, all the good ideas that look like good ideas have already been had, or many of them. Where we're at now is the good ideas that look like bad ideas.", "So in order to motivate yourself to get over the hump, get over the barrier, and actually explore them, you need a little bit of irrational optimism to ride out the initial discouraging things that you'll discover as you go along.", "I would say that typically theoretical physicists are not particularly well-calibrated and tend to be in love with all their own theories and make highly confident predictions about their own theories. Before the LHC turned on, there were certainly a lot of high-energy theorists making extremely confident predictions about what we'd see at the LHC, and it was typically their own favorite particle that we'd see. While I'd love to have found supersymmetry, it would have, in some sense, felt somewhat unjust to reward the hubris of people making overconfident and poorly calibrated predictions. So, yeah, that's definitely a thing that happens.", "Dwarkesh Patel 01:42:02", "But I wonder if poor calibration on the individual level is somehow optimal on the collective level.", "Adam Brown 01:42:08", "I think that's basically right. The same is kind of true in other domains of life as well. Of course, with startups, if you were properly calibrated about how likely your startup is to succeed, maybe you wouldn't do it.", "But it's good for the ecosystem that certain people are willing to give it a go. I think it's good for the ecosystem and perhaps bad for the individual to be well-calibrated.", "(01:43:25) - Mining black holes", "Dwarkesh Patel 01:43:25", "Another topic I know you studied a lot is how one might mine a black hole.", "Adam Brown 01:43:34", "Oh yeah, right. I read a paper about that. Very good.", "Dwarkesh Patel 01:43:37", "Tell me about it.", "Adam Brown 01:43:38", "Okay, so what do we mean by \"mine a black hole?\" Mining a black hole means taking energy out of a black hole that used to be in a black hole. Obviously, if our distant descendants have used up all of the energy in stars and everything else, the black hole might be the last thing they turn their eye to.", "Can you get energy out of black holes at all? The old story, pre-1970s, is no. A black hole is one way: matter falls in, it never comes out, it's stuck.", "The thing that Hawking and Bekenstein discovered in the 70s is that once quantum mechanics is involved, that's not true anymore. Once quantum mechanics is involved, in fact, energy, even without you doing anything, starts to leave black holes. The problem, as far as our distant descendants will be concerned, is that it leaves black holes extremely slowly.", "So if you took a solar mass black hole, same mass as the sun, just collapsed to form a black hole, there'll be this little quantum, what's called Hawking radiation nowadays, little quantum Hawking radiation in which the energy will leach out again very, very slowly. The temperature of a solar mass black hole is measured in nanokelvins, a very low temperature. So the energy leeches out when something that cold, so cold you couldn't even see it in the cosmic microwave background, it leeches out incredibly slowly back into the universe.", "And that's bad news because it means the energy comes out super duper slowly. So the mining question is, can you speed that up? A solar mass black hole, if you don't help it, will take about 10 to the 55 times the current age of the universe to have given out all its energy back into the universe.", "Can you make that faster? There were these proposals stretching back a few decades that you could do what's called mining black holes, where we see the Hawking radiation that escapes when we are a very long way away from the black hole. But actually, mathematically, it's known that much of the Hawking radiation doesn't escape.", "It just sort of makes it a little bit out of the black hole and then falls back in again. And there was this proposal that you could kind of reach in with a mechanical claw, obviously not crossing the horizon, because otherwise, you've lost the claw and you're somewhat counterproductive, but just outside the horizon, just grab some of that Hawking radiation and just drag it a long way away from the black hole and then feast on it or do whatever it is you want to do with it. In that way, you could mine a black hole.", "You could speed up the evaporation of a black hole by a huge factor. So in fact, the lifetime would no longer go like the mass cubed, like it does with just unaided Hawking radiation, but would scale like just the mass, so considerably faster for a large black hole.", "So this was these proposals and what I had a somewhat pessimistic contribution to the story, which is that the existing proposals did not work. They didn't work to speed it up. And in fact, you can't speed it up.", "You can't get down that M cubed down to M. You can't, in fact, get it anything less than M cubed. It still scales like the mass cubed. The length of time you need to wait to get all the energy out of a black hole still scales like the mass cubed.", "And what goes wrong is ultimately a material science problem. So this scoop that comes down really close to the horizon, now, from one point of view, that's just like a space elevator, albeit a very high-performance space elevator. Space elevators, you'll remember, are these ideas for how we might get things off the surface of the Earth without using rockets.", "The idea is that you have some massive orbiting object sort of very long way away, beyond geostationary orbit, and then you dangle off that a rope down to the surface of the Earth, and then you can essentially just climb up the rope to get out. That's the space elevator idea. And already around Earth, it's hitting pretty hard material science constraints.", "So if you want to make a space elevator, the trouble with making a space elevator isn't so much supporting the payload that you're trying to have climb up. It is merely just the rope supporting its own weight because each bit of the rope needs to support not only its own weight but also the weight of all of the rope beneath it. So the tension that you require keeps getting more and more and more as you go up.", "At the bottom, there is no tension effect. It doesn't even touch the Earth. It's not like a compression structure that's like a skyscraper that's pushed up from below. It's a tension structure that's held up from above.", "But as you go up, because you need more and more tension, you also need to make the rope thicker and thicker and thicker. And if you try and on Earth or around Earth, build a space elevator out of steel, say, it just doesn't work. Steel is not strong enough.", "You need to keep doubling the thickness until, by the time you get to geostationary orbit, the thickness of the steel rope is more than the size of the Earth. Like, the whole thing just doesn't work at all. But carbon nanotubes are this material that we discovered that are much stronger than steel.", "So, in fact, around Earth, carbon nanotubes will just about work. If we can make them long enough and pure enough, then they will be strong enough that we will be able to build a space elevator around Earth in, you know, maybe sometime in the next century, that you only need a couple of doublings of the thickness of the carbon nanotubes along its entire length. So carbon nanotubes work great around Earth, but they are totally inadequate for black holes.", "For black holes, the critical material science property you need for this rope is the tensile strength to mass per unit length ratio. It needs to be strong, high tensile strength, but low weight, light, low mass per unit length. And that's the critical ratio.", "And carbon nanotubes is 10 to the minus 12 or something on that scale. And that is simply not strong enough at all. In fact, what I showed in my paper is that you need a tensile strength to weight ratio that is as strong as is consistent with the laws of nature.", "So, in fact, the laws of nature bound this quantity. The finiteness of the speed of light means you cannot have an arbitrarily strong rope with a given mass per unit length. There is a bound set by the C squared in some units that bounds the maximum possible tensile strength that any rope can have.", "Any rope, in fact, that has that, or an example of a rope that has that, is a string. So a string is, I mean, a fundamental string from string theory is an example of a hypothetical rope that is just strong enough to saturate that bound, that strength bound. And then the problem is the following.", "The problem is that if you have a rope that saturates the bound as strong as any rope can be, it is just strong enough to support all of its own weight exactly on the edge there, with exactly no strength left over to support any payload it might wish to carry. And that's ultimately what dooms these mining black holes, you know, these rapid mining black hole proposals.", "Dwarkesh Patel 01:50:30", "And what happens if you try to make the rope stronger?", "Adam Brown 01:50:32", "Well, you can't. One example of a thing that goes wrong is the speed of sound in a rope goes up with the tension and down with the mass per unit length. And if you try and use a rope that's stronger than this or some hypothetical rope, you would find that the speed of sound is greater than the speed of light. And that's a pretty good indication.", "Dwarkesh Patel 01:50:53", "What is the speed of sound?", "Adam Brown 01:50:54", "So, if you just take a rope stretched between you and me and ping it, there will be little vibrations that head over towards you. Those vibrations are subluminal. If it's just a normal rope, they move at the speed of light. For a string or something that saturates null energy condition, it would be faster than the speed of light.", "That would be an example of why there's something wrong with that proposal.", "Dwarkesh Patel 01:51:21", "So, it just happens to be the case that the rope cannot mine black holes. I think we've mentioned a couple of other bounds like this where there's no principled reason you might have anticipated ex ante why there would be such a bound that prevents something that just gets in our way, but it just so happens to be this way.", "Does this suggest that there's some sort of deeper conservation principle we'd be violating? And then the universe conspires to create these engineering difficulties which limit that?", "Adam Brown 01:51:56", "Yes, nothing is ever a coincidence. Usually, from the perspective of the story I just told to do with mining black holes, it's not clear what exactly will be broken about the universe if you could mine black holes somewhat faster than we can. There are other ways of thinking about it in which, if you could make a string that was strong enough to actually do it, if you could make a rope that was stronger than this bound, various other things would go wrong.", "There are various symmetry arguments that that can't happen. But often, it turns out, if we have these bounds, that there's something that sort of saturates the bound or gets very close to the bound. And that's a sign that you're on the right lines with some of these bounds.", "Dwarkesh Patel 01:52:43", "On the right lines in what sense?", "Adam Brown 01:52:44", "As in, if you have a bound but you can't think how to get close to the bound, that's usually an indication that you need to think closer. Because often these bounds, if you're clever enough, there's a way to get to the bound.", "There's no rule that it has to be so, but that's often the case. Someone will come up with a bound, and there will be a gap between the bound and how close we can get. Usually, more ingenuity will take you up to the bound.", "Dwarkesh Patel 01:53:15", "I guess the thing I'm curious about is why it would be the case that such a bound would exist in the first place. And how often do you run into these things? Basically, are you expecting to discover something in the future about why it had to be this way, that you can't mine black holes?", "Something would be violated that tells us something important about black holes, that they can't be mined, and it's deeper than the tensile strength of the string that would be required to mine it.", "Adam Brown 01:53:41", "Yeah, good question. I started these investigations because it offended my intuition for various information theoretic reasons, the idea that black holes could be mined with parametric speed ups. When I thought harder about it, the reasons why I thought that couldn't happen didn't really make sense.", "So in this particular case, maybe someone will come up with a reason. I don't actually have a particularly strong reason why they can't be mined anymore, except that they can't.", "Dwarkesh Patel 01:54:11", "Okay, so we can't get the material out of the black hole at a pace that would make it reasonably useful to us. What can we do with black holes? What are they good for?", "Adam Brown 01:54:22", "If you have a small black hole, you can get stuff out of them more rapidly. The temperature of a black hole is inversely proportional to its size. So one thing that people have talked about with black holes is using them to extract all of the energy from matter.", "As you know, most chemical reactions are pretty inefficient. You burn gasoline and you extract, as a function of the rest mass of the gasoline that you started with, one part in 10 billion of energy from the gasoline that you started with. So that's bad from the point of view, you know, you have MC squared worth in a gallon of gasoline. You've got a full MC squared worth of energy in there, and you can only get out one part in 10 to the 10. That's a pretty unsatisfactory situation.", "Roughly speaking, the reason that all chemical processes are so inefficient is that they only address the electromagnetic energy in the electrons. A very small fraction of the electromagnetic energy in an electron in atoms is stored in the electromagnetic interaction between the electrons and between the nucleus and the electrons. Most of it is stored in the nucleus itself, in the strong nuclear forces, and particularly in the rest mass of the protons and neutrons that constitute it.", "So you can do much better if, instead of doing electromagnetic interactions, you use nuclear interactions that can probe the energy in turning protons into neutrons. That's why nuclear power plants are so much more efficient on a per mass basis than chemical power plants like coal plants or gas plants because you're getting a much higher fraction. Best case scenario, you're getting one part in 10 to the three or 10 to the four of the rest mass of the uranium that you start with, you're extracting as energy.", "But even there, even in that process, it's still only absolute best one part in a thousand of the rest mass. And the reason is that you are using where much more of the energy is stored, which is the strong and weak interactions between the protons and the neutrons. So much more is available to you.", "But still, at the end of whatever the process you finish with there, there's a number that will be conserved, and that is what's called the baryon number. So it's the total number of protons plus the total number of neutrons. You can transmute protons into neutrons or vice versa in nuclear processes, which is part of the reason they're so much more energy than things that just affect the chemistry.", "But still, most of the energy is stored in the rest mass of the protons and the neutrons. And you want to get that, and nuclear processes conserve that. Beta decay will maybe turn a proton into a neutron or vice versa, but the total number of protons plus neutrons is not changing. And so therefore 99.9% of the energy is inaccessible to you.", "So what you need to do to get that energy and try and get most of the MC squared out of the matter that you have, what you need to do is use a process that eats baryon number, in which you can start off with a proton and a neutron and end up with no proton or neutron. Instead, all of that energy is unleashed in high energy radiation that you can use for your own purposes.", "So electromagnetic interactions won't do that. Strong interactions also won't do that. Weak interactions won't do that. The only force of nature that will do that, with a small caveat, the only force of nature that we know that will do that is the gravitational interaction.", "And so it is a property of black holes that you can stand outside the black hole and throw protons and neutrons into the black holes, and then it'll process it and then spit out photons at the end in Hawking radiation and gravitons, which is going to be slightly annoying to have to capture, and neutrinos. But they're there in principle, and in principle, you could capture them. So one thing that black holes might be technologically useful for in the future is you start off with a much smaller black hole than what I've described, than the size of the sun.", "Dwarkesh Patel 01:58:42", "Be very careful about making sure it doesn't grow.", "Adam Brown 01:58:44", "You can be super duper careful and throw in protons and neutrons and then get out photons. In principle, if you could capture everything that's emitted from the black hole, including the gravitons and the neutrinos, that gets rid of the baryon number conservation problem. It allows you to build power plants that approach 100% efficiency.", "And by 100%, I mean, not the way we measure gas turbine efficiency, where we talk about the total available chemical energy in the gas. I mean, 100% of the MC squared of the entire gas you're putting in.", "Dwarkesh Patel 01:59:19", "Although, if you consider our cosmic endowment, we're not exactly lacking for mass.", "Adam Brown 01:59:26", "We have a lot of mass. On the other hand, we also have plans for our future that involve exponential growth, and eventually we will run low on that mass. Not that many doublings before using up the whole galaxy, so you want to use it carefully.", "(01:59:42) - The holographic principle", "Dwarkesh Patel 01:59:42", "Let's talk about black holes. How much information can a black hole store?", "Adam Brown 01:59:48", "That's a great question. That has been a very productive line of thought. The answer to that question goes back to Hawking and Penrose.", "You could even ask another question, which is: how much information can anything store?", "Dwarkesh Patel 02:00:04", "Can we back up? Why do we ask this question of black holes in particular? How often do we ask, \"How much information can the Sun store?\" Why are we interested in how much information a black hole can store?", "Adam Brown 02:00:21", "Well, it turns out that that's been an incredibly productive line of thought. And it also turns out that that is the main fact that we're most confident about about quantum gravity.", "So the two great theories of 20th century physics: gravity, Einstein's theory of the curvature of spacetime and gravity; and quantum mechanics, the theory of the very small, to do with Heisenberg's uncertainty principles and atomic spectra. Gravity tends to make itself felt at the very large scale, and quantum mechanics tends to make itself seen at the very small scale.", "These are the two most beautiful theories of 20th century physics, the two things that we should be most proud about that we discovered in the early 20th century. It was noticed pretty early on that these two theories seem to be inconsistent with each other. The most obvious ways to try and reconcile quantum mechanics and gravity break. You can't really shove them together.", "And this is a problem if you think that the world should be comprehensible, that there should be some theory that is consistent, that describes the world. So this has been a big project in theoretical physics over the last few decades, trying to understand how we can take Einstein's general relativity and quantum mechanics and make them meld together in a mathematically and physically consistent manner.", "It's tricky, in part because there's very little experimental guidance because general relativity tends to make itself felt at large scales, quantum mechanics at small scales. So trying to find a place where they meet in the middle, and it must be that they do meet, but trying to drag that out with experiment is very tricky.", "This has been a big project, trying to figure out how to do this. Einstein spent some years unsuccessfully doing this in the later, less productive part of his career.", "This project of trying to unite these is something that a lot of people have thought a lot about. String theory comes out of this project, a number of other lines of thought.", "There is, however, one fact about that merger that we are most confident about, and about anything about the merger. And that exactly returns to this question of how much information you can store in a given region of spacetime. And in fact, how much region.", "And the answer to that involves black holes. So the answer is how much? If you have a region of a certain area, maybe a sphere of a certain area, and you said, \"How much information can you store in that region?\" The amount of information you can store, measured in bits, the entropy of that region, is given by the area of that region divided by G, Newton's constant, and H bar, Planck's constant.", "So that's how you know that this is something to do with quantum gravity because it involves both G and H bar.", "Dwarkesh Patel 02:03:26", "Is that the only situation in physics where both of those constants end up being in the same place?", "Adam Brown 02:03:32", "That is not the only situation. No, anytime you have quantum gravity, they'll tend to be in the same place. And sometimes even when you don't have quantum gravity, but you have the interplay of gravitational forces and quantum degeneracy pressures, those will also end up in those.", "But it's in some sense the simplest situation in which it occurs, which is why so much time has been spent thinking about thought experiments to do with black holes. So there was a physicist called Bekenstein who figured out that that should be the answer, the area divided by GH bar. And then Hawking's great contribution to physics was figuring out that it was the area divided by GH bar, but he also got the pre-factor, and the pre-factor was a quarter.", "So Hawking figured out that it's a quarter at the area divided by 4GH bar. And this is a super interesting answer. How much information can you store in a given region is given by the area. And in fact, black holes maximize that. Black holes store that amount of information in a given area.", "Dwarkesh Patel 02:04:38", "But specifically area, meaning surface area?", "Adam Brown 02:04:40", "Meaning surface area, exactly. The reason that that's such a wild answer, and an answer that's led to all sorts of thought experiments to do with quantum gravity ever since then, is that you might naively think that the amount of information you can store in a region is given not by its surface area, but by its volume.", "If I have a hard drive, and I take another hard drive, and another hard drive, and another hard drive, and I keep piling them up, the amount of information I can store on those hard drives scales like the number of those hard drives. That means it scales like the volume of the region in which I'm storing the hard drives. Everything we know about classical thermodynamics tells us that the amount of information should scale like the volume.", "Everything we know about non-gravitational physics tends to point in the direction that the amount of information you can store goes like the volume. And yet, this is the most surprising fact that is incredibly generative: once you add gravity to the picture, once you combine quantum mechanics and gravity, the amount of information you can store in a given region, a given sphere, goes like the surface area of that region, not like the volume of that region.", "You might think that that can't possibly be right. You might give the following argument: there's some region, and I'm just going to keep adding more and more hard drives to that region. As I make that region bigger and bigger and bigger, the amount of information on those hard drives scales like the number of those hard drives, which goes like the radius of that region cubed.", "The thing about the radius of the region cubed is it grows faster at a large radius than the radius of that region squared. So I just told you that the amount of information you can store in a region is given by the surface area, and yet I also gave you a way to make it scale like the volume. So eventually, if I make the region big enough, the amount of information in that volume will be bigger than the bound that I just said.", "Therefore, I've ruled out Hawking's, Penrose's, and Bekenstein's bound. What goes wrong with that thought experiment is that eventually, if I make a big enough pile of hard drives, the whole pile of hard drives will undergo gravitational collapse and form a black hole.", "Dwarkesh Patel 02:07:02", "But then there has to be an experiment--not experimental, but do you have to crunch the numbers then to determine that just before the pile of hard drives would collapse into a black hole, the amount of information stored in that cubic pile of hard drives is less than the amount of information that then gets turned into the surface area of the black hole? Because theoretically, I don't know if I'm getting my mathematicians right, it's theoretically possible that even though the black hole is smaller because it's only the surface area, the cubic ends up being bigger.", "Adam Brown 02:07:41", "You have to run that calculation. But if you do run the calculation, it turns out that it's nowhere near. It wasn't close.", "Dwarkesh Patel 02:07:49", "It's not one of those things where they just balance each other out.", "Adam Brown 02:07:52", "They don't just balance each other out. If I take an online shopping website and I buy a bunch of Western Digital hard drives, and I calculate the information storage capacity of those and compare it to the area of a black hole, I figure out when the pressure in the hard drive would be enough to stop it collapsing to form a black hole. It is nowhere close. It will make a black hole way before it comes close to violating the Bekenstein-Hawking bound.", "Dwarkesh Patel 02:08:15", "Got it. Okay, sorry. I didn't mean to interrupt. And then you...", "Adam Brown 02:08:17", "So that's the information storage in black holes. The reason you know that that's also the information storage bound for anything, not just black holes, is that if you had something that wasn't a black hole that had more information than that in a given region, and you just added matter, eventually that thing itself would collapse to form a black hole. And so it couldn't be the case, just logically, that it had more information than the black hole. It'll tend to...", "Dwarkesh Patel 02:08:43", "You just hinted at the idea that somehow this is the most productive line of thought that physics has come up with in the last few decades. Why is that? Why does the fact that the area is proportional to the information of a black hole tell us so much about the universe?", "Adam Brown 02:09:02", "It's been extremely important for our understanding of quantum gravity. It's perhaps the central fact that we know about quantum gravity: the information scales with the area. That fact, which was known since the 70s, was a big hint that became very influential later on.", "As understood by Bekenstein and Hawking, it was just a weird fact about black holes, perhaps. But we now understand it as a strong indication of what we call the holographic principle. The holographic principle has been a powerful idea in quantum gravity, and it's the following.", "If you took a non-gravitational system in which you ignored gravity, like the pile of hard drives, the information storage would scale like the volume, as we discussed. Whereas in fact, it scales like the area. Another way to say that is if you take a three-dimensional, three-plus-one-dimensional theory in which you have both quantum mechanics and gravity, the information storage scales like R squared rather than R cubed.", "That is, it scales as though you had a non-gravitational system in one fewer dimension. So if you had a two-dimensional theory in which there was no gravity, the information stored in a given region would also scale like R squared because the information would be just the two-dimensional volume, as in the area. So in other words, at least as far as information density, the information capacity is concerned, a gravitational theory in three dimensions is like a non-gravitational theory in two dimensions.", "Or more generally, a gravitational theory in n dimensions is like a non-gravitational theory in n minus 1 dimensions. So that is a big hint that forms the basis of the holographic principle. It's like gravity eats information. There's less information than you thought there was, than you naively thought there was if you didn't include information.", "And so the holographic principle says that maybe that's not just a neat observation. Maybe it is, in fact, the case that for every, or for some quantum gravitational theories, there is another theory that is exactly equivalent to it in one fewer dimension. And so this led to Maldacena's ADS/CFT correspondence, the gauge gravity duality, which was the most cited paper in high-energy theoretical physics ever, I think, maybe at this stage.", "In the late 90s, he wrote down an exact, we believe, an exact duality between a particular theory of quantum gravity, some particular flavor of string theory, and a non-gravitational theory that lives on the boundary of that space.", "Dwarkesh Patel 02:12:00", "And what problem does it solve if you can model the world in fewer dimensions that doesn't involve gravity?", "Adam Brown 02:12:07", "This was a very influential paper, and really becomes a tremendous theoretical laboratory for trying to understand the connection between gravity and quantum mechanics. One problem it solves is this: gravity is mysterious, particularly once we improve quantum mechanics in various ways. This is why it's hard to quantize gravity.", "But if you can say that this theory that involves both quantum mechanics and gravity is exactly dual—is in some sense the same theory as just an alternative description of a theory in one fewer dimension that doesn't involve gravity—that's great, because we have a much better grasp on how to understand theories that don't have gravity than we do on theories that do have gravity. So, it puts everything on a much clearer footing to have this non-gravitational description, because then you can just use the standard tools of non-gravitational quantum field theory in order to define it and understand it.", "Dwarkesh Patel 02:13:08", "At one level, I understand that if the information in an area is limited by the information that would be on the surface of a black hole in that region, then you can model the surface area as a two-dimensional object. On the other hand, if I just think about the real world, you're over there and I'm over here, and if I do something here, it's not interacting with you. In order to model that fact, I need to model the dimension in which—the third dimension in which—we're separated.", "I guess if I'm actually looking at you through a window pane, maybe I wouldn't have access to that. So, in two dimensions, how do you model that? There's a reason we have the third dimension, right? And how is that modeled if you reduce that dimension?", "Adam Brown 02:13:59", "Maybe I should just lead with some disappointing news: AdS/CFT was a tremendous conceptual breakthrough in our understanding of quantum gravity and embodied the holographic principle, but at the same time, it doesn't describe our universe. In particular, in AdS/CFT, there is a negative cosmological constant in the gravitational theory, and our universe, as we discussed before, has a positive cosmological constant. So, it's great because it provides an existence proof of a well-defined theory of quantum gravity—not, alas, in the universe in which we live.", "But having said that, it's extremely confusing and was a very impressive result precisely because you might think, how could it possibly be the case that two different theories in two different dimensions could turn out to be equivalent? The answer to your question is, if you have two people who are living in this negatively curved space and talking to each other, what does that look like in this other theory? I say that there's this process going on in the gravitational theory that's dual, which is exactly isomorphic to some process going on in the non-gravitational theory in one fewer dimension.", "But what maybe looks very simple in one theory, like you and I chatting back and forth to each other, would look like some complicated plasma physics in the lower-dimensional boundary theory. And so, the complexity of how it looks, which is a better description, does not need to be conserved across the isomorphism. In fact, that's often what we use it for.", "We use it to do arbitrage between things that look simple in one theory and things that look simple in the alternative description. We use the fact that things look simple in one to understand the sort of complicated version in the other. In fact, it flows in both directions.", "You might naively expect that because gravity is so complicated, we would always be using the non-gravitational theory to understand the gravitational theory. That's not always true. Plasma physics is itself extremely complicated.", "There are these big collisions that we do at RHIC in Brookhaven where we smash two gold atoms together and make big fireballs of quark-gluon plasma. It's extremely challenging to calculate what would happen there, and yet people use this duality in the opposite direction to say, even though it looks super complicated with this weird plasma physics in the non-gravitational theory, it actually can simply be understood as some simple black hole property in the gravitational theory.", "Dwarkesh Patel 02:16:34", "Maybe not AdS/CFT itself, but would some theory which relies on the holographic principle ever be able to account for a world like ours, where, unlike the surface of a black hole, there isn't a boundary because of the positive cosmological constant and it's constantly expanding? Is there some hope that there in fact is a way to have some sort of dual theory to this that somehow describes a boundary?", "Adam Brown 02:16:57", "People are working on that. That is an open area of research. Ever since the original AdS/CFT was written down, people have been trying to formulate versions of it which have a positive cosmological constant.", "It's difficult. Part of the difficulty goes all the way back to Archimedes: it is easiest to formulate a theory if you have a fixed point on which to stand and observe things from a distance. In a universe with a positive cosmological constant, you don't have that.", "You don't have that. You're necessarily mixed up with the system because you live in a universe that has only a finite amount of entropy, a finite amount of free energy. There is an inherent limitation to the precision of the experiments you can do.", "That just makes things way trickier. So, for that and related reasons, it's a much harder project, but for sure people are working on that.", "Dwarkesh Patel 02:17:51", "What is the correct conceptual way to think about this? One version is, the boundary is one way to simplify the processes that are actually four-dimensional. Another is—I don't know how we think about this in the context of black holes—but maybe in the context of black holes, no, the information actually is on the horizon. The analogous thing here would be, no, somehow we are on the boundary of the universe somehow. Is there a sense in which one of these interpretations is correct?", "Adam Brown 02:18:22", "This duality idea, where you have two different descriptions of the same thing, is not new. The AdS/CFT correspondence was not the first such example in physics. It's a common trope in physics that you can have two different descriptions of the same thing, some of which are more useful in one scenario, some of which are more useful in the other scenario, but both are exactly correct.", "There are non-gravitational examples in physics that go back a long way. You may then ask, \"Which one is right, and which one is not right? Is it actually a CFT that's pretending to have this weird alternative description as a gravitational theory? Or is the gravitational theory correct, and the other one is not correct?\"", "I think this is more of a philosophical question. My answer would be that if the isomorphism was just an approximation -- if it was really one thing and you were just pretending it was the other thing, and that approximation worked in some region of validity and not others -- then I would say that one was right and the other one was just an alternative, fanciful description.", "That is not our understanding of AdS/CFT as we understand it today. Our understanding is that this is a precise isomorphism. It's not an analogy, it's not a metaphor, it is not an approximation that is valid in some domain and not another.", "It really is the case that these two theories are exactly equivalent to each other. And if that's correct, then as a matter of philosophy, I would say those are both equally real. So it's not the case that one is more real than the other; they're perfect simulations of each other.", "Are you an AdS dreaming you're a CFT, or a CFT dreaming you're an AdS? I think these are just two completely different, inequivalent descriptions of the same identical physics.", "Dwarkesh Patel 02:20:05", "Tell me if this is just a question that doesn't make sense. Because when I was, if you try to ask somebody about the quantum many worlds, \"Where are the other worlds?\" right? And they're just like, \"They're in Hilbert space.\"", "\"Where is Hilbert space?\"", "\"No, dude, it's just a conceptual thing. Stop asking questions.\"", "Intuitively, it feels like there should be a sense in which there is some physical existence. Either that existence is in this four-dimensional space, or it's in some space that exists on the boundary. Is this just going to lead us into philosophical loops, or is there something that can be said more about it?", "And also, in a de Sitter space, in a world like ours, what exactly would the boundary mean?", "Adam Brown 02:20:57", "There are two components to that question. You have an intuition that if something is real, it needs to be spatially localized, and things that are delocalized in space somehow can't be real. I would say that that's not my intuition.", "My intuition is that there can be two completely different descriptions of the same physics, and if it's precise, neither of those is any more real than the other. Things do not need to be spatially localized. You separately asked, what would a version of \"where is the boundary theory?\" in de Sitter space, since there's no boundary?", "That is a great question that people who are trying to generalize AdS/CFT to a universe like ours, that has a positive cosmological constant, wrestle with. There's more than one proposal. Some suggest that the dual theory should live on the cosmic horizon.", "So, if you go 5 billion light-years, you can send information to that point and have it returned to you. But on the other hand, there are things that are 100 billion light-years away that we'll never be able to communicate with. There's a boundary between those two -- between things that we could, in principle, communicate with and things that we couldn't, in principle, communicate with.", "That is the cosmological horizon. Some people who are trying to do a version of holography that works in universes with a positive cosmological constant like to put the second theory there. Other people like to put it in the distant future, in the sort of infinitely distant future.", "That's part of the problem: where do we even put that theory? It's not like in our universe where you can just put it spatially infinitely far away and be done with it.", "Dwarkesh Patel 02:22:41", "If it's spatially finite, then we are currently at the boundary of infinite many other universes that are located, or whose center is located, elsewhere.", "Adam Brown 02:22:54", "Absolutely. A cosmological horizon is very different from a black hole horizon in this regard. A black hole horizon -- there is a point of no return.", "If you get closer than that, you fall into the black hole; you're never getting out again. And everybody can agree where that is. For cosmology, there is a point of no return, but the point of no return is relative to a given person.", "For each person, there is a different point of no return. And as you say, we live on the boundary just as much as we live on the boundary of -- those people live on our cosmological horizon, we may live on theirs.", "(02:23:25) - Philosophy of infinities", "Dwarkesh Patel 02:23:25", "Okay, another philosophical question. There seem to be many theories which imply that there's some sort of infinity or approximate infinity that exists. In quantum many worlds, there are constantly these different branches of the wavefunction spawning off where things are slightly different.", "So everything that can possibly happen has happened, including basically the same exact thing. I guess if this bubble universe stuff is correct, it implies a similar picture. Philosophically, should it have some implication on our worldview?", "It would be surprising that we learned this much about the universe and then it has no implications whatsoever, right?", "Adam Brown 02:24:15", "Good question. I think I'm going to say yes and no. I mean, it's clearly, if correct, let's just take the quantum case, which is perhaps even more secure than the cosmological multiverse case.", "In the quantum case, it really does look like the default expectation, given everything we understand about quantum mechanics, should be the many-worlds interpretation in which the universe keeps branching off, and there'd be more and more branches. Every time, or almost every time you come to a point of quantum a measurement, we might colloquially say is made that the universe branches, and then every possibility is represented still in the grander wave function.", "That's a pretty profound thing to learn about the ontology of the world. If correct, it seems like it should be the default expectation.", "And you might say, maybe I don't care about existential risk in our universe because we blow each other up or turn into goo or whatever. Okay, that's sad for us. Maybe our world has vacuum decay, but there are some other branches of the wave function where it's not. And so some other branches will have made different choices in the past, and they're sort of guaranteed somewhere in the branches of the wave function to be a flourishing world. And so I'm not so bothered.", "I would say that that's, I'm not going to tell you what utility function you should place on the wave function, but Born is. There's the Born rule in quantum mechanics.", "And that tells you that you shouldn't just say, if it's there in one branch, that's just as good as anything else. Born's rule, which is one of the foundational rules in quantum mechanics, tells you how much to care about each branch. You don't care about them equally.", "It says that the correct way to calculate the expectation value of anything is to calculate its value in each branch and weight those branches by the square of the amplitude of the weight function, which is some particular quantity, and then add together all of those different answers. So that's a linear answer, which is to say that the total utility of the universe is the sum of the utility in each of these branches appropriately weighted by Born's rule.", "So if that's true, you should hope to make our branch as good as possible, just because whatever is going on in the other branch, the total utility is just the sum of what's going on in that branch and what's going on our branch. And so you should try as hard as you can to make our branches as great as possible.", "Nevertheless, I do kind of understand that you might have a portfolio theory that seems to be inconsistent with Born's rule, but is somehow intuitive, in which somehow it's not just a linear function on these universes.", "Dwarkesh Patel 02:27:00", "This would only be applicable if you are a total utilitarian who then there's a sort of very straightforward way in which we can dismiss this and be like, it's one of these. It seems like in physics, there's always these kinds of things where, like, oh, we think we discovered something new, but how would you look at that?", "The speed of light is still conserved. And similarly here, like, oh, infinite universes. Ah, but would you look at that? That has no implications on our decisions.", "But most people are not total utilitarians. And if you have some very simple thought experiments to illustrate a couple. Suppose that there are two universes and, sorry, two worlds in two different cosmic horizons who will never interact with each other causally, but each one has intelligent life and civilization and beauty and everything we might care about.", "If one of the two gets extinguished, I'm like, pretty sad. And this is, both of these make up the entire universe. If both of them get extinguished, I'm more than twice as sad.", "There's something to that sort of finality which makes existential risk salient in the first place. And if you agree with that intuition, then I think you should be inclined to think that there is something significant about the fact that in some base reality, like genuinely the story carries forward.", "On the other end, if you're somebody who cares about minimizing the downside of people talking about suffering, risk or something. Right. The idea that if it's physically possible to have a universe full of torture, it's actually in fact happening or will happen again is like you could just be like, ah, but the amplitude on that is so small or the square of the amplitude is so small.", "Adam Brown 02:28:47", "And.", "Dwarkesh Patel 02:28:47", "The weighted average ends up close to nothing. But I'm like, that really sucks, that's actually happening.", "Adam Brown 02:28:55", "Yeah, I think there are a number of ways to think about this. I think in part people's intuition is maybe formed in cases like extinction, where if you have an animal that's going extinct, if half of the animals get wiped out, that's somehow less bad than if both halves of the animals get wiped out.", "But that's because they really aren't going to interact in the future. And there's the possibility of the. Those don't have non-overlapping future light cones, the two populations of some possibly extinct animal.", "It's also the case that this is a pretty like Born's rule, narrowly defined, does not really have anything to say about this how one should calculate the total utility. It's just more of a sort of the natural utility measure that would come out of this.", "Particularly when you get to the cosmological multiverse. I think that these are very difficult questions to answer your intuition that perhaps two different universes in which how we calculate those do we just add together the utility in both or is there some non-linearity to do with it?", "Basically, for the cosmological multiverse, there isn't a particularly good way to decide what the weighting factor should be. We don't have the same equivalent of Born's rule in quantum mechanics. And I think it's at least open for opinions like yours to be to be.", "In fact, there should be some better way in which we calculate it. That's not just a linear function of.", "Dwarkesh Patel 02:30:17", "These different kinds of infinities is there some sense in which some are more fundamental than others. That is, maybe the bubbles are artifacts of what's actually happening on the wave function, or vice versa.", "Adam Brown 02:30:32", "You're talking about the two kinds of multiverse, the sort of cosmological multiverse and the quantum mechanical multiverse. Yeah, they get very bound up if you try and write down a theory that has both of them.", "Because whether there's a bubble there, you're trying to make bubble universes. But what gives rise to bubble universes is often quantum processes. So often you end up in superpositions over there being a bubble universe and there not being a bubble universe there.", "And that means that these two kinds of multiverse, the sort of quantum mechanical multiverse and the cosmological multiverse end up getting totally intermeshed with each other.", "Dwarkesh Patel 02:31:06", "But it sounds like the base reality is still the wave function over all the bubbles in the entire inflaton field or whatever.", "Adam Brown 02:31:13", "Yeah. So again, we only really properly know how to do quantum gravity and do the counting when there's a negative cosmological constant. As we discussed with ADS cft, in these bubble universes, where there's a positive cosmological constant, it's still somewhat an open question how to do the accounting of what happens and where and how much it should count out.", "Dwarkesh Patel 02:31:33", "Okay.", "Adam Brown 02:31:34", "Which is to say we don't know the answer to that question, and your opinion is not ruled out.", "(02:31:42) - Engineering constraints for future civilizations", "Dwarkesh Patel 02:31:42", "It's a little bit confusing because in one context, we're laying out very practical—I don't know if you can call black hole batteries practical—but very tangible limitations on the what future, like very distant future descendants, could do with all the matter in the galaxy and so forth. On the other hand, we're like, \"Bubble universes as big as our own made somewhere in somebody's lab, maybe.\" So, basically, how confident are we that the practical limitations we think we know about will actually constrain our future descendants?", "Adam Brown 02:32:33", "That's a good question. Certainly, some of the possibilities we've discussed so far have different epistemic statuses about how confident we are or are not. And as we also discussed, some of these bounds are somewhat fragile.", "Can you communicate faster than the speed of light, for example? Let's just take that as an example bound. We think you can't, according to the laws of science as we understand it. Most physicists would be pretty surprised if it turned out that you could.", "Dwarkesh Patel 02:33:02", "What is your probability if, like, a million years from now, we are able to communicate faster than light? How surprised are you?", "Adam Brown 02:33:10", "That is a tricky one. That is a really tricky one. It's only a century that we've thought you can't communicate faster than the speed of light.", "A million years is such a radical time that maybe we've sort of dissolved the question into some greater question, and we understand it doesn't even really make sense. I would be pretty surprised. If you make me make a number, I think that there is a greater than 90% chance that in 100 years, we are still limited by the speed of light. There's a 98% chance, if you make me be precise.", "Dwarkesh Patel 02:33:44", "Okay, so then what are the other constraints on a future civilization that they might care about? If we've got these superhuman intelligences and they're colonizing the galaxy, what are the things they might want to do that they can't do? They probably care about energy. They care about computation.", "Adam Brown 02:34:07", "Energy limits. We've talked about the efficiency of batteries and extracting energy.", "MC squared is the—I'm highly confident that the most energy you can extract from a given piece of matter is MC squared, at least until you start getting cosmology involved. Other limits will be Landauer's limit, or in other words, with a given amount of energy, how useful is a given amount of energy to you? We wouldn't care about having huge amounts of energy if you could get an arbitrary amount of value out of a fixed unit of energy.", "We think that that's not true. We think that in particular, if we're going to do computations with it, for example, that there's going to be—and that computation makes errors—that there is a fixed cost of a bit, basically a bit of free energy, in order to correct those errors.", "Dwarkesh Patel 02:35:07", "And we're confident that there's no way to make computers that don't make errors?", "Adam Brown 02:35:10", "It is a very interesting question what the fundamental limits on errors are in a computer, how far down they can be pushed. In terms of never making errors, I think that's very unlikely. If for no other reason than there is a minimum background temperature caused by the expansion of our universe—again, it all comes back to the cosmological constant—that gives a very small but non-zero temperature to our universe, that I think will inevitably mean that we make errors.", "You might imagine we could just set up some kind of perpetual motion machine that's just thinking happy thoughts over and over again in a quantum computer that never tires and never stops. I think that inevitably the universe would leak in, and there would be errors. But what the minimum error rate is, is not—I don't have a clear answer to that question. Physics doesn't have a clear answer to that question.", "Dwarkesh Patel 02:36:13", "One question you might have is: What will be the nature of not only the things that our descendants might care about, but what will they be able to produce domestically? What will they want to trade for? If something like alchemy is just super—you know, it's like equals MC squared is all you care about—then it's just like, look, your star system or your galaxy has a certain amount of mass, and you can convert that to energy, and there's fundamentally no reason to trade if there's not that high transaction cost to make it into whatever you want.", "On the other hand, if there are some limits, like in fact you had to make galaxy-wide factories, or you had to do these NP-hard calculations that even with a galaxy you can only trace down certain segments of the search space or something, there might be reasons to trade extremely—sort of a pie-in-the-sky question—but how much can we intuit about these kinds of constraints?", "Adam Brown 02:37:19", "In economics, the theory of comparative advantage only applies if not all resources can be transported. If you can just go in and just disassemble whoever you're doing the comparative advantage with, you might as well just turn them into—apply it all to the party with the absolute advantage. So maybe the same thing would be true in the universe.", "I think there are a number of questions in there. For starters, not all energy is equally useful in different places in the universe. If there's a galaxy over there and a galaxy here on this side of the universe, because of the expansion of the universe, if I beamed the energy—if I disassembled that galaxy and tried to send it back here, either by literally sending it on starships or converting it to light and beaming the light back in a laser and then having a big PV here to collect it, or for whatever mechanism—by the time it reached me, there will be a massive redshift.", "So keeping it in place is maybe better than just disassembling it and bringing it back home. But there's another question, which is—these are all unknowns to do with both physics and the nature of technology—is the most important thing that all of the value will be created here on Earth, and we just need to get as many resources back here on Earth? And there are super linear returns to scale of having accumulated resources in one place, so we just want to make Earth an absolute paradise? Or do we want to spread—is it in fact sublinear, and we want to spread civilization all the way throughout all of these galaxies?", "I think questions like that are going to be important in addressing your question of what the returns to scale are and returns to trade as well.", "Dwarkesh Patel 02:39:10", "It's like the galaxy in a billion years from now has a certain GDP. What percentage of that GDP do you think is just the end result of computations or confirmation that a computation has been made? Maybe it's like simulating hedonium that the other side of the galaxy cares about or something.", "Adam Brown 02:39:29", "Just because it may prove to be so much more efficient to do things in simulation than to do them in the real world, my guess would be a high percentage of that. But maybe that's wrong.", "Dwarkesh Patel 02:39:41", "If computing is the main thing you hear about, what is going to be the—physically, how will the flops in a galaxy be organized? Will it be as planet-wide computers, as a huge blob the size of a star system? Do we have some sense of—", "Adam Brown 02:40:01", "This is a super interesting question. It returns to the question we were asking before: with quantum computers, we know, for example, that the amount of quantum computation you can do in terms of the equivalent amount of classical computation when trying to do some factoring algorithm or something grows super-linearly with the number of qubits. In fact, it grows almost exponentially with the number of qubits.", "So a 200-qubit quantum computer is much more than twice as good as a 100-qubit quantum computer for certain tasks. But for the tasks that we try and use quantum computers for, that's true. So that line of reasoning might lead you to believe that in the distant future, we will just try, even paying the cost of the redshift and all these other questions, we'll feed all of the energy and free energy back into one central quantum computer.", "It will all be about making that central quantum computer as big as we possibly can, even at the cost of inefficiency. On the other hand, there are other kinds of tasks for which actually having a twice-as-big computer is not that much better, or certainly not more than twice as better, than having two smaller computers. In that scenario, it'll be a more distributed setup.", "Dwarkesh Patel 02:41:22", "I guess in this quantum computer system, you would need to have coherence across this huge, which might not be a practical engineering difficulty for future civilizations, but does seem...", "Adam Brown 02:41:34", "Either it would need to be co-located, or you'd need to send the quantum coherence out. That's actually not that hard to do. It's a property of photons that they do tend to maintain when they're propagating in the vacuum.", "They basically maintain their coherence for a very long way. In fiber optic cables, you reach trouble because they start getting absorbed by the fiber optics after tens of miles. But in the vacuum, you could, in principle, share quantum entanglement across the universe if you did it right.", "Dwarkesh Patel 02:42:04", "Then wouldn't you expect, when you say a central computer, physically it wouldn't just be like a huge, contiguous...", "Adam Brown 02:42:15", "Well, it might be because the sort of analog of the classical fact that flops are not the only thing you care about. You also care about bandwidth and interconnects and things like that. So perhaps the same would be true.", "I mean, here we're getting into a pretty speculative area, but you could imagine either configuration, either one in which you have a huge number of different quantum computers that are talking to each other via entanglement networks or one in which you just have one big central computer.", "Dwarkesh Patel 02:42:41", "Final question. Timeline to when you are automated as a physicist.", "Adam Brown 02:42:49", "Oh, good question. Many of the tasks that I might have performed in the past, I think, are already automated at some level. Until I am totally out of the picture and no longer necessary...", "That's probably pretty close to ASI complete. So whatever your timeline for ASI is.", "Dwarkesh Patel 02:43:14", "Well, I guess the question is, what is yours?", "Adam Brown 02:43:19", "Yeah, I'm squirming somewhat uncomfortably in answer to that question because I'm not totally sure. I could certainly imagine a scenario in which it's five years.", "Dwarkesh Patel 02:43:27", "All right. I think that's a great place to close. Adam, thanks so much.", "Adam Brown 02:43:30", "Thank you. Great to be here." ]
[]
https://www.dwarkesh.com/p/andrew-roberts
Andrew Roberts - SV's Napoleon Cult, Why Hitler Lost WW2, Churchill as Applied Historian
[ "This transcript was autogenerated and then cleaned up by GPT-4. As such it may contain typos or hallucinations.", "(00:00:00) - Post WW2 conflicts", "Dwarkesh Patel 00:00:51", "Today I have the pleasure of speaking with Andrew Roberts, who is most recently the author of “Conflict: The Evolution of Warfare from 1945 to Ukraine.” And this book is like Churchill’s histories of the Second World War or the First World War, in that one of the principal actors in the conflicts discussed here is the coauthor, General David Petraeus, who commanded the US forces in Afghanistan and Iraq. As one of your coauthors. And speaking of Churchill, Andrew is also the author of some superb and magnificent biographies of Churchill, Napoleon, King George, and an excellent book about World War II. But first, let’s begin with conflict. Andrew, welcome to the podcast.", "Andrew Roberts 00:01:36", "Thank you very much indeed, Dwarkesh. It’s an honor to be on your show.", "Dwarkesh Patel 00:01:40", "So my first question is this: when we look at the first half of the 20th century, it seems like we got unlucky many times in a row. World War I, World War II, the Bolshevik Revolution in Russia, the Maoist Revolution in China—all those things seem like they didn’t have to happen. From reading historians about those topics, that if you reversed a bunch of contingent factors a few years back, any one of them could have not happened. And in each of those cases, tens of millions of people died. When we look at the second half of the 20th century, which you write about in these books, it seems like we got lucky again and again, right? So the Cuban missile crisis doesn’t go nuclear. We have all these proxy wars that don’t go nuclear or result in a world war. China and India liberalize, and communism falls. What explains why we had such different luck in these two different parts of the century?", "Andrew Roberts 00:02:27", "The invention of the nuclear bomb. It’s pretty much as easy as that. You have all these wars that take place in the post-nuclear age after 1945. And so as a result, you have an umbrella under which everybody acts. But although there are hundreds of wars that break out—about 140 wars—they have to be fought in an essentially limited way because of the existence of nuclear weapons.", "Dwarkesh Patel 00:02:56", "But couldn’t you have said the same thing before World War I, and in fact, many did say that before World War I, where we have all this heavy artillery, we have these to kill millions of people even then, and they still went to war. So how much does nuclear war explain the absence of something escalating?", "Andrew Roberts 00:03:11", "Well, you’re right that the First World War did come about in part because of the arms race, but the knowledge that the nuclear bomb could obliterate the entire planet is something that has always managed to make wars limited. Post 1945, what happened in 1914, the most people you could kill in a single moment would be as a result of an artillery shell. And that’s nothing like a nuclear bomb, frankly. So it’s apples and pears.", "Dwarkesh Patel 00:03:49", "I think it’s really interesting in this book you write about all these conflicts that have happened since World War II, and in many cases, they’re counterinsurgencies or civil wars. And it’s interesting when one side gets to say that they’re the legitimate force fighting for the country’s independence against foreign aggressors when both sides are getting foreign funding and support. So I’m curious, how come the US has been bad at the propaganda here, where Ho Chi Minh or the Taliban get to say that they’re the legitimate forces fighting for their country, or how does that determination get made?", "Andrew Roberts 00:04:20", "Yes, that’s a very good point. I think in both cases, Ho Chi Minh and the Taliban both were local inhabitants in a way that the United States obviously wasn’t in either place. But whether they represented the majority of the people in either North Vietnam or Afghanistan is a completely different issue. So it’s much more a question of whether or not they are totalitarian powers who are able to establish dominance and keep it in a difficult and dangerous part of the world. And that’s what both of them were able to do. It didn’t mean that they have a legitimacy in the kind of Jeffersonian democracy that one would like in a utopian world. But if they are the people that are wielding power in the sense of a Marxist-Leninist clique, of course, in North Vietnam, you have to deal with them, and they are the established government.", "Dwarkesh Patel 00:05:22", "But it’s interesting that South Vietnam or the government in Afghanistan didn’t seem to have that same sort of legitimacy that these other insurgencies had, even though they were still local governments.", "Andrew Roberts 00:05:36", "Do you think not? I’d rather think they mean, obviously, they’re both immensely inefficient and useless and corrupt, but nonetheless, I don’t think that detracts from the fact that they were more legitimate than the forces that were rising up against them.", "Dwarkesh Patel 00:05:56", "Yeah. And in fact, this might be a good opportunity for you to discuss the four key tenets of strategic leadership that you discuss in the book.", "Andrew Roberts 00:06:03", "Yes, well, what we found in the book very much, very strongly, and it’s interesting you should have mentioned the Chinese Civil War because you get that very powerfully as well, is that the side that wins wars very often is not the one that controls the cities or has the largest amount of men or has the best weaponry. As you mentioned the Chinese Civil War, let’s look at that for a second. The Guomindang Nationalist forces, at the outset of that war, had all the major cities, they had four or five times the number of men, and they also had all the advanced weaponry that they’d taken off the Japanese at the time of the Japanese surrender in 1945. Yet they still lost that war. One of the reasons was that they didn’t have very impressive strategic leadership. And Chiang Kai-shek, even when he did come up with good plans, often had warlords below him that refused to carry them out. So what we discovered in war after war is that the thing that matters most is this concept of strategic leadership, by which we mean having a leader at the top, either civil or military, but the ultimate decision-maker. And it’s usually best when there’s somebody who represents the civil and somebody who represents the military and they get along, and they need to get the big idea for the war right. They need to then be able to communicate that to their lieutenants effectively and indeed to the wider country. They need to be able to implement it aggressively and efficiently. And then they need, as the fourth of the levels, to continue to adapt the big idea to circumstances on the ground and to the way in which the war develops. Because obviously, no war carries out according to plan. The enemy always has a say, and then to refine it again and again and again. And so the people who are able to do that are very often victorious, even though they start off the campaign with many more disadvantages than their enemy.", "Dwarkesh Patel 00:08:22", "You know, I think this might be a good opportunity to start talking about Iraq and Afghanistan, which obviously your coauthor can speak to like nobody else. I found it really interesting in reading his accounts of what happened in those two countries, especially Iraq, which was a premeditated invasion. It wasn’t something we had to just immediately do. It had a completely different casus belli with the weapons of mass destruction than 9/11.", "Andrew Roberts 00:08:46", "Well, and also the surprise attack on Kuwait. Of course, that’s the ultimate reason that this took place, what had happened 13 years before, and the 13 years in between. It wasn’t just WMD.", "Dwarkesh Patel 00:09:01", "Yeah. Although I guess 13 years still leaves us enough time to have a plan of what to do. And I found it interesting that you’re discussing that after the regime has been changed, you realize that there’s not a plan for how to ensure security and stability in the country. And I just can’t imagine, obviously, you have really intelligent people like David Petraeus there who are working on this. How is it possible that there was an invasion of these countries without a good plan for how to secure them afterwards?", "Andrew Roberts 00:09:30", "Well, he wasn’t working on it. He was working on how to destroy the Iraqi army and get to Baghdad. And the people who were working on it were a completely different set of generals who were failing to work out what to do once you had got to Baghdad, who assumed that the thing to do would be to get rid of the Ba’ath Party, which essentially ran the country down to the fourth level. It was all very well getting rid of Saddam’s sons and some of the other people at the top level, but when you do that and also, essentially, send the army home and not tell them how they’re going to feed their families and allow them to keep their weapons, you’ve got a recipe for disaster. And sure enough, disaster happened. But that can’t be blamed on the soldiers at the point of the spear who did an extremely good job, who overthrew that regime in double-quick time.", "Dwarkesh Patel 00:10:32", "Now, speaking of strategic leadership, why is it that we don’t have a figure natively in Iraq and Afghanistan who had that level of leadership? A Zelensky in Iraq and Afghanistan, where in the book General Petraeus discusses the frustrations he had with Maliki in Iraq. And of course, Ghani leaves Afghanistan when the Taliban start routing the Afghan forces.", "(00:10:57) - Ukraine", "Andrew Roberts 00:10:57", "And Karzai, of course, also in Afghanistan. Yes, these guys come in for a bit of a pasting, understandably in our book, because they are not the sort of Churchillian figures that Zelensky is. I think partly it’s down to the sectarian and tribal nature of Iraqi and Afghan society, where, however good a leader is, he doesn’t automatically command the attention and loyalty of other people in the same country. The thing about Zelensky was that it was very clear very early on that he was speaking for the huge majority of the country, and it’s very difficult for an Iraqi or Afghan leader, however good they are. And I’m not saying for a minute that Maliki and Karzai were any good, let alone the last chap who gets into his helicopter weighed down with suitcases full of money and hightails it out of there. By complete contrast, you do have Zelensky, who shows all of those four qualities of leadership that I mentioned, and also, of course, who decided he was going to stay in Kyiv, fight in Kyiv, his family were going to stay in Kyiv, he wasn’t going to let any military-age male Ukrainians leave the country. And his big idea was, “I need ammunition, not a ride.”", "Dwarkesh Patel 00:12:23", "What is our big idea, the Americans’ big idea in Ukraine? What is the ceasefire, or end arrangement, we are driving at which we think would be plausible for both sides to accept?", "Andrew Roberts 00:12:35", "Good question. I don’t think Biden has articulated one properly yet. Zelensky has, which is the obvious one, which is that we’re not going to allow 18% to 19% of our country to forever be under the rule of the Russians and we’re going to throw them out. And when David and I visited Kyiv about four months ago, we came across a huge level of national unity over that big idea. All the generals, of course, and the ministers subscribe to it, but they’re sort of paid to—it’s part of their job. But so also did everybody on the street and everybody that we spoke to. They all absolutely believed in ultimate victory. They didn’t know how long it was going to take, they didn’t know how much more blood was going to have to be shed, but they all believed that this would not stand and that they were going to ultimately be victorious, even if you, the Americans, cut off their funding after the next election.", "Dwarkesh Patel 00:13:44", "Right, but what is the answer to the American question of what is our goal? Is that the same as the Ukrainian goal?", "Andrew Roberts 00:13:49", "No, I don’t think it is at the moment. It seems to be to wait until other countries, such as Britain, give a new set of weaponry, then to give much more of the same kind of weaponry, then to wait until somebody else gives some more advanced weaponry. You saw this with anti-tank weaponry, later with tanks, then with artillery, then long-range artillery. Now, you’ve been giving them these ATACMS, which are very impressive, but you’ve hung back a bit with fighter aircraft and so on. So it seems to be a piecemeal approach, where you wait until the Russians don’t respond, and then you give a bit more. Frankly, it would have been much better, I think, to have armed the Ukrainians earlier with the Leopards, essentially, and the tanks that they really needed for this big southern counteroffensive and come out wholeheartedly for them. Now, you’ve given a lot of money, obviously. $44 billion is a very significant amount of money, and the Europeans have given as much or slightly more now. But still, the Russians are in control of 18% of the country, and they’ve been building what the Ukrainians were expecting, hundreds of yards of minefields. In fact, there are miles of minefields down in the south there. And so I’m afraid it’s a long and bloody slog, but we’ve seen wars like this before. This is one of the things that we write about—the Korean War being a classic example, where you just have to thrash it out.", "Dwarkesh Patel 00:15:27", "Actually, I want to ask you about Korea in a second, but it does seem weird that we’re slowly funding a war of attrition. It’s classic Clausewitz to focus your effort on the point of attack. If you’re just slowly doling out this equipment, why not just give it to them all at once? So they can have a successful counteroffensive.", "Andrew Roberts 00:15:46", "Because I don’t think you’ve got the political will in the United States to do that, frankly. I think that, yes, you have a nominal majority in both houses, but especially with your lower house at the moment, what’s going on there, you don’t have the sense of national will to do that. And so, as a result, these poor Ukrainians are fighting and dying. When you do give them stuff, it’s extremely helpful and useful. But as I say, the key point is that they will carry on fighting and dying even if you didn’t give them the stuff because they’re not going to have America essentially dictating to them what their national destiny is.", "(00:16:33) - How Truman Prevented Nuclear War", "Dwarkesh Patel 00:16:33", "Now, speaking of the Korean War, the chapter you wrote about this in your book was really interesting and great. And I wonder if Truman had decided to use a nuclear bomb in Korea, had agreed with MacArthur to do so, whether the taboo that we have against nuclear weapons—against tactical nuclear weapons—would not have emerged in the first place. So, the Soviets would have used it in Afghanistan, we would have used it in Vietnam, and Thatcher would have used it in the Falklands.", "No, I wouldn’t go that far because that would have wiped out the Falklands, and we were trying to win back the Falklands. But yeah, you’re quite right. Of course, if MacArthur had used nuclear weapons against the Chinese crossing the Yalu River, then yes, he might well have actually won that war, but it would have lowered the moral barrier so significantly that nuclear weapons would have been used an awful lot more. As it is today, although there’s lots of saber rattling by Lavrov and Putin, it doesn’t really look as though, I mean, yes, there might be a catastrophic disaster at the Zaporizhzhia nuclear plant, but it’s very unlikely for Putin actually to use tactical nuclear missiles in Ukraine, not least, of course, because the Chinese don’t want him to. But had they been a regular feature of warfare in the 1950s, so on, then yeah, he might well do it.", "Dwarkesh Patel 00:18:06", "I just think it’s important when you’re looking back in history, to give credit at decisions that are not often discussed like this, where Truman just decided to lose Korea rather than, at the time the taboo didn’t even exist, but rather than to create a taboo against nuclear weapons.", "Andrew Roberts 00:18:24", "Exactly. He did create it, as did Clement Atlee, actually, to give him his due. The British Prime Minister flew over to Washington very concerned about this talk, MacArthur’s talk about using the nuclear bomb, and so he needs to get some credit as well. And also actually, Truman needs credit for sacking MacArthur in the first place, because MacArthur did have some, I mean, he was a charismatic and impressive figure whose island hopping policy in the Second World Wars was inspired and so on. But he was the classic example of the general who becomes too powerful, an overmighty subject, who had political ambitions himself, who got the Chinese involvement in the war completely wrong, got the big ideas wrong, essentially, and had to be sacked.", "Dwarkesh Patel 00:19:17", "And that’s also another interesting point, how overwhelmingly popular he was. And I remember reading in Lyndon Johnson’s biography that when MacArthur came to Congress to speak after he was sacked about the Korean War’s progression, somebody said, this is the closest thing, if he had wanted to, MacArthur was so popular that he could have just said, we’re going to storm the Capitol, and people would have just followed him.", "Andrew Roberts 00:19:38", "Well, having seen what happened on January the 6th of last year, it’s obviously not completely impossible.", "Dwarkesh Patel 00:19:47", "Okay, going back to Iraq and Afghanistan, how much have those conflicts, those counterinsurgency operations, prepared the American military for a war with a peer competitor like China?", "Andrew Roberts 00:20:00", "A great deal, obviously, but that’s true of most wars. It’s interesting, of course, China hasn’t actually fought a major war for a very long time, really, and since the 1960s against India. So actual practice is an incredibly useful thing. If Ukraine were ever to be allowed into NATO, for example, we’d have 900,000 troops on the southern border of NATO. It would be a huge addition. So actually, having troops that have fought, there’s no amount of training that is the same as actual war fighting. What you mentioned earlier actually, I was just thinking about that. Good question. You asked about nuclear bombs. Of course, what we’re seeing today in Gaza is a classic example of limited war, in that however vicious and ghastly and painful and bloody it’s going to be, it is the story of a group fighting against a country that has got the nuclear bomb alone of all the countries in the region and is on moral grounds not prepared even to threaten the use of it. So in that sense, the Netanyahu government has, it’s not doing what Lavrov and Putin are doing by saber-rattling the nuclear option, which does exist. So much of what is happening is as a result of Tehran wanting it to happen. And Tehran doesn’t have the bomb and Israel does, and yet Israel is not threatening Tehran.", "Dwarkesh Patel 00:21:48", "Yeah, and that’s a really interesting point. I mean, as early as 1973, you could have had Israel nuke the Egyptian beachheads. It was a war of self-defense. You die if you don’t use it or if you lose the war.", "Andrew Roberts 00:22:04", "Exactly. ‘73 was an existential war in the way that this one at the moment isn’t. Now, obviously, we don’t know what’s going to happen within Israel, with the West Bank, with Hezbollah, with the Iranians and Syrians. It’s not impossible that this could turn into an existential war for Israel. But the possession of the nuclear bomb hasn’t done Israel any favors. Equally, it hasn’t weakened Israel either.", "Dwarkesh Patel 00:22:32", "Is deterrence dead? So, speaking of Israel, Iran funded these Hamas terrorists to conduct this attack. And as far as I know, there’s no serious repercussions in Iran itself for doing this or funding Hezbollah, of course. Is deterrence as a doctrine, is that dead?", "(00:22:49) - Taiwan", "Andrew Roberts 00:22:49", "No, because it’s working very well in Southeast Asia. In Taiwan, it is only dead amongst people who are so irrational and illogical that they don’t mind essentially being extirpated in the way that the Israelis might soon be trying to extirpate Hamas. So if you sort of don’t care, if you believe that God has given you the right and duty to kill Jews, then you’re not going to be deterred. In the same way that a much more rational and logical actor such as Xi Jinping is where he wakes up every morning and thinks, right, should I be invading Taiwan? And he recognizes, looks to the world situation, to the might of America in the South China Seas, and looks to all his neighbors, all but North Korea, of which hate and fear him, and recognizes that today is not the day to do it. And that is what deterrence is. It’s incredibly expensive, of course, deterrence, but it’s immensely cheap at the same time compared to the alternative, which is war.", "Dwarkesh Patel 00:23:59", "So, yeah, this is one of the points you make in the book, is that deterrence money spent on deterrence is seldom wasted. But deterrence also has to be credible. Now, regardless, separate from the question of whether America would actually intervene if China invaded Taiwan, is it rational for the Chinese to believe that America would intervene on behalf of an island with 20 million people, have a kinetic war with China over an island off the coast of China? Does that make sense? Is that deterrence credible for Chinese?", "Andrew Roberts 00:24:29", "It is. It certainly is, because there is what’s been called strategic ambiguity in the American stance. And that is something that no rational actor wants to have to deal with. An America which could be sucked into a major war, an America that would have maybe act irrationally over Taiwan, or which, as you can see, with the Orcus Treaty, has got ambitions to stand up to China and feels that it needs to carry them out. The public statements are obviously not intended deliberately to provoke China, but they’re pretty straightforward in being ambiguous enough that China doesn’t want to take the risk. Whilst obviously the United States military budget is so enormous, so vast, it’s capable of deterring China, if it was to send the wrong messages, taking ships away and so on, then it might not look at what America’s done in Ukraine. And Xi recognizes that it’s led a coalition which has fought very hard and so far hasn’t lost. And so, without a single American serviceman being involved, were American servicemen involved, which they would be in a Taiwan confrontation, the American president would be much, much more likely to go all in.", "Dwarkesh Patel 00:26:13", "Yeah. Although if, for example, China blockades Taiwan and puts the onus on America to launch the kinetic war to break through the blockade. I wonder if then put in those terms, an American president would not intervene, or at least the Chinese wouldn’t expect an American president to launch the kinetic war to break the blockade.", "Andrew Roberts 00:26:33", "Well, they’ve obviously war gamed this a million times in the Pentagon. And I think that your remark about 20 million people is obviously an apposite one. But do let’s also remember that Taiwan has 80% of the semiconductor industry, or at least the high-level semiconductor industry. Lots of good things are being done to mitigate that against that now. But nonetheless, it would be catastrophic for China to be able to snaffle all of that in a single coup d’état. And obviously the Biden administration knows that.", "(00:27:15) - Churchill", "Dwarkesh Patel 00:27:15", "Before we return to conflict, I do want to ask you some questions about Churchill and World War II. And in fact, this is actually a good jumping-off point because speaking of rational leaders, I’m struck when reading your biography of Churchill of how much of his thinking is more emotive, less probabilistic, much more principled. And when I try to backtest how I would have reacted, given my mindset to World War II if I was in Britain, I have to admit I like to think in terms of probabilities and expected value. I would have said, what’s the expected value of fighting Germany in 1938 over Czechoslovakia? What would happen if he just didn’t? Looks like probably, it probably might just be best to run our odds with appeasement. And I wonder if this is just a one-off case, or do you think in general that illustrates a weakness in the more sort of probabilistic way of thinking about geopolitics compared to Churchill’s more emotional, oratorial, principled way?", "Andrew Roberts 00:28:12", "I don’t really agree with you. I think, with the premise because I think that Churchill, yes, he was emotional and principled but also he recognized that the advance that the Germans made between the Sudeten crisis, which ended in Munich in September 1938, and the outbreak of war a year later in September 1939, was so huge, especially in their creation of bombers and tanks and so on. And also it was helped so much by taking the Skoda factories of the Czechs from Czechoslovakia and churning out tanks for Germany, that it was a rational thing to have tried to have stopped Germany invading Czechoslovakia. So what Churchill was doing, yes, he was emotional and a great rhetorician and so on, but he was also making a very, very hard-nosed decision with regard to the balance of power, recognizing that in fact, Germany was in a much stronger position a year later than it had been at the time of Munich.", "Dwarkesh Patel 00:29:27", "Now, it’s remarkable to what extent Churchill had read, and not only read but written a tremendous amount of history. And I’m curious how concretely that history informed his decision-making as a leader. Was it at the level of tactics and geography, where you see how old battles in the same places are fought? Was it at the level of grand strategy? Was it at the understanding of human nature? What level did that understanding of history help him?", "Andrew Roberts 00:29:52", "All of those and more. One of the reasons I’m proud to be a historian is that Churchill was one, and primarily that was his job in the 1930s when he was out of office, was to write history books. And one of his great ancestors, John Churchill, Duke of Marlborough, it’s almost like an autobiography of the Second World War, and he’s actually writing about his own ancestor 200 years beforehand. But it is extraordinary how many things to do with tactics and strategy, of course, but also with how to deal with allies, how to deal with domestic political opinion and so on. All of these things are gone into and then only five years after the publication of that book, he is Prime Minister and fighting a world war himself. History was a constant echo for him. It gave him endless signposts. It’s mentioned in some 10% of his speeches in the Second World War. 10% of those speeches do have references to history. He was basically telling the British people that, look back at the Spanish Armada, look back at Napoleonic Wars, we have been in this dangerous situation before. The country has seen great perils before, Elizabeth I and the Spanish Armada, for example, and we’ve come through them and been victorious. So, yes, he recognized the sort of political power of historical analogy and he bent it to his overall overarching theme that we have to stand up to the Nazis, actually.", "Dwarkesh Patel 00:31:39", "So speaking of this, if we think of Churchill as an applied historian, this isn’t a question I was just planning on asking you, but you are in the House of Lords. You’ve written about these, I guess, basically everything that’s happened in the last few centuries across your 20 books. Would you ever consider getting more involved with politics?", "Andrew Roberts 00:31:56", "Well, I’m a politician. I mean, I go to the House of Lords from Monday to Wednesday, lunchtime to dinner time. I go and vote and other than I can’t see how much more involved in politics I can be than speaking and voting in one of the chambers of our parliament. If you want to refine that slightly, Dwarkesh, yeah.", "Dwarkesh Patel 00:32:27", "Let me restate the question. Would you consider running for, aspiring towards a leadership position in the UK, given how successful past historians have been at that endeavor?", "Andrew Roberts 00:32:39", "Well, we’ve just mentioned one past historian who’s been successful. I assure you that there are an awful lot of other ones who haven’t. No, I’m very happy with the extent that I involve myself in politics in the UK. I’ve got to get back down to writing history books, frankly, is the reason I was put on Earth, really.", "Dwarkesh Patel 00:33:03", "Now, tying back to Churchill. And your most recent book, there’s this interesting thing where wartime leaders, very successful wartime leaders, are kicked out of office after they win their wars. Churchill in 1945, De Gaulle resigns in ‘46. He led the French against the Nazis. And then more recently, we should discuss in your book, George H. W. Bush possibly has the most successful foreign policy since World War II, the unification of Germany, the fall of the Soviet bloc without a single shot being fired, and many others.", "Andrew Roberts 00:33:35", "David Lloyd George is the other classic example. Of course, David Lloyd George led us to victory in the First World War. He was out by 1922.", "Dwarkesh Patel 00:33:44", "So what is this? Why are we in democratic countries keen to kick out the people who win us these wars and foreign policy wins?", "Andrew Roberts 00:33:52", "Because we recognize that the skills you need in peace are completely different from the ones you need in war. And what the Labor Party was offering in 1945, for example, this sort of New Jerusalem of socialism and the welfare state and nationalizing the Bank of England and free stuff, essentially national Health Service was going to be given by Clement Attlee. But although much of that actually was going to be done by Winston Churchill as well, they recognized that the Conservatives didn’t have their heart in it in the same way that the socialists did. So it’s completely rational, isn’t it, in a democratic country to go for when you’ve got a choice of leaders to go for the one who’s going to lead you through the peace, however well the person who led you through the war did.", "Dwarkesh Patel 00:34:44", "Although that particular example of socialism in Britain doesn’t seem like the rational choice for the British population to have made.", "Andrew Roberts 00:34:51", "Well, it did after six years of grueling warfare, where people wanted to have a sort of more healthy and better life, and they assumed that socialism was going to be able to do that for them. It took us half a century before we grew out of that particular miasma.", "(00:35:11) - Gaza & future wars", "Dwarkesh Patel 00:35:11", "Now let’s talk about future wars, which is something interesting that you and General Petraeus survey. In your most recent book, you mentioned that the balance of power has shifted more towards defense than offense recently. Why is that?", "Andrew Roberts 00:35:26", "We’re seeing that, aren’t we? Or we will be about to see that, I fear. In Gaza, in Napoleonic times, it was one in three. You needed three attackers for every defender. That probably stays true until the Second World War. But frankly, with taking Gaza as an indication, with IEDs, with booby traps, certainly with all these tunnels, and with the capacity for ambush and for sniper fire as well, which has come on leaps and bounds since the old days of Stalingrad, you need certainly more than three to one in offense. It’s an interesting fact that when Clausewitz was writing, three to one was a perfectly reasonable ratio, but I think that’s gone to the birds now.", "Dwarkesh Patel 00:36:32", "Oh, interesting. This just preempted and I guess answered a question I was about to ask you, which is it’s remarkable to me that the three to one ratio which Clausewitz first came up with has stayed consistent for, I guess, the answer is that it hasn’t. But I was about to ask, well, it’s weird that for hundreds of years, with all these new technologies, that that ratio is still the one that people use, that technicians still use.", "Andrew Roberts 00:36:53", "Yes, well, as I say, they did until sort of well into our lifetimes. But they’d be mad to, today, because that has altered, especially, of course, in built-up areas in the kind of situation which one gets in Gaza with lots of high-rise buildings, fewer now than there were, to be frank, but lots of built-up areas. You can look at, for example, the Battle of Monte Cassino, where because the Allies flattened the monastery, actually the rubble was easier for the Germans to defend with machine gun nests and so on, than if the actual building had been left standing. So, there is an argument, actually, that you do better if you don’t blast the buildings, as you saw also in Mariupol, which you mentioned earlier. And then there’s, of course, Stalingrad, where they fight something called Rattenkrieg, which is essentially \"rats’ war\", because people are fighting down in the sewers. It’s hand-to-hand stuff. It’s extremely vicious, where every building, every room has to be fought for, sometimes down with bayonets. So this kind of fighting, which of course is heavily full of high casualty rates, might well be the one that we’re about to see the IDF enter in Gaza.", "Dwarkesh Patel 00:38:30", "That’s a scary comparison. Gaza becomes Stalingrad. I didn’t think of it that way, but that’s wow.", "Andrew Roberts 00:38:36", "In my house in London, I have an actual copy of one of Winston Churchill’s speeches with his handwritten annotations, and one of the sentences is that London fought street by street could engulf and devour an entire hostile army. And one hopes that doesn’t happen, obviously, to the IDF, but that’s the reality of house-to-house fighting.", "(00:39:05) - Could Hitler have won WW2?", "Dwarkesh Patel 00:39:05", "Actually, while we’re on the subject, I have a few other questions on World War II that I want to ask you before we return to future wars. You have this really interesting book, I think probably my favorite book about World War II, The Storm of War, which I highly recommend. And in it, you make the claim that were it not for the ideologically inspired blunders of Hitler and the Nazis, they could have won World War II, and then you detail a lot of the mistakes they made. But when we step back and look at after America joins the war, the overwhelming industrial output of America, even if they didn’t make these mistakes, is there really any chance that you have a country that has, like, twice the GNP and is outproducing the rest of the world combined in ships and planes, that you could have really stood up to that?", "Andrew Roberts 00:39:53", "Well, why did Hitler declare war against America? There wouldn’t have been a war if he hadn’t declared war against America. You’d have fought a war against the country that attacked you, Japan. And the reason is because he was a Nazi. Because he believed that Jews and blacks dominated the American decision-making process, which, by the way, is completely absurd. When one looks at the Roosevelt administration, it had very, very few Jews or blacks, but nonetheless, the Nazis didn’t do their factual accuracy; it wasn’t always their highest attribute. And they also thought that Americans were cowards and wouldn’t be able to fight very well, which is extraordinary considering that Americans had fought very well indeed in the First World War. Adolf Hitler told Molotov when they were in a bunker in 1940 in Berlin that if the Americans did come into the war, they wouldn’t be able to actually put any troops into the western theater until the year 1970. As it was, needless to say, by November 1942, you had a quarter of a million GIs storming ashore in North Africa. This sense of ideology. You see it also, of course, six months earlier, in June of 1941 where Hitler invades Russia in the belief that the Slavic people can’t stand up to the great German Aryan master race. And as Goebbels said to him, “We’ll kick in the door and the whole rotten edifice will come crashing down,” talking about the Bolshevik state. But that’s not what happened, of course. And the Russians, fighting on their own territory, i.e., when they’re not fighting a foreign adventure like in Poland or in Finland or now in Ukraine, are actually very good soldiers. So he got that wrong. Again and again, Hitler put his Nazi ideology before the strategic best interests of the German Reich.", "Dwarkesh Patel 00:42:17", "It was fascinating to read about the different mistakes Hitler made, obviously, from liquidating 6 million of his most productive, intelligent, and well-educated people to the timing of Operation Barbarossa or launching it in the first place. To the timing of launching World War II in the first place. But even if he hadn’t declared war on America, the Lend-Lease aid on whose basis the Soviets were able to drive back the Germans would still have continued. And that was, of course, a meat grinder where the overwhelming majority of German troops died. So I guess you could say, well, then he wouldn’t have Operation Barbarossa at all. But then are we still talking about the same war?", "Andrew Roberts 00:42:57", "He would have invaded Russia and been caught in this enormous war. But if he hadn’t declared war on the United States, it’s very difficult to work out how Roosevelt would have been able to have declared war on him, especially if you’re fighting a full-scale war against Japan, which by that stage had, by early 1942, conquered one-eighth of the world’s surface. It’s a huge undertaking. But you’re absolutely right about the might of American production. By the calendar year 1944, when the British produced 28,000 warplanes, the Russians and the Germans both produced 40,000 each, the United States produced 98,000 warplanes. It’s almost as much as the whole of the rest of the world put together. They were building Liberty ships at the rate of one a week. It was just a truly extraordinary thing in terms of sheer production. So, of course, that was going to give them the final say over who commanded D-Day, when D-Day would happen and what would happen once they landed in France. But it also had huge implications for everything else, really, in the Second World War as well. And you’re also completely right to say, for every five Germans killed in conflict, I don’t mean bombed from the air, I mean killed on a battlefield, four of them died on the Eastern Front.", "Dwarkesh Patel 00:44:27", "Now, given how misled Hitler was by Nazi ideology, why weren’t the Soviets as misled by Communist ideology in the waging of World War II?", "Andrew Roberts 00:44:37", "Because Communist ideology hadn’t affected actual Politburo the way in which the Politburo worked. Under Stalin, there was no sort of dictatorship of the proletariat or anything like that, let alone any equality. He was obviously a totalitarian dictator. But what he did learn was that the Hitlerian way of fighting the war was not the most productive one. So what you get after Operation Barbarossa, after which he had some kind of a mini mental breakdown in the immediate hours that he learned about it, how the one man he trusted in politics, Adolf Hitler, had betrayed him for a, that’s a difficult moment to take. But then what he does is to start to lean on those marshals such as Konev and Zhukov and Rokossovsky and others, and gives them a lot more power than they ever had before, and listens to them and takes their advice and actually has a much more Western view. The relationship between Churchill, Roosevelt, Alan Brooke, and George Marshall, which I write about in my book, Masters and Commanders, is a big sort of give and take, much more democratic and Western way of coming to military decisions. And that’s the one that Stalin adopts, and quite rightly and completely contrasted from what’s going on in the Wolf’s Lair, 1800 miles behind the German front, which is actually the Führer listening for hours to his generals, most of whom knew strategy far better than he did because they actually went to staff colleges. And they’d fought, of course, as officers in the First World War rather than just as a corporal. And men like Rundstedt and Guderian and Manstein and so on, these people would be listened to by Hitler. And then, right at the end of the meeting, Hitler would sum up and say that they were going to do exactly what he’d originally said right at the beginning of the meeting. And we have every word said by everybody at the Führer conferences because the stenographers took down every word that was said. And it’s very clear that they would go into tremendous detail. But ultimately, Hitler’s way was the way that the Wehrmacht went.", "Dwarkesh Patel 00:47:33", "This is actually interesting and this is one of the points you discussed in Masters and Commanders about the different ways in which democracies versus dictatorships are able to execute wars, and World War II is obviously the perfect example to evaluate this.", "Andrew Roberts 00:47:47", "Well, except that the Soviet Union was not a democracy, of course, and it was on the winning side. There is that sort of glaring glitch in the…", "(00:48:00) - Surprise attacks", "Dwarkesh Patel 00:48:00", "…you have the Allied, the Western democracies have this strategy by committee, I think you described it in The Storm of War, and that obviously means something as stupid as Operation Barbarossa never happens. You have to come to a consensus between all these leaders. At the same time, in your Napoleon biography, you have this singular genius who is able to execute these moves that even his advisors often are like, well, you shouldn’t do that. I guess in the case of Russia they were right. But yeah, maybe you can talk generally about the merits of strategy by consensus versus strategy by a singular mind.", "Andrew Roberts 00:48:41", "Yes. Actually, the interesting thing about Napoleon in 1812 is that he wasn’t warned by his generals that it was a big mistake. And this was partly because he and they thought that this was going to be a three-week campaign and it was only going to go about 50 miles into Russian territory before the Russians capitulated or came to a big battle and were defeated. And he had absolutely no plans at all to go all the way to Moscow in 1812. That would have seemed, as he was crossing the Neiman River, as a complete absurdity. But he was drawn in more and more into the Russian heartland until finally, they gave battle in September 1812 at Borodino. And then he went on and took Moscow. But he left enough time to get back from Moscow to Smolensk. It was, in fact, more time than he had taken to get from Smolensk to Moscow. There are other reasons, which I go into in the book, about why the retreat from Moscow turned into the catastrophe it did, but it wasn’t actually primarily the weather at the beginning. So, yes, Napoleon is the classic example of the single mind strategic leader who, like Alexander the Great or Julius Caesar, has the whole centrality of the campaign in his head. But, of course, he does lose. And after 1812 you have the various coalitions of 1813 and 1814 which force him to abdicate. Then he comes back, of course, in the Hundred Days and loses there as well. So that is in contrast to the much more collegiate way that Wellington and Schwarzenberg and Blücher and so on interacted with one another. But, yes, your overall thesis, I think, is absolutely right about democracies being better at fighting wars, but dictatorships, of course, and totalitarian ones are authoritarian as well, much better at starting wars because they do have the elements of surprise. Very often, one looks at the Yom Kippur War, of course, look at 9/11, Pearl Harbor, Barbarossa that you mentioned earlier, the Falklands, the attack on Kuwait by Saddam. There’s a great line of the Chinese sneaking into Korea, the Chinese crossing the Yalu River, 160,000 of them in the dead of night. I mean, it’s the most extraordinary surprise attack. And there’s that wonderful line of Paul Wolfowitz’s, who, when he said that surprise attacks take place so often in history, that the only surprising thing is that we’re still surprised by them. And that is right. Democracies can do surprise attacks. Obviously, the major exception to that rule is when Israel did successfully carry out the Six-Day War surprise attack at the beginning of that conflict. But otherwise, democracies tend not to. And by the way, it’s a good thing not to, because what it does do is light a fire under the country that’s been surprised. Classic examples, of course, being Pearl Harbor, and it makes them feel outraged and angry as a result. They tend to exact revenge. And by the way, what Hamas did on the 7th October is a classic example of that. Of course, that’s a surprise attack which was, by its own light, immensely successful, but which will have lit a fire under Israel that is going to be very dangerous for Hamas.", "Dwarkesh Patel 00:52:54", "This is actually an excellent opportunity to ask you about bringing us back to the future of war, which you discuss in your newest book, Conflict. The question I have is given we have satellite reconnaissance, drones, and all this cyber espionage, given how clearly we can see the world now with these new technologies, are large-scale surprise attacks ever going to be possible again?", "Andrew Roberts 00:53:19", "That’s a very good question. I’m tempted to say no, because you’re quite right, everything can be spotted on the battlefield today. Obviously, the Hamas surprise attack was a much smaller scale than a complete nation-on-nation kind of attack like Barbarossa or Pearl Harbour. But nonetheless, it is much more difficult to hide troops today than it ever has been in the past. That doesn’t change, of course, the psychology of what happens when you are surprised in the way that Israel was.", "But, yeah, in the 10th chapter, the last chapter of our book, we call it “The Future of War.” We look at areas like cyber and space, but also sensors, AI, robotics, and drones. Of course, in the future, the war will be fought between two sets of drones, and the humans won’t be in the loop. They’ll be on the loop. They’ll have written the algorithms, but they won’t be in the loop because decision-making has to take place far faster than the human mind can work. If a human is involved and at the controls of weaponry of the future, then he’ll lose. It has to be fought between two sets of machines and of course, that has great advantages in terms of speed. But also machines have no conscience. They don’t feel fear or cowardice. They don’t feel remorse or regret or pity. It’s going to be a much more dangerous world in that sense.", "Dwarkesh Patel 00:55:05", "Yeah, and that has all kinds of interesting implications from the technical which you discuss in the book, that the electromagnetic spectrum will be under much greater contention because then you can jam the electronics and the communications between these devices to the strategic. I mean, you have these examples famously, like, let’s say that in the 1980s when the stock market crashes because an algorithm malfunctions, if that leads to a world war, whatever that was the equivalent of, that leads to a world war.", "So you discussed in your Ukraine chapter that tech entrepreneurs are now having a much bigger impact on the waging of war where obviously you have Elon Musk providing Starlink services to Ukraine and notably refusing to provide the service to help with the surprise attack, the naval surprise attack that Ukraine was planning on launching in Crimea. Now, how will the ability of tech entrepreneurs to dictate where and how they will get involved in lending their technology to governments, how will that play into the future strategy? Will they be a force for peace or will they not be a force at all? Because if the government really wants your technology, in the end, they can just expropriate it.", "Andrew Roberts 00:56:24", "I don’t think they’ll do that except for in times of extreme stress and crisis. But no, actually there’s a very wide and I think overall very positive area that tech entrepreneurs can play here. And Starlink, yes, it’s true that Mr. Musk did refuse to help one attack in Crimea, but overall Starlink has been invaluable in this war. I mean, in a way, it is the first proper internet war. People with iPhones on the battlefield can upload both images and obviously also map references which can prove extremely useful to drones and artillery. And this is one of the reasons that Kiev didn’t fall in the opening parts of the Russo-Ukrainian war because Ukrainian artillery was being given accurate information on all sorts of open intelligence, open sources. It was a new kind of warfare which the Russians took a very long time to catch up with. And of course, because they didn’t have their own people on the ground, whereas Ukraine did. The native population was 100% opposed to the Russian invasion in every area apart from four Donbass oblasts, you essentially had just a multitude of information sources that were proving to be incredibly useful.", "So that’s one aspect of modernity. The next one, obviously, is drones and the use they’ve been put to by the Ukrainians. But the sort of innovative stance of the Ukrainians has been really extraordinarily impressive. And when tech people all across the world, not just obviously in the United States, came out very actively in support of Ukraine, it really did move the dial. And so I think with companies like Palantir and others that are really making huge advances and the cutting edge still being with the west in terms of tech entrepreneurial ability, this is a good thing for the west. And that some individuals are going to be pretty much like Mr. Musk, the most important private individuals, I would say, to actually affect warfare since Teesen and Krupp back before the First World War. So it really is a new world, but it’s not a bad new world. It could actually be an extremely good new world for the west and for democracy.", "(00:59:33) - Napoleon and startup founders", "Dwarkesh Patel 00:59:33", "Yep. Speaking of tech entrepreneurs and their personalities, let’s discuss your biography of Napoleon. So I’m not aware if you’re aware of this, but living in there is in the startup community, there is a cult of Napoleon that is solely there.", "Andrew Roberts 00:59:52", "I didn’t know that. Seriously? Is there?", "Dwarkesh Patel 00:59:54", "Your biography is the part of the canon. So in the person of Napoleon, I think startup founders see the best aspects of themselves resemblance. You have somebody who is a young upstart, just stupendously energetic and competent, much more efficient than the bureaucracies and old systems around him, a reformer, tremendously intellectually curious, an autodidact. You can just go down the list. What is your reaction?", "Andrew Roberts 01:00:26", "I love that. That’s exactly what he was, yes, absolutely. He was totally fascinated by every new thing. He flung himself into ideas for balloons and submarines and anything that could be useful for agricultural development. He was fascinated by trying to build bridges faster and better and cheaper. He was a real go-getter when it came to giving prizes for new chemical components and so on. This was somebody who created the legend, not just for soldiers, but very much for inventors and entrepreneurs and people like that, who he felt were going to help France outstrip Britain, essentially, which had a head start on France in the Industrial Revolution. So there was a very strong sort of nationalist reasoning behind his embrace of science. But he was made a fellow of the French Academy on the basis of his genuine interest, not just because he was the First Consul of France. He used to attend all their meetings. And this was an extraordinary thing. If there was going to be a meeting on, I don’t know, electricity, sitting in the front, there would be the First Consul taking notes.", "So I can understand why young tech entrepreneurs might like Napoleon, and I’m thrilled that my book might be helping with that.", "Dwarkesh Patel 01:02:09", "Yeah, no, I think you’d be surprised to the extent of it.", "Andrew Roberts 01:02:13", "Also, it’s true that megalomaniacs also love Napoleon. So I’m not saying that there is a massive Venn diagram shaded area between tech entrepreneurs and megalomaniacs, but it doesn’t necessarily mean that liking Napoleon doesn’t necessarily mean that you’re going to be a great tech entrepreneur, shall we say.", "On the point of being a futurist, it’s really remarkable. In your Churchill biography, you discuss the ways in which he saw the influence of tanks and planes and even nuclear energy far before many others.", "Andrew Roberts 01:02:50", "His best friend was the Oxford professor for physics, professor Lindemann, and later Lord Cherwell. When it comes to the people he had around him, he loved having scientists around him. He said that scientists should be on tap but not on top. So he did recognize that he didn’t want to have a world run by scientists, but he definitely wanted to know what they were thinking. And as early as the mid-1920s, so a good 20 years before the atom bomb, he talked about how an entire city could be destroyed by a nuclear bomb the size of an orange. And that was very advanced stuff, frankly. He, of course, was fascinated by the use of radar in the Second World War, especially at the very beginning of the Second World War, how one could bend the German rays to mean that their bombers were sent off target and didn’t fly over British cities. He wanted to get into the real nitty-gritty of all that, and of course, the ultimate mathematical genius machine, the Ultra machine that broke the Enigma code. In everything related to that, he was also really interested in learning and understanding the reasoning behind what was going on.", "It’s very easy to think of Churchill as a bit of a reactionary figure, this sort of tubby Tory with his cigars and his brandy and wanting to hang on to India and all of these sorts of very much set in the past kind of attitudes and attributes. But really, he was somebody who was obsessed with the future.", "Dwarkesh Patel 01:04:48", "But on the point of Napoleon, it’s interesting the way you describe the way he would micromanage every aspect of the Empire and obviously his energy and efficiency. It reads honestly like an Elon Musk biography where Elon is micromanaging the engines on his Raptors at the same time running these five other companies. I wonder what you think a person like Napoleon does today that seems that a genius is born today. Does he become an Elon Musk or does he do something else?", "Andrew Roberts 01:05:16", "No, that’s exactly what he does, absolutely. He goes to Silicon Valley and sets up his own company and makes a billion out of finding something useful to advance mankind with. That’s exactly what Napoleon does today. And by the way, if he has anything like the same acquisitive techniques, he probably buys up lots of other companies around him in the way that Napoleon invaded country after country. But when he did invade those countries, what he would do, for example, in Italy after the Italian campaign and he entered Milan, the first thing he did was to get the intellectual, the writers, the scientists, the chemists, and so on, together. He was very interested in astronomy and so on, and would talk to them about their thing. So you had an intellectual as leader, which frankly the Bourbons had not been for the last thousand years of French history. It’s very difficult to think of more than one or two genuine intellectuals as ruler. And so one can understand why he became popular amongst the middle class and the intellectuals themselves. And he would also, one of the other things he would do was go into every town he went into. He would go into the ghetto and free the Jews and give them civil and religious rights and so on. And I think that was tremendously forward-thinking for that day and age as well, and a very attractive feature about him.", "Dwarkesh Patel 01:06:56", "Obviously, the biography of Napoleon must end tragically, and I notice this about many other biographies of great people I read. It is often what makes them great in the first place, they keep making these double or nothing gambles that catapult them to the top. And then, of course, at some point, your luck runs out. That’s obviously an oversimplification in every single case. But I wonder if this is also a pattern you notice in the lives of great figures. You could say, for Elon, having his reputation and fortune wasted away at the altar of Twitter could be an example of one such thing. But what is your reaction to that?", "Andrew Roberts 01:07:33", "Yes, of course, hubris is the occupational hazard of hugely successful people, needless to say. I mean, it’s probably also the occupational hazard of lots of other people, but we just don’t know about it because they’re not usually successful. But one does tend to get stuck in one’s ways. One can’t necessarily teach an old dog new tricks. You can’t necessarily reinvent yourself and therefore you go down the same old paths.", "I would say in Napoleon’s defense, of course, not least that idea that when he invaded in 1812, which is the key moment, after that nothing good happens, and before that, lots and lots of good things happen. But the key thing about that is that, look, he had beaten the Russians twice before. He was invading with an army of 615,000, which was the same size as Paris at that time. He knew that the Russian army was only about half the size of his and he didn’t want to go too far into Russia, which of course, as I mentioned earlier, changed in the course of the campaign. But it wasn’t an insane hubristic mad decision to go to war against Russia in 1812.", "What was hubristic, mad, and insane was to try to beat Britain by imposing a continental blockade on the entirety of Europe, and therefore attempting to crush Britain by stopping smuggling, which was completely rife, and to stop every other country from entering into free trade with Britain. It was that belief that protectionism could somehow win the war against Britain. That was the mad thing. And that’s what led him into the Peninsula campaign, which cost him a quarter of a million.", "Dwarkesh Patel 01:09:32", "Wow. Actually, if you have somebody like Napoleon, who for his entire life has succeeded in ways that nobody else is succeeding or could anticipate or people tell him, well, that’s not possible, and he accomplishes it. Obviously, he has to take advisors with a grain of salt, knowing that he has been able to do things that others have not been able to do, but he also has to recognize his limits. And this is not just a question with Napoleon in particular, but just in general. How does somebody who is at the tail end of multiple distributions not fall back to mediocrity when making judgments about themselves, but also recognize their limits?", "Andrew Roberts 01:10:13", "It’s always a question of choosing the right advisors, isn’t it? In domestic politics, areas that he didn’t know much about, like legal codes—although it’s called the Napoleonic Code, actually it was his legal experts who drew it up and saw it through and passed the legislation. So in areas he wasn’t particularly interested in, he did allow a considerable degree to be advised upon. He was the dictator, he had the ultimate decision, but he was very good at choosing advisors, quite regardless of their status in society or how respectable they were.", "There’s a man called Cambacérès, who was a truly powerful figure, and he was gay. This was something that was pretty much unknown at that stage. He was outwardly gay, and at a time when, of course, that was against the law. But Napoleon didn’t mind that because he was so good at his job that he kept him on as arch chancellor. Some of the marshals, he was a true believer in meritocracy in that. Some of his marshals—there were 26 marshals, 13 of them came from the working classes and in some cases below, they were peasants, sons of innkeepers and barrel coopers and domestic servants, and so on. And yet, if he saw that a man was lucky, it was one of the things he always wanted in his generals, but also was a natural leader. He would appoint him, and they became marshals. And all the marshals, apart from a couple, became dukes and princes, two of them became kings. To be the son of a barrel cooper and to become a king in the early 19th century was a truly extraordinary thing in an army where for the last thousand years, certainly your rank and status in life was very much the same as your father and grandfather.", "Dwarkesh Patel 01:12:33", "One thing I found really interesting in your biographies of Napoleon and Churchill, if I’m remembering correctly, both of them wrote a novel in their early 20s or thereabouts, where they, or rather, a character saves their country in battle and wins over a pretty maiden. And I remember the details, but I thought, wow, both of them did that. That’s a really interesting detail.", "Andrew Roberts 01:12:57", "What explains this? Yes, it’s probably a terrible psychological disorder, but I’ve just realized that I did the same thing when I was in my twenties. I had a novel in which I saved the country and married a fair maiden. Gosh, I don’t know what that makes me. Probably a megalomaniac like Napoleon, but yes, they’re both great reads.", "By the way, I love Klisson and Eugenie, the book you’re referring to by Napoleon, but the best one by far is Savrola by Winston Churchill, where you see lots of rolling Churchillian phrases which come out again later on in life. And you’re right, both of them are very, very obviously autobiographical. Actually, in Savrola, the hero, doesn’t he save the country, but then he goes off into exile. And I think, doesn’t the Napoleon figure die heroically in battle after saving the nation? But there’s a lot of nation saving going on in both of those youthful novels.", "(01:14:06) - Robert’s insane productivity", "Dwarkesh Patel 01:14:06", "Now that we’re nearing the end of our time, I want to ask you about how is it possible you’re in the House of Lords? I just realized that well, I knew you were in the House of Lords, but I just realized how much of your time that consumes. On top of that, you’re writing these books that are your biographies are widely recognized as the best biographies of these people, who have thousands of biographies written about them. And you’ve written 20 books. How are you managing your time? How is this possible?", "Andrew Roberts 01:14:32", "Because I start work at 04:00 A.M every day. You get about five hours or so before anyone wants to bother you or irritate you. And so that’s the trick. It’s time management. I nap every single day for about half an hour in the afternoon. I’ve been doing it since I was at Cambridge 40 years ago. And so, I’ve trained my body to switch off and then switch back on again. It means that you get two days’ worth of work out of one day on Earth. Obviously, everybody’s body clock is completely different. But I do recommend if you’re young enough to start, and as I say, I started when I was in my early 20s, you can really squeeze more time out of the day than you think is mathematically possible.", "Dwarkesh Patel 01:15:28", "Yeah, I’m 23, so this might be the perfect time to launch this habit.", "Andrew Roberts 01:15:32", "Today is the day. Make sure after lunch you put on an eye patch and literally go to bed, and you will find that you’ve squeezed an extra day out of the day.", "Dwarkesh Patel 01:15:45", "So why is biography, which is a genre you’ve employed across many of your books and, of course, books that have become overwhelmingly famous, rightly so? Why is that the best medium to understand an era or to understand the impact of the era on the present?", "Andrew Roberts 01:16:02", "Because it focuses the mind. It concentrates on one person. You emotionally connect with that person. You either love him or hate him or her. Of course, I have done some work on writing about women. It’s the great man and woman theory of history, of course. And I do believe in that because I think that although there are enormous historical movements that happen, the decline of magic and the rise of science, the industrialization, and everything, those come about as the result of the deliberate choices made by millions, indeed billions of people. And you can’t look at something like the invasion of Russia we were talking about earlier in 1812, or Churchill’s decision to fight on and not make peace with Hitler in 1940, and not recognize that the individual does play an absolutely central role in some major world-changing decisions. So I think it is intellectually justified to write biography. A lot of Whigs and determinists and Marxists don’t. They think that biography is far too antithetical to determinism. But you know, what are we but our decisions? Man is spirit, as Churchill said. So, I think it stacks up as a reasonable way for me to spend my time.", "Dwarkesh Patel 01:17:42", "Yeah, indeed. I think that’s a great place to close this conversation. This was absolutely fascinating. And the book, again, I highly recommend it. It was a really thorough and interesting read about recent conflicts with insights from not only one of the best historians in the world, but also somebody who commanded the two most recent campaigns that involved conflict since World War II. So the book is Conflict: The Evolution of Warfare from 1945 to Ukraine, available at Amazon and fine bookstores everywhere. Andrew, thank you so much.", "Andrew Roberts 01:18:19", "Thank you, Dwarkesh. I’ve really enjoyed it." ]
[]
https://www.dwarkesh.com/p/andy-matuschak
Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don’t Work
[ "(00:00:52) – Skillful reading", "Dwarkesh Patel 00:00:52", "Today I have the pleasure of speaking with Andy Matuschak , who is a researcher, engineer, and designer working on tools for thought. In addition to this podcast we did an interesting collaboration on Andy's YouTube channel which I encourage you all to check out, where I just watched Andy try to learn some new material.", "It was just an intro chapter of quantum mechanics. Honestly I was expecting to see some cool techniques or be impressed but I was way more surprised than I expected to be by the deliberateness and the effortfulness of the practice.", "It was 15 minutes a page in this textbook. And for every small thing that Andy thought, ” I don't fully understand this, the author's trying to say something here, he's trying to draw an analogy or relationship, I'm not sure I totally comprehend the relationship between classical mechanics equation and the quantum mechanics equation, the author thinks is analogous, ” just really delving deep in that.", "I thought that it was really interesting that this is a way to approach new material. So in this conversation I'm looking forward to talking with Andy about not only that experience, but a whole bunch of his other research and the other tools he's built.", "Let me ask you this. That experience made me think that this is somebody who actually cares about understanding the material. Do you think people in general care about actually integrating and understanding the material they're consuming in books and textbooks? Don't you think they'd make more effort to actually assimilate that information if they cared to?", "(00:02:30) - Do people care about understanding?", "Andy Matuschak 00:02:30", "I think the statement is just a little too general to comment on. I think it's certainly the case that most students don't actually want to do this because they're learning stuff that they don't actually care about learning or even if they do care about learning it, often there isn't a clear connection between whatever reading or activity they're doing in the moment and the thing that originally inspired them for the subject and what they actually want to do. So there's always something tenuous going on. On the other hand, it's amazing to look at subreddits and to look at the level of nerdy and fascination that will be brought to bear on gardening equipment or knots, for instance. People are competing to tie some very obscure 18th century knot or whatever, and they're flipping through almanacs from the period. So when people are interested and it connects to something that's truly meaningful for them, they really do want to absorb and we see that in their behavior.", "There is a second thing that I think is relevant. To explain this, I will reference Mortimer Adler and Van Doren's How to Read a Book , which is a great guide on serious reading. They consider the case of people who often have difficult or demanding books on their bedside table. So these are kind of aspirational, like, “ Oh, I wish I could read King Lear. I want to be the kind of person who reads King Lear.” You put it on your bedside table and people will read it before bed. They'll find that they fall asleep while they're reading it, they're not really absorbing or understanding this book. It's not just an issue of memory, they simply are not apprehending the words on the page. The authors of How to Read a Book make the case that the issue with these people who are falling asleep reading King Lear is not that they don't want to stay awake and to really deal with that text, in many cases, it's that they actually don't know how. They butt their heads up against this very difficult wall of material. It's almost like a rock climber who's not very experienced going up against a wall that only has these really subtle notches. To an experienced rock climber, those subtle notches are like a ladder and they can get right in there and start making some progress and seeing what's up with this wall. But if you're an inexperienced rock climber, it just looks like a solid wall. The claim, maybe this is an optimistic claim, you can take me to task, is that there is such a thing as being a more skillful reader and being a more skillful reader will actually, in practice, in many cases, when the reading is aligned with your actual interests, produce a more serious, more understanding, forward kind of reading.", "Dwarkesh Patel 00:05:18", "Right. So there's two models of why people might fail to retain the material they're consuming. One is they got it at some point, but they forgot it. And the other is they never understood it in the first place and they just never noticed that they never understood it. What I found really interesting was you going paragraph by paragraph, sentence by sentence, and asking “ Have I got this?” This was material that I had tried to go through the week before. And there were things when you dwelled on something, I'm like, “ Actually, I don't understand that either.” and I didn't notice I didn't understand that. How are you able to notice your confusion while you are going through it?", "Andy Matuschak 00:05:56", "This is a kind of habit. It's a skill that can be built. Adler and Van Doren suggest that the first and most important rule of skillful reading, active reading, is asking questions and trying to answer them. If you just dwell on that, what kinds of questions should I be asking and how should I go about asking them? How should I go about answering them when the author isn't present? And so on and so forth. [Unclear] They also say conversely, and this is meant as a criticism, an undemanding reader asks no questions and gets no answers. I certainly have read many, many books that way, particularly before I developed this habit and I often found myself falling into that second category. The issue was not that I failed to remember things, but rather that my eyes just skidded across paragraphs without even realizing.", "Dwarkesh Patel 00:06:47", "You're halfway through a chapter and you're thinking, what is the chapter about?", "(00:06:52) – Structuring effective self-teaching", "Ok, the broader question is — now that we have all these online resources, some of which you’ve helped develop like Khan Academy, it seems that the value of conscientiousness as a trait has dramatically increased. If you can motivate yourself to learn these things, the world is out there for you to absorb. What are the sort of design or UI or even content modifications that can be made to give you a conscientiousness boost? In the past you had a professor, you had peers, you had in-person deadlines to motivate you. Is there something equivalent to a pen and paper and how that boosts your mathematical IQ for conscientiousness?", "Andy Matuschak 00:07:33", "Right. One enduring result in education psychology is that when you're doing a lot of cognition, metacognition is difficult. What I mean by that is when you're thinking really hard about the stuff on the page, it's very difficult for you to plan, to regulate yourself, to figure out what the best next action to do is, to reflect and evaluate on whether you're understanding things. All that gets harder as the material gets harder and as it gets less familiar. So one common thread, at least in learning science stuff, has been outsourced metacognition. Some of the ways we outsource that are actually very familiar, they're things like somebody gives you a syllabus and tells you what to read when and you reference that. That is a user interface, that is a design practice. If you're a self-motivated student, one thing you can do and that I've done, is just go appropriate a syllabus from some graduate level course that corresponds to the text that you're reading as that might be a good guide to what's most important and how to approach this.", "There are also lots of things that one can build directly into the interfaces. Just as one example, in Quantum Country , which was a textbook that Michael Nielsen and I developed to explore some ideas around augmented reading experiences, we embedded a bunch of review questions every 1500 words or so in this text on quantum computation. Our primary intention in doing this was to help people remember what they read. We had this theory that part of what makes it hard to learn a complex subject is that there's all these new definitions and notation and terms and things being thrown at you at once and you're being asked to combine these things, which are still unfamiliar. And so you're constantly having to retrieve these elements and struggling to do it, either it's taking a while or your success rate is low.", "That was our motivation but it had this other metacognitive benefit that was really important. When you’re asked these questions after reading 1500 words it is an opportunity for you to notice that you did not in fact absorb what was in that thing. Not that you don't remember but that there's a word in the question that is apparently important that you simply didn't even notice. And so not only does that give you feedback, it tells you that maybe you need to reread that specific section, but it may also change your behavior towards future sections. In interviews, readers told us that after they reached the first set of questions or a particularly difficult set of questions, they found themselves slowing down and reading more attentively or actually realizing that their reading practices were ineffective in general. In the way that you were mentioning towards the start of the conversation. There's been a bunch of research on adjunct questions, questions that go along with a text, and they have all kinds of effects. The adjunct questions have the kind of effects on forward material I was just describing and they also have the effect of making you reflect on what you've just learned. And in addition to the questions being asked, you might find yourself pondering, “ Well, I'm being asked about this. But why does this matter? ”", "Dwarkesh Patel 00:10:58", "Yeah, on the point of adopting a syllabus from somebody else. One problem you might have as a self learner is you have some goal, a reason for learning, and then you start thinking, “ Well, do I really need this chapter? Do I really need this content?” At this point, you're doing the metacognition that you were trying to use a syllabus to avoid.", "Andy Matuschak 00:11:16", "Yeah.", "Dwarkesh Patel 00:11:20", "If you are trying to self learn and there is a resource that is a close approximation of the syllabus you want. Should you just think “ Hey, I don't know why I need this chapter. I'm just going to go through it. ” or should you use your own judgment there?", "Andy Matuschak 00:11:33", "This is a pretty classic issue for learning in general. You have this problem where to bootstrap yourself in a domain you have to outsource the question of what is necessary to know. You might know, for instance, that you really want to build a model that can generate images given descriptions, like Midjourney, but you don't even know what you need to study to do that. So you pick up some textbooks on machine learning. You're outsourcing the answer to this question to the author. What is necessary to know to build things? Maybe you can find a book that's actually labeled “What you need to know to make an image generating model” But even then, you're outsourcing the answer to the author.", "You can take that answer as a start and treat it as tentative and revise it iteratively. And as you become more skilled you can lean less on it. And you probably should. I think a very common mistake that people make is to feel that they need to do the thing the right way and that is exhaustive and completionist. If they fail because they find themselves bored or unmotivated because the material doesn't actually seem to relate to what they want to know, but they're just going on faith that, “ Well, if I follow what the author says, everything will be good.” Anyway, they find themselves having trouble for that reason, and then they just stop. This is bad. They would be better off just skipping around according to their interest and continuing.", "One other thing I'll say about this is that the role that these syllabi play is as a scaffold. This is a term of art from learning science, but it relates to the thing we're familiar with. If you want to get higher up a building, you may not be able to climb it yourself, but you can build some scaffolding around it and then suddenly you can reach that top shelf or the top of that building. The scaffolding is ubiquitous in education. We give you simpler versions of questions first, that's a kind of scaffolding. We partially work the answer first, that's a kind of scaffolding. We give you worked examples first, where we might ask you to predict the next step of the work example. That's also a kind of scaffolding. Where the metaphor breaks down is that once you become more capable, we try to remove the scaffolding. It's called fading. The idea is that once you have solved a lot of calculus problems, you don't need half of it worked out and you're just filling in one of the blanks anymore. And in fact, doing that would not be as effective a learning experience.", "If I'm studying something in computer science, which is a domain that I know really well, I don't need those syllabi, not in the same way for most subjects, and I think that's mostly just because the amount of cognitive demand that's placed on me by the subject is just much lower than it is for other subjects. So much of it is familiar already that I can deploy my own planning more effectively as I go. But it's also the case that because I know so many things about the subject, I can do a better job from the get go of making a plan. Because making a plan requires modeling a path or predicting a path or saying, “ Well, I guess I'd need to see how this connects to that or something like this.” And if your destination and your starting point are very far away, then you can't necessarily see all the things in between or how to draw those lines. But if those things are only a couple hops away, you can maybe infer pretty accurately.", "Dwarkesh Patel 00:15:22", "Right. I guess this maybe implies that if you do want to learn about a subject, it might just be helpful to just do an Intro to X subject course or textbook, not necessarily because it is instrumentally valuable to whatever problem you're interested in but because it'll give you the context by which to proceed on, the actual learning.", "Andy Matuschak 00:15:45", "That's true. It's also the case that you don't even know all the stuff there is. This is another key problem and this is another reason why we outsource stuff. There's a fundamental tension in unschooling, for instance. Just let the kids pursue what they're interested in. That's cool. There's a lot of good things about that. But say that a kid's true passion turns out to be ocean geology or something and they're in a landlocked country and there's just no one around them that talks about ocean geology, then they're missing out on some great opportunity. But if the school had a program where they are bringing in guest speakers and then there's a special lecture on ocean geology from this person and it lights up the kid's world, even if they wouldn't have chosen that lecture, that's a good thing.", "Dwarkesh Patel 00:16:33", "Yeah. Unschooling is actually an interesting subject to talk to you about.", "(00:16:37) – Memory and forgetting", "But before that, I want to ask you about this excerpt from a Paul Graham blog post titled How You Know and it says, “ Reading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists. Your mind is like a compiled program you've lost the source of. It works, but you don't know why.” So it's a compiled program, you don't need the source code. Is it okay that we're forgetting so much of what we're reading?", "Andy Matuschak 00:17:05", "What he's saying is true, to some extent, whether or not that extent is sufficient is going to depend a great deal on the situation and on what you need. If your aspiration actually depends on having a deep, detailed understanding of the material, then the imprint on your worldview or on your automatic responses made by the book may not be sufficient. On the other hand, if what you want is to absorb a lot of different ways of looking at the world, knowing the details of these isn't necessarily important. Maybe you just want to know that Confucius emphasizes community and society as a moral patient in contrast to the individualism of a bunch of humanist philosophers. And if that's kind of the level that you feel like you need to make decisions in that domain then I think that's fine.", "Very practically speaking, it's funny that he uses the word compile, because one of the prominent theories of cognition, that is how we come to know and learn things, is this theory called ACT-R by John Anderson. A key part of it is this process that he calls knowledge compilation. This is the process by which we take individual facts and turn them into higher level patterns that we can generalize and apply in more contexts. And I think that's what Paul is gesturing at. By reading a book which contains a story or a case study you learn to generalize to some extent and you apply it in other contexts when it seems relevant.", "The reason why I bring up Anderson's theory is just that he has a bunch of specific claims about what's necessary for knowledge compilation to happen and what you'll be able to do as a consequence of certain degrees of knowledge compilation. I think he'd probably respond to this by saying that — actually, in order to effectively compile things that you've learned into schemas that will match feature scenarios effectively, you need to be exposed repeatedly to those things, you need to use them, you need to do a variety of things that will basically show your brain that is relevant to apply these things in combination. And simply reading probably won't do that. But if you read and you have a lot of conversations and you're in a context where it's demanding and it's drawing on what you read, then you may naturally do that kind of compilation step.", "Dwarkesh Patel 00:19:41", "I've actually been thinking about this in preparation of talking with you. I've had the pleasure of talking to a lot of interesting people across a lot of different fields. When I look back on some of my old conversations, I notice that I actually had a lot more context at the time I interviewed them and had done all the prep than I can remember now. Sometimes I'll listen back to a conversation and I won't even remember the content in the conversation. And I remember thinking after the conversation, I knew so much more about this field than was compressed into this one hour interview, right? I had to prep other things that might come up. And afterwards I'm like, “ I don't even remember the things that were in this one hour.” But then the other part of me thinks, “ Well, I'm getting better at doing the podcast” , that might imply that I've picked up something. But it is a shame that I didn't have some sort of rigorous practice throughout the time of retaining the material that I was keeping.", "Andy Matuschak 00:20:31", "Well, yeah, I expect the main [unclear] in which you're getting better, is actually not really about any of the details of those materials. I think it's about your practices as an interviewer, the way that you generate questions, you probably have a bunch of patterns, whether you know it or not. You read a thing that a person has written in hopes of generating good questions about it. And even though you maybe don't have this habit for textbooks yet, of constantly demanding things of the textbook, you have started to develop this for essays or blog posts that interesting people you're interviewing have read. And to point to this Anderson theory, in the course of repeatedly doing that, you've made parts of it automatic, so that you don't need to do it consciously, you can focus more on the material, you can probably take on more difficult material, or actually understand material at a higher level than you could have before, because less of yourself is engaged in this question of how do I make the questions from the material?", "Dwarkesh Patel 00:21:36", "Yeah, I certainly hope so. Otherwise, there's a question to be asked of what I've been doing all these years.", "Having interviewed some of these people who are infovores and have consumed and continuously consume a lot of content, they don’t have a note-taking practice.", "This is something you also noticed and pointed out in your notes. Tyler Cowen, for example, I don't think he has any sort of note-taking practice. He just devours information. What is your theory of how these people are integrating things that they're reading?", "Andy Matuschak 00:22:07", "Tyler's a good example. I think he's actually a little easier than some others we might discuss. So, let's talk about Tyler for a second. One of the other things that's so interesting about Tyler is his writing obligations. This is a man who's blogged every day since 2007 or something and has a weekly Bloomberg column, something like 1500 words, and also has published something like a book a year for a decade or more, and occasionally publishes some academic articles, plus like a bunch of other collateral. That is notes. And I think it's also important to note that like the way that Tyler writes these blog posts and the way that Tyler does these columns and even the books is very different from the way that many other book authors work. Tyler’s blog posts often have this a real first draft mentality to them. He's just thinking out loud and he's got decades of practice thinking out loud and like writing down a decent take the first time. And so he gets something pretty good, the first time, much of the time. And that works for him. So that is a note, right? Your initial thoughts on the subject is what you would write in a note.", "Dwarkesh Patel 00:23:24", "Yeah, one of my former guests, Scott Young , was comparing Bryan Kaplan’s books and Tyler Cowen's books and he said, when you read a Brian Kaplan book it's like a chess game. If you try to move a pawn up on this case for education, I've got this rook that I can move here. With Tyler, it's more like he’s shooting the shit on a subject.", "Andy Matuschak 00:23:43", "Bangladeshi train stations", "Dwarkesh Patel 00:23:44", "Yeah, right, right. On a separate question, do LLMs make memorization more or less valuable? There's a case you can make for both. But on net, is it more important to have more Anki cards in your deck now that GPT-4 is out?", "Andy Matuschak 00:23:58", "Maybe this is a good time to talk about what memorization is or what it's for. We could use that word to refer to the practice of learning more trivia. For instance, a thing that I and some people I know have done is, we’ve gone through a book called Cell Biology by the Numbers , which says all of these things like, how big exactly is a nucleotide? Like how much volume does it take up? It's kind of helpful occasionally to know that it's about a nanoliter. And that can help you model things. So you can just commit all of those things to memory, right? That's one kind of memorization. And we could talk about how LLMs affect that. But I just want to make the case that so much of what you do and experience day to day is memory bound, or is memory influenced in important ways. For instance, your ability to understand a difficult argument, even in the course of a text, is memory bound. Some of that's working memory. But your ability to understand an argument that has many steps in it, more steps than you can keep in your working memory, depends on your ability to think of some of those steps in terms of some stuff that you already know, so that you can kind of reduce it or abstract it.", "Likewise in creative work, there's a bunch of studies trying to catalog case studies of how it is that people have flashes of insight. It's a little hard to talk about that but one of the things that's a pretty consistent source of insight for people is noticing a surprising connection or a surprising contradiction. It probably feels pretty familiar, right? You're reading through the newspaper and you see that people have finally figured out how to do X and you're like, “ Wait a minute, that means if I combine it with this other thing, like we'd be able to do Y!” or something like that. Now that's only possible if the other thing is in your memory. If you have to think to look up the other thing, the newspaper wouldn't seem so salient to you.", "Early on in my time in Khan Academy I learned a whole lot of details about the education market in a very thorough way using memory systems. This let me be in high level executive kinds of conversations where we're trying to figure out strategy stuff and somebody would propose a particular direction and I could say things like, “ Well the total budget spending for instructional materials is this and that market is growing by this percent per year and 10% of students in the US are in this place” and so on and so forth. Basically I could evaluate ideas on the fly in a way that others couldn’t. Anyway, this and other things are just part of my rant about how people in general under-appreciate the role that memory has in our lives.", "So just to come back to the question, explicit memorization or explicit making sure that you can recall the thing reliably. We can test it against these things. So for the case of the creative instinct, for instance, noticing the contradiction, noticing the connection, I imagine that we will have future notebooks that will do some of this noticing with us and that will decrease our need to be able to rely on our own sense of salience or something like that. But I guess I don't know how much. My own experience coming up with weird ideas that feel very new is that it feels very personal, it feels very [unclear]. I often haven't been able to describe, textually, the constituents of the thing very clearly. There's just kind of a feeling that something in this general direction is connected with something in that general direction, or there's a tension. That makes me a little hesitant. LLMs depend on our ability to externalize things and to make them legible. Back to the learning point about the role of memory. If what you're trying to do is to understand something pretty difficult, your ability to understand that thing is still absolutely going to be bound on your memory of the constituent material.", "Dwarkesh Patel 00:28:11", "Do you think there's pedagogical value in forgetting? Some anecdotal or unrelated evidence is in neural networks where sometimes you can improve performance by pruning some of the weights. Obviously, we forget things and we don't remember everything. When we sleep, we lose a lot of our memories. Is it possible that by not getting the details and only getting the gist, that actually helps us better generalize the insights we're getting from text and things like that? What do you think of that way of thinking?", "Andy Matuschak 00:28:42", "Yeah, it could be. Memory is very connected to attention. And we can't attend everything. So one of the roles of memory is to help guide us to the things that are important. Maybe I happen to know that the magnitude and energy of an electron volt, that's something I can draw on because of the memory system stuff, but I also don't want that to be front and center in my mind all the time. I don’t want it to be hyper salient the way that I want some very important design principles to be. So yeah, there's some role there. There's also some theories that the reason we have forgetting is that our environment or ancestral environment was very traumatic. So we would like our episodic memory in particular to maybe not be all that faithful. I actually don't know the status of those theories.", "Dwarkesh Patel 00:29:30", "Probably why you forget dreams as well, right? Dreams are pretty traumatic. If you thought of them as the same as a real life experience.", "Andy Matuschak 00:29:37", "Yeah. Another weird thing about memory is that as far as we can determine, memories aren't lost exactly, at least not completely. There's a series of interesting experiments that people have used to demonstrate that decades later, things are still there. If you can cue them right, people can bring things back, even things that they feel are lost. And of course, you can also cue people in ways that are hallucinatory so you need to be careful about that. I guess the reason why I bring that up is that it flies in the face of this view that there's a limit.", "One of the things that I think is kind of weird about this memory system stuff, or like memory champions, or something like that is “ Oh, if you do these things, will you start to forget other normal human stuff?” And what's weird is, no. I've been doing this memory system stuff for years and I just know more stuff now. This is aligned with the experimental literature, which seems to suggest that, there's probably upper bounds but we're not close to them. Some of these memory champions have memorized maybe two orders of magnitude more things than I have practiced. Certainly people who are multi-lingual have really, really absurd numbers of things memorized. So there isn't a resource management argument.", "Dwarkesh Patel 00:30:59", "If there isn't, why do we forget so many things? Is there some reason the brain just forgets some of the things we’re coming across? Maybe we were training the ancestral environment to find certain things salient that don’t just map onto books?", "Andy Matuschak 00:31:16", "It’s a good question. We're getting to a part of the cognitive science space that I'm less familiar with and also that I suspect we simply know less about. But let me just riff a little bit. One of the things that we sort of know is this idea of spreading activation. When you go to try to look something up or when you try to deal with a particular situation, there's something almost kind of like DNS exchanges or like routing on a network or something where we start from some point that is like a stimulus, and speaking very informally, we kind of expand outwards from there and there are effectively like weights on those connections. By tuning those weights effectively, we route the packets on the network effectively. Memory is encoded in these weights, at least partially. So if you didn't forget things, then you might just have this weird cacophony on the network and in particular, what's salient? What to do next? Which response seems most appropriate to this question? You might answer those kinds of things very effectively, because all this stuff is coming up for you, that is much less relevant. One of the theories about how well we remember stuff in what circumstances is actually called predictive utility theory. And it suggests that the probability of retrieval of a particular item in a given situation actually does correspond with basically a model of to what extent the brain predicts it will be useful.", "Dwarkesh Patel 00:32:48", "Right. And then the prediction but doesn't necessarily map on to…", "Andy Matuschak 00:32:54", "Doesn’t necessarily, exactly. So when you repeatedly access something, when you practice retrieving it, the prediction of the utility of the thing goes up. And when you do it in a variety of situations, it goes up across a broader distribution.", "(00:33:10) – Andy’s memory practice", "Dwarkesh Patel 00:33:10", "Okay, so this is interesting. When did you start your memory practice? Presumably it was after after Apple?", "Andy Matuschak 00:33:14", "Yeah.", "Dwarkesh Patel 00:33:15", "Okay. Let me ask you this. At Apple, you were in charge of a bunch of important flagship features on iOS and I'm guessing other things. Presumably you didn't have some sort of practice but since you were encountering these things day to day, that natural frequency and way in which problems came up, did you have a worse understanding of those problems then compared to now, knowing what you do and having the practices you do, you're able to comprehend now? I don't know if that question made sense.", "Andy Matuschak 00:33:45", "No, that's a great question. Here's a fun thing. I was much better at what I was doing then than I am at what I'm doing now. That's pretty funny. It was just totally different. Let's talk about this a little bit. This feels very, very juicy for me. Most of what I was doing was engineering. Some of it was very difficult engineering, but mostly engineering, and mostly on things that were fairly well understood. I wasn't trying to decide what should be done, sometimes I was from a technical perspective, but certainly rarely from a product perspective. It was rarely a relevant question for me. I was a somewhat design minded engineer and I did a bunch of engineering and design-ish things on tasks which were set out for me. By the time I joined Apple I had been programming for a really long time, 13 years maybe, and programming in Apple's ecosystem for probably two to three thirds of that time. So everything was just really familiar. It was mostly flow all the time, every day. I was just in it. I knew the stuff that I needed to know. I was very well practiced. And the space didn't change that much. Most engineers at Apple most of the time are not pushing the frontier of what is known, like trying to discover. They're doing very difficult technical work, mostly applying things that they already know and understand quite well to problems which are usually not always pretty well understood. Memory was essential to me doing that job well, but I had already built most of it by the time I got there. I'd already built just tons of stuff for Apple's platform. I had to learn a lot of stuff. I learned a ton of stuff about the internals of those systems. But because I already had such a rich understanding, both of Apple's platforms and of computer science and engineering in general, I had this really rich network for stuff to slot into. Learning stuff is easier when you have other stuff to connect to. It's a nice principle. Metacognitive load on me was lighter because others were figuring out what we should be doing. Just like by contrast, now I'm doing research, I'm trying to discover things that are not known. I'm trying to make things that didn't exist. The hard questions that I answer are mostly, what should be done or what should I do? And that question is not just a technical one of how I should implement this feature that needs to get built, but what intervention on a reader should be taken? That requires synthesizing lots of different unfamiliar literature.", "Dwarkesh Patel 00:36:30", "There's two different threads I want to go on. Maybe I'll just mention the other one. This is also related to the thing we're talking about a few minutes ago with LLMs. Swanson-Linking . Swanson was just somebody who read the medical literature and he was just familiar with a lot of esoteric results. Different things would come up and he would be able to figure out what different things are connected. For example, he noticed in one case that headaches are linked to some other symptom and that other symptom is linked to magnesium deficiency. Apparently a whole bunch of people's headaches were solved once they were given magnesium supplements and he noticed that connection. Again, this is the kind of sort of combinatorial thing that you wouldn't notice otherwise.", "But on this subject itself, there's this natural way in which you're able to get up to speed in all the things that are happening at Apple. Is it possible and maybe advantageous to do similar kinds of things in other fields? For example, instead of doing an explicit space repetition system when you're trying to absorb material from books, you just read a cluster of books and hopefully things would just come up that are relevant to get in again. Or is there a value in having explicit practice of setting up cards and so on?", "Andy Matuschak 00:37:50", "Yeah, again the answer is going to be it depends. Maybe the most familiar example of what you're talking about is immersion learning a new language. Immersion learning is like a great thing and it's going to be more interesting and more effective than doing space repetition practice. It's going to be integrative. It's going to be socially based. So there's a bunch of stuff about social learning that's relevant. A problem though is that say you decide you want to learn Swahili today and you go down to the local Swahili community center and you're like, “ Cool, I'm going to immerse myself” Good luck. You can't even get started. So through this lens, explicit practice is a way to bootstrap yourself. All of the best pianists at sight reading that I knew in university played with churches. They were so good at sight reading because they had to show up every Sunday and they're playing a different thing. New hymn every Sunday. So this is immersion also. Over time, they're learning all these cadences and these things that are really common and whatever. But you can't show up and be the church pianist every Sunday in the first place if you don't already have some decent foundation. This is a bootstrapping argument. One role for explicit practice of this kind is to get yourself into a position where you can more naturalistically reinforce. But there are still going to be instances where naturalistic reinforcement isn't going to work. For example, the linking that you brought up, one issue for doctors is rare diagnosis. So if it's only going to be once every couple of years that you see a patient that's going to present with these symptoms, that's not going to be frequent enough to naturally reinforce your memory of that. You're going to need some out of band mechanism. And unfortunately, I think for many kinds of creative leaps and creative insights, that may be closer to the regime that we’re in.", "Dwarkesh Patel 00:39:50", "Yeah, that makes a lot of sense. Where in many fields, the things you're regularly doing is the thing you need to reinforce. It makes a lot of sense that if you're a researcher, the long tail of events that might come up is a thing, it might happen once every few months but the regularity is not a thing that matters, right? It's [unclear] on your work.", "(00:40:07) – Intellectual stamina", "Here's a question I actually have. When we were doing the quantum mechanics textbook, it was like three hours and afterwards, I was just exhausted. I was actually surprised that you went the entire three hours without interruption. Afterwards, I was packing up and you're like, “ Hey, I'm about to actually go to my piano lesson.” I was so confused at how you had the stamina to keep going. Is the stamina just inherent in you? Or is that something you did to develop?", "Andy Matuschak 00:40:39", "One of the things that I think is funny about stamina is first off, there's some kind of weird grass is always greener kind of situation where, I often feel struck by other people's stamina and feel like I have very little of it. I struggle with energy. I've actually written extensively about all my struggles with energy and ways of managing energy. I spent a lot of time thinking about it, managing the energy levels and structuring my day around it. So I think there is something where one often feels maybe lower stamina than one actually is because one misapprehends other's stamina. Okay, in that particular situation, how do I explain why three hours of studying, etc. First off, social. So if I were alone and studying that book for three hours, and I weren't effectively trying to perform for you Dwarkesh, it wouldn't have been nearly as energizing for me. And I definitely would have taken breaks. I still would have been able to go for three hours, I think. Part of the reason for that is that it's simply way less hard than things I normally do.", "In some sense, learning quantum mechanics should be much harder and it kind of is cognitively demanding in a lot of ways. It's much more cognitively demanding in kind of a direct way than what I actually do day to day. But it's much less demanding on what William James calls the Energies of Men , which is something like a life force that permits you to act according to your will or something like that. Maybe it's gumption, maybe it's willpower, maybe some people call it [unclear], these aren't all the same thing exactly. But sitting and staring at a page and deciding what you should do next on a research project is incredibly draining on that resource. The sitting and not knowing is the hardest thing that I do in my work. It's a wonderful vacation to be presented with, “ Oh great, somebody else is going to tell me what to do. This is great.”", "Dwarkesh Patel 00:42:40", "So although it might be less demanding than our usual work, it is definitely more demanding than the way in which I or most people approach textbooks or other material in the sense that, I would just read through and then once I get to the exercise, I'm like, “ let's see what I didn't understand. ” Whereas just the attention and the intensity to go through sentence by sentence and constantly being paying attention seems to be way more exhausting.", "Andy Matuschak 00:43:07", "Yeah, I mean, so this is sort of true. It's definitely the case that I will occasionally do some of this before bed reading, where I think “ Oh, let me just do a little bit more. ” and it's basically useless. But I want to make the case that there is a kind of pocket that you can fall into. Maybe you call it flow where the demandingness that you're bringing to bear is matched to your ability, the book is not overwhelming, you feel like you can make your way through it, and this is actually more engaging. I occasionally will find myself reading as an undemanding reader and finding my attention kind of slipping because I'm just not that attached to the text emotionally, I'm kind of reading dutifully, I'm like trying to get through it. That sometimes produces an adversarial aspect where the text is in my way or it's kind of something to be accomplished. And often I will find that I need to bring more gumption to bear to power through and make myself sit there and keep flipping the pages than I need if I actually just open my curiosity and attention and really start engaging the book.", "(00:44:27) – New media for learning (video, games, streaming)", "Dwarkesh Patel 00:44:27", "There are ideas that people have come up with for different pedagogical tools, which are mediums that give closer connection to the reader. One is, you have some sort of fiction account, where a concept is introduced and reinforced, or you have a video game with characters you care about. As far as I know, there isn't something that has really taken off using these sorts of new mediums. Why do you think that is? Is it just an inherent limitation of everything but text and lectures or people just haven't given it the right content and design?", "Andy Matuschak 00:45:00", "Yeah, I'm fascinated by this question. Let's see, I can say a few things about it. One is that I would argue that one medium has taken off in an absolutely enormous way and that’s video. People love video. People will watch Grant Sanderson spend an hour going through some explanation of an esoteric math problem, people who would never crack a Springer graduate textbook in mathematics or something like that. The issue is that they will not walk away from that interaction with much understanding but they're much more engaged. So that's cool. That's suggestive and it suggests the question, is there a version of that which actually produces detailed understanding? Maybe one approach to producing that might be like a game. My favorite example of this is The Witness by Jonathan Blow . Have you played The Witness?", "Dwarkesh Patel 00:45:52", "No.", "Andy Matuschak 00:45:54", "I think The Witness is an absolutely extraordinary work of art. It's a game that has no text, at least no text that's relevant to the game elements. In kind of classic Myst style, you wake up on an island, and figure out what's going on. And the game proceeds to explain to you, without using words, but just by shaping your environment, a series of extremely complex mechanics of a system that exists in this world. You learn a bunch of stuff and it gets to the point where it feels like you're in conversation with the game's designers. It's like, “ Ah, he's asking me to do this here. ” No one's asking you, right? There's no text, but you can feel that you are being asked. You perform some interaction in the environment and you feel that you have answered the game's response in kind. This is very, very interesting. It's like a medium of action. Some people have tried to make educational games, games that are explicitly about arithmetic or something, Jonathan Blow's game is not about that. It's the mechanics that you learn are they're about the environment. I don't think anybody has yet really succeeded in doing this about explicit subjects. There are, for instance, things like Kerbal Space Program . Maybe people learn some things about project management or orbital mechanics from that. Zachtronics has a bunch of games that are sort of about assembly language, roughly speaking. Maybe you can learn some things about that. The issue seems to be that games are ultimately in aesthetic form.", "The purpose of the game is to have an experience that feels a particular way. And so they're sort of serving a different purpose than Grant's videos or a text. Grant's videos are also serving a different purpose from the text. The text you might pick up because you're like, “ I want to be able to build a robot. ” So you pick up a textbook on robotics or something. And so is there something that you can pick up that's sort of like a game in so far as it's an active environment that you use in a similar situation to “ I want to learn to build a robot? ” Maybe kind of? We don't quite have those yet. We have some things that are kind of like that. I don't know if you've seen From Nand to Tetris . This is a very interesting project that's kind of along these lines. And what characterizes it, like games, is doing. It's active. So when I was asking all those questions of the book, that was active learning, active reading, Nand to Tetris is naturally active. So this is a course in which you kind of start with basically nothing. You start with memory and you build a virtual computer and build Tetris. You build a processor and stuff. The whole thing's active. The whole time you're making the computer grow. This is doing a similar job to the question asking that I was doing, except that you don't have to regulate all of that yourself. The regulation, the choice of what activity to do, is in the course, is in the structure of the material. I think some kind of mass medium that is like that is waiting to be created, but that can be applied in many, many circumstances. We have the non-mass medium version of it already and it's apprenticeship. If you want to be a good yoga teacher, you go hang out in yoga studios. If you want to be a good surfer, you go to the beach when the other surfers are there and you participate peripherally and you talk to them and you learn about their tactics. They might give you some feedback eventually and you'll start to participate less and less peripherally over time and eventually you'll be part of the community. This isn’t a mass medium. We can't print billion copies of it like we can with a book.", "Dwarkesh Patel 00:49:46", "What is the experience of watching George Hotz on the stream code up tiny grad? How does that compare to just being in an office with him? Because even if you're in an office with him, there would be constraints on his time and how much engagement there would be. Why isn’t video a scalable way to increase apprenticeship?", "Andy Matuschak 00:50:07", "I'm actually incredibly excited about streaming as a medium for this. We're gesturing at a particular kind of learning that needs to happen. It's often called tacit knowledge. One of the things that you have to learn to do as an engineer is to learn to deal with 100,000 different weird situations where something is not behaving the right way. Eventually you learn pattern recognition, you learn ways of dealing with this. Much of this is not described in any book. It's not explicitly taught. You just learn it by doing it over a long period of time. By watching George do it, I think that people do absorb stuff. They can absorb some of that knowledge. That's part of how apprentices absorb that knowledge. There's a few things that are missing. You're not getting feedback. There's a whole lot of chaff there. There's a whole lot of stuff that probably isn't all that meaningful. It's also true for apprentices. I'm pretty excited about streaming videos. I've complained loudly that there aren't more designer streamers.", "One of the things that I think is really interesting is that we have some disciplines like programming where there are a million books on courses about how to learn to program. They don't give you everything you need. There's this tacit knowledge stuff that you need to develop. If you work through these courses, if you go through the MIT OpenCourseWare for computer science, you'll be able to build some stuff and you'll be able to lift yourself up. This is not true in all domains. In particular, design, but lots of other domains that are like that, like musical composition, architecture, something like this. Nope. It's normally done in studio classes. Lots and lots of hands-on feedback. The feedback is highly contingent. It's highly contextual. We just haven't figured out how to communicate this. It's good to see lots of programmer streamers, but I really want to see the streamers in these other domains.", "Dwarkesh Patel 00:52:10", "On the point about more programming books. Ironically, the reason why there's some more resources on programming is that it's just so legible, but it already makes it easier to understand in the first place. You just have this reinforcement. Nand to Tetris is like a video game analog to learning, maybe not just programming, but how things in the internals of a computer work but programming has an element where it already feels like a video game. I have a friend who has a sort of intense sort of manic energy, and he used to be addicted to video games when he was a teenager, and now he just stays up all night in these coding binges. It's just the same part of the brain. Are you optimistic about things like video games and fiction being able to work and feel as though they're not already kind of like a video game, like programming?", "Andy Matuschak 00:52:56", "I think what makes programming feel like a video game is this sense of instantaneousness, this sense of direct contact with the environment. You're learning about a conceptual world, but that world is right underneath your hands, and as you manipulate it, you're constantly getting this feedback, the red squiggly lines, you’re pressing command R regularly, and you're seeing it fail, and that feels great. There's this feeling that's very common for programmers, and it's laden with doom. The feeling is it's like 9 p.m., and you've been working on a thing all day, and it's almost working. It's almost working. And you know, if you just debug this one thing, then your project will be done, and you'll be able to go to, so you're like, “ Well, I'll just stay up and I'll debug this one last thing. ” And then you start debugging it, and you get it, and you solve it, and that feels great then immediately you run into one more thing, like, “ Oh, it's almost running all the way through, it's almost going end to end, ” and you're like, “ Well, I'll just stay up a little bit longer. ” Before you know it, it's 2 a.m. You keep going because it feels so good. You feel the sense of forward progress. You're not just staring at a wall. For the programming problems where you are at a brick wall, it doesn't feel like this. It feels bad. Can every field be transformed into something where you can feel the sense of forward progress? You can get this rapid feedback cycle. I think that's really hard. It's not clear to me that some fields can be transformed in that way. I think we can get a lot closer in most cases than we're at right now.", "What's hard about designing a user interface is that often there's this feeling of exploring a combinatorial search space. Programming often feels like a search problem too. You have a sense that there's some right way to solve the problem. There might be some set of right ways to solve the problem, and you're looking for it. And you have some heuristics that guide you to, like, “ Oh, this might be a dynamic programming problem! ”, or this might be something that is solved well by separating concerns or something like that. Design often feels less like that. You have those heuristics, too. You have those patterns, too. Often it just feels like, “ Nope, I just need to try 300 things. ” The core action of Figma is to duplicate. You have an artboard, you tried something, and that didn't work so you select it, and you press Command D. And what you end up with, and when you look at Design Twitter, it's just all these screenshots of people's Figmas with a million artboards. They're just trying stuff. And you don't have this feeling, or at least I don't, and I think many designers don't have this feeling of progress. You're just kind of exploring parts of the search space, and you're learning that parts of the search space don't work. And eventually you stumble on one that does, but you don't have this feeling of getting closer often. Often there will be like weeks that go by without feeling like you're getting closer, because what you're doing is just kind of like narrowing the search space.", "Dwarkesh Patel 00:56:22", "Interesting. Although there are people who are obsessed with Design. What is the sort of loop that keeps them obsessed with a process that doesn't feel intrinsically forward feeding?", "Andy Matuschak 00:56:34", "So to some extent, I think they are skillful. The people that I know who are like this, it's a combination that they’re often skillful and the nature of the problems that they're solving are highly tractable. An example of a kind of thing that designers will often rabbit hole into is designing a poster. It actually often used to be kind of a cliche that at Facebook, there were all these posters up on the wall of the office. Very, very elaborate, beautifully designed posters for a talk that someone was coming to give at Facebook. Why did somebody put all this effort into it? Well, it feels really good, because a poster is really constrained, it’s finite, it's ephemeral. You can start it and yeah, there's a search space, but you can find a decent part of the search space pretty rapidly. And once you're there, there's this beautiful and very enticing feeling of turning the crank and like making it better and polishing it and trying this or that. But when you're trying this or that, like, all of the options are kind of okay. And you're kind of trying them out of curiosity, or like maybe it can be even better. And that's very different from the kind of design where you're just like, “ I simply don't know how to do this. ” And I think it's part of why those designers loved making those posters. It's a snack. It's a treat. It's also something they get to control whereas ordinarily, they don't.", "Dwarkesh Patel 00:58:08", "Yeah, just don't tell the manager how many software engineering hours were used up in the poster designing at Facebook.", "Andy Matuschak 00:58:19", "Well, no software engineering. It's only designers. But for the software engineers, code golf is the equivalent, right?", "Dwarkesh Patel 00:58:25", "What is code golf?", "Andy Matuschak 00:58:27", "You know, in golf, you try to get the lowest score. So code golf, you try to solve the problem as minimally as possible. Like, “ Ah. I don't need this. I can combine this. I can do it in three lines. If I use Haskell, I can do it in one line. ” That's a kind of thing programmers do that's like this. But just endless refactoring is another thing that's kind of like this. You have the thing working, but it could be more beautiful.", "(00:58:51) – Schools are designed for the median student", "Dwarkesh Patel 00:58:51", "Right. So it seems like the tools and the ideas you're developing seem especially geared towards very intelligent and very motivated students. If they would be different, what would the tools that you would develop for a median student in the education system look like? Both in motivation and in other traits?", "Andy Matuschak 00:59:14", "Yeah, they'd be super different. I kind of got out of the educational space in part because I don't like the framing of this problem. For the median student, the education system mostly wants to make the student do things they don't want to do. It's not about helping them achieve their goals more easily or more effectively. For the most part, it's about achieving goals that aren't theirs. Obviously, that's not always true. But for the median student, it kind of is true. When I was at Khan Academy I was kind of thinking about this problem. At Khan Academy, we were mostly thinking about not just the median learner, but like maybe the 25th percentile learner. One of the angles that felt most relevant, maybe not from an efficacy perspective, but for me, from like a breaking out of this, getting them to follow goals that aren't their own perspective, was to focus on inquiry learning and to focus on transforming the learning experience into something that actually is related to their goals. That is, we're asking questions that are authentically interesting, that they authentically want to answer, and that they can participate in in a way that feels natural. We did a lot of experiments with dynamic media, representations of things. The idea being that, you've probably seen these like plastic blocks or things that people can play with when they're kids to get an idea of numbers and number systems. Kids will play with these things unprompted because they're fun. It's just a pleasure to handle them. It's a pleasure to manipulate them. When you have them in hand, it's very natural to suggest, “ Ah, can you make a pattern like this? Why can't you seem to make patterns like that? Why is that? ”", "Cuisenaire rods is the name for a set of 10 rods that have basically unit length 1 to 10, and they're all different colors. You can do things like take the rod that represents 8, and put 2 of the rods that represent 4 up next to it, and show that this one you can divide into 2 rods effectively. But then if you take 7 there is no other pair of rods, for the same color, you can put it next to it. So, you get these different patterns and things kind of naturally suggest themselves by experimenting with these materials and having conversations with people around these materials.", "One of the things we were interested in was, are there things that are like that that are more advanced topics? Can we create something that's kind of like those rods, but that is about a more advanced topic in math or about debates in history or something like that? One of our tactics was to lean heavily on social interaction. People like talking about stuff with people, if it's a real conversation. For the same reason that I had to use less willpower to study that quantum mechanics text, because you were there with me, a student who's engaged in a real activity with a peer will need less willpower as well. They'll also learn from their peers if you structure things right. Social learning becomes interesting. But I think at a high level, I mostly have abandoned this question to others. Basically everyone in the educational space, this isn't totally true, but like 90+% of people in the educational space are focused on the bottom quartile, not even the median. And there's a good reason for this. Many people who are in education are motivated by arguments of equity and opportunity. They want everybody to have the opportunities they had. They're very motivated by the injustices that they see in the differing access and the differing support that different people have. And they're very motivated by the very real disadvantages that accrue to the bottom quartile performing students. It's also true that the marginal impact that you'll have in that student's life will be much greater, probably, than the marginal impact on say an 80th percentile performing student or so the argument goes, like that student will be fine, which is probably true.", "Dwarkesh Patel 01:03:57", "But there's a big marginal difference between fine and supercharged.", "Andy Matuschak 01:04:04", "Yeah, that's true. Anyway, I say all this to say that I understand why the vast majority of people in education are focused on what they're focused on. And I think it is good. And I'm glad they're doing it. I have mostly decided to let them do that. I’d focus elsewhere.", "Dwarkesh Patel 01:04:22", "Yeah. No, I see tremendous value in focusing on the cool new shit that's coming out, where's that coming from? And what's the way to increase that? It's interesting to know that the same tools might not just work across the spectrum.", "Andy Matuschak 01:04:38", "Yeah. Part of the trouble here is that the cool shit is very likely to come from students who are performing at the 20th percentile in school, because they're disaffected and bored and none of this stuff matters to them, right? Part of the trouble here is that by opting out of helping these people learn, there are all kinds of interesting inventions that could probably occur that aren't occurring. So I don't quite know how to contend with that. I guess basically I'm trying to bite off a piece of a problem that feels maybe tractable.", "(01:05:12) – Is learning inherently miserable?", "Dwarkesh Patel 01:05:12", "Once all the tools are built, when you're at the end of your career, is the learning process supposed to feel fun? Or does it have to feel fun? Is there an element of even when all the tools are there, that there's just like a level of David Goggins, this is going to be miserable, but I've decided to learn this in this way and I just had to go through it.", "Andy Matuschak 01:05:33", "Where does misery come from? I'm asking this honestly, not really rhetorically. Let me try to answer my own question. Let me say first off that I am, broadly speaking, very opposed to what I understand to be David Gogginsesque attitude towards almost anything. In this particular instance, I think what I think is something like, if I ask why is it miserable to learn a particular subject? The answers that come to mind are things like, first off, I don't care about this subject. And I think that's not what we're talking about. You're asking about a world in which these great tools exist and someone's using one of these tools to try to do something they really care about. So another reason why it could be miserable that I think is pretty common is that you have some idea about, you're not going fast enough, or you're failing, or you're struggling, and the misery comes from resisting that. It comes from feeling like you're doing poorly and you shouldn't be doing poorly, it's bad that you're doing poorly. And maybe you're feeling fearful that others are going to judge you or you don't have enough time or something like that. And I think that's basically like an emotional problem that needs to get healed, rather than like a practical problem with learning. In the case of something like organic chemistry, where you truly do just need to learn 200 names or something. One answer is that it can be done very cheaply using modern memory systems. Organic chemistry students suffer through this and they don't need to. But even with modern memory systems, you're probably going to spend a total of 100 or so minutes across some weeks, studying all of these formulae. That still is unpleasant so can that be resolved? And I think the answer is yes, actually. I was thinking about this in the context of the Cell Biology By The Numbers book I was telling you about where there's all of these things like the volume of the nucleotide is a nanoliter. To study the flashcard “ What's the volume of a nucleotide? ” is not terribly pleasant. I'm not sure it constitutes suffering exactly. It's fine. I'll do it while waiting in line. But I think there is a better version of that, which is like solving an interesting Fermi problem which involves that term. So something like, if I have a vial of the COVID vaccine, how many copies of the COVID RNA are likely to actually be in it if the vial is a milliliter large? That's a fun little question and I can enjoy sitting and noodling on that. And in doing so, I will need to retrieve the volume of the nucleotide to help me make that approximation. So I think there's moves like that you can use to paper over any remaining stuff that feels kind of necessarily unpleasant or rote.", "Dwarkesh Patel 01:08:30", "I'm actually surprised to hear you say that because one way in which I read some of your stuff is that this is actually a way of endorsing the traditional way of thinking about education, but using new tools to get the traditional ones. To give you an example of what I'm talking about, you go back to a headmaster from the 1900s, and you say, is it important to have the taxonomy of a subject memorized? You say, of course it is, that's why you’re going to spend a year memorizing the taxonomy. And then you would say memorizing is actually important so that you have a dictionary by which to proceed on the subject. So in those ways you have new systems for doing that same kind of thing. And the reason in this particular case, I was expecting you to say, No, you have to be disciplined if you decided to learn something. I expected that in the case of the three hours of intense learning followed by an intense piano session, you were just really tired at the end and you're like, “But no, this is something I have to do this evening.” So yes, I'm actually surprised to hear you say that.", "Andy Matuschak 01:09:33", "Yeah, no, I really enjoy this tension. I'm probably reacting to the Goggins reference with a bit of an over extreme overcorrection or something but this really is how I feel. And I feel this tension all the time. The histories in educational psychology that I'm most aligned with are the most robotic, authoritarian kind of histories, and also the ones that are most kind of unschooling and Montessoriesque. I really have a ton of sympathy for elements of both of these directions and there's kind of a weird synthesis of this in my head that I can't fully externalize. I guess part of what I'm saying is aspirational. It certainly is the case that I do in practice use willpower to make things happen. Just as an example of something totally contrary to everything I was saying, I use a tool called Beeminder , which charges me if I don't do certain things. This sounds kind of military, but it's certainly more authoritarian than this kind of freewheeling butterflies kind of gesture I was making a moment ago. And I use it to make sure that I do my memory practice. Shouldn't my memory practice be so joyful? It's at the center of my research, right? It should be the most interesting, exciting part of my day, but often it's not. And so I use this to do it anyway. So there's some tension here.", "I think I do want to say the reason why I'm willing to endorse this headmaster's view about the taxonomy has to do with the price. I did a bunch of memorization in high school, and it was very inefficient, and it was very uncertain. It was emotionally difficult because I wouldn't even feel confident that I'd learned the stuff. I didn't know what it was to learn something reliably, to be confident that I'd be able to recall it. And it was hugely time consuming because I didn't have techniques or tools. And now, part of why I respond so favorably to just learning the taxonomy is that, for me, it's just trivial. Like, yeah, sure, whatever, throw it into the deck. It'll consume a total of 15 minutes over the next few weeks and then I'll know it. It just doesn't cost anything. Other things in learning do still have real costs and those are maybe more difficult to negotiate.", "(01:11:57) – How Andy would structure his kids’ education", "Dwarkesh Patel 01:11:57", "Actually, this is maybe a good place to ask you about unschooling and your attitude towards it. Somebody on Twitter had this question, how are you structuring the education of your kids as they're growing up?", "Andy Matuschak 01:12:10", "Well, okay, so to be clear, I don't have kids.", "Dwarkesh Patel 01:12:14", "Right. Hypothetical kids.", "Andy Matuschak 01:12:17", "So yeah, so you're gonna hear the foolish response of a person talking about what one would do hypothetically. This is very difficult. The school, of course, has many purposes other than instructional, right? It has a social purpose, it has a societal purpose, it has a behavioral purpose, and it also has like a pragmatic purpose of basically babysitting. Those things can be unbundled. I think it's pretty interesting to consider that. If I actually did have a kid, I would probably consider that project pretty thoroughly. I think it's pretty likely that some kind of homeschooling situation would occur. It probably wouldn't be [unclear] the teacher, but it would probably be the people I would hire. I have some resources. I'm not wealthy but I have some resources, that is maybe a difference. During the pandemic, I was struck by a company called Schoolhouse, which is now defunct, started by Brian Toball. The idea was that he noticed that people were getting together in pods, right? That was the thing we did during the pandemic. And in particular, they got together in pods with their classmates from school, maybe five or six kids. And some of these pods started hiring elementary school teachers who were not working because of the pandemic. And the elementary school teachers would come to the backyard of one of these people's houses, and the five or six kids would get together with the elementary school teacher, and they do stuff all day. Buying this one teacher's time split five or six ways was actually really very tractable. Say you want to pay the person $50 an hour, maybe that seems reasonable for a teacher, this is not that hard to do, and actually costs substantially less than a private school. I think Schoolhouse costs something like a fifth or whatever, the cost of an elementary school. Once you get to older grades you may need specialists. It's actually not clear if you do. My friend, Alec Resnick , is working on a very interesting school called Powder House in Somerville, Massachusetts, that does something like the model I just described, where you have adults who are in more of like a coaching role, and they aren't necessarily domain specialists, but they'll connect people with domain specialists. Anyway, I would explore something like that model. I'm sorry, this is a little bit vague. If you want to ask about something specific...", "Dwarkesh Patel 01:14:38", "Sure, let me ask you a more specific question. This child grows up and is now 12. At this point, you have taught arithmetic, reading and everything. Do you proceed and say you have to learn your biology, you have to learn chemistry, or do you just say, what are you interested in? Are you interested in Roman history? Oh, let's learn about the aqueducts. Or is there an actual curriculum that proceeds until they get to college?", "Andy Matuschak 01:14:59", "Yeah, this is really challenging. One of the heroes of the reform school movement is this philosopher named John Dewey, and he has a lovely book called Experience and Education , sort of written near the end of his time, looking back on all of his efforts to reform schooling in a kind of unschooling-ish direction. He was never as extreme as that, but broadly looking for freedom on the child's part. And he makes this wonderful argument that because these kids don’t have a fully developed prefrontal cortex, certainly don’t have a fully developed kind of sense of self, to let them do whatever it is that their whim commands them to do in any given moment is actually not freedom, but rather is chaining them to whatever that impulse is. It makes them the subject of these tides of impulse. And I think that's a pretty compelling argument. It doesn't authorize tyranny, but it also suggests that, you know, you got to be a little bit skeptical about the planning or the plans of 12-year-olds, I guess. How skeptical should one be? I don't know. I think I would probably have stronger opinions on that if I had a 12-year-old. But my instinct as a foolish non-parent would be something of a mix. I would be interested in exposing the 12-year-old to lots of topics and possibilities. I would be voluble in expressing the consequences of any particular actions. Like if they just want to compose music all day, we could talk about like, well, what does that mean? What kind of life does that look like? I would try to be non-coercive in this as much as possible. And I think to some extent, the child should be allowed to feel the consequences of their choices. This is complicated by the fact that, again, I'm not wealthy, but any child of mine would have chances. If they made some weird choice about a career path when they're 13 and so they didn't get into Harvard or whatever, that would be okay. They could be 24 and finally figure it out then, or 32 and finally figure it out then. It would probably turn out fine. This doesn't seem like reliable guidance. You should notice I'm feeling very confused about it.", "Dwarkesh Patel 01:17:36", "Yeah, no worries. So one question I have is historically, and maybe even to the modern day, it seems like improving education has been a very intractable problem. And you did reference this earlier when we were talking about gearing towards median student versus whatever percentile you're working with. But I don't know. Do you feel like there's been progress even in the percentile you're gearing your stuff towards? And if not, what is the explanation for the relative stasis? I mean, this is something you talked about. We have so many new tools with IT. What explains the broader sort of stagnation here?", "Andy Matuschak 01:18:09", "The funny answer to your question is actually there's been a ton of progress. Actually, things are pretty good. I think the stat is that in 1900, 6% of teenagers graduated high school in the US. Now, that doesn't mean that 94% didn't have an education that we would regard as a high school education, but it roughly means that. Now, these people are homeschooled. It's also the case that a high school education meant something lesser back then. A substantial fraction of high schoolers now study AP courses and complete them in high school. That's at the high end. On the low end, illiteracy was a very live situation 100 years ago in the US and is emphatically not now. Now it is the case that something like 10 to 15% of adults, depending on which polls you use, maybe would struggle to perform simple kinds of number manipulation or reading or writing kind of tasks, but our bar is basically moved. It used to be like, can you read it all? These tasks are maybe a little artificial. They're maybe not relevant to their day to day, and that's actually why they're experiencing this. The number of people, the fraction of the population who graduates at 17 or something, knowing a particular amount of stuff has basically moved up monotonically. And this is mostly about the bottom portion of the population. It used to be the majority were effectively uneducated past age 10 or something other than informally and in their trade. And really the story of the 20th century has been in part one of mass education. Part of why we have a service economy, an IT economy, is that basically all of our population is educated to a particular level. If you look at the national tests of fourth, eighth and 12th grade math and language proficiency, you'll see really pretty slow movement in the 75th percentile and practically none at all in recent decades. But you'll see absolutely enormous movement in the bottom quartile. And so in some sense the story, especially the last 20, 30 years, has been closing what's often called the performance of achievement gap where certain groups, part of underfunded schools, or who might have households that are unsupportive, or difficult, were just not having anything like the educational attainment of their peers. And that story has changed.", "Dwarkesh Patel 01:21:02", "One thing I'm curious about is that every other part of the distribution has been moved upwards. Has the ceiling been raised significantly?", "Andy Matuschak 01:21:10", "Depends on what we mean by the ceiling.", "Dwarkesh Patel 01:21:14", "Because you can go back like hundreds of years and the most learned people around. It's just incredible you look back on how many books Thomas Jefferson read. There's some story where Kennedy was hosting a bunch of Nobel laureates in the White House in 1963 or something. And he says, this is the greatest collection of genius and insight and wisdom that has been collected into this room ever since the time that Thomas Jefferson dined alone.", "Andy Matuschak 01:21:48", "Right. I think it's very hard to raise the ceiling. The ceiling has aristocratic tutors. The ceiling has whatever family dynamics and heritable propensities produce tremendous intellectual greatness. Early 20th century schools produced von Neumann. Right. And it's certainly not at all clear that they're now producing more von Neumanns or something like that.In fact, von Neumann's productions seem to have probably very little to do with any kind of mass schooling that we would recognize. As far as the very top, I think that's difficult. We're talking about an institution that was created for the masses. I guess there have always been people who have been using resources outside of those kinds of systems. So the mass system doesn't seem to help those people, I guess. That doesn't seem surprising.", "Dwarkesh Patel 01:22:45", "By the way, on the von Neumann thing. Okay, a mass system doesn't help them. What is the production function for a von Neumann?", "Andy Matuschak 01:22:53", "Yeah, so lots of people have studied this. I actually am not a student of Von Neumann's history. I know that many of his peers, the 20th century greats, got something like aristocratic tutoring or came from small Eastern European incredible schools that there's stories about these things. I actually don't know them. I'm sorry.", "Dwarkesh Patel 01:23:14", "I mean, I’m sure you’ve heard about that one high school.", "Andy Matuschak 01:23:18", "Yes. Yes.", "Dwarkesh Patel 01:23:20", "Okay, interesting. Are we getting worse at the von Neumann production or is it just static?", "Andy Matuschak 01:23:26", "Well, maybe. I don't know. So let's see. Here's the theory that seems kind of plausible. If someone was going to have aristocratic tutors in the late 19th century, would they now go to a fancy private school and would that experience now actually be less good for them? I don't know. I think it's probably more likely that they'd go to the fancy private school and also still have fancy tutors and then go to a very exclusive university where they're going to get a bunch of highly hands-on kind of interaction with professors.", "Dwarkesh Patel 01:23:58", "Although the reason that might not be the case is the opportunity costs for people who might become teachers or aristocratic tutors is much higher now, whereas the kind of person who would be, your tutor can now directly be making lots of money on Silicon Valley or Wall Street.", "Andy Matuschak 01:24:13", "That's interesting. Okay, so that would be an argument that maybe it's not so much about the 20th century that we've gotten worse about this, but more like over history. Maybe Aristotle was a tutor to Alexander the Great, and now Aristotle would be like a full professor and wouldn't need to take that job. That might be so. It may be the case that some tutors have been priced out of the market, but it's not clear to me that the most expensive tutors actually would be the best. There is a bunch of empirical research on tutoring, and one of the questions they ask is, what kind of experience level do the tutors need to have? And it's interesting, how far you get in tutoring efficacy when the tutor doesn't necessarily know anything. Just having another warm body there actually contributes a very large effect. I mean, things get better as you get an expert. And I also have a kind of healthy skepticism of these studies. I think part of the role of having Aristotle as a tutor is communicating a worldview. It's not something that would show up on a test or something that these studies would be measuring. And so having an extremely inspiring individual might actually be the important component, and inspiring is going to be highly correlated with expensive, I think. Not necessarily, I don't know. That feels complicated.", "Dwarkesh Patel 01:25:32", "I mean, especially today, the material is available. What the tutor is bringing is the inspiration and the motivation. Not exclusively but one of the large parts.", "Andy Matuschak 01:25:41", "That's right. They're not really responsible for instruction. I'll also say that I know lots of people who have postdoc tutors right now. These people, as graduate students, they're very pleased often to have a $60 an hour tutoring kind of commission. And that's a little sad. But, you know, the pool of available postdocs to hire as tutors is very large now compared to how it would have been 100 years ago. The pool being bigger doesn't mean that the top 1% are getting more though. So I think that's undecided. There is a question of, have teachers gotten better at their jobs over the last 50 years? And there are some ways in which maybe they have. There's been a bunch of projects of trying to disseminate certain research results, ways of instruction that are more effective in other ways. For instance it's better to interleave stuff than doing blocked units where it's like, “ Okay, like, we're going to talk about the Civil War, and then we're going to talk about women's suffrage. ” It's somewhat far apart but it's better to kind of weave these things into each other, not just in history, but in general. So that kind of dissemination has been happening more systematically in the last few decades. I'm unaware of any kind of studies or results trying to establish anything about the efficacy of teachers now versus long ago.", "Dwarkesh Patel 01:27:10", "Well, I'm sure you've seen the claim that one of the consequences of the very unforeseen circumstance of the mid-20th century, was that one of the very few occupations an intelligent woman could pursue was teaching. And now that other options are available, which is obviously hugely good, you know, there's other competition for the same very intelligent woman.", "Andy Matuschak 01:27:34", "Oh, that's interesting. I haven't heard that claim. Yeah, I'd have to think about it. I guess it's not clear to me how much intelligence matters. If you want to think of that as some kind of separable quantity.", "Dwarkesh Patel 01:27:48", "Or whatever trade is relevant to just that. You just had a population that was hostage to either housework or teaching.", "Andy Matuschak 01:27:56", "I guess what I'm saying is something like, if that were true, and there are like a bunch of people who are now astrophysicists or something, it's not clear to me actually that they would have been good teachers. Being a good teacher is often about empathy and effective communication and care. It's very personal. It's very intimate. You need to understand the subject but to teach a 15 year old or something, you actually don't need to understand it at a postgraduate level necessarily. It's very interesting to see that there's a bunch of studies of the impact of domain knowledge on teaching efficacy. I've read some in math, I'm sure they exist in all fields. And one of the things that comes up is like, if you aren't very familiar or comfortable in math, then you will struggle specifically to do inquiry oriented classes, classes that are more about creative ways of thinking with math, or open ended problems, as opposed to like here's how to do this algorithm. Because to conduct those kinds of classes, you have to be able to think on your feet. You pose a difficult question to which there may not be just one appropriate answer and your students will throw all kinds of stuff at you. And you have to be able to take that stuff and integrate it and show how one student's answer relates to another student's answer and show how those conceptions can be built upon in order to produce some useful understanding for what you had in mind. Anyway, this kind of improvisation requires a mathematical familiarity and ability. But I don't think it requires anything like extraordinary ability.", "Dwarkesh Patel 01:29:39", "Yeah, but more than the extraordinary have been pulled out of teaching as a consequence.", "Andy Matuschak 01:29:43", "Yeah, I guess I'm just wondering what the correlation is. If it's the case that actually effective teaching is mostly about empathy, then maybe it's anti-correlated. Like the people who are going to be good particle physicists are actually like they wouldn't make good teachers anyway. Maybe.", "(01:30:00) - The usefulness of hypertext", "Dwarkesh Patel 01:30:00", "Interesting. Why hasn't hypertext changed how people write more? Often I write a blog post and I actually do wonder how much different it is with the knowledge that I can add footnotes and I can link to things. I'm actually kind of a fan of how Wikipedia organizes content. It is genuinely surprising how often the best explanation of a subject is just this resource that is trying to explain every single subject. Because there's this practice of you don't need to do exposition in every single topic. You can just hide it behind links and things like that. Anyway, so why hasn't hypertext changed online writing more?", "Andy Matuschak 01:30:34", "This is a really good question. I think the reason why Wikipedia works as well as it does is that encyclopedia entries are already forced to stand on their own. And that was true before hypertext existed. In fact, encyclopedias were already hypertext-ish before there was hypertext. There are some other interesting kinds of hypertext that existed pre-computers. There is this very interesting book called The Syntopicon from Adler. If you wanted to understand what classical authors had to say about a topic like the father's responsibility to a daughter, you can look that up in The Syntopicon and you will get references across Rousseau, through the Bible, and so on and so forth. And those are kind of hyperlinks. They were printed on dead trees, but you're expected to get the books down and look up the appropriate pages. The Syntopicon wasn't that successful. I think it's in part because those concepts, unlike the Wikipedia entries, don't quite stand on their own so cleanly. You kind of need sinews, you need linkages. And actually, I want to make the case that while Wikipedia is an astounding resource, I find it rarely to be the best available introduction or explanation of a topic. I find it often to be like a good jumping off point. It'll help me know the right thing to ask about. It's good as a reference. Hypertext is a very effective navigational aid. It can help you get to a spot that you're looking for very quickly because it's about automating flipping through pages. And so for a reference, it's very effective. If what you have is a book of chemical compounds and their properties, hypertext is going to let you navigate that book very effectively. Likewise, dictionaries have been revolutionized by hypertext. Navigating around the sources by clicking on links to say like, “ Oh, shade it a little bit more like that. ” It's like a much better thesaurus. I guess I'm making the case that there are certain kinds of texts that are more amenable to hypertext, because they are more amenable to having the reader dropped in the middle of them. Encyclopedias are like that, dictionaries are like that. Most texts are not like that and most concepts are not like that. I guess most ideas are embedded in something kind of holistic or richer. They require a narrative arc. They're difficult to excerpt. Not everything, but things that are not so raw and atomically informational. There were all these dreams of hypertext novels, for instance, and some people wrote them. And one of the problems that a hypertext novel has, and it actually can be seen in a choose your own adventure book that existed before there was digital hypertext, that the author is forced to write something like a lowest common denominator story, the page that is the destination of a hyperlink, it has to work as the endpoint of all of its reference. And so it can't establish any kind of coherent or consistent arc, unless there's a kind of sameness to all of the reference. And the more that there's sameness to the reference, the less useful hypertext is. So a lot of people have been disappointed by this conclusion. I, among them. I'll say that I do find hypertext very useful in my own notes, not really for reading. I actually don't think it makes for a very good reading experience for others.", "Dwarkesh Patel 01:34:15", "Having been a reader, you have a separate webpage where you have your working notes. It is a very cool UI to explore your thoughts.", "Andy Matuschak 01:34:24", "Thanks. It does an interesting thing for me as a writer. It lets me build stuff up over time. Today, I was reading this very old cognitive psychology paper on the topic of adjunct questions, which we discussed earlier, the effects of asking questions while you read, not on remembering the information covered in the questions, but on the general effect it has on stuff that isn't touched by the questions. I have some notes on the design decisions of the mnemonic medium, this Quantum Country thing that I was talking about earlier, interleaving the questions into the text. Those notes are kind of partial. They evolve over time. What was the impact of doing this? My notes about that, they've come from interviews with readers. They expand when I read a paper that's relevant to them. It means that when I go to design the next system, and I'm thinking about the role of questions in text, I'll have a place to look. The role of hypertext is roughly a navigational aid. It's possible to do this without hypertext. You’d just end up with what Neumann had, a giant dresser-like thing, but made of card files rather than drawers for clothing.", "Dwarkesh Patel 01:35:50", "This actually goes back nicely to the original conversation we had about why people like Tyler are able to integrate so much information without an explicit note-taking system. Another person who comes to mind immediately is Byrne Hobart. Again, you have an example of somebody who is extremely prolific and writes a tremendously detailed and insightful daily financial newsletter . It's a daily note-taking practice in some sense.", "Andy Matuschak 01:36:18", "Nothing quite accumulates for either of them, at least not in the same way. It's very interesting. They're doing the whole thing over again every day. One thing I find interesting about Levine's newsletter is that when he's talking about a topic repeatedly, like the recent bank collapse or something, he will have to explain some concept like interest rate risk over and over again for days. Every day he has to explain it, but every day he explains it anew, and every day the explanation is colored a little bit by that day.", "This is an argument against the kind of note-taking that I do. It's an argument for ephemerality, for recreating the thing every day, because it will change and it will become inflected by what you're thinking about and your experiences. It's pretty interesting. I find myself doing a mix these days. I have a journal that's about today, and I'll do a bunch of writing. Often I'm recapitulating stuff I've written before, and I have other things that are trying to be more durable, and be a useful reference that can stand outside of time. The combination feels useful. I don't yet have a clearer model of when one is better than the other.", "Dwarkesh Patel 01:37:37", "An interesting way to tie in what you just said with the hypertext: Byrne’s newsletter doesn't give that much context. Often you'll find yourself lost about the concept being talked about if you're not familiar with the topic. I asked him at some point, have you considered doing narrations of your blog post? Scott Alexander has somebody who has a podcast where they narrate his blog posts. He said, “ I don't think it would work out as well for mine, because I heavily rely on the old blogosphere ’ s norms around hypertext, where you can add jokes and sarcasm. ” One example of this is his write up about SBF and his collapse. It has a bunch of links - if you want to learn more about margin calls, read this. And he goes, and if you want to learn more about the psychology of utilitarian bets, read this, and it's just a link to the Amazon page of Crime and Punishment. That kind of stuff is harder to do.", "Andy Matuschak 01:38:39", "Yeah, you're right. He's leaning more on his past explanations, which is interesting, because he can't update them. That format of writing a newsletter and then linking it to past newsletters, or as you say, the former blogosphere thing to do, you have a series of six words and each word is linked to a previous post on the topic. I certainly have written stuff like that. It's kind of funny. It's approximating the durable note thing I was writing about, but without the ability to revise it over time. Maybe for many topics, you don't need that ability. I wonder now what fraction of my notes are in the state they were when I did my first major revision of them. It's probably at least a third, it might be more than half.", "Dwarkesh Patel 01:39:24", "What percentage of notes have you published?", "Andy Matuschak 01:39:27", "By word count? By note? I don't know. For instance, my journal notes are not published. There's one of those every day so there's a lot of them. If we're looking by note, we're excluding all of those. I also have a note about all of the people in my life and those are not public, unless they're public individuals. There's a lot of notes that are not public, but they're mostly not durable. They wouldn't be all that meaningful to others. The journals might be, but they're also intimate.", "Dwarkesh Patel 01:40:01", "Are they written in a way that would be intelligible to somebody else?", "Andy Matuschak 01:40:05", "It depends. Usually my journals are complete sentences, complete paragraphs. Sometimes bullets, sometimes veering and breaking and changing to new subjects suddenly. They tend to be filled with links to the things that I'm talking about, in part because I'm trying to accumulate context in those things.", "Dwarkesh Patel 01:40:27", "How come they're not just shorthand?", "Andy Matuschak 01:40:31", "It's partially because past me is another person, it's kind of a cliche. I am routinely looking at journal entries from a year ago. You could view that as a failure of this note writing system. In some ideal sense, I shouldn't be looking at these journal entries, because if something's important, and it's going to be something I refer to a year later, it should be in some durable evergreen note. I don't know. You don't always want to do that. It feels like prepping. Maybe there's an amount of prepping that's good. We live in California and maybe everybody should have an earthquake kit. Maybe that's good, but maybe you don't need to hoard 300 cans of beans. There's an amount of prepping that feels like a reasonable amount to do and there's an amount that feels kind of dutiful and unpleasant.", "(01:41:22) – How computer tools enable iteration", "Dwarkesh Patel 01:41:22", "As a researcher, who is in the Silicon Valley circles, what is your opinion on the startup advice “ Do things fast. Fail fast. Get to users immediately with an MVP. ”? As somebody who is making products, but is also in a different mode than a typical startup, how do you think about advice like that?", "Andy Matuschak 01:41:42", "I have complicated feelings about this. I need different advice on different days. Of course, different people need different advice on different days. When I was getting into this kind of work, what that advice led me to do is to not think all that deeply about the ideas I was exploring. An idea would come up and I’d think, “ Oh, I can try that. ” I would try that, then I'd learn something, and then I'd repeat. There wasn't this sense of building a theory of what the problem is and what it would mean to solve it. Instead, it was just a theory of action. A theory of action as opposed to a theory of change is, imagine you're in your current position, and eventually want to get to some goal state, a theory of action is you look around you and you say, “ Well, what can I do? What can I build? What do I see as possible? ”.", "A theory of change is to look at the endpoint to try to work backwards. The metaphor is imperfect because in research, you don't exactly know what the endpoint is and you certainly don't know how to work backwards. I guess what I'm saying is that following that advice historically has led me to try things that were straightforward. The most powerful design work has ideas in it. What makes a nonlinear text editor, the text editors that we all know and love, so powerful is this observation that writing is a nonlinear process, but writing with a pen linearizes it. Many, many other observations like that and on the nature of what it means to have a thinking environment is how we got that particular interface. Likewise, the way that we got powerful programming environments is by people thinking very hard about what it means to specify a system and coming up with new primitives that express those ideas.", "The most powerful interfaces are often the expression of new ideas or new primitives that capture new ways of doing, new kinds of objects that can be manipulated. In Photoshop, for instance, you can manipulate a photo by means of a construct called a layer. This is a very strange idea. It has some precedent in dark rooms where you could potentially have sheets of film. I don't mean the negatives, I mean sheets of gels that you could potentially put over lights to affect the exposure - to make there be more exposure here and less there. But in Photoshop, they're non-destructive and they're continuously manipulatable. The layer is like a new primitive that is introduced into the activity of photo editing. It utterly changed what you could do in photo editing.", "What I'm saying in a very long-winded and confused way is that it's difficult to have ideas by means of building an MVP very rapidly. Now, if you have an idea that you think is interesting, it is good to test it rapidly. Part of why I'm confused in my response here is that it's good advice once you have something worth testing. For me, adopting that mindset, and I've lived in it for so long that it's very ingrained in me, it makes me not sit in stillness and in confusion and in contemplation with the ideas long enough for them to be good. Michael Nielsen and I made Quantum Country and when I was trying to think about what to do next, the most obvious or natural idea was “ What if we just tried that with lots of other things? ”.", "That idea occurred to me and the pandemic had just struck, so I was feeling a little timid, creatively or emotionally. I wanted something that felt safe and I knew I could do that. I can build a platform that generalizes this thing that we did for this textbook. So I did. And I did it relatively quickly. I did it in a few months. And that wasn't the right thing to do. It wasn't really the right question to be asking. The idea wasn't that strong. Building this highly general version of it wasn't the right way to test it. I would have been better building more one-offs rather than a self-serve thing that anyone could use. And this comes down to the difference in aim. I'm not trying to build some kind of scalable thing for the world at this moment. I'm trying to build the idea. The prototype is an expression of the idea. Once it arrives at a good place, then maybe there can be some scalable solution. But it's not necessarily at that place. Until it's at that place, there's a lot of thinking and sketching that goes along with the building and prototyping. Part of my confusion here is that often I still need to hear this advice. Often I will just tie myself in knots in theory land. What I really need to do is to have a friend sit me down and say, “ Is there a piece of this that you can carve off and build next week? ” So you're hearing a lot of tension.", "Dwarkesh Patel 01:47:14", "Interesting. What was the consequence of shipping Orbit out before it felt ready to scale?", "Andy Matuschak 01:47:25", "I mean, I learned some things. It was fine. It taught me a lot about where that particular format succeeds and fails in other venues. It was just not a very effective way to find those things out. It was an MVP in the sense that it has very few features and it's very simple. But it was highly general. It's a deployed thing that has infrastructure and has accounts, it has all this stuff that you do when you're building a real thing. That's very different from, “ Let me work with this one author and see if I can make it work with this one other book that's very different from Quantum Country to form a specific question or specific theory about, it worked for this text, what's the next kind of text that would be good to test with? ”. I certainly could have done it much more rapidly.", "Dwarkesh Patel 01:48:16", "Why do you think that this idea of tools of thought has nerd sniped so many people in Silicon Valley?", "Andy Matuschak 01:48:24", "It contains this message for technologists that they can potentially be very powerful. That's always tantalizing for people. It also feels very actionable for people in a way that's super misleading. I meet tons and tons of people who tell me that they're interested in tools for thought and 95 plus percent of them are engineers. The problem with this is that building an interesting tool for thought is basically entirely a design problem. Their design ideas are usually not very good, or troubled in a variety of ways. Yet, they can make a thing that solves a problem for them in their lives. That feels very tantalizing or encouraging. It feels like something to get their hands around.", "We in Silicon Valley are very, very interested in thought. We are thinking people. People are very interested and engaged with anything that can potentially expand our capacity. That too is tantalizing. What if I could think better? It's also tantalizing because it's meta. There's all these cliches about people tinkering with their dot files endlessly or tinkering with their blog website, which has two posts on it, but they have to rewrite it because they want to do something else. The new one will have three posts on it before they rewrite it again. Tools for thought also scratch that itch. It's work about the work. This sounds very cynical, by the way. I don't mean for it to be, I'm just trying to earnestly answer the question.", "Here's a more optimistic and generous response. Many of us got into computing because computers portray a sense of personal empowerment and possibility. We remember growing up, being locked in our bedrooms at midnight, fooling around. We have this very powerful tool at our disposal and it's opening up these worlds for us. For many people here, that was a formative part of their personal development. Anybody pointing to that and saying we can do more stuff like that is going to be pretty compelling.", "(01:50:44) – Monetizing public work", "Dwarkesh Patel 01:50:44", "This was an interesting question from Matt Clancy on Twitter - what are the characteristics of a good crowdfunded research project?", "Andy Matuschak 01:50:53", "One of the unfortunate things that I've learned in my crowdfunding experience is that there are some dynamics that seem hard to change. One of them is churn rate. Any subscription revenue business model - that's what I have - you lose subscribers every month. In my case, it's about 2%. It's not that large, but it does mean that I need a certain number of new subscribers all the time. One thing I've learned is that the churn rate is surprisingly insensitive to anything I do. I experiment with a variety of things and it hasn't meaningfully changed the churn rate. What does change things is getting more people into the top of the funnel, in other words, marketing. There are some things that have affected the fraction of those people in the top of the funnel who convert. I really hate this way of thinking about it.", "In summary, the thing that I've discovered that's sad is that I ended up having to think about this a little bit. I realize that this crowdfunding project only even slightly works, because it's understandable and interesting to others. It's already in a place where there's some results that look promising. It's very easy to imagine other projects that are not broadly applicable. If I were doing marine geology stuff, I probably wouldn't have a big crowd of internet people, not nearly as large or excited. That's one property - this work is very general. It applies to many, many people. It applies to people who have disposable income. If I were doing a research project on writing practices of disadvantaged artists, my audience might not have as much disposable income.", "I have already made some progress. That's important. Unfortunately it's probably very difficult to use crowdfunding in the very early days of a research project. I've already chosen a research agenda or direction, and I can express it. Crowdfunding probably applies after the first few stages of research have been completed. There's probably standard grant advice, where at some point, I'm going to be using this crowdfunding to figure out the next thing, and I won't be able to explain it to anybody. There certainly are seedlings, but you have to have something in flight. I need to be able to say something about my progress with some kind of regularity.", "For instance, my wife is working on this study of biological markers of age in association with delirium and traumatic brain injury. To do this, she is signing up patients who show up to the hospital with traumatic brain injury. Once they agree to participate in the study, they agree with the taking of various blood samples and things like that. Recruiting enough patients to get the significance that she requires will take two years or something like this. She can report a little bit of intermediate stuff, but certainly not a monthly update. Right?", "Dwarkesh Patel 01:54:13", "Yeah, that would be a weird Patreon post.", "Andy Matuschak 01:54:17", "Yeah, I can't quite report monthly updates either, but there's a cadence that's necessary.", "Dwarkesh Patel 01:54:22", "Why bother with it at all? I'm sure there's many wealthy individuals who would be happy to single-handedly fund your research. Is there a reason you chose crowdfunding?", "Andy Matuschak 01:54:32", "Those wealthy individuals are very welcome to reach out and offer to do so. I will say I've been fortunate to have many high net worth individuals as sponsors, but each of them is providing a sponsorship of $100 a month on my Patreon. That is what I get from these people. I'm certainly not getting wild offers for more.", "Dwarkesh Patel 01:54:59", "I think you're using the wrong tool given the wealth distribution of your audience.", "Andy Matuschak 01:55:05", "Maybe. There's a couple of ways to interpret your question. One question is, why crowdfund when I could appeal to high net worth individuals? Another version is why crowdfund at all, as opposed to raising grants or talking to philanthropies? Are you mostly focused on the first of those?", "Dwarkesh Patel 01:55:19", "Yes.", "Andy Matuschak 01:55:21", "If I'm going to be honest, it's because it has worked. The history of the crowdfunding of this project, like many things in my life, is the result of goading from Michael Nielsen . Early on when we were working on this Quantum Country project, he suggested we set this up and I hemmed and hawed and said, “ Yeah, it's going to be a distraction. We don't really need this right now. Let's deal with it later when we have something to show. ” He's like, “ No, no, no, let's just get it started. It's going to take a long time to get enough subscribers. ” It turned out he was right. The process of building a subscriber base and crowdfunding a project takes a couple of years, at least in my experience. Starting earlier was better.", "If that hadn't worked or if we hadn't started early, I probably would have just reached out and asked for individual help and I probably will if it fails on me. I'll say also, when there have been specific projects that I've wanted to do that require, say, hiring people, I have reached out to high net worth friends and they've helped, but in the low five figure, four figure range. And that's great and I'm very grateful. The answer may be a mix. One of the big limitations to crowdfunding is it can't sustain a team or institution. It can barely sustain me. I'm somewhere between a grad student and a junior faculty member. And that's okay. There's a variety of reasons why that's okay for me that are pretty particular to my circumstance, but it certainly wouldn't be okay for everybody. Even for me, it doesn't allow me to support others.", "Dwarkesh Patel 01:57:06", "Right. It's even more striking because you're pretty well known, especially amongst the audience that would be happy to fund this kind of work. If the LeBron James of independent public research is earning between a grad student and a junior faculty member, it's not a great sign.", "Andy Matuschak 01:57:39", "It's worth considering that I'm maybe not very good at this. First off, I'm not that successful as a researcher. I object to the LeBron James characterization. It's true that I'm maybe the most successful crowdfunded researcher in tech stuff, and that's kind of weird. In the last couple of years, I've figured some stuff out, but I wouldn't say I've had any spectacular hit publications.", "One thing that is true of this is that when I have big publications, I get a lot of new subscribers. There is some kind of market force that could be higher if I were having more spectacular success with my research. It's also true that I systematically avoid marketing it. That's a self-protection thing. I am really worried about the corrosive influence of audience and marketing on honest inquiry. It is very easy to distort my work. It's almost a default to try to make it be something that people would be more likely to like rather than the thing that I actually want to investigate, or to do the boring simple version of it rather than the interesting deep version so I can publish more stuff more often.", "One thing that I've chosen not to do, and it's a choice that's definitely cost me financially, is to publish what academics would call minimum viable units of paper. They have a pithier phrase than that. Minimum viable papers. It's very common to take any new marginal insight that is above a particular bar and publish that. I just haven't done that. I've written informal letters to my patrons, “ Hey, I figured this thing out this month. ” If I were an academic, I would have published that as a paper. If I were a marketing-oriented crowdfunded researcher, I would have done some glossy thing and promoted it “ Look at this thing I figured out. ” But actually, I just don't think it's that big a deal and I'd rather get on to the next thing. I have that choice of waiting to publish. That's not really what I'm worried about. Really what I'm worried about is marketing, man. Marketing. It makes it so hard to be honest with oneself, at least in my experience. Not only to be honest, with what I think is interesting and important, but even to be honest about the results. Every paper is, in some sense, a little marketing piece trying to make the case that it's significant, that its results are really exciting, really important. That is really corrosive to discovery. It's true that you need a really strong emotional connection to the work in order to do good work. Part of that emotional connection comes from a sense of excitement of maybe being hot on the tail of something really good. There's a temptation to portray what you found in the best possible light, to downplay its limitations, to take up space and to totalize. All of this is just death for discovery.", "Dwarkesh Patel 02:01:00", "It is interesting to hear that from somebody who inadvertently and without intentionally trying to do so has done a good job of spreading your material. I've known about you for a long time. I do wonder if there's an element of, if you get to a certain level of quality, trying to market your stuff, not only doesn't help, but probably hurts you. If you can try to think of somebody like Gwern trying to post YouTube shorts of his blog post, it would just be like, “ What are you doing, man? ” It's just so good that he doesn’t need to promote it.", "Andy Matuschak 02:01:33", "Gwern is an interesting example because there's a simpler failure mode. I still routinely run into people who will tell me, “ Oh, I've really liked your work for a while. I didn't know you had a Patreon. ” That's a simple failure of a certain kind of marketing on my part. Gwern actually has this even worse. I adore Gwern. I have learned so much from him. You can go to his Patreon page and he actually makes public his revenue. He makes a tiny fraction of what I do on Patreon. I think this is inappropriate. Gwern is a much more impactful researcher than I am and he has a much bigger audience than I do. The fact that they aren't converting into patrons is mostly a matter of the way that he talks about it and the way that he presents it. It's not that he needs to market more people to his webpage. I expect he has plenty of traffic and a large audience. It's much larger than mine. There are a bunch of variables about the way you talk about this membership offering. None of us really want to think about them. I've ended up at a slightly more effective part of the space, but I'm pretty sure that there's much more effective ways to do whatever it is I'm doing.", "Dwarkesh Patel 02:02:48", "Yeah, this is a really interesting problem because I have a Substack where if people choose, they can help contribute to the podcast. It's a broad enough revenue to help pay for certain episodes and traveling. However, in comparison to what I’ll be making now that I’m going to be doing ads, it's a small fraction. Some people might say “ It's unfortunate that you have to do ads ” and maybe listeners will just be finding out for the first time that there was an option on Substack. But you don't want to be in the position where you're asking listeners for money every episode, right?", "Andy Matuschak 02:03:25", "Yeah, I hate asking people for money. This is a common issue for creative people. I hate it. I really hate it. I probably need to get over this. I do want to make one point, though. I had much more success with my Patreon when I recast it as, “ Oh, please subscribe to support my work ”, like the thing you were describing, to an offering to become a member. “ When you become a member, these things will happen. ” These things need not be terribly substantial necessarily. There's a difference between a tip jar and a membership in people's minds. Becoming a member means something. If you could offer something small, that feels membership-ish, you might get very different results. Gwern has the kind of tip jar vibe. These days, I have a member vibe. My instinct is that if you were to move to “ Become a member of Gwern ’ s lab ”, he would have better results.", "Dwarkesh Patel 02:04:30", "He has a thing on Patreon where if you donate five bucks or eight bucks, he'll read an entire book and review it.", "Andy Matuschak 02:04:37", "Yeah, this is crazy. I don't know if anybody's ever taken him up on this.", "Dwarkesh Patel 02:04:39", "Yeah. That's like valuing his time at a dollar.", "Andy Matuschak 02:04:45", "Yeah, I don't quite understand this. It's also the case that you'd probably have an easier time asking for subscriptions if you had a larger audience first. You can build the audience for free and then have some bonus offering behind a wall. I feel very conflicted about this actually, maybe you can help me think about it.", "I have all these patron essays. It's where most of my writing is these days, because I'm waiting until I can collect enough things for the next big public piece. I have a couple of big public pieces in various stages of flight. I'm writing a lot for patrons and probably much of my audience or people out there don't even know that it exists. One challenge of member only content is even making clear to others that it's there. Often people will try to achieve this by tweeting or sending newsletters out about this subscriber only content. I just can't bring myself to do it. It feels terrible to say, “ Oh, here's a link, but you can't view it. ” I can't do it. I don't know how you think about this, or if you think about subscriber only material for Lunar Society.", "Dwarkesh Patel 02:05:59", "I was actually just about to mention this to you. I'm a patron and I got a chance to read all your patron only essays, and they're great. I was thinking while I was reading them that it's really unfortunate that a person might not know they exist. If they're not familiar enough with your work to go ahead and sign up, it's just behind the Patreon. It's a shame that one of the ways to fund public work is to make some of that work less public.", "Andy Matuschak 02:06:31", "There are better ways to do this. There are design solutions. For instance, if it were the case that my work was mostly all in one place rather than separate places, and the subset of the work that's public was visually and structurally adjacent to the subset of the work that's private, it would be clear that there's additional stuff that's available. Perhaps you can see the first bit of it – Substack has this to get some sense of what it is that you'd be seeing. I've invested like zero effort into figuring out an appropriate presentation of this stuff.", "Dwarkesh Patel 02:07:06", "Right. Another thing to consider is that a big part of the impact of your writing work is how many people actually consume it. The expected value of that is dominated by the probability it goes viral. For example, you had this really insightful post based on your experience in industry at Apple about the possibilities of Vision Pro and in what ways it's living up to and not. I think that would have just gone huge.", "Andy Matuschak 02:07:43", "Oh, thanks. I did make it public. I put it on Twitter and it was on the front page of Hacker News. I think you're right. Usually I don't want this stuff to go viral. The primary value that most of it has for people is opening up a window into a particular very unusual kind of creative work that they don't normally get to see the behind the scenes of. And most of it is kind of context laden. It's not really freestanding. And I don't really want to write it as if it could be freestanding. I've occasionally had the experience of one of these things getting widely distributed and then getting all these comments of people being angrily confused about what I'm even talking about. That's kind of discouraging. All of this to say when I want to write something for broad public consumption, I write something for broad public consumption.", "(02:08:36) – Spaced repetition", "Dwarkesh Patel 02:08:36", "Okay, I've got some questions from Twitter.", "Andy Matuschak 02:08:43", "Okay, bring it on Twitter.", "Dwarkesh Patel 02:08:38", "This is another question from Matt Clancy. Are there other examples of beneficial knowledge work practices that perhaps mostly work because they are former spaced repetition practice where the participants don't realize it?", "Andy Matuschak 02:08:52", "This is embedded in our working world. For a researcher, when you need to write papers regularly and you're writing those background sections, you're repeatedly explaining the history of a particular line of research and citing the appropriate sources. That is a kind of spaced repetition. When you have students and you're mentoring them in conversation “ Oh, in this kind of situation, you really need to remember to do x ”. That is a kind of spaced repetition. All this is kind of accidental. The doctors have rounds - even when they're not seeing patients regularly, they're still exposed to other patients. There's often a structure in this where while the patient is being presented, you're supposed to be trying to think of what to ask. “ What would my differential be? ” Before you hear it, there's covert retrieval happening. It's everywhere in our world. It's spaced, and it's repeated.", "The thing that differentiates the formal practice that I've been exploring is that it focuses on material that you wouldn't normally have repeated either because you're too early with it to have a consistent practice, or because it's just not firmly tethered enough in anything in your life.", "(02:10:16) – Andy’s personal website and notes", "Dwarkesh Patel 02:10:16", "This is a question from Ian Vanagas. “ What is the optimal amount of effort that should go into a personal website? ” I think he might have noticed the amount of CSS that exists on andymatuschak.org , which is very beautiful.", "Andy Matuschak 02:10:32", "I don't like it. But this is what everybody says about their website, right? It's three years old, that means I want to redesign it, but I will not allow myself to because it feels like a distraction. What's the right amount of effort? There's no general answer to that question. Of course, that's going to be my answer but what can I say about it? What's the job of the website? What's it trying to do? Many people, especially engineers, do themselves a disservice by fretting over their websites unnecessarily, building vast technical infrastructure when really what they want is a place to post markdown files. They're better off getting a ghost installation. The main thing to think about is what is it that you want to put out in the world? What is the ideal form of that thing? And to try to find some way of organizing and expressing that.", "We have these common patterns, like a blog or a portfolio. Often people end up forcing themselves into these patterns. People will end up using blogging software to make something that's durable. Very interesting personal websites often come from people who are thinking about that question – the shape of the thing that they want to put out into the world and making something that speaks to it. Often, once you understand the shape of making the thing, it's not that effortful. My website was not an enormous project for me, probably should have been a slightly larger one, given that my income depends on people coming through it.", "Dwarkesh Patel 02:12:04", "The working notes with the…?", "Andy Matuschak 02:12:06", "That was a weekend.", "Dwarkesh Patel 02:12:07", "Really?", "Andy Matuschak 02:12:08", "Yeah, I feel bad about it because it's made its way into tons of commercial projects now. People are like, “ Ah, this is the way to present network notes ”. I think it's not very good in a variety of ways. I spent like a couple days on it.", "Dwarkesh Patel 02:12:21", "Wow. Because I thought this is where the question was alluding to – you must have spent months on this.", "Andy Matuschak 02:12:25", "Nope. It is a little bit like the thing about the mechanic hitting one thing and knowing the thing. I have design intuitions that led me in a particular direction. But there's lots of things I don't like about it. I just haven't allowed myself to spend any more time on it because I just don't think it's important enough.", "(02:12:44) – Working at Apple", "Dwarkesh Patel 02:12:44", "I have a question about your time at Apple before I ask the final Twitter question. Everybody has an iPhone and from the outside, there must be so many different tradeoffs and constraints when a thing like this is being designed. What is the supply of certain components and the cost? What do different consumers want? What features is the R&D team ready to put forward? At your time at Apple, you were responsible for a lot of these cornerstone design features. How is all that information integrated – taking all of these constraints into account and deciding that this is the design? How does that happen?", "Andy Matuschak 02:13:20", "It's very compartmentalized. None of what you just said was relevant to me. It was all pre-specified. At Apple, you have a little domain that's your own, and the boundaries of that domain are determined by everybody else's little domain. There's a person who's responsible for thermals. Actually, there's a team that's responsible for thermals, and they figure out things like “ What is our thermal budget? How much can we have the CPU on and during what kinds of working situations?” I can't argue with that. Those are just my constraints.", "Dwarkesh Patel 02:14:00", "But aren't those constraints informed by different problems?", "Andy Matuschak 02:14:06", "It is iterative. We'll run into stuff where there's a thing we really want to do, but we can't pull it off, because it drains too much power. So Hey Siri is an interesting example. To be able to activate a voice command at any time without interacting with the device is great. People prototype that just like having a thing listening in the background and watching for it. But that requires having the main CPU on all the time, processing audio buffers. You simply can't do that – it drains the battery. That attempt led to eventually having this dedicated co-processor that runs at a lower power that's very limited and restricted and it can be on when the main CPU is not on, and it can listen for that sound.", "Dwarkesh Patel 02:14:51", "Is there a person whose job it is to take all things into account? “ I have decided, given the memos from everybody, that thermals, you guys need to work on this, you guys work on that? ”.", "Andy Matuschak 02:15:04", "Not exactly. It's a little more push and pull. Some of a team’s priorities will be internally determined. The thermals team has its hobby horses, and it knows what it thinks is important. Some of them will be externally determined. There is an executive team that makes ultimate decisions about the main priorities for next year's devices. “ Ah, next year, we're going to do this face ID thing to unlock the phone and we're not going to have a home button. ” If you want to not have the home button, and you want to have the screen go edge to edge, it has all of these impacts like top to bottom on the device. That decision creates lots of necessary work for lots of teams.", "Some stuff is kind of handled at a more local level. For instance, the director of iOS apps might decide, we have this problem because the apps were built at the same time as the system frameworks, we end up building our apps using this weird Frankenstein, partially internal framework, partially the public one that our developers use. The internal one is always a little bit different and it's not always maintained reliably. So we have all these problems about the skew between the two. A big priority for us is going to be to rewrite all the pieces of our apps to only use the public bits so that they could be distributed on the App Store.That's a more local decision rather than at a top level executive team.", "Dwarkesh Patel 02:16:36", "What I find really interesting about this is that it's possible for a $2 trillion company to integrate all this information to have a cohesive hierarchy where so many different products, so many different trade offs are being made. Does that make you think that over time, these very well functioning tech firms will get bigger and bigger, that they can actually handle the cost of having this much overhead?", "Andy Matuschak 02:17:02", "Let me first just respond to this observation about the enormity of the company and then we'll talk about the other firms. The reason Apple is able to do this is because of the way they delegate. While there is a very strong command and control structure, and important decisions are made by a small group of people at the top, the individual leaders in various areas at all levels of hierarchy have an enormous amount of latitude. That's the only way that any of this can work. Individual people are given very, very strong responsibility and authority within domains to make decisions. That's how you can have all of these disparate products.", "Craig Federighi is head of software at Apple. What does that mean? How can you be head of software? How many platforms do they have? iOS, iPadOS, WatchOS, VisionOS, MacOS. There's also an operating system running in a bunch of the little chips in the cables. All of that is under Craig. What does that mean? In practice, what it means is that there is a set of software concerns that he's super concerned with and he's thinking about day to day. When I was at Apple, I had Craig Federighi in my office talking about gesture-recognizer heuristics with me, because that was something that was hyper salient to him. At the same time, he was basically completely ignoring 95% of software-related decisions. He just fully delegated those things to others.", "There's a really interesting Harvard Business Review piece from a few years back about Apple's management structure and how they have different concentric rings of responsibility for any given leader. I don't exactly remember the breakdown, but say there will be 5% of things that you're responsible for – that you have your hands on at all times and you are directly manipulating, controlling. There's a ring outside of that, that's a little bit bigger. Those are the things that you're keeping an eye on. They are salient to you, you're getting reports on them, you are checking in on them, you are thinking about them, you're coming up with ideas and sending them down the chain, but you're not directly controlling them. Then there's a bunch of stuff that you've figured out how to delegate and you want to hear if there's problems. They talk about how that structure's evolved over time. It's now been eight years since I've been at Apple so I'm sure it's practically unrecognizable to me.", "(02:19:25) – Spaced repetition 2", "Dwarkesh Patel 02:19:25", "This is a question from Bazel Halperin on Twitter. “ Is the lack of spaced repetition adoption a market failure? Or is a lack of adoption efficient? ”", "Andy Matuschak 02:19:35", "It's probably mostly efficient. In places where spaced repetition, as it stands without substantial novel cultural knowledge that's difficult to transmit and isolate, is valuable, we see a lot of space repetition usage. Among medical students who are highly motivated and have lots of reason to study, the material is shaped in a way that's highly amenable to spaced repetition usage, there's tons of spaced repetition usage. In fact, the med student Anki subreddit is bigger than the Anki subreddit.", "Likewise, among language learners, spaced repetition in various forms is extremely common. Duolingo has spaced repetition integrated into it. Spaced repetition is naturally present in the process of immersion learning. Modern spaced repetition tools between the Leitner’s Box and Wozniak's SuperMemo , were both originally motivated by language learning. In language learning, there's a substantial market for spaced repetition. It could be used in a variety of more creative ways. For instance, Russell Simmons has pointed out to me that studying individual vocabulary words on flashcards often misses integrative opportunities. What you really want is to study lots of sentences, or possibly to build up towards that. Duolingo does something like that. People in spaced repetition for language learning subreddits mostly don't. Some of them do, it's complicated. There's edges of the market where you need early adopters to try things that have rough edges. And the early adopters sometimes get cut and bleed a little bit. That's why people aren't rushing into it.", "As to why spaced repetition isn't widely used, for instance, to learn quantum physics, it's basically correctly priced. I can use spaced repetition to learn quantum physics a bit faster. It doesn't make it a fait accompli or anything like that. It's not like learning anatomy, where basically if you study the deck, you'll be done. You need some more stuff. I'm working on some of that stuff. You also need an incredible amount of very unusual knowledge that's largely tacit at the moment, in order to use it in that way. That's part of what motivated recording this other video is to show some of that in action. The fact that the market isn't acting on this thing that it can't really act on seems pretty appropriate.", "Dwarkesh Patel 02:21:56", "That's a good place to tie off that collaboration and this project. This is really interesting.", "Andy Matuschak 02:22:04", "Thank you so much.", "Dwarkesh Patel 02:22:05", "This is many hours of just insights and lots of food for thought.", "Andy Matuschak 02:22:07", "Wonderful. Thank you.", "Dwarkesh Patel 02:22:08", "Yeah, thanks for coming on." ]
[ "https://www.amazon.com/How-Read-Book-Classic-Intelligent/dp/0671212095/", "https://quantum.country/", "https://michaelnielsen.org/", "http://www.paulgraham.com/know.html", "https://en.wikipedia.org/wiki/ACT-R", "https://www.dwarkeshpatel.com/p/scott-young", "https://twitter.com/cauchyfriend/status/1641577765060349954?s=20", "http://book.bionumbers.org/", "https://en.wikipedia.org/wiki/Literature-based_discovery", "https://www.jstor.org/stable/2177575", "https://store.steampowered.com/app/210970/The_Witness/?snr=1_1056_4__curatorfeatureddiscount&curator_clanid=33228275", "https://en.wikipedia.org/wiki/Myst_(series)", "https://store.steampowered.com/app/220200/Kerbal_Space_Program/", "https://www.zachtronics.com/", "https://www.nand2tetris.org/", "https://geohot.com/", "https://en.wikipedia.org/wiki/Cuisenaire_rods", "https://www.beeminder.com/", "https://www.linkedin.com/in/alecresnick", "https://www.amazon.com/Experience-Education-John-Dewey/dp/0684838281", "https://en.wikipedia.org/wiki/A_Syntopicon", "https://www.thediff.co/", "https://withorbit.com/", "https://quantum.country/", "https://michaelnielsen.org/", "https://gwern.net/", "https://www.patreon.com/gwern", "https://andymatuschak.org/", "https://www.apple.com/in/leadership/craig-federighi/", "https://en.wikipedia.org/wiki/Leitner_system", "https://www.supermemo.com/en" ]
https://www.dwarkesh.com/p/austin-vernon
Austin Vernon - Energy Superabundance, Starship Missiles, & Finding Alpha
[ "Intro", "Dwarkesh Patel (00:00:00):", "Okay! Today, I have the pleasure of interviewing Austin Vernon who writes about engineering, software, economics, and investing on the internet, though not that much else is known about him. So Austin, do you want to give us a bit of info about your background? I know that the only thing the internet knows about you is this one little JPEG that you had to upload with your recent paper. But what about an identity reveal or I guess a little bit of a background reveal? Just to the extent that you're comfortable sharing.", "Austin Vernon (00:00:29):", "My degree is in chemical engineering and I’ve had a lifelong love for engineering as well as things like the Toyota Production System. I've also worked as a chemical engineer in a large processing facility where I've done a lot of petroleum engineering. I taught myself how to write software and now I'm working on more research and the early commercialization of CO2 electrolysis.", "Dwarkesh Patel (00:00:59):", "Okay yeah. I'm really interested in talking about all those things. The first question I have is from Alex Berger, who's the co-CEO of Open Philanthropy. When I asked on Twitter what I should ask you, he suggested that I should ask “Why so shady?” Famously you have kind of an anonymous personality, pseudonymous thing going on the internet. What's up with that?", "Austin Vernon (00:01:25):", "Yeah. I think he posted a tweet that said “I don't know who this guy is or if he's credible at all, but his stuff sure is interesting”. That really made me laugh. I thought that was hilarious. Fame just doesn't seem necessary, I think I'm fine with my ideas being well known and communicating, but I have less desire to be personally famous.", "Starship as a Weapon", "Dwarkesh Patel (00:01:52):", "Gotcha, gotcha. I wanted to start off with a sexy topic, let's talk about using Starship as a kinetic weapon. I thought that was one of the more amusing posts you wrote. Do you want to talk more about how this would be possible?", "Austin Vernon (00:02:08):", "Well, I think the main thing with Starship is that you're taking a technology and you're making it about 100 times cheaper for cargo and 1000 times cheaper for people. When things like that happen that drastically, you're just looking at huge changes and it’s really hard to anticipate what some of those can be when the change is that drastic. I think there's a lot of moon-based, Mars-based stuff that doesn't really catch the general public's eye. They also have trouble imagining some of the point-to-point travel that could be possible. But when you start talking about it as a weapon, then I think it lets people know they should be paying attention to this technology. And we certainly do not want to be second or third getting it. We should make sure that we're going to be first.", "Dwarkesh Patel (00:03:05):", "Yeah. I think you mentioned this in the post, but as recently as the '90s, the cost of sending one kilogram to space was around $20,000. More recently, SpaceX has brought it to $2,000. Lots of interesting questions pop up when you ask, “What will be possible once we get it down to $200 per kilogram to send into orbit?” One of them could be about how we might manufacture these weapons that are not conventional ballistics. Do you want to talk about why this might be an advancement over conventional ballistic weapons?", "Austin Vernon (00:03:37):", "Well, regular conventional ballistic weapons are extremely expensive. This is more like a bomb truck. But usually we think of B52 as the bomb truck and this could be even cheaper than the B52, delivering just mass on target. When you think about how expensive it is to fly a B52 from Barksdale in Louisiana all the way across the world.. you can do it from south Texas or Florida with the Starship and get more emissions per day and the fuel ends up being. When you go orbital, it takes a lot to get to orbit. But then once you're in orbit, your fuel consumption's pretty good. So over long distances, it has a lot of advantage. That's why the point-to-point works for longer distances.", "Austin Vernon (00:04:27):", "There's really a sweet spot with these weapons where you want it to be pretty accurate, but you also want it to be cheap. You're seeing that problem with Russia right now as they have some fancy parade style weapons that are really expensive, like multi-billion dollar cruise missiles, but they're missing that $5,000 guided artillery shell or that $20,000 JDM that you can just pit massive. Or the multiple launch rocket system, guided rockets. They're really short on all those because I think they had just had a limited amount of chips they could get from the US into Russia to make these advanced weapons.", "Austin Vernon (00:05:07):", "But yeah, so the Starship gives you just a platform to deliver. You could put JDMs in a shroud, or you could just have the iron unguided kinetic projectiles, and it just becomes impossible for a ship to launch missiles to intercept yours if your cost is so low, you can just overwhelm them.", "Dwarkesh Patel (00:05:29):", "Okay. There are a few terms there that neither I nor the audience might know. So what is JDM ? What is shroud ? And why are chips a bottleneck here? Why can't it just be any micro-controller?", "Austin Vernon (00:05:42):", "So JDM is Joint Direct Attack Munition. So what we did is we took all our Vietnam surplus bonds and we put this little fin-kit on it and it costs like $20,000, which is cheap for a weapon because the actual bond costs, I don't know, $3,000. And then it turns it into a guided weapon that, before you were probably lucky to get within 500 meters of a target, now you can get it in with two meters. So the number of missions you have to do with your planes and all that goes down by orders of magnitude. So it's an absolutely huge advantage in logistics and in just how much firepower you can put on a target. And we didn't even have to make new bombs, we just put these kits on all our old bombs.", "Austin Vernon (00:06:33):", "Let's see.. Yeah the chips are a problem. There's this organization called RUSI . I think they're in the UK, but they've been tearing down all these Russian weapons they found in Ukraine and they all have American chips in them. So technically, they're not supposed to be able to get these chips. And yet, Russia can't make a lot of its own chips. And especially not the specialized kinds you might want for guided weapons. So they've been somehow smuggling in chips from Americans to make their advanced weapons", "Dwarkesh Patel (00:07:03):", "What is special about these? As far as I'm aware, the trade with China is still going on and we get a lot of our chips manufactured from Taiwan or China. So why can't they do the same?", "Austin Vernon (00:07:14):", "It's the whole integration. It's not just the specific chip, but the board. They're more like PLCs where you almost have wired-in programming and they come with this ability to do the guidance and all that stuff. It all kind of has to work together. I think that's the way I understand it. I don't know. Maybe I don't have a really good answer for that one, but they're hard to replicate is what matters.", "Dwarkesh Patel (00:07:43):", "Okay that's interesting. Yeah, I guess that has a lot of interesting downstream effects, because for example, India buys a lot of its weapons from Russia. So if Russia doesn't have access to these, then other countries that buy from Russia won't have access to these either.", "Dwarkesh Patel (00:07:58):", "You had an interesting speculation in the post where you suggested that you could just keep these kinetic weapons in orbit, in a sort of Damocles state really, almost literally. That sounds like an incredibly scary and risky scenario where you could have orbital decay and you could have these kinetic weapons falling from the sky and destroying cities. Do you think this is what it will look like or could look like in 10 to 20 years?", "Austin Vernon (00:08:26):", "Well, yeah, so the advantage of having weapons on orbit is you can hit targets faster . So if you're launching the rocket from Florida, you're looking at maybe 30 minutes to get there and the target can move away in that time. Whereas if you're on orbit, you can have them spaced out to where you're hitting within a few minutes. So that's the advantage there.", "Austin Vernon (00:08:46):", "You really have to have a two stage system I think for most, because if you have a really aerodynamic rod that's going to give you really good performance in the low atmosphere, it’ll end up going too fast and just burn up before it gets there. Tungsten's maybe the only thing that you could have that could go all the way through which is why I like the original concept of using these big tungsten rods the size of a telephone pole. But tungsten's pretty expensive. And the rod concept kind of limits what you can do.", "Austin Vernon (00:09:28):", "So a lot of these weapons will have, that's what I was talking about with the shroud, something that actually slows you down in the upper atmosphere. And then once you're at the velocity where you're not just going to melt, then you open it up and let it go. So if you actually had it fall from the sky, some may make it to the ground, but a lot would burn up. So a lot of the stuff that makes it to the ground is actually pretty light. It's stuff that can float and has a large surface area. Yeah, that's the whole thing with Starship. Or not Starship, but Starlink. All those satellites are meant to completely fall apart on de-orbit.", "Dwarkesh Patel (00:10:09):", "I see. One of the implications of that is that these may be less powerful than we might fear, because since kinetic energy is mass times velocity squared and there's an upper bound on the velocity (velocity being the component that grows the kinetic energy faster), then it suggests that you can upper bound the power these things will have. You know what I mean?", "Austin Vernon (00:10:32):", "Yeah, so even the tungsten rods. Sometimes people, they're not very good at physics, so they don't do the math. They think it's going to be a nuclear weapon, but it's really not. I think even the tungsten rod is like 10 tons of T&T or something. It's a big bomb, but it's not a super weapon.", "Austin Vernon (00:10:54):", "So I think I said in the post, it's about using advanced missiles where they're almost more defensive weapons so I can keep you from pitting your ship somewhere. Yeah I could try to bombard your cities, but I can't take ground with it. I can't even police sea lanes with it really. I'd still have to use regular ships if I had this air cover to go enforce the rules of the sea and stuff like that.", "Dwarkesh Patel (00:11:23):", "Yeah. You speculated in the post, I think, that you could load this up with shrapnel and then it could explode next to an incoming missile or an incoming aircraft. Could these get that accurate? Because that was surprising speculation to me.", "Austin Vernon (00:11:43):", "I think for ships, it's pretty... I was watching videos of how fast a ship can turn and stuff. If you're going to do an initial target on a ship to try to kill their radars, you'd want to do it above the ceiling of their missiles. So it's like, how much are they going to move between your release where you stop steering and that? The answer’s maybe 1000 feet. So that's pretty simple because you just shrapnel the area.", "Austin Vernon (00:12:12):", "Targeting aircraft, you would be steering all the way in. I'd say it's doable, but it'd be pretty hard. You'd actually maybe want to even go slower than you would with the ship attack. You'd need a specialized package to attack the aircraft, but if you have enough synthetic aperture radar and stuff like that, you could see these aircraft using satellites and then guide the bomb in the whole way. You could even load heat seeking missiles into a package that unfurls right next to them and launch conventional missiles too, probably. It’d be pretty hard to do some of this stuff, but they’re just the things you might be able to do if you put some effort into it.", "Dwarkesh Patel (00:12:57):", "Yeah. The reason I find this kind of speculation really interesting is because when you look at the modern weaponry that's used in conflicts, it just seems directly descendant from something you would've seen in World War II or something. If you think about how much warfare changed between 1900 and 1940, it's like, yeah, they're not even the same class of weapons anymore. So it's interesting to think about possibilities like these where the entire category of weapons has changed.", "Austin Vernon (00:13:33):", "You’re right and that's because our physical technology hasn't changed that much. So it really has just made more sense to put better electronics in the same tanks. We haven't learned enough about tanks to build a new physical tank that's way better, so we just keep upgrading our existing tanks with better electronics. They're much more powerful, they're more accurate. A lot of times, they have longer range weapons and better sensors. So the tank looks the same, but it maybe has several times more killing power. But the Ukraine war right now, they're using a lot of 40, 50 year old weapons so that especially looks like that.", "Dwarkesh Patel (00:14:20):", "Yeah. Which kind of worries you if you think about the stockpiles our own military has. I'm not well educated on the topic, but I imagine that we don't have the newest of the new thing. We probably have maintained versions of decades old technology.", "Austin Vernon (00:14:35):", "We spend so much, we've got relatively... This kind of gets into debate about how ready our military is. For certain situations, it's more ready than others. I'd say in general, most people talking about it have the incentive to downplay our capabilities because they want more defense spending. There's lots of reasons. So I think we're probably more capable than what you might see from some editorial in The Hill or whatever. Us just sending a few weapons over to Ukraine and seeing how successful they've been at using them, I think, shows a little bit of that.", "Austin Vernon (00:15:18):", "There's so much uncertainty when it comes to fighting, especially when you're talking about a naval engagement, where we don't just don't have that many ships in general… you can have some bad luck. So I think you always want to be a little bit wary. You don't want to get overconfident.", "Dwarkesh Patel (00:15:37):", "Yeah. And if the offensive tech we sent to Ukraine is potentially better than the defensive tech, it's very possible that even a ballistic missile that China or Russia could launch would sink a battleship and then kill the 2,000 or 1,000 whatever soldiers that are on board. Or I guess, I don't know, you think this opens up avenues for defensive tech as well?", "Austin Vernon (00:16:03):", "Yeah––generally the consensus is that defensive technology has improved much more recently than offensive technology. This whole strategy China has is something they call anti-access/area denial, A2/AD . That's basically just how missiles have gotten better because the sensors on missiles have gotten better. So they can keep our ships from getting close to them but they can't really challenge us in Hawaii or something. And it really goes both ways, I think people forget that. So yeah, it's hard for us to get close to China, but Taiwan has a lot of missiles with these new sensors as well. So I think it's probably tougher for China to do it close to Taiwan than most people would say.", "Dwarkesh Patel (00:16:55):", "Oh, interesting. Yeah, can you talk more about that? Because every time I read about this, people are saying that if China wanted to, they could knock out Taiwan's defenses in a short amount of time and take it over. Yeah, so can you talk about why that's not possible?", "Austin Vernon (00:17:10):", "Well, it might be, but I think it's a guess of the uncertainty [inaudible 00:17:14]. Taiwan has actually one of the largest defense budgets in the world and they've recently been upping it. I think they spend, I don't know, $25 billion a year and they added an extra $5 billion. And they've been buying a lot of anti-ship missiles, a lot of air defense missiles.. Stuff that Ukraine could only dream of. I think Ukraine's military budget was $2 billion and they have a professional army. And then the other thing is Taiwan’s an island, whereas Russia could just roll over the land border into Ukraine.", "Austin Vernon (00:17:44): There's just been very few successful amphibious landings in history. The most recent ones were all the Americans in World War II and Korea. So the challenge there is just... It's kind of on China to execute perfectly and do that. So if they had perfect execution, then possibly it would be feasible. But if their air defenses on their ships aren't quite as good as we think they could possibly be, then they could also end up with half their fleet underwater within 10 hours.", "Dwarkesh Patel (00:18:20):", "Interesting. And how has your view of Taiwan's defensive capabilities changed... How has the Ukraine conflict updated your opinion on what might happen?", "Austin Vernon (00:18:29):", "I didn't really know how much about it. And then I started looking at Wikipedia and stuff and all this stuff they're doing. Taiwan just has a lot of modern platforms like F16s with our anti-ship missiles. They actually have a lot of their own. They have indigenous fighter bombers, indigenous anti-ship missiles because they're worried we might not always sell them to them.", "Austin Vernon (00:18:54):", "They've even recently gotten these long range cruise missiles that could possibly target leadership in Beijing. So I think that makes it uncomfortable for the Chinese leadership. If you attack them, you're going to have to go live in a bunker. But again, I'm not a full-time military analyst or something, so there's a lot of uncertainty around what I'm saying. It's not a given that China's just going to roll over them.", "Software Productivity", "Dwarkesh Patel (00:19:22):", "Okay. That's comforting to hear. Let's talk about an area where I have a little bit of a point of contact. I thought your blog post about software and the inability of it to increase productivity number s, I thought that was super fascinating. So before I ask you questions about it, do you want to lay out the thesis there?", "Austin Vernon (00:19:43):", "Yeah. So if there's one post I kind of felt like I caught lightning in a bottle on, it's that one. Everything I wanted to put in, it just fit together perfectly, which is usually not the case.", "Austin Vernon (00:19:55):", "I think the idea is that the world's so complex and we really underestimate that complexity. If you're going to digitize processes and automate them and stuff, you have to capture all that complexity basically at the bit level, and that's extremely difficult. And then you also have diminishing returns where the easily automatable stuff goes first and then it's increasing corner cases to get to the end, so you just have to go through more and more code basically. We don't see runaway productivity growth from software because we're fighting all this increasing complexity.", "Dwarkesh Patel (00:20:39):", "Yeah. Have you heard of the waterbed theory of complexity by the way?", "Austin Vernon (00:20:42):", "I don't think so.", "Dwarkesh Patel (00:20:44):", "Okay. It's something that comes up in compiler design: the idea is that there's a fixed amount of complexity in a system. If you try to reduce it, what you'll end up doing is just you'll end up migrating the complexity elsewhere. I think an example that's used of this is when they try to program languages that are not type safe, something like Python. You can say, “oh, it's a less complex language” , but really, you've added complexity when, I don't know, two different types of numbers are interacting like a float and an int. As your program grows, that complexity exponentially grows along with all the things that could go wrong when you're making two things interact in a way that you were expecting not to. So yeah, the idea is you can just choose where to have your complexity, but you can't get rid of that complexity.", "Austin Vernon (00:21:38):", "I think that’s kind of an interesting thing when you start pairing it with management theory... when you add up all the factors, the most complex thing you're doing is high volume car manufacturing. And so we got a lot of innovations and organization from car manufacturers like the assembly line. Then you had Sloan at GM basically creating the way the modern corporation is run, then you have the Toyota Production System.", "Austin Vernon (00:22:11):", "But arguably now, creating software is actually the most complex thing we do. So there's all these kinds of squishy concepts that underlie things like the Toyota Production System that softwares had to learn and reimagine and adopt and you see that with Agile where, “oh, we can't have long release times. We need to be releasing every day, ” which means we're limiting inventory there.", "Austin Vernon (00:22:42):", "There's a whole thing especially that's showing up in software that existed in carbon manufacturing where you're talking about reducing communication. So Jeff Bezos kind of now famously said, \"I want to reduce communication,\" which is counterintuitive to a lot of people. This is age-old in car manufacturing where Toyota has these cards that go between workstations and they tell you what to do. So people normally think of them as limiting inventory, but it also tells the worker exactly what they're supposed to be doing at what pace, at what time. The assembly line is like that too. You just know what to do because you're standing there and there's a part here and it needs to go on there, and it comes by at the pace you're supposed to work at.", "Austin Vernon (00:23:29):", "It's so extreme that there's this famous paper, by List, Syverson and Levitt. They went to a car factory and studied how defects propagated in cars and stuff. Once a car factory gets up and running, it doesn't matter what workers you put in there, if workers are sick or you get new workers, the defect rate is the same. So all the knowledge is built into the manufacturing line.", "Austin Vernon (00:23:59):", "There’s these concepts around idiot-proofing and everything that are very similar to what you'll see. You had Uncle Bob on here. So Uncle Bob says only put one input into a function and stuff like that because you'll mix them up otherwise. The Japanese call it poka-yoke. You make it where you can't mess it up. And that's another way to reduce communication, and then software, of course you have APIs.", "Austin Vernon (00:24:28):", "So I'm really interested in this overall concept of reducing communication, and reducing how much cooperation and everything we need to run the economy.", "Dwarkesh Patel (00:24:41):", "Right. Right. Speaking of the Toyota Production System, one thing they do to reduce that defect rate is if there's a problem, all the workers in that chain are forced to go to the place where the defect problem is and fix it before doing anything else. The idea there is that this will give them context to understand what the problem was and how to make sure it doesn't happen again. It also prevents a build up of inventory in a way that keeps making these defects happen or just keeps accumulating inventory before the place that can fix the defects is able to take care of them.", "Austin Vernon (00:25:17):", "Right. Yeah, yeah. Exactly.", "Dwarkesh Patel (00:25:19):", "Yeah. But I think one interesting thing about software and complexity is that software is a place where complexity is the highest in our world right now but software gives you the choice to interface with the complexity you want to interface with. I guess that's just part of specialization in general, but you could say for example that a machine learning model is really complex, but ideally, you get to a place where that's the only kind of complexity you have to deal with. You're not having to deal with the complexity of “ How is this program compiled? How are the libraries that I'm using? How are they built?” You can fine tune and work on the complexity you need to work on.", "Dwarkesh Patel (00:26:05):", "It's similar to app development. Byrne Hobart has this blog post about Stripe as solid state . The basic idea is that Stripe hides all the complexity of the financial system: it charges a higher fee, but you can just treat it as an abstraction of a tithe you have to pay, and it'll just take care of that entire process so you can focus on your comparative advantage.", "Austin Vernon (00:26:29):", "It's really actually very similar in car manufacturing and the Toyota Production System if you really get into it. It's very much the same conceptual framework. There's this whole idea in Toyota Production System, everyone works at the same pace, which you kind of talked about. But also, your work content is the same. There's no room for not standardizing a way you're going to do things. So everyone gets together and they're like, “All right, we're going to do this certain part. We're going to put it together this certain way at this little micro station. And it's going to be the same way every time.” That's part of how they're reducing the defect rates. If your assembly process is longer than what your time allotment is to stay in touch with the rest of the process, then you just keep breaking it down into smaller pieces. So through this, each person only has to know a very small part of it.", "Austin Vernon (00:27:33):", "The overall engineering team has all sorts of strategies and all sorts of tools to help them break up all these processes into very small parts and make it all hold together. It's still very, very hard, but it's kind of a lot of the same ideas because you're taking away the complexity of making a $30,000 car or 30,000 part car where everyone's just focusing on their one little part and they don't care what someone else is doing.", "Dwarkesh Patel (00:28:06):", "Yeah. But the interesting thing is that it seems like you need one person who knows how everything fits together. Because from what I remember, one of the tenets of the Toyota Production System was you need to have a global view. So, in that book, was it the machine or the other one, the Toyota Production System book ? But anyways, they were talking about examples where people would try to optimize for local efficiencies. I think they especially pointed to Ford and GM for trying to do this where they would try to make machines run all the time. And locally, you could say that, “oh this machine or process is super efficient. It's always outputting stuff.” But it ignores how that added inventory or that process had a bad consequence for the whole system.", "Dwarkesh Patel (00:28:50):", "And so it's interesting if you look at a company like Tesla that’s able to do this really well. Tesla is run like a monarchy and this one guy has this total global view of how the entire process is supposed to run and where you have these inefficiencies.. You had some great examples of this in the blog post. I think one of the examples is this guy (the author) goes to this factory and he asks, \"Is this an efficient factory?\" And the guy's like, \"Yeah, this is totally efficient. There's nothing we can do, adopting the Toyota way, to make this more efficient.\"", "Dwarkesh Patel (00:29:22):", "And so then he's like, \"Okay, let me look.\" And he finds that they're treating steel in some way, and the main process does only take a couple of seconds, but some local manager decided that it would be more efficient to ship their parts out, to get the next stage of the process done somewhere else. So this is locally cheaper, but the result is that it takes weeks to get these parts shipped out and get them back. Which means that the actual time that the parts spend getting processed is 0.1% of the time, making the whole process super inefficient. So I don't know, it seems like the implication is you need a very monarchical structure, with one person who has a total view, in order to run such a system. Or am I getting that wrong?", "Austin Vernon (00:30:12):", "Not necessarily. I mean, you do have to make sure you're not optimizing locally, but I think it's the same. You have that same constraint in software, but I think a lot of times people are just running over it because processing has been getting so much cheaper. People are expensive, so if you could save development time, it just ends up the trade offs are different when you're talking about the tyranny of physical items and stuff like that, the constraints get a little more severe. But I think you have the same overall. You still have to fight local optimization, but the level you have to is probably different with physical goods.", "Austin Vernon (00:30:55):", "I was thinking about the smart grid situation from a software perspective, and there's this problem where, okay, I'm putting my solar farm here and it's impacting somewhere far away, and that's then creating these really high upgrade costs, that cost two or three times more than my solar farm. Well, the obvious thing would be, if you're doing software, is like you're going to break all these up into smaller sections, and then you wouldn't be impacting each other and all that, and you could work and focus on your own little thing.", "Austin Vernon (00:31:29):", "But the problem with that is if you're going to disconnect these areas of the grid, the equipment to do that is extremely expensive. It's not like I'm just going to hit a new tab and open a new file and start writing a new function. And not only that, but you still have to actually coordinate how this equipment is going to operate. So if you just let the grid flow as it does, everyone knows what's going to happen because they could just calculate the physics. If you start adding in all these checkpoints where humans are doing stuff, then you have to actually interface with the humans, and the amount of things that can happen really starts going up. So it's actually a really bad idea to try to cart all this stuff off, just because of the reality of the physical laws and the equipment you need and everything like that.", "Dwarkesh Patel (00:32:22):", "Okay. Interesting. And then I think you have a similar Coasean argument in your software post about why vertically integrating software is beneficial. Do you want to explain that thesis?", "Austin Vernon (00:32:34):", "Yeah. I think it actually gets to what we're talking about here, where it allows you to avoid the local optimization. Because a lot of times you're trying to build a software MVP, and you're tying together a few services… they don't do quite what you need, so if you try to scale that, it would just break. But if you're going to take a really complex process, like car manufacturing or retail distribution, or the home buying process or something, you really have to vertically integrate it to be able to create a decent end-to-end experience and avoid that local optimization.", "Austin Vernon (00:33:20):", "And it's just very hard otherwise, because you just can't coordinate effectively if you have 10 different vendors trying to do all the same thing. You end up in just constant vendor meetings, where you're trying to decide what the specs are or something instead of giving someone the authority, or giving a team the authority to just start building stuff. Then if you look at these companies, they have to implement these somewhat decentralized processes when they get too complex, but at least they have control over how they're interfacing with each other. Walmart, as the vendors, control their own stock. They don't tell the vendor, \" We need X parts. \" It's just like, it's on you to make sure your shelf is stocked.", "Dwarkesh Patel (00:34:07):", "Yeah. Yeah. So what was really interesting to me about this part of the post was, I don't know, I guess I had heard of this vision of we're software setting, where everybody will have a software as a service company, and they'll all be interfacing with each other in some sort of cycle where they're all just calling each other's APIs. And yeah, basically everybody and their mother would have a SAAS company. The implication here was, from your argument, that given the necessity of integrating all those complexity vertically in a coherent way, then the winners in software should end up being a few big companies, right? They compete with each other, but still...", "Austin Vernon (00:34:49):", "I think that's especially true when you're talking about combining bits and apps. Maybe less true for pure software. The physical world is just so much more complex, and so the constraints it creates are pretty extreme, compared to like... you could maybe get away with more of everyone and their mom having an API in a pure software world.", "Dwarkesh Patel (00:35:14):", "Right. Yeah. I guess, you might think that even in the physical world, given that people really need to focus on their comparative advantage, they would just try to outsource the software parts to these APIs. But is there any scenario where the learning curve for people who are not in the firm can be fast enough that they can keep up with the complexity? Because there's huge gains for specialization and competition that go away if this is the world we're forced to live in. And then I guess we have a lot of counter examples, or I guess we have a lot of examples of what you're talking about. Like Apple is the biggest market cap in the world, right? And famously they're super vertically integrated. And yeah, obviously their thing is combining hardware and software. But yeah, is there any world in which it can keep that kind of benefit, but have it be within multiple firms?", "Austin Vernon (00:36:10):", "This is a post I've got on my list I want to write. The blockchain application, which excites me personally the most, is reimagining enterprise software. Because the things you're talking about, like hard typing and APIs are just basically built into some of these protocols. So I think it just really has a lot of exciting implications for how much you can decentralize software development. But the thing is, you can still do that within the firm. So I think I mentioned this, if the government's going to place all these rules on the edge of the firm, it makes transactions with other firms expensive. So a few internal transactions can be cheaper, because they're avoiding the government reporting and taxes and all that kind of stuff. So I think you'd have to think about how these technologies can reduce transaction costs overall and decentralize that, but also what are the costs between firms?", "Dwarkesh Patel (00:37:22):", "Yeah, it's really interesting if the costs are logistic, or if they're based on the knowledge that is housed, as you were talking about, within a factory or something. Because if it is just logistical and stuff, like you had to report any outside transactions, then it does imply that those technology blockchain could help. But if it is just that you need to be in the same office, and if you're not, then you're going to have a hard time keeping up with what the new requirements for the API are, then maybe it's that, yeah, maybe the inevitability is that you'll have these big firms that are able to vertically integrate.", "Austin Vernon (00:37:59):", "Yeah, for these big firms to survive, they have to be somewhat decentralized within them. So I think you have... you're going to the same place as just how are we viewing it, what's our perception? So even if it's a giant corporation, it's going to have very independent business units as opposed to something like a 1950s corporation.", "Dwarkesh Patel (00:38:29):", "Yeah. Byrne Hobart, by the way, has this really interesting post that you might enjoy reading while you're writing that post. It's type safe communications, and it's about that Bezos thing, about his strict style for how to communicate and how little to communicate. There's many examples in Amazon protocols where you have to... the only way you can put in this report, is in this place you had to give a number. You can't just say, \"This is very likely,\" you had to say like, \"We project X percent increase,\" or whatever. So it has to be a percent. And there's many other cases where they're strict about what type definition you can have in written reports or something. It has kind of the same consequence that type strict languages have, which is that you can keep track of what the value is through the entire chain of the flow of control.", "Austin Vernon (00:39:22):", "You've got to keep work content standardized.", "Dwarkesh Patel (00:39:26):", "So we've been hinting at the Coasean analysis to this. I think we just talked about it indirectly, but for the people who might not know, Coase has this paper called The Theory of Firms , and he's trying to explain why we have firms at all. Why not just have everybody compete in the open market for employment, for anything? Why do we have jobs? Why not just have... you can just hire a secretary by the day or something.", "Dwarkesh Patel (00:39:51):", "And the conclusion he comes to is that by having a firm you're reducing the transaction cost. So people will have the same knowledge about what needs to get done, obviously you're reducing the transaction cost of contracting, finding labor, blah, blah, blah. And so the conclusion it comes to is the more the transaction costs are reduced within people in a firm, as compared to the transaction cost between different firms, the bigger firms will get. So I guess that's why the implication of your argument was that there should be bigger tech firms, right?", "Austin Vernon (00:40:27):", "Yes, yes, definitely. Because they can basically decrease the transaction costs faster within, and then even at the limit, if you have large transaction costs outside the firm, between other firms that are artificially imposed, then it will make firms bigger.", "Dwarkesh Patel (00:40:45):", "What does the world look like in that scenario? So would it just be these Japanese companies, these huge conglomerates who are just... you rise through the ranks, from the age of 20 until you die? Is that what software will turn into?", "Austin Vernon (00:40:59):", "It could be. I mean, I think it will be lots of very large companies, unless there's some kind of change in inner firm transaction costs. And again, that could possibly come from blockchain like technology, but you probably also need better regulation to make that cheaper, and then you would have smaller firms. But again, in the end, it doesn't really matter. You'd be working in your little unit of the big bank of corporate, or whatever. So I don't know what that would look like on a personal level.", "Car Manufacturing", "Dwarkesh Patel (00:41:40):", "Yeah. Okay. So speaking of these Japanese companies, let's talk about car manufacturing and everything involved there. Yeah, so we kind of hinted at a few elements of the Toyota way and production earlier, but do you want to give a brief overview of what that is, so we can compare it to potentially other systems?", "Austin Vernon (00:42:02):", "I think all these kinds of lean Toyota process systems, they do have a lot of similarities, where mostly you want to even-out your production, so you're producing very consistently, and you want to break it into small steps and you want to limit the amount of inventory you have in your system. When you do this, it makes it easy to see how the process is running and limit defects. And the ultimate is you're really trying to reduce defects, because they're very expensive. It's a little bit hard to summarize. I think that's my best shot at it there, quickly off the top of my head.", "Dwarkesh Patel (00:42:49):", "Yeah. The interesting thing about the Toyota system, so at least when the machine was released, is they talk about... that book was released I think the nineties, and they went to the history of Toyota, and one of the interesting things they talked about was there was a brief time where the company ran... I think, was this after World War II? But anyways, the company ran into some troubles. They needed to layoff people to not go bankrupt. They had much more debt on books than they had assets. So yeah, they wanted to layoff people, but obviously the people were not happy about this, so there were violent protests about this. And in fact I think the US written constitution gave strong protections to labor that they hadn't had before, which gave labor an even stronger hand here.", "Dwarkesh Patel (00:43:42):", "So anyway, Toyota came to this agreement with the unions that they'd be allowed to do this one time layoff to get the company on the right track, but afterwards they could never lay somebody off. Which would mean that a person who works at Toyota works there from the time they graduate college or high school till they die. Right? I don't know, that's super intense in a culture. I mean, in software, where you have the average tenure in a company's one year, the difference is so much.", "Dwarkesh Patel (00:44:13):", "And there's so many potential benefits here, I guess a lot of drawbacks too. But one is, obviously if you're talking in a time scale of 50 years, rather than one year, the incentives are more aligned between the company and the person. Because anything you could do in one year is not going to have a huge impact on your stock options in that amount of time. But if this company's your retirement plan, then you have a much stronger incentive to make sure that things at this company run well, which means you're probably optimizing for the company's long term cash flow yourself. And also, there's obviously benefits to having that knowledge built up in the firm from people who have been there for a long time. But yeah, that was an interesting difference. One of the interesting differences, at least.", "Austin Vernon (00:45:00):", "I mean, I think there's diminishing returns to how long your tenure's going to be. Maybe one year's too short, but there's a certain extent to where, if you grow faster than your role at the company, then it's time to switch. It's going to depend on the person, but maybe five years is a good number. And so if you're not getting promoted within the firm, then your human capital's being wasted, because you could go somewhere else and have more responsibility and perform better for them. Another interesting thing about that story, is almost all lean turnarounds, where they're like, we're going to implement something like Toyota production system, they come with no layoff promises. Because if you're going to increase productivity, that's when everyone's like, \"Oh gosh, I'm going to get laid off.\" So instead you have to increase output and take more market share, is what you do.", "Dwarkesh Patel (00:46:00):", "It's kind of like burning your bridges, right? So this is the only way.", "Austin Vernon (00:46:05):", "The process really requires complete buy-in, because a lot of your ideas for how you're going to standardize work content come from your line workers, because that's what they're doing every day. So if you don't have their buy-in, then it's going to fail. So that's why it's really necessary to have those kinds of clauses.", "Dwarkesh Patel (00:46:22):", "Yeah. Yeah, that makes sense. I think it was in your post where you said, if somebody makes their process more efficient, and therefore they're getting more work allotted to them, then obviously they're going to stop doing that. Right? Which means that, I don't know, do you ought to give more downtime to your best workers or something or the people who are most creative in your company?", "Austin Vernon (00:46:48):", "I was just going to say, if you're a worker at a plant, then a lot of times for that level of employee, actually small rewards work pretty well. A lot of people on drilling rigs used to give the guys that met certain targets $100 Walmart gift cards. So sometimes small, it's a reward, new ideas, stuff like that works.", "Austin Vernon (00:47:15):", "But because the whole system has to grow together, if you just improve one part of the process, it may not help you. You have to be improving all the right processes so normally it's much more collaborative. There's some engineer that's looking at it and like, \" All right, this is where we're struggling, \" or \" We have our defects here .\" And then you go get together with that supervisor and the workers in that area, then you all figure out what improvements could be together. Because usually the people already know. This is like, you see a problem at the top, and you're just now realizing it. Then you go talk to the people doing the work, and they're like, \"Oh yeah, I tried to tell you about that two weeks ago, man. \" And then you figure out a better process from there.", "Dwarkesh Patel (00:47:58):", "Based on your recommendation, and Steven Malina's recommendation, I recently read The Goal . And after reading the book, I'm much more understanding of the value that consultants bring to companies, potentially. Because before you could think, “What does a 21 year old, who just graduated college, know about manufacturing? What are they going to tell this plant that they didn't already know? How could they possibly be adding value?” And afterwards, it occurred to me that there's so many abstract concepts that are necessary to understand in order to be able to increase your throughput. So now I guess I can see how somebody who's generically smart but doesn't have that much industry knowledge might be able to contribute to a plan and value consultants could be bringing.", "Austin Vernon (00:48:43):", "I think this applies to consultants or young engineers. A lot of times you put young engineers just right in the thick of it, working in production or process right on the line, where you're talking to the workers the most. And there's several advantages to that. One, the engineer learns faster, because they're actually seeing the real process, and the other is there's easy opportunities for them to still have a positive impact on the business, because there's $100 bills laying on the ground just from going up and talking to your workers and learning about stuff and figuring out problems they might be having and finding out things like that that could help you lower cost. I think there's a lot of consultants that... I don't know how the industry goes, but I would guess there's... I know Accenture has 600,000 employees. I don't know if that many, but it's just a large number, and a lot are doing more basic tasks and there are some people that are doing the more high level stuff, but it's probably a lot less.", "Dwarkesh Patel (00:49:51):", "Yeah. Yeah. There was a quote from one of those books that said, \"At Toyota we don't consider you an engineer unless you need to wash your hands before you can have lunch. \" Yeah. Okay. So in your blog post about car manufacturing, you talk about Tesla. But what was really interesting is that in a footnote, I think you mentioned that you bought Tesla stocks in 2014, which also might be interesting to talk about again when we go to the market and alpha part. But anyways. Okay. And then you talk about Tesla using something called metal manufacturing. So first of all, how did you know in 2014 that Tesla was headed here? And what is metal manufacturing and how does it differ from the Toyota production system?", "Austin Vernon (00:50:42):", "Yeah. So yeah, I just was goofing around and made that up. Someone actually emailed me and they were like, \"Hey, what is this metal manufacturing? I want to learn more about this.\" It's like, \"Well, sorry, I just kind of made that up, because I thought it sounded funny.\" But yeah, I think it's really the idea that there's this guy, Dimming, and he found a lot of the same ideas that Toyota ended up implementing, and Toyota respected his ideas a lot. America never really got fully on board with this in manufacturing. Of course it's software people that are coming and implementing this and manufacturing now which is like the real American way of doing things.", "Austin Vernon (00:51:32):", "Because when you look at these manufacturing processes, the best place to save money and optimize is before you ever build the process or the plant. It's very early on. So I think if there's a criticism of Toyota, it's that they're optimizing too late and they're not creative enough in their production technology and stuff. They're very conservative, and that's why they have hydrogen cars and not battery cars, even though they came out with the Prius, which was the first large sales hybrid.", "Austin Vernon (00:52:12):", "So yeah, I think what Tesla's doing with really just making Dimming's ideas our own and really just Americanizing it with like, \"Oh, well, we want to cast this, because that would be easier.\" Well, we can't, because we don't have an alloy. \"We'll invent the alloy.\" I love it. It's great. Mostly, I love Tesla because they do such... I agree with their engineering principles. So I didn't know that the company would come to be so valuable. It's just, I was just always reading their stock reports and stuff so I was like, \"Well, at least I need to buy some stock so that I have a justification for spending all this time reading their 10 Ks.\"", "Dwarkesh Patel (00:52:53):", "I want to get a little bit more in detail about the exact difference here. So lean production, I guess, is they're able to produce their cars without defects and with matching demand or whatever. But what is it about their system that prevents them from making the kinds of innovations that Tesla is able to make?", "Austin Vernon (00:53:16):", "It's just too incremental. It's so hard to get these processes working. So the faster you change things, it becomes very, very difficult to change the whole system. So one of the advantages Tesla has is, well, if you're making electric cars, you have just a lot less parts. So that makes it easier. And once you start doing the really hard work of basically digitizing stuff, like they don't have speed limit dials, you start just removing parts from the thing and you can actually then start increasing your rate of change even faster.", "Austin Vernon (00:53:55):", "It makes it harder to get behind if you have these old dinosaur processes. But I think there's a YouTube channel called The Limiting Factor, and he actually went into the detail of numbers on what it costs for Tesla to do their giga-casting, which saves tons of parts and deletes zillions of thousands of robots from their process. If you already have an existing stamping line and all that, where you're just changing the dyes based on your model, then it doesn't make sense to switch to the casting. But if you're building new factories, like Tesla is, well, then it makes sense to do the casting and you can build new factories very cheaply and comparatively and much easier. So there's a little bit of... they just have lots of technical data, I guess you could say, in a software sense.", "Dwarkesh Patel (00:54:47):", "Yeah. That's super interesting. The analogy is actually quite... it's like, Microsoft has probably tens of thousands of software engineers who are just basically servicing its technical debt and making sure that the old systems run properly, whereas a new company like Tesla doesn't have to deal with that. The thing that's super interesting about Tesla is like, Tesla's market cap is way over a trillion, right? And then Toyota's is 300 billion. And Tesla is such a new company. The fact that you have this Toyota, which is legendary for its production system, and this company that's less than two decades old is worth many times more, it's kind of funny.", "Austin Vernon (00:55:32):", "Yeah. I would say that, in that measure, I don't like market cap. You need to use enterprise value. These old car companies have so much debt, that if you look at enterprise value, it's not so jarring. Literally, I don't know, I can't remember what GM's worth, like 40 billion or something, and then they have $120 billion in debt. So their enterprise value is five times more than their market cap.", "Dwarkesh Patel (00:56:02):", "What is enterprise value?", "Austin Vernon (00:56:03):", "Enterprise value is basically what is the value of the actual company before you have any claims on it. It's the market cap plus your debt. But basically, if you're the equity holder and the company gets sold, you have to pay the debt first. So you only get the value of what's left over after the debt. So that's why market cap is... when Tesla has very little debt and a lot of market cap, and then these other guys have a lot of debt with less market cap, it skews the comparison.", "Dwarkesh Patel (00:56:34):", "Yeah, and one of the interesting things, it's similar to your post on software, is that it seems like one of the interesting themes across your work is automating processes often leads to decreased eventual throughput, because you're probably adding capacity in a place that you're deciding excess capacity, and you're also making the money part of your operation less efficient by have it interface with this automated part. It sounds like there's a similar story there with car manufacturing, right?", "Austin Vernon (00:57:08):", "Yeah. I think if we tie it back into what we were talking about earlier, automation promotes local optimization and premature optimization. So a lot of times it's better to figure out, instead of automating a process to make a really hard to make part, you should just figure out how to make that part easy to make. Then after you do that, then it may not even make sense to automate it anymore. Or get rid of it all together, then you just delete all those robots.", "Austin’s Carbon Capture Project", "Dwarkesh Patel (00:57:37):", "Yeah. Yeah, that's interesting. Okay. So let's talk about the project that you're working on right now, the CO2 electrolysis. Do you want to explain what this is, and what your current approach is? What is going on here?", "Austin Vernon (00:57:55):", "Yeah, so I think just overall, electrofuels right now are super underrated, because you're about to get hopefully some very cheap electricity from solar, or it could be, maybe, some land. If we get really lucky, possibly some nuclear, geothermal. It’ll just make sense to create liquid fuels, or natural gas, or something just from electricity and air, essentially.", "Austin Vernon (00:58:25):", "There's a whole spectrum of ways to do this, so O2 electrolysis is one of those. Basically, you take water, electricity, and CO2, and a catalyst. And then, you make more complex molecules, like carbon monoxide, or formic acid, or ethylene, or ethanol, or methane or methine. Those are all options. But it's important to point out that, right now, I think if you added up all the CO2 electrolyzers in the world, you'd be measuring their output and kilograms per day. We make millions of tons per day off of the products I just mentioned. So there's a massive scale up if it's going to have a wider impact.", "Austin Vernon (00:59:15):", "So there's some debate. I think the debate for the whole electrofuels sector is: How much are you going to do in the electrolyzer? One company whose approach I really like is Terraform Industries . They want to make methane, which is the main natural gas. But they're just making hydrogen in their electrolyzer, and then they capture the CO2 and then put it into a methanation reaction. So everything they're doing is already world scale, basically.", "Austin Vernon (00:59:47):", "We've had hydrogen electrolyzers power fertilizer plants, providing them with the Hydrogen that they need. Methanation happens in all ammonia plants and several other examples. It's well known, very old. Methanation is hydrogen CO2 combined to make water and methane. So their approach is more conservative, but if you do more in the electrolyzer, like I'm going to make the methane actually in the electrolyzer instead of adding this other process, you could potentially have a much simpler process that has less CapEx and scales downward better. Traditional chemical engineering heavily favors scaling. With the more Terraform processes, they're playing as absolutely ginormous factories. These can take a long time to build.", "Austin Vernon (01:00:42):", "So one of the things they're doing is: they're having to fight the complexity that creeps into chemical engineering every step of the way. Because if they don't, they'll end up with a plant that takes 10 years to build, and that's not their goal. It takes 10 years to build a new refinery, because they're so complex. So yeah, that's where I am. I'm more on the speculative edge, and it's not clear yet which products will be favorable for which approaches.", "Dwarkesh Patel (01:01:15):", "Okay, yeah. And you're building this out of your garage, correct?", "Austin Vernon (01:01:19):", "Yeah. So that's where electrolyzers... Everything with electric chemistry is a flat plate instead of a vessel, so it scales down. So I can have a pretty good idea of what my 100 square centimeter electrolyzer is going to do, if I make it quite a bit bigger. I have to worry about how my flow might interact in the larger one and make sure the mixing's good, but it's pretty straightforward because you're just making your flat plate a larger area. Whereas the scale, it is different from scaling a traditional chemical process.", "Dwarkesh Patel (01:01:56):", "I'm curious how cheap energy has to be before this is efficient. If you're turning it into methane or something like that, presumably for fuel, is the entire process energy positive? Or how cheap would energy, electricity you need to get before that's the case?", "Austin Vernon (01:02:18):", "The different products and different methods have different crossovers. So Terraform Industries, they're shooting for $10 a megawatt hour for electricity. But again, their process is simpler, a little less efficient than a lot of the other products. They also have better premiums, just worth more per ton than methane. So your crossover happens somewhere in between $10 and $20 a megawatt hour, which is... I mean, that's pretty... Right now, solar, it's maybe like $25. Maybe it's a little higher because payment prices have gone up in the last year, but I think the expectation is they'll come back down. And so, getting down to $15 where you start having crossovers for some of these products like ethanol or ethylene or methanol, it's not science fiction.", "Dwarkesh Patel (01:03:08):", "I think in Texas where I live, that's where it's at right? The cost of energy is 20 or something dollars per megawatt hour.", "Austin Vernon (01:03:16):", "Well, not this summer! But yeah, a lot of times in Texas, the wholesale prices are around $25 to $30.", "Dwarkesh Patel (01:03:26):", "Gotcha. Okay. Yeah. So a lot of the actual details you said about how this works went over my head. So what is a flat plate? I guess before you answer that question, can you just generally describe the approach? What is it? What are you doing to convert CO2 into these other compounds?", "Austin Vernon (01:03:45):", "Well, yeah, it literally just looks like an electrolyzer. You have two sides and anode and a cathode and they're just smushed together like this because of the electrical resistance. If you put them far apart, it makes it... uses up a lot of energy. So you smush them together as close as you can. And then, you're basically just trading electrons back and forth. On one side, you're turning CO2 into a more complex molecule, and on the other side, you're taking apart water. And so, when you take apart the water, it balances out the equation, balances out your electrons and everything like that. I probably need to work on that elevator pitch there, huh?", "Dwarkesh Patel (01:04:31):", "I guess what the basic idea is, you need to put power in to convert CO2 into these other compounds.", "Austin Vernon (01:04:38):", "The inputs are electricity, water, and CO2, and the output is usually oxygen and whatever chemical you're trying to create is, along with some side reactions.", "Dwarkesh Patel (01:04:49):", "And then, these chemicals you mentioned, I think ethanol, methane, formic acid, are these all just fuels or what are the other uses for them?", "Austin Vernon (01:04:58):", "A lot of people are taking a hybrid approach with carbon monoxide. So this would be like Twelve Co… They've raised a lot of money to do this and 100 employees or something. You can take that carbon monoxide and make hydrogen, and then you have to send gas to make liquid fuels. So they want to make all sorts of chemicals, but one of the main volume ones would be like jet fuel.", "Austin Vernon (01:05:22):", "Let's see Formic acid is, it's the little fry of all these. It is an additive in a lot of things like preserving hay for animals and stuff like that. Then, ethanol there's people that want to... There's this company that makes ethylene, which goes into plastics that makes polyethylene, which is the most produced plastic. Or you can burn it in your car, although I think ethanol is a terrible vehicle fuel. But then you can also just make ethylene straight in the electrolyzer. So there's many paths. So which path wins is an interesting race to see.", "Dwarkesh Patel (01:06:13):", "The ability to produce jet fuel is really interesting, because in your energy superabundance paper, you talk about... You would think that even if we can electrify everything in solar and when it becomes super cheap, that's not going to have an impact on the prices to go to space for example. But I don't know. If a process like this is possible, then it's some way to in financial terms, add liquidity. And then turn, basically, this cheap solar and wind into jet fuel through this indirect process. So the price to send stuff to space or cheap plane flights or whatever––all of that goes down as well.", "Austin Vernon (01:06:52):", "It basically sets a price ceiling on the price of oil. Whatever you can produce this for is the ceiling now, which is maybe the way I think about it.", "Dwarkesh Patel (01:07:06):", "Yeah. So do you want to talk a little bit about how your background led into this project? This is your full-time thing, right? I don't know if I read about that, but where did you get this idea and how long have you been pursuing it? And what's the progress and so on.", "Austin Vernon (01:07:20):", "I've always loved chemical engineering, and I love working at the big processing plant because it's like being a kid in a candy store. If I had extra time, I'd just walk around and look at the plant, like it’s so cool. But the plant where I worked at, their up time was 99.7%. So if you wanted to change anything or do anything new, it terrified everyone. That's how they earned their bonuses: run the plant a 100% uptime all the time. So that just wasn't a good fit for me. And also, so I always wanted my own chemical plant, but it's billions of dollars to build plants so that was a pretty big step. So I think this new technology of... there's a window where you might be able to build smaller plants until it optimizes to be hard to enter again.", "Dwarkesh Patel (01:08:21):", "And then, why will it become hard to enter again? What will happen?", "Austin Vernon (01:08:27):", "If someone figures out how to build a really cheap electrolyzer, and they just keep it as intellectual property, then it would be hard to rediscover that and compete with them.", "Dwarkesh Patel (01:08:38):", "And so, how long have you been working on this?", "Austin Vernon (01:08:42):", "Oh, not quite a year. But yeah, I actually got this idea to work on it from writing my blog. So when I wrote the heating fuel post, I didn't really know much about... There's another company in the space, Prometheus Fuels and I'm like, \"Oh, this is an interesting idea.\" And then, I got talking to a guy named Brian Heligman, and he's like, \"You should do this, but not what Prometheus is doing.\" And so, then I started looking at it and I liked it, so I've been working on it since.", "Dwarkesh Patel (01:09:08):", "Yeah. It's interesting because if energy does become as cheap as you suspect it might. If this process works, then yeah, this is a trillion dollar company probably, right? If you're going to get the patents and everything.", "Austin Vernon (01:09:22):", "I mean, maybe. With chemical plants, there's a certain limitation where your physical limitation is. There's only so many places that are good places for chemical plants. You start getting hit by transportation and all that. So, you can't just produce all the chemical for the entire world in Texas and transport it all around. It wouldn't work. So you're talking about a full, globe-spanning thing. At that point, if you're building factories all over the world, someone's going to figure out what your intellectual property is and all that. You would have to keep innovating to stay ahead of the competitors. I think that would limit your... Ultimately, it's a commodity. You're making commodity, so you don't have the same kind of defensibility that other sectors do.", "Dwarkesh Patel (01:10:18):", "I see. Yeah, yeah, yeah. Okay. There's not network effects I guess?", "Austin Vernon (01:10:22):", "Yeah. This is not quite consistent with what I just said about harder to enter. But I think what happens is the scale starts increasing as you go on. Even though this is easier to scale down, there's certain elements that are very much hard to scale and then the organization as well. Basically, you'll end up with early on a few competitors that continue to grow against each other and limit the margins. It'd be hard to be the fifth 30 years down the line.", "Dwarkesh Patel (01:11:05):", "What is the state of this project right now? Are you guys planning on starting a company, and what are the milestones you guys are shooting for?", "Austin Vernon (01:11:14):", "Right now, it is just me, but I have a family of engineers. We're all engineers, so it's loosely supported by them right now, by other people in my family as well, they're participating some. But yeah, basically, I just have to get... I've already done a lot of the theoretical design work at just a very cursory level to make sure it makes sense, and the cost will be reasonable and stuff like that. So now, it's working on the electrolizer to, basically, meet the targets you need for reliability and product concentration and energy costs. Also then just, is it manufacturable? Because right now, a lot of electrolyzers they use in the labs, they're literally smaller than a postage stamp, and they're very difficult to make.", "Dwarkesh Patel (01:12:10):", "Okay. I see. So you started working on this before or after you had quit your job?", "Austin Vernon (01:12:16):", "Oh, yeah. After. I quit my job five years ago or something. I was doing software stuff in between.", "Dwarkesh Patel (01:12:21):", "Oh, yeah? What did you work on?", "Austin Vernon (01:12:24):", "I worked on several products. I have one that’s an oil and gas data service that's somewhat successful. I’ve kept paying customers, but it's still relatively small.", "Dwarkesh Patel (01:12:37):", "Okay. I see. Yeah. So it seems like your blog is pretty recent, right? You started that about a year ago. What encouraged you to do that?", "Austin Vernon (01:12:48):", "Well, let's see. I was curious about cryptography in general, but specifically for blockchains. I wanted to be able to read the Bitcoin white paper and understand some of this IPFS. So I figured the best way to do it was... people talked about like, \"Oh yeah, you should write, blah, blah, blah.\" So I did, \" Well, I'll create an IPFS blog. \" I did that and learned a lot. It was not the most reliable blog when I was running it on my own Droplet and everything. So thankfully, I migrated to a service that has much more up time than my own server. So then, I wrote several posts to basically learn about it. I wrote posts about hash functions and private key cryptography. So then I could understand the white papers and what they're doing with the math and the cryptography. Eventually, I had this blog–– so my first non-crypto topic was on how to build a cheaper house or why it's difficult to reduce home construction costs. That made it on Hacker News and all that. It's like, \"Oh, maybe actually people want to read this stuff,\" so I've just been writing since then in my spare time.", "Dwarkesh Patel (01:14:18):", "I actually interned for Protocol Labs, which is a place that built IPFS.", "Austin Vernon (01:14:25):", "Oh, yeah?", "Dwarkesh Patel (01:14:26):", "Yeah. So I got a chance to learn a lot about it and... about how file coin exactly works. That part threw me into a world for a while. But yeah, it's really interesting. I actually had a blog on IPFS. I mean, it was just a toy thing, not the one that I actually ended up writing on. The thing is, obviously at the moment, being it's like... nobody else is going to seat it for you, so you got to use a centralized service anyways, like Piñata, but it is a fun exercise.", "Austin Vernon (01:14:57):", "I was just running it off of Droplet and on DigitalOcean. If you use the direct content hash, it works pretty well, even if you're linking through your ENS name. But the problem is, of course, when I was first doing this, the fees on Ethereum were so high that I didn't want to change that link all the time. So I tried to use the pinning feature with IPNS because CloudFlare does the e-thought link. And then, they look up whatever your IPNS name is. And then, they tried to go find it.", "Austin Vernon (01:15:35):", "So the breaking point for me was how CloudFlare couldn't always find my server using IPNS. Now, the service I'm using is called Fleek. They basically go directly to the content hash, but on DNS it's cheap to change. You can change it in one minute. If Ethereum fees got lower, I might switch back to that, but I don't want to... Eventually, and I think it will be. What if it's like 1 cent transactions? Then it would be no big deal to just change the content hash every time you update your website.", "Dwarkesh Patel (01:16:17):", "What is the reason for having it on Ethereum?", "Austin Vernon (01:16:21):", "Just for fun.", "Dwarkesh Patel (01:16:23):", "It is inconvenient, I guess, if your content hash is changing every time you update the website, so you got to keep re-updating the actual... where people can find the site or use some other service to take care of it.", "Austin Vernon (01:16:34):", "I mean, yeah, if transactions are cheap, then you just have... You could automate it all, and it just cost you a little bit of money each time, and it'd be fine. But it was like $50, and I'm not going to pay $50 to post a blog post.", "Energy Superabundance", "Dwarkesh Patel (01:16:47):", "Yeah. Yeah. And then, you find a typo. It's like, \"Oh, gosh, I can't fix that.\" So you have a paper that you recently released with Eli Dourado on energy superabundance, and you have lots and lots of interesting speculation in there for what might be possible if energy gets a lot cheaper. I think we should just jump into it. On the big picture, as I'm sure you're aware, per capita energy used since the 1970s has not gone up. Before that, there's this thing called the Henry Adams Curve, where per capita energy use would increase 2% a year.", "Dwarkesh Patel (01:17:22):", "After 1970, that was no longer the case. Ironically enough, right after, the Department of Energy was created. But nonetheless, we've still had economic growth since the 1970s. I mean, it's been slower, but even though per capita energy hasn't increased per capita, GDP has increased. I think in the paper's abstract or the introduction, you talk about why increasing energy use is necessary for increasing economic growth. But doesn't that pattern suggest that you can still have decent economic growth without having to use energy, or have we just not come across the constraints yet?", "Austin Vernon (01:17:57):", "Hey, I mean, you just have diminishing returns. There's physical limits to how efficient things could be, and as you get closer to that efficiency limit, it's harder and harder and takes more and more effort. There's some diminishing returns there, where if you can just... A perfect example of what we were just talking about is oil's quite expensive and natural gas is expensive too. Oil’s easy to transport, you can produce it anywhere in the world and get anywhere else pretty cheaply. Natural gas is extremely expensive transport, but it's very useful fuel and for also making fertilizer. Not everyone has natural gas or the economic capability to extract natural gas using traditional processes. So if you have independent energy because you can just build these natural gas factories, where you're just using sunshine and water and air, then all of a sudden everyone has access to natural gas even if you weren't blessed with easily obtainable, natural gas reserves.", "Austin Vernon (01:19:00):", "I think that there's this whole story about the tyranny of geography here when it comes to energy. Because there are some countries that have extreme electricities per capita, like Iceland and Norway, where they have crazy amounts of hydro-power and people build aluminum plants there and stuff like that. Then you have places in Africa, where they have no coal, very little gas. They're just energy starved. Their transportation system sucks. You can't transport coal in.", "Austin Vernon (01:19:39):", "The hydropower is... There's only so much of it. It may not be close to where their cities are. So if you start adding solar to the mix for them, and some of these other technologies, it could really be an incredible increase in energy availability for them. I think we talked about that in the paper. We're looking at doubling rich-world use of energy, but it would be more like 10x more if you live in Africa.", "Dwarkesh Patel (01:20:08):", "Yeah. Yeah. So I wonder, if that's the case, then as energy becomes that abundant, will it just be other resources that become the bottleneck in terms of what our civilization needs... ? For example the actual resources that are necessary to build the factories and the raw materials? To what extent can even that be...", "Austin Vernon (01:20:34):", "I would argue the ultimate limit is like.. it is really human capital. What more abundant energy does is it allows you to redeploy human capital away from trying to figure out how to use scarce energy sources. Here's an example I love about trucking. I love trucks, not as big a fan of freight trains, but freight trains are extremely efficient. They get like 10 times more efficient than a truck or something.", "Austin Vernon (01:21:15):", "They use just very little fuel, but the flip sde is that the train doesn't come by all the time and they may not hold to the schedule. You have to aggregate your product with the other stuff or your raw materials, and it adds a lot of cost to your production. Toyota production system runs on trucks, not trains. The reason is that trucks are just extremely flexible. They come when you need it. They go when you need it. And even then, people still complain about truck drivers not showing up when you want them.", "Austin Vernon (01:21:51):", "When you have cheaper energy, this electrification and automation of trucking, you are going to shift a huge amount of goods from trains to trucks. And it's going to just have huge knock on effects all across the economy. It's more specialization. There's a lot of products that you're just limited to your suppliers because transportation's expensive. It reduces working capital, because a lot of times it takes longer on trains, similar stuff like smaller ships, more air freight. One thing that shocked me as Eli was telling me about how the elasticity of demand for air freight is just insane. When you even decrease the cost a little bit, demand goes to the roof.", "Austin Vernon (01:22:37):", "So I'm pretty sure that there'll be some kind of... You always think, \" Oh, we can't do this with batteries,\" and then someone comes up with a more clever idea. So even if you have a 500 mile rein limit for your freight plane. The freight doesn't care if you have to stop every 500 miles to refuel or recharge. You can go over land on almost all these routes. You could go up through Japan and the Aleutian Islands, or you could go overland from China to Europe, charging just wherever's convenient.", "Austin Vernon (01:23:15):", "If that electric plane has half the operating cost of the jet plane, the amount of freight you're moving on airplanes will go way up, and it'll go down on ships. And then, everyone will be better off. Because right now, if you're a shipping company, you have real working capital problems, because your stuff sits on a boat for a month, and you've got to finance that and do all this stuff. And then, what if things change in the meantime? Like, \"Oh, I don't really want that product anymore, \" so the air freight is just an absolute economic, just booster. So if you could make that cheaper, it's really exciting because it uses way more energy.", "Dwarkesh Patel (01:24:01):", "An analogy that had just occurred to me is, you could imagine that with computational power, if Moore's Law had stopped in 2005, we would still have a lot of interesting applications using compute and the effects of the computer would still have permeated society. But obviously, a lot of things that are possible today with computers, they just wouldn't have been tried or been possible in that kind of world.", "Austin Vernon (01:24:27):", "Okay. Yeah. I mean, all your engineers would be working on optimization instead of building new products.", "Dwarkesh Patel (01:24:34):", "Yeah. I think in J. Storr Hall's new book on... \"Where's My Flying Car?\" One of the points he makes is that GDP growth has been probably overstated because a lot of what constituted the growth has just been increasing the efficiency of existing machines to make them use less energy. Which still doesn't result in more total resources or goods or services being produced. But yeah, instead of making the laundry machine more efficient, you can just create a new kind of machine that may need to use more energy. So for this vision to come to pass, do you need energy to... Is it just enough that energy becomes super cheap or do you need advances in the ability to store that energy as well? So for example, lithium batteries are the bottleneck, does it matter if you can get energy super cheap, if you can't put them in appliances or cars or planes or whatever?", "Austin Vernon (01:25:33):", "I think the important thing to think about here is that our current energy is so expensive, especially electricity. With our energy resources, which are basically thermal, it's quite difficult to make electricity comparatively. And so, what we use electricity for is stuff we really want to use electricity for. It's hard to imagine that... We're not going to turn our air conditioner off. We're going to run it. And so, we're willing to pay a lot of money for that electricity to run our air conditioner. Whereas, if you look at really closely at a lot of the use cases that use tons of extra energy, they're much more flexible in how they use the energy. There's not a whole lot of storage evolved. If you're looking at growing crops or making methane for rocket fuel or making chemicals, you can design these processes to run when the energy's available. And so, the batteries are really going to be for keeping air conditioner on.. where you're willing to pay a lot of money. So I don't really see the batteries and storage as a limit.", "Storage for Cheap Energy", "Dwarkesh Patel (01:26:47):", "Okay. If you had something like air freight, right? If that's the thing we're concerned about. Wouldn't you need some way to store that electricity for air freight, or maybe you can just convert it to jet fuel? Is that what you're saying?", "Austin Vernon (01:27:03):", "Yeah. I was thinking of more grid storage, but yeah, transportation's going to dominate battery demand. Grid storage is tiny in comparison, but I think you're basically getting to the point where we're making batteries out of dirt because that's how you scale it. So, if you're making batteries out of carbon and iron and phosphate, it becomes about how many battery factories you want to build. There's plenty of lithium, it's just you have to build the lithium mines. I don't really see any hard limits there eventually. Once you build all the factories, then yeah, you're pretty much ready to go.", "Dwarkesh Patel (01:27:44):", "Is the point you're making with the alternative batteries that even if they're worse than lithium batteries we'll have just so much energy that it doesn't matter? As in even if we lose a lot of it, that's fine, we'll just use whatever we can take. Or are you saying that we'll produce batteries with other chemistries that are as good as lithium batteries or better?", "Austin Vernon (01:28:06):", "Right now the shortage is really nickel. So in the very short term, lithium's kind of starting to be a shortage, but there's plenty of lithium. It won't be a problem. With lithium iron phosphate or whatever, there's a huge amount of substitution right now because it's avoiding nickel. It's not quite as good as some of the nickel chemistries, but for a lot of applications, it just doesn't matter like for a lot of cars and everything like that. And you're going to have the aircraft and stuff paying the premium for the high energy density batteries. Eventually  technologies could just use less and less materials because they're better batteries, like some of these concepts around solid state. I'm not sure if those will come to fruition and if they'll be really that much better when they do come, but I think there's lots of opportunities for substitution down the line.", "Dwarkesh Patel (01:29:03):", "What is solid state, by the way?", "Austin Vernon (01:29:04):", "Right now, all our batteries charge and discharge through the lithium ion going back and forth between the cathode and the anode. It travels through a liquid and the liquid is electrolyte, which means ions can travel through it. Solid electrolytes are a little more challenging so that's why we don't have them. You get rid of the liquid and it's just like the ion has to travel through a solid instead. The promise is it could be a much higher energy density and theoretically cheaper too, just because it weighs less and stuff, but there's all sorts of problems around how they degrade faster. Batteries have six different areas where you have to hit the requirements and if you miss one, then it's no good so they're kind of hard to improve in that sense.", "Dwarkesh Patel (01:30:02):", "I guess if the energy super abundance is going to come from solar and wind, obviously these are intermittent sources of energy, in that case, you would need there to be progress in the battery storage, that's contingent on that. Right?", "Austin Vernon (01:30:16):", "Yeah. I think that's what I mean, a lot of the extra energy uses that we talk about don't really require many batteries, if any batteries at all. The transportation, yes, you have batteries in them, but if you're going to have abundant nuclear electricity, or abundant geothermal electricity, you still have to build all those electric vehicles. You still need the batteries for that. So the extra batteries that solar and wind require over geothermal, I think could end up being pretty minimal. Maybe the way I think about it is if you can have solar, a farm that's going to give you $10 a megawatt of our electricity, you have to figure out how to utilize that. And if you do, then you'll be very rich and you'll beat the guy who's paying $40 a megawatt hour from the more expensive traditional generators.", "Dwarkesh Patel (01:31:12):", "Yeah. But before we get into which sources of energy are most promising, let's talk about some of the other applications of an energy support abundance. Obviously we talked a little bit about travel, but one thing that might be concerning with air travel, at least for passengers, is if the bottleneck step there is TSA and other regulations, to what extent will reducing the travel time or increasing flight speed or number of flights have an impact on how much time you're going to spend in an airport or in transit?", "Austin Vernon (01:31:47):", "Well, so right now, if you think about Airbus, they have this super jumbo thing. I can't remember what that plane’s number was, but none of the airlines really loved it because it's too big. It's too hard to get everyone loaded and unloaded and you really just hit this economies of scale. So the electric planes are likely to be just tiny in comparison, 10 passengers. It's easier to load and unload or you're going to fly out of smaller airports so you won't be going to this giant regional airport that just, has all the parking problems and all the security, you'll be driving to your neighborhood general aviation airport, where there's a small line to get through. And a lot of these small aircraft, under certain situations, even avoid some of the screening requirements, because they're just not as dangerous. If you have a small plane, there's only so much damage you can do with it.", "Dwarkesh Patel (01:32:42):", "I did not know that. I got to start booking planes from the smaller airports or something to avoid the TSA.", "Austin Vernon (01:32:48):", "It's very nascent, but there's some business models that are coming down from the net jet style to a little more commercial. I think that they're trying to hit a price point that's similar to first class, but you get to avoid all the airport craziness. And I think, I'm just kind of a believer in if that existed, people would get angry enough that they would loosen up a lot of the rules. It seems impossible to change those rules now, but I think for the average person, it just costs them no time because most people don't even fly very much.", "Travel in the Future", "Dwarkesh Patel (01:33:26):", "Yeah. Do you want to talk about your vision for what a city could look like if energy got a lot cheaper? In the paper you have all kinds of interesting projections about drones and electric deliveries and just the entire congestion of the 3D space, and I guess with tunnels as well. Well, what does the city look like with energy superabundance?", "Austin Vernon (01:33:48):", "Basically disaggregate the car to a certain extent, inner city car trips are less because flying's going to be cheaper and it's going to be more convenient to have the bots deliver your stuff. I love the tunnels because I don't like taking people's land so with tunnels you can run new roads and everything without imminent domaining and taking people's land away from them when they don't want to lose their land. That process is so, it makes people so angry when you take their land that it's very expensive to imminent domain people, because they will fight you until literally the sheriff has to show up and haul them away. If you can go around that with tunnels existing right away it just makes that societal cost of doing some of this stuff significantly less expensive and it's then just the engineering challenge.", "Future Cities", "Austin Vernon (01:34:54):", "I think there's really an opportunity now there. Boring Company is the most famous, but recently I think there's another company that wants to do tunnels for electricity. They have this plasma boring machine concept–– it seems pretty crazy right now, but it's just one of those solutions where you're going to reduce the coordination cost across the whole economy and improve property rights so people should really try to build it.", "Dwarkesh Patel (01:35:26):", "You mentioned one of these machines in your blog post on tunneling and it was DS, SpaceX1, I forgot the name of it, but it's like this insane thing.", "Austin Vernon (01:35:34):", "Prufrock ?", "Dwarkesh Patel (01:35:35):", "Yeah, exactly. It's pretty big but apparently it's all electric, which is kind of insane, and it can just do it in one go. How is it getting the material out if you're just doing the tunneling in one step?", "Austin Vernon (01:35:51):", "The problem is that most of the tunneling is in soft soil and it's difficult to drill through soft soil because of the materials handling. When you first start drilling an oil well through this stuff, you actually have to limit your drilling speed and you don't even have to put any weight on the bit, just the pumping fluid around basically jets out the fluid. That's kind of what you're doing with the boring machine and the soft soil stuff. So managing the spoils, which is they have muck carts a lot of times, I think maybe it's basically like trying to do a conveyor belt, but you could also just make it a full liquid and pump it out. In the oil field we carry our cuttings in mud and we pump it.", "Austin Vernon (01:36:36):", "Then they have, the other big challenge is they have to keep the walls from caving in on them so that's... Current boring machines and soft soil spend enormous amounts of time erecting these tunnel supports that keep it from collapsing in themself. So it's kind of counterintuitive. It's actually dramatically faster to bore in hard rock than soft soil because in soft soil you spend so much non-productive time, whereas in hard rock, you're just blowing and going.", "Dwarkesh Patel (01:37:09):", "Interesting. Yeah. Okay. To get back to the cities, you mentioned something in the paper, Marchetti's constant, which is about how the amount of time people spend in transport per day is the same so if you just increase the amount of speed in which they can move with VTOLs (Vertical Takeoff and Landing) or other kinds of things, then they have a wider surface area in which they can explore. Right?", "Austin Vernon (01:37:37):", "Yeah. I don't know if physically the cities will look that much different, but their effective economic size will be much larger because you could live in Cedar Rapids and commute to Minneapolis with some of these technologies. Your city in Cedar Rapids still looks the same, but you don't have to work there. If you have a better job in Minneapolis, you could commute there three times a week or whatever it is, five days a week.", "Dwarkesh Patel (01:38:11):", "Yeah. It's super interesting. But does that imply, by the way, that if the commute time stays the same and people just get more spread out.. If energy becomes cheaper, then neighborhoods and cities kind of become this unwalkable mess out of a Jane Jacob's nightmare if the conglomeration goes away?", "Austin Vernon (01:38:29):", "I think it's actually the opposite. If you have tunnels and if you have some, these alternative methods to cars, then you use cars less. And I think in many cities, they never made sense for cars anyway, because they were built before cars. In New York city, you're never going to move everyone around on a car unless you build tunnels, you could then. But even then I think there's other technologies there that make a lot of sense. I think people like walkable so even though I live in a city that requires a car, some of the hottest neighborhoods are walkable neighborhoods where the neighborhood is walkable itself and then you just drive your car to wherever else you need. The car is hidden within the neighborhood.", "Dwarkesh Patel (01:39:22):", "Okay. It's interesting. I guess we'll see more segregation (not in the racial sense or anything) but in the sense that people will prefer to live in these walkable neighborhoods so they don't have any problem commuting to work using a VTOL or something. What you would end up seeing is these walkable neighborhoods and then industrial zones that are way far away, distance wise, but not that far away time wise.", "Austin Vernon (01:39:46):", "Right. And it's the same for if you want to live in a small town. Now it would be too far to commute to the city, but you could in the future.", "Flying Cars", "Dwarkesh Patel (01:39:56):", "Yeah. More choice. I see. What is holding back VTOLs? VTOL, by the way, is Vertical Take Off and Landing. The reason you need to go to an airport is because you need a large landing pad to take off and land. The hope is that if you could just vertically take off, then you would be able to lift off from your roof or something. Obviously we've had prototypes of this kind of stuff since the thirties. Why don't we have these widely available? Is energy the constraint, or is it something else?", "Austin Vernon (01:40:26):", "Well, I think in the past, theoretically liquid fuels were dense enough, but they're too complex, too expensive because when you're turning heat energy into mechanical energy, it's just a lot of weight and complexity comes with that. Some of these old concepts, you have all these engines and all that. And so if you electrify them, it really changes the game. Because it's not just batteries, it's the motors, it's the inverters, that are now getting dense enough and small enough to make sense. It takes time to get this stuff through the FAA, for better or worse. The technology hasn't been good enough long enough to get stuff through the FAA.", "Austin Vernon (01:41:14):", "And there are some limitations I think right now. A lot of people wouldn't use batteries and the batteries are just on the edge of good enough. You're going to have a 50 mile VTOL, not a couple hundred mile VTOL, but eventually my dream VTOL application is a nuclear power quadcopter that carries a container. So you can take the container directly from the factory in Vietnam or wherever, directly to the people who are using it or the warehouse in Arkansas or whatever.", "Dwarkesh Patel (01:41:50):", "Yeah. That would be interesting. Theoretically, you could have these drones that are carrying these huge payloads, weight wise.", "Austin Vernon (01:41:58):", "But you wouldn't necessarily want a large payload. You just want whatever the customer wants, you want to size your vehicle to deliver that payload, and that's the most efficient.", "Dwarkesh Patel (01:42:12):", "Oh, I see. Right. Because it doesn't need to be a shipping container or a shipping vessel where you just have it be huge. And then what does this mean for computing? If energy gets a lot cheaper, I guess if Bitcoin mining becomes, well, it doesn't necessarily become more profitable because other people's energy's cheaper too, but what are the other consequences? Is spinning up an AWS server just become trivial now and then building a deep learning model costs nothing in terms of GPU time? What impact does this have on computing?", "Austin Vernon (01:42:48):", "Yeah, I think the limitation would probably still be just chips for a while until you figure out a better production process for that. I think it'd be a while before it becomes energy. I think smartphones really worry about energy. There could be some interesting things with smartphones if you have a very power dense betavoltaic battery which is a nuclear battery that you don't have to worry about running down. But outside of smartphones, I'm not sure that energy is the limit for a lot of those computing.", "Carbon Shortage", "Dwarkesh Patel (01:43:26):", "And one of the interesting things that you speculate about at the end of the paper is about a potential carbon shortage. I think in an email to Tyler Cowen, he published on his blog, you said by the end of this century, we'll have a carbon shortage because of the process you talked about earlier, the thing you're working on, if you can take CO2 out of the atmosphere. What is the probability that this ends up happening? Do you think it's more than 50% by the end of the century or is it just speculation?", "Austin Vernon (01:43:58):", "I think it's extremely high that it happens and it's harder to put the timeline on it. By the end of the century might be a little, I think I ran some numbers in there and if you 10X current plastic production, and then you're just land filling it all, I think it was a little over a hundred years to get, and you're assuming you're out, the rest of your carbon output is zero in that scenario. But it's probably pretty hard to do it in a hundred, by the end of the century without a lot of growth, but it's kind of the exponential thing that can get you where... I think some large number of the carbon emissions have happened in the last 20 years, and it was very small before 1950. You could kind of get surprised at the back half, the last 10 years it goes crazy. It makes it hard to predict.", "Dwarkesh Patel (01:44:53):", "By the way, in Will Mcaskill’s new book on longtermism , one of the things he speculates about is if society collapses and we need to restart, one of the things we'll need is coal or some other sort of dense, easy-to use-fuel. And the problem is we've been burning up easily accessible coal, like coal in places where we could just dig up and find it, and so one of the things he's concerned about is making sure we leave some easily accessible coal silos around so that in case society collapses, we can restart and use these to power up a second industrial revolution. I wonder if you could use a process like this with carbon sequestration to actually just build up these kinds of reserves. I don't know if a long term, if somebody's really interested in making sure we have that kind of resource. They could just use this process to... Is that possible? Or", "Austin Vernon (01:45:51):", "Actually there's a company called Charm Industrial , and they're basically doing that because they take trees and they do a process called fast pyrolysis. It's where you burn biomass without oxygen, anoxic environment. And it makes this bio-oil and then they're injecting the oil down into wells and selling carbon credit. So it's already happening, you could say.", "Dwarkesh Patel (01:46:15):", "Oh wow. And that’s easy to burn and stuff?", "Austin Vernon (01:46:21):", "Yeah. If you just want to burn it for heat, it's okay, but it's hard to refine. There are a lot of people that tried to do bio-oil as an alternative for petroleum 20 years ago, Cleantech 1.0, and they all failed. So it makes me laugh that they're reimagining the process to sell what are right now very expensive carbon credits. But you could do something similar. Actually you could do something similar just to make straight carbon and stuff if you wanted to.", "Dwarkesh Patel (01:46:50):", "Okay. I see. The thing that I find interesting about this is that often in the case of global problems, people will early on identify that a thing is going to be a problem, but it often ends up being the case that they get the direction of the problem opposite. If you think about population, in the seventies, people were correct that global population was going to be a problem. The thing is, it seems like now the problem is going to be that the population might decline too fast, not that it's going to grow exponentially. And I think this is another example of this kind of thing where CO2 is going to be a problem either way. It just, I'm not sure if it's going to be a problem where we'll over reproduce it or we'll have shortages.", "Austin Vernon (01:47:36):", "Yeah. If you just think about the large scale, if you're going to be like Kardashev, whatever scale civilization, where you're using condensed amounts of energy, that's going to have side effects and you're going to have to figure out how to manage that one way or the other. One of those is eventually earth may just be a nature preserve and we all live in space or something.", "Nuclear", "Dwarkesh Patel (01:48:01):", "Yeah. Okay. Let's talk about nuclear energy. It seems like you're much less optimistic about nuclear energy than you are about solar and wind. Do you want to explain why that's the case?", "Austin Vernon (01:48:15):", "Yeah. Well, especially solar more so than wind. Wind, I think it's limiting because it's transmission problems and again, if you want to build out huge amounts of wind like some of these zero carbon plans call for, you're going to have to take a lot of people's land to build transmission lines and stuff. Which really pisses people off and they fight hard and it becomes expensive. It's not like... The wind turbines are relatively easy to sight because you pay people and you'll actually see, they never put above ground power lines on the people's land, where they put the wind turbines. They're always underground so at least they get to the county right away. But when you get these giant transmission lines, like Grainbelt or something, they almost inevitably have to go across a lot of people's lands and you can't just stuff them all in county and state right aways because the pylons are so big.", "Dwarkesh Patel (01:49:12):", "Sorry, what is the pylon?", "Austin Vernon (01:49:13):", "The pylon is what holds the wire up, the tall tower. Yeah, solar is, it's much more flexible where it can go. I think when it comes to solar getting cheaper, the obstacles are just pretty simple. It's like, \" Well gosh, it's expensive to put all this racking. Why don't we just lay the panels on the ground?” “ Gosh, this glass we're encasing with is getting expensive and we don't need it to last 80 years or 50 years. We can just put some plastic on it instead .\" We've gotten these, the actual materials and stuff so cheap and all the other labor and stuff is getting more expensive. Well, why don't we just add another layer and make more energy? Those are your solar solutions to get down to $10 a megawatt hour, and they're pretty straightforward.", "Austin Vernon (01:49:57):", "Whereas nuclear is like, \" Well the light water reactor can't get us there. Let's instead cool our reactor with sodium, which catches fire when it, it explodes when it reacts with water and catches fire when it reacts with air .\" Or there's, you could cool it with liquid lead. That's an option. Helium, which leaks a lot. Or you could do molten salts that corrode everything. We don't really have anything like that, and so I think when you start looking at, this is for large reactors. So I think those solutions for very large reactors are pretty hard. It's pretty difficult. And there's a lot of reasons why. Why did we make these weird choices? Well, a lot of stuff just reacts poorly when you expose it to neutrons and stuff. They each have their own features that make them possibly good candidates.", "Austin Vernon (01:50:57):", "I actually think regulation is actually, it's oversold a little bit. And I think actually to the extent that if people were internally consistent, then they would see NRC as a regulatory success story because, the background on this is my wife's mom and step father are nuclear engineers that have worked at all levels of the nuclear power industry so I get to ask them the general questions and learn a lot about it, which is nice. It's very helpful for learning about it.", "Austin Vernon (01:51:33):", "But there's, back in the eighties, the nuclear power industry was in real trouble because their competitors in coal and natural gas got deregulated. Most of the cost of coal is the rail getting there, and the rail industry got deregulated and then the national gas industry got deregulated. So the cost of their alternatives was falling and they had to build more safety into their plants because of all these, it wasn't just three miles out. It was Browns Ferry. It was Rancho Heco, all these things that could have been really scary and, to a certain extent, we got a little bit lucky that we didn't have a worse disaster. They were just relatively limited accidents at their sites.", "Austin Vernon (01:52:25):", "There was actually a time where nuclear power plants were selling for less than what their fuel was worth in their plant, around there. What the industry did and what NRC did is they moved to probabilistic risk assessment, which is usually the gold standard. People are really happy that we use probabilistic risk assessment for commercial crew, with NASA and SpaceX. And they want the FDA to use more probability, more expected value. What this allowed was it's basically you're rolling up some of the rules and moving it into the risk assessment. Around 1980 nuclear power plants only ran about 60% of the time. They weren't very reliable. They had all sorts of unplanned outages, stuff like that. The safest mode of operation is just running as designed. The more consistent nuclear power is, the safer it is. The probabilistic risk assessment allows you to do repairs while you're running, which was kind of discouraged before. It'll be like if your main cooling pump is leaking, before you'd be like, \" Oh, gosh, I hope we can make it, \" and then eventually it just fails and you shut down the reactor. And now it's like, \"All right. Well, we have backups. The safest thing to do is actually repair it now while the plant's still running and then get it repaired and put it back online.\" Not only, to give you the idea of the safety standards that NRC has, I think for the nuclear plant taking damage is one time in 10,000 reactor years. And then for a large release it's one in 100,000 reactor years, and there's 93 operating reactors that––less than 93 sites. We should only see a three mile island under the current standards once every hundred years or so. A large release, like a Fukushima type situation, should only happen once every thousand years.", "Austin Vernon (01:54:37):", "But they had, just in a few years in the 1970s, the industry had three or four of these damage events at least. I don't know how many officially count, but probably at least three. The safety is incredible. And now the operating capacity is up to over 90% so the plants are just extremely reliable and it lowers their costs because their costs are so fixed. When you compare it to a country like France? They've had a lot of reliability problems with their nuclear fleet in the last couple years. This year, their capacity factor I think is barely over 60%. We have 90 gigawatts of nuclear. They have 60 gigawatts. That makes a huge difference for Europe that those plants aren't running full out. And it is really, you see a lot of charts about, \"If Germany didn't shut down its reactors, what would the energy balance be?\" But you don't see as many, \"If the French could run their reactors like American reactors, what would the energy balance be?\" I could go on about how that integrates into new plants, if you want to know about that.", "Dwarkesh Patel (01:55:46):", "Yeah, I do. Because the line I've always heard on this from my bubble is like, \"Oh, they haven't approved a new plant. The NRC has not approved a new thing since, a new plant, since it was created.\" I guess they just approved the design for the new, small modular reactors, which I guess I'd love to hear your opinion on as well. I'm very curious to hear this perspective.", "Austin Vernon (01:56:07):", "Well, okay. So think about it, you're in the 1980s, you had new sources of fuel, you had new competitors, you also, by the end of the decade, increased the amount your nuclear power plants ran by a lot. So a lot of these new power plants that people were thinking about building were at existing sites, like an extra reactor at Watts Bar or whatever. And well, you basically just got like a buy two, get one free, by running your plant better. So you don't really need them as much. So all those contributed to just not making sense to build new nuclear power plants, because the existing fleet ran better, and more competitors, and electricity demand slowed down.", "Austin Vernon (01:56:50):", "Is it hard to get through NRC approval? Yes. That last one, the mini reactor you're talking about took 10 years or something. But when you think about a probabilistic risk assessment, no one ever says, \"Well, gosh, NRC's current standards of a large release, which would basically happen one every thousand years... We're not arguing over that. We're just talking past each other, I guess, instead.", "Austin Vernon (01:57:19):", "So to me, that's a pretty reasonable risk level. If you're going to 10x your reactors,  that means almost certainly you'd have a Fukushima within your lifetime if you go with NRC standards. But it actually turns out that it's pretty cheap to do way better. A lot of the reason why the plants weren't built may not necessarily have been because of regulation, but because the market conditions changed. You had more competitors, and the coal with the gas, being deregulated. And then you also had increased production from the existing nuclear plant. So if you're going to build an extra nuclear plant or an extra reactor at an existing site, then you might not have needed to anymore because you got so much more production out of your existing plants. Stuff like shortening the fueling time and all-around improvements, paired with electricity demand flattening is what really made new plants not economic or not necessary.", "Austin Vernon (01:58:27):", "When we really think about the probabilistic risk assessment, it just takes a lot of engineering time to get it through. If you look at how hard it was for SpaceX to get Falcon 9 and Dragon through NASA's loss of crew risk calculations… it took years, it took hundreds of millions of dollars. So it's kind of funny that people see that as a success. Especially when the stakes were only a few lives for people that volunteered for danger. And then you have a nuclear power plant, where we're going through the same problematic risk assessment, and it could impact many more people's lives. That's not good enough. So I think it would make more sense to argue about the risk factor, how much risk we should take with the actual numbers, as opposed to just like, \"Oh, I'm mad that we're not building nuclear power plants.\"", "Austin Vernon (01:59:28):", "Actually it becomes very inexpensive to improve the risk probabilities, because the old plants that we're running now have active safety systems, which means you have to maintain them and they have to work. So if you want to move the control rods back into the reactor, well, there's like a mechanism and a motor that does that that can fail. So when you're calculating your risk, you have to calculate, \"Oh gosh, what if this motor fails? Or what if my control fails? Or what if I don't have a properly trained operator to do it?\" And it's the same for the cooling systems. But this new generation of plants, they have passive safety systems where natural convection can cool a reactor in an emergency, or... The rods are more like a dead man switch, where if something happens, they just drop in from gravity. And so the new power plants, like this one that just got approved, or the one they're building down in Georgia, can be orders of magnitude safer than the running plants.", "Austin Vernon (02:00:29):", "It's not really a huge cost increase. You're just changing how you do these things. In fact, if you look at all the literature, they're actually supposed to be less complex and easier to build. But you're talking about a project that is so complicated, it takes thousands of workers, years and years to build, working every day. And it's like, if you're going to go through and do the engineering in great detail to prove that you're playing it safe under the problem risk assessment, it's going to take hundreds of thousands of hours of engineering time. That's why you see... I mean, why are investors willing to pay that at this point, after you build it because of the rank and cycle and all that, like is it going to generate economic power? The fact is it's not necessarily going to. I think one way to think about this is my father-in-law.", "Austin Vernon (02:01:27):", "He always says, when people ask him about why we're not doing more nuclear, he says, \"Well, you have to think about the politics first, and economics second.\" Those are the important ones. So people are submitting designs to build plants that are big enough to impact lots of people's lives, even if that risk is very low.", "Austin Vernon (02:01:47):", "Some people still are bothered by that. But also, they're selling an easily substitutable commodity in most cases. I think a lot of times on the political side, if you can substitute nuclear power, people will. Even if it's coal or whatever, people don't really care that much about the emissions. They just care about their electricity turning on. And I think you see the opinion change very fast when nuclear power is no longer the substitute. All of a sudden Germany's like, \" Well, we could turn our reactors back on.\" Or Japan, same way. They've had these reactors off for years, but now that there's an energy crunch, they're like, \"Well, let's turn em back on.\"", "Austin Vernon (02:02:25):", "So I think the future for nuclear power, which would be a better future, is you create products that impact less people's lives, or have the potential to impact less people's lives, and also are not substitutable. I think that means small reactors. If you have a battery that can power your phone or have a little battery out in your garage that can power your house, these are hard to make. There's a lot of problems, especially in power density–– nuclear is very energy dense, but not necessarily power dense. So you have to do a lot of work on that to get there. But one of the most exciting examples of recent nuclear technology is these people at some national labs and NASA got together and created this KRUSTY reactor, and only to, it's only one kilowatt. So it's small. I think the thing weighs 400 kilograms. It fits in the room. But they got the whole project done in a couple years for less than $20 million. And it worked great. It's very safe, just because, partly because it's so small, but it has almost no moving parts. The whole thing has a Sterling engine on top, and that's the only moving part.", "Austin Vernon (02:03:19):", "So it's really... And there's several startups now that are working on improving that technology and commercializing it. So that's the kind of nuclear stuff that when I talk about small nuclear, micro nuclear, is really exciting to me because it has so much potential. And when you start putting nuclear in that small form factor, there's no other energy source that can compete with it on energy density. So you can do things you could never do before, whereas selling to the grid in a large power plant is like, \" Well, I can do that in lots of ways .\" And if you think through this lens, then you see the entire nuclear debate is the nuclear proponents trying to claim that nuclear is not substitutable, and that we should pay more, accept the risk, or whatever. And maybe we should, but it makes it hard to promote that technology. If you could have a phone that you never tried to charge, people would love that. It'd be like, \"I don't care if that's nuclear. I just have a phone that never goes dead.\"", "Dwarkesh Patel (02:04:41):", "I guess the question is: is the lengthy and expensive process necessary for the probabilistic risk assessment? If there's a way you could just have the process not be more streamlined, and be as effective in evaluating the harm. I guess another thing is if we haven't seen... Zero people, or very few people have directly died from nuclear, right? So is it just that we've gotten lucky or you're saying that could have been way more, and we're just in a lucky timeline?", "Austin Vernon (02:05:14):", "I guess I'll go backwards a little bit here, on answering those questions. I think what people are responding to is even if Fukushima didn't have airborne radiation (that was very dangerous) people still got removed from their home. And there's a lot of costs for associated with that. It's hard for me to believe that if we had a similar thing in the US, there wouldn't be some type of mandatory evacuations that were really unpleasant. If you could get your power from coal or natural gas, without that risk, I mean, a lot of people would make that trade off.", "Austin Vernon (02:05:54):", "And I think the other thing with Fukushima is, as I understand it, because it was on the ocean with fast currents, they were able to use a lot of seawater to keep the reactor from getting too out of control. But they were just dumping a lot of the radioactive stuff into the ocean, but it was dispersing quickly, it wasn't a big deal. So if you're on a freshwater reservoir like most US power plants are, your risk equation might have been different there. I don't know enough about it to know if that really matters. But I think the main thing is because of the precautionary principle, people are still going to get removed from their homes and people don't like that.", "Austin Vernon (02:06:39):", "Let's see. I'm making it... I mean, you can always streamline processes, but the thing is, people are submitting designs that are extremely complex. So whether your design is ultra safe, or not safe at all, to do all the engineering to prove that costs about the same either way. So that's part of why these new plants are so much safer than the NRC standards. It's just not that hard to make them that much safer. And you're going to spend the same engineering resources no matter what, based on your plant complexity. So that's the difference why KRUSTY was able to go through so fast. Their thing is very simple. They don't have very many moving parts. There's only so many things that can go wrong with it. I think that's what's exciting to me about these other startups is they have the potential to get through faster, with less money, and then there's real markets and remote power space, military, where people are willing to pay the premium for these initial models.", "Dwarkesh Patel (02:07:43):", "Okay, I see. Okay. So you're not bearish on nuclear in the future, given the new designs with passive cooling and stuff like that. It was more like the old designs that you're pessimistic about, is that correct?", "Austin Vernon (02:07:56):", "Yeah. I mean, if you look at what the cost of electricity is going to be from that, if they ever build the reactors that just got proven or just got approved, it's quite expensive. I think usually it's around like $40 - $50 a megawatt hour, best case, but more likely it could be up to like $80 a megawatt hour. So they're not building it in deregulated power markets, because you lose money. But there are places where it could make sense. Some places, like in Europe, have very expensive electricity.", "Dwarkesh Patel (02:08:31):", "And Japan and Singapore, and there's a lot of other places that are...", "Austin Vernon (02:08:34):", "Yeah, yeah. So there could be some markets in there, but that technology then still has to compete with those places building solar panels or all these other technologies that you could do. Then there's the whole argument, well, nuclear can do this and that, but I think the people building the reactors clearly don't want to build them in deregulated power markets because it's not economic. That's why I'm excited about the small, because there's alternative markets other than selling this substitutable commodity that's very cheap.", "Dwarkesh Patel (02:09:08):", "Have you talked to Eli about this? What is his opinion?", "Austin Vernon (02:09:12):", "Yeah. So Eli finds out about these new startups that fit this bill and sends me the information on them because he knows I'm excited about it. His specialty is governmental affairs. So I'm sure there's still lots of opportunity to improve the process at NRC. Recently INPO, which is the industry group that's very much a German style industry group, it's very powerful, their goal with NRC was to reduce the nuclear rules by one third. And then you also have NRC writing new standards for gen-four reactors that's supposed to be done in a couple years, but Congress instructed them to do it. So there's lots of opportunity to try to improve the process, but it's very complex. I'll give one example, the Browns Ferry accident. The main thing that came out of that was, you can't have control cables for safety systems on, redundant safety systems on the same cable tray, because that cable tray catches on fire you lose both systems.", "Austin Vernon (02:10:14):", "So it's very, very expensive to run extra cable trays and all this cable separation. That's actually one of the problems that's delaying Vogtle and Georgia right now–– they had 500 issues of safety systems sharing the same cable tray. So they have to build all new cable trays and clean out the mess of the stuff they already built and redo it. Super expensive. So the NRC tried a pilot program where they did a performance base safety as opposed to just the strict cable separation rule, and I think Oconee was one of the power plants that tried it. It ended up being more expensive than just the simple rule. So the reality is often very complex. I think when you have these complex plants, it's just hard to do. So it can always be improved, but I think the small could end up greatly out competing the large because they have less complexity.", "Dwarkesh Patel (02:11:18):", "You had a small section in that piece about fusion where you were especially pessimistic about fusion. Well, what is your take on fusion?", "Austin Vernon (02:11:28):", "Well, it's kind of the same thing. I'm not pessimistic about fusion, I'm pessimistic about fusion technologies that heat up water to make steam, and run it through a steam turbine.", "Dwarkesh Patel (02:11:37):", "Because they're not efficient?", "Austin Vernon (02:11:38):", "It's just so expensive. Literally just pitting in the steam turbine and the condenser and all that kind of stuff you need for that, basically makes you uncompetitive on most deregular power markets.", "Dwarkesh Patel (02:11:53):", "Yeah. So I mean there's startups who have plans to do direct energy conversion. I don't know how feasible those plans are, but yeah, presumably you think those are... In those cases, do you think fusion could have a big future?", "Austin Vernon (02:12:06):", "Yeah, yeah. Again, I don't know too much about... The same as you, I don't know too much about their specific technology, but if you're pursuing a direct conversion technology, you actually have a chance of success. I think a lot of people I've talked to in the fusion space, they're like, \" Well, I can make electricity for $50 a megawatt hour. And because I'm fusion, people should pay me $50. \" And it's like, well, not everyone may want to pay you $50.", "Solar", "Dwarkesh Patel (02:12:34):", "Yeah. Yeah. I mean, it might involve an initial period of large subsidies that we had to give electric vehicles, and even solar, we had to give huge subsidies to solar in the beginning when we were at the beginning of the learning curve, so... That might be necessary though.", "Austin Vernon (02:12:50):", "Yeah. I mean, I really disagree with the subsidy solars had, actually. And I think it, just like if you actually look at the numbers, it proves the point. People say like, \" Oh, because Germany did the feed in tariffs, it made solar cheap .\" So if you had a country that's 1% of the population, they spent a tiny portion of their GDP and that was enough to scale the technology, well, you should just let some other fool do that, and reap the benefits. So I would be supportive of taking away most of the subsidies for energy in general.", "Dwarkesh Patel (02:13:21):", "Just to make sure I understood that argument, you're saying that it's unlikely that the small subsidies that Germany gave were enough to actually make the difference? Is that what...", "Austin Vernon (02:13:29):", "Well, I'm just saying, if it took such a small amount of subsidy to do it, someone will be foolish enough to do that. In this case, it was Germany. They spent a lot of money doing that. They're not reaping the benefit from.", "Dwarkesh Patel (02:13:43):", "Yeah. It's not compatible with their environment, so... And their climate I mean, yeah.", "Austin Vernon (02:13:48):", "We benefited from them doing that. We still do spend some subsidies on solar. And I think they're very poorly designed. So it would be better just to get rid of them. But the thing with fusion, if you're just heating up water to make steam, it's that technology, there's no learning curve anymore for steam engines basically, because that technology's so mature. So that's why some people are looking at super critical CO2 cycles, because, well, maybe this could be a little cheaper than doing steam turbines. That's some possibility there. And there's some other technologies that, maybe someday you have thermoelectric generators and stuff like that. But I think the direct conversion technologies have just a massive advantage. Not only an initial cost, but in ongoing operating cost.", "Alpha and Efficient Markets", "Dwarkesh Patel (02:14:39):", "Okay. Okay. There's one more topic I really want to talk about, which was... Yeah, you have an interesting post on where you can actually expect to find alpha, given that at least public markets are efficient ? Do you want to explain the basic thesis of that post before I ask you specific questions about it?", "Austin Vernon (02:14:58):", "Yeah. If I was going to dumb that post down, I love Fama’s original paper where he lays out this efficient market hypothesis thesis and he is like, \"There's multiple types of information .\" And so the first is, if you just have pricing data for stocks or whatever securities, you can be the smartest person in the world, and you're not going to make any money doing that, because it's just random.", "Austin Vernon (02:15:22):", "But if you start incorporating more information, what's in 10Ks and all that, if you're super, super smart, you might be able to make a little bit of money there. We see that with Renaissance Technologies, and you can debate about Warren Buffet and all that. But then there's the third category, which is the strong type information. And it's basically you have legally acquired private information, and you can make money that way and be significantly less smart. So if you want to just take Fama's paper, how do I make money? It's like, \" Okay, well I should find legal ways to acquire this information, and then I don't have to be super genius to make money on it.\"", "Dwarkesh Patel (02:16:11):", "What I thought was really interesting in your post was you had this point about how one of the ways you can actually earn excess returns is through labor, right? Buffet, in the earliers at least, he would go into these factories and interrogate every single piece of operations and whatever. I thought it was an interesting twist on Piketty's thesis . So I don't know if you've seen his stuff, but he has this claim that, well, not only does capital earn more than... The gains to capital are higher than the gains to labor, but the more capital you have, the higher returns you can earn.", "Dwarkesh Patel (02:16:49):", "I guess Harvard has access to hedge funds that may be able to earn excess returns. Basically if you take this view, it's basically the inversion of Piketty, because over time as Buffet has gotten wealthier, his returns have gone down, because it's harder to invest the marginal dollar more effectively. As you said, with the medallion fund, yeah, they no longer accept outside money. Then the interesting thing about labor is, the reason that Buffet was able to earn those excess returns in the beginning, was because of the labor he put in. So the interesting thing is, capital is just fungible with other capital. So capital doesn't enjoy as high returns as really good labor, really smart labor, which is the opposite of the Piketty thesis.", "Austin Vernon (02:17:33):", "And I think, there was actually a paper, I think it was on marginal revolution a couple years back. So I'm pulling from my memory here, so I could be missing a little bit, but basically it studied all these businesses and what happened to the business after a founder unexpectedly died and the profit just... It looks like these are capital returns so many people would see them, but then the earnings just drop like a rock, because they lost some irreplaceable human capital there, and they didn't spend any time training them because they died unexpectedly.", "Dwarkesh Patel (02:18:10):", "Right, which also has an interesting implication for CEO pay, which is just that... Actually, okay. In the Marxist sense, what is pay? It's like, your pay is what it costs to replace you right? And if Steve Jobs was so irreplaceable that if he goes away, earnings are going to drop like a rock, and stock prices are going to drop like a rock. Actually, that means that he should get paid... That's how expensive it is to replace him. He may be irreplaceable. So it's actually worth whatever dozens of millions of dollars you're paying him.", "Austin Vernon (02:18:43):", "Yeah. Yeah. I'm generally a proponent for letting the market decide that.", "Dwarkesh Patel (02:18:49):", "Yeah. Yeah. Okay, and then another way you suggested was that firms could earn excess returns by developing a unique brand. So YCombinator is probably able to earn excess returns through normal venture capitalism because of their unique brand. Yeah, I thought that was really interesting. Do you want to talk to me more about that?", "Austin Vernon (02:19:06):", "I think it's just a lot of this intangible capital and labor are complements for regular capital, and I think you can see it too. If you build a brand around that, you're a good investor. You can raise money from other people and charge the money on it, more so than if you're just a no name. So I think there's lots of examples of that where building a brand or building relationships is extremely valuable, just like specific knowledge can conduct your returns. I mean it's a type of specific knowledge.", "Dwarkesh Patel (02:19:45):", "Well, what do you mean a specific knowledge?", "Austin Vernon (02:19:48):", "Well, I mean to build a brand like YCombinator, you have to understand what tech founders want. So they use that knowledge to create a place that's great to go do your startup.", "Dwarkesh Patel (02:20:02):", "Yeah, yeah, yeah. Interesting. Is the market for blogging efficient? So now there's actually financial rewards to blogging. The Effective Ideas blog prize, there's other kinds of grants like this. Recently they opened a contest where you can win many prizes where it seems like if you're really good at blogging, you could earn six figures, it seems. Given the regularity in size of these prizes, is this a market that we should expect to be efficient?", "Austin Vernon (02:20:34):", "I think it would be hard to measure. Given my own experience, I'm blogging for free, but the benefits I've gotten from learning about what I'm blogging about, and then the few connections I've made that have helped me with my projects I'm working on. There's huge returns. If my project's successful, these returns could be just almost immeasurable. So yeah, I would guess it's very hard to measure and probably inefficient in that more people could blog, because it's hard to predict the returns to what your blogging might have. But I guess if you're going to do these blog prizes, I don't know if the blog prize... Because the blog prizes are about specific topics, I don't know how much that helps the efficiency there.", "Dwarkesh Patel (02:21:20):", "Yeah. Yeah. Let's take that part of it out. Let's just talk about the factor you mentioned.. This is a regular thing you hear from people who write online, which is that the gains they get are huge and that's also the case in my case. So it's kind of interesting. Efficient Markets don’t.. just because the stock market is efficient, that doesn't mean that everybody will put their money into the stock market. That's not the implication. But the question is, given that you're writing something that's high quality, will it get noticed by the market? Will it get the attention and broadcasting that it deserves? And in my experience, actually, I guess this was the case. You mentioned that when some of your first posts ended up on Hacker News. So in that sense that market was efficient. But yeah, it seems to me that when somebody finds a good blogger, it's not hard for their initial post, or at least their subsequent post as they get better to gain an audience.", "Austin Vernon (02:22:18):", "Yeah, I do think that. And I don't know what the counterfactual is. We don't know about the people that didn't have posts go to Hacker News, so it could've easily been... I mean, I think that what the alternative for me is, I just would've blogged way less, if one of those early posts hadn't gotten more attention. So yeah. It's hard to know what the counterfactual is.. how many people have just abandoned blogs after writing three posts. They would've written one more. Maybe it would've been better.", "Conclusion", "Dwarkesh Patel (02:22:49):", "Yeah. Yeah. Okay. Awesome. This has been a lot of fun. I think we're two hours over at this point, so thank you so much for your time!", "Austin Vernon (02:22:59):", "All right. Thank you.", "Dwarkesh Patel (02:23:00):", "I don't know if you have any other final thoughts or any other subjects that we should hit on or...", "Austin Vernon (02:23:06):", "No, I think we covered everything.", "Dwarkesh Patel (02:23:09):", "Okay, cool. Awesome. And then just people can find you at Austin Vernon dot...", "Austin Vernon (02:23:15):", "Dot site.", "Dwarkesh Patel (02:23:16):", "Okay. AustinVernon.site. And then your Twitter is?", "Austin Vernon (02:23:20):", "I think it's Vernon3Austin.", "Dwarkesh Patel (02:23:22):", "Okay. And it'll also be in the story description, but yeah. Okay. Yeah. Thanks so much for coming on, man. This was a lot of fun.", "Austin Vernon (02:23:28):", "All right. Thank you.", "Dwarkesh Patel (02:23:31):", "All right. I hope you enjoyed that episode with Austin Vernon. This one was a lot of fun. If you did like it, I would really appreciate it if you could share the podcast. Put the episode in your group chats, put it on Twitter, share it to friends who you think might enjoy it. That kind of stuff helps out more than you would imagine. I want to give a special shout out to Sonya Gupta for her help in prepping me for this episode. Austin is, as you can tell, really smart and technical, so I would not have been able to prepare for this episode without Sonya's help. She's got an amazing technical mind, and did a ton to help me out with preparing questions and doing research into the topics we talked about. I also want to thank my amazing editor and producer Graham Besolou for the work he puts into this podcast. All right. See you on the next one." ]
[ "https://twitter.com/albrgr/status/1545831601254215680", "https://www.google.com/search?q=b52&rlz=1C5CHFA_enAE1011AE1011&sxsrf=ALiCzsZcqhz32YueavU7SKvSmA1QckNQoQ:1662014940044&source=lnms&tbm=isch&sa=X&ved=2ahUKEwimi_OQgPP5AhVStqQKHeuLBkwQ_AUoAXoECAIQAw&biw=928&bih=789&dpr=1#imgrc=gDbW3nhKszKckM", "https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104572/joint-direct-attack-munition-gbu-313238/", "https://patents.google.com/patent/US8519312B1/en", "https://rusi.org/", "https://www.rand.org/pubs/research_reports/RR229.html#:~:text=Anti%2Daccess%20challenges%20prevent%20or,forces%20within%20the%20operational%20area.", "https://austinvernon.site/blog/softwareisprocess.html", "https://en.wikipedia.org/wiki/Waterbed_theory#:~:text=Waterbed%20theory%20is%20the%20observation,to%20%22pop%20up%22%20elsewhere", "https://www.jstor.org/stable/10.1086/671137", "https://www.dwarkeshpatel.com/p/uncle-bob#details", "https://www.thediff.co/p/stripe", "https://www.amazon.com/Toyota-Production-System-Beyond-Large-Scale/dp/0915299143", "https://www.thediff.co/p/type-safe-internal-communications", "https://www.economist.com/schools-brief/2017/07/29/coases-theory-of-the-firm", "https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0884271951", "https://terraformindustries.com/", "https://www.prometheusfuels.com/", "https://www.thecgo.org/research/energy-superabundance/", "https://www.amazon.com/Where-Flying-Car-Storrs-Hall/dp/1953953182/ref=sr_1_1?keywords=Where%27s+My+Flying+Car%3F&qid=1662026602&s=books&sr=1-1", "https://www.teslarati.com/elon-musk-the-boring-company-hyperloop-prufrock-2-launch/", "https://www.amazon.com/What-Owe-Future-William-MacAskill/dp/1541618629", "https://charmindustrial.com/", "https://austinvernon.site/blog/earningalpha.html", "https://www.cairn-int.info/article-E_AMX_056_0164--the-economics-and-politics-of-thomas.htm" ]
https://www.dwarkesh.com/p/bethany-mclean
Bethany McLean - Enron, FTX, 2008, Musk, Frauds, & Visionaries
[ "This transcript was autogenerated and thus may contain errors .", "Dwarkesh Patel: the rapid implosion of a company worth tens of billions of dollars. Insider dealing and romantic entanglements between sister companies, a politically generous c e o, who is well connected in Washington, the use of a company's own stock as its collateral, the attempt, the short-lived attempt to get bought out by a previous competitor, and the fraudulent abuse of mark to market account.[00:01:00]", "We are not talking about ftx, we are talking about Enron, which my guest today, Bethany McClean, uh, first broke the story of and has written an amazing and detailed book about, uh, called The Smartest Guys in the Room. And she has also written, uh, a book about the housing crisis. All the devils are here, a book about Fannie and Freddy Shaky Ground, and a book about fracking Saudi America, all of which we'll get into.", "She's, in my opinion, the best finance nonfiction writer out there, and I'm really, really excited to have this conversation now. So, Bethany, thank you so much for coming on the podcast.", "Bethany McLean: Thank you so much for the, for the probably Undeserved Conference, for having me on the show.", "Dwarkesh Patel: My first question, what are the odds that Sbf read the smartest guys in the room and just followed it as a playbook, given the similarities there?", "Bethany McLean: You, you know, I, I love that idea. I have to, I have to admit, I guess I love that idea. I don't know. That would make me responsible for what, for what happened, . So maybe I don't love that idea. L let me take that back . [00:02:00] Anyway, but I, I, I actually think that, that, that even if he had read the book, it would never have occurred to him that, that there was a similarity because self-delusion is such a, Strong component of all of these stories of business gone wrong.", "It's very rare that you have one of the characters at the heart of this who actually understands what they're doing and understands that they're moving over into the dark side and thinks about the potential repercussions of this and chooses this path. Anyway, that's usually not the way these stories go.", "So it's entirely possible that Sbf studied Enron, knew all about it, and never envisioned that there were any similarities between that and what he was doing.", "Dwarkesh Patel: Oh, that's a fascinating, um, which I guess raises the question of what are we doing when we're documenting and trying to learn from books like yours?", "If somebody who is a, about to commit the same exact kind of thing can read that book and not realize that he's doing the same exact thing, is there something that just [00:03:00] prevents us from learning the lessons of history that we, we can never just, uh, get the analogy right, and we're just guided by our own delusions.", "Bethany McLean: Wasn't there a great quote that history rhymes, but it doesn't repeat. I'm Yeah. Relying on who it is who said that, but I think that's, that's absolutely true. Oh, I think it's important for all of us, those of us who are not gonna find ourselves at the center of, uh, giant fraud or, so, I hope, I think my time for that has passed.", "Maybe not you, but, um, I think it's important for all of us to understand what went wrong. And I, I do think these, I do think just there, there's a great value and greater understanding of the world without necessarily a practical payoff for it. So I think when something goes wrong on a massive societal level, it's really important to try to, to try to explain it.", "Human beings have needed narrative since the dawn of time, and we need narrative all, all, all the more now we need, we need to make sense of the world. So I like to believe. Process of making, trying to make sense of the world. , um, [00:04:00] has a value in, in and of itself. Maybe there is small, some small deterrence aspect to it in that I often think that if people understand more the process by which things go go wrong, that it isn't deliberate, that it's not bad people setting out to do bad things.", "It's human beings, um, at first convincing themselves even that they're doing the right thing and then ending up in a situation that they, they never meant to be in. And maybe on the margin that does, maybe on the margin that does, that does help because maybe it has deterred some people who, who would've started down that path, but for the fact that they now see that that's the, that's the usual path.", "Dwarkesh Patel: Yeah. Yeah. That actually raises the next question I wanted to ask you. Bern Hobart, uh, he's a finance writer as well. He wrote a blog post, um, about, uh, I mean this was before FTX obviously, and he was talking about Enron and he said in the end, it actually looks like we fixed the precise problem. Enron represented.", "Nobody I know solely looks at gap [00:05:00] financials. Everybody ultimately models based on free cash flow, we're much more averse to companies that set up a deliberate conflict of interest between management and shareholders. And I guess there's a way in which you can read that and say, oh, it doesn't FTX prove I'm wrong.", "But, you know, there's another way you can look at it is that FTX deliberately set up outside the us. So there's a story to be told that actually we learned the lessons of Enron and, you know, uh, so remains obviously worked. Uh, that's why, you know, they were in The Bahamas and we haven't seen the scale fraud of that scale in, you know, the continental United States.", "Um, do, do you think that the FTX saga and I guess the absence of other frauds of that scale in America shows that. The regulations and this changed business and investment practices in the aftermath of Enron have actually.", "Bethany McLean: Well, I think they've probably worked in narrowly, written in, in the way in which the writer you quoted articulated, I think it would be very hard for the cfo, F O of a publicly traded company to set up other private [00:06:00] equity firms that he ran, that did all their business with his company.", "Because everybody would say That's Enron and it would be completely. On the nose. And so, and Sarbanes Oxley in the sense of, in the sense of helping to reign in corporate fraud of the sort that was practiced by Enron, which was this abuse of very specific accounting rules. Um, I think I, I, I think that worked.", "But you know, you say there hasn't been fraud on a scale like Enron up until perhaps f ftx, but you're forgetting the global financial crisis. Yeah. And then the end, the line between what happened at Enron. and, and what happened in the global financial crisis. It's not a matter of black and white. It's not a matter of, one thing was clear cut fraud and one thing great.", "We love these practices. Isn't this fantastic? This is the way we want business to operate. They're both somewhere in the murky middle. You know, a lot of what happened at Enron wasn't actually outright fraud. I've coined this phrase, legal fraud to describe, um, to describe what it is that, that, that, that happened at Enron.", "And a lot of what [00:07:00] happened in the global financial crisis was legal, hence the lack of prosecutions. But it's also not behavior that that leads to a healthy market or mm-hmm. , for that matter, a a a a healthy society. And so there's a reason that you had Sarbanes Oxley and what was it, eight short, short years later you had Dodd-Frank and so Riri broadly.", "I'm not sure Sarbanes actually did that much good. And what I mean by that is when President George Bush signed it into law in the Rose Garden, he gave this speech about how investors were now protected and everything was great and your, your ordinary investors could take comfort that the laws were meant to protect them from wrongdoing.", "And you compare that to the speech that President Barack Obama gave eight years later when he signed Don Frank into law in the Rose Garden. And it's remarkably similar that now ordinary investors can count on the rules and regulations keeping themself from people who are prey on their financial wellbeing.", "[00:08:00] And I don't think it was, it's, it's true in either case because our markets, particularly modern markets move and evolve so quickly that the thing that's coming out of left field to get you is never gonna be the thing you are protecting against. Mm. .", "Dwarkesh Patel: , but given the fact that Enron, as you say, was committing legal fraud, is it possible that the government, um, when they prosecuted skilling and Fastow and lay, they in fact, We're not, uh, they, they prosecuted them to a greater extent than the law as written at the time would have warranted.", "In other words, were, uh, was there something legally invalid in the, in this, in the quantity of sentence that they got? Is it possible?", "Bethany McLean: So that's a really, it, it's, it's a, I I get what you're asking. I think it's a really tricky question because I think in absolute terms, um, Enron needed to be prosecuted and needed to be prosecuted aggressively.", "And while I say it was legal fraud, that is for the most part, there was actually real fraud around, around, uh, but it's on the margin. It doesn't [00:09:00] entire, it doesn't explain the entirety of Enron's collapse. Much of what they did was using and abusing the accounting rules in order to create an appearance of economic reality.", "Nothing to do with actual, with actual reality. But then there was actual fraud in the sense that Andy Fasta was stealing money from these partnerships to benefit himself. And they were, if you believe, the core tenant of the prosecution, which was their, this agreement called Global Galactic that was signed by, that was between Andy fau and Jeff Skilling, where Jeff agreed that Andy's partnerships would never lose money.", "Then that invalidated all of the, all of the accounting, and that's the chief reason that that. That skilling was, was, was convicted, um, was that the jury believed the existence of this, of this, of this agreement that in, um, one set of insider stock sales, which, which we can talk about, which was also a really key moment relative to the, so in absolute terms, I don't know, it's, it's hard for me to, to say there was [00:10:00] such, Enron was such a, to a degree that is still surprising to me, such a, a watershed moment in our, in our country, far beyond business itself.", "it, it, it caused so much insecurity that about our retirements, our retirement assets safe. Can you trust the company where you work? That I think the government did, did have to prosecute aggressively, but relative to the financial crisis where a lot of people made off with a lot of money and never had to give any of it back, does it seem fair that, that, that Jeff Skilling went to jail for over a decade and no one involved in a major way in the financial crisis paid any price whatsoever?", "People didn't even really have to give up that much of the money they made then. Then it seems a little bit unfair. Yes, so I think it's, it's an absolute versus a relative", "Dwarkesh Patel: question. Yeah. Yeah. By the way, who do you think made more money? Um, the investment banks, uh, like, uh, Goldman Sachs and Morgan Stanley, um, from doing, [00:11:00] providing their services to Enron as the stock was going up, or Jim Chanos from shorting the stock?", "In absolute terms, who made more money?", "Bethany McLean: Oh, I think the investment banks for sure. I mean, they made, they made so much money in investment banking fees from, from, from Enron. But, you know, it's a good question. . , it's a good question actually, because I think Jim made a lot of money too, so,", "Dwarkesh Patel: Yeah. Yeah. I mean, I, I, you've spoken about, I guess the usefulness and the shortage of short sellers des a sort of, uh, corrective on irrational exuberance.", "And I'm curious why you think that shortage exists in the first place. Like, if you believe in the efficient market hypothesis, you should think that, you know, if some company has terrible financials and implausible numbers, then people would be lining up to short it. And then you would never have a phenomenon like Enron.", "And so it's, it's, you know, it's so odd that you can. , you know, reporters who are basically ahead of the market in terms of predicting what's gonna happen. Uh, well, uh, how do you square that with like the efficient [00:12:00] market hypothesis? Well, do you", "Bethany McLean: believe in the efficient market hypothesis, ?", "Dwarkesh Patel: I, I, I'd like to, but I'm like trying to , trying to wrap my head around Enron.", "Bethany McLean: I, I'm, I'm, I'm, I'm not sure how you. Can, unless you, unless you adopt Warren Buffett's point of view, and I'm gonna mangle the quote because, uh, but, but it's that the market in the short term is a voting machine in the long term. It's a weighing machine, right? Mm-hmm. , or is it the other way around? . Anyway, but the idea is that the market may be very efficient for a long, very inefficient, for a long period of time.", "But, but it does actually, rationality does actually work in, in, in the end. And I think I might believe that, but isn't it John Maynard Cas who said the market can remain irrational for a lot longer than you can remain solvent. And so I think that's true too. I think believing that the market is efficient and rational in the short term is just obviously wrong", "Um, but back to your question about short sellers, which is, which is interesting, you know, I think part of it is that there is still this, um, there certainly was a couple of [00:13:00] decades ago, and I think it still exists, this idea that. Owning stocks is Mom, American, and apple pie in shorting stocks somehow is bad and evil and rooting, rooting against America.", "And I remember going back to the Enron days, someone, people criticizing me, even other people in the press saying, but you took a tip from a short seller. They're biased. And I. , I would say. But, but, but wait, the analysts who have buy ratings on stocks and the portfolio managers who own those stocks, they're biased too.", "They want the stocks to go up. Everybody's biased. So the trick as a journalist is getting information from all sides and figuring out who you think is right and what makes sense. But it's not avoiding anybody with any bias. But it was really interesting that people saw the bias on the part of short sellers and did not see it on the part of, of, of Longs.", "And I think there is that preconception that exists broadly, that somehow you are doing something wrong and you're somehow rooting for a company's failure. And that this is, I don't know, anti-American if you, if, if you [00:14:00] short a stock. And so I think that's part of why there's, there's, there's a shortage of shortage of, of, of short sellers.", "Um, I think also, I mean, we've had. Incredible, unprecedented bull market for the last four decades as a result of falling interest rates, and especially in the decade before the pandemic hit, it was very, very difficult to make money shorting anything because everything went to the moon. Didn't matter if its numbers were good, if it was eventually unmasked to be somewhat fraudulent, , it stocks just went to the moon anyway.", "The riskier the better. And so it is only diehard short sellers that have managed to stick it out . Yeah, and I think, I think lastly, Jim Chano said this to me once, and I, I think it's true that he could find, dozens of people who were skilled enough to come, smart enough to come work for him.", "There's no shortage of that. People who are technically skilled and really smart, but being able to be contrarian for a long period of time, especially when the market is going against you, is a different sort [00:15:00] of person. It that it requires a completely different mindset to have everybody in the world saying, you're wrong to be losing money because the stock is continuing to go up and to be able to hold fast to your conviction.", "And I think that's another, uh, part of the explanation for why there are fewer short sellers.", "Dwarkesh Patel: Yeah, and that raised an interesting question about. Uh, venture capital, for example, where, or private markets in general? Um, at least in the public markets, there's shorting maybe in shortage, but it, it is a possible mechanism, whereas, uh, I'm a programmer.", "So, you know, if, if like a one guy thinks the company's worth a hundred million dollars and everybody else thinks it's not, you know, the company will still be, uh, the price will still be said by the, you know, the person who's a believer. Um, does that increase the risk of some sort of bubble in venture capital and in technology?", "Um, and I guess in private markets generally, if they're, they're not public, is that something you worry about that they're, they will be incredible bubbles built up if there's a lot of money that's floating around in these", "Bethany McLean: circles. . Well, I think we're seeing that now, [00:16:00] right? And I don't think it's a coincidence that FTX and Theranos were not publicly traded companies, right?", "Mm-hmm. . Um, there's a certain sort of, uh, black box quality to these companies because people aren't charting them and aren't, aren't, and aren't, you know, whispering to journalists about that. That there's something wrong here and there aren't publicly available financials for people to dig through and look, look, and look at the numbers.", "So now I don't think that's a coincidence. And I do think this gigantic move into private assets has been, um, probably not great for the, for the, for the, for the. for the, for the safety of the system. And you'd say, well, it's just institutional investors who can afford to lose money who are losing money.", "But it's really not because institutional investors are just pension fund money. Mm-hmm. and in some cases now mutual fund money. So that distinction that the people who are investing in this stuff can afford to lose it is not really true. Um, so I don't, I don't like that rationalization. I think we're gonna see how that plays out.", "There was [00:17:00] just a really good piece in the Economist about private equity marks on their portfolio companies and how they are still looked to be much higher than what you would think they should be given the carnage in the market. And so all of what, what actually things are really worth in private markets, both for venture capital firms and for private equity firms, Is absent another, another bubble starting, starting in the markets.", "I think we're gonna see how that plays out over, over the next year. And it might be a wake up call for, for a lot of people. Um, you know, all that, all that said, it's an interesting thing because investors have been very complicit in this, right? In the sense that a lot of investors are absolutely delighted to have prep, to have their, their private, um, their private investments marked at a high level.", "They don't have to go to the committee overseeing the investments and say, look, I lost 20% of your money the way they might, um, if, if the numbers were public. And so that the ability of these of private investors to smooth as they call it, the, the, the returns is, is it's [00:18:00] been, it's been part of the appeal.", "It hasn't been a negative, it's been a positive. And so I would say that investors who wanted this moving are. Art might be getting what they deserve except for the pointing made earlier that it isn't, it isn't their money. It's, it's the money of, of teachers and firefighters and individual investors a around the country, and that's, that's problematic.", "Dwarkesh Patel: Yeah. Yeah. Being in the world of technology and being around people in it has. made me, somewhat shocked when I read about these numbers from the past. For example, when I'm reading your books and they're detailing things that happened in the nineties or the two thousands, and then you realize that the salary that Hank Paulson made a c e o of Goldman, or that skilling made as, you know, um, c e o of Enron, you know, I, it's like I have friends who are my age, like 22 year olds who are raising seed rounds, , that are as big as like these people's salaries.", "And so it just feels like the, these books were, you have $50 billion frauds or, you know, hundreds of billions of dollars of collapse and the individuals there, um, it just feels like they, it's missing a few zeros, uh, [00:19:00] because of the delusion of the private markets. But, um, but speaking of short sellers and speaking of private equity, um, I think it'd be interesting to talk about sbf.", "So, you know, your 2018 Vanity Fair article I thought was really interesting about, you know, sbf factory in Buffalo H How, how do you think back on Tesla and sbf now, given the fact that. The stock did continue to rise afterwards, and the factory, I believe, was completed and it's, I hired the 1500 or so people that had promised New York State, uh, is sbf just a fraud?", "Who can pull it off? And so he's a visionary. How, how do you think about sbf in the aftermath?", "Bethany McLean: So I don't think that's right about Buffalo and I have to look, but I don't think they ended up, I mean, the Solar City business that Tesla has pretty much collapsed. I don't think people haven't gotten their roofs.", "There was just a piece about how they're canceling some of their roof installations. So sbf has repeatedly made grand visions about that business that haven't played out. And I will check this for you post the podcast, but I don't think [00:20:00] if there is employment at that factory in, in Buffalo, it's not because they're churn out solar, solar, solar products that are, that are, that are doing.", "What was originally promised. So I guess I, I think about that story in a, in a couple of ways. It definitely, um, it was not meant to be a piece about Tesla. It was meant to be a piece that shown a little bit of light on how sbf operates and his willingness to flout the rules and his reliance on government subsidies, despite the fact that he, um, presents himself as this libertarian free, free, free market free marketeer, and his willingness to lie to, to, to, on some level enrich himself, which also runs counter to the Elon sbf narrative that he doesn't care about making money for, for himself.", "Because the main reason for Teslas to by Solar City was that Solar City had the main reason, was it Tes, that was, that Solar City had, that, that sbf and his, and his and his relatives had extended the these loans to Solar City that were gonna go. [00:21:00] There were gonna be lo all the money was gonna be lost at Solar City when bankrupt.", "And by having Tesla buy it, sbf was able to bail himself out, um, as, as as well. And I also think a good reason for the, for the, for, and it brings us to the present time, but a reason for the acquisition was that sbf knows that this image of himself as the invincible and vulnerable who can always raise money and whose companies always work out in the end, was really important.", "And if Solar City had gone bankrupt, it would've cast a big question mark over over sbf, over over the sbf narrative. And so I think he literally couldn't afford to let Solar City go bankrupt. Um, all of that said, I have, I have been, and was I, I was quite skeptical of Tesla and I thought about it in, in, in, in.", "And I always believed that the product was great. I just, mm-hmm. wasn't sure about the company's money making potential. And I think that, that, it's something I started thinking about, um, background, the Solar City time, maybe earlier, but this line, something I've talked about [00:22:00] before. But this line between a visionary and a fraudster.", "You know, you think that they're on two opposite ends of the spectrum, but in reality they're where the ends of the circle meet. Characteristics of one. One has that many of the characteristics of the other. And sometimes I think the only thing that really separates the two is that the fraudster is able to keep getting mo raising money in order to get through the really difficult time where he or she isn't telling the truth.", "And then they, that person goes down in history as a visionary. Um, but because no one ever looks back to the moment in time when they were lying, the fraudster gets caught in the middle. Um, so Enron's Lo lost access to to the capital markets lost AC access to funding as the market collapsed after the.com boom.", "And people began to wonder whether skilling was telling the truth about Enron's broadband business. And then there were all the disclosures about Andy fasa partnerships if Enron had been able to continue raising money, Business of Enron's called Enron Broadband might well have been Netflix. It was Netflix ahead of its time.", "So Enron just got caught in the middle and all [00:23:00] the fraud, all the fraud got exposed . Um, but that's not because Jeff Skilling wasn't a visionary who had really grand plans for, for, for, for the future. So I think sbf falls somewhere in that spectrum of, of, of fraudster and visionary. And what's gonna be really interesting why I said that this, we bring it to the present time about what happens to the mu narrative.", "If something fails is what happens. Yeah. Is as the world watch watches Twitter implode, um, what does that mean then for the Elon sbf narrative overall?", "Dwarkesh Patel: Yeah. Yeah. Um, going back to the Smartest Guys is the Room, the title obviously suggests something about. The, I guess in general, the ability and the likelihood of very smart people committing fraud or things of that sort.", "Um, but you know, Begar Jones has this book called Hi Mind, where he talks about how the smarter people are more likely to cooperate in prisoners dilemma type situations. They have longer time preference. And one of the things you've written about is the problem in corporate America is people having shorter, [00:24:00] um, uh, you know, doing two too big time discounting.", "So, uh, given that trend we see in general of greater Cooperativeness, um, and other kinds of traits of more intelligent people, do you think the reason we often find people like S B F and skilling running big frauds just by being very intelligent, is it just that on, on average smarter people, maybe less likely to commit fraud, but when they do commit fraud, they do it at such garat scales and they're able to do it at such gar scales that it just brings down entire empires?", "How, how, how do you think about the relationship between intelligence and fraud? .", "Bethany McLean: That's interesting. Um, I'm not sure I know a coherent answer to that. Um, smartest guys in the room as a title was a little bit tongue in cheek. It wasn't meant to say, these guys actually are the smartest guys in the room. It was, it, it was a little bit, it was a little bit ironic, but that doesn't take away from the really good question that you asked, which is what, what, what is that relationship?", "I, I mean, I think if you look at the history of corporate fraud, you are not going to find unintelligent people having [00:25:00] been the masterminds behind this. You're gonna find really, really, really smart, even brilliant people having, having, having been, been behind it, maybe some at part of that is this linkage between the visionary and the fraud star that so many of these, of these corporate frauds are people who have qualities of the visionary and to.", "The qualities of, of a visionary, you have to have a pretty, pretty, pretty, pretty high intelligence. Um, and I do think so many of these stories are, are about then self delusion. So I don't think smart people are any less likely to suffer from self delusion than dumb people. And they're probably more likely to, because you can rationalize, you know, the smart person's ability to rationalize just about anything they wanna rational rationalize is pretty profound.", "Whereas perhaps someone who doesn't have quite the same, the same brain power isn't gonna be able to create a narrative under which their actions are blameless and they're doing the right thing. So I think sometimes, so maybe there is some sort of relationship [00:26:00] there that somebody more qualified than I am would have to study between smart people's ability to, to, to rationalize just about anything as a way of, as part of the path to self delusion and part of the path by which these things happen.", "Yeah, that's completely, that's completely , that's Bethany theory. There's absolutely nothing to back that . I'm just", "Dwarkesh Patel: well clear. Let's do some more speculation. So, um, one of the things, uh, John Ray talked about in his testimony, um, was it two days ago where he said that, you know, FTX had done $5 billion of investments and deals in the last year, and most of those investments were worth a fraction of the value that FTX paid for them.", "And we see this also in, obviously in Enron, right? With, uh, broadband and with, um, ul, or is that how pronounce it, but basically their international department. Yeah. Um, what is this, uh, this obsession with deal making for its own sake? Is that to appease investors and make them think a lot's going on, is that because of [00:27:00] the hubris of the founder, of just wanting to set up a big empire as fast as possible, even if you're getting a bad sticker price?", "What, why do we see this pattern of just, you know, excessive deal making for its own sake?", "Bethany McLean: That's an interesting question too. I'm not sure that that's, um, limited to companies that go splat dramatically. There's a lot of, a lot of deal making in, in corporate America has that same frenzied quality. Um, I haven't seen an updated study on, on this in a, in a long time, but, you know, I began my career working as an analyst in an m and a department at at at Goldman Sachs.", "And. Definitely deals are done for the sake of doing deals. And I once joked that synergies are kind of like UFOs. A lot of people claim to have seen them, but there's no proof that they actually exist. , and again, I haven't seen an updated study on, on, on this, but there was one years back that showed that most m and a transactions don't result in increased value for shareholders.", "And most synergies, most promised synergies never materialize. [00:28:00] Just getting bigger for the sake of getting bigger and doing deals for the short term value of showing Wall Street a projection. That earnings are gonna be so much higher even after the cost of the debt that you've taken on. And that they're these great synergies that are gonna come about from, from combining businesses.", "So I don't know that either the frenzy deal doing or deal doing deals gone wrong is, um, solely limited to people who are committing fraud. , I think it's kinda across the spectrum. , .", "Dwarkesh Patel: Um, um, well one, one thing I find interesting about your books is how you detail that. And correct me if this is the wrong way to read them, but that, uh, incentives are not the only thing that matter.", "You know, there there's this perception that, you know, we've set up bad incentives for these actors and that's why they did bad things. But also, um, the power of one individual to shape a co co company's culture and the power of that culture to enable bad behavior, whether scaling at Enron or with Clarkson Right at Moody's.", "Yeah. Um, is that a good, good way of reading your books or how, how do you think [00:29:00] about the relative importance of culture and incentive?", "Bethany McLean: I think that's really fair. But incentives are part of culture, right? If, if you've set up a culture where, where how you're valued is what you get paid, I think it's a little, it's a little difficult to separate those two things out because, because the, the incentives do help make the culture, but for sure culture is incredibly, um, incredibly compelling.", "I've often thought and said that if I had, when I was leaving my short lived career in investment banking, if I had, if I had gotten in some of the head hunters I was talking to, if one of them had said, there's this great, really energetic, interesting energy company down in Houston, , why don't interview there?", "If I had gone there, would I have been a whistleblower or would I have been a believer? And I'd like to believe I would've been a whistleblower, but I think it's equally likely that I would've been a believer. Culture is so strong. It creates this. What's maybe a miasma that you can't see outside?", "I remember a guy I talked to who's a trader at Enron, really smart guy, and he [00:30:00] was like, after the, after the bankruptcy, he said, of course, if we're all getting paid based on creating reported earnings and there's all this cash going out the door in order to do these deals that are creating reported earnings, and that's the culture of the entire firm, of course it's not gonna work economically.", "He said, I never thought about it. . It just didn't, it didn't, it didn't occur to me. And I think the more compelling the CEO o the more likely you are to have that kind of mass delusion. I mean, there's a reason cult exist, right? . We, we are as human beings, remarkably susceptible to.", "Visionary leaders. It's just, it's the way the human brain is wired. We, we wanna believe, and especially if somebody has the ability to put a vision forward, like Jeff Gilling did at Enron, like Elizabeth Holmes did it Theranos like SPF F did, where you feel like you are in the service of something greater by helping this, vision, , actualize then, then you're, particularly susceptible.", "And I think that is the place where [00:31:00] incentives don't quite explain things. That is, there is this very human desire to matter, to do something important. Mm-hmm to be doing something that's gonna change the world. And when somebody can tap into that desire in people that feeling that what you're doing isn't just work in a paycheck and the incentives you have, but I mean, I guess it is part of the incentive, but that you're part of some greater good.", "That's incredibly powerful. Yeah.", "Dwarkesh Patel: It's what we all speaking of. We all wanna matter. . Yeah. Speaking of peoples psychology, uh, crime and punishment, underrated or overrated as a way to analyze the psychology of people like scaling and S B F or maybe SBF specifically because of the utilitarian nature of SB F'S crime?", "Um,", "Bethany McLean: I think it's, I think it's underrated, overrated. I'm not sure anybody. , I'm not sure anybody has ever proven that jail sentences for white collar criminals do anything to deter subsequent white collar crime. Mm-hmm. , and I think one part of this is the self delusion that I've, that I talked about. Nobody thinks, [00:32:00] oh, I'm doing the same thing as Jeff Skilling did at Enron, and if I, and if I do this, then I too might end up in jail.", "Therefore, I don't wanna do this. I just don't think that's the way the, the, the, the, the thought process works. I think Elizabeth Holmes at Theranos, probably for the most part, convinced herself that this was going to work, and that if you just push forward and push hard enough and keep telling people what they wanna hear and keep being able to raise money, it's gonna work.", "You know, if. . If, if you pause to think, well, what if it doesn't work and I've lied and I go to jail, then, then you'd stop right, right then and there. So I think that, I think that, that I'm, I'm not, I'm not sure it's much of a deterrent. I remember, and partly I'm, I'm biased because I remember a piece, my co-author Peter Alkin, and I wrote out right after Jess Gilling and Kenley were, were convicted and can lay, we're we're convicted.", "And we wrote a piece for Fortune in which we said that the entire world has changed. Now that corporate executives are, um, are, are put on high alert that behavior in the gray area will no longer be tolerated and that it will be aggressively prosecuted. And this was spring of [00:33:00] 2006 and the events that caused the global financial crisis were pretty well underway.", "It didn't. Do much to prevent the global financial crisis. Mm-hmm. , Enron's, Enron's jail time, didn't do anything to present, prevent, Elizabeth Holmes doesn't seem to have done anything to change what Sbf was doing. So I just, I, I just, I'm, I'm, I'm not sure, I'm sure a psychologist or somebody who specializes in studying white color crime could probably make a argument that refutes everything I said and that shows that has had a deterring effect.", "But I just, I just don't think that people who get themselves into this situation, con, con, consciously think, this is what I'm doing.", "Dwarkesh Patel: Yeah. Yeah. Um, speaking of other incentives, stock options, uh, you've spoken about how that creates short-term incentives for the executives who are making decisions. If you wanted to set up an instrument that aligned an executive or a leader's compensation with the long-term performance of a company, what would that look like?", "W would you have the options of less than 10 years instead of a [00:34:00] year? H how would you design it? How do you usually design a compensation scheme to award long-term thinking?", "Bethany McLean: If I could do that, I should ru rule the world . I think that very sweet. I think that is one of the really tough, um, problems confronting boards or anybody who is determining anybody who's determining stock options and that almost anybody who's determining compensation and that most compensation schemes seem to have really terrible unintended consequences.", "They look really good on paper. And then as they're implemented, it turns out that there was a way in which they accomplished exactly the opposite of, uh, thing the people who designing them wanted, wanted them to accomplish. I mean, if you think back to the advent of stock options, what could sound better?", "Right. Giving management a share of the company such that if, if, if shareholders did well, that they'd do well, nobody envisioned the ways in which stock options could be repriced. The ways in which meeting earnings targets could lead to gaming the ways in which the incentive of stock-based [00:35:00] compensation could lead to people trying to get anything they could in order to get the stock price higher and cash out when they're, as soon as their stock options vested.", "So, and even there was, there was, the whole valiant saga was fascinating on this front because the people who designed Mike Pearson's compensation package as ceo e o Valiant, they were convinced that this was absolutely the way to do it. And he got bigger and bigger, um, stock option incentives for hitting certain, for having the stock achieve certain levels.", "But of course, that creates this incredible bias to just get the stock to go up no matter, no matter what else you do. Um, it does seem to me that vesting over the long term is. is, is a much better way to go about things. But then do you create incentives for people to play games in order to get the stock lower at, at various points where there's about to be a stock optional board so they have a better chance of having directions be, be worth, be worth something over the long term.", "And do you, particularly on Wall Street there is this, or in firms where this sort of stuff matters the most? There [00:36:00] is this, there was this clearing out of dead wood that happened where people got paid and they got outta the way and made way for younger people. And I don't know, it was a harsh culture, but maybe it made sense on some level.", "And now at least I've been told with much longer vesting periods, you have people who don't wanna let go. And so you have more of a problem with people who should have retired, stick sticking around instead of in, in, instead of clearing out. And then it also becomes a question, How much money is, is enough.", "So if somebody is getting millions of dollars in short-term compensation and then they have a whole bunch more money tied up in long-term compensation, do the long-term numbers matter? At what point do they, do they, do they really matter? I mean, if you gave me $5 million today, I'm not so sure I'd really care if I were getting another $5 million in 10 years.", "Right. ? Yeah. So, so I think all of that is, is it, it's, I'm not, I'm not sure there's a perfect compensation system. All things considered though, I think longer term is, is probably better, [00:37:00] but.", "Dwarkesh Patel: Yeah, I didn't think about that downside of the long investing period. That's so interesting there. I guess there is no free lunch.", "Uh, so with Enron, um, it, it was clear that there was a lot of talent at the firm and that you had these companies and these trading firms launch at the aftermath by people who left Enron, kinder Morgan and John Arnold's, um, uh, Sintas, uh, that were wildly profitable and did well. Do you think we'll see the same thing with FTX, that while Sbf himself and maybe the, his close cadre were frauds, there actually was a lot of great trading and engineering talent there that are gonna start these very successful firms in the aftermath.", "Bethany McLean: That's, that's interesting. And just, just for the sake of clarification, kinder Morgan was actually started years before Enron's collapsed, when Rich Kinder, who was vying with Jeffs skilling in a sense, to become Chief Operating Officer. Um, Ken Lay, picked Jeffs skilling and Kinder left. Mm-hmm. and took a few assets and went to create Kinder, kinder Morgan.", "But your overall point, I'm just clarifying your overall point holds, there were a lot of people who [00:38:00] left Enron and went on to do, to have pretty, pretty remarkable careers. I think the answer with ftx, I bet there will be some for sure. But whether they will be in the crypto space, I guess depends on your views on the long-term viability of, of, of the crypto space.", "And I have never , it's funny is crypto exploded over the last couple of years. I was, I've been working on this book about the pandemic and it's been busy and difficult enough that I have not lifted my head to, to think about much else. And I always thought, I don't get it. I don't understand , I mean, I understand the whole argument about the blockchain being valuable for lots of transactions and I, I get that, but I never understood crypto itself and I thought, well, I just need to, as soon as this book is done, I just need to put a month into understanding this because it's obviously an important, important enough part of our world that I need to figure it out.", "So now I think, oh, Okay, maybe I didn't understand it for a reason and maybe, um, maybe there isn't anything to understand and I've just saved myself a whole life of crime because it's all gone. And you have [00:39:00] people like Larry Fink at BlackRock saying, whole industry is gonna implode. It's done. And certainly with the news today, this morning of finances auditor basically saying We're out.", "Um, I, I don't, I don't know how much of it was, how much of it was, is, was a Ponzi scheme. You might know better than I do. And so I don't know what's left after this whole thing implodes. It's a little bit like, there is an analogy here that when Enron imploded, yes, a lot of people went on to start other successful businesses, but the whole energy trading business is practiced by kind of under capitalized, um, um, energy firms went away and that never came back.", "Yeah. And so I, I, I don't, I don't know, I'm, it'll be, I, I don't know. What do you. The", "Dwarkesh Patel: time to be worried will be when Bethany McLean writes an article titled Is Bitcoin Overvalued for the Audience. My Moments on That ? Yeah, for the audience that, that was, I believe the first skeptical article about Enron's, um, stock price.", "Yeah. Uh, and it was titled [00:40:00] Is Enron Overvalued. In aftermath understated, , title. But ,", "Bethany McLean: , I joked that that story should have won, won, won awards for the NICU title and business journalism history. , given that the company was bankrupt six months later was overpriced", "Dwarkesh Patel: Um, uh, well, let me ask a bigger question about finance in general. So finance is 9% of gdp, I believe. How much of that is the productive use and thinking and allocation of the, uh, the capital towards their most productive ends? And how much of that is just zero sum or negative sum games? Um, if, if you had to break that down, like, is 9% too high, do you think, or is it just.", "I think it's", "Bethany McLean: too high. I have no idea how to think about breaking it down to what the proper level should be. But I think there are other ways to think about how you can see that in past decades it hasn't been at the right level when you've had all sorts of smart kids. Um, Leaving, leaving business school and leaving college and heading into [00:41:00] finance and hedge funds and private equity is their career of choice.", "I think that's a sign that that finance is too big when it's sucking up too much of, of, of the talent of the country. Um, and when the rewards for doing it are so disproportionate relative to the rewards of of, of doing other things. Um, the counter to that is that there've also been a lot of rewards for starting businesses.", "And that's probably, I think, how you want it to be in a, in a product. In a productive economy. So I think the number is, is too high. I don't know how to think about what it should be other than what a, actually, a former Goldman Sachs partner said this to me when I was working on all the devils are here, and she said that finance is supposed to be like the, the substrata of our world.", "It's supposed to be the thing that enables other things to happen. It's not supposed to be the world itself. So the, the role of a financial system is to enable businesses to get started, to provide capital. That's what it's supposed to be. It's the lubricant that enables business, but it's not supposed to be the thing itself.", "Right. And it's become the thing itself. [00:42:00] You've, you've, you've, you've, you've got a problem. Um, um, and I think the other,", "Dwarkesh Patel: there's your article about crypto , that paragraph right there. .", "Bethany McLean: There you go. That's, that's a good, um, and I think, I think the other way, you, you, you can see, and perhaps this is way too simplistic, but the other way I've thought about it is that how can it be if you can run a hedge fund and make billions of dollars from, and have five people, 10 people, whatever it is, versus starting a company that employs people mm-hmm.", "and changes a neighborhood and provides jobs and, you know, provides a product that, that, that, that, that improves people's lives. It, it is a shame that too much of the talent and such a huge share of the financial rewards are going to the former rather than the latter. And that just can't mean good things for the future.", "Dwarkesh Patel: Yeah. Yeah. And I, you know, when people criticize technology, for example, for the idea that, you know, these people who would've been, I don't know, otherwise teachers or something, they're, you know, making half a million dollars at Google. [00:43:00] Um, and I think like when I was in India, people were using Google Maps to get through the streets in Mumbai, which is, which is unimaginable to me before going there that, you know, you would be able to do that with, um, a service built out of Silicon Valley.", "And so, Yeah, I think that actually is a good allocation of capital and talent. I, I'm not, I'm not sure about finance. Um, yeah,", "Bethany McLean: I think I, I, I agree with you. I think there are other problems with Google and with the, the social media giants, but, but they are real businesses that employ people, that make products that have had, uh, huge.", "Um, impact on on, on people's, on people's lives. So in, in that sense, it's very different than a private equity firm, for instance, and especially private equity, even more so than hedge funds draws my ire. Mm-hmm. , because I think one of the reasons they, that it, they've been able to make part of the financialization of our economy has been due to super, super low interest rates and low interest rates that have enabled so many people to make so much money in finance are not, they're just a gift.", "It wasn't because these people were uniquely smart, they just [00:44:00] found themselves in a great moment in time. And the fact that they now think they're really smart because money makes me crazy.", "Dwarkesh Patel: Um, are Fanny and Freddy America special purpose entities? Are they our Alameda? It's just the way we hide our debt and uh, that's interesting.", "Yeah.", "Bethany McLean: Well, I guess we, you know what? I don't know anymore because, so I last wrote about them when was it in 2016 and I don't know now. No, you're right. Their, their debt is still off, off, off balance sheet. So Yeah, in a lot of ways they, they were. . I would argue though that the old Fanny and Freddy were structured more honestly than, than the new Fanny and Freddy, that it really is conservatorship that have made them, um, that have made them America's off balance sheet entities, because at least when they were their own independent entities.", "Yes, there was this odd thing known as the implicit guarantee, which is when you think about, back to your point about efficient markets, how can you possibly believe there's an as such a thing as an efficient market when their [00:45:00] Fanny and Freddy had an implicit guarantee, meaning it wasn't real. There was no place where it was written down that the US government would bail Fanny and Freddie out in a crisis, and everybody denied that it existed and yet it did exist.", "Yeah.", "Dwarkesh Patel: No, but we, I feel like that confirms the official market hypothesis, right? The, the market correctly, they thought that mortgages backed by Fannie and Freddy would have governments. Uh, okay, okay. You might be father", "Bethany McLean: and they did . You might be right. I, I, I think what I was getting at you, you might be right.", "I think what I was getting at is that it is such a screwed up concept. I mean, how can you possibly, when I first, when people were first explaining this to me, when I first read about Fanny and Freddie, I was like, no, no, wait. This is American capitalism . This is, no, wait. What? I don't, I don't understand . Um, um, so yeah, but I, I, I, I think that Fanny and Freddie, at least with shareholders that were forced to bear some level of, of the risks were actually a more honest way of going about this whole screwed up American way of financing mortgages than, than the current setup is.", "Dwarkesh Patel: What [00:46:00] is the future of these firms? Or are they just gonna say in conservatorship forever? Or is there any developments there? Well, what's gonna happen to them?", "Bethany McLean: The lawsuit, the latest lawsuit that could have answered that in some ways ended in a mistrial. Um, I don't think, I don't, I don't think unfortunately anybody in government sees any currency in, and I mean, currency in the broad sense, not in the literal sense of money in, in taking this on.", "And unfortunately, what someone once said to me about it, I think remains true and it's really depressing, but is that various lawmakers get interested in Fannie and Freddy. They engage with it only to figure out it's really, really goddamn complicated. Mm-hmm. and that, and that any kind of solution is gonna involve angering people on one side of the aisle or another and potentially angering their constituent constituents.", "And they slowly back away, um, from doing anything that could, that, that could affect change. So I think we have a really unhealthy situation. I don't think it's great for these two [00:47:00] entities to be in conservatorship, but at this point, I'm not sure it's gonna change.", "Dwarkesh Patel: Yep. Speaking of debt and mortgages, um, so total household debt in the United States has been, uh, climbing recently after it's, it's like slightly d decline after 2008, but I think in quarter three alone it increased 350 billion and now it's at 16.5 trillion.", "Uh, the total US household debt, should we worried about this? Are, are, are we gonna see another sort of collapse because of this? Or what, what should we think about this?", "Bethany McLean: I don't know. I don't know how to think about that because it's too tied up in other things that no one knows. Are we going to have a recession?", "How severe is the recession going to be? What is the max unemployment rate that we're gonna hit if we do, if we do have a recession? And all of those things dictate how to, how to think about that number. I. Think consumer debt is embedded in the bowels of the financial system in the same way mortgages were.", "And in the end, the, the, the [00:48:00] problem with the financial crisis of 2008, it wasn't the losses on the mortgages themselves. It was the way in which they were embedded in the plumbing of the financial system. Mm-hmm. and ways that nobody understood. And then the resulting loss of confidence from the fact that nobody had understood that slash lies had been told about, about that.", "And that's what caused, that's what caused everything to, to collapse. Consumer debt is a little more visible and seeable and I, I don't think that it has that same, um, that same opaque quality to it that, that mortgage backed securities did. I could be, I could be wrong. I haven't, I haven't, I haven't dug into it enough, enough to understand enough to understand that.", "But you can see the delinquencies starting to climb. Um, I mean, I guess you could on, on, on mortgages as well, but there was this, there was this profound belief with mortgages that since home prices would never decline, there would never be losses on these instruments because you could always sell the underlying property for more than you had [00:49:00] paid for it, and therefore everything would be fine.", "And that's what led to a lot of the bad practices in the industry is that lenders didn't think they had to care if they were screwing the home buyer because they always thought they could take the home back and, and, and, and, and make more money on it. And consumer debt is, is unsecured. And so it's, it's, it's different.", "I think people think about it differently, but I'd have. I'd have to, I'd have to do some more homework to understand where consumer debt sits in the overall architecture of the financial industry.", "Dwarkesh Patel: I, I, I'm really glad you brought up this theme about what does the overall big picture look like? I feel like this is the theme of all your books that people will be, So obsessed with their subsection of their job or, or that ar area that they won't notice that, um, broader trends like the ones you're talking about.", "And in Enron it's like, why, why, why do we have all these special purpose entities? What is the total debt load of Enron? Um, or with the, you know, mortgage back securities a similar kind of thing, right? What, what, uh, maybe they weren't correlated in the past, [00:50:00] but what's that? Do we really think that there's really no correlation, um, uh, between, uh, delinquencies across the country?", "Um, so that, that kind of big picture, think. Whose job is that today? Is it journalists? Is it short sellers? Is it people writing on ck? Who's doing that? Is it anybody's job? Is, is it just like, uh, an important role with nobody assigned to it?", "Bethany McLean: I think it's the latter. I think it's an important role with nobody, with nobody assigned to it, and there there is a limit.", "I mean, , I hate to say this, it is not, uh, um, it is not an accident that many of my books have been written. That's probably not fair. It's not true of my book un fracking, but that some of my books have been written after the calamity happened. So they weren't so much foretelling the calamity as they were unpacking the calamity after it happened, which is a different role.", "And as I said at the start of our conversation, I think an important one to explain to people why this big, bad thing took, took place. But it's not prediction, I don't know, as people that were very good at, at prediction, um, they tried [00:51:00] to set up, what was it called? In the wake of the global financial crisis, they established this thing called fsoc, and now I'm forgetting what the acronym stands for.", "Financial Security Oversight Committee. And it's supposed to be this, this body that does think about these big picture. That thinks about the ways, the ways an exam, for example, in which mortgage backed securities were, um, were, were, were, were, were, were, were repopulating through the entire financial system and ways that would be cause a loss to be much more than a loss.", "That it wouldn't just be the loss of money and that security, it would echo and magnify. And so that there are people who are supposed to be thinking about it. But I think, I think it's, it's, it's really hard to see that and. In increasingly complex world, it's even, it's even harder than it was before, because the reverberations from things are really hard to map out in, in, in advance, and especially when some part of those reverberations are a loss of confidence, then all bets [00:52:00] are off because when confidence cracks, lots of things fall apart.", "But how do you possibly analyze in any quantitative way the the risk that that confidence will collapse? Mm-hmm. . So I think it's, I think, I think, I think it's difficult. That said, and of course I am talking my own book here, I don't think that the lack of the, the increased financial problems of journalism really help matters in that respect, because in an ideal world, you want a lot of people out there writing and thinking about various pieces of this, and then maybe somebody can come along and see the.", "Pieces and say, oh my God, there's this big picture thing here that we all need to be thinking about. But there's, there's a kind of serendipity in the ability to do that one, that one that the chances, I guess the best way to say that is the chances of that serendipity are dramatically increased by having a lot of people out there doing homework, um, on the various pieces of the puzzle.", "And so I think in a world, particularly where local news has been decimated mm-hmm. , um, the [00:53:00] chances of that sort of serendipity are, are definitely lower. And people may think, oh, it doesn't matter. We still got national news. We've got the Washington Post, we've got the Wall Street Journal, we've got the New York Times.", "Um, I would love to have somebody do a piece of analysis and go back through the New York Times stories and see how many were sparked by lp, a piece in the local paper that maybe you wouldn't even notice from reading the New York Times piece, because it'd be in like the sixth paragraph that, oh yeah, credit should go to this person at this local paper who started writing about this.", "But if you no longer have the person at the local paper who started writing about this, You know, it's, it's, it's, it's less likely that the big national piece gets written. And I think that's a part of the implosion of local news, that people, a part of the cost of the implosion of local news that people don't really understand the idea that the national press functions at, at the same level, um, without local news is just not true.", "Dwarkesh Patel: Yeah. And, but even if you have the local news, and I, that's a really important point, but even if you have that local news, there still has to be somebody whose job it is to synthesize it all together. And [00:54:00] I'm curious, what is the training that requires? So you, I mean, your training is, you know, math and English major and then working at working in investment banking.", "Um, is that the, uh, I mean, obviously the anecdotal experience then equals one, seems that that's great training for synthesizing all these pieces together. But what is the right sort of education for somebody who is thinking about the big picture?", "Bethany McLean: I, I don't, I don't know.", "And there may be, there may be, there are probably multiple answers to that question, right? There's probably no one, one right answer for me. In, in the end. My, my math major has proven to be pivotal. Even though , my mother dug up these, um, my, my parents were moving and so my mother was going through all her stuff and she dug up these, some my math work from, from college.", "Literally, if it weren't for the fact that I recognized my own handwriting, I would not recognize these pages on pages of math formula and proofs. And they're like, get gibberish to me now. So , but I, but I still think that math has, so I do not wanna exaggerate my mathematical ability at this stage of [00:55:00] the game.", "It's basically no. But I do think that doing math proofs any kind of formal, any kind of training and logic is really, really important because the more you've been formally trained in logic, the more you realize when there are piece is missing and when something isn't quite, isn't quite adding on, it just forces you to think in, in a way that is, that in a way that connects the dots.", "Um, because you know, if you're moving from A to B and B doesn't follow a, you, you understand that B doesn't follow a And I think that that, that, that kind of training is, is really, really important. It's what's given. , whatever kind of backbone I have as a journalist is not because I like to create controversy and like to make people mad.", "I actually don't. It's just because something doesn't make sense to me. And so maybe it doesn't make sense to me because I'm not getting it, or it doesn't make sense to me because B doesn't actually follow, follow away, and you're just being told that it does. And so I think that, I think that training is, is really, really important.", "Um, I also have, have often thought [00:56:00] that another part of training is realizing that basic rule that you learned in kindergarten, which is, um, you know, believe your imagination or you know, your imagine follow your imagination. Because the truth is anything can happen. And I think if you look at business history over the last couple of decades, it will be the improbable becoming probable.", "Truth over and over and over again. I mean, the idea that Enron could implode one of the biggest, supposedly most successful companies in corporate America could be bankrupt within six months. The, from its year, from its stock price high. The idea that the biggest, most successful, um, financial institutions on wall, on Wall Street could all be crumbling into bankruptcy without the aid of the US government.", "The idea that a young woman with no college degree and no real experience in engineering could create, uh, uh, um, could create a machine that was going to revolutionize blood testing and land on the cover of every business magazine, and that this [00:57:00] whole thing could turn out to be pretty much a fraud. The entire idea of ftx, I mean, over and over again, these things have happened.", "Forget Bernie Madoff if you had told people a year ago that FTX was gonna implode six months ago, three months ago, people would've been like, no, no, no, no, no, no, no. And so I think just that, that, that, that knowledge that the improbable happens over and over again is also a really fundamental, fundamentally important.", "Dwarkesh Patel: If we're con continuing on the theme of ftx, I, I interviewed him about four or five months ago.", "Wow. And this is one of these interviews that I'm really, I'm, I don't know if embarrass is the right word, but I knew things then that I could have like asked, poked harder about. But it's also the kind of thing where you look back in retrospect and you're. If it had turned out well, it's, it's not obvious what the red flags are.", "Um, while you're in the moment, there's things you can look back at the story of Facebook and how, you know, Marcus Zuckerberg acted in the early days of Facebook and you could say, if the thing fell apart, that this is why, or, you know, this is a red flag. So [00:58:00] I have a hard time thinking about how I should have done that interview.", "Bethany McLean: Cliches are often cliches for a reason. And the one about hindsight being 2020 is, is, is one of the best cliches ever. And I am so fundamentally annoyed by stories that say, here were all the red flags. Why didn't anybody see it?", "Well, Oftentimes the person writing that story didn't see the red flags either. And it's really easy in retrospect to pick up all the signs that there wa that there was a problem. And it's really hard in the moment. And there's, there's a little bit of a, of a, of, I think in all of us, a subconscious sense that this doesn't sound quite right, but getting the subconscious suspicion to rise to the level of conscious thought is, is also really difficult.", "So I think, I think again, there is, there is, it is one value of, and I hope we're coming back to a world that appreciates old people because I'm getting older . But, um, there is something, there is some value in having, having seen it before that I think the red flags do maybe tend to rise to the level of conscious, um, conscious thought.", "That said, if I [00:59:00] had gotten interested in crypto a year ago, would I have seen the problems with Ft. Doubtful. I, I don't know. Mm-hmm. , you just, you just, you just can never know. Sometimes these things also depend on the way in which they're presented to you and by whom. And I think that shouldn't be, I think it's not, not intellectually honest,", "but if somebody you really respect comes to you and tells you, this business is the next greatest thing, and this person is brilliant, chances are that preconception is gonna be lodged in your mind in a way that's gonna make it really hard for you to see the red flags. Whereas if first you heard of this company with somebody coming to you, somebody really smart, you like coming to you and saying, yeah, I don't, I don't think this makes sense.", "These, these are my problems with this. You're gonna be far more apt to see the red flags. And I say, it's not, that's not the right way, not the right phrase. I say it's not intellectually honest or not smart, because really you kind of wanna strip that away. You wanna strip those biases away because even really smart people, Make mistakes all the time.", "And so you want to the extent possible to think for yourself, but, [01:00:00] but of course, or it's, it goes back to math, right? Order of operations, , in some order. The order in which information is presented to you unfortunately is gonna have an effect on how you see that, how you see that in information. Mm-hmm.", "Dwarkesh Patel: Yeah. Um, let's talk about fracking. So in this year, uh, I think quarter two, was it the first time that fracking and SHA shield finally became profitable? Um, were the fracking investors, right? That, that there would be an oil shock and now it is actually profitable? Or how do you think about Saudi America in the aftermath of the events?", "Bethany McLean: Well, I, at least from my understanding of it, and again, because I have been, um, out to lunch on this difficult book I'm writing, I haven't actually, I'm not as up to date on this as I should be. So take what I'm saying with, with the grain of salt. Um, but I think actually it proves the underlying thesis because fracking is profitable, but at a much smaller scale than it was [01:01:00] when people were saying, this is, this is gonna change the world.", "So it hasn't, the idea that us shale oil was the swing factor in world oil prices, I don't think anybody believes that anymore because it can't profitably produce the amount of oil required for it to be the major force that it was, that it was supposed to be. So I think what has happened actually proves the point instead of negating the point and for.", "For how long it can be profitable. Um, um, and at what level of oil prices is also, was also part of the, under the, under the underlying thesis, which is how much oil is there actually that can be extracted at, at, at this price. And so the fact that that very high oil prices, there is a certain amount that can be extracted profitably.", "Um, that's, that's not, that's not surprising. I think the way in which the book's thesis would be wrong is if it turned out US shale oil, oil could be the, the, the, the swing producer for the world. And we could be the world's largest oil producer and there could, the [01:02:00] Permian could continue to grow at whatever it was, 20% a year and grow profitably.", "And then I would have to say I, yeah, the, the, the investors were right. Um, or the people who believed in this were right. So.", "Dwarkesh Patel: Yep. Um, if you could have a Robert Carro like biography of anybody in finance in the last a hundred years, especially somebody who we feel hasn't been talked about or covered enough, who to be, who do we need?", "A thousand pages?", "Bethany McLean: About a thousand pages. I don't know that we need a thousand pages about anybody. , . Um, Um, who would I like to write about? Um, let me think about that. There are, there are some books I've been mulling over that I'd like to do, but I'm not sure I I think I might prefer to keep those to myself for, for, for a little while.", "I don't know. I'm also always gonna be, at least I'm trying to change my orientation, but I'm always gonna be biased toward the, the thing that went badly wrong rather than the thing that, that, that worked out. I don't know, maybe that betrays some underlying darkness in my personality, but [01:03:00] I always find that more fun.", "So I'm, I'm probably not the right person to ask about, uh, uh, a glowing biography. I don't, you know, I don't, I don't know. You could say, you could say Warren Buffet, but Pliny has been written about buffet. I dunno if there's anything to be. added to that. So let me, let me think. If I come up, if I come up with a great answer that I'm prepared to share, I will, I will tell you, I don't know that, maybe it goes back to your question about how big a role that finance should play in our economy overall, too.", "Maybe. I just don't think that anybody in finances necessarily worthy of that, of mm-hmm. of that kind of, of that kind of, um, that kind of sustained focus.", "Dwarkesh Patel: Yeah. Yeah. Um, I wanna talk about the rating agencies because they have featured heavily in both the story about the housing crisis and Enron. And as a libertarian leading person, that's really been, um, that, that, that's been kind of devastating in the sense that, you know, there's a hope that you can maybe replace [01:04:00] things like the F D A or at least functions of it, um, with agencies, private agencies that are tasked with like, rating, how good is this food?", "Or, uh, you know, how safe is this airplane? Things like that. And, I I'm curious if you think that there is any possibility of having any private agency that is being paid by the person they're evaluating, being able to give fair evaluations. And this even brings us, it doesn't even have to be, um, rating agencies.", "It can be things like, you know, now we have the big four accounting firms. I'm curious how much trust you put in them in terms of being able to do evaluations fairly. Do we think, do, have you passed through the Art Arthur Anderson Anderson days? Um, how, how credible are", "Bethany McLean: these institu. Look at EYs roll and Wirecard, right?", "And then mm-hmm. I think you can, uh, and some of the scandals PWC has been involved in over the last bunch of years. I mm-hmm. . Yeah. Um, um, it is, it is a really good question. I thought you were gonna go somewhere different with the rating agencies, which is why in a supposedly free market you have so many investors Yeah.", "Who rely [01:05:00] on credit ratings.", "Dwarkesh Patel: That's a good", "Bethany McLean: question. And I, and I do think that there's, uh, this is another perhaps counter to your efficient. Theory or maybe counter to a libertarian view of the world. But you know, a lot of big investors who complain about the rating agencies after there, there's been a disaster, really want the cover provided by the rating agencies because they can say, well, you told me this was aaa.", "Mm-hmm. , I bought it, therefore I can't be blamed. And so once the crisis has passed, the appetite of big investors to reform the rating agencies or do away with them, meaning then they would have to do their own work and not just say, well, I bought a AAA rated security. It's really next to nil. And that's a good part of the reason why reform doesn't, doesn't happen.", "I mean, remember the credit rating agencies were reformed in 2006. In the wake of interim, the Credit Rating Agency Reform Act was passed. And yet the rating agency sat at the center of the global financial crisis just a few years later. They, they, I, I'm not sure they, I'm not sure they really, they, they, they [01:06:00] can be reformed, but I'm also not sure there is a perfect, um, , maybe it goes back to your question about the ability to see the big picture and foresee a problem.", "I'm not sure there is a perfect regulator who, mm-hmm. wasn't being paid by companies who could then do a better job. You'd like to believe that's true, that if they were a government agency in charge of credit rating and they weren't paid by the companies and the securities. They weren't, they weren't they, they, and they weren't paid by, by the companies that, that would lead to better ratings, would it?", "Mm-hmm. . I, I don't, I dunno, right? , so I'm, I'm not, I'm not, I'm not sure. There's, I guess another way of saying this is very easy to criticize the current system as really problematic. And it's certainly the fact that credit rating agencies were making so much money by rating subprime, mortgage backed securities for sure played an enormous role in, in, in what happened.", "But then if you ask the next question, well, what's the alternative ? That's, that's [01:07:00] when it starts, that's when it starts to get to get pretty complicated.", "Dwarkesh Patel: Yeah. Yeah. Um, now that we're going to have this long proceeding on FTX in terms of compensating the people who are harmed, looking back at Enron, I mean, that was a long process.", "And I think you said in the book that a billion dollars of the remaining sum that should have been doled out to the victims was actually spent in legal fees and the procedures. If, if I read about that number.", "Bethany McLean: I think so. I'm not sure. I have to admit I don't remember, but it was some enormous side of it.", "Yeah.", "Dwarkesh Patel: So what should be the procedure when an implosion like this happens because it seems suboptimal that so much. I mean, it goes back to their finance discussion, right? Like, so much of the capital that these people are supposed to be doling out is just going to themselves. Um, do we need something like the F FAA when the plane crashes, they have a body of experts that kind of figures out what happened?", "Do we need a completely non legalistic or a different, uh, sort of procedure in these kinds of situations? How should the ftx, um, uh, [01:08:00] the, the, the dis disbursement of the remaining assets and so forth, how should that be done?", "Bethany McLean: It's a really interesting question and a really interesting analogy and something people have brought up to me that I've never really dug into, which is the incredible success of the f a a in cutting down, on, , problems with planes because of the incredibly thorough job they do in investigating the cause of a CRA crash afterwards.", "And so maybe somebody's written about this, but to really get inside that process and the incredibly powerful, profound role it's played might provide an interesting roadmap for, for corporate America and, and, and how to do this sort of thing and how to, how to prevent it in the future. Maybe, maybe the analogy doesn't hold up in some kind of, in some kind of subtle way, but for sure that's, that's, that's, that's a really, that's a really interesting question because you do, you, you need this to happen, right?", "You need the bankruptcy. You, you need. You need the excavation and even if it costs a great deal of money to just, for all of us, the worst possible outcome would be for everybody to throw their hands up and say, [01:09:00] okay, it's done. , nobody, no, no work. Because, because the truth of the matter is even journalists who excavate, who try to excavate this stuff, we depend on the work done by by others.", "I mean, I could not have written my Enron book if it weren't for the Justice Department investigating and their indictments for the role of Congress in getting all these documents from, from all these internal documents from Enron, including by the way, the list of executives and their cell phone numbers, which turned out to be incredibly valuable.", "Um, um, um, but you can't, you the, the role of the bankruptcy examiner and getting in there and really uncovering, um, all of what Enron was doing in order to manipulate its, its earnings. There's no way that a journalist, you could do that without, without all of that data that's provided by all these entities that are doing their own investigation.", "Because the journalist, you don't have subpoena power. You can't, you literally can't do this. And it becomes an incredibly important part of, of, of telling the overall story. So it's hard for me to say [01:10:00] when it's a calamity like this, that those dollars are wasted. But is there a better way to go about it?", "Maybe.", "Dwarkesh Patel: Maybe. Yeah. Uh, another similarity between Enron and FTX is that the bankruptcy is being overseen by John Ray, and this is a rather mysterious person. I, I believe I, the, one of the first times you see in a photo of him is when he appeared in front of Congress a few days ago. Um, and I checked the index of the smartest guys in the room.", "Um, and I, I don't think he was mentioned in the book. Who is this guy? . What, what's his deal?", "Bethany McLean: I don't actually really know. I feel like people have made a really big deal over the bankruptcy administrator being the same person. I'm not really sure how much that that matters.", "Okay. I mean, the bankruptcy administrator, when people, it was all over the press, you know, he says this is a bigger disaster than, than whatever it was. I can't remember the exact quote, but that's his incentive to say that because then any money that he can recoup looks like, due to his genius in administrating this thing in [01:11:00] bankruptcy.", "If you say, oh, this wasn't that big a deal, everybody's gonna get their money back, and then they don't, then you've got a huge problem. Right? , if you say, this is the, this is, this is terrible and it's awful, and there will be no money left because the the, then any money you get back, people are like, look, what a great job he did.", "So I, yeah, I, I, I, I don't know, I don't know, I don't remember him being a really pivotal figure in the, in, in, in, in the un in the excavation of Enron.", "Dwarkesh Patel: Yeah. Yeah. Um, I, I, I wanna ask you a bit about your writing process. Do you go into the book with some sort of thesis about the actors? And if so, is it often that you realize that somebody you thought was a bad actor is actually one, one of the good guys and or the other way around?", "Uh, or is basically the, your initial picture confirmed by further invest.", "Bethany McLean: Well, I guess that's a tough question about the books I've done because the books I have done, especially the two big ones, were after the fact and it was pretty [01:12:00] clear, a, that something went badly wrong, and B, that the people who were in power had to be responsible for it because they weren't responsible.", "Who else was the ways in which they were responsible and the degree to which they were responsible. Um, open questions that I don't think I had a, a view on going, going in. Uh, to that point on the financial crisis, I actually ended up changing my mind pretty, pretty radically. I wrote a piece in the early stages of the financial crisis saying that basically homeowners were equally, equally to blame because homeowners had cashed out, had had homeowners, took out risky mortgage mortgages, and without personal responsibility what were lost.", "And homeowners had also cashed out and people had made a lot of money. So the idea that this was all the faults of the banks was, was absurd. . And I still think that's true to some extent because without personal responsibility mm-hmm.", "we, we are lost. But I remember coming across this presentation that had been given by Washington Mutual, where they were, which was one of the big lenders at the time, um, no longer exists. And they were trying to get [01:13:00] people to take out, it was a whole internal presentation about how you got someone to take out a risky mortgage.", "And what they really wanted was the same 30 year fixed rate mortgage that their, that their parents had had. And it was all the tactics and, and tips you could use to twist their arm to take out this, this riskier mortgage because those were the ones that were in demand to be packaged up into subprime mortgage backed securities.", "And so Wamo could turn around and sell those mortgages for more money to Wall Street than they could the say 30 year fixed rate mortgages. And then I remember thinking there's just something wrong with a world where all of the responsibility is on consumers to understand and none of the responsibility is on the people selling the product.", "Mm-hmm. To be honest mm-hmm. About, about what this thing is. So anyway, that was a case where I. Where I changed, I changed my mind over, over the course of reporting a book. I try to, I try not to work on for stories. I try not to work on stories where there has to be an answer. Because, because if you're working on a story where it has to be one way for it to be a good story, then if it's not that, then you're biased to wanna see it one way.", "[01:14:00] And if it's not that way, you no longer have a story. So I always prefer to work on things where it's just really interesting and whatever you end up thinking or saying about it, it's still a really interesting story. So I can't say it's always been, so, I, I guess one of the last stories I did for Vanity Fair was when David Sackler of the, of the, uh, wanted to talk to me about what had of the Sacklers in Purdue and the opioid crisis, wanted to talk about his view of, of what had gone wrong and that in a way.", "It was, it was a weird story for me because usually I'm working from the outside in. I don't get stories where people wanna cooperate with me for, I guess all these reasons, . But, um, but in a way it was my favorite type of story because it was really interesting that he was willing to speak publicly. And if I had ended up thinking the slackers were totally blameless and the whole opioid crisis, I could have written that because it was really interesting that he was speaking publicly.", "And so I like, I like stories where that's where, where, where it's a [01:15:00] story no matter how you end up viewing the main characters and how you end up viewing the arc of the thing. Does that make sense?", "Dwarkesh Patel: Yeah. Yeah, it does. And I guess it raises another question, is that the people you've studied, um, in your books and your journalism, ha, have you concluded afterwards that, you know, there's some people who are fundamentally deceptive and there's some people who are, you know, ethical and that there's sort of this line, or, uh, do you feel like if anybody was put into skilling shoes at the time or SVF shoes, There's a good chance they would've done the same kinds of things.", "Bethany McLean: I think we all have the capacity to be deceptive. I think it's part of human nature. Some people are more inclined toward truth telling and transparency than other people. Maybe just by, by by nature, maybe by nurture.", "And what's been rewarded in your career and your life, um, inclines you to either be open and honest or to try to try to hide things. And maybe that's, maybe that's more the experiences that you've had, but I think we are all capable of a fundamental level of [01:16:00] deception for, for CEOs. Whenever people look at a Jeff Skilling or an Angela Mozilla or any of these characters and say, I could never have done that.", "I mean, there's a fundamental rationalization. Skilling by manipulating Enron's money was, and Enron's earnings was trying to keep the stock price, stock price high. So he didn't hurt investors by having the stock go down the way it would've been if he had been radically honest about all of Enron's problems.", "And so we're all capable of that kind of rationalization, that what we're doing is really in the long-term interests of somebody else and really in their best interest, even if it's, even if it's not telling the truth. Um, and I think, I think to say that you're not capable of that. I, I mean, I don't know, I almost think that's more dangerous than admitting that you're capable of it because Yeah.", "Yeah. Admit that if you admit that you're capable of it, then you can make a fundamental choice to either behave that way or, or, or not to behave that way. But there's also another component of, um, human nature that I think is more [01:17:00] prevalent in some people than others. And it's not, uh, I don't know if you'd characterize it as dishonesty, but it's very deeply believing something in the moment and convincing other people because you so deeply believe it, and a month later you don't believe it anymore because you're somebody who changes their mind.", "So is what you told people a month earlier a lie, or was it a truth at its moment in time? And so there's, there's that too that makes it more difficult to label somebody fundamentally deceptive, right? And they look like a lie to the person who believed what that person was saying and acted on that person's advice and belief.", "And a month later finds out that the person sold their stock or no longer believes that. But was was the person being dishonest in there in that moment?", "Dwarkesh Patel: Not necessarily. Yeah. And it raises the interesting question of moral luck, I'm sure there were tons of CEOs and there are tons of CEOs today who are like Ken Le who are just basically disconnected from the business they started and are kind of hanging on.", "And it just so happened that Ken Le was the [01:18:00] ceo e o of the one that was committing massive fraud. Uh, but it, it, it's kinda like texting in. Texting and driving. Yeah. Where lots of people do it. One guy crashes and obviously he should be punished, but he's no more culpable than the rest of us who do it,", "Bethany McLean: I like that concept of moral luck. I think that's incredibly true. Yeah. Um, it goes with the concept of, you know, somebody who is an investor over the last 40 years. Sure. Probably most people did, did really, really well. Right? With the tailwind of declining interest rates. Does that mean that these people were brilliant or that they lucked into the right field?", "And of course, some people didn't look into it. They saw what was going on and chose it and, and you know, so there, so there's that too. But, but. , maybe that trivializes your, your question because I think that's a very profound question, and I think there really is, um, there definitely is such a thing as moral luck, and the only way in which you can tip the scales is by trying to be aware of bad people and bad situations and keeping yourself clear of them.", "Because back to your earlier question about culture, , [01:19:00] it's a trite cliche, but you are who you're, you surround yourself with either in a job or, or in your friendships. And so if you wanna have the best chance possible of, of, of steering clear of accidents, you do have to be careful about the situations you choose to put yourself into.", "Um, because the, i the idea that you can remain an island in a, in a bad situation is, is not true of most of us.", "Dwarkesh Patel: Yep. Yep. Uh, final question, or I guess second to final question. What advice would you give to people who. Want to do something similar to what you've done over your career, whether that's investigative journalism or some other role in maybe finance or technology that involves putting together the big picture.", "What advice do you have for them", "Bethany McLean: that's, that's an interesting question. I can't give advice for journalists anymore because the world in which I grew up no longer exists. Hmm. I once would've told anybody to go work on, and maybe that's, maybe, maybe that's leading me to the right answer, but the world in which I grew up in journalism no longer exists.", "And so to [01:20:00] go off and be a writer, um, back when I did it, you know, you could get a job at a magazine and you could take home a paycheck that would enable you to cover your rent. And if you were like me and didn't have any family money, then you needed that . And so the idea now of telling somebody to go off and pursue a career as a writer, well, if you don't have any other source of, um, source of external support, that it's, it's a lot more difficult than it, than it, than it, than it once was, but, actually for me, I think that does lead to a better and deeper answer to your question, which is that right?", "Don't allow yourself to be seduced by the quick ease of PowerPoints and putting together bullet points. Force yourself to when you're dealing with something complicated, force yourself to write it out. Um, and maybe that would be different for other people, but for me, writing forces a level of intellectual honesty and um, a clarity about the big picture that nothing else does.", "It's really hard work. I've never understood people who say, I just love to write. It's so fun. I always like what? , it's [01:21:00] one of the most difficult things you can do because to write clearly requires thinking clearly and thinking clearly is really, really hard. , at least for me, but. , it is really only in writing that I realized that I didn't understand something that I was pretending to understand.", "And so it's really easy to, to pretend to yourself that you understand something. And if you have to write it and write it clearly in a way that somebody else can understand it, that often forces you, um, to, that forces you to realize your own, your own lack of comprehension. And I think that exercise in terms of an understanding of the world and a chance of seeing the big picture is really, really critical.", "Maybe there's something for somebody else that would work that isn't writing. Maybe it would be turn turning the world into a math proof, you know, , . But, um, but, but there is, I, for me, that's what it is.", "Dwarkesh Patel: Yeah. Yeah. Um, and final question, what is your next book about and when will be, have, when will we have the pleasure to read it?", "Bethany McLean: Hope it's a pleasure. I'm not sure at this point. It's kinda a pleasure. Um, so I'm [01:22:00] writing it with Jon Oser again, who I wrote All the devils are here with, and we set out to write about the economic consequences of the pandemic, but it's really become a broad look at how. Capitalism is and isn't functioning, um, in our, in our modern society.", "And then underneath that, a look at how government is and isn't setting the right rules for capitalism to function. Because back to your point about being a libertarian. . I do think there's this lovely idea that markets exist independently of the rules set for them by society, but I'm not really sure that that's true.", "If you think about everything from the existence of a limited liability corporation to bankruptcy law, these are all rules laid down by society that dictate how the market. How the market functions. And so in the end, if the market isn't functioning the way we want it to, chances are it's the results of the improper, the improper conditions having, having been set.", "So that's what the book is turned into as a, as a way of looking at [01:23:00] the flaws and capitalism even leading up to the pandemic, um, and how those were exposed and exacerbated by the pandemic", "Dwarkesh Patel: yeah. Okay. Uh, I, I'm really, um, I'm really excited to read it. That's a, that's an exciting thesis. I'm , but, but do you know when it's gonna be out, by the way?", "Bethany McLean: October, 2023?", "Dwarkesh Patel: Yes. Okay. You'll have to come back on the podcast. And I, I would love that. Yeah. Yeah.", "Bethany McLean: I would love that. Awesome.", "[01:24:00]", "[01:25:00]" ]
[]
https://www.dwarkesh.com/p/brett-harrison
Brett Harrison - FTX US Former President & HFT Veteran Speaks Out
[ "This transcript was autogenerated and thus may contain errors.", "Dwarkesh Patel", "Okay. Today I have the pleasure of speaking with Brett Harrison, who is now the founder of Architect, which provides traders with infrastructure for accessing digital markets. Before that he was the president of FTX US, and before that he was the head of ETF technology at Citadel. And he has a large amount of experience in leadership positions in finance and tech. So this is going to be a very interesting conversation. Thanks for coming on the Lunar Society, Brett.", "Brett Harrison", "Yeah. Thanks for coming out to Chicago.", "Dwarkesh Patel", "Yeah, my pleasure. My pleasure. Is the growth of ETFs a good thing for the health of markets? There's one view that as there's more passive investing, you're kind of diluting the power of smart money. And in fact, what these active investors are doing with their fees is subsidizing the price discovery that makes markets efficient. And with passive investing, you're sort of free writing off of that. You were head of ETF technology at Citadel, so you're the perfect person to ask this. Is it bad that there's so much passive investing?", "Brett Harrison", "I think on that it's good. I think that most investors in the market shouldn't be trying to pick individual stock names. And the best thing people can do is invest in sort of diversified instruments. And it is far less expensive to invest in indices now than it ever was in history because of the advent of ETFs.", "Dwarkesh Patel", "Yeah. So maybe it's good for individual investors to put their money in passive investments. But what about the health of the market as a whole? Is it hampered by how much money goes into passive investments?", "Brett Harrison", "It's hard to be able to tell what it would look like if there was less money in passive investment. Now, I do think one of the potential downsides is ending up creating extra correlated activity between instruments purely by virtue of them being included in index products. So when Tesla gets added to the Sfp 500, tesla doesn't suddenly become a different company whose market value is fundamentally changing, but yet it's going to start moving very differently in terms of its beta correlation between other instruments in SP 5100, purely as a function of all the passive investing that moves these instruments in the same direction. So that's the sense in which I think it could be detrimental naively.", "Dwarkesh Patel", "You would assume that efficient market hypothesis would say that if people know that Tesla's stock price would irrationally climb whence including the S&P 500, then people would short it and then there should be no impact from this irrelevant information. Why isn't that the case?", "Brett Harrison", "It probably mostly is. I think that sometimes there can be liquidity differences that cause at least temporary dislocations and stocks. The simplest example is like you have an ADR like an American Depository receipt that's sort of expunging for some underlying foreign stock and these two things should be like almost the same value at all times, like net of currency conversion and conversion ratios. But if one of the markets is highly illiquid or difficult to access, then there's going to be dislocations in price. And that's like the job of the Jane Streets, the world to kind of arbitrage away the price over time and so long run you wouldn't expect these things to be dislocated for that long. So I'm sure there are people who are understanding the fundamentals of individual names in the S P 500 and when there's like American News and the entire S and P falls, they are maybe buying S and P and selling individual names and expecting that relative value spread to come in over time.", "Dwarkesh Patel", "Speaking of by the way, these firms, you don't have to tell me specifics, but how similar are the strategies for market making or trading that Jean Street versus Citadelo and these firms, is it the same sorts of strategies or are they pretty different?", "Brett Harrison", "I think a lot more differences than people appreciate from the outside. Different companies have established different niches and areas like Jane Street established its early niche and ETF at kind of like a mid frequency level. So not like ultra fast but not like long term year long discretionary macro. Whereas maybe your Citadel securities kind of firm built their niche more on lower latency options, market making so it could be all over the place. There are some where they are trying to optimize for really short term like microstructure. Alpha is trying to predict where the order book is going to move over the course of anywhere from milliseconds to seconds. There are firms that care more about the relative convergence of instruments over the course of hours to days. There's sophisticated quantity of trading firms that are doing longer term days to weeks to months long trades too. A lot of the infrastructure can be similar. Like either way you need to be able to connect to exchanges, download market data, establish simulation platforms, build tools for traders to be able to grasp what's going on in the market and especially be able to visualize their own proprietary models and Alphas. But beyond that, the actual strategies and the ways they make money can be very different.", "Dwarkesh Patel", "Famously, in other kinds of development, there's these like, very famous hacks and algorithms, right? So in gaming and graphics, John Carmack has the famous fastener square root for doing graphics calculations, normalizing vectors faster. You were not only a developer in finance. I know what the exact term is for that. But you led teams of hundreds of people who are doing that kind of development. Are there famous examples like this in finance, the equivalent of fastener square root, but for the kinds of calculations you guys do?", "Brett Harrison", "Yeah, they're all over the place. There's tons of hacks and tricks and things like that. I think, for example, here's a famous one, not famous, I think I read it in a paper and like a bunch of other developers from different other companies told me about this. It's not something I saw at places that I worked. But if you're sending a message to, let's say, Nasdaq to buy stock and you want to get there as fast as possible, well, what is a message to Nasdaq? It's a TCP IP wrapped message with a particular proprietary protocol that Nasdaq implements. Well, let's say your goal is, you know, you're going to trade Apple, but you're not sure, like, what price and at what time. And you're kind of waiting for some signal to buy Apple as fast as possible. So what you can do is you can preconstruct the entire TCP IP message. Like, first put the TCP header on there, then the IP header, then, like, the kind of outer protocol that Nasdaq specifies and the inner protocol except for the byte slot where you put in the price and then pre. Load that message into the network cards sending buffer so that once you're ready to send, you can just pop in the price and send it off and incur as little latency as possible.", "Dwarkesh Patel", "That's awesome.", "Brett Harrison", "I think the analogy to video games is a good one because just like in video game graphics, what's the end goal? It's not like to produce the most theoretically perfect simulation of environmental graphics. It's to have something that looks good enough and is fast enough for the user. And that's also true in hft and quantitative finance, where the goal is to get to the approximately right trade as fast as you can, it's not to have the perfect theoretical model of underlying.", "Dwarkesh Patel", "Price dynamics that is so fascinating. But this actually raises an interesting question. If you have some sort of algorithm like this that gets you a few nanoseconds faster to the Nasdaq Exchange, and that's why you have Edge, or you've leased microwave towers to get from New Jersey to Chicago faster, or you've bought an expensive server in the same place that Nasdaq is housed. What fundamentally is the advantage to society as a whole from us getting that sort of information faster? Is this just sort of a zero sum game of who can get that incorporate that signal faster. Like, why is it good for society that so much, so many resources and so much brain power is spent on these kinds of hacks and these kinds of optimizations?", "Brett Harrison", "Yeah. So I think if you start from the premise that having liquid, tight, efficient markets is important for the world and you say, like, how do I design a system that optimizes for that? I think you want smart, sophisticated technologies competing at the margins. And of course, the more they compete, the smaller the margins become to the point where you think, like, the little extra activity people are doing to get slightly better don't seem to be greatly affecting the whole system as much as if as it was in the earlier days when things were slower and tick sizes were wider. I think it's difficult to imagine designing a market where you say, like, okay, everyone should innovate up until this point and then stop competing and then just stay stasis. And maybe you can create certain regulatory or market structures to try to prevent that. But I think on average you want people competing at the margins, even if they seem like they are minuscule. But at the same time, I think it's not zero sum for society for technologists to be creating super fast, ultralow latency, very sophisticated algorithms. Like maybe, I don't know, we have a lot of geopolitical instability in the world. Who knows if our microwave network that we built out in the US. Could have greater use cases than just for quantitative finance? But the quantitative finance subsidized the creation of these towers.", "Dwarkesh Patel", "Okay? So that's sort of like a contingent potential benefit. People tell us a little story about NASA, right, in this case, literally microwaves that they subsidize a lot of the science that ended up becoming becoming into products. So that's an interesting account of the benefits of finance, that it has the same yeah, whatever tricks they come up with might be useful elsewhere. But that's not a story about how it's directly useful to have nanosecond level latency for filing your Apple stock or something like that. Why is that useful directly?", "Brett Harrison", "I mean, if there is some kind of news that happens in one part of the world and that should affect the current price of stock in a different part of the world. I think that if you care about efficient markets, you want the gap between source of truth events and ultimate price discovery to be as small as possible. I think if you believe if you want to question whether getting a few extra milliseconds or microseconds or nanoseconds is worth it, I think you're then putting some kind of value judgments on.", "Brett Harrison", "What.", "Brett Harrison", "Is the optimal time it takes to get from to price discovery and saying a second is too slow, but a millisecond is too fast, or a millisecond is too slow, but a microsecond is too fast, and I just don't think we're in a position to do that. I think we kind of always want as close to instantaneous price discovery as possible.", "Dwarkesh Patel", "I'm only asking more about this because this is really interesting to me. There are some level of resources where we would say that at this point it's not worth it, right? Let's say $5 trillion a year was spent on getting it down from like, two nanoseconds to like, 1. Know that's probably not a realistic number, but just like there is some margin at which for some weird reason that there's society or just spend so many resources on it, would you say that we haven't reached that margin yet where it's not socially useful? The amount of brain power and resources that are spent on getting these tight.", "Brett Harrison", "Spreads I don't know how large a percentage of GDP prop trading is. I suspect it's not that large. So I don't think we're close to that theoretical limit of where I would start to feel that it's a waste. But I also think there's a reason why they're willing to spend the money on this kind of technology because they're obviously profiting from doing so and it has to come from somewhere. So somehow the market is subsidizing the creation of this technology, which means that there's still ability for value capture, which means there's still a service that's being provided in exchange for some kind of profit. I think we wouldn't spend $5 trillion in a microwave network because there isn't $5 trillion of extra value to be created in doing so.", "Dwarkesh Patel", "Got it. Has being a market maker change your views about civilizational tail risk because you're worried about personally getting run over, right, by some sort of weird event and adverse selection? Does that change how we think about societies getting run over by a similar thing, or are the mental models isolated?", "Brett Harrison", "So I think working in high speed finance teaches you to understand how to more correctly estimate the probability of rare events. And in that sense, working in finance makes me think more about the likelihood of civilization ending problems. But it doesn't suggest to me sort of different solutions. There's a very big difference being in a financial setting where your positions are.", "Brett Harrison", "Numbers that you can put in a.", "Brett Harrison", "Spreadsheet and you can model. Like, what happens if every single position goes against me three X the wrong way? And what instruments would I have to buy or sell in order to be able to hedge that portfolio? That's like a closed system that you can actually model and do something about. Having, like, a trigger mentality on future pandemics I don't think helps you much. I think maybe it slightly changes your ability to kind of estimate the probability of such events. But the actual solutions to these problems are a combination of, like, collective action problems plus being able to sort of model the particular type of unknown unknown about whatever the event is. And I think those kinds of solutions should be left to the experts in those particular fields and not off the traders. In other words, I don't think, like, having the trader mentality among rare events in, like, normal civilization outside of finance really kind of helps you much.", "Brett Harrison", "And maybe in some ways it's led.", "Brett Harrison", "People to think more hubristically that they can do something about it.", "Dwarkesh Patel", "Gee, who could be talking about that's?", "Brett Harrison", "Really interesting.", "Dwarkesh Patel", "You would say that famously, these market making firms really care about having their employees be well calibrated and good at sort of thinking about risk. I'm surprised you think that the transfer between thinking about that in financial context and thinking about that in other contexts is that low.", "Brett Harrison", "Yeah.", "Brett Harrison", "Again, I think it helps you at estimating probability of rare events, but it does not translate super well to what action, then, do you take in the face of knowing those rare events?", "Dwarkesh Patel", "Were your circles or people in finance earlier to recognize the dangers of COVID.", "Brett Harrison", "That's a good question. I think that people in my circles were quicker to take action in the face of knowing about COVID There are a lot of people who kind of stuck around in cities and their existing particular situations not knowing kind of where this was going to head long term. And I think if you have the fortune of having the financial flexibility to be able to do something like this, a lot of the people in kind of financial circles kind of immediately recognize, okay, there's this big risk.", "Brett Harrison", "This unknown and I don't want to.", "Brett Harrison", "Get out of or selected against in terms of being able to get out of the locus of bad pandemic activity. So people immediately were fleeing cities, I think, faster than other people.", "Dwarkesh Patel", "That seems to point in the opposite direction of them not being able to estimate and deal with geopolitical risk.", "Brett Harrison", "Well, I mean, there you have an actual event that has occurred, and then in the face of the event, what do you do right now? Yeah, I think that's different than, like, what do we do about the potential for AI to destroy civilization in the next hundreds of years? Or what do we do about the next potential biological weapon or the next pandemic that could occur.", "Dwarkesh Patel", "Speaking of COVID you were head of semi systemic technology at Citadel when COVID hood, right?", "Brett Harrison", "Yes, exactly.", "Dwarkesh Patel", "How did these Hft firms react to COVID? What was it like during COVID Because obviously the market moved a lot, but on the inside, was it good, bad?", "Brett Harrison", "Yeah. All the companies, Citadel securities, but really all of the ones in this sort of this finance year, I think were extremely resilient. I think a lot of them found that their preexisting ideas that in order for the team to succeed, everyone needed to be the exact same place. And it was very important from like an IP perspective to make sure that people weren't taking a lot of this work home with them completely went out the window. And people had to completely adjust to the idea that actual trading teams that are used to be able to have eye contact with each other at all times need to adjust to this pandemic world. And they largely did. I think at least from a profitability perspective. It was some of the best years of Hft firms PnLs in recent history.", "Dwarkesh Patel", "Matching engines already have to deal with the fact that you can have orders coming from, like, Illinois, you can have orders coming from Japan, and given light speed, they're not going to run at the same time, but you still kind of have to work around that. Is there any hope of a single market and matching engine for once humanity goes interplanetary or interstellar? Could we ever have a market between, like, us and Alpha Centauri or even Austin Mars? Or is the lag too much for that to be possible?", "Brett Harrison", "Yeah, without making any changes to a matching engine. There is nothing that says that when an order comes in, it can't be older than X time.", "Brett Harrison", "Right.", "Brett Harrison", "What it does mean is that the actual sender, they're sending a market order from halfway across the world, by the time that the order reaches the exchange, they might end up with a very different price than the one that they were expecting when they sent it. And therefore there's probably a lot of adverse selection sending a market order from halfway across the world than in a colocation facility. So you can technologically run an interstellar exchange. This might not be good for that person living on the moon.", "Dwarkesh Patel", "Is there any way to make it more fair?", "Brett Harrison", "Yeah, so I think there's actually kind of real world analog of that which is like automated market makers on slow blockchains. Because if you're used to working on Nasdaq where Nasdaq processes like a single message in somewhere between like tens and hundreds of nanoseconds per order a blockchain, like ethereum processes what, like 15 to 50 messages per second, so significantly slower by numbers of orders of magnitude. And yet they've been able to establish pretty mature financial marketplaces by saying that rather than you having to send orders with prices on them and then cancel them when the prices aren't good anymore, there will be kind of an automated function that moves the prices at the matching engine. And so whenever your order reaches the exchange, it'll always be kind of a predetermined fair price based on the kind of prevailing liquidity at the time. So one can imagine building a Nasdaq for interstellar market is kind of similar to building uniswap now on a theory of in terms of order, magnitude and speed. But there's other things you can do, too, like you could establish like, periodic auctions instead of like continuous matching and things like that. And that could potentially help mitigate some of these issues.", "Dwarkesh Patel", "Yes, that's something else I want to ask you about. What do you what is your opinion of periodic frequent batch auction systems? Should we have more of that instead of so?", "Brett Harrison", "In theory, they help mitigate the advantages of high frequency trading. Because if you know there's going to be an auction every 30 seconds, and it's not going to be by time priority, it's going you buy price, then it doesn't matter if you send that order at the beginning of the 32nd period or the end of 32nd period. It's really the price that determines that you get filled. Not something to do with particular latency to the exchange. I think in practice, the couple of exchanges around the world that used to have those have switched away from them. I think the, like, Taiwan Stock Exchange used to have a periodic auction system. And I just thought the like, the liquidity and price discovery wasn't good and it was like, complained about a lot and they eventually moved off of it to a continuous matching system. So I guess in practice it doesn't quite work as well. But it's hard to tell.", "Dwarkesh Patel", "It's really hard to tell what country, meaning your long experience of dribbling financial infrastructure, what country do you feel has the best infrastructure and set up for good markets?", "Brett Harrison", "I would say the United States, except what's happened is US. Companies like Nasdaq have licensed their exchange matching engine technology to other exchanges around the world. So Nasdaq OMX technology powers a number of the exchanges in Europe and some in Asia. So it's hard to sort of say that. Is the technology American? I guess so. I'm not sure exactly who wrote a lot of the stuff underneath the Nasdaq technology, but I do think the US. Markets are some of the most efficient and low, latency and expansive and products that allowed in the world.", "Dwarkesh Patel", "How do adverse selection in trading and hiring differ?", "Brett Harrison", "In hiring, there are one, many more opportunities for positive selection versus a negative selection usually encounter in finance. And the other thing is that most financial markets are in the US. When you think about trading in general, you're thinking about liquid markets. The hiring market is highly inefficient. Maybe the pipeline of orders from like, Harvard, MIT, Princeton, Yale to Jane Street and Sale securities is like a very liquid pipeline. But there are many, many universities and colleges throughout the country and the world that have extremely talented individuals whose resumes will never end up on your doorstep. So you might end up with a resume from some graduating senior from college who has no internship experience, and your.", "Brett Harrison", "Trader mindset might think, okay, this is.", "Brett Harrison", "Terrible adverse selection, but it actually could be that person. If he's willing to put themselves out there and apply to your company from this relatively unknown university, then that might be the signal that is like the best person in that entire region. And that might be a positive selection. So I think that it's not exactly the same, like, adverse selection dynamics as there is in the traditional trading world.", "Dwarkesh Patel", "Yeah, yeah, definitely. Especially if you have, like, I guess mission oriented companies have especially a good way of getting rid of adverse selection. Right?", "Brett Harrison", "Yeah, exactly. The companies with really strong brands, I mean, that's one of the things we saw at Jane Street was like, I heard stories in the old days of Jane Street that, like, the first resumes from Harvard were like the people were terrible. Like, they couldn't do like basic math and they just were the worst candidates compared to other people that they were able to find. And then they established this brand and this recruiting pipeline and this reputation for having very difficult interviews and for paying people really well and having this amazing work environment that all of a sudden all the people getting through the pipeline from Harvard were like, really, really great. And it wasn't like the quality of students at Harvard changed. There's probably a bell curve there like there is everywhere else. It was just like the positive selection resulting from the branding efforts and the mission driven focus of the company that really brought that positive selected pipeline to them.", "Dwarkesh Patel", "That's really interesting. Should Jane Street replace Okamel with rust?", "Brett Harrison", "No, because there's too much infrastructure already in Okamo.", "Dwarkesh Patel", "Yeah, but starting from scratch.", "Brett Harrison", "So I guess the world is if they could snap their fingers and suddenly replace all their Ocal infrastructure with Rust at zero cost, would it be worth it?", "Dwarkesh Patel", "Yeah.", "Brett Harrison", "In that case, I would say yes, because I think that you get a lot of the sort of static typing and compile time safety in Rust that you get from Ocamol. But the base level program that you can write in Rust is much, much faster than one you can write in Okamel because of the way Ocamel is designed, where there's this kind of automatic garbage collection, the worst thing you can do in high speed finance is do any memory allocation that results in garbage collection. And so you have to write very, very careful o camel that almost ends up looking like C in order to end up staying in functional programming land, but not actually creating tons of memory on the heap or the stack that ends up getting collected later.", "Dwarkesh Patel", "I guess you've been playing around with the Rust a lot recently, right?", "Brett Harrison", "Yeah.", "Dwarkesh Patel", "What is your impression of the language? You've been enjoying it?", "Brett Harrison", "It's great and it's come very long way in the last three to five years. I think crypto has something to do with that. It seems to be like one of the languages of choice for people to write blockchains and smart contracts and so there's been enormous amount of open source contribution to Rust. And so comparison when I last looked at it a couple of years ago, it's a lot easier to write really good, sophisticated programs now in Rust and get all of the type safety and the speed that you get, which is like very comparable to C plus plus on the speed side.", "Dwarkesh Patel", "Well, when I'm writing programs, they're not large code bases with many people contributing. So what I use for us, it's just like a huge pay. And I just want to do something very simple. Why do I have to put like an arc instead of a box instead of an option? But I can totally understand if you have something where like billions of dollars are at stake.", "Brett Harrison", "There's definitely a learning curve. I think for basic scripting you want to use something like Python, right? But exactly. If you're writing low latency distributed infrastructure that has to never fail, russ is a pretty good choice.", "Dwarkesh Patel", "Yeah. Speaking of Jane Street, why does a company pay interns like 16 or seventeen K a month for their summer internships? Is the opportunity cost for one of these smart students that high in the summer?", "Brett Harrison", "The short answer is yes, but the long answer to why I think they do that is the starting salary for the top people in not just finance but in tech is sort of in this like low to mid six figures now. And you can debate whether you think that's like, the appropriate starting salary for a person with no experience coming out of college or not. That's just sort of the reality is that the talents pool is extremely competitive from the employer side. So if you start with that as like a reasonable salary plus bonus for an employee, I think change rates mentality is these interns who are coming here like they're doing real work, they should be paid like a full time employee, just like prorated for the time when they're actually here. And so that ends up like checking out to be like the right numbers.", "Dwarkesh Patel", "Wait on net, are interns I mean, forget about the salary, are they actually on net contributing? Given that subtracting away the time of the traders who are training them, maybe.", "Brett Harrison", "It sort of breaks even if you consider the time to train them. But it's extremely worthwhile, because when those interns come back full time and Jane Street hires a significant percentage of its incoming classes from their internship program that they're already trained, they're ready to go day one. They're almost immediately useful because they had that like, three month period where they got trained and only the ones that really liked it and were good come back. So it's like rather than wait for them to come on site, train people, and maybe half of them aren't good or and half them don't fit with the culture and you kind of don't know what to do with them. The internship program provides a really good place to get on the job training and then only kind of select on both sides for the ones that are.", "Brett Harrison", "The best.", "Dwarkesh Patel", "Is there a free writer problem with internships where if, like, a company like Jane Street puts in the effort to train somebody for three months, they might get some of them to work for them, but they've also trained some people who might work for the competition. And is there some sort of free rider problem there?", "Brett Harrison", "There for sure is, which is why the companies have to work as hard as possible to make their experience as good as possible, which is like, it's good for the interns. When you go to street, not only do you learn a lot, but they pay you really well. And also you get to visit one of the foreign offices for like a week or something. Also they have all these really fun programs where they bring famous speakers to come to the office and speak to the whole intern class and they have parties and all sorts of stuff. And it adds to like, the experience of thinking like, okay, this is the place I want to work. I don't want to like, take my training and go to the competitor, I want to come to you.", "Dwarkesh Patel", "I clearly got in the wrong business with podcast. Why did you pursue finance dev instead of trading? Why did that appeal more to you?", "Brett Harrison", "Yeah, so in college I studied computer science in math and I really liked programming, but I think I didn't quite know what a career in programming looked like. I think the conventional wisdom, at least in 2009 when we applying for internships, was like, okay, I'm going to sit in the cubicle and stare at a screen for 16 hours a day. I'm going to be miserable and it's not going to be a very social job. I consider myself like, a pretty social person. And I had a lot of friends who had had these various internships and quantitative finance, mostly from the trading side. And so when eventually I went to Jane Street as an intern, I had kind of like a hybrid summer, like, doing some dev stuff and some trading stuff. And to me I thought, like, okay, the traders are much more, much closer to, like, the real action of the company and like, I want to be a part of that. And so when I joined Jane Street, I was hired as a trader on the ADR desk. And I realized very soon into that that one, no, actually the developers have just as much, if not more impact on the outcome of success of the company. And two, I just like, enjoyed it a lot more and it was just much more up my alley and my training, and so I ended up going that route instead.", "Dwarkesh Patel", "I want to ask about the culture at these sorts of places like Acidadelle or Jane Street. I mean, you spend some time in Silicon Valley and around like, traditional sort of like, startup scene as well. What is the main difference between Silicon Valley tech culture versus New York culture?", "Brett Harrison", "Sure, yes, I have a ton of personal experience in the Silicon Valley culture or like the tech culture, since I've only really worked at kind of finances my finance places my whole life. But the sense I get is that the kind of New York Chicago quant, finance dev culture is one about extreme Pragmatism. You know what the outcome is. It's to be the most profitable of the strategy. And you kind of try to draw a straight line between what you're doing and that profitability as fast as you can compare it to. I think the Silicon Valley culture is much more about creativity and doing things that are new that no one else has done before. A healthy amount of cross pollination would be good for both where I think a lot of trading firms are doing the exact same thing that all the other trading firms have done and some healthy injection of creativity into. Some of that stuff to maybe think slightly outside the box of, like, as you said earlier, get slightly faster to go to Nasdaq or something. Which is like, okay, might be fine, but it's, like, not that creative. Would be good for those plot terms. At the same time, the sheer approach to Pragmatically getting something done out there and sold and making money would help a lot of Silicon Valley firms that kind of like hang out in this sort of creative land for too long and don't end up getting a product to market.", "Dwarkesh Patel", "Yeah, I know. Definitely. It seems like there should be one founder from both those cultures, every single startup. It's similar to what you were saying earlier with SBF and Visionaries versus Pragmatist in that context. How conspicuous I mean, you were just mentioning earlier that these traders are making mid six figure salaries to begin with, let alone where they arise over the careers. How conspicuous is their spending and lifestyle? Is it close to Wolf of Wall Street? Is it just walmart T shirts? Where are we talking?", "Brett Harrison", "It's not a lot closer to Walmart T shirts than it is Wolf of Wall Street. Certainly it is now. Even when I started, it was pretty inconspicuous. I don't think it was that way in the previous decade or two before I joined Finance, I guess. I'm not really sure, but I got the sense that the current culture around inconspicuous consumption is sort of a function of millennial consumption habits where people are focusing a lot more on experiences than having shiny material objects. I think that's had a large effect on kind of like the high earning tech and finance culture that exists today.", "Dwarkesh Patel", "Well, I guess are they spending that much money on experiences either? Because how expensive is a flight to Hawaii, right? Even after you subtract that, where is this money going? Are they just saving it?", "Brett Harrison", "Maybe it's not like just a flight to Hawaii, but it's like, bring your ten friends to Hawaii with you or something. Or it's get involved in a charitable organization in a way that someone who is 24 normally wouldn't be able to do, surely by being able to donate a lot.", "Dwarkesh Patel", "Yeah. What is the social consequence, for lack of a better word, of having a bunch of young, nerdy people, often male, often single, having this extraordinary level of wealth? What impact does it have? I don't know if society is the right word, but what is the broader impact of that class of people?", "Brett Harrison", "I think we'll have to play this out over the next decade or two to really see where this goes. If I'm going to be an optimist about this, I'd like to think that when it was like older, single or married males hoarding a large amount of wealth that for the most part, they kept it to themselves and kind of waited until later in life to do anything with it, and were the kind of people who really, like, saved their same career their whole lives, as opposed to if younger and younger generations are amassing wealth through what they can actually perform with their skills, then I think that hopefully injects more Dynamism into the distribution of that wealth later on because those millennials within like or Gen Z or whoever will go on to found new companies, and maybe they'll be able to see the company themselves, their own money, and have a lot easier time, like bringing, like, interesting new things to market. Or they'll be able to donate to, like, really interesting causes, or they'll be able to, you know, help out their friends and family more easily from a young age. Or they'll be more selective in the kinds of things that they give to or contribute to that don't just involve getting their name on, like, a building of a school or something.", "Dwarkesh Patel", "Yeah, that's a very optimistic story. I hope that's the way it plays out. So tell me about the psychology of being a quant or a trader or a developer in that space, because you're responsible. Like, one wrong piece stroke and you've lost millions of dollars, one bug in your code. And there are historical cases of this where entire firms go down because of a bug. What is the sort of day to day psychological impact of that kind of responsibility?", "Brett Harrison", "Maybe the job selects for the people who don't kind of crumble under the theoretical stress of that job. But personally, I don't lose sleep overnight at night over that. Because within any mature financial institution, like a trading firm, there are typically many layers of safeguards in place, like limits on how many dollars you can trade in a minute and how much you can trade overall or for your desk or, like, how many messages you can send to the exchange. And then there's, like, limits on, like, the individual trader and desk level and firm level and there's layers of different checks. Often there are actual rules, like regulatory rules to comply with and market access checks. Like FINRA is like 15 C, three five. And so when you're writing new code, it's not like a completely blank slate thing where you're connecting directly to an exchange and hoping for the best. Usually you're embedding some piece of code within some very large established framework where the goal is to make something trader proof. No matter what some trader clicks on or does or configures with their system, there's like a limit to how badly they can go, how badly they can actually go. And so, especially in my particular role as a developer, actually being able to understand the technological stack and say like, oh, I can tell and can sort of verify that these particular safeguards in place and it is actually as trader proof as I think it is, I can sleep at night knowing nothing too bad is going to happen. I mean, the times I actually lose sleep are like, a trader in London or Hong Kong calls me in the middle of the night to say, like, hey, can you explain how this thing works? I need your help. Those are the times where I actually lose sleep. But it's not over, like, being concerned about risk.", "Dwarkesh Patel", "Yeah, that's interesting. If you ask the people who work in these firms, what is the social value you're creating separate from the question of what the correct answer to that question is? Would the majority of them say that I'm doing something really valuable? Would they say I'm indifferent to it, but it's earning me a lot of money? What is their understanding of the value they're creating?", "Brett Harrison", "It really depends on the company, and it depends how diffused the culture is at older firms that have fewer people impacting the culture on any significant way. I think you might not get a clear answer on this thing for a place like Jane Street, where the firm is really run by, like, 30 or so, you know, partners and senior employees who have, like, been there. For a really long time and have carried through the core culture of that company up until the present day. And with that large number of people at the top in a very flat environment have actually been able to propagate that culture and maintain it throughout the company. I think you'll find a much more kind of homogeneous view on their social value, which I think they would say is that they provide the best pricing and access to. Markets that are critical for facilitating capital allocation throughout the world and allow people to very efficiently invest in vehicles that are global in nature.", "Dwarkesh Patel", "That seems very abstract. And while it is probably very well correct and is very valuable for society, it might not seem that, like, tangible to somebody who's working in that space. Is there some technique that these firms have of making visceral the impact these traders have? I don't know, do they bring out some child who benefit from efficient markets or something?", "Brett Harrison", "I think, well, probably not like children. I think it's more like anecdotes about, like the pension like fund behind this, like, state government needed to get exposure to some diversified asset class and came to one of these companies and said, we want to move like a $5 billion portfolio. Can you help us do it in an efficient way? And it ends up saving them, like significant numbers of percent or basis points over what would happen if they went to the market. And you can say, well, there's like a direct connection between the price that someone like Jane Street gives them and the amount of money that they ultimately get to save and ultimately pass on to the people in their state who are part of their pension plan. And so it feels like a direct connection there.", "Dwarkesh Patel", "Okay, let's start by addressing the elephant in the room, which is FTX. Let's begin at Jane Street, which is where you met SBF. Can you tell us sort of the origin story of how you first met him and what your first impressions were?", "Brett Harrison", "Yeah, absolutely.", "Brett Harrison", "So I was at Jane Street from 2010 to 2018. Sam was at Jane Street for a couple of years in the middle of that, I think 2013 to 2017. And one of the things I did at Jane Street was I started this program called Okamel Bootcamp. It was a yearly course for the new trader hires to spend four weeks with me learning programming in Okamel, which was like the esoteric programming language that we use at Jane Street along with a lot of our other proprietary systems. And Sam was in one of the first cohorts of students and so I got to meet him through that experience.", "Dwarkesh Patel", "Got it. Okay, and what was your impression of him?", "Brett Harrison", "Yeah, he was a smart kid, he was nice, he kind of got along well with other people in his class. He was definitely above average, but not completely stand out at the top. Although then again, the bar was extremely high at Change Street, so I think that's already sort of a compliment. But people liked him a lot and thought he had a lot of promise, but he was a young guy like everyone else.", "Dwarkesh Patel", "Got it. And did that perception change over time while you were at Jane Street?", "Brett Harrison", "It slowly started to. Sam was on one of the largest trading desks at Jane Street and had 50 or 60 people on it. He had several managers. And one of my roles at Jane Street was to work with all of the different trading desks on the designs of their particular strategies and systems. And so I would frequently go over to his desk and talk with his managers about stuff and they started pulling him into conversations more and more specifically to talk about some of the lower latency Etfr stuff. We were doing some of the original OTC automation things we were working on. And so he started actually contributing more to the actual design and thought behind some of these systems and thought he was precocious and had a lot of really good intuitions about the markets.", "Dwarkesh Patel", "Got it. Okay, and so what exactly was your role at Jane Street at this time and what was his?", "Brett Harrison", "Yeah, so at this time I was sort of leading the group of software developers building the technology that was closest to actual trading. You can think of Hft or any kind of trading firm. There's lots of different developers, people who work on stuff really relating specifically to the trading technology, people who work on kind of the core systems, networking, kind of internal automation tools, tools for developers. So we were in the part of the spectrum that was closest to actual trading. And so my job was to go over to the different trading desks within the company, talk to them about their specific strategy for the products they traded, understand how to like their priorities, about what venues they want to connect to what different systems they want to create? What different parameter changes they need in their automated trading systems? What kind of research tools will help them do their job better? What user interface would make it easier for them to understand what's going on in the market and kind of all of that?", "Dwarkesh Patel", "Okay. And did SVF at this point have any sort of reputation of either being uncooperative or being cooperative or anything ethical or professional that's not worthy of this time?", "Brett Harrison", "I don't think there was much that stood out, although he was again pretty precocious at that particular time period. One anecdote that sort of drew me closer to him was Jane Street's offices were in 250 Vessie. They still are in New York City. And there's a big food cart on the second floor. And so I once went down to meet with a development person from a nonprofit that works in animal welfare and something that my wife and I had donated to for a long time. And I met with this guy and he said, you're the second person I met from Jane Street today. Which was wild because Aintree was like only a couple of hundred people there. This is like a pretty niche organization. And I was like, that's crazy. Who did you meet? And they said, oh, Sam Bankman Freed. And I was like, Sam? I just came down from talking with him upstairs. And so I went back and we sort of realized we had this kind of shared interest in helping kind of animal welfare causes. We're both vegans and we sort of bonded over that. That's how we kind of became friendly.", "Dwarkesh Patel", "Got it. It seems that his interest in effective altruism was genuine at this point. And early on there was a history of this.", "Brett Harrison", "Yeah, it wasn't like EA was super popular at Jane Street. I feel like that's a bit of recent sampling bias among this younger crew of Jane Streeters. There definitely I think it was because of a lot of association prior to joining Jane Street that were into effective altruism. But there were a couple of people there who really were fairly vocal about the fact that they were donating the majority of their yearly salary and bonus to charitable causes. And Sam was one of them. And yeah, started to become known for that.", "Dwarkesh Patel", "Got it. Okay, so I guess fast forward to he's no longer at Jane Street, you're no longer at Jane Street, and you're at Citadel. He started FTX, actually, before we back go there. Were you in contact with him up until the point where you had started talking about a potential yeah, off and on.", "Brett Harrison", "When I first left Jane Street, and he left sorry, when we both left Jane Street around the same time, him before me, he had told everyone at Jane Street that he was leaving to join the center for Effective Altruism full time. And I guess he did that. I'm not sure if it actually happened because he very soon after started this trading firm and tried to pull off of Jane Street people to join him to do this trading firm, which didn't make people super happy, but it was funny. We had a phone call and he told me that it wasn't really going super well. He said it was really great in the beginning. They made a lot of money, they had this arbitrage trade, and then a few things kind of went by the wayside, and they had taken out these huge loans to be able to get their initial capital for Alameda. And also there was a big fracture within the company. Half the company split, people left. He really didn't tell me much about that at the time, and he said he's probably going to do something else. And when I asked him, he said, I think I'm going to work on political prediction markets. And I was like, okay, it doesn't sound super exciting to me. I'm going to continue on with what I was doing, which was moving to Chicago, taking a new role. But then fast forward, I guess that idea. Maybe he wasn't telling me the whole truth at the time, but I guess that idea became FTX and he had the impossibly, like resuscitated Alameda in the process.", "Dwarkesh Patel", "Yeah, that's really interesting. Do you have some sense of what it was that went sideways?", "Brett Harrison", "So I pieced together some details over the years because he told me a little bit more after I first joined. I heard a little bit more later from other people and then saw some reporting kind of post FTX collapse. I think there were two things. One was the infrastructure they had built, I think was really poor in the beginning. A lot of Python scripts, like, slapped together, and a couple of times they had sent tokens to the wrong wallet and ended up losing millions in the process. And they had some big non arbitrage directional bet on in some token, it might have been ETH or something, and it went against them. And so they lost a lot of their trading capital. And then the other thing was that after some of their technical problems, there was internal disagreements, supposedly. This is what Sam told me about how to move forward with tech. There was half the crew that wanted to kind of rewrite everything from scratch in a different programming language. There was another half that said, like, okay, we can make some small incremental changes from here and fix things up. And Sam and Gary and the shot were more in. That latter crew, that former crew kind of broke off and started their own thing, and that's what originally happened.", "Dwarkesh Patel", "Okay, got it. And were you aware of the extent of this at the time, or something piece together?", "Brett Harrison", "No, not at all. Sam told me a little bit about it, but this is over the course of years now where I had two different roles, one at Headlands, one at Citadel securities. Sam was starting alameda. We spoke maybe once a year, briefly on the phone. So all this stuff was happening in the background and I had no clue. In fact, the first time I even heard about FTX was one of my colleagues from Citadel securities told me, hey, did you ever work with this, like, Sandbagman Freed guy? And I was like, yeah, a little bit. Why? And they're like, you know, he's like a billionaire and he has this Hong Kong crypto exchange. Like, what? No, since when? And then started to see him pop up in articles. There was like a box article about him and a few other things, especially related to his political donations. And that's kind of when I got back in touch with him and we started talking a little bit.", "Dwarkesh Patel", "When was it that he called you to say that there are potentially troubles and I'm considering starting a political prediction market?", "Brett Harrison", "That was in 2018.", "Dwarkesh Patel", "Okay, yeah, got it, got it.", "Brett Harrison", "So I it was right after I left Jane Street.", "Dwarkesh Patel", "Got it. Okay, so now you've moved on to Citadel, and I guess you are still in touch at this point.", "Brett Harrison", "Yeah, like, very briefly, a text every now and then.", "Dwarkesh Patel", "Okay. And then at some point you become president of FTX US. Do you want to talk about, I guess, how he approached you about that and what was going on at the time?", "Brett Harrison", "Yeah, it was interesting. So at the time, I was running what was called the semi systematic trading technology at Citadel securities. And so this was the group of technologists working on systems for ADRs, ETFs options, and OTC market equities. There's around 100 software engineers or so that rolled up to me and, you know, that was going well. But, you know, Sam and I started talking. It was, I guess March of 2021, and he was like, telling me a couple of things going on in FTX. And then he said, if you're interested in coming over to FTX, we would still love to have you. And I thought, still, I've never talked about doing this before, but sure, let's entertain this. And then we started talking and he had me meet him and Gary and Nashad over video call I was in Chicago, they were in Hong Kong at the time. These calls were taking place like late at night my time. And very quickly like an offer came together and I thought, this is a really cool opportunity to jump into a field and take a role that was very different from stuff I've done in the past. And I signed up.", "Dwarkesh Patel", "Got it. Okay. And where was FTX at this point in terms of its sort of business development?", "Brett Harrison", "Yeah, so FTX was doing quite well. It was basically finished its first, its second year of operation and it was maybe the fourth or fifth largest exchange in the world by volume, if you include spot crypto and crypto derivatives. And it was also one of the primary destinations for institutions, proprietary trading firms, hedge funds to trade crypto and derivatives, especially because of how it was designed. And so it was doing really well. FTX us was virtually nonexistent. They had started the, they formed the entities. They had started the exchange, I think in either December 2020 or January 2021, but it had like, de minimis volume compared to the other exchanges around the world, especially in the US. Too. And Sam talked to me a lot about the aspirations for the US business. One, to grow the spot exchange, of course. Two was to be able to find a regulated path for bringing some of these offshore products like Bitcoin and ether futures and options onshore in a regulated way. And on top of that, Sam had also told me about kind of longer term desires to be a single app or marketplace for everything, not just crypto, so launching a stocks trading platform as well. And so that was one of the reasons I think he wanted to bring me on, was because I had all this experience kind of inside of regulated broker dealers and sort of knew roughly what it took to get that started.", "Dwarkesh Patel", "Okay, got it. The initial offer was specifically for President of FTX US, right?", "Brett Harrison", "Yeah. Sam wasn't someone who loved thinking hard about titles and even like what my original title was going to be was like a point of contention, but I'm not sure it was clear exactly what my role was going to be. I think Sam wanted me to write software for FTX and FTX US. Sorry for FTX US, but to me, I sort of thought there was like this bigger opportunity to kind of kind of work with Sam to lead this other startup which was FTX US, and kind of build it up and sort of follow in FTX's footsteps in its success. And that was the part that was most exciting to me because this is what I've been doing now for years. It's like managing large teams of people, thinking about strategy, getting people together, occasionally doing some software development myself. But that was the primary reason for wanting to join.", "Dwarkesh Patel", "Got it. And what was the relationship between FTX and FTX US at that time? Were they kind of subsidiaries? Were they separate entities?", "Brett Harrison", "They were separate entities. They weren't subsidiaries. There was technology sharing between them. So, like the FTX US exchange technology was licensed from FTX. You can think of FTX US as like FTX stripping away most of the interesting parts of FTX. Right. Because it was just a dozen or two spot tokens. And when I joined, there were very few people within FTX US, maybe like two or three dedicated people. So over the course of the next year or so, my job that I sort of fit for myself was to open up some offices, hire a bunch of people, establish separate compliance and legal and operational and support teams, start to build out these regulated entities.", "Dwarkesh Patel", "Was Chicago the initial base of that operation?", "Brett Harrison", "Yeah, for selfish reasons. I have my family here's, here, and I wasn't going anywhere. But also I thought Chicago is a great place for FTX US, because if our main goal was to establish regulated derivatives, like, Chicago is really the place where that happens. We have like the CME, we have many of the top proprietary trading firms. A lot of the Futures Commission merchants and various brokers are all here historically, like the kind of the floor of the Chicago Board of Trade and the Chicago Mercantile Exchange. Like, they're here. And so it kind of felt like a good place to be.", "Dwarkesh Patel", "And at this point, I guess before you joined, did you get a chance to ask him about the relationship between FTX and Alameda?", "Brett Harrison", "Yeah, I did. It was definitely of interest to me because the primary reason being that I wasn't interested in doing Prop again. I worked at Jane Street. I worked at Headlands Tech. I was at Cedar securities. If I wanted to continue doing Prop trading, I would have stayed at one of those places. So I wanted to do this exchange business. And what Sam told me was the same thing he said publicly everywhere, which is that Alameda is basically running itself. All of Sam's time is on FTX. They're kind of walled off from the FTX people. And their access to the exchange is just like any other market maker. Like, there's like the public API feeds, there's benefits from market makers that trade enough volume. But it's not like Alameda had any special privileges in that sense. And. So I thought they were just basically separate.", "Dwarkesh Patel", "And did you ask to, I guess, audit their sort of financials or this relationship before you joined?", "Brett Harrison", "No. I mean, I don't know about you, but I've never gotten an offer to a company and said before I signed, show me your audited financial. It's just like, not a thing that happens.", "Dwarkesh Patel", "Right, okay, fair enough. So you joined FTX and then you mentioned some of the stuff you were working on, the operational legal, getting the organization set up. But yeah, feel free to talk in more detail about what were the things that came up during your tenure and what are the accomplishments you're proud of?", "Brett Harrison", "Yeah, I guess on the professional and personal front so, I guess on a professional front, I'm most proud of establishing out our team and making significant headway to a lot of our goals to establish these regulated businesses. So, for example, LedgerX, we acquired LedgerX and we had this application to the CFTC to enable kind of real time, direct to customer, 24/7 margining and cross collateralization. And it was an extremely innovative proposal and it felt like we were making real progress towards establishing new and very exciting regimes for Cfdc regulated derivatives in the US. I also established a broker dealer in the US for the purposes of letting people trade stocks like some lords, robin Hood. I wrote like 90% of all the code for that stocks platform myself. And yeah, I was very proud of that accomplishment. And then on a personal front, I it was great to get embedded into the crypto industry. I was very excited by everything that I saw. It was great to make all the connections through FTX with, like, the different people in the crypto ecosystem and become friends with these people and certainly has an influence where I am today. So I'm sort of proud of all of that.", "Dwarkesh Patel", "How did you manage the management of, I don't know how big the team was at to speak, and it sounds like you were heavily I mean, involved is an understatement in the actual engineering. How were we able to manage both roles at the same time?", "Brett Harrison", "Yeah, so we were like between 75 and 100 total people in the US. And it was challenging. It was one of my biggest complaints, which I'm sure we'll get into, that yes, I can write code, but I feel like that's my comparative advantage is helping kind of to leverage teams of people to get them to work towards the common goal of building out large distributed systems that are complex and multivariate nature. And the best use of my time was not me programming between the hours of like 10:00 P.m. And 02:00 A.m. Every night while trying to keep on board with what all the personnel were doing. So I really wanted to grow the US team significantly to at least be more than a handful of developers. And so, yeah, that was one of the initial points of contention.", "Dwarkesh Patel", "Okay, speak more about that. So he was opposed to growing the team.", "Brett Harrison", "Sam would frequently talk publicly about how proud he was that all of FTX is built by two developers and all of these crazy organizations that hire thousands of developers and can't get anything done. They should learn from me about how, like, a small lean team can be much more effective. And there's some truth to that. I do think the conventional wisdom now is a lot of big tech companies overhired for software engineers. And not only was it sort of an expense on the balance sheet, but it was also expensive in terms of slowing down the kind of operational efficiency of the organization. And having a small lean team can help you get to your first or your anth product a lot more quickly. That's great for a startup, but once you're like a north of $10 billion valuation company, like, promising the world to customers and investors, two software developers doesn't really cut it anymore. I mean, at some point, you have to grow up and face the reality that it's time to actually grow an organization into a real kind of managed enterprise with teams of software engineers specializing in certain tasks. And so there was always pushback. People would tell me, like, look, we're not trying to be like Jane Street or Citadel in terms of our number of software engineers. We want to stay lean. That's our comparative advantage. And most importantly, they didn't want two separate development teams, like one in the US. One in then the Bahamas, like, they wanted to keep the nexus of software development underneath Nashad and Gary in the Bahamas, which I just thought wasn't going to be sustainable long term. Like, if you run a broker dealer in the US. You need to have staff that is specifically allocated towards broker dealer activities. It can't be that if FINRA comes and says like, well, who's working on the broker dealer? You say, well, it's like this Gary guy who lives in the Bahamas who sometimes is awake at like 04:00 A.m., spends 20 minutes a day thinking about stocks that can't fly right, and has.", "Dwarkesh Patel", "No images of him. Okay, so we're Nashad and Gary contributing code to the FTX US. Code base.", "Brett Harrison", "Remember, like the FTX us. Side of things was like strict subset of FTX. So, like, in that sense, it kind of flowed into FTX US. With the exception of the FTX us. Derivatives, the Ledger X stuff was actually a completely separate team because that was through an acquisition.", "Dwarkesh Patel", "When you're talking about the code of, like, the matching engine or things like that. Was the code shared between FTX and FTX us.", "Brett Harrison", "Yes.", "Dwarkesh Patel", "Okay, who was in charge of ultimately approving the pull request, basically, of the Xtx US. Code base?", "Brett Harrison", "Yeah, I was like all Gary and Nashad.", "Dwarkesh Patel", "Okay, got it. And so the code you were contributing was also going to the sort of like universal global code base. Yeah, got it. Did you have sort of access to the entire code base or just the FTX US side?", "Brett Harrison", "Yeah, again, it was one share repo. I mean, there was an enormous amount of code. And one of the big problems, another problem that I raised while I was there, was that 90 plus percent of all the code of FTX was written by these two people. And it was very hard to follow. I don't know if you've ever seen like a large Python code base before. So whenever there were issues that arose there's, like this particular problem with an account on the exchange, the only answer was like, call Nashad, call Gary. Which I also knew to be unsustainable from the organizational perspective. One of the guiding principles at Janetree, for example, was mentor your junior dev so that you can hand off all your responsibilities to them. And in the process of handing off responsibilities, you make the code better, more automated, more robust problems, more easily debuggable in real time. If you hoard everything to yourself in your own brain, you end up with a code base that is just only understandable by that one person. And so it was the kind of thing where a lot of people talked about this internally. Like if Gary got hit by a bus and couldn't come to work anymore, fjs was done. It's done. Exactly.", "Dwarkesh Patel", "What do you think was the motivation behind this? Was it just that he wanted to avoid as a sort of like Google growing to 100,000 people kind of thing? Or was there something else going on? Like what? Why did it why this sort of concentration?", "Brett Harrison", "Clearly there was something else going on. I think an open question now, only thinking about this in hindsight was how much of this very cloistered organizational decision around the development team was a function of the various things they were doing that they were hiding from the rest of the company? Or was it really this sort of like one, ultra paranoia about growing too large too quickly and getting losing control of the organization, and two, an almost like sort of cultlike belief in this small team, like being the but for cause of all past, present, and future success.", "Dwarkesh Patel", "What was the discretion that you had at FTX US? It sounds like you weren't even given the capacity to hire more engineers if you wanted to. What were the things you did? Control.", "Brett Harrison", "Yeah, hiring, for example. I began pushed for many months that we should hire more people. Eventually I got permission for us to interview people, but then those ultimately have to get finally approved by the people in the Bahamas. And they would frequently say no to people who I thought were good candidates. Finally, we hired one person, and this person was doing well. He was here in Chicago, and they invited him to go spend like a month in the Bahamas to kind of hang out with them and supposedly just like to ramp up on the system. And this person comes back to Chicago and they say, you know what? Like, I really want to move to the Bahamas. They really kind of confused me to do it. And it was so frustrating.", "Dwarkesh Patel", "I was just poaching from your own company.", "Brett Harrison", "Exactly. It was such a constant battle. And at some point I kind of gave up on this idea that I was going to be able to actually grow the separate developer team. So, I mean, the bottom line is on kind of like the day to day operational stuff, especially the decisions within some of the things I was responsible for, like the stocks trading platform that I was working on, I had a fair amount of discretion and people certainly looked up to me for management and advice and direction. But ultimately the discretion ended up with this small group in the Bahamas who not only had final say on decisions, but would often make decisions and not communicate with the senior people in the US. Side and we would just sort of find out things were happening.", "Dwarkesh Patel", "Is there a specific example or set of examples that comes to mind?", "Brett Harrison", "Sure. The biggest example for me was this sort of post my kind of effective resignation, but some of these strategic acquisitions that were being done in the US. During the summer of 2022, I would find out from the news or it would sort of be mentioned on a signal chat or something that this was happening. And there was no opportunity to actually wade into the discussion about how this is going to greatly affect the US. Business, it's going to greatly affect our priorities. And it wasn't clear if this was like a good decision or a bad decision. It was like a unilateral decision that was made, like, we're acquiring this company or we have the option to acquire this company.", "Dwarkesh Patel", "Are there decisions that were made from the Bahamas that stick out to you as being unwise that, I don't know, you try to speak out against? You mentioned some of them, right? Like not hiring enough people and not getting more developers. But are there other things like that that stick out to you as bad decisions?", "Brett Harrison", "A lot of the spending, I mean, on everything from, like, lavish real estate to all of these, like, partnerships to, like, very, very large venture deals, like, these were the kinds of things in the company where people ask, when does it stop? To one end, are we doing a lot of these things? And some of those resulted in sort of like direct confrontations, like, just, why are we doing yet another deal with a sports person or a celebrity? This is ridiculous. This is not doing anything for the company. And we're completely distracting from the role that we thought we all had which is to build a really great core product for people trading crypto on crypto derivatives.", "Dwarkesh Patel", "Yeah. And did you bring this up directly with SBF?", "Brett Harrison", "Yeah, multiple times.", "Dwarkesh Patel", "And how would he respond?", "Brett Harrison", "Sometimes he was nice about it, and he would say, yeah, I see where you're coming from. I do think what we've done so far has been really valuable, and we probably should do some more of it, but maybe at some point we should stop a lot of this. Sort of like, hedging language that was ultimately non confrontational, noncommittal I mean, he was a very non confrontational person. Very conflict, avoidant person within the company. And then at worst, there were other times where I brought up specific things that I thought that he was doing wrong. There was one really unfortunate time where it was the first time I visited the Bahamas in November of 21. And I'm the kind of person who, if I see something wrong at a company, it doesn't matter what company I've worked at or how junior or senior I've been. I like to go to the person most senior in. Charge and tell them this thing seems wrong to me and that's I feel like it's one of my superpowers of just like, not being afraid of just saying when something seems wrong to me. And sometimes I'm just totally wrong and don't understand the full picture, and sometimes it results in something better happening, and people will, you know, thank me for having been honest and bringing to attention something that's actually wrong. And so I said to Sam, I think you're doing way too much PR and media. First of all, it's really diluting you and the FTX brand to constantly be doing TV interviews and podcasts and flying to banking and private equity conferences. And it was so much time spent on this stuff, and also it was completely taking away from the management of the company. People would sometimes send Sam slack or signal messages and not get responses for weeks at a time. And it felt like he was spending virtually no time helping the company move forward. It was so much about image and brand and PR, and he was really angry at hearing this criticism directly.", "Dwarkesh Patel", "How did he react?", "Brett Harrison", "I mean, he was just he was sort of emotional. He was worked up. He told me, like, I completely disagree with you. I mean, he said, like, I think you're completely wrong. He said, I think the stuff that I've done for PR is maybe the greatest thing that's happened to this company and I should do more of it. I didn't think it was physically possible to do more of it. And I realized at that moment that this was not really going to work super well long term. Like, if we're not in a relationship where I can give sort of my direct, superior, like, real, honest, constructive criticism that I thought was for the good of the company that this wasn't really going to work.", "Dwarkesh Patel", "He actually did my podcast about, I don't know, eight months ago or something. And while I was, like, very grateful he did it, even at the time, I'm like, I don't know if I would have agreed to this if I was in charge of a $30 billion empire.", "Brett Harrison", "Yeah, sometimes some reporters would say to me, can you get me in touch with Sam? And I would say why? I'm not really his keeper. You could contact him yourself. They're like, oh, because we want to come to Bahamas and do a special on him. And I would say, like, okay, you're going to be like the 6th one this month.", "Dwarkesh Patel", "There's an exclusive here. So I guess to steal man, his point, he did get a lot of good PR at the time, right? Potentially. Well, not potentially, actually too much. In a way that really created at the time sort of like the king of crypto sort of image. Was he right about the impact of the PR at the time? Yes. Let me ask a question a different way. How did he create this image? I mean, people were saying that he's the JP. Morgan of crypto. He could do no wrong. Even things that in retrospect seem like clear mistakes, like only having a few developers on the team universally praised huge empire run by a few developers. How was this image created?", "Brett Harrison", "I think that media was primed for the archetype that was Sam, this sort of young, upstart prodigy in the realm of fintech. We have a lot of these characters in the world of big tech, and I think that he had a particular role to play in the world of finance. And by making himself so accessible all the time, he gave people a drug that they were addicted to, which was like, that constant access. I feel like any time of day or night, someone could text Sam and get him on the phone with them if they were in media, and they loved it. It was like getting access to a direct expert who was also this famous person, who was also this billionaire, who was also this extremely well connected person, who was also this very insightful person, who knew a lot was going on in the industry and can give them insight and tips. And I think there was some amount of what I like to call like, reputation laundering going on here, where it was like, okay, so you get the famous celebrity to endorse Sam, which makes this politician think highly of Sam, because they also like that celebrity. And then also the investors are writing really great positive things online about it, but also the media is enforcing how cool it is that Sam is doing all these other things. And it all sort of fed into this flywheel of building up Sam's image over time in a way that didn't necessarily need to match the underlying reality of who he was with the company.", "Dwarkesh Patel", "And what was the reaction of other employees at FTX of this sort of not only the media hypetrain, but also the amount of time Sam was spending with the media.", "Brett Harrison", "On one hand, I think people were rowing frustrated within the company because of the lack of direction and some of like the power vacuums that resulted from Sam's continual absence. On the other hand, so many people within the company just hero worship Sam. When you hear all the really tragic stories now of all the employees who kept all of their funds and life savings on FTX, they really, really believed in Sam. And doesn't matter how little time he spent with the company, it doesn't matter how he treated employees internally, it was like he was this sort of genius pioneer and that image couldn't be shaken.", "Dwarkesh Patel", "And I certainly don't blame anybody for it. I interviewed him. I tried to do a lot of research before I interviewed him, and I certainly was totally taken with this.", "Brett Harrison", "Right.", "Dwarkesh Patel", "I thought he was the most competent person who had ever grazed crypto, but so what was he actually like as a manager and leader? Other than, I guess obviously the micromanaging aspect of it, or feel free to speak more on that as well. But in terms of the decisions he would make in terms of business development and prioritizing things, can you describe his sort of management style and leadership?", "Brett Harrison", "In the beginning? When I joined FTX, my initial impressions were that he had pretty clear intuition and insight into the simple things to do that would work in many ways. If you think about what FTX did, it wasn't really super complicated. It was like just be operationally good and give your trading customers as predictable of an experience as possible with regards to collateral management and auto liquidation and matching engine behavior and latency. And so they did it. I would say, aside from the intuition, sam wasn't a details man. That was usually left up to the people below him to really take care of was like to drive a project to completion to figure out all the details that had to be done. I think besides that, as a leader, I thought he was fairly incompetent. I thought he was very conflict, avoidant. He didn't like to get into direct confrontation with any of his employees where most of the reasons why people needed to talk to him were because there were significant issues, whether those were personnel or otherwise, and he just blew them off. That was a frequent occurrence in the company. If you went to Bahamas and I went to only a couple of times to actually visit the office, if he was in the office, he was there all day on calls all day, whether those were with investors or with media, podcasts, whatever. It was just consistently just doing that. And I saw very little time where he actually got up and talked to anyone else within the company about anything. So I think to me that was the primary impression I got of his leadership was virtually that there was none, which made me feel a lot like I and others needed to step up and sort of take that role in the absence.", "Dwarkesh Patel", "Got it. And so who was making these day to day decisions in the absence of Sam?", "Brett Harrison", "On the foreign side in the Bahamas, Nashad was really like the number two person there. I mean, he was making a lot of decisions. There were a couple of others in the Bahamas who were taking kind of swaths of the company, whether it was like investments or marketing or legal things like that. On the US side, we had like a different crew trying to make decisions where we could for us regulated matters. But again, we were always sort of below the decision making authority that was happening in the Bahamas, especially inside of the home where they were all living.", "Dwarkesh Patel", "So it seems like Fcx was a really good product compared to other crypto exchanges. I've heard a lot of traders phrase it was this competence sort of built while SBF was still doing media stuff, or was this built before he kind of went on the PR tray? And like, how was this product built while the CEO was kind of distracted?", "Brett Harrison", "So I think the core of the product was built before my time, and my understanding was in the transition from Alameda to FTX, where there was no publicity around Alameda, there wasn't any publicity around FTX. It was very much like heads down build mode for several months. And just think, think, think about the core product having been a trader on these different exchanges around the world that also offer derivatives and knowing all their problems. Like, for example, if you had an ether futures position and also an ether spot position on this one exchange, you could get liquidated on your ether futures position even if you had enough ether spot as collateral, because you needed to have that spot crypto within the ether futures spot collateral wallet, which was different than the ether spot wallet. And so it was this game of shifting assets around to different wallets to make sure you kept meeting your collateral requirements, which was just an operational nightmare. And so Sam told and worked with Gary and the Shot to build basically a cross collateralization system where you have just one wallet with all of your assets, all, you know, haircut. It appropriately based on volatility and liquidity, but then summing up to a single collateral value that represents what you can put on in terms of margin for all of your positions. Or having an auto liquidation system that doesn't. Just the second that you're slightly below your margin fraction. Send a giant market order into the book and dislocate the order book by 10%. It would automatically start liquidating small percentages of your portfolio at a time to try to minimize market impact. And then if the position got too underwater, it would auction that position off to backstop liquidity providers, a number of them, who would then take on that position again without having to kind of rip through the book and cause dislocation. And so it was much more orderly, it was much more predictable. And that had to have come from the initial intuitions that Sam and his colleagues got from being traders on these exchanges and thinking, how should this work if it were perfect? So I do think in the beginning, they were really working on that product together. And then once the success came and Sam got drunk on the celebrity of being so out there and known and having all these newfound connections, that things are to go by the wayside.", "Dwarkesh Patel", "You mentioned that one of these things that he was doing was making these sort of exorbitant deals and with celebrities, with acquisitions, branding. What was your understanding at the time of where the money to do this was coming from?", "Brett Harrison", "Yeah, so, for example, when I joined the company, FTX had just inked that Miami Heat deal, and I think it was something like 19 million a year. And I was like, well, that sounds like a lot of money. Right? But at the time, you could see the publicly reported volume on FTX, it was something around 15 to 20 billion emotional per day. The fee schedule was also public. So even at like the the highest volume tiers, the you know, the take fee would be something like two basis points per trade. So if you just did like $20 billion traded per day times two basis points times 365, which, because crypto trades every single day, you can get a sense of how much money FTX was making a year. And at the time, I think the run rate for FTX was something like close to a billion dollars in income. And you think, okay, is $19 million a reasonable percentage of the total income to spend on a very significant, important marketing play? I don't know. It feels kind of reasonable. Like, how much does Coca Cola spend per year on marketing as a percentage of their income? It's probably somewhere between like 50 and 130%. I don't actually know what it is. It doesn't seem crazy.", "Dwarkesh Patel", "Yeah. But if you add on top of that, the real estate, the other sort.", "Brett Harrison", "Of acquisition, all that stuff came later. And secondly, a lot of that wasn't known to the employees within the company. Most of the venture deals, the value of the real estate, et cetera, was non public within the company. There were 100 plus million dollar investments into various companies and other investment funds that were never discussed openly, at least to the US. People. So it wasn't like there was sort of this clear internal accounting where people could look at it and say, hey, are you really spending all this money on this, all this stuff? No, I think Sam very deliberately kept all that stuff within his innermost circle for a reason, because he didn't want the criticism on what he was spending on.", "Dwarkesh Patel", "And did you have access to, or did you ask to see, I guess, the balance sheet or any of the sort of financial documents?", "Brett Harrison", "I have zero access to bank account stuff or financials on the FTX.com side on the US. I had some. But remember now, knowing what we now know about even like, recent, like the guilty pleas from the shot and seeing like the complaints from like, the SEC Cfdc, they were like, deliberately falsifying information that went into ultimately the audited financials. So in order to actually have suspected anything, one would have to not only, like, disagree with all of the kind of internal conventional wisdom around how the company was doing, but also have to basically distrust audited, financials coming. Back to the company combined with having any concerns about income when it seemed like we were generating income faster than any startup in history. So I think it was very difficult for anyone within the company, especially in the US side, to have a clue what was going on.", "Dwarkesh Patel", "Sure. Let's talk about Alameda. So I guess, again, maybe the best point to start this story is also with Jane street, where Caroline Ellison went out to become the CEO of Alameda, was a trader. Did you happen to cross paths with.", "Brett Harrison", "Her at Jane street? It's hard to remember because it was like the early days, but I'm pretty sure she was also one of my boot camp students.", "Dwarkesh Patel", "It all starts there.", "Brett Harrison", "But besides those early interactions, I barely interacted with Caroline, not in the same way that I had done with Sam, just based on the trade and desk he was on. And when I joined the company the FTX us People, communication wise, were walled off from Alameda. So we didn't really cross paths almost at all.", "Dwarkesh Patel", "What was your understanding of the relationship between Alameda and FTX?", "Brett Harrison", "This is a completely separate company. Sam doesn't really do anything for them anymore because he is 100% focused on FTX. It's separately being run by Caroline and Sam Tribuco. They have the same access to the exchange, like, data feeds and API as any other market maker on the exchange. And also, you know, especially towards the time that I left, that alameda wasn't even a significant percentage of the exchange volume anymore. They weren't in like the top 20 market makers on FTX.com or something like that.", "Dwarkesh Patel", "You mentioned that you were contributing to the code base and you had access to the code base. People have been speculating about whether Gary or Nashad had hard coded some sort of special limit for Alameda. Did you see any evidence of that in the code base?", "Brett Harrison", "Definitely not.", "Dwarkesh Patel", "You mentioned that you. Visited the Bahamas offices a few times, and the Alameda, there's like four huts and there's like a meeting room. There's burst salmon, the engineers are there's, the Future Fund, and then there's like, the Alameda hut.", "Brett Harrison", "Yeah.", "Dwarkesh Patel", "Did the physical proximity between the offices and of course, the fact that the leaders were living together, was that something you inquired about or were concerned with?", "Brett Harrison", "I never visited the places where they lived in that Albany section of the Bahamas, so I think I didn't fully grasp the extent to which they were all living in this particular arrangement. But I understood that as long as Sam was going to be the 90% owner or something of Alameda, he would want oversight there. And so having them close by made sense. But the actual hut set up was such that they had like, physical separation from minute to minute. So it wasn't like Alameda could overhear stuff happening on the exchange or people in the exchange could overhear stuff that was happening at Alameda. So to some extent they felt like, well, at least they're going through the right motions of setting up, like, physical separate buildings. Also, this is not uncommon within trading firms and investment banks. Right. Like, if you imagine there needs to be wall separation between buy side and sell side at different institutions. And the way they do that is they put them on different floors in the same building. Right. And sure, they can meet each other for lunch in the lobby, but they set up some actual physical separation. This is like super par for the course when it comes to financial firms that have these businesses that need to be walled off from each other. And so that didn't seem like particularly strange thing to me at all.", "Dwarkesh Patel", "Is there anything that, in retrospect, seems to you like a yellow flag or a red flag, even if at the time it's something that might make sense? In the grand scheme of things, the.", "Brett Harrison", "Most obvious thing only in hindsight, was that Sam liked to do bonuses for the employees twice a year and once the end of June, once the end of December. So they were like semester bonuses. And in the previous semesters, he had paid them early in May for the first semester in November, or early December for the second one. And he was extremely late in doing the mid year 2021, 2022 bonuses. So much so that people within the company started to freak out because there was a lot of bad news in the press about other companies doing layoffs or folding. And it was two to three months late, and people were expecting to get bonuses to pay rent and do whatever. And there's very little communication around this, and people were very concerned. So at the time, people said, look, Sam's like really busy.", "Brett Harrison", "He's flying to DC every week, he.", "Brett Harrison", "Has all the stuff going on. He just hasn't gotten around to it. But don't worry, it's coming. In hindsight, it felt like there was some clear liquidity issue that was probably the most obvious thing. Everything else is all just things that were red flags about the organization, not red flags about potential liquidity issues or fraud. Things like the complete inability to hire more people, especially on the developer side, not allowing me to establish separate sort of sea level staff on the US side that would have authority that was really separate from the ones in the Bahamas. How completely tightly controlled the Dev team was around access to the code base and the inner workings of all the exchange, and really wanting to keep that nexus of the developer group in the Bahamas next to Gary and Ashod. Those seem like red flags now.", "Dwarkesh Patel", "Yeah, but not at the time. Did you notice anything weird during the Terra Luna collapse? Because in the aftermath, people have said that that's probably when Alameda defaulted on some loans and maybe some sort of like hole dug itself deeper.", "Brett Harrison", "Really?", "Brett Harrison", "Nothing at all. I mean, maybe that's a function of being here in Chicago and just not seeing a group of people freaking out. But nothing seemed wrong at all. In fact, we started having conversations around paying out mid year bonuses a couple of weeks later after Terra announcement and everything seemed very normal. Sam sent out an announcement to the whole company basically saying like, okay, we're going to be paying out bonuses soon. People should expect they're going to be like a little bit lower because we have very similar revenue to last year. But we've also grown in size and also like, the market is slowing and we need to be a little bit more conservative. So all the signs pointed to things as normal.", "Dwarkesh Patel", "You had a thread sort of boiling down this experience on Twitter and one of the things you pointed out there is that you saw the sort of symptoms of sort of mental health issue or addiction issue at the time there. Are you referring to the sort of management mishaps and bad decision making or was there something more that made you come to this conclusion?", "Brett Harrison", "I think it was more than that. When I knew Sam, when he was 21, 22 years old, he was like a happy, healthy looking kid who was very positive, very talkative, got along super well with his cohort of traders. The people on the desk really liked him. When I got to FTX, I think over the course of my time there, I saw someone who was very different than that person I remembered. I think he was angrier, seemed more depressed, more anxious. He couldn't get through a conversation without shaking his leg.", "Dwarkesh Patel", "Street.", "Brett Harrison", "He wasn't like, that not something I remember at all. He would snap easily. He would not respond to messages for long periods of time. And people had different theories. I mean, people would attribute it to the unbelievable stress of being in the position that he was in complete lack.", "Brett Harrison", "Of sleep, like his diet, lack of exercise.", "Brett Harrison", "I mean, people had plenty of thoughts about what could be causing it all, but something definitely had deteriorated mentally and physically about him from who I remembered.", "Dwarkesh Patel", "If you had to yes, most likely cause of that, what would you say?", "Brett Harrison", "I don't know. I think that's up for a professional with credentials that I don't have. But I do think it was probably a combination of everything. The lack of sleep, the stress he probably was under, not just being in his role, but having kept this secret for so many years around. Whatever was happening with the holes in the exchange and the lying he was doing to his own employees, to investors, to auditors, maybe that weighed on him. Maybe had something to do with his medications and he had just had to be just a plain deterioration in mental state or some kind of personality disorder or a different kind of anxiety disorder. I really don't know. Maybe a mixture of everything.", "Dwarkesh Patel", "Yeah, got it. You said you gave him a sort of ultimatum letter where you said, unless you change these things, I'm resigning. What were the things you asked that be changed in that letter?", "Brett Harrison", "Yeah. So the top three things were, one, to communicate more with me in particular, I could probably count on one hand the number of times I had, like, a one on one phone call with Sam, which probably seems insane, given, like, the position I was supposed to be in. I basically said, like, we have to talk every week. It's impossible for me to get anything done if I don't have the authority, but I have the responsibility to be able to push this company forward, and we're not talking at all. So that was number one. Number two was to establish separate, especially sea level management staff on the US. Side. If Sam was going to be so busy doing what he was doing, at least he needed to delegate that responsibility to, like, a set of professional managers who could actually take care of the day to day operations within the company. And it felt like things were starting to unravel in the absence of that. And then the third was to grow the tech team and move a lot of the authority and management of that team away from nisha and gary so that we could actually spread the knowledge and be able to keep up with a lot of the tasks that we were assigning ourselves and trying to build all these new business lines. That pretty much summarizes it, yeah.", "Dwarkesh Patel", "How regularly were you talking?", "Brett Harrison", "It wasn't regular. I was on chat groups that he was in, and so occasionally he would respond to something I say on that group, but one on one conversations, I think there were fewer than ten for my entire tenure.", "Dwarkesh Patel", "Wow, okay. And that was over a year, right?", "Brett Harrison", "Year and a half. Year and a half.", "Dwarkesh Patel", "About less than one every two months, yeah. How did he respond to this letter?", "Brett Harrison", "So it took a little while before we got on the phone and he went through every point and refuted everyone, starting with communication. He said, I think phone calls are a waste of time. I think that if I promise people regular phone calls, they will use it to waste my time and it's not an efficient mode of communication. He said, I think we have the best developer team in the world and I think anyone who suggests otherwise is completely wrong. And if we add more people to the Dev team, if we move them to the US and move them away from the Bahamas, we're going to be worse as an organization. He kind of ignored the point about separate leadership. I think he hated the idea of giving other people kind of titles that would reflect kind of greater responsibility within the company. That conversation ended with us kind of not knowing what the future was going to be because I basically said, look, I'm going to resign if you don't fix this. He said, we're not fixing anything. And then what happened next was he had deputized another person within a company to come here to Chicago and pull me into a side room and say, you are probably going to be fired for this letter that you wrote and not only are you going to be fired, but Sam is going to destroy your professional reputation. Like, where do you think you're going to be able to work after FTX after all this happened? And he was threatening me. And then not only that, he had said, if you are going to have any hope of staying and if you can forget about getting paid bonuses, you need to write Sam an apology letter and show it to me first because I'm going to edit it and I'm going to tell you what to say. And I said, absolutely not. This isn't like a mafia organization. This is extremely unprofessional. And I knew at that point there was absolutely no way I was staying. It was a matter of when, not if. But what I did know was that I'm still a professional, I'm still loyal to the company. I still believe the company itself had an incredible potential to continue its sort of road of profitability. And I really liked all my employees here on the US side and I wasn't going to abandon them. So I sort of thought like a three to six month time period is about standard to take the time to unwind responsibilities, to finish the stocks platform that I was working on to, you know, get my team in. A position where I knew they would be in good standing and they wouldn't be retaliated against after I left and took the time to do that before officially resigning. In kind of the end of the summer in early fall.", "Dwarkesh Patel", "And did that happen after you left? Did the, did he try to enjoy your professional reputation?", "Brett Harrison", "He did. The acute thing that happened was Sam, I actually offered to stay on longer. I could stay on for a couple more months and help this transition. To whomever you name as the successor president of FTX US. And he said, no, I want you gone more quickly. And so I should say he said that, but he was communicating through other people. He wasn't talking to me directly at that point. And so he said like, I want you gone on September 27. So okay, that's fine with me. On September 27, not only did he announce to the company my resignation, he also announced that he was closing the Chicago and San Francisco offices and that everyone had to move to Miami. And basically if they didn't move to Miami by a certain date, they were not going to be at the company anymore. So the employees were distraught. And what I had learned later from several investors and reporters who had talked to me was that when they talked to Sam about my leaving, sam told them that my my leaving was a combination of resignation and firing and that one of the reasons that I had to leave was because I refused to move my family to Miami. So basically that guy was constructively fired that he had closed down this office that I built and that if I wasn't going to move that I couldn't, I know, roll off of the company. And so that took a little bit to crawl out from. I had to tell people like, well, it's completely false, it didn't happen at all. And yeah, and he was telling people that he fired me.", "Dwarkesh Patel", "And when he said that, he was still at a sort of like peak of hype, so to speak. Right, absolutely. So, I mean, did the idea forming Architect have already had that idea by this point?", "Brett Harrison", "Yeah.", "Brett Harrison", "Knowing that I was going to leave, I started thinking what I was going to do next and thought, well, if I think I can run a company better than Sam, I should put my money where my mouth is and start a company.", "Brett Harrison", "But I had this, a couple of.", "Brett Harrison", "Ideas and they had this particular idea for Architect and it was starting to really form kind of towards the end of my time at FTX, but I hadn't started anything. And so finally I left FTX and then took a little bit of time off and then started to talk to investors about maybe raising some money for starting this company. And there were a few investors that basically said like, do you have Sam's blessing to do this? Why do I need Sam's blessing? I've resigned, I don't work there anymore. They said, we really would feel more comfortable if we could talk to Sam first. And kind of like, you know, make sure things are okay and kind of figure out what he's doing, find out if he wants to invest too before we kind of talk to you further. And it was impossible to escape that like the Sam kind of hype bubble. Even having left the company.", "Dwarkesh Patel", "Why do you think they were so concerned about were they trying to invest in FTX in the future?", "Brett Harrison", "They were existing FTX investors.", "Dwarkesh Patel", "Okay.", "Brett Harrison", "And I think it really mattered to them what Sam thought of them. And if they didn't know the full story, and if they were being told that Sam fired me, then I think they were concerned about potential conflict investing in me too.", "Dwarkesh Patel", "Was any part of that because Sam had a reputation as I don't know, if like, an investor invested in somebody he disapproved of, he would get upset in some way?", "Brett Harrison", "No. If that happened, I don't know about it, but I think it was just sam had such a kind of magical hold over the entire industry, from investors to media to politicians, that they looked to him for approval.", "Dwarkesh Patel", "Okay.", "Brett Harrison", "Yeah.", "Dwarkesh Patel", "So at this point, you've left FTX us and you're starting to work on your own company. And when is this exactly? This is.", "Brett Harrison", "Well, my official resignation was the end of September. I had stopped working earlier than that and so I kind of started to start working on fundraising for the company.", "Dwarkesh Patel", "In October and then a month later the thing implodes. So when did you hear the first inkling that there might be some potential trouble?", "Brett Harrison", "The first thing I heard was the night before the announcement that finance might be buying FTX. I was just looking at Twitter and just saw all of this fearmongering. It was like, okay, well, Cz says he's selling FtT and so FtT is going to go down. And people were saying, well, that means Alameda's Toast. And then once Alameda goes under, oh, that's going to be a problem for FTX. Pull your funds from FTX. And I was just sort of laughing at this because whatever, I'm used to people saying things on Twitter that seemed nonsensical. And first of all, Sam and Caroline are great traders. Like, if anything, like maybe they'll profit from all this volatility and tokens and they don't understand, like, there's no way anything's going to happen to Alameda. But also this connection between the price of FtT token and the ability for customers to withdraw their funds from the exchange, like, this just did not compute for me at all. So I was like, this will boil over in a couple of days, like everything else. And the next morning I was actually busy talking to my own lawyers and investors for the company because we were actually closing up our investment round. Actually, the closing docs for my investment round went out that morning. The morning that FTX announced they were going to be bought by finance. It was like the worst timing in crypto fundraising history. So I was busy all morning. And then I went online and checked Twitter and then saw Sam's tweet that said, what comes around goes around, and we're going to get acquired by finance. And I don't know. I felt dizzy. I had no idea what was going on in the world at FTX. I just couldn't put the pieces together in my head. It just didn't make any sense to me.", "Dwarkesh Patel", "So before then, you did not think this was possible?", "Brett Harrison", "I kept a bunch of money on the exchange. I was still an equity holder in FTX and FTX US. I was still very pro FTX, in spite of my experience with Sam.", "Dwarkesh Patel", "And then how did that week unfold for you? You were, I guess, almost about to close your round. What happened to the round? And then how were you processing the information? I mean, it was like a crazy week. After Direct, the deal falls apart, bankruptcy hacking. Anyways, tell me about that week for you.", "Brett Harrison", "Sure. First, the investors, we all had to hit pause. First of all, Architect became priority number 1000 on everyone's list. Secondly, a number of those investors were trying to do damage control themselves. They either they were themselves investors in FTX or FtT. They had companies who part of their runway was held on FTX, or they were expecting to get investment from FTX. So people were just trying to assess what was happening with their own companies. They were not writing checks into new companies anymore. So I had to hit pause on the whole thing for their sake and for my sake. And yeah, just what could one do in that situation except for just read the news all week? Because everything that came out was something brand new and unbelievable, more unfathomable than the thing before. And it was a mixture of, like, rumors on Twitter and articles coming out in major media publications and the kind of the announcements of the bankruptcy. It was just information overload. And it was very difficult to parse fact from fiction. And so it was an emotional time.", "Dwarkesh Patel", "Yeah. Understatement. Right. All right, so we've kind of done the whole story of you joining to the company collapsing. I want to do a sort of overview of, I guess, what exactly was happening that caused the company to collapse. And I guess the lessons there. Right. In the aftermath, SPF has been saying that FTX US is fully solvent. If they wanted to, they could start processing withdrawals. He had a substance recently in January where he said that it had $400 million more in assets and liabilities. What is your understanding of the sort of relationship between the solvency of FTX US?", "Brett Harrison", "Sure. Answer is, I really don't know.", "Brett Harrison", "If you had asked me about the.", "Brett Harrison", "Solvency of FTX US at the time that I left, I would have said, why are you asking about this? Like, of course everything's fine right now. It's very difficult to understand what is going on. First, the level of deception that was created by this inner circle of Sam's and now reported through the various complaints and the indictments from DOJ, they were doing things to intentionally manipulate internal records in order to fool, like, the employees and auditors and investors. So everything's out the window at that point. And then secondly, it sounded like in the week prior to the bankruptcy, there was this flurry of intercompany transfers. And given all that that's happened, it's impossible to say what state we are in now compared to where we were several months prior. So it's impossible to know who took.", "Dwarkesh Patel", "Over management of Ftsus when you left.", "Brett Harrison", "I'm not sure.", "Dwarkesh Patel", "Was it a single individual or was it just rested back to the Bahamas?", "Brett Harrison", "I really don't know. I mean, I've totally cut off from everything. FTX at the time that I left.", "Dwarkesh Patel", "Before you left, were the assets of FTX US custody separately to FTX international's assets?", "Brett Harrison", "Yes, they were. Okay, so we had like a separate set of bank accounts, separate set of crypto wallets. The exchange itself was like a separate database of customers. It ran in like a different AWS cloud than the one that the international exchange ran on.", "Dwarkesh Patel", "Okay, got it. And you had full access to this and it checked out basically more assets and liabilities.", "Brett Harrison", "Right. But remember that the thing that makes them not separate is the fact, and this was completely public, that Sam was the CEO of FTX and FTXus, and Gary was the CTO of FTX and FTX US, and Nashi was the director of engineering for FTX and FTX US. And so as long as there wasn't this, like, completely separate, walled off governance that we were trying to establish while I was there, there was never going to be perfect separation between the companies. This was like a known problem. And that was what makes it so difficult to sort of understand the nature of what was potentially happening behind the scenes.", "Dwarkesh Patel", "So I guess we've been talking about the sort of management organizational issues of FTX. Were these themselves not like some red flag to you that, I don't know, something really weird could be happening here, even if it wasn't like fraud, right? These people are responsible for tens of billions of dollars worth of assets, and they don't seem that competent. They don't seem to know what they're doing. They're making these mistakes. Was that itself not something where that concerns you?", "Brett Harrison", "I mean, it concerned me, and I tried to raise concerns multiple times. If you raise concerns multiple times and they don't listen, what can you do other than leave? But you have to understand that every company I've ever worked at, and I would think any company anyone's ever worked at, has management problems and growing problems. And especially for a super high growth startup like FTX, it's a very natural progression to have the visionary CEO who brings that product to product market fit, who enjoys that sort of explosive success, and then the reins of the company are eventually handed over to professional managers who sort of take it into its maturation phase. And I thought, well, really, I'm not that person because Sam and I have interpersonal issues. But there's 100 plus major investors in FTX. Someone will figure out how to install the correct management of this company over time, and we'll bring it to a good place. Like, one way or another, this is going to succeed. There are too many people with vested interest in doing so. And so, no, I wasn't concerned that FTX wouldn't somehow figure this out. I still thought FTX had an extremely bright future.", "Dwarkesh Patel", "But there might be, I guess, these sorts of visionaries. A lot of them might have, like, problems, to put it in that kind of language, but I don't know how many of them would make you suspect that there's, like, mental health issues or there's addiction issues that for somebody who's in charge of a multibillion dollar empire, I don't know. Seems like something that's I can't quite speak to.", "Brett Harrison", "Whether people would think there are mental health issues of other people who are supposed to be the kind of figureheads of large companies. But remember, at this point, sam is not leading the day to day operations of the company like many other people are. Right. And as the kind of public figurehead of the company, sam was obviously doing a very good job. He was extremely successful at raising money. He was extremely successful at building a positive image for the company. And so in that sense, that was all going fine and the rest of the company was being run by other people. So, you know, I didn't witness anything like, you know, the addiction stuff firsthand. I definitely thought he was not as happy a person as I met, you know, a long time ago. But could you blame a person for, you know, inheriting a 20, $30 billion company and not taking it super well when you're 29 years old? I think so.", "Dwarkesh Patel", "You mentioned that given the fact that all hundreds of accredited investors presumably had done good due diligence, that gave you some comfort about the ultimate, I guess, soundness of the company. But potentially those hundreds of investors are relying on the experienced highlevel executives that SBF had brought on that is thinking that, listen, if somebody from Citadel and Jane Street is working at FTX, that's a good indication that they're doing a good job. And so, in some implicit way, you're lending your credibility to FTX, right? I guess. Was there just this sort of circle of trust where the investors are assuming if this person who has tons of leadership experience in traditional finance is coming to Fdx, they must have done the due diligence themselves. And then you are assuming that the investors have done this. And then so it's like, nobody's role to be the guy who's like this was my job, and I was the.", "Brett Harrison", "Person in charge of remember, regardless of how experienced or inexperienced people within the company are, regardless of how many or a few investors there are how many senior lateral hires there are if a very small group of individuals who are very smart and very capable intentionally put forth schemes that deceive people within the company and outside the company about the veracity of records, what can you do? What is one supposed to do in that situation? If the public reporting matches private reporting, if investors have done their own diligence, if we've joined the company and we see nothing wrong within the company from a financial perspective, if we can see the public volume on the exchange. And it all matches up with our internal reporting and we know how much fees we'll be able to collect and all that. And it seems like a lot of income compared to our expenses for a two or 300 person company. At what point do you go against all of that and say, in spite of the overwhelming evidence to the contrary, I think something is wrong?", "Dwarkesh Patel", "Yeah, but someone might look at this and say, listen, Brett, you weren't like a junior trader who was like, right out of MIT or something, who just joined FTX. You have more than ten years of experience in finance. You saw Lehman happen. You've managed really large teams in traffic by, and then you have the skills and the experience. And if somebody with your skills and experience and not only that, your position in FTX, as president of FTX US, if you can't see it coming, and maybe you couldn't, right. Whose job was it to see it coming? It doesn't seem that anybody other than the inner circle could have been in a better position, and maybe nobody could have seen it. But is there somebody who outside of the inner circle you think should have been able to see it coming?", "Brett Harrison", "I don't know. It's a good question of, like, when a major fraud happens in such a way where it was very expertly crafted to be hidden from the people who could have done something about it, what should one do? One answer is never trust anyone. Right. Like every company I ever worked for in the future, every time we say we've done some transaction, I will ask them to show me the bank records and give me the number of the bank teller I can call to have them, like, independently verify every single banking transaction. This is sort of like impractical and ridiculous.", "Brett Harrison", "It doesn't happen.", "Brett Harrison", "And so I think it sounds like the counterfactual here is one where, okay, first I have to believe that there is some kind of fraud, which I don't. Then I have to say, okay, I would like to start auditing all. Bank transactions. Actually I want to start auditing all bank transactions for the company that I don't work for. Also I want to disbelieve audited financials from respected third party auditors. I also want to look into the possibility that Sam is like lying under oath in congressional hearings about segregation of customer funds. Also I should disbelieve all of the trusted media outlets and also 100 financial institutions that have invested in FTX. It's like the chain that you have to go through in order to get to a point where you start to be able to figure out something is wrong is, I think, really impossible. And I think the bottom line is for sure should be mandated at certain stages of company growth, independent boards. And I think that a lot of that has to do with where the nexus of control of the company really is and making sure it's in a place where there is appropriate regulatory oversight and appropriate financial oversight. I think that maybe could have helped. But besides that, I think this is ultimately a job for enforcement. Like, people will commit crimes and there is nothing one can do to stop all people from committing all possible future crimes. What Gun can do is come up with the right structures and incentives so that we can build like a trust based system where people can innovate and build great companies and people who are bad actors can get flushed out, which is ultimately what I think is happening.", "Dwarkesh Patel", "But I guess they're not letting you hire people. They're like they're like overseeing, writing the actual code for FTX US from the Bahamas. Not something that makes you think, like, why are they doing this? It's like a little odd.", "Brett Harrison", "I just thought it was not the right way to run the company. There's a very large chasm between I don't think they're doing a good job running the company and I think that customer funds are at risk.", "Dwarkesh Patel", "Right, yeah, fair enough. Why should someone who sees bad organizational practices there's no board. They're making a ton of really weird investments and acquisitions. And not only that, like, most importantly, they are responsible for managing tens of billions of dollars worth of other people's assets. What should somebody do when they're seeing all this happening? I mean, obviously it's very admirable and that you put this in writing to him, you gave it to him, and then you resigned when he refused to abide by it. So maybe the answer is just that. But is there something else that somebody should do?", "Brett Harrison", "I would say within any company, and I would expect that the overwhelming majority of companies, if you see bad management, it does not imply fraud. But there's lots of places with bad workplace culture and people are making bad management decisions. And it should be that if you're and find yourself in that position, there is someone you can go to to talk to. It might be your manager, it might be your manager's manager. It could be someone in your HR department. But there should be like a designated person within the company for you as an employee that, you know, you have a safe space to bring complaints about the workplace and about the company strategy. And then you should see how they handle it. Do they take it seriously? Do they make changes? Do they look into the stuff you're talking about? Do they encourage cooperative, positive discussion? Do they threaten you? Do they retaliate against you in some way? Do they start excluding you from conversations? Do they threaten to withhold pay? Like if you're in that ladder camp, what do you do? At that point? It's easy for me to say, and I've been in fortunate positions within companies and have personal flexibility, it might not be so easy for the average person to sort of get up and leave a job. But I do think that at some point you have to start making plans, because what can you do in the face of a giant organization that you disagree with other than leave?", "Dwarkesh Patel", "Let's talk about regulators and your relationship with them while you're at FTX. Obviously, as head of FTX US, I imagine you were heavily involved with talking to them. What was their attitude towards FTX like before it fell?", "Brett Harrison", "All the regulators were, I think, in the common belief that crypto was a large and viable asset class. And in order for it to grow in a responsible way, it needs to come within the regulatory envelopes that already exist in whatever way that's appropriate for crypto. And crypto could mean a lot of different things. We have to maybe distinguish between centralized and decentralized finance here. But I would say regulators saw FTX as at least one of the companies that was very willing to work directly and collaboratively with regulators, as opposed to trying to kind of skirt around the regulatory system.", "Dwarkesh Patel", "Well, when I was preparing to interview SPF, actually, I got a chance to learn about your proposal to the Cfdc. We were just talking about you were explaining this earlier, but the auto liquidation and cross margining system bring that not only to crypto in the US. But to derivatives for stocks and other assets. I thought, and I still think it's a good idea, but do you think there's any potential for that now, given that the company most associated with that has been blown up? What is the future of that innovation to the financial system look like?", "Brett Harrison", "Yeah, I definitely think it's been set back. It's interesting. Walt Lucan from the Futures Industry Association, in a conference that was shortly after the class of FTX talked about FTX and sort of a speech, but specifically made a call to the fact that in spite of what happened to FTX. The idea of building a future system that can evolve with a 24/7 world is still a worthwhile endeavor and something that we should consider and pursue and be ready for. We are 3D printing organs and coming up with like, specially designed mRNA vaccines, but like, you still can't, you know, get margin calls on a Saturday for SAP 500 future. There's like some real lack of evolution in market structure in a number of areas of traditional finance, and I think it's still a worthwhile endeavor to pursue it. I think the La Ledger X proposal makes a lot of sense. I think it's understandable where some of the concerns were around how that could really dramatically alter the landscape for derivatives regulatory structure and market structure. And there were still unaddressed questions there. But I still think that it was the right idea.", "Dwarkesh Patel", "During those hearings, I guess the establishment CME and others brought up criticisms like, oh, we have these sort of biscuit relationships with our clients. And if you just have this algorithm take its place, you can have these liquidation cascades where illiquid assets, they start getting sold. That drives the price even lower, which causes more liquidations from this algorithm. And you have this sort of cascade where the bottom falls out. And even though it might not be an accurate way to describe what happened with FtT and FTX because there was obviously more going on, do you think that they maybe had a point, given how FTX has played out?", "Brett Harrison", "A lot of FCMs have auto liquidation. There is one particular one where they actually automatically close you every day at 04:00 p.m., and they do it in a really bad way. So the idea of auto liquidation is not new. The idea of direct to customer clearing is not new. The idea of cross collateralization is not new. The thing I think that was novel about FTX was putting all together, it was direct to customer margining, cross collateralization, auto liquidation. And so in order for the regulators to get comfortable with the application, they had to understand that FTX was one entity was performing the roles that typically multiple different entities perform. And you always need to ask yourself the question of, like, was there something worthwhile about having those different entities be separate or not? Is it just sort of legacy regulatory structure? I think that remains to be seen. And I think we don't have enough experience, especially in the US. With that kind of model, to be able to say whether it actually works better or worse. I think either way, it was worth a try. And I think maybe the biggest misconception about the application was that if we got approved, it meant suddenly FTX is going to list everything from corn to soybeans to oil to SMP 500 overnight and completely, you know, destroy the existing derivatives landscape. I think what have actually happened was FTX would have gotten permission to list, like, one contract on kind of small size and there would have been experience with the platform and it would have been assessed compared to the alternatives on traditional CCPS. And if it was worse, changes would have been made and if it was better, it would evolve and the market would basically decide what people wanted to trade on. I do think the auto liquidation part was the main piece that people were hung up on, which was like, how do you provably show the kind of worst case scenario in auto liquidation cascade? Then again on large TCPS. Now in the US and Europe, there are margin breaches all the time. The current system is far from perfect.", "Dwarkesh Patel", "Backing up to, I guess, regulation more generally. I mean, many people saw crypto as a sort of regulatory arbitrage where because regulations are so broken in finance, I guess evidence would be that you're not allowed to do this manually, right? You had to go through the lengthy approval process. If you're a giant company to begin with, the entire point of crypto was to get around the regulators and not go through them to get approval for things and hand over that kind of approval process to them. Do you think that working with the regulators and then also being part of crypto was a sort of like it kind of defeated the point of crypto?", "Brett Harrison", "I think I disagree with the premise. I don't think the point of crypto is equal to arbitrage. I think while crypto remains unregulated, it is easier to get something done in crypto than if it were regulated. That's sort of like total logical. I also think that most people, especially on the institutional side, who trade crypto, believe that we are in a temporary state that cannot last forever, which is that crypto is largely unregulated or has a weird patchwork of regulatory authority. Maybe it's like the 50 state regulators in the US. Or it's some combination of like, you know, money transfer and, you know, CFD or broker dealer activity in Europe. So I think this is, it's absolutely a worthwhile endeavor knowing that there's going to be some regulation for at least part of the crypto ecosystem to work with regulators to make sure that it's done well.", "Dwarkesh Patel", "One very important question about the whole idea thing is what did the conventional narrative about how it went down and why it went down, what did it get wrong? Given that you were on the inside, what do you know was different than what has been reported?", "Brett Harrison", "I actually think not much. And I think the reason for that is typically when something like this goes wrong and it becomes this media frenzy, there's like plenty of opportunity for misinformation to spread. But to the credit of the investigators working on this case, they moved so quickly that, you know, they had, you know, unsealed indictments within what, two months of this going down. And so having kind of a lot of the truth, you know, being able to be spelled out in facts in a public written document, I think quelled a lot of the opportunity for misinformation to proliferate. And whether that's from Twitter troll or if it's from Sam Bankman freed himself trying to spread information about what happened. I think a lot of it wasn't really given the room to breathe.", "Dwarkesh Patel", "What did you make of his press tour in the aftermath? Why did he do it and what was your impression of it?", "Brett Harrison", "I'm not going to speculate what's inside Sam's head. I think Sam had built up his empire through bartley his control over media. And he did that by being available all the time and being ostensibly open and honest with him all the time and probably thought, why can't the same strategy work now? And maybe I can sway public opinion. If I can sway public opinion, maybe I can sway regulators and law enforcement. And it turns out that is definitely not true. So I don't really know. It could just be an addiction to the media. He couldn't stand people talking about him and him not being part of the conversation.", "Dwarkesh Patel", "Yeah. And I guess the earlier account you gave of his day long media cycles kind of lends credence to that mentality. I have a question about the future of the people who are at FTX. There's many different organizations who have had their alumni go on to have really incredible careers in technology. Famously, PayPal had a mafia, so called mafia, where they wanted to found YouTube with Elon Musk, SpaceX, Tesla. So many other companies came out of the people who came out of PayPal. And Bern Hobart has this interesting theory. The reason that happens is when a company exits too fast, you still have these people who are young and ambitious in the company who go then go on to do powerful things with the network and the skills they've accumulated. Do you think that there will be an FTX mafia?", "Brett Harrison", "The number of the most talented people within FTX are leaving FTX in a slightly different position than the people exiting PayPal and acquisition. I would say they're in positions more like actual mafia people. So I'm not sure it's going to be like some giant FTX mafia, but I do think there are a ton of talented people at FTX who are going to look to do something with their careers. And also a lot of those people came from very impressive backgrounds prior to FTX. So I expect them to want to continue to get back on track and build something great. And so I do think you're going to see at least a couple of people who emerge from this and do something really great.", "Dwarkesh Patel", "That's a good note to close the FTX chapter of the interview on sure. Let's talk about Architect, your new company. Do you want to introduce people to what Architect is doing and what problem is solving.", "Brett Harrison", "Sure.", "Brett Harrison", "So the goal of Architect is to provide a single unified infrastructure that makes it really easy for individuals and institutions to access kind of all corners of the digital asset ecosystem everywhere from individual crypto spot exchanges that are centralized to Dfi protocols, to qualified custodians and self custody and everything in between.", "Dwarkesh Patel", "Yeah, I'm not sure I have enough context to understand all of that. I don't know, a few grade levels.", "Brett Harrison", "Below backing up a very high level. So let's say you are someone who wants to trade crypto in some way or what do you actually have to do? So imagine you want to do something slightly more than just like sign up for Coinbase and click the buttons there. Let's say you would like to find the best price across multiple exchanges. Let's say that you not only want to find the best price across multiple exchanges, you also want to occasionally do borrow and lending from defy. Maybe not only that, you also want to store your assets in off exchange custody as much as possible. Well, aside from doing all that manually, by opening up all the different tabs in your browser and clicking all the buttons to move assets around and connect all these different exchanges, you actually want to build a system that unifies all these things. You have this diverse build choice. And the build choice looks like. Hire five to ten software developers and get them to write code that understands all the different protocols and different exchanges, all the synchronicity of them, that downloads market data, that stores the market data, that connects to these different custodians that kind of bridges all these things together, that provides some kind of user interface that pulls it all together. It's a significant amount of work that up till now, basically all of these different companies are just reproducing. Again, they're all solving the same problem. And as a trader, you want to focus your time on strategy development and alpha and monetization and not on how do I connect to random exchange? So the goal of Architect is to build this sort of commodity software that people can then sort of deploy out of the box that gives them access to all of these different venues all at the same time.", "Dwarkesh Patel", "And so it sounds like this is a solution for people with significant levels of assets and investment in crypto. I'm assuming it's not for.", "Brett Harrison", "So I think that's like the place we want to start. But one phenomenon in crypto that I think is somewhat new and exciting is the fact that, okay, if you want to get into sophisticated equities trading, well, what do you have to do? You usually have to either establish a broker dealer or get hooked up to an existing broker. You need to get market data which can be very expensive. Like the full depth of book feed from Nasdaq costs like tens of thousands of dollars per month. If you want to compete on latency, you have to get like a colocated server in the Nasdaq colo, which is also going to cost you tens of thousands of dollars per month. There's a significant time and money overhead to doing all this, which is why it's so hard to compete in that space against all the incumbent players. Whereas in crypto many of the markets are just in the cloud, like in Amazon's cloud or Alibaba's cloud. And you can just very cheaply and easily spin up like a server in there for a couple of dollars a month and have the same access as a big Hft. All the market data is free, the order entry is free. The protocols are usually fairly simple, like you can use Json over a websocket as opposed to speaking like fix over some private line. As a result, there is this large and growing class of kind of like semi professional individual traders that have grown where there are people who are smart individuals who have like, maybe some wealth amassed and they want to be able to do kind of professional trading. Whether that's like manual quick trading or like simple algos using python or whatever. And they can do that and experiment easily because of the open access of crypto markets. And so there's a much wider customer base for something like this which includes these kind of like high powered individuals in addition to your small medium, large hedge funds and prop shops and different institutions.", "Dwarkesh Patel", "And is the goal to is crypto the ultimate market you're targeting or are there other asset classes that you also want to provide the service to?", "Brett Harrison", "We're building very general infrastructure and we think crypto is a viable asset class, but is one of many. And our goal is to provide institutional great connectivity and software to anyone who wants to participate in trading in US semi sophisticated way. So I think over time we'll want to grow our offering as much as possible.", "Dwarkesh Patel", "Given the fact that Nasdaq or whatever already have these barriers, is it possible for someone to remove those barriers with a solution like yours? I guess like an analogy that comes to mind is I guess nobody before Mark Cuban's, whatever pharmaceutical company just try to go outside the insurance system and directly sell drugs to people. Is it possible to do something like that for Nasdaq?", "Brett Harrison", "Yeah, you can't connect to Nasdaq without connecting to Nasdaq. You can't not go through a broker dealer. But I think that we could eventually try to get the appropriate licensing required to be an intermediary that is focused on being like a technology forward interface towards people being able to do more program trading. And so if the mission of our company is to provide better access, I think we can do so within the existing system.", "Dwarkesh Patel", "I guess this raises a broader question of if you're initially trying to solve for the problems that these exchanges should natively have solutions to at least or some of the problems are the ones that these exchanges should natively have solutions to. Why haven't these exchanges built this stuff already? You're a part of one such exchange and maybe function better than the other ones, but they're highly profitable, they have a bunch of money. Why haven't they invested in making their infrastructure really great?", "Brett Harrison", "So in many cases, their infrastructure is very good. It's more a question of what's the incentive of the exchange? And I think no matter what, no single exchange is going to have all the market share. So there's always going to be this like, market fragmentation problem. And the question is, whose responsibility is it to make software that helps solve that problem? If I'm some centralized exchange, my incentive is not to build software to make it easier for my customers to go to all the other exchanges. It's like, make my exchange better. So I'm going to put all of my R and D dollars into providing new products and offering new different kinds of services and investment advisory, or lowering the barrier to entry to connect into my own exchange, but not creating this sort of like, pan asset class, pan exchange, interconnectivity software.", "Dwarkesh Patel", "Got it. And given the fact that you're trying to connect these different exchanges, currently most of the volume in crypto is in centralized exchanges. What is your estimate of the relative trading volume of C five versus D five? Do you think it'll change over time?", "Brett Harrison", "So I do think it'll change over time. I think my view is I can't predict what way it's going to change. So people after FTX had asked me like, hey, why don't you try to start your own exchange, take all your knowledge from FTX US, and maybe even raise money to buy the IP out of bankruptcy and start a new exchange. And my feeling is I don't want to bet personally on the exact direction of crypto trading. I could see it continuing status quo where your coin bases and biases the world kind of maintained market share. I could see it moving significantly to defy, where people feel like this is the true spirit of crypto. It's in this sort of noncustodial, fully centralized trading environment. I could also see it going the complete opposite direction and having the existing highly regulated exchange players like Naisi and Azdaq and Sivo start to enter the game on spot trading. And where is the ultimate flow going to end up between these three possibilities? I have no idea. So I'm much more excited about providing the kind of connectivity layer to all of them and saying, regardless of where the liquidity ends up, we'll be able to facilitate it.", "Dwarkesh Patel", "Speaking of FTX, how has your experience with FTX informed the development of Architect?", "Brett Harrison", "Yeah, first of all, working at FTX has given me an appreciation for just how behind a lot of the infrastructure is on other exchanges. People really like trading on FTX. Institutions especially really like trading on FTX because the API made sense, really did follow kind of standard state machine of any kind of financial central limit order book that you would see on a place like Nasdaq or CME. Whereas there are a lot of exchanges that have crazy behavior. Like, you send a cancel for an order, you get acknowledged that your order has been canceled, and then you get a fill and you actually get traded on your thing that you supposedly thought you canceled and like things that you think shouldn't be possible or possible. So actually, my time at FTX is interesting with the relation to Architect, because FTX, it gave me an appreciation for how to design a good API for especially institutions that want to be able to trade all the time and the protocols and some of these other exchanges aren't quite as good. So I think it's informed how much the focus of Architects should be kind of wrapping up the complexity of these different exchanges and providing like a really good API for institutions and individuals on our side. And guess like thing one thing. Thing two is obviously what has happened with FTX. People are much less likely to trust a centralized institution with their personal information, especially things like the keys that allow you to trade on their exchange account, or the keys that give you access to their wallet. And so we're thinking a lot about how to design Architects such that the user can connect to all these places and hook up their wallets without needing to ever give us any of their private credentials. And so that's like another particular inspiration from everything that's happened in FTX.", "Dwarkesh Patel", "What is your opinion of crypto in general at this point? How has your sort of perception of its promise changed, if at all, given.", "Brett Harrison", "The same I feel the same way now as I did then, which is it's a one to $3 trillion asset class that is traded by every major institution, that is being invested in by every major institution. And so it's totally viable and it needs good mature infrastructure to support its growth.", "Dwarkesh Patel", "Got it. But is the motivation also informed by a view that, I don't know, crypto is going to be even bigger or in some way going to solve really big use cases? Or is it simply that, listen, this market exists, I don't know what it's going to be good for, but it needs a solution?", "Brett Harrison", "It is, I think, I certainly do believe that that is a likely future for crypto. But to me, the interest in it starts with knowing that this is a huge asset class that needs better infrastructure.", "Dwarkesh Patel", "For trading it in the aftermath of FTX and other things, I mean, all crypto companies have special scrutiny on them. And fairly or unfairly, if there's like FTX alumni, it'll be even more so. How are you convincing potential. Clients, investors, that crypto is safe, FTX alumni are safe.", "Brett Harrison", "Yeah. On the FTX alumni side, I just personally haven't had those issues really in recent months as we've been building out Architect. I hired like three, almost five now employees from formerly at FTX to come work with me.", "Dwarkesh Patel", "But by the way, is that like some sort of ARB basically, that the overall hiring market is overcorrected on them or something? 100%, yeah.", "Brett Harrison", "And not just an FTX right now. Like it is March 2023 as we're recording this, there is like a huge ARB in the hiring market. I mean, all the layoffs in tech and crypto, all of like the fear around various financial services means that like, we basically didn't need to work on recruiting. I had like the best people who worked for me at FTX US. I had ex colleagues of mine from former jobs that come work with me here. And we actually didn't have to do any formal recruiting efforts because of just how much supply there now is on the job market, especially in tech and finance. Luckily, I've had a long career history prior to FTX, and even at FTX we built really great stuff. We had a very good connections relationships with our customers and our investors. There would be times where on Twitter I would answer like a customer's support question at 02:00 in the morning and I maintained a lot of those relationships even through the collapse. And these are the kinds of people who are reaching out, like offering support, like offering to test out stuff, who want to be customers, who are also having problems with existing crypto tools, looking for something better. So all that stuff has remained intact. So I don't really have a concern there.", "Dwarkesh Patel", "What is institutional interest in crypto like at this point, given what's happened?", "Brett Harrison", "I think it is just as great as it was before. Every major investment bank in the US has announced some plan to do something with blockchain technology still. Even like post FTX, the top trading institutions in the world are all continuing to trade it. I think as of what we're speaking about right now, volumes are down because people are sort of generally fearful. But I expect that to turn around pretty quickly and the institutional interest still remains really high. People are definitely expecting and waiting for proper regulatory oversight, especially in the US. That's still happening. People are waiting for higher grade professional tools that make it safe for them to trade and invest in crypto. I think that's obviously in the works for various things, Architect and otherwise, but otherwise the interest is all completely there.", "Dwarkesh Patel", "A broader question somebody could have about crypto at this point is, or maybe not crypto generally, but crypto trading is why is it good to have more crypto trading, at least with stocks and bonds and other kinds of traditional assets? As we were talking about earlier, you can tell a story that listening to health capital allocation projects that need funding, will get funding and so on. Why is it good if the price of bitcoin is efficient? Why is building a product that enables that something that's socially valuable?", "Brett Harrison", "I think it boils down to, first of all, do you think it's important for people to be able to trade commodities? Like how important is it for the world that people can, you know, trade gold efficiently or they can trade oil efficiently? I think the answer is if people have a use for the underlying commodity, then it's important. And so maybe that is like what's the use of crypto? Well, I think each crypto token might have its own use. I don't think everyone has a good use. I think that there's a bunch that do. But if you believe in bitcoin as sort of like a store of value in a great medium of exchange, then it's important that there's a good fair price for bitcoin to enable that. If you think that the ether token is important for the gas fees required to run like a decentralized computer, and you think that the programs running on a decentralized computer are important, then it's important for there to be like a good price for ether that's fair. So I think it really depends on if you kind of believe in the underlying use case at all in the same way you would for kind of any commodity.", "Dwarkesh Patel", "Got it.", "Brett Harrison", "And sometimes there are tokens that have more security like properties where they are like trading Apple stock. Basically there was an initial offering of that token and then if people bought it, it actually directly funded the product behind the token. And then the efficient trading of that token is sort of a barometer for the health of that particular company and they can like, sell more tokens to raise more capital and secondary offerings. In which case it looks very much exactly like the stock market.", "Dwarkesh Patel", "That's a great leading to the next question, which is will there ever be a time when things that are equivalent to stocks or other kinds of equities will be traded on change in some sort of decentralized way?", "Brett Harrison", "I think it's likely. I think the primary reason is that existing settlement systems in traditional markets seem to be very slow and built on very outdated technology that's like highly non transparent and very error prone. So equities are a prime example of this. Like it still requires two business days to settle a stock. And a frequent occurrence when I was at trading firms was that, you know, you would get your settlement file, that one told you, like, what trades were settled, and two told you things like if any corporate actions had occurred overnight, like paying a dividend or share split, and they would frequently be wrong, like the dividend would be wrong or would be missing, or the share split amount was for the wrong amount. Or they missed the day that it happened or they messed up. Some trades didn't get reported properly. They're just frequently mistakes. And it feels like there should be some easy, transparent kind of open, decentralized settlement layer for all things that you can trade. And rather than try to retrofit the existing settlement technology to be faster and better, starting from scratch with something like Blockchain could make a lot of sense, which is why you hear about a lot of these investment banks working on settling fixed income products on chain. The fixed income products have even worse settlement cycles than equities.", "Dwarkesh Patel", "Should the marginal crypto trader stop trading? Maybe this might also be a good question to ask by the marginal trader for on Robin Hood or something, but.", "Brett Harrison", "I have a couple of thoughts about this. So the first is that I don't think crypto markets are as efficient as equity markets, so there's probably more opportunities for short and long term edge as a trader in crypto than there would be in equities. That being said, I think there's still an enormous amount of room in both traditional and crypto markets for even individuals to have and derive information that gives them profitable trading ideas. And I actually think it's the wrong conventional wisdom to think that if you are not Jane Street or Acidadel or Hudson River or Tower or Jump trading, then you have no chance of being able to profit in Marcus except for luck. I do think there are a lot of people who trade, and it's like pure speculation. It's not really like on me to tell them that they shouldn't speculate. They probably derive some personal enjoyment from speculation besides the opportunity for profit. But I do think the access to more sophisticated instruments and information has helped what have traditionally been players that have been unable to compete in the market actually be able to do so in a way that is systematically profitable.", "Dwarkesh Patel", "Okay, so that is, I think, a good point to end the conversation. We got to talk a chance to talk about a lot of things. Let's let the audience know where they can find out more of our Architect and also where they can find your Twitter and other sorts of links.", "Brett Harrison", "Yeah. So Architect's website is Architect.XYZ. We also have @architect_xyz on Twitter. And I'm Brett Harrison, 88 on Twitter.", "Dwarkesh Patel", "Okay, perfect. Awesome. Brett, thank you so much for being on the Lunar Society. This was a lot of fun.", "Brett Harrison", "Thank you so much." ]
[]
https://www.dwarkesh.com/p/brian-potter
Brian Potter - Future of Construction, Ugly Modernism, & Environmental Review
[ "Why Saudi Arabia’s Line is Insane, Unrealistic, and Never going to Exist", "Dwarkesh Patel", "Today, I have the pleasure of speaking with Brian Potter, who is an engineer and the author of the excellent Construction Physics blog, where he writes about how the construction industry works and why it has been slow to industrialize and innovate. It's one of my favorite blogs on the internet, I highly, highly recommend that people check it out. Brian, my first question is about The Line project in Saudi Arabia . What are your opinions?", "Brian Potter", "It's interesting how Saudi Arabia and countries in the Middle East, in general, are willing to do these big, crazy, ambitious building projects and pour huge amounts of money into constructing this infrastructure in a way that you don't see a huge amount in the modern world. China obviously does this too in huge amounts, some other minor places do as well, but in general, you don't see a whole lot of countries building these big, massive, incredibly ambitious projects. So on that level, it's interesting, and it's like, “Yes, I’m glad to see that you're doing this,” but the actual project is clearly insane and makes no sense.", "Look at the physical arrangement layout–– there's a reason cities grow in two dimensions. A one-dimensional city is the worst possible arrangement for transportation. It’s the maximum amount of distance between any two points. So just from that perspective, it's clearly crazy, and there's no real benefit to it other than perhaps some weird hypothetical transportation situation where you had really fast point-to-point transportation. It would probably be some weird bullet train setup; maybe that would make sense. But in general, there's no reason to build a city like that. Even if you wanted to build an entirely enclosed thing (which again doesn't make a huge amount of sense), you would save so much material and effort if you just made it a cube. I would be more interested in the cube than the line. [laughs] But yeah, those are my initial thoughts on it. I will be surprised if it ever gets built.", "Dwarkesh Patel", "Are you talking about the cube from the meme about how you can put all the humans in the world in a cube the size of Manhattan ?", "Brian Potter", "Something like that. If you're just going to build this big, giant megastructure, at least take advantage of what that gets you, which is minimum surface area to volume ratio, and stuff like that.", "Dwarkesh Patel", "Why is that important? Would it be important for temperature or perhaps other features?", "Brian Potter", "This is actually interesting because I'm actually not sure how sure it would work with a giant single city. In general, a lot of economies of scale come from geometric effects. When something gets bigger, your volume increases a lot faster than your surface area does. So for something enclosed, like a tank or a pipe, the cost goes down per thing of unit you're transporting because you can carry a larger amount or a smaller amount of material. It applies to some extent with buildings and construction because the exterior wall assembly is a really burdensome, complicated, and expensive assembly.", "A building with a really big floor plate, for instance, can get more area per unit, per amount of exterior wall. I'm not sure how that actually works with a single giant enclosed structure because, theoretically, on a small level, it would apply the same way. Your climate control is a function of your exterior surface, at some level, and you get more efficient climate control if you have a larger volume and less area that it can escape from. But for a giant city, I actually don't know if that works, and it may be worse because you're generating so much heat that it's now harder to pump out. For examples like the urban heat island effect, where these cities generate massive amounts of waste heat, I actually don't know if that would work if it didn't apply the same way. I'm trying to reach back to my physics classes in college, so I'm not sure about the actual mechanics of that. But in general, that's why you'd want to perhaps build something of this size and shape.", "Dwarkesh Patel", "What was the thought process behind designing this thing? Because Scott Alexander had a good blog post about The Line where he said, presumably, the line is designed to take up less space and to use less fuel because you can just use the same transportation across the line. But the only thing that Saudi Arabia has is space and fuel. So what is the thought process behind this construction project?", "Brian Potter", "I get the sense that a lot of committees have some amount of success in building big, impressive, physical construction projects that are an attraction just by virtue of their size and impressiveness. A huge amount of stuff in Dubai is something in this category. Then they have that giant clock tower in Jeddah that is similar to the biggest giant clock building and one of the biggest buildings in the world or something like that. I think, on some level, they're expecting that you would just see a return just from building something that's really impressive and the biggest thing on some particular axis or whatever. So to some extent, I think they're just optimizing for big and impressive and maybe not diving into it more than that.", "There's this theory that I think about every so often. It's called the garbage can theory of organizational decision-making, which is that, basically, the choices that organizations make are not the result of any particular recent process. They are the result of whenever a problem comes up, they reach into the garbage can of potential solutions. Then whatever they pull out of the garbage can, that's the decision that they end up going with, regardless of how much sense it makes. It was a theory that was invented by academics to describe decision-making in academia. I think about that a lot, especially with reference to big bureaucracies and governments.You can just imagine the draining process of how these decisions evolve. You can just imagine any random decision being made, especially when there's such a disconnect between the decision-makers and technical knowledge.", "Designer Clothes & eBay Arbitrage Adventures", "Dwarkesh Patel", "Yeah. Tell me about your eBay arbitrage with designer clothes.", "Brian Potter", "Oh man, you really did dive deep. Yeah, so this was a small business that I ran seven or eight years ago at this point. A hobby of mine was high-end men's fashion for a while, which is a very strange hobby for an engineer to have, but there you go. Basically, that hobby centers around finding cheap designer stuff, because buying new can just be overwhelmingly expensive. However, a lot of times, you can get clothes for a very cheap price if you're even a little bit motivated. Either it shows up on eBay, or it shows up in thrift stores if you know what to look for, especially because a lot of these clothes can last because they're well-made. They last a super, super, super long time–– even if somebody wore it for 10 years or something, it could be fine. So a lot of this hobby centers around finding ways to get really nice clothes cheaply.", "A lot of it was based around eBay, but it was really tedious to find really nice stuff on eBay. You had to manually search for a bunch of different brands and then filter out the obvious bad ones, then search for typos in brands that were pretty reliably put in titles and stuff like that. I was in the process of doing this, and I thought, “ Oh, this is really annoying. I should figure out a way to automate this process.” So I made a very simple web app where when you searched for shoes or something, it would automatically search the very nice brands of shoes and all the typos of the brand name. Then it would just filter out all the junk and let you search through the good stuff. I set up an affiliate system, basically. So anybody else that used it, I would get a kick of the sales. While I was interested in that hobby, I ran this website for a few years, and it was reasonably successful. It was one of the first things I did that got any real traction on the internet, but it was never successful in proportion to how much effort it took to maintain and update it. So as I moved away from the hobby, I eventually stopped putting time and effort into maintaining the website. I'm curious as to how you even dug that up.", "Dwarkesh Patel", "I have a friend who was with you at the Oxford Refugees Conference, Connor Tabarrok . I don't know if you remember him.", "Brian Potter", "Nice.", "Dwarkesh Patel", "Yeah. Finding other information about you on the internet was quite difficult actually. You somehow managed to maintain your anonymity. If you're willing to reveal, what was the P&L of this project?", "Brian Potter", "Oh, it made maybe a few hundred dollars a month for a few years, but I only ever ran it as a side hobby business, basically. So in terms of time per my effort or whatever, I'm sure it was very low. Pennies to an hour or something like that.", "Unique Woes of The Construction Industry", "Dwarkesh Patel", "A broad theme that I've gotten from your post is that the construction industry is plagued with these lossy feedback loops , a lack of strong economies of scale, regulation, and mistakes being very costly. Do you think that this is a general characteristic of many industries in our world today, or is there something unique about construction?", "Brian Potter", "Interesting question. One thing you think of is there are a lot of individual factors that are not unique at all. Construction is highly regulated, but it's not necessarily more regulated than medical devices or jet travel, or even probably cars, to some extent, which have a whole vat of performance criteria that they need to hit. With a couple of things like land use, for example, people say, “ Oh, the land requirements, could you build it on-site, ” and those kind of things make it difficult. But there are a lot of things that fall into this category that don't really share the same structure of how the construction industry works.", "I think it's the interaction of all those effects. One thing that I think is perhaps underappreciated is that the systems of a building are really highly coupled in a way that a lot of other things are. If you're manufacturing a computer, the hard drive is somewhat independent from the display and somewhat independent from the power supply. These things are coupled, but they can be built by independent people who don't really necessarily even talk to each other and then assembled into one structured thing. A building is not really like that at all. Every single part affects every single other part. In some ways, it's like biology. So it's very hard to change something that doesn't end up disrupting something else. Part of that is because the job a building does is to create a controlled interior environment, so basically, every single system has to run through and around the surfaces that are creating that controlled interior. Everything is touching each other. Again, that's not unique. Anything really highly engineered, like a plane or an iPhone, share that to some extent. In terms of the size of it and the relatively small amount you're paying in terms of unit size or unit mass, however, it's quite low.", "Dwarkesh Patel", "Is transportation cost the fundamental reason you can't have as much specialization and modularity?", "Brian Potter", "Yeah, I think it's really more about just the way a building is. An example of this would be how for the electrical system of your house, you can't have a separate box where if you needed to replace the electrical system, you could take the whole box out and put the new box in. The electrical system runs through the entire house. Same with plumbing. Same with the insulation. Same with the interior finishes and stuff like that. There's not a lot of modularity in a physical sense.", "Dwarkesh Patel", "Gotcha. Ben Kuhn had this interesting comment on your article where he pointed out that many of the reasons you give for why it's hard to innovate in construction, like sequential dependencies and the highly variable delivery timelines are also common in software where Ben Koon works. So why do you think that the same sort of stagnation has not hit other industries that have superficially similar characteristics, like software?", "Brian Potter", "How I think about that is that you kind of see a similar structure in anything that's project-based or anything where there's an element of figuring out what you're doing while you're doing it. Compared to a large-scale manufacturing option where you spend a lot of time figuring out what exactly it is that you're building. You spend a lot of time designing it to be built and do your first number of runs through it, then you tweak your process to make it more efficient. There's always an element of tweaking it to make it better, but to some extent, the process of figuring out what you're doing is largely separate from the actual doing of it yourself.", "For a project-based industry, it's not quite like that. You have to build your process on the fly. Of course, there are best practices that shape it, right? For somebody writing a new software project or anything project-based, like making a movie, they have a rough idea for how it's going to go together. But there's going to be a lot of unforeseen things that kind of come up like that. The biggest difference is that either those things can often scale in a way that you can't with a building. Once you're done with the software project, you can deploy it to 1,000 or 100,000, or 1 million people, right? Once you finish making a movie, 100 million people can watch it or whatever. It doesn't quite look the same with a building. You don't really have the ability to spend a lot of time upfront figuring out how this thing needs to go. You kind of need to figure out a way to get this thing together without spending a huge amount of time that would be justified by the sheer size of it. I was able to dig up a few references for software projects and how often they just have these big, long tails. Sometimes they just go massively, massively over budget. A lot of times, they just don't get completed at all, which is shocking, but because of how many people it can then be deployed to after it's done, the economics of it are slightly different.", "Dwarkesh Patel", "I see, yeah. There's a famous law in software that says that a project will take longer than you expect even after you recount for the fact that it will take longer than you expect.", "Brian Potter", "Yeah. Hofstadter's law or something like that is what I think it is.", "Dwarkesh Patel", "Yeah. I'm curious about what the lack of skill in construction implies for startups. Famously, in software, the fact that there's zero marginal cost to scaling to the next customer is a huge boon to a startup, right? The entire point of which is scaling exponentially. Does that fundamentally constrain the size and quantity of startups you can have in construction if the same scaling is not available?", "Brian Potter", "Yeah, that's a really good question. The obvious first part of the answer is that for software, obviously, if you have a construction software company, you can scale it just like any other software business. For physical things, it is a lot more difficult. This lack of zero marginal cost has tended to fight a lot of startups, not just construction ones. But yeah, it's definitely a thing. Construction is particularly brutal because the margins are so low. The empirical fact is that trying what would be a more efficient method of building doesn't actually allow you to do it cheaper and get better margins. The startup that I used to work at, Katerra , their whole business model was basically predicated on that. “ Oh, we'll just build all our buildings in these big factories and get huge economies of scale and reduce our costs and then recoup the billions of dollars that we're pumping into this industry or this business. ” The math just does not work out. You can't build. In general, you can't build cheap enough to kind of recoup those giant upfront costs. A lot of businesses have been burned that way.", "The most success you see in prefabrication type of stuff is on the higher end of things where you can get higher margins. A lot of these prefab companies and stuff like that tend to target the higher end of the market, and you see a few different premiums for that. Obviously, if you're targeting the higher end, you’re more likely to have higher margins. If you're building to a higher level of quality, that's easier to do in a factory environment. So the delta is a lot different, less enormous than it would be. Building a high level of quality is easier to do in a factory than it is in the field, so a lot of buildings or houses that are built to a really high level of energy performance, for instance, need a really, really high level of air sealing to minimize how much energy this house uses. You tend to see a lot more houses like that built  out of prefab construction and other factory-built methods because it's just physically more difficult to achieve that on-site.", "The Problems of Prefabrication", "Dwarkesh Patel", "Can you say more about why you can't use prefabrication in a factory to get economies of scale? Is it just that the transportation costs will eat away any gains you get? What is going on?", "Brian Potter", "There's a combination of effects. I haven't worked through all this, we’ll have to save this for the next time. I'll figure it out more by then. At a high level, it’s that basically the savings that you get from like using less labor or whatever are not quite enough to offset your increased transportation costs. One thing about construction, especially single-family home construction, is that a huge percentage of your costs are just the materials that you're using, right? A single family home is roughly 50% labor, and 50% materials for the construction costs. Then you have development costs, land costs, and things like that. So a big chunk of that, you just can't move to the factory at all, right?  You can't really build a foundation in a factory. You could prefab the foundation, but it doesn't gain you anything. Your excavation still has to be done on site, obviously. So a big chunk can't move to the factory at all.", "For ones that can, you still basically have to pay the same amount for materials. Theoretically, if you're building truly huge volume, you could get material volume discounts, but even then, it's probably not looking at things like asset savings. So you can cut out a big chunk of your labor costs, and you do see that in factory-built construction, right? These prefab companies are like mobile home companies. They have a small fraction of labor as their costs, which is typical of a factory in general, but then they take out all that labor cost while they still have their high material costs, and then they have overhead costs of whatever the factory has cost them. Then you have your additional overhead cost of just transporting it to site, which is pretty limited. The math does not really work out in favor of prefab, in terms of being able to make the cost of building dramatically cheaper. You can obviously build a building in a prefab using prefab-free methods and build a successful construction business, right? Many people do. But in terms of dramatically lowering your costs, you don't really see that.", "Dwarkesh Patel", "Yeah, yeah. Austin Vernon has an interesting blog post about why there's not more prefabricated homes . The two things he points out were transportation costs, and the other one was that people prefer to have homes that have unique designs or unique features. When I was reading it, it actually occurred to me that maybe they're actually both the result of the same phenomenon. I don't know if I'm pronouncing it correctly, but have you heard of the Alchian-Allen theorem in economics?", "Brian Potter", "Maybe, but I don't think so.", "Dwarkesh Patel", "Basically, it's the idea that if you increase the cost of some category of goods in a fixed way––let's say you tax oranges and added a $1 tax to all oranges, or transportation for oranges gets $1 more expensive for all oranges––people will shift consumption towards the higher grade variety because now, the ratio of the cost between the higher, the more expensive orange and the less expensive orange has decreased because of the increase in fixed costs. It seems like you could use that argument to also explain why people have strong preferences for uniqueness and all kinds of design in manufactured houses. Since transportation costs are so high, that's basically a fixed cost, and that fixed cost has the effect of making people shift consumption towards higher-grade options. I definitely think that's true.", "Brian Potter", "I would maybe phrase this as, “ The construction industry makes it relatively comparatively cheap to deliver a highly customized option compared to a really repetitive option.” So yeah, the ratio between a highly customized one and just a commodity one is relatively small. So you see a kind of industry built around delivering somewhat more customized options. I do think that this is a pretty broad intuition that people just desire too much customization from their homes. That really prevents you from having a mass-produced offering. I do think that is true to some extent. One example is the Levittown houses , which were originally built in huge numbers–– exactly the same model over and over again. Eventually, they had to change their business model to be able to deliver more customized options because the market shipped it. I do think that the effect of that is basically pretty overstated.", "Empirically, you see that in practice, home builders and developers will deliver fairly repetitive housing. They don't seem to have a really hard time doing that. As an example, I'm living in a new housing development that is just like three or four different houses copy-pasted over and over again in a group of 50. The developer is building a whole bunch of other developments that are very similar in this area. My in-laws live in a very similar development in a whole different state. If you just look like multi-family or apartment housing, it's identical apartments, you know, copy-pasted over and over again in the same building or a bunch of different buildings in the same development. You're not seeing huge amounts of uniqueness in these things. People are clearly willing to just live in these basically copy-pasted apartments.", "It's also quite possible to get a pretty high amount of product variety using a relatively small number of factors that you vary, right? I mean, the car industry is like this, where there are enough customization options. I was reading this book a while ago that was basically pushing back against the idea that the car industry pre-fifties and sixties we just offering a very uniform product. They basically did the math, and the number of customization options on their car was more than the atoms in the universe. Basically just, there are so many different options. All the permutations, you know, leather seats and this type of stereo and this type of engine, if you add it all up, there's just a huge, massive number of different combinations. Yeah, you can obviously customize the house a huge amount, just by the appliances that you have and the finishes that are in there and the paint colors that you choose and the fixtures and stuff like that. It would not really theoretically change the underlying way the building comes together. So regarding the idea that the fundamental demand for variety is a major obstruction, I don’t think there's a whole lot of evidence for that in the construction industry.", "If Construction Regulation Vanished…", "Dwarkesh Patel", "I asked Twitter about what I should ask you, and usually, I don't get interesting responses but the quality of the people and the audience that knows who you are was so high that actually, all the questions I got were fascinating. So I'm going to ask you some questions from Twitter.", "Brian Potter", "Okay.", "Dwarkesh Patel 0:26:45", "Connor Tabarrok asks, “What is the most unique thing that would or should get built in the absence of construction regulation?”", "Brian Potter", "Unique is an interesting qualifier. There are a lot of things that just like should get built, right? Massive amounts of additional housing and creating more lands in these really dense urban environments where we need it, in places like San Francisco–– just fill in a big chunk of that bay. It's basically just mud flat and we should put more housing on it. “Unique thing ” is more tricky. One idea that I really like (I read this in the book, The Book Where's My Flying Car ),  is that it's basically crazy that our cities are designed with roads that all intersect with each other. That's an insane way to structure a material flow problem. Any sane city would be built with multiple layers of like transportation where each one went in a different direction so your flows would just be massively, massively improved. That just seems like a very obvious one.", "If you're building your cities from scratch and had your druthers , you would clearly want to build them and know how big they were gonna get, right? So you could plan very long-term in a way that so these transportation systems didn’t intersect with each other, which, again, almost no cities did. You’d have the space to scale them or run as much throughput through them as you need without bringing the whole system to a halt. There's a lot of evidence saying that cities tend to scale based on how much you can move from point A to point B through them. I do wonder whether if you changed the way they went together, you could unlock massively different cities. Even if you didn't unlock massive ones, you could perhaps change the agglomeration effects that you see in cities if people could move from point A to point B much quicker than they currently can.", "Dwarkesh Patel", "Yeah, I did an episode about the book, where's my flying car with Rohit Krishnan . I don't know if we discussed this, but an interesting part of the book is where he talks about transistor design. If you design transistors this way, can you imagine how slow they would be? [laughs] Okay, so Simon Grimm asks, “ What countries are the best at building things?”", "Brian Potter", "This is a good question. Yeah, again, I'm going to sort of cheat a little bit and do it in terms of space and time, because I think most countries that are doing a good job at building massive amounts of stuff are not ones that are basically doing it currently.The current answer is like China, where they just keep building–– more concrete was used in the last 20 years or so than the entire world used in the time before that, right? They’ve accomplished massive amounts of urbanization, and built a lot of really interesting buildings and construction. In terms of like raw output, I would also put Japan in the late 20th century on there. At the peak of the concern and wonder of “ Is Japan gonna take over the world? ”, they were really interested in building stuff quite quickly. They spent a lot of time and effort trying to use their robotics expertise to try to figure out how to build buildings a lot more quickly. They had these like really interesting factories that were designed to basically extrude an entire skyscraper just going up vertically.", "All these big giant companies and many different factories were trying to develop and trying to do this with robotics. It was a really interesting system that did not end up ever making economic sense, but it is very cool. I think big industrial policy organs of the government basically encouraged a lot of these industrial companies to basically develop prefabricated housing systems. So you see a lot of really interesting systems developed from these sort of industrial companies in a way that you don't see in a lot of other places. From 1850 to maybe 1970 (like a hundred years or something), the US was building huge massive amounts of stuff in a way that lifted up huge parts of the economy, right? I don't know how many thousands of miles of railroad track the US built between like 1850 and 1900, but it was many, many, many thousands of miles of it.", "Ofcourse, needing to lay all this track and build all these locomotives really sort of forced the development of the machine tool industry, which then led to the development of like better manufacturing methods and interchangeable parts, which of course then led to the development of the automotive industry . Then ofcourse, that explosion just led to even more big giant construction projects. So you really see that this ability to build just big massive amounts of stuff in this virtuous cycle with the US really advanced a lot of technology to raise the standard of development for a super long period of time. Yeah, so those are my three answers.", "China’s Real Estate Bubble, Unbound Technocrats, and Japan", "Dwarkesh Patel", "Yeah, those three bring up three additional questions, one for each of them! That's really interesting. Have you read The Power Broker , the book about Robert Moses ?", "Brian Potter", "I think I got a 10th of the way through it.", "Dwarkesh Patel", "That's basically a whole book in itself, a 10th of the way. [laughs] Yeah I'm a half of the way through, and so far it's basically about the story of how this one guy built a startup within the New York state government that was just so much more effective at building things, didn't have the same corruption and clientelism incompetence. Maybe it turns into tragedy in the second half, but so far it's it seems like we need this guy. Where do we get a second Robert Moses? Do you think that if you had more people like that in government or in construction industries, public works would be more effectively built or is the stagnation there just a result of like other bigger factors?", "Brian Potter", "That's an interesting question. Yeah, I remember reading this article a while ago that was complaining about how horrible Penn Station is in New York. They're basically saying , “Yeah, it would be nice to return to the era of like the sort of unbound technocrat ” when these technical experts in high positions of power in government could essentially do whatever they wanted to some extent. If they thought something should be built somewhere, they basically had the power to do it. It's a facet of this problem of how it's really, really hard to get stuff built in the US currently. I'm sure that a part of it is that you don't see these really talented technocrats occupy high positions of government where they can get stuff done. But it's not super obvious to me whether that's the limiting factor. I kind of get the sense that they would end up being bottlenecked by some other part of the process. The whole sort of interlocking set of institutions has just become so risk averse that they would end up just being blocked in a way that they wouldn't when they were operating in the 1950s or 1960s.", "Dwarkesh Patel", "Yeah, yeah, that's interesting. All right, so speaking of Japan, I just recently learned about the construction there and how they just keep tearing stuff down every 30 to 40 years and rebuilding it. So you have an interesting series of posts on how you would go about building a house or a building that lasts for a thousand years . But I'm curious, how would you build a house or a building that only lasts for 30 or 40 years? If you're building in Japan and you know they're gonna tear it down soon, what changes about the construction process?", "Brian Potter", "Yeah, that's interesting. I mean, I'm not an expert on Japanese construction, but I think like a lot of their interior walls are basically just paper and stuff like that. I actually think it's kind of surprising that last time I looked, for a lot of their homes, they use a surprising post and beam construction method, which is actually somewhat labor-intensive to do. The US in the early 1800s used a pretty similar method. Then once we started mass producing conventional lumber, we stopped doing that because it was much cheaper to build out of two-by-fours than it was to build big heavy posts. I think the boring answer to that question is that we’d build like how we build mobile homes–– essentially just using pretty thin walls, pretty low-end materials that are put together in a minimal way. This ends up not being that different from the actual construction method that single-family homes use. It just even further economizes and tightens the use of materials–– where a single-family home might use a half inch plywood, they might try to use three-sixteenths or even an eighth inch plywood or something like that. So we’d probably build a pretty similar way to the way most single-family homes and multi-family homes are built currently, but just with even tighter use of materials. Yeah, which perhaps is something that's not super nice about the way that you guys build your homes. But... [laughs]", "Dwarkesh Patel", "Okay, so China is the third one here. There's been a lot of talk about a potential real estate bubble in China because they're building housing in places where people don't really need it. Of course, maybe the demographics aren't there to support the demand. What do you think of all this talk? I don't know if you're familiar with it, but is there a real estate bubble that's created by all this competence in building?", "Brian Potter", "Oh, gosh, yeah, I have no idea. Like you, I've definitely heard talk of it and I've seen the little YouTube clips of them knocking down all these towers that it turns out they didn't need or the developer couldn't, finish or whatever. I don't know a huge amount about that. In general, I wish I knew a lot more about how things are built in China, but the information is in general, so opaque. I generally kind of assume that any particular piece of data that comes out of China has giant error bars on it as to whether it's true or not or what the context surrounding it is. So in general, I do not have a hard opinion about that.", "Dwarkesh Patel", "This is the second part of Simon's question, does greater competence and being able to build stuff translate into other good outcomes for these countries like higher GDP or lower rents or other kinds of foreign outcomes?", "B rian Potte r", "That's a good question. Japan is an interesting place where basically people point to it as an example of, “Here's a country that builds huge amounts of housing and they don't have housing cost increases.” In general, we should expect that dynamic to be true. Right? There's no reason to not think that housing costs are essentially a supply-demand problem where if you built as much as people wanted, the cost would drop. I have no reason to not think that's true. There is a little bit of evidence that sort of suggests that it’s impossible to build housing enough to overcome this sort of mechanical obstacle where the cost of it tends to match and rise to whatever people's income level are. The peak and the sort of flattening of housing costs in Japan also parallel when people basically stopped getting raises and income stopped rising in Japan. So I don't have a good sense of, if it ends up being just more driven by some sort of other factors. Generally though I expect the very basic answer of “If you build a lot more houses, the housing will become cheaper.”", "Dwarkesh Patel", "Right. Speaking of how the land keeps gaining value as people's income go up, what is your opinion on Georgism? Does that kind of try and make you think that housing is a special asset that needs to be more heavily taxed because you're not inherently doing something productive just by owning land the way you would be if you like built a company or something similar?", "Brian Potter", "Yeah, I don't have any special deep knowledge of Georgism. It's on my list of topics to read more deeply about. I do think in general, taxing encourages to produce less of something for something that you can't produce less of. It's a good avenue for something to tax more heavily. And yeah, obviously if you had a really high land value tax in these places that have a lot of single family homes in dense urban areas, like Seattle or San Francisco, that would probably encourage people to use the land a lot more efficiently. So yeah, it makes sense to me, but I don't have a ton of special knowledge about it.", "Dwarkesh Patel", "All right, Ben Kuhn asked on Twitter, “ What construction-related advice would you give to somebody building a new charter city?”", "Brian Potter", "That is interesting. I mean, just off the top of my head, I would be interested in whether you could really figure out a way to build using a method that had really high upfront costs. I think it could otherwise be justified, but if you're gonna build 10,000 buildings or whatever all at once, you could really take advantage of that. One kind of thing that you see in the sort of post-World War II era is that we're building huge massive amounts of housing, and a lot of times we’re building them all in one place, right? A lot of town builders were building thousands and thousands of houses in one big development all at once. In California, it’s the same thing, you just built like 6 or 10 or 15,000 houses in one big massive development. You end up seeing something like that where they basically build this like little factory on their construction site, and then use that to like fabricate all these things.", "Then you have something that’s almost like a reverse assembly line where a crew will go to one house and install the walls or whatever, and then go to the next house and do the same thing. Following right behind them would be the guys doing the electrical system, plumbing, and stuff like that. So this reverse assembly line system would allow you to sort of get these things up really, really fast, in 30 days or something like that. Then you could have a whole house or just thousands and thousands of houses at once. You would want to be able to do something similar where you could just not do the instruction the way that the normal construction is done, but that's hard, right?", "Centrally planned cities or top-down planned cities never seem to do particularly well, right? For example, the city of Brasilia , the one that was supposed to be a planned city— the age it goes back to the unfettered technocrat who can sort of build whatever he wants. A lot of times, what you want is something that will respond at a low level and organically sort out the factories as they develop. You don't want something that's totally planned from the top-down, that's disconnected from all the sorts of cases on the ground. A lot of the opposition to Robert Moses ended up being that in a certain form, right? He's bulldozing through these cities that are these buildings and neighborhoods that he's not paying attention to at all. So I think, just to go back to the question, trying to plan your city from the top down doesn't have a super, super great track record. In general, you want your city to develop a little bit more organically. I guess I would think to have a good sort of land-use rules that are really thought through well and encourage the things that you want to encourage and not discourage the things that you don't want to discourage. Don't have equity in zoning and allow a lot of mixed-use construction and stuff like that. I guess that's a somewhat boring answer, but I’d probably do something along those lines.", "Dwarkesh Patel", "Interesting, interesting. I guess that implies that there would be high upfront costs to building a city because if you need to build 10,000 homes at once to achieve these economies of scale, then you would need to raise like tens of billions of dollars before you could build a charter city.", "Brian Potter", "Yeah, if you were trying to lower your costs of construction, but again, if you have the setup to do that, you wouldn't necessarily need to raise it. These other big developments were built by developers that essentially saw an opportunity. They didn't require public funding to do it. They did in the form of loan guarantees for veterans and things like that, but they didn't have the government go and buy the land.", "Automation and Revolutionary Future Technologies", "Dwarkesh Patel", "Right, okay, so the next question is from Austin Vernon . To be honest, I don't understand the question, you two are too smart for me, but hopefully, you'll be able to explain the question and then also answer it. What are your power rankings for technologies that can tighten construction tolerances ? Then he gives examples like ARVR , CNC cutting , and synthetic wood products.", "Brian Potter", "Yeah, so this is a very interesting question. Basically, because buildings are built manually on site by hand, there's just a lot of variation in what ends up being built, right? There's only so accurately that a person can put something in place if they don't have any sort of age or stuff like that. Just the placement itself of materials tends to have a lot of variation in it and the materials themselves also have a lot of variation in them. The obvious example is wood, right? Where one two by four is not gonna be exactly the same as another two by four. It may be warped, it may have knots in it, it may be split or something like that. Then also because these materials are sitting just outside in the elements, they sort of end up getting a lot of distortion, they either absorb moisture and sort of expand and contract, or they grow and shrink because of the heat. So there's just a lot of variation that goes into putting a building up.", "To some extent, it probably constrains what you are able to build and how effectively you're able to build it. I kind of gave an example before of really energy efficient buildings and they're really hard to build on-site using conventional methods because the air ceiling is quite difficult to do. You have to build it in a much more precise way than what is typically done and is really easily achieved on-site. So I guess in terms of examples of things that would make that easier, he gives some good ones like engineered lumber, which is where you take lumber and then grind it up into strands or chips or whatever and basically glue them back together–– which does a couple of things. It spreads all the knots and the defects out so they are concentrated and everything tends to be a lot more uniform when it's made like that. So that's a very obvious one that's already in widespread use. I don't really see that making a substantial change.", "I guess the one exception to that would be this engineered lumber product called mass timber elements , CLT , which is like a super plywood. Plywood is made from tiny little sheet thin strips of wood, right? But CLT is made from two-by-four-dimensional lumber glued across laminated layers. So instead of a 4 by 9 sheet of plywood, you have a 12 by 40 sheet of dimensional lumber glued together. You end up with a lot of the properties of engineered material where it's really dimensionally stable. It can be produced very, very accurately. It's actually funny that a lot of times, the CLT is the most accurate part of the building. So if you're building a building with it, you tend to run into problems where the rest of the building is not accurate enough for it. So even with something like steel, if you're building a steel building, the steel is not gonna be like dead-on accurate, it's gonna be an inch or so off in terms of where any given component is. The CLT, which is built much more accurately, actually tends to show all these errors that have to be corrected. So in some sense, accuracy or precision is a little bit of like a tricky thing because you can't just make one part of the process more precise. In some ways that actually makes things more difficult because if one part is really precise, then a lot of the time, it means that you can't make adjustments to it easily. So if you have this one really precise thing, it usually means you have to go and compensate for something else that is not built quite as precisely. It actually makes advancing precision quite a bit more complicated.", "AR VR, is something I'm very bullish on. A big caveat of that is assuming that they can just get the basic technology working. The basic intuition there is that right now the way that pieces are, when a building is put together on site, somebody is looking at a set of paper plans, or an iPad or something that tells them where everything needs to go. So they figure that out and then they take a tape measure or use some other method and go figure out where that’s marked on the ground. There's all this set-up time that is really quite time consuming and error prone. Again, there's only so much accuracy that a guy dragging a tape 40 feet across site being held by another guy can attain, there's a limit to how accurate that process can be. It's very easy for me to imagine that AR would just project exactly where the components of your building need to go. That would A, allow you a much higher level of accuracy that you can easily get using manual methods. And then B, just reduce all that time it takes to manually measure things.", "I can imagine it being much, much, much faster as well, so I'm quite bullish on that. At a high level and a slightly lower level, it's not obvious to me if they will be able to get to the level where it just projects it with perfect accuracy right in front of you. It may be the case that a person moving their head around and constantly changing their point of view wont ever be able to project these things with millimeter precision––it's always gonna be a little bit jumpy or you're gonna end up with some sort of hard limit in terms of like how precisely you can project it. My sense is that locator technology will get good enough, but I don't have any principle reason believing that.", "The other thing is that being able to take advantage of that technology would require you to have a really, really accurate model of your building that locates where every single element is precisely and exactly what its tolerances are. Right now, buildings aren't designed like that, they are built using a comparatively sparse set of drawings that leaves a lot to sort of be interpreted by the people on site doing the work and efforts that have tried to make these models really, really, really precise, have not really paid off a lot of times. You can get returns on it if you're building something really, really complex where there's a much higher premium to being able to make sure you don't make any error, but for like a simple building like a house, the returns just aren't there. So you see really comparatively sparse drawings. Whether it's gonna be able to work worth this upfront cost of developing this really complex, very precise model of where exactly every component is still has to be determined. There's some interesting companies that are trying to move in this direction where they're making it a lot easier to draw these things really, really precisely and whave every single component exactly where it is. So I'm optimistic about that as well, but it's a little bit TBD.", "Dwarkesh Patel", "This raises a question that I actually wanted to ask you, which is in your post about why there aren't automatic brick layers . It was a really interesting post. Somebody left in an interesting comment saying that bricks were designed to be handled and assembled by humans. Then you left a response to that, which I thought was really interesting. You said, “ The example I always reach for is with steam power and electricity, where replacing a steam engine with an electric motor in your factory didn't do much for productivity. Improving factory output required totally redesigning the factory around the capabilities of electric motors. ” So I was kind of curious about if you apply that analogy to construction, then what does that look like for construction? What is a house building process or building building process that takes automation and these other kinds of tools into account? How would that change how buildings are built and how they end up looking in the end?", "Brian Potter", "I think that's a good question. One big component of the lack of construction productivity is everything was designed and has evolved over 100 years or 200 years to be easy for a guy or person on the site to manipulate by hand. Bricks are roughly the size and shape and weight that a person can move it easily around. Dimensional lumber is the same. It's the size and shape and weight that a person can move around easily. And all construction materials are like this and the way that they attach together and stuff is the same. It's all designed so that a person on site can sort of put it all together with as comparatively little effort as possible. But what is easy for a person to do is usually not what is easy for a machine or a robot to do, right? You typically need to redesign and think about what your end goal is and then redesign the mechanism for accomplishing that in terms of what is easy to get to make a machine to do.", "The obvious example here is how it's way easier to build a wagon or a cart that pulls than it is to build a mechanical set of legs that mimics a human's movement. That's just way, way, way easier. So yeah, I do think that a big part of advancing construction productivity is to basically figure out how to redesign these building elements in a way that is really easy for a machine to produce and a machine to put together. One reason that we haven't seen it is that a lot of the mechanization you see is people trying to mechanize exactly what a person does.", "You’d need a really expensive industrial robot that can move exactly the way that a human moves more or less. What that might look like is basically something that can be really easily extruded by a machine in a continuous process that wouldn't require a lot of finicky mechanical movements. A good example of this technology is technology that's called insulated metal panels , which is perhaps one of the cheapest and easiest ways to build an exterior wall. What it is, is it’s just like a thin layer of steel. Then on top of that is a layer of insulation. Then on top of that is another layer of steel. Then at the end, the steel is extruded in such a way that it can like these inner panels can like lock together as they go. It's basically the simplest possible method of constructing a wall that you can imagine.", "But that has the structural system and the water barrier, air barrier, and insulation all in this one really simple assembly. Then when you put it together on site, it just locks together. Of course there are a lot of limitations to this. Like if you want to do anything on top of like add windows, all of a sudden it starts to look quite a bit less good. I think things that are really easy for a machine to do can be put together without a lot of persistent measurement or stuff like that in-field. They can just kind of snap together and actually want to fit together. I think that's kind of what it looks like.", "3D Printer Pessimism & The Rising Cost of Labour", "Dwarkesh Patel", "What would the houses or the buildings that are built using this physically look like? Maybe in 50 to 100 years, we'll look back on the houses we have today and say, “Oh, look at that artisanal creation made by humans.” What is a machine that is like designed for robots first or for automation first? In more interesting ways, would it differ from today's buildings?", "Brian Potter", "Yeah, that's a good question. I'm not especially bullish on 3D building printing in general, but this is another example of a building using an extrusion process that is relatively easy to mechanize. What's interesting there is that when you start doing that, a lot of these other bottlenecks become unlocked a little bit. It's very difficult to build a building using a lot of curved exterior surfaces using conventional methods. You can do it, it's quite expensive to do, but there's a relatively straightforward way for a 3D printed building to do that. They can build that as easily as if it was a straight wall. So you see a lot of interesting curved architecture on these creations and in a few other areas. There's a company that can build this cool undulating facade that people kind of like. So yeah, it unlocks a lot of options. Machines are more constrained in some things that they can do, but they don't have a lot of the other constraints that you would otherwise see. So I think you'll kind of see a larger variety of aesthetic things like that. That said, at the end of the day, I think a lot of the ways a house goes together is pretty well shaped to just the way that a person living inside it would like to use. I think Stewart Brand makes this point in––", "Dwarkesh Patel", "Oh, How Buildings Learn .", "Brian Potter", "There we go. He basically makes the point that a lot of people try to use dome shaped houses or octagon shaped houses, which are good because again, going back to surface area volume, they include lots of space using the least amount of material possible. So in some theoretical sense they’re quite efficient, but it's actually quite inconvenient to live inside of building like really curved wall, right? Furniture doesn't fit up against it nicely and pictures are hard to hang on a really curved wall. So I think you would see less variation than maybe you might expect.", "Dwarkesh Patel", "Interesting. So why are you pessimistic about 3D printers? For construction, I mean.", "Brian Potter", "Yeah, for construction. Oh God, so many reasons. Not pessimistic, but just there's a lot of other interesting questions. I mean, so the big obvious one is like right now a 3D printer can basically print the walls of a building. That is a pretty small amount of the value in a building, right? It's maybe 7% or 8%, something like that. Probably not more than 10% of the value in a building. Because you're not printing the foundation, you're not printing like the overhead vertical, or the overhead spanning structure of the building. You're basically just printing the walls. You're not even really printing the second story walls that you have in multiple stories. I don't think they've quite figured that out yet. So it's a pretty small amount of value added to the building. It's frankly a task that is relatively easy to do by manual labor. It's really pretty easy for a crew to basically put up the structure of a house.", "This is kind of a recurring theme in mechanization or it goes back to what I was talking about to our previous lead. Where it takes a lot of mechanization and a lot of expensive equipment to replace what basically like two or three guys can do in a day or something like that. The economics of it are pretty brutal. So right now it produces a pretty small value. I think that the value of 3D printing is basically entirely predicated on how successful they are at figuring out how to like deliver more components of the building using their system.There are companies that are trying to do this. There's one that got funded not too long ago called Black Diamond , where they have this crazy system which is like a series of 3D printers that would act simultaneously, like each one building a separate house. Then as you progress, you switch out the print head for like a robot arm. Cause a 3D printer is basically like a robot arm with just a particular manipulator at the end, right?", "So they switch out their print head for like a robot arm and the robot arm goes and installs different other systems like the windows or the mechanical systems. So you can figure out how to do that reliably where your print head or your printing system is installing a large fraction of the value of the building. It's not clear to me that it's gonna be economic, but it obviously needs to reach that point. It's not obvious to me that they have gotten there yet. It's really quite hard to get a robot to do a lot of these tasks. For a lot of these players, it seems like they're actually moving away from that. I think in ICON is the biggest construction 3D printer company in the US, as far as I know. And as far as I know, they've moved away from trying to install lots of systems in their walls as they get printed. They've kind of moved on to having that installed separately, which I think has made their job a little bit easier, but again, not quite, it's hard to see how the 3D printer can fulfill its promises if it can't do anything just beyond the vertical elements, whichare really, for most construction, quite cheap and simple to build.", "Dwarkesh Patel", "Now, if you take a step back and talk how expensive construction is overall, how much of it can just be explained by the Baumol cost effect ? As in labor costs are increasing because labor is more productive than other industries and therefore construction is getting more expensive.", "Brian Potter", "I think that's a huge, huge chunk of it. The labor fraction hasn’t changed appreciably enough. I haven't actually verified that and I need to, but I remember somebody that said that they used to be much different. You sent me some literature related to it. So let’s add a slight asterisk on that. But in general the labor cost has remained a huge fraction of the overall cost of the building. Reliably seeing their costs continue to rise, I think there's no reason to believe that that's not a big part of it.", "Dwarkesh Patel", "Now, I know this sounds like a question with an obvious answer, but in your post comparing the prices of construction in different countries , you mentioned how the cost of labor and the cost of materials is not as big a determiner of how expensive it is to construct in different places. But what does matter? Is it the amount of government involvement and administrative overhead? I'm curious why those things (government involvement and administrative overhead) have such a high consequence on the cost of construction.", "Brian Potte r", "Yeah, that's a good question. I don't actually know if I have a unified theory for that. I mean, basically with any heavily regulated thing, any particular task that you're doing takes longer and is less reliable than it would be if it was not done right. You can't just do it as fast as on your own schedule, right? You end up being bottlenecked by government processes and it reduces and narrows your options. So yeah, in general, I would expect that to kind of be the case, but I actually don't know if I have a unified theory of how that works beyond just, it's a bunch of additional steps at any given part of the process, each of which adds cost.", "Dwarkesh Pate l", "Yeah. Now, one interesting trend we have in the United States with construction is that a lot of it is done by Latino workers and especially by undocumented Latino workers. What is the effect of this on the price and the quality of construction? If you have a bunch of hardworking undocumented workers who are working for below-market rates in the US, will this dampen the cost of construction over time? What do you think is going to happen?", "Brian Potter", "I suspect that's probably one of the reasons why the US has comparatively low construction costs compared to other parts of the world. Well, I'll caveat that. Residential construction, which is single-family homes and multi-family apartment buildings all built in the US and have light framed wood and are put together, like you said, by a lot of like immigrant workers. Because of that, it would not surprise me if those wages are a lot lower than the equivalent wage for like a carpenter in Germany or something like that. I suspect that's a factor in why our cost of residential construction are quite low.", "AI’s Impact on Construction Productivity", "Dwarkesh Patel", "Overall, it seems from your blog post that you're kind of pessimistic, or you don't think that different improvements in industrialization have transferred over to construction yet. But what do you think is a prospect of future advances in AI having a big impact on construction? With computer vision and with advances in robotics, do you think we'll finally see some carry-over into construction productivity or is it gonna be more of the same?", "Brian Potter", "Yeah I think there's definitely gonna be progress on that axis. If you can wire up your computer vision systems, robotic systems and your AI in such a way that your capabilities for a robot system are more expanded, then I kind of foresee robotics being able to take a larger and larger fraction of the tasks done on a typical construction site. I kind of see it being kind of done in narrow avenues that gradually expand outward. You're starting to see a lot of companies that have some robotic system that can do one particular task, but do that task quite well. There's a couple of different robot companies that have these little robots for like drawing wall layouts on like concrete slabs or whatever. So you know exactly where to build your walls, which you would think would not be like a difficult problem in construction, but it turns out that a lot of times people put the walls in the wrong spot and then you have to go back and move them later or just basically deal with it. So yeah, it's basically a little Roomba type device that just draws the wall layout to the concrete slab and all the other systems as well–– for example, where the lines need to run through the slab and things like that. I suspect that you're just gonna start to see robotics and systems like that take a larger and larger share of the tasks on the construction site over time.", "Dwarkesh Patel", "Yeah, it's still very far away. It's still very far away. What do you think of Flow? That's Adam Neumann's newest startup and backed with $350 million from Andreeseen Horowitz .", "Brian Potte r", "I do not have any strong opinions about that other than, “Wow, they've really given him another 350M ”. I do not have any particularly strong opinions about this. They made a lot they make a lot of investments that don't make sense to me, but I'm out of venture capital. So there's no reason that my judgment would be any good in this situation–– so I'm just presuming they know something I do not.", "Dwarkesh Patel", "I'm going t o be interviewing Andreeseen later this month and I'm hoping I can ask him about that.", "Brian Potter", "You know, it may be as simple as he “sees all” about really high variance bets. There's nobody higher variance in the engine than Adam Neumann so, maybe just on those terms, it makes sense.", "Dwarkesh Patel", "You had an interesting post about like how a bunch of a lot of the knowledge in the construction industry is like informal and contained within best practices or between between relationships and expectations that are not articulated all the time. It seems to me that this is also true of software in many cases but software seems much more legible and open source than these other physical disciplines like construction despite having a lot of the knowledge contained within people's minds and within the culture rather than excessly codified somewhere. So why do you think that construction is seems more closed or stem software? It's interesting.", "Brian Potter", "To go back slightly to our products versus projects industry, a slightly different way of thinking about that is craft-based industries versus industrial processes where this isn't a dichotomy, but a spectrum. In general, there's an expertise and judgment aspect of it that is pretty well embedded in the craft-based process so you can't really remove it. Any sort of decision at any given point requires an expert or an artisan or somebody who understands the relevant context and knows how to proceed based on the specific variables in this specific situation. Industrial processes on the other hand have been sort of figured out. This is how it works every single time and construction is just very very much on the craft end of the production spectrum where the decision of how to put these things together and how to wire this building or whatever is all left up to the expertise and judgments of the people.", "Doing the installation and what that gets you is it lets you put things together without having to do a very large amount of specifying things exactly as how you need. The drawings specifying a house going together are way fewer than the drawings needed to produce a Toyota Corolla. The design Cost required to do it in terms in proportion to how expensive the thing is is also much lower as well. Again, I’m not an expert on software development but it's somewhat more legible by the response and the end-product is very clear. You can clearly see every single part of it and how every single part of it touches every single other part. I'm sure someone in software could say “ Well It's actually really not super obvious how these things work and why they're done this way” or whatever, but you can clearly inspect every single part of it and see exactly how it does what it does and how it connects together.", "The other part of it is you can't really do that with a building. I guess I would also maybe say that if you're a developer, with a building, it's not necessarily obvious how it got to the point that it did when it was put together. A lot of times with physical things, even if you have the object, it is unclear what the process was to create it. So a lot of times what you see is that even with like this comes industrial espionage. Or somebody who's trying to steal some particular thing or whatever a lot of times that doesn't help them as much as they would think to try to like recreate it. A lot of times they have to basically go through the entire process of figuring out how to make it and it takes them just as long to do it as it did the original people doing the development. You saw this with like the development of the atomic bomb for instance. We're like the people who stole this stole the plans for how to make it or who had information on exactly how their system would react if the bomb worked basically took as long to figure out how to make it as the US did.", "So with a physical object, just the process used to make it is not necessarily super legible and it tends to be a little bit hidden. I was gonna say that perhaps that is not as true for software, but I actually realized that I don't actually know and it's very plausible to me that you could have some piece of software that was written, and then it'd just be utterly inscrutable as to how it came together and how you could maybe duplicate a similar piece of software. Is that like a category error?", "Dwarkesh Patel", "Yeah, that's a really good question. I think there are a lot of examples where if you don't have the context on why they were built a certain way, you wouldn't understand what was going on. If you've heard of the fast inverse square root , that's exactly what I was thinking of.", "Brian Potter", "Yeah, yeah.", "Dwarkesh Patel", "For the people who are interested, there's a great YouTube video on this that goes through it but basically, the guy who created it–– John Carmack is a super genius. If you look at the algorithm, it's a few lines, but you would never understand like why this gives you an inverse square root unless you went through the mind of John Carmack where he goes through Newton's method and all these other things like particular float operations , etc. I guess the advantage software has is the ability to fork. You can't just take a building, make an exact replica of it and then just change the part you want to better understand and see what the effect is. Whereas with a software project, you can just fork it or you can just make an API call or something. I guess it goes back to the modularity thing you were explaining–– try to understand a specific sub-component is easier with software.", "Brian Potter", "Yeah, that's interesting. You can run experiments on your piece of software to understand how it works easily and at a lower cost than you can with any physical object (especially a giant building).", "Brian Dreams of Building a Mile-High Skyscraper", "Dwarkesh Patel", "Okay. So let's say the CEO of some mega-corporation is like “Brian, we want to build some really interesting skyscraper or building, I've talked to the mayor and the governor and they're willing to get rid of all the building codes.” So there's no building codes. There's no regulation. You just want to build a really cool skyscraper. What is it that you would do? What would be some innovation or some change that you would make? What would you do if you were given this latitude to just build a really cool building?", "Brian Potter", "I would like to see us fulfill the dream of the Early to mid 20th century and build a mile-high skyscraper. This was where people saw the development of skyscrapers going during the 30s, 40s and perhaps 50s. Frank Lloyd Wright designed this mile-high skyscraper called the Illinois presumably back when Chicago was a Metro rising in importance. The technology for it exists but you haven't really seen anyone do it. Even people who are clearly willing to build these giant, elephant projects–––– nobody's tried to go the distance and build something of this scale. I would like to see us do it.", "Dwarkesh Patel", "Interesting. I know you have a really interesting essay about skyscraper height and one of the things you talked about is the superlinear increase in lateral forces and other kinds of impediments to building tall. How would you get over that kind of stuff?", "Brian Potter", "Basically by throwing a giant amount of money in. So the basic gist of that is the physical constraints do not allow you to build a front that’s a mile in height. It's the economic and legal constraints that sort of stop this extreme construction. Even the economic constraints are significant enough that even in places where there are no legal constraints like China or Dubai, the economics of it are just so brutal. But yeah in this fantasy scenario, a giant stack of money would get devoted to doing this.", "Dwarkesh Patel", "[laughs] A stack of money a mile high. So speaking of which, in that post, you brought up that argument from the economist Glaeser that says that we were leaving billions of dollars basically on the table by having building height codes because we're just giving up on all this vertical space. I'm curious about why you think it's a case that these developers don't have any sort of lobbying or political influence to be able to like collect the billions of dollars of deadweight loss that are created by these codes? Why aren't they able to like organize politically in a way that like gets rid of these regulations that are helping no one?", "Brian Potter", "That's a really interesting question. In general, the strongest construction lobbying group I'm aware of is probably the National Association of Home Builders which exerts quite a bit of influence to try to keep the cost of building single-family homes low, I'm not aware of any anything that exists for large commercial buildings or something like that. Maybe the Association of General Contractors or something. I guess my my initial guess would be that something to the effect of the natural constituencies for opposing a big project like this are always going to be quite a bit great and at least as big (if not bigger) than the constituencies that would able would be able to act for it. So like any big giant construction project, even if it had like a lot of developers mobilized to try to support that kind of thing also would have a have a large constituency that would exist to oppose basically basically anyone who lives in the area.", "They don’t want this giant shadow of a building or they’re worried about the congestion that it would cause, etc. A paradox with this situation is that the places that need it the most because their rents are so high the people that are living there are gonna be financially well equipped to oppose it and in a certain sense they're losing the most out of it, right? If you if you're making $50,000 a year you might value the view out of your apartment at like $500 but if you're making five million dollars a year, you might value that view proportionately more and be willing to expend a lot more to prevent it from from being obstructed. So I feel like this sort of mechanism by where places get wealthier and need more housing kind of also creates its own opposition to some extent.", "Dwarkesh Patel", "I think a co-sign solution to this kind of thing would be optimal where if the view is worth more to you than the apartment is worth to somebody else, then you can just pay them to not build there. The view is not worth more than an apartment is probably worth to somebody right? So then you have an optimal allocation of resources based on who has political influence.", "Brian Potter", "Yeah, it's interesting. I'm not super confident and I should look into it. Why developers don’t have better lobbying efforts does seem like an unanswered question.", "Deep Dive into Environmentalism and NEPA", "Dwarkesh Patel", "Speaking of being able to put projects into a tailspin, you just recently published a very interesting and thorough examination of how NEPA works. Do you want to explain what this law is and what its consequences are? Then I could ask you some more specific questions about it.", "Brian Potter", "Yeah. So NEPA is the national environmental policy act. This is the law that basically requires any major federal government action that might have significant environmental impacts to do a very long and thorough and expensive environmental impact study before anything is done on a project. It gets a large amount of attention because of how long these environmental impacts take to prepare. The average time currently is around four and a half years, and in some cases half of them take longer than that. The highway administration for instance, needed eight and a half years to do an environmental impact statement before they could build a new highway. So people are perpetually trying to figure out a way to reform this law so that we don’t have to wait years and years and years before building big important infrastructure projects. So that's the gist of what NEPA is and how it works.", "Dwarkesh Patel", "You had a really interesting point in the article where you said that by adding this cost, you can basically think of it as like a tax on all major government actions and the effect of a tax is to reduce what you're taxing. I thought at the end you had a really interesting argument about how like NEPA is anti-law. Can you explain this argument for the podcast listeners?", "Brian Potter", "It's my spicy take at the end that I always have to throw in right at the end but yeah, the basic argument is that the purpose of a law is roughly twofold: to encourage something that you would want more of or discourage something that you would you would want less of. We have laws against drunk driving because we think drug driving is harmful and we want less drug driving in our society. The second purpose of a law is to basically reduce coordination problems and enable exchanges that might not otherwise be able to take place.The government enforces which side of the road you're allowed to drive on not because one side is inherently better than the other, but because it's good if everybody agrees on which side of the road to drive on.", "Contract law is in some ways like this. It's good if you know that people will be punished for breaking contracts because that allows people to enter into them, which allows exchanges that might not otherwise take place. I forget exactly what the example of this is, but the ability of the English government in the 1600s to 1700s to pay back its debts was a really important development because it allowed it to raise money that it otherwise wouldn't be able to because people could trust they would be able to get paid back. Anyway, those are the two rough purposes of a law and NEPA does not do either of those things.", "NEPA is basically a procedural statement or requirement it does not require the government to weigh environmental concerns especially heavily. It doesn't prevent a big oil and gas drilling project from taking place–– essentially what it does require is that for any major environmental effects, you just have to document them very thoroughly so all of it is a documentation requirement, and notifying the public of what you're doing but bit doesn't prevent major environmental negative ramifications. As long as you've documented it quite thoroughly, you can kind of do whatever you want and the evidence is very unclear as to whether it has had net beneficial environmental effects to the extent that it has made it harder to do anything at all.", "The other side is about solving coordination problems and I think that NEPA actually is very very bad at this. It creates a lot of uncertainty because the requirements for doing the analysis are so unclear and the definition what an environmental effect keeps shifting over time. In the 1970s, it was maybe not obvious that greenhouse gas emissions were in major environmental effect, but now in 2020, it obviously is and so what you've had to do for a NEPA analysis changes over time which is of course fine in that sense, right? But it does mean that it's very unclear how long it's gonna take and what is going to be involved and whether anybody is going to sort of litigate your decision. This is the other sort of big part of NEPA–– people are basically able to sue people for not completing the analysis thoroughly enough. They can't permanently stop the project because all you need to do is basically show that you've documented your things thoroughly enough.", "So once you have documented it thoroughly enough, they don't have grounds for stopping it anymore, but they can't slow it down. If they slow it down enough, sometimes the project becomes unattractive and it gets canceled. That's kind of what a lot of these groups hope for and so basically instead of great uncertainty and solid coordination problems, it creates all this new uncertainty where people will like very deliberately try to avoid the NEPA process because they do not know how long it will take and how much it will cost to get their project approved and in some cases going through the process will take a very very very very long time. So making a business decision as to whether to do a new offshore wind development or develop a new forest resource or something like that is very hard to do. You don't know when your project is going to start because you don’t know how long the process will take.", "Dwarkesh Patel", "Yeah, that's so fascinating. All right, so I've got a lot more questions about this because your breakdown was really interesting. I don't know if you're familiar with the longtermist movement, maybe you’ve come across this before, but one thing they've proposed is not like a doctrine or anything, but on the periphery, some idea I saw was that just as we have environmental review we should have a posterity review so that you're analyzing the impacts of your actions on generations way down the line. What are the future impacts of actions just as we analyze what the environmental impacts reactions? What do you think of an idea like that given the various dysfunctions of the environmental review process?", "Brian Potter", "Yeah, a couple thoughts. One is just my gut response is that any additional review is just gonna add additional time and complexity to your process. It's a process that takes some amount of time and has some particular chance of success, right? So adding basically another filter to this process means that it's only going to make the process slower and less likely to succeed rather than more. The other thing is how accurately are you gonna be able to predict what your long-term impacts are/ it's not obvious to me that anybody making predictions over the past 20 years would have been able to do so with any degree of precision. It's not obvious that we would even be able to get like the sign right. Whether it would be net positive or net negative, that's off the top of my head, I could definitely be persuaded otherwise by somebody who is who has thought a lot more about it. So that is not a strongly held opinion, but that's fine.", "Dwarkesh Patel", "Yeah, that is not my immediate impression. That would be my critique as well. In the 70s, correct me if I'm wrong, but the technical consensus of the time was that we would hit peak oil by the 90s, and of course, that's because they couldn't predict their ability to find new reservoirs and develop new technologies that made more oil available to us. So it was just very hard to predict future trends. I don't know how much that kind of law would help. I vaguely hear that the political deliberation is that they're discussing reform to NEPA. What would your ideal reform of Napa look like? How would you reform the implementation?", "Brian Potter", "On a very simple level, I would like to even the playing field so a lot of these newer energy technologies have a lot of the same benefits that like the oil and gas industries have been able to acrew. For instance, oil and gas have a lot of categorical exclusions for certain drilling operations and for drilling in the Gulf of Mexico. They actually have a lot of new exclusions, which was perhaps one of the reasons you can obviously connect that to like the giant Deepwater Horizon spill . Giving technologies like wind and solar and large-scale transmission projects the same benefits that oil and gas drilling and like you're like natural gas pipelines have what I think to be like a massive massive boom. So just off the top of my head. That's one thing that I would like to see.", "Dwarkesh Pate l", "Yeah, I wonder what you think about this–– one idea I had while reading the post that didn’t make sense to me was why this was enforced through the courts where anybody can just bring a lawsuit. Shouldn't there be a single coherent bureaucracy whose goal is to figure out who's messing up? It seems like a mistake to have this done through the courts.", "Brian Potter", "I mean there I was reading a paper earlier basically saying that NEPA is a horribly drafted law because it contains no provisions for funding it or the bureaucracy enforcing it or anything like this. Essentially through random chance at the courts (it was passed during a very activist period in the court); they decided to enforce this this provision for the impact statement which was kind of added late in the process without a lot of fanfare or consideration and then that one little part of it ended up becoming the most important part because that's what the court decided to enforce really really strongly. This is not this is not how you would draft a environmental protection law if you were doing it from scratch.", "It's this weird thing that we've ended up with due to path dependency. An idea that I've seen floated around a few times is that you would want something that looked more like the OMB which basically is charged with figuring out how much additional a given law is going to cost. You would want something like a bureaucracy attached to it that was designed to figure out the environmental effects and that was decoupled from this specific agency. The downside of that, especially for an environmental protection statute, is whether it would be captured by political interests or not and either not enforced at all if it was very conservative of staffing or enforced extremely extremely vigorously if it was on the other end of the spectrum. That's kind of the risk of that, but yes, it's clearly not ideal that the court system is responsible for basically determining this.", "Dwarkesh Patel", "Even if there was a bias in how the law was enforced, if there's a bureaucracy, the benefit is that you can just fire the guy who's running it if you think he's not enforcing it correctly and replace him with somebody who you think will enforce it more appropriately. Whereas with the judiciary, if it's just dozens of different judges having independent opinions of how this should be enforced… There's nobody who's responsible to whom you can say to enforce it differently.", "Brian Potter", "The court actually makes it a little bit hard to change because you don't actually know what the effect of any given change in the provision until somebody files a lawsuit referring to it and it works its way through the courts. Some people think that During the Trump administration, there were some changes to how people worked because one thing about it is that a lot of these laws are just our federal level laws which are comparatively simpler to change. They change some of these laws and they’re designed to accelerate the process, but because it's unsure how the courts are going to interpret them, it's actually going to  increase the risk for these projects in the short term until some of these lawsuits make their way through the courts and we know exactly what is required or not.", "The court tells us that it's a very legitimate system of implementing environmental requirements. I don't think I'm an expert on this, but I don't think it's really how other countries do it. Almost every other country has something requiring environmental impact statements, but it doesn't necessarily mean that the courts are enforcing it via citizen lawsuits. A lot of times it's just done via a normal government bureaucracy or something like that.", "Dwarkesh Patel", "Okay, I was actually just about to ask you that. Do countries that have different or no systems of environmental review have speedier and more cost-effective public works?", "Brian Potter", "I don’t actually know, it would be very hard to separate that.", "Dwarkesh Patel", "That's all right. You had a comment in that post that I found interesting where you said that this uncertainty also makes changing NEPA somewhat risky. Experts have noted for instance that rules to accelerate NEPA processes or impose maximum timelines might result in more of them being challenged in court by failing to take the proper “hard look” Do you want to explain this? Because this is counterintuitive to me.", "Brian Potter", "Meeting the deeper requirement means you have to take what the courts call “a hard look” where you have to consider these impacts quite thoroughly. So the risk is that if you put a timeline cap on some of these processes, it has to be done in a year and if it's not done in a year, it's automatically approved. It's just an idea you see floated from time to time. The risk is that people will just say, “ Okay well, we’ll go and litigate this project immediately, and then when we do, we will say that they did not look hard enough at these impacts and if they can't marshal the resources we needed to study some particular flowering species and it only has a flowering period of two weeks in the spring and the time period was up before that happens,” then the court is good, which is a thing that happens in NEPA apparently.", "It’s one of the reasons why these take multiple years because if you're observing some species or whatever, you might need an entire year to actually observe. But if you fail to look at this plant during the flowering season, you don't know if it's actually there and so you haven't considered the impacts on the potentially endangered species and the court would say “ Yeah, you did not look hard enough at this, go back and do it again.” You see that mentioned quite a few times that timeline caps could either easily backfire by increasing the amount of increasing susceptible litigation which just makes these things take longer than they already do already. You see a lot of extra analysis due to risk aversion from these federal agencies. The laws around NEPA actually say that your environmental impact statements should really not be longer than 150 pages except in extreme circumstances, but the average environmental impact statement is now 660 pages or something like that. So people are already going more than what the law says they should do just out of risk aversion. So if you don't fix the incentives that are causing this risk aversion, your solution will not work", "Software is Stealing Talent from Physical Engineering", "Dwarkesh Patel", "I'm curious if you think that there's been a talent drain from physical engineering tasks (ex. construction) into software. Has that happened and has that had an impact on the world of atoms or is that just something people discuss on the internet and it's not real?", "Brian Potter", "I do think that's almost definitely happening. I mean this was my constant opinion when I was working as an engineer–– especially when I was managing younger engineers who were getting paid comparatively little. I was always on the verge of telling them, “Why don't you just go learn how to code and go earn 3-4x as much at a FAANG company instead of doing this fairly thankless work? ” Frankly yeah, I think it's very likely to be an issue. In some sense, it's theoretically self-correcting to the extent where if the labor moves out of the field, then it gets more expensive and the incentives are changed a little bit and you're forced to find ways of building things with less labor requirements where you can spread your labor development over a larger volume.", "So arguably, if the engineers are leaving and engineers are getting more expensive, you're going to basically find a way that's gonna push you towards figuring out how to spread your engineering efforts over like a larger construction volume–– which would be using more prefab or using kits of parts, assemblies and things like that. So theoretically, to some extent, it is self-corrected. I guess the risk for that is if you screen off the top 20% of most talented people, does that fundamentally handicap what your industry is capable of doing? Or do your incentives kind of push the other way, and you try to lower your quality? I feel like I've heard some people make complaints to this effect with semiconductor research and how semiconductor engineering is actually not especially well paid. It’s relatively easy for semiconductor engineers to go get jobs working in software development so you see sort of a brain drain from that.", "Yeah, I think I think it actually may be correcting to some extent with engineers. I'm a little bit out of the engineering game, but the salaries for engineers have actually kind of risen quite a bit. A similar risk (and people are in construction complaining about this constantly) is just the innovative unavailability of skilled labor. We talked about how being able to do a lot of these tasks requires skilled expertise, and you saw a lot of these people leave the labor force during the greater session. People aren't just entering it at the rates that they need to. I think the average age of a construction worker is somewhere around the 40s or perhaps 50s. It's like very very high so people are constantly complaining about how they can't get enough laborers, not enough workers, or whatever like that.", "But of course, this also is just gonna incentivize finding ways to basically get these buildings built using less labor. I'm a little bit less worried about potential negative affection. There are a lot of historical examples of labor constraints developing into labor-saving inventions–– the history of the US is like that, right? We developed a lot of labor-saving machinery in the American system of manufacturing. Yeah, I'm a little bit less worried about that, but I could see it definitely having effects on the other sides of the industry.", "Dwarkesh Patel", "There was this author of this global economic history book who made the point that the Industrial Revolution happened in Britain because the cost of labor was highest in Britain since there was a plague that killed up a bunch of people so labor was really expensive.", "Brian Potter", "Yeah, I've heard that as well. I'm not an expert in economic history and sometimes try to avoid things I know very very little about. One other factor is just that now, as venture capital expands its tentacles into other industries, you're seeing a lot more effort in developing solutions for the built environment. So to some, you're seeing venture capital money flow into the space. So perhaps that will counteract to some extent. I don’t know.", "Gaps in the Blog Marketplace of Ideas", "Dwarkesh Patel", "Tyler Cowen was interviewed by Patrick Collison and one of the things Tyler Cowen said was that there should be more blogs focused on one particular area or issue and just kind of raise the salience of it and help drive insight and understanding of it. I feel like you've done that really well with Construction of Productivity and I'm curious as to what other areas would you like to see the blogs that do for their area what you have done for construction productivity–– which is taking a broader view of what's happening, what the trends are, and trying to add more insight to professional problems.", "Brian Potter", "Oh, good question. Yeah, I feel like there's so much about this I was complaining about. I think a very underrated problem is this giant Civilizational machine that takes in raw materials and spits out finished goods and services and high quality of life. Nobody really knows how it works and mostly how it works is completely undocumented. It does not exist in written information anywhere, and the information that is written down is maybe in one particular company’s shared drive or something like that. There are no instructions for building a Toyota, and it might be very hard to kind of recreate that if you needed to, but just in general, I would like to see somebody do something similar for manufacturing, especially since so much manufacturing knowledge has been lost to this move to China and other places where labor is cheaper.", "There's just so much information about how things get built and manufactured that just isn't written down anywhere or isn't written down anywhere accessible. You see kind of people share the same small number of resources for how these things work over and over again. I asked a few people who I thought would know about it, “Is there any book written that actually explains what it’s like to get something built in china or what building things in china are actually like, and they're all like, “ No, I don't know any source that exists. ” There are a few blogs that describe their particular experiences, which end up being overly valuable resources, but there's no general source of available information for how any of that works.", "So yeah, I just think that documentation of how the civilization machine functions are just wildly under-invested in which I guess is true for documentation in general right? Nobody wants to do that because the payoffs are far in the future and uncertain, and the costs are upfront. So it's not surprising to me that it doesn't exist, but I do think that you know in general terms, people are really very interested in understanding how things work and if you can explain how something works, even if it seems like a fairly niche topic, you can get quite a bit of attention. So for anybody who works in manufacturing, I think you would find quite a bit of success if you started writing things about how it functions.", "Dwarkesh Pate", "Interesting. Okay, excellent. To listeners who know about manufacturing, do spin up your substacks.", "Why is Modern Architecture So Ugly?", "Dwarkesh Patel Okay, so this is part two of my conversation with Brian Potter. The first conversation was really interesting, and then afterwards, I realized that I’d forgotten to ask Brian about this really interesting theory that Scott Alexander and others have written about. Brian is the perfect person to talk to about this. So, I took the convenience of asking him to come on again. The question is basically, “ Why does modern architecture just look so much uglier than things that were built 100 years ago or 200 years ago? ”", "You would have thought that due to increases in technology and the build-up in new ideas and designs that things would become prettier. But if you look at a building that was built in the last 100 years, it just looks like a cylinder or a rectangle of glass and concrete. But if you look at things that were built before, you’d see things like Sagrada Familia or intricate cathedrals or even these skyscrapers that have all these flourishes, ornaments, and all these decorations. So Brian, what is going on? Why are things so much uglier now?", "Brian Potter", "I have a few thoughts on this, and I don't know if I have an answer to the actual question, but I can add some context. I don’t necessarily think that I’m the best person to ask about this–– a better person to ask would be an architect who can talk about how design and taste have evolved over the course of the 20th and 19th centuries. Generally, I don't have a huge amount of opinion on aesthetics; to the extent that I do tend to favor simple minimalist, clean lines. One of my favorite pieces of construction is this bridge called Saglinatobel , which is built in Switzerland. It’s the banner on my Twitter account, and it's this really simple minimalist tiny curve of concrete that's exactly in the shape it needs to be to resist the forces acting on it. So to some extent, to me, it's like you're asking, “ Why did aesthetics change from like worst aesthetics to better aesthetics? ” That's obviously not what I actually think, but I just want to shed some light on where I'm coming from.", "Do you want to separate out the question a little bit because I think it's more of a question about why modern buildings have less ornamentation. Not necessarily that they're ugly per se, but I think there's a really lot of really beautiful modern architecture. Again, an architect would be a better person to ask about this, but I will sort of muddle my way through using light and space and form to like create these big interesting open big impressive open spaces that create a lot of interesting shapes. A big expansive space is what they're really going for to be a good example of this.", "So you look at the new Terminal for the Portland airport, which has this big giant mass timber roof, which isn’t a usual way of building these big large-spanning structures, but it's really neat, it’s interesting, and that's a lot of what you see these days. A lot of architecture is focused on this, and that's been pretty common throughout history, right? If you like cathedrals, it's like they're trying to create these big impressive spaces using light and stuff to create this impressive space that feels like the Pantheon . It's the same thing. I think it's really about how you want to separate out the question of “Why it’s ugly” (which I do not necessarily agree with) and “Why does it not have so much?” Why is there so much less ornamentation is a little bit more defensible.", "Dwarkesh Patel", "Okay, actually, let me ask about the second one because I'm not sure I totally follow. So the ornamentation has been replaced by the openness…", "Brian Potter", "Well, yes. I guess my point is that traditionally, we were focused on creating these big impressive indoor spaces and playing with the light and stuff to do that. That's been common in buildings that people think are impressive. Our architects still do that a lot and use big open areas with a lot of glass everywhere as a kind of tool in their tool belt to some extent.", "Dwarkesh Patel", "Gotcha. Okay, so they were replacing the old stoneworks and stone masonry stuff and the gargoyles with these kinds of things. Okay, I see.So yeah, let's talk about one because the idea that the older designs were less efficient in some way to steer man to the opposite position. Someone might say that a lot of the value of construction and building is just the aesthetic presence that it has in a city or a neighborhood and to an extent, we've lost that. Maybe we got to use less material on a bridge or a skyscraper, but then it just acts as this ugly thing in the middle of the city. Or if not ugly, at least not as aesthetically pleasing as it could have otherwise been. How would you react to something like that?", "Brian Potter", "Yeah, so I guess to clarify what I was talking about before, I don't think that new buildings necessarily use less material in a sense. I just think they have an aesthetic that's more minimalist and streamlined without a lot of extra decoration. The complaint is that “ Oh all buildings have to be a cylinder of glass or a rectangle of glass or whatever,” but I think one reason that is true is that, while it may not be super interesting from the outside, when you're on the inside of a building, you actually kind of want a whole lot of glass. It’s really nice to have a large open space to walk through and to have a lot of natural light.", "That was very hard to do prior to the mid-20th century for a couple of reasons. One is because buildings weren't really air-conditioned before that. Another one is it was like really expensive to have big expansive glass. It's still expensive, but back then, it was even more expensive. As you go farther back, there was a glass-making process invented in the 1950s 1960s called the float glass process , which made it a lot easier for you to basically make really high-quality glass for way way way cheaper than it was possible to previously. So for air conditioning and stuff like that, it basically advances in like structural design and increases the use of steel. It basically became a lot more possible to just have your building be a big giant slab of glass with a lot of natural light and a lot of openness on the inside, and that creates kind of a nice experience if you're actually in the building.", "To some extent, I think that applies a little bit more generally if you look at an HGTV remodel or something like that. Everybody wants to create a big giant open concept floor plan or whatever with all the walls blown out and lots of natural light coming in everywhere. So to some extent, it's just the technology. It seems like it's maybe a case of the technology evolving so you can create this nicer interior space. To some extent, that perhaps came at the expense of having a lot of really ornate decoration on the outside (not on the top, but the outside). If you're building a building, it makes sense to optimize the inside at the expense of the outside because that's what you're actually using.", "Dwarkesh Patel", "Okay. Yeah, that's an interesting way to put it. One comment that I've heard is the progress of YIMBYism has been hurt by the fact that new buildings, because of this rational self-interest, are ugly on the outside and very pleasant on the inside. The result is that the people who are community organizers will oppose these new buildings because what they get to see is the outside, and all these historical preservation boards will notice that these older buildings are not as pleasant on the inside and have obviously less occupancy and everything. They look more pleasant in their neighborhood than the new buildings that people want to tear down and build. So if we want to convince the Nimbys to go along with the new construction, you kind of have to make it look beautiful. What do you think about that?", "Brian Potter", "Yeah, it seems like it's true. That's like the fundamental tension in real estate, right? Any property you have will have a lot of externalities and influence with the value of everything around it, so it's this big game of negotiation between you and everything in the immediate area. That's why things like zoning and stuff like that exist in the first place. Even if it's not necessarily implemented in a special way that people think is good, I think it's probably a little bit easy to over-index on that. It would not surprise me if when you say, “We'll build this since it’s like a really beautiful old building” , people would still oppose it because it's still gonna be a giant building. It's still gonna mess up their traffic It's still gonna have a bunch of renters that are gonna come in and reduce property values. You see a lot of these historical preservation issues where people want to preserve things like old gas stations and laundry mats and things that are not beautiful at all. Parking lots and stuff like that. So I think that's probably true on some margin. I would be surprised if there was just “one thing” that we’d need to fix to untangle this problem.", "Dwarkesh Patel", "Gotcha. By the way, you mentioned how indoor air conditioning and heating has changed this dynamic. Is it just because we have enough insulation now that even if you lose heat through the glass, it’s fine?", "Brian Potter", "Basically if you have a big glass wall as your exterior building and you don't have air conditioning, then that thing is just going to cook whenever it gets hot out. It's going to basically be like a greenhouse So you basically need air conditioning to make that habitable and you see this with single-family homes too where when air conditioning started to become popular, all of a sudden they started building these houses with these big giant picturesque windows Which they didn't have before. Again, this makes a nice inside space, and if you're going to be inside, that's the part you want to focus on. But it basically is predicated on having air conditioning to control the climate.", "Dwarkesh Patel", "That's a nice little detail that I want to be able to talk about. That explains why the outsides of buildings aren't as pretty as it used to be, but there's still the question of why the insides are so minimalistic. I don't know if you saw this, by the way, but about a year or two ago, Microsoft announced that they were building this new office for developers in India and they were looking at this very interesting architectural details and craftsmanship that made it look like you know for everything from the furniture to the floor layout to the arches of the entrances. It was supposed to be like like Taj Mahal, and it actually looks really cool. So why don't more developers do something like that on the interiors of these buildings?", "Brian Potter", "I mean again, good question. I think, for the most part, broadly speaking, and again, I'm not a developer so I'm gonna model my group as best I can. They're basically building the space to rent out, right? Or to sell to somebody else. So they're creating what a lot of times is called the development's shell and core , which I think I'm using correctly. They basically don't even finish the inside, it's all just raw and open studs and everything like that. So then the tenant comes in and does whatever they want to the space. But they're basically in the business of creating usable space for some given class of market that they're trying to attack, right? They're not in the business of making fancy architecture. Same if you're buying a single-family home. You come in with blank walls and you put up all your different pictures and decorations and paint it however you want, then add your funky wallpaper or whatever. When you move, you can take all that stuff out, and the next person can come in and put in everything that they want. So I think to some extent, you see a decoupling of the aesthetic design from the building itself.", "Dwarkesh Patel", "So then this is really a revealed preference about the people who are actually renting or buying these properties. People who are living there actually don't want this kind of material design maybe?", "Brian Potter", "Yeah, I think that again, it comes back to a little bit about a question of aesthetic style and whether people want a lot of ornamentation and fundamental attractiveness. Cause again, people do focus a lot on making these spaces attractive. A lot of times, they do that by having a nice open space, a lot of natural light, and a cool glass staircase that goes from here to there. The new Google headquarters (I don't know how new it is, but on the newer side), is basically this big, big giant, dome-type thing with a lot of structural detail visible and a lot of just light coming in this big, giant open space. There are a lot of interesting staircases and stuff like that. So to some extent, it's just a style with less ornamentation necessarily than aesthetics per se.", "Dwarkesh Patel", "Okay, gotcha. The other theories that out there emphasize more of the cultural and aesthetic changes rather than the actual practical necessities of these kinds of changes you've talked about. So one theory is that if you look at the change in men's clothing over the last two centuries, it's also become much more minimalistic and much less colorful and ornamental, right? If you look at King Louie or something, he's wearing a suit with all of these studded colors, dyes, and even gems. But if you look at like a picture of Joe Biden, he's just wearing a suit, and not even a tie anymore, right? It's just a black suit and a white shirt. So there’s this continuation of this sort of aesthetic trend towards being very minimalistic. What's your reaction to that, Cik?", "Brian Potter", "Yeah, I mean, I think that's true to some extent. This is one of the difficulties of having a conversation like that–– there are so many degrees of freedom. It's hard to pin down what specifically you're talking about, and what class of things you're specifically talking about. If you look at an example from fashion, if you look at streetwear or something like that, and you just Google streetwear, you will find a ton of pictures of really quite ornate, interesting clothing. There will be some stuff that's minimalist, but there will be a lot that's not necessarily minimalist at all. I guess in some way I disagree with the premise. If you're talking about like some like elite counter-signaling thing, it's so easy and cheap to make ornate colors for clothing that it's not a signal of status anymore. So you have to countersignal by wearing very simple clothing or maybe there's something to that, but that's a different thing than why are things ugly in general.", "Dwarkesh Patel", "One thing that people also talk about is the increasing cost of labor, so that now you can hire talented stonemasons to spend hours on every square foot of the outside of a building. I was in India like six months ago and we went to New Delhi to visit this new temple that was built, the Swaminarayan Akshardham . I'm sure I mispronounced it, but anyway, it has this really intricate and cool design on the outside. It is just covered in these hand-carved stones with like intricate idols and different images. It's really cool, but I think it took tens of thousands of hours and probably way more thousands of workers and stonemasons to actually construct. Which is obviously not going to be very feasible economically in a Western country. So one theory is just that it’s too expensive to do that kind of stuff anymore. You need to do something that doesn't require as much manpower.", "Brian Potter", "Yeah, I mean, that's definitely got to be part of the story, right? I mean, construction, as we know it, has not really gotten any cheaper, but it also hasn't gotten more expensive. A lot of other services that are pure labor, like medical care, education, etc, have gotten more expensive. Part of that is probably because we've found ways to pull labor out of the process. So construction only rises at the rate of inflation instead of faster than the rate of inflation. So super labor and things like masonry just doesn’t get done anymore. There's probably like a vicious cycle there where, you know, you hire less of it, so there's fewer masons available. So they get more expensive and the skill gets more scarce. Now it's difficult to hire someone to do really ornate masonry work in the US and it’s probably really difficult to like even find them.", "I'm sure they exist, but it's probably not trivial. If you have your schedule and you're trying to meet someone, it's not super straightforward. An interesting example of this is, I can't remember if we talked about this in part one or not, the Ise Jingu temple in Japan , which is this temple complex that gets torn down and rebuilt every 20 years. They've been doing this for 1300 years or something like that, so it's using like 1300-year-old construction techniques, thatched roofs, and particular woodworking methods or whatever. Now, it’s quite difficult for them to find a lot of these skills. There are just not that many roof thatchers around anymore. So, you know, again, I can imagine that it's some sort of a similar cycle with masonry. I don't think that's the whole story because a lot of these things are built in a way that makes building in this minimalist style actually quite expensive to do.", "A glass curtain wall is actually really expensive to build on a building because glass is just expensive by itself and because it lets in so much light and it gets so hot so you need a much more stronger mechanical system to keep it cool. I've talked to architects and why the owners love these, glass curtain walls because it seems like, especially for now, everyone is really concerned about like climate change and greenhouse emissions of whatever it is you're building. People are like, “ Yeah, the owners just really love this all-glass look, and they're willing to pay extra to get it.” If you look very early, this is the sort of style of construction where you have a big glass facade. For a skyscraper, it’s called the international style. Early international-style skyscrapers were actually quite expensive, more expensive than the traditional way of building. But again, the people really liked the aesthetic. It gave you some other options in terms of light on the inside or whatever. So they decided that it was worth it. I think, yeah, that's definitely probably part of the story, but again, I think it definitely doesn't seem like it can be all of it.", "Dwarkesh Patel", "Gotcha. Okay. To the extent that it is part of the story, somebody left an interesting comment on one of Scott Alexander's posts about how modern construction looks different than older construction . This is a comment from fluffy Buffalo. He or she writes, “ I think new technology should help a lot with 3D printers, CNC machines, robots, CAD, and AI. It shouldn't be too hard to come up with a way to produce pleasing ornaments, murals, and building shapes at a reasonable price. ” Then they go on, that no one is doing it because the current crop of architects can apparently only think in steel, concrete, and glass. So how plausible do you think this is?", "Brian Potter", "I think I read some of the comments on that post. I think one of the people mentioned that the Victorian style of house basically came about because of mass production methods, which made really ornate wood decorations really, really cheaply. They were just cranked out on some milling machine in a factory somewhere, then you could just buy them and place them in your house or whatever. So that style is basically a function of technology making that cheaper. That was in style for a while, but it’s not in style anymore because of the shifting sands of aesthetics. I think part of this is more broadly applicable. It's actually kind of a hassle to have all this ornamentation all over your house.", "On the inside, it's just lots and lots of stuff to dust and keep clean, which again comes back to cost disease. If you have servants to do that for you, it's fine. But if you don't have servants, it's you, and it creates a lot of extra work for you. Same on the exterior. But then also there is the issue that in general, you want your building to shed water and direct it away from the house as possible. If you have a lot of little nooks and crannies and ornamentation stuff for water to collect, that's pretty bad from a durability perspective. I would be careful not to over-index on that because obviously, people did build extremely durable, beautiful masonry brick buildings. It's not like that makes it impossible. But it is something to consider because on the margin, it probably does make your maintenance costs go up.", "Dwarkesh Pate l", "Okay. Are you optimistic overall? Maybe not with these specific technologies, and maybe not with having ornamentation or something, but do you think that in a hundred years, the technology would enable the construction of buildings that look prettier than modern buildings, but are also more maintainable or at least as maintainable? Or is it basically going to be steel and glass towers for the time for the upcoming future?", "Brian Potter", "Yeah, that's a good question. There's a lot of enthusiasm. Some of it is architecture. So we'll see how well it actually gets adopted. But for timber and large tall buildings made out of heavy timber elements, I don't know if that’s partially it's an aesthetic thing. If you look at it from a carbon perspective, it's using a lot of timber versus steel or concrete which is way better on that calculus. And it does kind of look nice. A lot of these tech companies are building these ornate timber offices. So that's one trend. Then yeah, there's not an obvious technology that could usurp the sort of standard we have unless it's like a dramatic development in material science or whatever, where somebody finds some way of building something that's way cheaper and gives you way more options than you had before.", "That’s what steel and then concrete gave you, right? All of a sudden, you could build things in ways that you couldn't do before steel, and you could use so much less structure to build fine concrete, because it was like liquid. All of a sudden, you could make any shape that you want. That's when you get all these really cool interesting shells and really ornate concrete domes, which of course, we also don't build anymore. So I don't see an obvious technology that could replace that. I guess, you know, the other thing is like, when all of a sudden you had technology that could like easily duplicate or make any sort of art possible, did that usher in a new era of way better art or 2D aesthetics? I don't necessarily think that it did, right? So I think if you could all of a sudden build any sort of shape that you want cheaply and easily, it's not obvious to me that we would get really cool things. I'm sure you would see a lot of interesting experiments on the margin or whatever, but it's not obvious to me that the standard way of building in some ways might become even less interesting.", "Dwarkesh Pate l", "Right. So there is the question of why we haven't seen better aesthetics in these other fields. So given how much you can do with the traditional art today, why do these paintings that are selling for millions of dollars have the common perception and stereotype that they're very simple and ugly? So to apply that to architecture and engineering, there's this idea that Scott Alexander kind of talked about this a lot in his post about a cabal of modernist architects in these guilds who are just very obsessed with building these buildings that the public doesn't like to look at. Do you think it’s true that the architects just have a completely different sense of aesthetics than the average public?", "Brian Potter", "I think there's probably something to that where you go to architecture school and architecture school is adjacent to art school in the way that you learn how to design buildings, but you also learn a specific way of looking and thinking about them. That filters down into how you design your building, but I think architects are competing in a marketplace, and the vast majority of them who are not star architects are basically trying to deliver buildings that owners are happy with. If they don't do that, they're going to go out of business. I think most architects you will talk to will basically say, “Yeah, if I don't give the owner a building that he's happy with, I am not doing my job correctly.” They're definitely being hired for other things adjacent to a sense of taste. But perhaps that's, part of the story, and it's like a fragment or whatever, but I think it would be very easy to over-index. I think there's a lot of ways that's just not true.", "Advice for Aspiring Architects and Young Construction Physicists", "Dwarkesh Patel", "So I'm really glad we got a chance to cover this topic and I appreciate all those explanations, but actually, there’s another question that I forgot to ask you the last time that I'm curious about. Let's say somebody listens to the conversation or they've been a follower of your blog and this has got them interested in engineering. Somebody may be in high school or maybe early in college. For them to be able to do cool things in these fields in the future, what kind of training and career advice would you give them so they could pursue and be able to get involved and innovate in these fields?", "Brian Potter", "Yeah, that's a good question. I don't have a super good answer to that. I fell into it very much, very much by accident. I spent a very large chunk of my career just doing pretty standard engineering-type of stuff. Then I sort of fell ass-backwards into a job at a construction startup which then led me to other places. I guess if you're an engineer (I feel sort of gross giving this answer) because in general, if you go to a good engineering school, it's like you're perpetuating the college industrial complex. But people do really care about that, especially in the fields of building engineering that I'm more familiar with. They do care about that quite a bit. I went to a reasonably good engineering school and that probably opened doors for me that I would not have otherwise.", "So yeah, go to a good engineering school, go get a lot of the status indicators that are very obvious, like an engineering school and a top-tier firm, whatever. That does kind of matter. People do pay attention to those things. It matters in the sense that people pay attention to those things in the building development world. If you went to MIT or Caltech, and then you had an internship and it's Skidmore or Merrill , that's gonna open quite a few doors for you. I'm not amazingly happy with that answer, but I do think that it is true.", "Dwarkesh Patel", "Yeah.", "Brian Potter", "Another way would just be to try to come in from a more oblique angle. Focus on software and then work at one of the many more new startups that are now tackling the construction space. They're increasingly in need of software developers and that would probably be a lower-risk way to do it because you don't need to necessarily go to a fancy school to become a software developer. Then if it doesn't work out, just go get a job at a FAANG company and make $600,000 a year. Look and see how the startups in the space are changing and growing. I think by the time some folks got out, the building space might look pretty different in technology, and software stuff is kind of slowly forcing its way in there a lot more.", "Dwarkesh Patel", "But if they want to transition from just being an engineer to working on the forefront stuff that you write and talk about, is working at a startup the ideal way to do that?", "Brian Potter", "Oh, that's a good question. A lot of startups are doing really interesting stuff, not just in software, but a lot interesting building technology stuff. There's a lot of like green building that has worked, there's work out there that is being developed by folks working on low-carbon concrete and low-carbon steels. There's a lot construction robotics, a lot of prefab stuff, the startup world is really kind of trying to start to eat the construction world. There are a lot of opportunities there. Very broadly, I would say that I see a lot of innovation happening. So I think that's a good place to sort of look and see what's going on.", "Dwarkesh Patel", "Awesome. Okay. Well, Brian, thank you so much for coming back a second time. I really appreciate it. I'm glad we got a chance to talk about these questions. It is very interesting and it's good to get an actual engineer’s perspective on it so that we're not just doing like cultural theory and we actually understand the practicalities of what's involved in all these kinds of things.  I highly recommend people check out your blog and they can also follow you on Twitter . We'll leave the handle in the description. Anything else you'd like to plug or say at the end?", "Brian Potter", "I’m doing some work for the institute for progress , which is a think tank designed to advance industrial progress and progress studies ideas more generally so you should check them out as well.", "Dwarkesh Patel", "Okay, excellent. Thanks Brian, I appreciate it.", "Brian Potter", "Cool.", "Share" ]
[ "https://constructionphysics.substack.com/", "https://www.neom.com/en-us/regions/theline", "https://www.pinterest.com/pin/657947826781328844/", "https://en.wikipedia.org/wiki/Urban_heat_island", "https://astralcodexten.substack.com/p/model-city-monday-8122", "https://en.wikipedia.org/wiki/Abraj_Al_Bait", "https://study.com/learn/lesson/the-garbage-can-model-of-decision-making-the-garbage-can-theory.html", "https://substack.com/profile/20821340-connor-tabarrok", "https://constructionphysics.substack.com/p/why-its-hard-to-innovate-in-construction", "https://www.benkuhn.net", "https://psychcentral.com/lib/hofstadters-law-and-realistic-planning", "https://en.wikipedia.org/wiki/Katerra", "https://austinvernon.site/blog/construction.html", "https://en.wikipedia.org/wiki/Alchian%E2%80%93Allen_effect", "https://en.wikipedia.org/wiki/Levittown", "https://mobile.twitter.com/connortabarrok", "https://www.amazon.com/Where-Flying-Car-Storrs-Hall-ebook/dp/B09H478XG4", "https://www.merriam-webster.com/dictionary/druthers", "https://www.youtube.com/watch?v=6221PKJLwQ8", "https://twitter.com/schlimmson", "https://en.wikipedia.org/wiki/Automotive_industry", "https://www.amazon.com/Power-Broker-Robert-Moses-Fall/dp/0394720245", "https://en.wikipedia.org/wiki/Robert_Moses", "https://constructionphysics.substack.com/p/how-to-design-a-house-to-last-1000", "http://georgism?", "https://www.benkuhn.net/", "https://www.architecturaldigest.com/story/60-years-ago-modernist-city-brasilia-built", "https://austinvernon.site/", "https://www.designingbuildings.co.uk/wiki/Construction_tolerances", "https://www.idc.com/promo/trackers/arvr", "https://www.thomasnet.com/articles/custom-manufacturing-fabricating/understanding-cnc-machining/", "https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/composite-wood-products", "https://www.thinkwood.com/mass-timber", "https://www.thinkwood.com/mass-timber/cross-laminated-timber-clt", "https://sopa.tulane.edu/blog/whats-difference-between-ar-and-vr", "https://constructionphysics.substack.com/p/where-are-the-robotic-bricklayers", "https://www.homedepot.com/b/Lumber-Composites-Dimensional-Lumber/N-5yc1vZc3tc", "https://www.nucorbuildingsystems.com/products/insulated-metal-panels/", "https://www.peri.com/en/business-segments/3d-construction-printing.html", "https://en.wikipedia.org/wiki/Stewart_Brand", "https://www.amazon.com/How-Buildings-Learn-Happens-Theyre/dp/0140139966", "https://formlabs.com/asia/blog/how-black-diamond-prototypes-gear-form-3l/", "https://www.iconbuild.com/", "https://en.m.wikipedia.org/wiki/Baumol%27s_cost_disease#:~:text=Baumol%27s%20cost%20disease%2C%20also%20known,have%20experienced%20higher%20productivity%20growth", "https://constructionphysics.substack.com/p/construction-costs-around-the-world", "https://www.timesofisrael.com/what-the-flow-adam-neumanns-new-real-estate-startup-is-valued-at-1b-before-launch/", "https://a16z.com/", "https://a16z.com/", "https://en.wikipedia.org/wiki/Adam_Neumann", "https://constructionphysics.substack.com/p/another-day-in-katerradise", "https://en.wikipedia.org/wiki/Fast_inverse_square_root", "https://www.youtube.com/watch?v=NCuf2tjUsAY", "https://en.wikipedia.org/wiki/John_Carmack", "https://developer.arm.com/documentation/ka004288/latest", "https://lynceans.org/all-posts/frank-lloyd-wrights-1956-mile-high-skyscraper-the-illinois/", "https://www.worksinprogress.co/issue/why-skyscrapers-are-so-short/", "https://www.worksinprogress.co/issue/why-skyscrapers-are-so-short/", "https://scholar.harvard.edu/glaeser/home", "https://www.nahb.org/", "https://www.agc.org/", "https://www.epa.gov/nepa", "https://en.wikipedia.org/wiki/Deepwater_Horizon_oil_spill", "https://www.omb.ae/", "https://www.epa.gov/nepa", "https://www.dictionary.com/e/acronyms/faang/", "https://en.wikipedia.org/wiki/Tyler_Cowen", "https://en.wikipedia.org/wiki/Patrick_Collison", "https://en.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia", "https://en.wikipedia.org/wiki/Salginatobel_Bridge", "https://en.wikipedia.org/wiki/Portland_International_Airport", "https://en.wikipedia.org/wiki/Pantheon,_Rome", "https://www.pilkington.com/en/global/about/education/the-float-process/the-float-process", "https://www.hgtv.com/design/remodel/topics/remodeling", "https://www.housinghumanright.org/what-is-a-yimby-hint-its-not-good/", "https://en.wikipedia.org/wiki/NIMBY", "https://news.microsoft.com/en-in/features/inspired-by-the-taj-mahal-microsofts-newest-office-is-a-workspace-of-art/", "https://www.designingbuildings.co.uk/wiki/Shell_and_core", "https://akshardham.com/", "https://en.wikipedia.org/wiki/Ise_Grand_Shrine", "https://astralcodexten.substack.com/p/highlights-from-the-comments-on-modern", "https://skidmoreinc.com/", "https://merrillconstructiongroup.com/", "https://constructionphysics.substack.com/", "https://mobile.twitter.com/_brianpotter", "https://progress.institute/", "https://www.dwarkesh.com/p/brian-potter?utm_source=substack&utm_medium=email&utm_content=share&action=share" ]
https://www.dwarkesh.com/p/bryan-caplan-3
Bryan Caplan - Feminists, Billionaires, and Demagogues
[ "Dwarkesh Patel", "Today, I have the great honor of interviewing Bryan Caplan again for the third time. Bryan, thanks so much for coming on the podcast.", "Bryan Caplan", "I've got the great honor of being interviewed by you, Dwarkesh. You're one of my favorite people in the world!", "Don’t Be a Feminist", "Dwarkesh Patel", "It's a greater pleasure every time (for me at least). So let's talk about your book, Don't Be a Feminist . Is there any margin of representation of women in leadership roles at which you think there should be introduced bias to make sure more women get in, even if the original ratio is not because of bias?", "Bryan Caplan", "No, I believe in meritocracy. I think it is a good system. It is one that almost everyone sees the intuitive appeal of, and it works. Just looking at a group and saying, “We need to get more members of Group X,” is the wrong way to approach it. Rather, you need to be focusing on, “ Let's try to figure out the best way of getting the top quality people here.”", "Dwarkesh Patel", "If there's an astounding ratio of men in certain positions, could that potentially have an impact on the company's ability to do business well? Perhaps the company could just care about increasing the ratio for that reason alone.", "Bryan Caplan", "Right. I mean, one can imagine that! I think in our culture, it really goes the other way. People are more likely to be trying to get rid of men, despite the fact that the men are delivering value. If you really pushed me into starting to think, “ Suppose you're running a bar, would you have ladies’ night?” well yeah, I would have ladies’ night in a bar because that actually works, and it's good business! However, if what you're doing is trying to actually get correct answers to things, if you're trying to go and make something run effectively, and if you're just trying to make progress and you're trying to learn new things, the thing to focus on is what actually leads to knowledge and not focusing on just trying to get demographic representation.", "I think what we've seen is once you go down that route, it is a slippery slope. So besides defending meritocracy on its merits, I would actually also say that the slippery slope argument is not one that should be dismissed lightly. There's a lot of evidence that it does actually fit the facts. When you make an exception of that kind, it really does lead you to bad places.", "Dwarkesh Patel", "Okay. But changing topics a bit, I wonder if this gives you greater sympathy for immigration restrictionists because their argument is similar, that there's no natural shelling point for your keyhole solutions where you let tens of millions of people in, but you don't give them welfare or voting rights. There's a slippery slope when you let them in because, eventually, the civil rights argument is going to extend to them. There'll be adverse consequences that these keyhole solutions can't solve for.", "Bryan Caplan", "First of all, I would say maybe. That is one of the best arguments against keyhole solutions. I'm also guessing that a lot of your listeners have no idea what keyhole solutions are, Dwarkesh, so maybe we want to back up and explain that.", "Dwarkesh Patel", "Go for it. Sure.", "Bryan Caplan", "So I have a totally unrelated book called Open Borders, the Science and Ethics of Immigration. One of the chapters goes over ways of dealing with complaints about immigration that fall short of stopping people from actually excluding or kicking out people that are already there. So just to back up a little bit further, most of the book talks about complaints about immigration–– saying that they're either totally wrong or overstated. But then I have another chapter saying, “ Alright, fine, maybe you don't agree with that, but isn't there another way that we could deal with this?” So, for example, if you're worried about immigrants voting poorly, you could say, “Fine, we won't extend voting rights to immigrants or make them wait for a longer time period.” That's one where I would just say that the focal point of citizen versus noncitizen is one of the strongest ones. So I think that it actually is one that has a lot of stability.", "This line of, “Well, you're not a citizen, therefore… ” really does have a lot of intuitive appeal. Although, yes, I do think that keyhole solutions would probably not work multi-generationally, so to go and say this is a keyhole solution where you're not a citizen, your kids are not citizens, and their kids after them are not citizens, that's one that I think would be hard to maintain. However, again, at the same time, the problems people are worried about, if they ever were severe, are also getting diluted over time. So I wouldn't worry about it so much. That is one of the very best objections to keyhole solutions that I know of.", "Dwarkesh Patel", "Okay, so going back to feminism. Over time, doesn’t feminism naturally become true? One of the things you can say is that the way that society is unfair to men includes how they fight in wars or do difficult and dangerous jobs, but society, over time, becomes more peaceful (or at least has in our timeline), and the difficult jobs get automated. At the same time, the gains for people who are at the very peak of any discipline keep going up fairly, but the implication still is that if men are overrepresented there, even for biological reasons, then the relative gains that they get go up, right? So over time, feminism just becomes more true, not because society necessarily discriminated against women, but just because of the trends in technology.", "Bryan Caplan", "Once again, I feel like we should just back up a little bit. What is feminism anyway, because if we don't know what that is, then it's very hard to talk about whether it's becoming more true over time. In my book, I begin with some popular dictionary definitions that just say feminism is the theory that women should be political, social, economic, and cultural equals of men. I say that this is a terrible definition , which violates normal usage. Why? Well, we actually have public opinion data on, first of all, whether people are or are not feminists, and second of all, what they believe about the political, social, economic, and cultural equality of women. And guess what? An overwhelming majority of people that say they are not feminists still agree with the equality of women in all those mentions, which really makes you realize that really can't be the definition of feminism. That would be like saying feminism is the theory that the sky is blue.", "Well, feminists do believe the sky is blue, but that isn't what distinguishes feminists from other people. So what distinguishes them? What I say is that the really distinguishing view of feminism is that society treats women less fairly than men. The view is that society treats women less fairly than men or treats men more fairly than women. This definition fits actual usage. It would be very strange for someone to say, “I'm a feminist, but I think that men get terrible treatment in our society, and women are treated like goddesses.” Then you say, “ Well, then you're not really a feminist, are you?” That doesn't make sense. On the other hand, for someone to say, “I am not a feminist, but God, we treat women so terribly, we're awful.” That, again, just would not fit. So I'm not saying this is the one true definition, but rather that it is much closer to what people actually mean by feminism than what dictionaries say. So to be fair, every now and then, there'll be a better definition. I think the Wikipedia definition in the second sentence adds that it also has the view that women are treated very unfairly.", "Dwarkesh Patel", "Is another way of defining feminism just that we should raise the status of women? That's slightly different from the fairness issue because if you think of a feminist historian, maybe their contention is not that women were treated unfairly in the past. Maybe they just want to raise the status of women in the past who are underrepresented. If you think of somebody today who wants to, let's say, raise the status of Asians in our society, and they want to acknowledge the great things that Asians are doing in our society, then maybe their contention is not even that Asians are treated unfairly. They just want to raise their status. So what would you think of that definition?", "Bryan Caplan", "So first of all, it could be, but I don't think so. Here's what I think. There could be a few people like that, but that's not what the word means in normal use. If someone were to say, “ Women are treated absolutely fantastically, way better than men, and I want it to get even higher.” You say, hmm. Well, that's not what I think. Somebody might say, “Well, I can still be a feminist and think that,” okay, but that's not what the word actually means. It's not the typical view of people who call themselves feminists. The typical view is precisely that women are treated very unfairly. They want to raise that and alleviate that in a way that's almost by definition. If you think that someone's being treated unfairly, then to say, “ I think they're being really unfair, but I think it’s great that it's unfair.” It's almost self-contradictory.", "Dwarkesh Patel", "I guess I was making a slightly different point, which is not even that these people don’t want to raise the status (the actual living standards of women) in some way. It's just that they want to raise the rhetorical status.", "Bryan Caplan", "Yes, but again, if someone were to say, “ I think that women are treated absolutely fantastically in society, way better than men, who we treat like dogs. But I also want women's status to be even higher than it already is.” That would be something where you could argue that “ Well, that person may still be a feminist, but that is not what the word means.” Because hardly anyone who calls themselves a feminist believes that weird thing that you're talking about.", "Dwarkesh Patel", "Let me make an analogy. Let's say you or I are libertarians, right? And then we think we should raise the status of billionaires. Now, it's not like we think society mistreats billionaires. They're pretty fine, but we think their status should be even higher.", "Bryan Caplan", "Yeah, I mean, this just goes to the definition. In order to find out whether a definition is correct, you just have to think, “Well, how is the word commonly used?” Logically speaking, it's possible to have a different view or two things that are compatible. The whole idea of a definition is that, ideally, you're trying to find necessary and sufficient conditions such that everybody who satisfies the conditions falls under the category and that everybody who doesn't satisfy the conditions doesn't. In ordinary language, of course, it's notoriously hard to really do that. Defining a table is actually quite difficult in a necessary and sufficient-condition sense, but we can still say, “Well, a table is not by definition something that people sit on, right?” Someone could say, “Well, I suppose you could sit on a table, but that's not the definition in ordinary use in any language of which I'm aware.”", "But why don't we actually go back to your real question. Which was..", "Dwarkesh Patel", "Overall, the left tail of society is being compressed, and the right tail is being expanded. Does feminism become more true over time?", "Bryan Caplan", "The answer is that we really need to look at all of the main measures to get an idea of this. With some of the ones that you're talking about, it does make more sense. As jobs become less physically dangerous, then at least you might say that things are less unfair to men. Although in the book, what I say is that even that is a bit more superficially complicated, at least on the surface. The immediate reaction is that society's less fair to men because they do the most dangerous jobs. Although I also say, “ Yeah, but they get monetary compensation for that.” So, all things considered, you probably shouldn't think of it as unfair. It's something where it's reasonable to say, “Hey, wait a second, how come men are the ones that are enduring 90 percent of the workplace deaths” and say, “ Well, because they're getting 90 percent of the combat pay.” Broadly construed it’s not mostly actual for combat.", "So anyway, that's one area where you should be careful. But I can see the possibility there. I do have a section in the book where I go over what's happening over time. What I'll say is, well, one big thing that's happened over time is that people have become very hyper-concerned with the mistreatment of women, which means that feminism is becoming less true as a result because when people are really hyper-concerned that they might be unfair to someone, they are even less likely to be unfair to them. So I think that's one thing where society where feminisms become less true over time. Another area that I talk about and which I think really does tip the scales, although again, you really need to go through the book because I do try to work through a lot of different margins…", "I think the one that really does settle it against feminism in today's age is precisely the level of false feminist accusations about unfairness. When we go over all the objective measures, then you say, well, it's close to a wash in terms of which gender is treated more or less fairly overall. But then you realize, “ Yes, but there's one gender that has to endure a whole lot of grossly exaggerated hyperbolic accusations and unfairness and another gender that gets to make those accusations. ” The gender that has to endure the unfair accusations is men, and the gender that gets to make them is women. Obviously, not all women make them, and not all men receive them. But still, if we're talking about the average fairness of the treatment of men and women or society, I say that this climate of false accusation and intimidation is what really tips it. It didn't have to be this way, Dwarkesh! [laughs] We could have just had conditions change without a whole lot of flinging of wildly inaccurate accusations, but that's not the world we're in.", "Dwarkesh Patel", "When would you say was the flipping point? Was there a particular decade that you thought “unbalanced things are equal now?”", "Bryan Caplan", "Yeah. So one of the things I say in the book is that there are a bunch of ways where you can say that women were treated less fairly in earlier decades, but there are aspects that are probably more important overall where women are treated worse now. The main one is paternal support for children. In 1940, the odds that you could count on the biological father of your children to help you to raise them was maybe 90%. Now it's probably more like 60%, 70%. So that's one of the main ways that I say that women probably are treated less fairly than men. And the unfairness has gotten worse over time.", "Again, just understand this is not the kind of book that most people are used to where someone argues like a lawyer and they just say, look, I've got 20 arguments for why I'm right. And everyone who disagrees with me is stupid and doesn't have a leg to stand on. This is the kind of book that I liked to write where I really say, let's just calm down and just go through every issue separately, weigh each one on its merits. There are a bunch of points where someone could say, “ Why do you concede that? That makes your argument weaker. ” Well, I concede it because it's true! Then in the end, I have my overall judgment. I will just say that there are a number of books that are written in this terrible modern style of lawyerly reasoning, where you basically have a thesis that you just try to defend in every possible way. I don't write books like that. I try to write books that are honest and self-reflective, and where if there's some weakness in what I'm saying, I don't just acknowledge it if someone points it out; I try to be the first person to reveal it so that people feel like they can trust me.", "It's my own conscience. I don't feel right when I say something not really quite right. I feel like I should’ve always said the other thing. So I try to just write with candor.", "Dwarkesh Patel", "Now, would you say that feminism in the United States is overcorrected but that it's still true in the global sense? In the way that, on average, across the world, women are treated more unfairly than men. Because if that's the case, then if the US is at the center of global feminism, then, of course, they're going to overcorrect here, but overall they're making the world a better place.", "Bryan Caplan", "So that is a much better argument. I would say that if we think about most areas of Europe, then I think that it's very similar to what's going on in the US. In the book, I do go over this especially. I start with Saudi Arabia, where it's really obvious what's going on and how poorly women are treated. But then I go over to India and China and just think about plausible rates of female infanticide . I think it is very likely that overall the treatment of women in India and China is more unfair than that of men. In Saudi Arabia, I'm almost sure that it is. In terms of “Is the US providing a useful corrective for the world while messing up things in the US?” It's possible. I think the problem is that it does discredit a lot of the reasonable points because the US just doesn’t focus on the really big issues. The amount of time that American feminists spend on female infanticide in China and India… I don’t think it would even be 1% of the rhetoric. It's just not something that they care about.", "So I would say that there's more harm being done by the sheer distraction of putting so much emphasis upon small, exaggerated, or reverse problems that bother feminists in the first world while ignoring and indirectly causing people to forget or neglect actual serious problems in some other countries.", "Positively shifting the Overton Window", "Western Feminism Ignores Infanticide", "Dwarkesh Patel", "But let me apply the argument you make in Open Borders that you can effect change by shifting the Overton window. So advocating for open borders just shifts immigration policy slightly towards the open end. Can American feminists make the same point that through making the crazy arguments they make in America, they're making Saudi Arabia more liberal for women?", "Bryan Caplan", "I would say that when the arguments are crazy, then it's not clear that shifting the Overton window actually happens. That may be where you discredit the other view. In particular, I think what I say in that part of the book is that people generally confuse being radical with being unfriendly. And most of the harm that is done to radical causes is due to the unfriendliness rather than the radicalism. So in that case, I would say that feminism has a definite friendliness problem. It is not a movement that goes out of its way to go and make other people feel like they are respected, where even if you disagree with me, I want to be your friend and listen to what you have to say, and maybe we could go and come to some understanding. I think it is a movement where the main emotional tenure of the elites is, “ We are totally right, and anyone who disagrees had better watch out.” So I think that there is a discrediting of it.", "The other thing is just that I think there's too much cultural separation between the feminist movement as we know it and places like China and India, where I just don't see the attitude of being really angry about exaggerated or false complaints about unfair treatment of women in the United States is going to do anything for infanticide in India. Correct me if I'm wrong, Dwarkesh. Do you see much influence of Western feminism on infanticide in India?", "Dwarkesh Patel", "I don’t know, but maybe yes. More generally, one of the common arguments that libertarians make about India and its elites is, “Oh, all of India's elites go study in Oxford or something, and they learn about the regulations the West is adopting that make no sense for a country with $2,000 GDP per capita. ” I feel like some of the things could be true of feminism where all these Indian elites go to American universities and UK universities where they learn about radical feminism, and they go back, and they adopt some of these things.", "Bryan Caplan", "Yes, although you might remember what Alex Tabarrok says about these very things. You can go to India and have people pushing paper straws on you, and yet the streets are still totally covered in trash. In fact, the pushing of the paper straws probably actually distracts people from the much more serious problem of the horrible trash, right? Again, I don't know enough about India to speak with any confidence here, but if you go and learn radical feminism in Western universities, come back to India and start complaining about how we need to have more female CEOs in a country where you have millions of female infanticides per year, I think it probably is like the paper straws problem where you are so focused on a trivial problem that maybe is not only a problem, is not even a problem at all. At the same time, that anger really blinds you to an actual, really serious problem that's going on. But you know India better than me, I could be wrong.", "Why The Universe Hates Women", "Dwarkesh Patel", "I believe rape within a marriage is still legal in India and is still not recognized. Maybe it was just recently changed. Let's say this is an interview, and a feminist says, “Oh my gosh, okay Bryan, maybe you're right that society as a whole doesn't mistreat women, but maybe the cosmos mistreats women.” So women are forced to have children. All of these things combined make women's lives worse on average than men's lives. It's not because society mistreats them, but in some sense, there's still unfairness geared toward women. What do you make of this argument?", "Bryan Caplan", "So unfairness, where there's no human being that does it, seems like a very strange idea to me. Just from the get-go, well, so who was unfair to you? “The universe is unfair. ” Then I mean, the correct term there is unfortunate, not unfair. So that aside, I would say it's a really interesting question. Who actually has better lives just as a matter of biological endowments, men or women? I mean, in terms of demonstrated preference, I think the overwhelming result is that most people just want to remain in whatever gender they're born in. So this is not actually transgenderism . This is like a genie wish. If you could change your gender just with a wish, costlessly, perfectly, I think a very large majority of people would still want to stay with whatever gender they have because it's part of their identity. It's some kind of endowment effect, status quo bias, or whatever. But then if you say, “Okay, yeah, right, fine. Like you, like you just want to stay whatever you were because that's your identity, but if you could put that aside, what would you want to be?” It's a tough question.", "You can say, “ Well, women have a harder personality to deal with because of higher neuroticism, and they've also got higher agreeableness.” But that gives them some other advantages in terms of getting along with other people. For example, men's disagreeableness makes it hard for men to just bite their tongues and shut up when someone's saying something they don't like. I think that is easier for women to do. You may have noticed that having to shut up and bite your tongue while someone around you says something stupid you don't like is actually a big part of life. That is one thing. Now, in terms of things that I feel that I would get out of being a woman, just being able to have as many kids as I wanted would matter a lot to me. So I only have four kids right now. If it were totally up to me, I would have had more kids. I think, as a woman, it would have been easy to do. [laughs] So again, you know, there is the issue. How are you going to find a guy that wants to have a lot of kids?", "This is one where I've looked at the data on family size and what determines it. While both men and women seem to have a say on family size, it just looks like women's traits have a much larger effect. Men are more likely to say, “ OK, fine, whatever. We'll do what you want to do on family size.” Whereas women seem to have much more pronounced preferences, which they then tend to get. I think that if I were a woman, I could have had more kids, and it would have been easier for me to do it. That would be something that matters to me. It's not something that matters to everybody, but that's something there . Again, there is just the nice fact of people caring about your suffering. In the book, I do talk about the ethos of women and children first, which is very pronounced. It’s a modern society where we can simultaneously have something like “women and children first”, but then also have a lot of rhetoric about how people don't care about women. It's like, “ Hmm, that’s not right.”", "Dwarkesh Patel", "What do you think of this theory that maybe society cares a lot more about women suffering, but it sympathizes a lot more with men's success? If you think of a default character in a movie or a novel, at least for me, then the default is a man. Then maybe there's some victim that defaults as a woman. But I'd rather be the sympathy of some sort of success than get it for suffering.", "Bryan Caplan", "I mean, do you need sympathy for success? Or do you want admiration? I mean, I guess what I would say is that everybody's got suffering, and only a small share of people have any notable success. If all that you knew was you're going to be a man or woman, I would say, “ Well, gee, if I'm a woman, then people will sympathize with my suffering, which is almost definitely coming because that's the human condition. ” Whereas to have admiration for your success is something where it just affects a much smaller number of people. I know that hanging out in Austin among hyper-successful people may be biasing your sample a bit, but I do think it's believable that men get more unmitigated admiration for their success. Of course, there are also differences in the mating opportunities that you get for being a successful man versus a successful woman. So that is there too, but again, this is something that really is only relevant for a very small share of the population.", "But then the argument is, “ Well, that small share of the population matters so much in terms of the story we tell ourselves about our civilization or just in terms of who controls more resources overall .” So if being a woman billionaire is harder, maybe for biological reasons, maybe for the reasons of our society, you can say, “Well, that only affects a small percentage of women in society. ” But on the other hand, billionaires matter a lot.", "In terms of what life is like for most people, the main way they matter is that billionaires just provide awesome stuff. In terms of the stories that people tell, it's true that if you go and look at most classic movies or novels, the main characters are male. Even in cartoons, actually, the main characters traditionally have been male. But on the other hand, that’s just fiction. In terms of daily life. I'd rather have people be really concerned about me in real life but have my perspective underrepresented stories than the other way around.", "Dwarkesh Patel", "So what do you make of the argument that employers hold defects in women's personalities much more against them than they hold defects in men's personalities? I think Tyler cited some of this research in his new book on talent that being too agreeable or being too aggressive harms women more than it harms men.", "Bryan Caplan", "I would say that it's complicated in terms of willingness to fire. I think employers are much more willing to fire men. For defects and for insubordination. Another thing on the list is a small one, but I think that it is indicative of a broader trend. For people working at workplaces with dress codes, men are much more likely to be dinged on dress code violations than women because for men, there's a definite thing men are supposed to do. If you're not doing it, you are in violation. For women, on the other hand, it's like, “Well, gee, I mean, it seems kind of like that's not what you should be wearing, but I don't want to be the person that says anything about it. And who knows? Who am I to judge what a woman ought to be wearing on the job?” But a man, on the other hand, needs to be wearing a suit in 110-degree weather. What was the high this summer over in Austin? [laughter]", "Dwarkesh Patel", "Why do you think that women have gotten less happy since the sixties in America?", "Bryan Caplan", "Right. So the main thing I know about this is Stevenson and Wolfer's research on this. The main thing to remember is the magnitude. If I remember correctly, they find that in the sixties, women had about a two percentage point advantage relative to men in terms of their odds of saying they're very happy. 25% of men said they were very happy, then 27% of women in the sixties said that they were very happy. Whereas now, it seems like women have a two percentage point deficit relative to men. So now, if 25% of men say they're very happy, then 23% of women say they're very happy. It's always important in these papers to look at those magnitudes because the media coverage is going to say, “ Oh, women are miserable now. ” It's not that women are miserable now! We're talking about a two-percentage point difference. It's a data set large enough for this to actually be meaningful, but we do want to keep it in perspective in terms of what's really going on.", "The paper probably actually goes over a bunch of stories and says the obvious ones are all wrong. That would be what Justin Wolfers ustin especially would normally do. I think he's usually right that simple stories about something like this are wrong. In terms of what I would pursue if I read through the paper and reminded myself of what they found and then said, “ Okay, well, what will work?” I think I would, on one end, focus on single moms because they’ll become much more common, and their lives really are hard. A rise in single motherhood is coming. I would guess that’s one important part of it. Then, I would also be wondering how much of it is actual feminism telling women that they should be unhappy because the world is unfair and that causes unhappiness. Again, I'm not saying that these are right. It's plausible to me. The main thing I would say about feminism causing unhappiness in the adherents is that it probably doesn't matter most for most self-identified feminists because most people just are not that intellectual and they don't think about their ideas very often. So it's one thing to say, look, if you believe you're going to hell, you'll be unhappy. It's like, well, if you believe it once a year, does it make you unhappy? If you remember, “Oh yeah, once a year, I think I'm going to hell. ” The rest of the time, you don't think it.", "On the other hand, the person who is always thinking, “ I'm going to hell, I'm going to hell,” probably will be unhappy. So I think feminism is very likely to reduce the happiness of people who are feminist elites and take it really seriously, where they're talking about it all the time. That is likely to cause unhappiness. I'd be amazed if it didn't. But on the other hand, for the vast majority of people who say, “Yeah, I am a feminist. Moving on…” I don't think it's too likely to be messing up their lives.", "Dwarkesh Patel", "That raises an interesting possibility. This is not my theory, but let's run with this. So feminism has actually gotten more true over time, but it's precisely because of feminism. Maybe it's made elite women more unhappy. As you said earlier, the amount of single mothers has gone up. Maybe part of that is the reason, and part of that is because of feminist trends in terms of family formation. Maybe women prefer to be at home caring for children on average more, but then feminism encourages them to have careers, which makes them less happy. So if you add all these things up, plus mentorship, which men are less likely to give because of #metoo . So add all these things up, maybe they're the result of feminism, but they still make feminism more right . Would you agree with that?", "Bryan Caplan", "Yeah. If we go back to this definition of feminism and this theory that our society treats women less fairly than men, then if the story is that women have made a lot of false accusations against men and then men have responded by changing their behavior, that would seem to be a strange example of saying the society is treating women less fairly than men. It would seem to be a case that society is treating men unfairly, and this is having some negative side effects for women as well. But it's one where if you really were trying to draw the line… Well actually, here's actually one of the weaknesses of the definition that I proposed. So foot binding in China. From my understanding, the main drivers of foot binding in China were women. So women are binding feet, and they're also telling their daughters they have to have their feet bound. Men seemed to care less, actually, it was more of an intra-female abuse. This is one where you could say that in China, women are treated less fairly than men, even though the perpetrators are women. I think that does actually make sense. I would just say that the definition that we use in our society isn't really calibrated to deal with that kind of thing.", "When it comes to what the right way to describe it would be, it just gets a bit confusing. It's useful just to say, all right, well, if women are mistreating women and that's what's making women's lives hard, how do we count that? I think I would just say that we don't have any really good way of counting it, and might be useful to just come up with a new word to describe this kind of thing.", "Women's Tears Have Too Much Power", "Dwarkesh Patel", "What do you make of Hanania’s argument that women's tears win in the marketplace of ideas?", "Bryan Caplan", "Yeah. So we might want to back up a little bit and explain what the argument is. So Richard Hanania on his substack has a very famous essay where he points out that in fiction, when there is a mob of angry college students, it's very demographically diverse. But when you look at actual footage, it seems like women are highly overrepresented. He generalizes this by saying that a lot of what's going on in terms of cancel culture and related problems is that women are the main ones that get angry about these things, and people don't know what to do about it. So he, if I remember correctly, says that a man can, in a way, actually enjoy an argument with another man. Even if you lose or even if it's a physical fight, he says, you can sort of feel invigorated by it. We got through this. We resolved something. Whereas no guy feels this way about an argument with his wife. “ What do I need to do in order for this argument to end as soon as possible” would be a more normal reaction. This sort of generalizes to the majority of social arguments, specifically ones that involve someone being offended or angry, or hurt. He says a lot of what's going on is that it is mainly women that are presenting these complaints and that it's hard to deal with it because men don't want to argue with angry women. It just makes them feel bad. It's sort of a no-win situation. So anyway, that is Hanania's argument. Overall, it seemed pretty plausible to me. I haven't thought about it that much more, but it's one that does seem to make a fair bit of sense in terms of just what I'm writing about feminism.", "You know, one really striking thing is just how one-sided this conversation is. It is a conversation where women have complaints, and men mostly just listen in silence. Ofcourse, men will sometimes complain amongst each other when women aren't around. It's not a real dialogue where women have complaints about men, and then men are very eager to say, “ Oh, but I have something I would like to say in rebuttal to tha t.” A lot of it is what he calls “women's tears.” It's sadness, but mingled with or supported by intimidation: “If you don't give me what I want, if you don't pretend that you agree with me, I will be very angry, and I will be fairly sad.” So you should be afraid. I think a lot of what's probably going on with the rhetorical dominance of feminism, is that people are just afraid to argue against it because, in a way, it does sort of violate the women and children first ethos. If women complain about something, you aren't supposed to go and say, “ I disagree. Your complaints are unjustified. ” You're supposed to say, “ Look, what can I do to make it better ?”", "Dwarkesh Patel", "But that seems like a good description of race issues and class issues as well.", "Bryan Caplan", "I mean, the main difference there is that there are a lot of people who have a lot more firsthand experience of intergender relations, and they spend a lot more time in intergender relations than they spend in all of the other ones. So I mean, the dynamic is probably pretty similar, but in terms of the really negative firsthand experience that men have, Hanania probably is right about that. Then that generalizes to bigger issues.", "Dwarkesh Patel", "You have an essay about endogenous sexism. Could this just not be the cause of society being unfair to a woman? We start off with men being in power, they get sexist just because they're around other men and they like them more. So then, the starting position matters a lot, even if men aren't trying to be sexist.", "Bryan Caplan", "So let me just back up and explain the argument. The argument says to imagine that in reality, men and women are equally good in absolutely every way, but people are more likely to have close friends with their own gender, (which is totally true). So if I remember the essay, I think that for close male friends, the male-to-female ratio was 6:1, and for women, it was 4:1. So most people's close friends are of the same gender. When you meet these people, and they're your close friends, you know them really well. Furthermore, because you have handpicked them, you're going to think well of them. So then the question is, “ What about people of the opposite gender? What will your interaction with them be like? ” What I point out is that a lot of the opposite gender you hang out with will be the spouses and partners of your friends.", "On average, you're going to think worse of them because you didn't pick them. Basically, there are two filters there: I like you because you're my friend, and I put up with your partner because that person is your partner. So this means that the women that men are around are going to be the partners of their friends. They're not going to like them less and think less of them than they think of their friends. On the other hand, the partners of women's friends will be men, and women will get to know them and say, “ Wow, they're not that great. They're at least kind of disappointing relative to my same-gender friends.” So anyway, this is an argument about how the illusion of your own gender being superior could arise.", "Now, as to whether this is actually the right story, I leave that open. This was just more of a thought experiment to understand what could happen here. Could this actually explain the unfair treatment of women in society? Especially if we start off with men being the gatekeepers for most of the business world? It's totally plausible that it could. That's why we really want to go to the data and see what we actually find. In the data I know of, the evidence of women earning less money than men while doing the same job is quite low. So there's very little gender disparity in earnings once you make the obvious statistical adjustments for being in the same occupation. Again, the main area that probably actually has gotten worse for women is mentoring. Mentoring is partly based on friendship. I like this person. I like working with them. So I will go and help them to go and acquire more human capital on the job. This is one that feminism has visibly messed up, and many feminists will, in a strange way, admit that they have done it while not taking responsibility for the harm. I've got an essay on that in the book as well.", "Looking at the evidence, it is totally standard now for male managers to admit that they are reluctant to mentor female employees because they're so worried. When I go and track down a bunch of feminist reactions to this, they basically just say, “ I can't believe how horrible these guys are. ” But it’s like, look, you're asking them for a favor to get mentorship. They're scared. If someone's scared, do you really want to yell at them more and offer more mostly empty threats? It's really hard to scare someone into doing something this informal, so you really do need to win them over.", "Dwarkesh Patel", "Tactically, that might be correct, but it seems to just be a matter of “Is their argument justified? ” I can see why they'd be frustrated. Obviously, you want to point out when there's a sexual harassment allegation, and that may have the effect of less mentorship.", "Bryan Caplan", "Well, is it obvious that you want to point that out? Part of what I’m saying is that there are different perceptions here. There are differences of opinion. If you want to get along with people, a lot of it is saying, “ How does it seem from the other person's point of view? ” Obviously, do not assume that the most hypersensitive person is correct. So much of the problem with mentorship comes down to hypersensitivity. I've got another piece in the book where I talk about misunderstandings and how we have so much lost sight of this very possibility. When there's a conflict between two people, who's right and who's wrong?", "Ofcourse, it could be that one person is the conscious malefactor and the other person is an obvious victim that no one could deny. That does happen sometimes. But much more often in the real world, there's a misunderstanding where each person, because of the imperfection of the human mind, has the inability to go and get inside another person's head. To each person, it seems like they're in the right and the other person is in the wrong, and one of the most helpful ways for people to get along with each other is to realize that this is the norm.", "Most conflicts are caused by misunderstandings, not by deliberate wrongdoing. This is the way the people who keep their friends keep their friends. If any time there's a conflict with a friend, you assume that you're right and your friend is in the wrong, and you demand an immediate abject apology, you're going to be losing friends left and right. It is a foolish person who does that. Friendship is more important than any particular issue. This is not only my personal view, it is the advice that I give to everyone listening. Keep your friends, bend over backward in order to keep your friends, and realize that most conflicts are caused by misunderstandings. It's not the other person is going out of their way to hurt you. They probably don't see it that way. If you just insist, “ I'm right, I demand a full apology and admission of your wrongdoing,” you're probably going to be losing friends, and that’s a bad idea. The same thing I think is going on in workplaces where there is an ideology saying that we should take the side of the most hypersensitive person. This is not a good approach for human beings to get along with each other.", "Dwarkesh Patel", "Yeah. That's very wise. What do you make the argument that a lot of these professions that are dominated by men are not intrinsically things that must appeal to men, but the way that they are taught or advertised is very conducive to what males find interesting? So take computer science, for example; there are claims that you could teach that or economics in a way that focuses on the implications on people from those practices rather than just focusing on the abstractions or the “thing-focused stuff.” So the argument is these things shouldn't be inherently interesting to men. It's just in the way they are taught.", "Bryan Caplan", "The word inherently is so overused. It's one where you say, \"Well, are you saying that inherently X?” Then someone says, “ Well, not inherently X, just you'd have to bend over backward and move heaven and earth for it not to be. So I guess it's not really inherent.” That is a lot of what is worth pointing out. So if you're going to put the standard to that level, then it's going to be hard to find differences. You could say, “There's absolutely no way under the sun to go and teach math in a less male way.” On the other hand, maybe we should ask, “Is it reasonable to expect the whole world to revolve around making every subject equally appealing to men and women? ” That's an unreasonable demand. If there's a subject like math that is male-dominated, the reasonable thing is to say, “ Well, if you want to get in on that, you're going to need to go and become simpatico with the mindset of the people that are already there and then push the margin.” You can say that it’s “so unfair that male ways of doing math are dominant.” Or maybe you could say that it's unfair for someone who's just shown up to demand that an entire discipline change its way of doing things to make you feel better about it. Obviously, there are large areas that are very female-dominated, and there's no pressure on women to go and change the way that flower arranging is done, or cooking in order to make it more welcoming to men.", "So this is one where if you had a really high bar for how things are fair, then unless the rigorous conditions are met, you're going to see a lot of unfairness in the world. Although even then, as long as you have an equally high bar for both men and women, I don't think it's going to make feminism any more true by my definition. I also just say, I think these really high bars are unreasonable. If a friend had these bars of standards saying, “Look, why is it that when we meet for food, we have to go and meet at standard hours of breakfast, lunch, and dinner? I actually like meeting in the middle of the night. Why can’t we have half of the time be my way? ” You respond, “Well yeah, but you're only one person, so why should I change?” It depends upon what subfield you're in as well. There are actually groups of people really like hanging out in the middle of the night, so if you ask, “ Why is it we always have to meet in the middle of the night? Why can't we do it my way? ” You are entering into a subculture that works this way. You could demand that we totally change our way of being to accommodate you, but it just seems like an unreasonable imposition on the people who are already here.", "Now, when you sort of go through the list of different things that people think of as making something a male or a not-male field, sometimes people will treat things like acting like there's an objectively correct answer as a male trait. If that's a male trait, then we need to keep that trait because that is vital to really any field where there are right and wrong answers. I mean, that's an area where I am very tempted rhetorically to say, “It's just so sexist to say that it's male to think that things are right and wrong. I think that is a trait of both genders” . In a way, I end the essay stating, “Yes, these are not male; not only do they not make a male monopoly, but they are also not uniquely male virtues. They are virtues that can and should be enjoyed by all human beings. ” At the same time, you could ask whether virtues are equally represented by both genders and well, that's an empirical question. We have to look at that.", "Bryan Performs Standup Comedy!", "Dwarkesh Patel", "We're shifting subjects. You recently performed at the Comedy Cellar . How was that experience?", "Bryan Caplan", "Yeah, that was super fun and a big challenge! I am a professional public speaker. Standup comedy is professional public speaking. I was curious about how much transfer of learning there would be. How many of the things that I know as a regular public speaker can I take with me to do standup comedy? I'm also just a big fan of standup comedy– if you know me personally, I just find life constantly funny.", "Dwarkesh Patel", "Yes, I can confirm that. You're a very pleasant person to be around.", "Bryan Caplan", "Life is funny to me. I like pointing out funny things. I like using my imagination. A lot of comedy is just imagination and saying, look, “Imagine that was the opposite way. What would that be like?” Well, actually, just to back up again: during COVID, I did just create a wiki of comedy ideas just on the idea that maybe one day I'll go and do standup comedy. Comedy Cellar actually has a podcast , kind of like Joe Rogan, where comedians go and talk about serious issues. I was invited to that , and as a result, I was able to talk my way into getting to perform on the actual live stage of the biggest comedy club in New York. The main thing I could say about my performance is that it was me and nine professional comedians, and I don't think I was obviously the worst person. So that felt pretty good.", "Dwarkesh Patel", "It was a pretty good performance.", "Bryan Caplan", "I felt good about it! There were some main differences that I realized between the kind of public speaking I was used to doing and what I actually did there. One is the importance of memorizing the script. It just looks a lot worse if you're reading off a note. Normally I have some basic notes, and then I ad-lib. I don't memorize. The only time I have a script is if I have a very time-constrained debate, then I’d normally write an opening statement, but otherwise, I don't. The thing with comedy is it depends so heavily upon exact word choice. You could go and put the same sentence into Google Translate and then back-translate it and get another sentence that is synonymous but isn't funny at all. That was something that I was very mindful of. Then obviously, there are things like timing and being able to read an audience (which I'm more used to). That was what was so hard during COVID–– not being able to look at the faces of a live audience. I can see their eyes, but I can't tell their emotions or reactions to their eyes. I don't know whether I should talk more or less about something. I don't know whether they're angry or annoyed or curious or bored. So these are all things that I would normally be adjusting my talk for in normal public speaking. But with comedy, it's a bit hard to do.", "What successful comedians actually do is they try it in a bunch of different ways, and then they remember which ways work and which ones don't. Then they just keep tweaking it, so finally, when they do the Netflix special, they have basically done A/B testing on a hundred different audiences, and then it sounds great–– but the first time? Not that funny.", "Dwarkesh Patel", "It didn't occur to me until you mentioned it, but it makes a lot of sense that there are transfers of learning there in both disciplines. There are a lot of hypotheticals, non-extra events, and putting things in strange situations to see what the result is…", "Bryan Caplan", "A lot of it is just not having stage fright. So I probably had just a tiny bit of stage fright at the Comedy Cellar, which normally I would have basically zero, but there it was a little bit different because it's like, “Am I going to forget something? ” I actually have a joke in the set about how nothing is scarier than staying silent while thousands of people stare at you. So that was a self-referential joke that I worked in there.", "Dwarkesh Patel", "I can't remember if it was Robin Hanson who said this, but didn't he have a theory about how the reason we have stage fright is because somehow, you're showing dominance or status, and you don't want to do that if you're not actually the most confident.", "Bryan Caplan", "You're making a bid for status. In the ancestral environment, we're in small groups of 20-40 people. If you go and want to speak, you're saying, “ I'm one of the most important people in this band here. ” If you're not, or if there are a lot of people voicing that that guy is not important, then who knows? They might shove you off the cliff the next time they get a chance. So yeah, watch out.", "Affirmative Action is Philanthropic Propaganda", "Dwarkesh Patel", "I wonder if this explains the cringe emotion. When somebody makes a bid for status, and it's not deserved. Okay, I want to talk about discrimination. So as you know, there's a Supreme court case about Harvard and affirmative action. You might also know that a lot of companies have filed a brief in favor of Harvard , saying that affirmative action is necessary for them to hire diverse work for ourselves, including Apple, Lyft, General Motors. So what is the explanation for corporations wanting to extend affirmative action? Or are they just saying this, but they don't want it?", "Bryan Caplan", "If those individual corporations could press a button that would immunize them from all employment lawsuits, I think they would press it. When you look at their behavior, they don't just give in whenever they get sued. They have a normal team of lawyers that try to minimize the damage to the company and pay as little as possible to make the problem go away. So I think really what's going on is public relations. They are trying to be on that team. As to whether it's public relations vis a vis their consumers or public relations vis a vis other people in the executive boardroom is an interesting question. I think these days, it probably is more of the latter. Although even under Reagan, there were a bunch of major corporations that did make a similar statement saying that they wanted affirmative action to continue. I think that the real story is that they want to get the status of saying, “we are really in favor of this. We love this stuff.” But at the same time, if it just went away, they wouldn't voluntarily adopt a policy where they give you a right to go and sue them for mistreatment.", "I think there would still be a lot of propaganda. I mean, here's the general thing. You think about this as a species of corporate philanthropy sticking your neck out in favor of a broad social cause. Some people disagree and say that it's self-interest. They say, “ Look, the odds that even Apple is going to change the Supreme Court's mind is super low. ” So I don't think it's that. Basically, what they're doing is a kind of philanthropy. What's the deal with corporate philanthropy? The deal with corporate philanthropy is you are trying to go and, first of all, make the public like you, but also, you're trying to look good and jockey for influence within your own company. One really striking thing about corporate philanthropy is when you look closer, normally, they spend way more resources marketing the philanthropy and letting everyone know, “Oh, we did all this philanthropy!” Then they actually spend on philanthropy.", "So I had a friend who was a marketing person in charge of publicizing her company's philanthropy. They gave away about a thousand dollars a year to the Girl Scouts, and she had a hundred thousand dollars salary telling everyone about how great they were for giving this money to the Girl Scouts. So I think that's the real story. Get maximally cynical. I think without denying the fact that there are true believers now in corporate boardrooms who are pushing it past the point of profitability. The cost of philanthropy is just the production budget of the TV commercial. A rounding error. The donations are a rounding error, and then they go, “Hey, everyone, look at us. We're so freaking philanthropic!”", "Peer effects as the Only Real Education", "Dwarkesh Patel", "Okay. So this question is one that Tyler actually suggested I ask you. So in The Myth of the Rational Voter, you say that education makes you more pro-free market. Now, this may have changed in the meantime, but let's just say that's still true. If you're not really learning anything, why is education making you more free market?", "Bryan Caplan", "It's particularly striking that even people who don't seem to take any economics classes are involved. I think that the best story is about peer effects. When you go to college, you're around other peers who though not pro-market, are less anti-market than the general population. The thing about peer effects is that they really are a double-edged sword from a social point of view. Think about this. Right now, if you are one of the 1% of non-Mormons that goes to Brigham Young University, what do you think the odds are that you'll convert to Mormonism?", "Dwarkesh Patel", "Higher than normal.", "Bryan Caplan", "Yeah. I don't know the numbers, but I think it's pretty high. But suppose that Brigham Young let in all the non-Mormons. What would Brigham Young do for conversion to Mormonism? Probably very little. Furthermore, you realize, “ Huh, well, what if those Mormons at Brigham Young were dispersed among a bunch of other schools where they were that were a minority?” Seems quite plausible. They'd be making a lot more converts over there. So if you achieve your peer effects by segregation (which is literally what college does, it takes one part of society and segregates it from another part of society physically when you're in school, and then there's social segregation caused by the fact that people want to hang out with other people in their own social circles, your own education levels, etc.), in that case, in terms of whether or not education actually makes society overall pro-free market, I think it's totally unclear because, basically, when people go to college, they make each other more pro-free market. At the same time, they remove the possibility of influencing people of other social classes who don't go to college, who probably then influence each other and make each other less free market. I think that's the most plausible story.", "Dwarkesh Patel", "What about the argument that the people who go to elite universities are people who are going to control things? If you can engineer a situation in which the peer effects in some particular direction are very strong at Harvard (maybe because the upper class is very liberal or woke), they make the underclass even more woke, and then it’s a reinforcing cycle after every generation of people who come into college. Then that still matters a lot, even though presumably somebody becomes more right-wing once they don’t go to Harvard because there are no peers there. But it doesn't matter. They're not going to be an elite, or it doesn't matter as much.", "Bryan Caplan", "It could be, although what we've seen is that we now just have very big gaps between elite opinion and mass opinion. Of course, it is a democracy. If you want to run for office, that is a reason to go and say, “Yeah, what is the actual common view here? Not just the view that is common among elites.” However, I will say that this is a topic that deserves a lot more study. Now the other thing to question is, “ Wouldn't there be peer effects even without college?” If elites didn't go to college and instead they went and did elite apprenticeships at top corporations instead, I think you'd still wind up getting a very similar elite subculture. I think that this kind of social segregation is very natural in every human society. Of course, you can see it under communism very strongly where it's like, “ I don't want my kid going and playing with a kid whose parents aren't in the communist party.” So every society has this kind of thing.", "Now, if you push the dynamics enough…. let's put it this way. If you were the prophet of the Mormon religion, what would be the very best thing for you to do to maximize the spread of Mormonism? It is not at all clear to me that trying to get all Mormons to go bring them young is a good strategy.", "Dwarkesh Patel", "I wonder if there are nonlinear dynamics to this.", "Bryan Caplan", "Yeah. Well, there's gotta be, right? But as soon as you're talking about nonlinear dynamics, those are truly hard to understand. So I would just say to keep a much more open mind about this, and if anyone is listening and wants to do research on this, that sounds cool, I'll read it.", "Dwarkesh Patel", "Right. I remember you saying that one of the things you're trying to do with your books is influence the common view of elite opinion. So in that sense, there are elite subcultures in every society, but they're not the same elite subcultures, and therefore you might care very much about which particular subculture it is.", "Bryan Caplan", "Notice that that's one where I'm taking it as a given that we have the current segregation, and I'm going to try to go and take advantage of it. But if it were a question of if I could change the dial of what kind of segregation we have, then it's much less clear.", "The Idiocy of Student Loan Forgiveness", "Dwarkesh Patel", "Student loan forgiveness. What is your reaction?", "Bryan Caplan", "Oh, give me a freaking break. This is one subject where I think it's very hard to find almost any economist, no matter how left-wing and progressive, who really wants to stick their necks out and defend this garbage. Look, it's a regressive transfer. Why then? Why is it that someone who is left-wing or progressive would go and favor it? Maybe it’s because people who have a lot of education and colleges are on our team, and we just want to go and help our team. Obviously, the forgiveness really means, “ We're going to go and transfer the cost of this debt from the elites that actually ran up the bill to the general population. ” Which includes, of course, a whole lot of people who did not go to college and did not get whatever premium that you got out of it. So there's that. In terms of efficiency, since the people have already gotten the education, you're not even “increasing the amount of education” if you really think that's good. The only margin that is really increasing education is how it's making people think, “ Well, maybe there'll be another round of debt forgiveness later on, so I'll rack up more debt. The actual true price of education is less than it seems to be. ” Although even there, you have to say, “ Huh, well, but could people knowing this and the great willingness to borrow actually wind up increasing the ban for college and raising tuition further?” There's good evidence for that. Not 100%, but still a substantial degree.", "Again, just to back up–– that can be my catchphrase [laughter]. So I have a book called The Case Against Education , and my view is much more extreme than that of almost any normal economist who opposes student loan debt forgiveness. I think that the real problem with education is that we have way too much of it. Most of it is very socially wasteful. What we're doing with student loan forgiveness is we're basically going and transferring money to people who wasted a lot of social resources. The story that you are on the slippery slope to free college for all is, in a way, the best argument in favor of it. If you thought that free college for all was a good idea, then this puts us on that slippery slope.", "It’s terrible because the real problem with education is that we just spend way too many years in school. It is generally not socially useful. The main reason why it's going on is that it’s a way of stamping people's foreheads, saying that they are better than their competitors, and to not throw their application in the trash. The more education we get, the more you need to not have your application thrown in the trash. Credential inflation. Since we're talking a lot about inflation these days, the central organizing idea of what's so wasteful about education in my book is credential inflation. When everyone has a college degree, nobody does, right? This analogy is very good. Can you make a country rich just by giving everybody a trillion dollars? You cannot. All that happens is you wind up raising prices, and you cause a lot of harm in the process. I say the same thing is going on with the multiplication of credentials.", "Dwarkesh Patel", "Let me ask you about that. Because I think for the last 10 years, the proportion of Americans who are getting college degrees hasn’t gone up, right? Doesn't the signaling theory imply that it should be going up as a credential gets diluted?", "Bryan Caplan", "So actually, if it doesn't go up, then it's not getting diluted. But here's the actual story and what's been going on during the last 10 years… I have a bunch of bets on this, actually, and I just won the first one. When you see that the share that's going to college is going down, that's counting community college, right? It's community college that has fallen a bit, which makes sense because the signal sent by community college is barely better than nothing, right? Except in a few disciplines like nursing, and a few occupations similar to that. But for four-year college attendance, it's actually continuing to rise. The bets that I have are all along the lines of “ Attendance in traditional four-year brick and mortar colleges will fall no more than 10%. ” I have a whole series of these bets, each with 10-year maturities. So I just won the first one, and I think I'm just going to win a whole bunch of other ones.", "Dwarkesh Patel", "Does your undefeated bet record make you more hesitant to take bets? Because getting the first loss and seeing 52-1 will just be a huge [Bryan interrupts]", "Bryan Caplan", "Yeah, I would be lying if I said it didn't change my emotions. I do have a record of 23 wins for 23 bets that have resolved. It is fun to be able to go and say that. To say I've got 29 out of 30 bets won wouldn't sound as good. At the same time, I still am totally willing to make bets. The thing about bets is that it's not like they just come fall into your lap. You really have to aggressively seek them out because hardly anyone wants to bet. At this point, you might think that some people would want to bet me just basically saying, “Well, if I beat him, I'll be the guy that beat him. And if I lose, then who's ever heard of me anyway? Who cares?” But even that doesn't really do very much. The bet that I'm likely to lose is a global warming bet with the standup economist, Yoram Bauman. So that's one where he gave me three to one odds initially. So I was expecting to lose. He very much enjoys running annual victory laps on Twitter without ever mentioning, “Yeah, well, you did give me odds because I wasn't saying that I was convinced. I was saying that I thought that other people were overconfident”. But that's his right to go and run his victory laps.", "Dwarkesh Patel", "You're like a UFC fighter who hasn't lost. I don't know if you know Khabib Nurmagomedov ? He retired before he lost a single fight, I believe. I think he said his reason was, “ Oh, my mother doesn't want me to fight anymore.” [laughs] But one wonders. Are you hopeful that right-wing governments are seeing education polarization and loan forgiveness as a transfer of wealth toward left-wing elites? Are you hopeful that they’ll implement education austerity?", "Bryan Caplan", "Barely. Just to back up, the big policy reform that I push in The Case Against Education is education austerity, which means spending less on education and making education less accessible. Making it less accessible, if I am correct, will be socially desirable, because it will basically mean that credential inflation will be reversed, and people will be able to get the jobs they were going to get anyway while spending fewer years of their lives in school learning stuff that is not very useful. The main reason that I'm skeptical about this is that even in states where the state government is very right-wing, it seems that the prestige of state university systems is sufficiently high that politicians generally don't want to challenge it.", "In Texas, we both were hanging out around the University of Texas campus, and the state capitol is just a 20-minute walk away. It seems like the governor of Texas does not want to have a throwdown with the president of the University of Texas and say, “ Hey, we're sick of this stuff .” Why not? Probably because the governor of Texas thinks that UT, with its great football team, is really popular and that he can't beat them. It’d be hard for him to say, “We're going to pass a bill saying that all the athletics of UT are fully funded, but we are going to go and get rid of the following departments.” It's hard to go and make that happen. The president of UT could probably get the football coach to say, “ We stand arm in arm, shoulder to shoulder with our fat studies department, ” or whatever. So that is the concern.", "Florida is a little bit different. It does seem like DeSantis is trying to go and at least make some symbolic efforts against overwhelming left-wing bias at the University of Florida. But as far as I know, he isn’t doing anything about their budgets, which I think they really care about. You can pass all the laws you want, but if you don't actually mess with their money, then I don't think they're going to care that much. The main change that I could plausibly see is defunding the least popular departments while saying we're going to keep the total budget of the school the same. So say, “ In the state budget, we are slashing the budgets of the English department, women's studies department, ethnic studies department, sociology, ” basically any subject that a normal voter would laugh at when you set when you pronounce the name of the department, maybe they could get away with that. In a way, it just seems too strategic and requires just too much attention from politicians to make them realize this. But, that's the easiest thing to see.", "Dwarkesh Patel", "This could include making them cosign the loan for student loans that they had to pay out of their endowments, right?", "Bryan Caplan", "Right. Again, that's one that's just so easy to admit to, demagogue, and say, “Oh, so what about this poor student who couldn't do it? Now no school wants to accept him because they don't want to be responsible for it.” If you go and read Lyndon Johnson's original speech in favor of student loans , it just takes the social desirability bias dial and turns it up to the absolute deafening maximum. I can't remember Lyndon Johnson's accent, but let's give him this accent. “I believe that in America, no student should ever be denied full access to the maximum opportunities of education merely because they were born in a family that was too poor to afford it. There is no price too high, no sacrifice too great. ” Oh, God. That's the kind of weaponized nuclear rhetoric that politicians will deploy to defend this stuff. Yeah, it's really what I'm up against. That stuff that is good doesn't sound good. I was turning Lyndon Johnson into Bill Clinton. I just realized they're the same Texan––", "Why Society is Becoming Mentally Ill", "Dwarkesh Patel", "Yeah, I might be giving that speech from the bathroom every day, I don't even know if you’ve heard the story [laughs]. Why do you think young people are getting more anxious and have a higher incidence of neuroticism overall?", "Bryan Caplan", "My first pass on this is always that it's an artifact of measurement to the medicalization of society. It's basically just measurement. By modern measures, there was no neuroticism , and there were no psychiatric problems 200 years ago because there were no psychiatrists. If you went back 200 years ago and measured it, you say, “ Well, the number of people in psychiatry offices is zero, so it doesn't exist here.” Then we go and expand it. We really went out of our way to make access super easy, to have a big center on every campus, to de-stigmatize, and at the same time to stigmatize the traditional subsidies for psychiatry like religion. Traditionally, when you have a problem, you go and talk to your priest. If you stigmatize that, that results in something like, “ Well, I can't talk to a priest. They're scumbags now, I can't talk to them. So who can I talk to? I can go to the counseling center. I can do that.” However, there are some measures where the level of neuroticism is not just measuring people going to a psychiatrist's office. We actually see this in suicide rates. Although there… I have a piece where I go over the last 60 or 70 years of suicide rates . It's really complicated, Dwarkesh.", "Dwarkesh Patel", "Oh, really?", "Bryan Caplan", "Yeah. So basically, suicide rates were falling from 1970-2000 and then have had a big rebound since then. If I remember, I think suicide rates now are pretty similar to what they were in the fifties. I'd have to go back and double-check those numbers, but that's what I remember. You’d have to have a really complicated theory, or you could just have a bunch of ad hoc theories . Maybe it was World War II and the trauma of that while also being married to a veteran with trauma that could mess women up and make them kill themselves. Then that changed, and then something else happened.", "So the Jonathan Haidt case of keeping kids from playing outside by infantilizing them or making them really anxious doesn't fit with anything from the fifties. You could argue, “Well, in the fifties, they were really good in terms of unsupervised play, but they were really bad in terms of something else.” I mean, when people talk about this stuff, what really strikes me is that I felt like I learned more from just going and looking at the time series as far back as it goes than from every person that pontificated about what's really going on. Just to realize the numbers are complicated; no obvious story fits. It would be really ideologically convenient for me to say feminism is leading women to kill themselves. But the numbers don't work. So that's wrong.", "Immigration & the Ultra-long Term", "Dwarkesh Patel", "How persistent are the harms of immigration restriction? If the world is getting wealthier by itself anyways, does it really matter if it'll take a few more decades than it would have? You could transfer them here now, or you could just wait for economic growth to do its thing there.", "Bryan Caplan", "Does it really matter if we miss a hundred trillion dollars during the time that people are poorer than they're probably ever going to be again? Yeah, I think it really does matter, I’ll go with that. Just to back up, in Open Borders, I go over what I consider to be the most powerful argument in favor of immigration. It all just comes down to this. We know for a fact, undeniably, that if you go and take a very poor worker from a poor country and move them to a rich country, almost overnight, their pay multiplies many times. This is something that you cannot deny while looking at the facts. So the question is, why can we make a Haitian suddenly earn 20 times as much money just by moving into Miami? He hasn't even learned English yet, but he's still making 20 times as much money as he was back in Haiti.", "The textbook economic explanation is we pay immigrants more in rich countries because their productivity is so much more than it was back home. Productivity is much higher in rich countries for everyone. Most of the reason why Haitians are poor is not that there's anything wrong with individual Haitians. Most of the reason is that there's something really wrong with Haiti. If we were to go and get deposited in Port de Prince, Dwarkesh, we'd be struggling to get out in existence as well. What's messed up is Haiti, not Haitians primarily. Now, this is basically undeniable. The part that is debatable is: is this scalable?", "We can make one Haitian vastly better off just by going and moving them to the U.S., and it pays for itself because he's more productive. Could we go and move a million Haitians? Yeah. Ten million? This is where people might start saying that maybe it's not really scalable. In the book, a lot of the argument is that, really, it's totally scalable, and this is just a massive missed opportunity where we really could go and rescue hundreds of millions of people, billions of people, from poverty in a short amount of time. We can essentially just fast forward from right now to this future that you're talking about. How valuable is this? Well, if you're someone who thinks on million-year timescales, then not so valuable. If you're someone who thinks on the timescale of dramatically improving the lives of billions of people over the next hundred years, then yeah, it's fantastic. On top of this, it’s worth pointing out that it’s almost certain that right now, we are missing some of the greatest human talents. They’re trapped in some poor village in India or China, and we'll never find out what their accomplishment is. If you think that we are one day going to beat death, it is quite likely that the guy that could have beaten death 5 or 10 years earlier is in some poor village in India or China. So by not allowing immigration, that person is not going to get to beat death. If we're going to beat death, we'll beat it eventually. But just to shave 5 or 10 years off of that, this matters a lot to me. Those 5 or 10 years really make a difference. If the technology just freezes people at the age they're at, I want to get frozen really quick because I'm going downhill.", "Dwarkesh, even for you, 5 or 10 years, it's a big difference. If you are an ultra-long-termist, then I guess that you could say Open Borders is not that big of a deal. But in terms of doable policy changes that will lead to the increase of human wealth of hundreds of trillions of dollars in this century, then it's the best I know of.", "Why Cowen’s Talent Scouting Strategy is Ludicrous", "Dwarkesh Patel", "Speaking of talent, Tyler Cowen and Daniel Gross have a new book about it. One of the implications the book makes is that if talent spotting is something that you can do pretty reliably with a 1500-word essay and a Zoom call, then doesn't that imply that college is not necessarily that much about signaling because you don't need four years of cognitively demanding pointless work? There's clearly a more convenient way to just get that signal for identifying talent.", "Bryan Caplan", "This goes back to a very long-running argument from Tyler. Tyler has a lot of objections to my book, The Case against Education , but his central one comes down to this: “ Look, Bryan, I hire people; you don't.” It’s basically, “ I hire people, you don't, and I will just tell you as a fact that I know after a couple of months whether a worker is good or not, and therefore signaling really cannot be very important. End of story.” Pulling rank. This is the argument that Tyler has right now. There are a few different responses to this. The one that is most pleasant and most flattering is to say, “All right, well look, Tyler, you are one employer in a million. You're fantastic. You have incredible capabilities. You can do this, but you're just one person. You're not involved in hiring most people, and most employers are nowhere near as good at you as spotting talent, so you're still wrong. ” That is sthe easiest thing to say. In terms of more fundamental arguments, I would say that the idea that you can spot talent with a Zoom call and a 1500-word essay is ludicrous. The essay can be forged, obviously. You could put them in a testing center, and even then, if a dream job hangs in the balance, people will figure out a way to cheat on that. Then in terms of the Zoom call…. Yeah, no. I have been hiring artists who are admittedly nefarious and unreliable. What I found is that even work product is not that good of a predictor because someone who really wants to get a job can do five great pages in a timely manner, but then they keep you waiting for years for what you really want out of them. So it's just not that easy. Not even close.", "I think the better argument is not saying that we can find talent with the Zoom call and the essay, but rather we can find talent by hiring people and watching them for a couple of months. That makes more sense. The problem with that is that it is incredibly expensive to go and hire people and watch them for a couple of months. You get a stack of applications, hundreds of people think, and what you are doing there is trying to figure out ways to say no as quickly as possible and narrow it down. You might do this knowing full well that there are three awesome people you've just thrown away. Because “Yeah, well, I threw 3 awesome people away, and I also threw away 294 terrible people, and I don't have any way of finding those other people. ” So I call this the diamonds in the rough problem, and I say that this is a lot of the reason why signaling matters so much; it's a way of getting out of the undifferentiated mass of people who may be good (who knows?), and into this better pool. Now, I do also say that another big error in Tyler's “I just watched him for a couple of months story” is that there is very strong evidence that, for multiple reasons, hardly any employer does the strategy of hire, watch, and then fire if you're disappointing. The simplest reason is maybe you're at the 45th percentile of expectations, so then it's like, “Well, he's kind of disappointing, but it's not worth going back to the drawing board again.” There are also a lot of social and emotional problems with firing people. People do not like firing, and then on top of it, of course, there are legal problems, which should not be discounted.", "Dwarkesh Patel", "Isn't this just what an internship is, though?", "Bryan Caplan", "Well, remember, when you apply for an internship, does everybody get the internship?", "Dwarkesh Patel", "True, but you can have a lower bar for the internship, so then it's like a call option on being able to hire them.", "Bryan Caplan", "Yes. A lot of what people are signaling is not just intelligence, it's not even work ethic; it's just sheer conformity. Here's the issue: you might ask, why can't I just take my Harvard acceptance letter and get hired by Goldman Sachs and say, “ Hey, look, Harvard said I was good enough. Harvard has a 98% five-year graduation rate. Come on, let's just start right now. ” That is so weird in our society for someone who gets into Harvard to say, “I don't want to go to college at all, and show up with this odd offer.” Goldman very reasonably could react with, “ Yeah, well, he got into Harvard, but this guy's a freak. We're worried about it.” So I think it's basically the same problem that you're talking about.", "Dwarkesh Patel", "Couldn't Goldman say, “ Okay, well, maybe you don't want to hire him full-time out, but, let's see if we can give him an internship?” In fact, internships are very common.", "Bryan Caplan", "Yes. Although again, you want to give the internships to people that are checking all the boxes. The person who is doing the weird thing, you're nervous about him. Now, by the way, I've multiple times been on panels where there's some business leader, and he'll say, “ In the business world today, we don't care about credentials. We only care about hard demonstrable skills. ” Then I always ask the same question: how many uncredentialed people have you personally hired for high-skilled jobs? “ Well, we haven't done any, but I read in the Wall Street Journal ,” Aha, you're acting as if you've got some firsthand experience. You're just repeating what you read in the freaking newspaper, which writes stories about stuff that is atypical. Of course, you don't write a story about something everybody knows about because that's familiar to people. You go and dredge up some weird platypus and then show everybody and say, huh, platypuses are taking over the ecosystem.", "Dwarkesh Patel", "Didn’t this happen to your podcast with Andreessen , by the way?", "Bryan Caplan", "That would be plausible that it would happen. He said, “We only hire on demonstrable skills,” and I asked him who he’s hired without credentials. Then what did he say?", "Dwarkesh Patel", "I don't remember. I don't know if you asked him directly, but that was his claim. I don't know if you followed up that way.", "Bryan Caplan", "Here's a good way of thinking about it in Computer Science. I have heard that top firms like Google sometimes hire contest winners. But when I ask people there, how many standardly credentialed employees do you have in your programming and how many contest winners without credentials? The results are, yeah, you have three contest winners and thousands of regular credential workers. So basically, you have to walk on water to get hired by these firms without having the regular credentials.", "Surprising Immigration Victories", "Dwarkesh Patel", "You recently traveled to Eastern Europe in the aftermath of the Ukraine war.", "Bryan Caplan", "I took my 12-year-old son there too. A lot of people were saying, “You're crazy. What are you doing? Why would you go there?” I was super glad that I did; it was an incredibly exciting trip. What happened actually was that my book was translated into three Eastern European languages, Polish, Hungarian, and Czech, and then I also spoke in Slovakia, where I had gotten a few different versions but Slovakians said, “Yeah, we can read Czech. Totally no problem. It's basically the same language.” I got to give talks in all four of those countries.", "Poland was especially exciting because you could see the pro-Ukrainian anti-Russian enthusiasm in the streets. I was by the train station in Krakow, and there was just a Polish guy just screaming, “Fuck Putin, glory to Ukraine!” And I was saying, is that guy drunk? He's not drunk. He's just a Polish person speaking his mind here. When you went to the train stations, they were packed with refugees. What I did not realize was that the refugees were in fantastic spirits because of the warm welcome of the Polish people; it was so strong that the people there were actually feeling good about the situation. Look at their faces– just imagine the stress of fleeing a war zone with your kids. That's what was going on. I also got to learn some amazing things when I was there. Since I am the author of Open Borders, you can definitely predict that I would say, “Oh, well, letting in a lot of refugees won't be a big deal,” but Poland increased its population by 10% in a month, and the country looked fine. No one there was complaining, and I could see why.", "Look, this is not ideology. This is me walking around and looking at stuff. Poland was able to go and do this. Why? Because where there's a will, there's a way. There can be a thousand refugees from a country you don't like. You put it on the news, and people say, oh, we just couldn't possibly absorb them. We're at our absolute breaking point. This is terrible. On the other hand, you can have millions of refugees come into your country, and if you like them and sympathize with them, then it's all hunky dory, and it's fine. Polish policy was also excellent. Ukrainians are allowed to work the day they show up. Normally what you do to refugees is say, “ Well, you can come here because you'll be dead if you stay in your own country, but we don't want you getting a job, ” because producing and contributing to society, as we all know, is a bad thing for human beings to do. Much better than we just keep you as a semi-prisoner on welfare for a few years. That sounds like a much better thing to do and is normal.", "Poland was doing the best thing where the day you show up, you can get a job there. You're totally legal to work. So this is a path toward having them become productive members of society. I mean, just to emphasize how amazing what they're doing in Poland is, this would be like the US took 30 million refugees in a month, 33 million refugees in a month. Americans freak out over 10,000. So you just realize how phony and bogus the complaints are. It really is just a matter of believing and seeing. If you think that refugees are bad and immigrants are bad, you will see bad things happening. On the other hand, if you're supportive and have a can-do attitude, then you'll say, “ Hey, this is completely doable.”", "Dwarkesh Patel", "I guess you're skeptical that Americans will have a can-do attitude about it.", "Bryan Caplan", "The can-do attitude mostly just comes down to: Are you going to get out of the way and let them do their thing? That's the real problem, right? It's not an argument against accepting them and letting them work. It's an argument that we're not going to do it and we'll do the wrong thing.", "Dwarkesh Patel", "In the Myth of the Rational Voter , you point out that a lot of times, what liberal economists do is they'll defend some policy based on the optimal implementation of it. But then, in effect, it’d be like arcane environmental regulations. I wonder if it's similar here.", "Bryan Caplan", "Well, that's why in Open Borders, I defend immigration as it really is. I don't go and say that we have to do a bunch of other things first. I say that it's fantastic right here, right now, in the real world. Of course, I do have that chapter on keyhole solutions where I try to meet people who disagree halfway. But really, what I learned in Poland is that my view of what's doable has expanded quite a bit. Previously, I think I would have said that maybe you can take in two or three percent repopulation a month without looking really bad. I'm also the kind of person who will say the train station is going to be full of human misery but still way better than trapping people in a war zone. I'm that kind of person. I'm someone who always asks, “ Compared to what?” .", "If someone’s here crying with their children in a train station, it's probably because back home, they would have been crying over a dead body instead. This is still a big improvement, and this is still actually what we should be doing. But in Poland, I didn't have to make any hard arguments. I could just walk through the train stations full of refugees, and they're happy, the kids are playing, and they’ve got their puppies, and there are dog feeding stations. It's like, “Wow, I don't have to go and convince myself through logic that this is the best thing that could happen or the least bad thing that can happen. I can just walk around and see people are happy and are adjusting to a new life.”", "Dwarkesh Patel", "How should decolonization have been done to increase the odds of a competent and free-market government?", "Bryan Caplan", "You’re also avoiding a total bloodbath. Let's not forget that.", "Dwarkesh Patel", "So what do you think? Are you of the opinion that it had to be done, or was it inevitable at some point? How could it have been done so that it had the optimal outcome?", "Bryan Caplan", "First of all, with really high credibility, this is where you say, “Look, here's the timetable. We are going to be partitioning India over the course of 20 years. We are staying there.” Even tying your hand saying, “Look, we are issuing a pile of government securities where you have to pay us money unless we go and just give up in the middle of the game.” Basically, you've got to pre-commit stating your goal: Our goal is going to be a peaceful, free rich India. We have a plan for how we're going to do this, and we're going to stick to it, and we are ready to lose a whole lot of our soldiers in order to do this, and then we are going to go and find people that we are going to very gradually transition power to. They have to be reasonable people. If these are people that are inciting pogroms, we are going to get rid of them. We are not going to tolerate that kind of thing here. This is going to be an orderly transition where all of her majesty's subjects can anticipate survival and a future where they are in peace.", "Now, again, probably a lot of people don't realize how badly the partition of India was botched in decolonization. The chaos makes it hard to get accurate numbers, but there are a lot of people saying that millions of people died in pogroms in the end. So as some of you say, “ Oh, maybe it was only 500 or 1000. ” That was a complete disaster. It came down to the British saying they were never going to leave until they did (really quickly). That's not what you do.", "Dwarkesh Patel", "What made the denazification and the US occupation of Japan successful?", "Bryan Caplan", "Because they started off by completely crushing their enemies. [laughs] The key thing is that in those occupations, there were two things that they were pushing: democracy and human rights. They pushed for human rights a lot more than democracy.  So basically, they said, “ Yes, we're going to have elections. ” As long as you are completely committed to the denazification of Germany, you can be elected, and you can have a little bit of power. Then we're going to very slowly devolve power on you while it remains completely clear that you have no Nazi sympathies, and none of this stuff is going to come back. So that is basically what happened: there was a complete crushing of what existed before, and then a rebuilding where democracy is a low priority.", "Is democracy yielding good results? Then we can turn the democracy dial up a little bit. The same thing happened in Japan. Really bad war criminals wind up getting sentences that are quite light overall. If I remember correctly, under a thousand German war criminals got executed by the US. Just think about how many probably Otto have gotten executed. A hundred thousand, maybe. So people who really had blood on their hands, people who ordered the deaths of innocent people when they could have just not done it without.", "There's a famous book called Hitler's Willing Executioners , saying that this was not primarily people murdering innocent people under duress where there's a gun at your head saying you shoot another person. What the author looked at was what happened to Germans who refused to participate, and hardly any of them did. It's called Hitler's willing executioners. Hardly any ethnic Germans refused to cooperate, even soldiers were punished. People who didn’t cooperate in the genocide were punished. Rather than saying, “ Oh, well, you know, we're going to have to transfer you then if you won't go and murder innocent people, what a jerk you're being. ” That's the kind of thing that should have been done. Here is the way that I think about it. You want to do it from a position of strength. You don't wait before the fanatics have the upper hand and there's blood in the streets and then say, Oh gee, what do we do now? You want to jump the gun when things are peaceful. This is where you say we have now, in this time of complete peace and harmony, worked out a plan for decolonization, and here's how it's going to work.", "You never want to let it make it look like your hand is being forced. You never want to let fanatics and bloodthirsty people have their status raised by standing up to you successfully. You want it to all be happening over their heads so that they just look like losers and crazy people, and then you find some people that want to work for a decent, peaceful society. Of course, there are people who are not complete mushheads like Gandhi. Just to stick my neck out, Gandhi was an apostle of nonviolence. Good for him, but he was also someone who was so dominated by wishful thinking and just trying to pretend like a pogrom shouldn’t be expected after the transition (the pogroms were reasonably expected). It was a lot of pie-in-the-sky nonsense that he preached. He himself is not a mass murderer. He's just a very, touchy-feely person who should be nowhere near any important decision. He might have been a good therapist, something like that. He was someone who had a lot of sympathy for other people, but he was not a reasonable person.", "Dwarkesh Patel", "That's really interesting because in your book on political demagoguery, you make the point that if we were to judge political leaders by normal moral standards, we would think they're monsters. The interesting thing with the Gandhi example is that maybe the people who are moral heroes in an ordinary context would make the worst politicians. Do you think there's any correlation between how moral somebody is as a human and how moral the government they lead is?", "Bryan Caplan", "They’re definitely not the worst people. The very worst people are people who come to power self-consciously and want to commit mass murder. Those are the worst. Now, if you have a very conventional moral view where the only thing that you judge people by is how caring they are, then yeah, I think those people make bad leaders. If you have an effective altruism point of view, however, where it's not just feeling a lot of caring emotions, but being very committed to thinking clearly about the best way to get good results, then I'll say those people are more heroic people in my view. Those people, I think actually, if any of them ever got power, would generally be good leaders. Again, EAs need some experience too, but basically, that is the right profile for a good leader: someone who is very caring, but at the same time very logical.", "Dwarkesh Patel", "Is there a correlation between conventional virtues like honesty and being kind towards strangers and being good in an effective altruist sense? Or are those just completely unrelated?", "Bryan Caplan", "Let's see. If it’s the sort of kindness towards strangers you meet firsthand, like being willing to feed a homeless guy, then yeah, sure. My guess is that most effective altruists are people who would like to give to homeless people, but just realize that it's not a good use of money. They sort of have to suppress their desire to give to the homeless. Now about the honesty one: this is one where a lot of politicians say that you have to be dishonest to get things done. I would say that there's this whole literature on credibility saying, “ No, what you really want is to be honest and in a very credible way such that when you say I'm going to do something in 20 years, people believe you.” Again, I think that's a lot of what you would have needed for effective decolonization. It would be to have people there of ironclad honesty. When they say, “ Look, I am not leaving just because you look like there are terrorist attacks, I'm willing to give up 100,000 British soldiers to be able to carry our attacks before we walk out.” When I say it, I mean it, and I think that is actually an important trait for leadership.", "Here's the interesting thing. I don't think Tyler will in any way see this as a negative. On one hand, he does like to impishly say, “ Oh, well, you have to be dishonest to get things done,” and so on, yet his leadership style is ultra-honest, and it's based upon everyone believing that something was going to be good for them. Afterward, they come away saying, “ Wow, that was real leadership.” People leave the deal years later feeling like things worked out. Traits like integrity and honesty; while we can easily come up with hypotheticals where they're bad in a leader, I think the real thing is in the old saying “underpromise and overdeliver,” right? Don't promise more than you're really willing to do and then try to exceed expectations. That, I would say, is a trait of a leader, but I'd say that's not dishonesty. If someone promises something and that they’ll do more for you, people say liar! So you can say “ No, it's not a lie. I wasn't a liar. I did what I said, and I also did more.”", "The Most Successful Revolution", "Dwarkesh Patel", "Now, I know, in general, you're not a fan of revolution, but if you had to choose: What is the best revolution, the most justified, or the one that had the best effects?", "Bryan Caplan", "So this is one where there's the cheesy thing of picking something that I don't really consider revolution and then saying other people do. So the collapse of communism in Eastern Europe, you could call it a revolution. I don't think it really was because it was just way too peaceful to count. If you'll count that, then that is very likely the best example of a revolution. But again, what you really want is a bloodbath. In terms of wars, the Korean Wars had the best record; it basically saved two-thirds of a country and turned out great. We sort of have a reasonable counterfactual for how awful it would have been in North Korea. But again, North Korean propaganda might claim it was a revolution, but that’s a civil war, not a revolution in terms of what would be the best example. Of course, a lot of people want to do the American Revolution, but I'm not a fan of that.", "You could argue that Britain abolished the slave trade earlier, but one interesting argument I've heard recently, not about this particular, but generally, is that the argument that the end of slavery was a lot more contingent; there really wasn't a strong reason for thinking that slavery had to end then. As you told me, it was pretty profitable at the time. So if you think it was just the combination of a bunch of random things at the time, one of which was the American Revolution, which led to the end of slavery, then that's not a strong argument.", "Bryan Caplan", "My view is that it was this British-based anti-slavery movement that was really key, which then spread to the colonies. It probably would have spread stronger and furthermore. The British have a much better record of getting rid of slavery peacefully. It would have cost so much more to do it in the US because there were so many slaves. So it's complicated. I'm still trying to come back to the question about the best revolutions. The other things you have are more coups than revolutions. The coup against the mother Muslim Brotherhood was not great , but still, it’s better than letting those fanatics take over. So let's see… There's the glorious revolution! My understanding is that it really was a real revolution and that people at the time tried to portray it as totally peaceful, but it actually really was not. The first Russian revolution against the Tsar in February, the one that replaced the Tsar with the first democratic government of Russia probably have worked out okay, if it hadn't been for Lenin. It didn't work out, but it had potential.", "Dwarkesh Patel", "How contingent do you think history is overall? Let's say that Lenin wasn't shipped back to Russia in World War One. Does the communist takeover not happen? Or was that kind of baked into the cake?", "Bryan Caplan", "I know the details really well for this question. I’ll bet that there’s a 99% chance that without Lenin, even with all the other Bolsheviks , it wouldn't have happened. Because I actually know the facts. So if you read Richard Pipes’ book about the Russian Revolution , the rest of the Bolshevik Party was planning on taking part of the provisional government. Then Lenin shows up and reads the riot act, and he has so much intellectual status.  I don't know what he had, but the whole group was against him. He was reading the riot act, and they all said, Yes sir, ‘’Lenin, we're ready for ready for revolution. ’ Without Lenin, that would just not have happened. You could say, “ Well, maybe some years later, something similar would have happened. ” If you know the facts, the whole revolution was based upon a tiny number of fanatics seizing power because they had 2000 guys following orders in a country where no one else was following orders. If you just waited a little while, whatever other forces would have rebuilt, and there would have been no hope for this tiny minority of lunatics to take over. So overall, I am a big believer in contingency.", "Of course, it does vary. Were things like economic growth going to happen one way or another starting in 1800? That, I think, was not so contingent if it didn't happen in Britain, because there were just too many things going on. There were a bunch of different scientific breakthroughs with obvious economic applications. There are a bunch of countries where they have a business class that's interested in making more money and trying these ideas out. I think that was something that you can say was quite inevitable. On the other hand, for almost anything involving major wars, I’d say almost all major wars could be avoided by one side giving in.", "Dwarkesh Patel", "The worry, then, is that this inadvertently creates a greater list of grievances for the next war.", "Bryan Caplan", "Yeah, well, that's what the hawks on both sides are always saying. Sometimes they're right. Sometimes they're wrong. They're wrong about half the time. That's basically an argument that is true sometimes but is not at all reliable. Again, often what happens when you give in is people say, “Oh, well, I thought these people were completely unreasonable and that we could never make a deal with them, but it turns out they're not so bad. So we can now de-escalate and get back to peace.” That happens, too. Then people say, “You couldn't do that with Hitler .” Yeah, I know you couldn't do that with Hitler! Hitler was terrible. Just to say the most controversial thing on this podcast. Hitler was not a person that you could negotiate with any long-term success. But hardly any world leaders are Hitler; almost all of them actually can be negotiated with. Often what they want is so trivial because politics is so based upon demagoguery, which means that small symbolic concessions are often all they need to go and thump their chests and say, “Oh, I am a great leader.” I think about how Indonesia, for 30 years, refused to give up East Timor. It's one insignificant half of one insignificant island. But for all these decades, they're thumping their chests and saying, “ We couldn't possibly do that. No, that will lead to total collapse. ” It's endless nonsense. “ All right, fine. We'll give it away. We'll give it away. ” Now it wasn't good for East Timor, the whole thing was a disaster, but it still probably is pretty bad.", "But there really is the case, as everybody knows in real life, that many conflicts can be avoided by giving another person what they want. If you say that “ They'll just escalate their demands infinitely,” there are a few people like that. However, most people do not infinitely escalate their demands. Instead, most people will either say, “ Oh, you gave me what I wanted, good! End of story. ” Or maybe they'll go and periodically tax you with another demand, which is annoying but is still much better than losing a friend or a contact.", "Dwarkesh Patel", "Given the irrationality and demagoguery in the political system. Why is it the case that the society we live in is relatively free, peaceful, and prosperous? I mean, if the average person is a National Socialist (aka. Nazi)?", "Bryan Caplan", "I said moderate National Socialist Dwarkesh. That's what I said. So there are a few things going on. The first thing is just to remember, “ Usually, we don't have this. ” The norm is not peaceful and prosperous societies throughout human history. The norm is impoverished and war-prone societies. Always keep that in mind if you're saying, “ Well, gee, but things are okay now .” I often think of this as the ‘Look out the window test’ as in, “Hey, Dwarkesh, is it on fire out there?” We're not going to turn the camera, but I think that listeners will believe that it is not on fire.", "Dwarkesh Patel", "How else do you think we're getting this nice lighting? [laughs] Yeah, it's not on fire. So there's that, but still, there is the interesting question: what's going on with the exceptions?", "Bryan Caplan", "So one is I think that rich, prosperous people and societies generally have less crazy electorates. The political ideology of the society is just not as terrible as in other places. Well, it seems pretty bad, but there are still not many people saying that we want to go and murder half the population. When you go to other societies, there are actually people like that.  A friend of mine was in India, and he actually saw a pro ‘Nuke Pakistan’ rally. I assume this is not normal in India. Have you ever seen a pro ‘Nuke Pakistan’ rally?", "Dwarkesh Patel", "No, but the nationalists can get pretty crazy over there with the celebrating.", "Bryan Caplan", "Yes, but NUKE Pakistan. Preemptively nuke a nuclear power. What do you think is going to happen? We don’t have that kind of just true fanatical sociopathic bloodthirsty, horrible stuff. It's common in many societies, but it seems to be quite reduced in better-functioning societies. So that's one thing. In these better-functioning societies, people's political views are not terrible; if you propose something awful, even normal people will say, “ No, I don't think it’s such a good idea to go and murder all the billionaires. Maybe we can tax them at 90%, but murder them? That's too far. ” Whereas in most societies, you say that stuff, and they cackle with glee. “Let's strangle the last billionaire in the intestines of the last right-wing talk show radio host. Hahaha.” That is more of the human attitude… just think about how kids are. The way that the kids are shows us what adults are feeling, but hiding. Kids get angry, and they want blood. So more effective societies are ones where people are suppressing these atavistic desires to really just turn society into a total blood bath. So there's that.", "I do think public opinion obviously matters a lot in democracies. Even in most dictatorships, public opinion matters a lot. Dictators demagogue, right? They try to go and win people over normally. It's easier because the people that would have been their rivals are dead or imprisoned or terrified. So that sort of lets you weaponize the demagoguery by saying, “ I'm the only one that gets to say I'm anointed by God.” So there's that kind of thing, but even dictators generally want to be liked by their population. It makes ruling easier. Then you only have to terrify 10% of the population into obedience instead of 90%. Having a better leadership class probably matters too. There's probably a high correlation between the quality of leaders and the quality of the public, but not in every case. So that's another thing to look for. Something else is also just having constructive interest groups.", "For example, I have a pet theory. I'm working on a book on housing regulation. My pet theory is that if we're not from lobbying from developers, basically zero things would be built in America because building things has almost no demagogic appeal. There's almost no one who emotionally gets a tear in their eye when they see a skyscraper go up or a new housing development. It’s very rare to be enthused about building stuff. Yet we need places to live. We need places to work. We need places to shop. On the other hand, there's a bunch of angry people whenever we try to build something, we call them nimbies who have an endless series of complaints. Traffic, parking, quality, the character of the neighborhood, pollution, and on and on, they make every possible complaint. So I think the main reason anything gets built is that there are developers who probably do not really sincerely believe that they're heroes, but just come and say, “Hey, well, we can make our money building stuff. We provide jobs and income through the community; let us build stuff. Please, please, please, please, please, please, please.” That is, I think, the main reason why. So all of you have constructive interest groups (economists mostly talk about interest groups as being bad), but lobbying has very positive effects overall for housing and for immigration. I think a lot of the main reason why we have as much immigration as we do is that there are a bunch of corporations pushing for it.", "Dwarkesh Patel", "As far as developers go, why is it the case that they're not more powerful than they are?", "Bryan Caplan", "I mean, they're typically like Mancur Olson ’s concentrated interest, right? Because Mancer was mostly wrong.", "Dwarkesh Patel", "Oh, really? Say more.", "Bryan Caplan", "Public opinion is much more important for policy than interest groups. Contrary to what Mancer Olson said, if you really look at what interest groups do normally, they're trying to work in the fine print. Interest groups do not go and try to pass some overall changes in US tax law. They say, “Look, that's going to be determined by public opinion. That's not the kind of thing I can affect. Maybe I can go and get a sentence change somewhere on page 1037 of the tax code. Perhaps I could get that.” So I say that most of the policies we have are ones that are supported by the general public, and you really have to look at details to see cases where interest groups are getting something that is actually unpopular with the public. So even with things as seemingly straightforward as farm subsidies, economists say, “ Oh well, obviously most people don't want to go and pay those, but farmers get it.” Yeah, think again. We look at public opinion, and farm subsidies are actually very popular. If you ask people that are not in farm states, why do you want farm subsidies? Usually, they just give very pro-social reasons like, I want to make sure there's enough food. So you say, “ Well, that's stupid. We only subsidize a handful of agricultural products, but they're all available ” But that's already one step deeper than most voters ever have done or ever will do.", "Dwarkesh Patel", "You know, Charles Mann , the author of The Wizard and the Prophet ?", "Bryan Caplan", "Yeah.", "Dwarkesh Patel", "I had him on recently, and we actually talked about this because he's concerned about water and food shortages. Especially when they're used in inefficient ways where they give it to cattle instead of consuming it directly. So I pointed out, “Isn't there an obvious free market solution? The prices will rise if you're using it in an inefficient way, and then it'll just go to people who need it the most.” His claim was that, ideally, this would be the case, but the reality is that if there's something that is physically necessary for people to survive, there's just not going to be a political will to put in actual pricing regulations for water usage, for example, which would solve a lot of water shortage problems.", "Bryan Caplan", "Yeah. It seems like a silly argument because we don't have to go and raise the price of water up to the level where a few people die of thirst. You could go and multiply the price by a factor of 10 or 100, and people will still be drinking all the water that they want. They'll just be taking quicker showers, or farming will be happening in different parts of the country… that kind of thing.", "Dwarkesh Patel", "Yeah. But some of the problems include how in some regions in China where they have done this , the cost of water is––", "Bryan Caplan", "Okay, I was thinking about the US.", "Dwarkesh Patel", "––a significant fraction of their income.", "Bryan Caplan", "Even there though, can it really be that there's somewhere in China so poor that the main use of water is for drinking? Or are these farmers?", "Dwarkesh Patel", "No, no. These are actual urban residents and a quarter or a tenth of their monthly income just goes towards paying for the water.", "Bryan Caplan", "But that's different from drinking water, right? No matter how poor you are, normally, only a very tiny fraction of the water that you use is drunk. Most of it would be for bathing or for washing clothes, a little bit for cooking, and probably even more for cooking than for actual drinking. I'm from California. I remember being a kid and being told, “Only three minutes in the shower. Come on, chop-chop!”", "Dwarkesh Patel", "Even in India, yeah. That's also a thing. Hanania had an essay recently where the titled was, Why I care more about pronouns than genocide.", "Bryan Caplan", "He writes good titles, I’ll give him that. [laughs]", "Dwarkesh Patel", "He talks about the irrational system of one part of his mind that cares about things that are objectively less important if you actually thought about them rationally. One of those things is pronouns in comparison to genocide. What is your irrational system’s equivalent of one thing that you recognize is not that important in the grand scheme of things, but just bothers you to no end?", "Bryan Caplan", "People often say, “Well, you shouldn't be talking to that guy. He's a terrible person. He said certain things,” and I'll say, “ Yeah, but he was really nice to me. ” [laughs] So, on the positive level, someone having good manners and being friendly with me goes a really long way. I mean, honestly, this isn't something that I'm trying to overcome. The thing I'm trying to overcome is being really friendly to people who are not friendly to me. So yeah, the main thing is when people are just very personally rude and unpleasant. I still think intellectually that the best thing to do is to turn the other cheek and try to, if not win them over, to at least win observers over with how much more reasonable and fair I am. But my instinctive reaction is just to yell back at them. I will say that that instinctive reaction just gets weaker and weaker over time because I am someone who is so uncomfortable with anger. Part of it is, it's not really my personality, but a lot of it is just the feeling of if I ever got angry, I just don't know where I would draw the line. I'm worried I would completely flip out. So I'm too concerned. If I started yelling at this person, I probably wouldn't just give him one or two cutting insults. I'd probably be screaming at him like a lunatic. So probably I better just keep on the sunny side of life and not even try to get angry because I'm just not good at it.", "Dwarkesh Patel", "You know that line from the Avengers where Captain America asks the Hulk...", "Bryan Caplan", "Oh yeah, yeah. Actually, I do. Yes. “I'm always angry.”", "Dwarkesh Patel", "Right.", "Bryan Caplan", "I'm not always angry, not even close. But I mean, honestly, I'll say there's almost only one thing that really makes me angry, and that's people being angry. So I do have secondary anger, but I have very little primary anger.", "Anarcho-Capitalism is the Ultimate Government", "Dwarkesh Patel", "What kind of government would you implement if you had a zero discount rate? So if you're a strong longtermist?", "Bryan Caplan", "So since this is The Lunar Society, I'll stick my neck out for anarcho-capitalism and say that this is really the best system if we can figure out a way of doing it. There's no government like no government. I realize I may be completely blowing up all of my credibility, but you can just go and Google what I have to say about it. People sometimes ask, “Bryan, are you an anarchist?” And I'll say, “ Well, not the crazy kind . ” What I mean by this is I'm not someone who thinks that if you just pressed a button and got rid of the government, things will be good. I think it'd be a total disaster. Rather, what I think is that there is another equilibrium that is totally doable if people realize that it's totally doable. This equilibrium is one where we actually have competing police, competing legal systems, and competing court systems. It's one where if I had an hour, I could not convince anyone that disagrees. But I believe if you gave me an hour, I can convince you that it's not crazy. Which is what I actually do whenever I talk about this stuff. I say, “ Look, I have a really radical idea. I couldn't possibly convince a reasonable person of it in an hour, so I'm not going to try. What I'm going to try to do is convince you in an hour that this view, though you still will think it's wrong, is not crazy.”", "We don't have an hour to talk about it, but I think that this is the best alternative, especially for the long term. If we could get to this equilibrium, it's the best equilibrium to be in. It's one where it basically once and for all solves a whole lot of problems with international war. It diffuses nationalism. It's something that does a lot to take care of a lot of root causes of human problems. It is one where basically it dethrones demagogues, right? But there's always a place for people like that. They'll be running cults. They'll be involved in religion. They'll be pundits. But this will be a world where there's no longer any government you can get your hands on to go and cause horrible problems for the world. So it's one where the demagogues will sort of have lost their main line of employment and will have to get, if not exactly a real job, then at least a job that doesn't involve mass murder.", "Billionaires Deserve their Wealth", "Dwarkesh Patel", "Scott Alexander had a post last night making the point that he's in favor of more taxes on billionaires because even though Jeff Bezos has created a lot of consumer surplus and he certainly hasn't absorbed all of it, it's not the case that the rewards have to be that high to get Amazon built. Somebody would have ended up building Amazon anyways, even if the rewards were slightly lower. So Jeff Bezos himself is not that counterfactually responsible for Amazon. What do you make of that argument?", "Bryan Caplan", "So two things. One is in economics, we have something called tournament theory, and this says that it can be extremely socially valuable to go and have seemingly unreasonably large rewards for people that do something useful because it doesn't just incentivize the winner, it incentivizes all the potential winners. So billionaires are not just an inspiration to each other. It's not just, “ Oh, I can get to be a billionaire, I'll do this thing and make the money. ” It is something that actually fosters a whole culture of entrepreneurship. I mean, again, we've been hanging out in Austin all over there. There's a whole bunch of people who are never going to be billionaires, Dwarkesh. I've told people, “Will Dwarkesh ever be a billionaire? Probably not, but like 2%. Dwarkesh is just a mover and a shaker! ” Where in India was your family from?", "Dwarkesh Patel", "Gujarat.", "Bryan Caplan", "Is that a big city? Is it a small town, or what even was it?", "Dwarkesh Patel", "Yeah, it was a big city.", "Bryan Caplan", "Okay. So you probably would have gone to IIT or something like that. But imagine if…", "Dwarkesh Patel", "I don't know if I could have made it to IIT.", "Bryan Caplan", "Imagine if your family was out in some rural village, and it's the Indira Gandhi era, right? You don't hear about dot com billionaires or anything. In terms of inspiring a generation, I think the billionaires are inspiring a generation of movers and shakers. If they are, and if their earnings were greatly taxed, I think this would really put a dent in it. I think anytime people speak ill of them, that's putting a dent in it. If you want billionaires to make less money, praise them to the skies so that more people enter, and it drives down the rewards for doing what they do. So this tournament theory does make a lot of sense. This is a story about, “ Why do you pay the CEO so much more than all the next level down? Is he really that much better than the second-best guy? ” Look, this isn't just incentivizing the CEO; it's incentivizing everyone who's there. Who could plausibly tell themselves, “Maybe one day me.” So there's that. The other thing is actually going back to the historical contingency. I think business history is less historically contingent, but we actually do have a lot of evidence that the quality of entrepreneurship and management varies a lot from country to country, which I think does actually mean that for the really big businesses. It's not totally clear that it would have happened anyway.", "Now, if you broaden it and just ask, “ Well, would eCommerce have happened? ” Yes, eCommerce would have happened. Would there be one store that was bigger than the others? Yeah, but is it possible that the second-best thing to Amazon would have been 1/10th as good? That actually is not crazy. I don't know. But to go and just say, “We know that that's not true,” seems pretty dogmatic to me. Again, what’s also striking is what is the second best thing to Amazon. You can say it's a natural monopoly, not true in any other retail that I know of.", "So people talk about Alibaba , and I tried looking at that one. Isn’t this Chinese? Shouldn't they be catering to their market? It's the English language version of the site. Shouldn't they be trying to make me happy? So that is a case where it doesn't seem like it makes any economic sense; it's a natural monopoly based upon past experience. Think about cars. We got the big three (Germany, Japan, US). Why aren't there three things like Amazon? Maybe it really is the case that no one but Jeff Bezos knew how to do it, especially when you realize part of knowing how to do something is knowing how to assemble a team. It's so easy to say, “You didn't do it, and it was your team that did it.” I made my team. That's what a leader does! They take people that are good in themselves, and they fuse them together, and they make them great, and maybe it’s self-serving, but it's plausible.", "Dwarkesh Patel", "That makes a lot of sense. Well, Bryan, I want to thank you for giving me so much of your time. I especially want to thank you for being the first guest on my podcast. We've gone full circle with the third episode, but it would not have been possible at all without you.", "Bryan Caplan Totally awesome. And yes, so the books that we've been talking about are both available on Amazon. So there is How Evil Are Politicians?: Essays on Demagoguery. Then there is Don't Be a Feminist, Essays on Genuine Justice . By the time you get the podcast, both books will be available. They're really cheap, only 12 bucks, and I have not raised the price despite inflation! I've been thinking about why not and the economic theory behind that. The eBook is just $9.99. These books are both collections of my very best essays from 2005 to 2022, but Don't Be a Feminist has a totally all-new lead essay, one that, for years, I was too nervous to write. Then as I watched my daughter growing up, I felt something along the lines of “ No, I've got to write this essay. I'm going to do it for her.” The actual first essay is called Don't Be a Feminist, A Letter to My Daughter. That is how I frame it. This is not an angry essay. As I said, I'm not an angry person. I'm not mad at feminists. Rather, I want to especially help my daughter, but anyone who's in the same boat as her, I would be thrilled to go and help them as well.", "Bryan Caplan", "So like I said, this is not a typical lawyerly book where all I do is try to come up with as many arguments in my favor as I can and ignore everything that goes against me. This is one where I'm really trying to grapple with the truth. I can honestly say this: my dream is not to upset any reader. In my dream world, everyone on earth would read my stuff, and everyone would be happy after reading it. Everyone would be smiling. Everyone would be feeling grateful. Look, I haven't really won until I have turned every enemy into a friend. That may seem quixotic, but that is what I am trying to do. That's what's on my mind. Of course, you've got to first admit that there's disagreement before you can begin trying to change someone's mind and make them feel good about it, but that really is my dream.", "Dwarkesh Patel", "Anybody who has read Bryan’s books or met him can confirm that the books and the arguments in them are very good, and they also obviously come from a place of kindness and understanding about the other person's position.", "Bryan Caplan", "Yeah. We all have problems, Dwarkesh. We are, we are all imperfect, flawed human beings, but we must rise above it.", "Dwarkesh Patel", "I highly recommend the books. Bryan, thanks so much for being on the Lunar Society.", "Bryan Caplan", "Thanks for coming out here. Now, this is the first time that I've actually had you right in my office during an interview. So as great as Dwarkesh is over zoom, he's even better in real life. All people must try to meet Dwarkesh, he’s a cool guy, a really positive person. Thank you, buddy.", "Dwarkesh Patel", "Thank you, Bryan." ]
[ "https://www.amazon.com/Dont-Be-Feminist-Genuine-Justice/dp/B0BD3DFMMH", "https://nav.al/schelling-point", "https://openborders.info/keyhole-solutions/", "https://www.amazon.com/Open-Borders-Science-Ethics-Immigration/dp/1250316960", "https://www.lp.org/", "https://en.wikipedia.org/wiki/Female_infanticide#:~:text=Female%20infanticide%20is%20the%20deliberate,as%20China%2C%20India%20and%20Pakistan.", "https://en.wikipedia.org/wiki/Overton_window", "https://en.wikipedia.org/wiki/Alex_Tabarrok", "https://www.apa.org/topics/lgbtq/transgender#:~:text=Transgender%20is%20an%20umbrella%20term,they%20were%20assigned%20at%20birth.", "https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814", "https://www.nber.org/papers/w14969", "https://fordschool.umich.edu/faculty/justin-wolfers", "https://twitter.com/hashtag/metoo?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Ehashtag", "https://en.wikipedia.org/wiki/Foot_binding#:~:text=Foot%20binding%2C%20or%20footbinding%2C%20was,were%20known%20as%20lotus%20shoes.", "https://richardhanania.substack.com/p/womens-tears-win-in-the-marketplace", "https://richardhanania.substack.com/p/womens-tears-win-in-the-marketplace", "https://www.econlib.org/archives/2014/07/endogenous_sexi.html", "https://www.econlib.org/fear-of-mentoring/", "https://www.econlib.org/archives/2009/04/stocks_flows_an.html", "https://youtu.be/tr7UK12SHns", "https://www.youtube.com/watch?v=xrZWjcH2Pk8", "https://www.youtube.com/watch?v=xrZWjcH2Pk8", "https://mason.gmu.edu/~rhanson/home.html", "https://www.cnn.com/2022/08/03/politics/affirmative-action-supreme-court-harvard-north-carolina/index.html#:~:text=(CNN)%20The%20Supreme%20Court%20said,student's%20race%20when%20deciding%20which", "https://www.upi.com/Top_News/US/2022/08/01/82-companies-support-Harvard-UNC-affirmative-action-cases-Supreme-Court/3171659392575/", "https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737", "https://www.amazon.com/Case-against-Education-System-Waste/dp/0691174652", "https://en.wikipedia.org/wiki/Yoram_Bauman", "https://en.wikipedia.org/wiki/Khabib_Nurmagomedov", "https://www.amazon.com/Case-against-Education-System-Waste/dp/0691174652", "https://en.wikipedia.org/wiki/Ron_DeSantis", "https://www.presidency.ucsb.edu/documents/remarks-southwest-texas-state-college-upon-signing-the-higher-education-act-1965", "https://www.verywellmind.com/how-neuroticism-affects-your-behavior-4782188#:~:text=Neuroticism%20is%20a%20trait%20that,1", "https://www.econlib.org/archives/2009/06/an_unanswered_q.html", "https://en.wikipedia.org/wiki/Ad_hoc_hypothesis#:~:text=In%20science%20and%20philosophy%2C%20an,theory%20in%20its%20unmodified%20form.", "https://en.wikipedia.org/wiki/Jonathan_Haidt", "https://www.amazon.com/Open-Borders-Science-Ethics-Immigration/dp/1250316960", "https://www.amazon.com/Open-Borders-Science-Ethics-Immigration/dp/1250316960", "https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814", "https://www.amazon.com/Case-against-Education-System-Waste/dp/0691174652", "https://a16z.com/author/bryan-caplan/", "https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737", "https://openborders.info/keyhole-solutions/", "https://www.amazon.com/Hitlers-Willing-Executioners-Ordinary-Holocaust/dp/0679772685", "https://www.amazon.com/How-Evil-Are-Politicians-Demagoguery/dp/B09YQGKBXH", "https://www.effectivealtruism.org/", "https://www.brookings.edu/wp-content/uploads/2016/07/Egypt_Brooke-FINALE.pdf", "https://www.britannica.com/topic/Bolshevik", "https://www.amazon.com/Concise-History-Russian-Revolution/dp/0679745440", "https://www.yadvashem.org/odot_pdf/microsoft%20word%20-%205941.pdf", "https://en.wiktionary.org/wiki/nimby#:~:text=nimby%20(plural%20nimbies%20or%20nimbys,especially%20in%20public%20policy%20debate.", "https://en.wikipedia.org/wiki/The_Logic_of_Collective_Action", "https://en.wikipedia.org/wiki/Charles_C._Mann", "https://www.amazon.com/Wizard-Prophet-Remarkable-Scientists-Tomorrows/dp/0307961699", "https://www.foreignaffairs.com/china/chinas-growing-water-crisis#:~:text=NORTHERN%20CRISIS&text=In%20parts%20of%20North%20China,aquifers'%20potential%20for%20future%20recharge.", "https://richardhanania.substack.com/p/why-do-i-hate-pronouns-more-than", "https://en.wikipedia.org/wiki/Anarcho-capitalism", "https://astralcodexten.substack.com/p/billionaires-surplus-and-replaceability", "https://en.wikipedia.org/wiki/Indira_Gandhi", "https://www.googleadservices.com/pagead/aclk?sa=L&ai=DChcSEwjDro2Frrf6AhX8gIMHHTnKCekYABAAGgJlZg&ohost=www.google.com&cid=CAESa-D2F7xKv8cnKYH7Lup8l-vyIKORT3jv4Erf-gRn6nUDBCKbC0tebAr68SrxC7ScLK0mmuXI3RLeuazXZCGAAIgKc_5_ovKjNcDCDGo6tbtPPP8Cd7CAkrhXgxGXQRd5R8r3K8pcwYnho7Pj&sig=AOD64_2f7byYnkYUljm0HM1DZdIXRQKZgQ&q&adurl&ved=2ahUKEwiyhoWFrrf6AhWI8rsIHUc0D1YQ0Qx6BAgEEAE", "https://www.amazon.com/How-Evil-Are-Politicians-Demagoguery/dp/B09YQGKBXH", "https://www.amazon.com/Dont-Be-Feminist-Genuine-Justice/dp/B0BD3DFMMH" ]
https://www.dwarkesh.com/p/carl-shulman
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
[ "(00:01:32) - Intelligence Explosion", "Dwarkesh Patel 00:01:32", "Today I have the pleasure of speaking with Carl Shulman. Many of my former guests, and this is not an exaggeration, have told me that a lot of their biggest ideas have come directly from Carl especially when it has to do with the intelligence explosion and its impacts. So I decided to go directly to the source and we have Carl today on the podcast. He keeps a super low profile but is one of the most interesting intellectuals I've ever encountered and this is actually his second podcast ever. We're going to go deep into the heart of many of the most important ideas that are circulating right now directly from the source. Carl is also an advisor to the Open Philanthropy project which is one of the biggest funders on causes having to do with AI and its risks, not to mention global health and well being. And he is a research associate at the Future of Humanity Institute at Oxford. So Carl, it's a huge pleasure to have you on the podcast. Thanks for coming.", "Carl Shulman 00:02:30", "Thank you Dwarkesh. I've enjoyed seeing some of your episodes recently and I'm glad to be on the show.", "Dwarkesh Patel 00:02:36", "Excellent, let's talk about AI. Before we get into the details, give me the big picture explanation of the feedback loops and just general dynamics that would start when you have something that is approaching human-level intelligence.", "Carl Shulman 00:02:52", "The way to think about it is — we have a process now where humans are developing new computer chips, new software, running larger training runs, and it takes a lot of work to keep Moore's law chugging (while it was, it's slowing down now). And it takes a lot of work to develop things like transformers, to develop a lot of the improvements to AI neural networks. The core method that I want to highlight on this podcast, and which I think is underappreciated, is the idea of input-output curves. We can look at the increasing difficulty of improving chips and sure, each time you double the performance of computers it’s harder and as we approach physical limits eventually it becomes impossible. But how much harder? There's a paper called “ Are Ideas Getting Harder to Find? \" that was published a few years ago. 10 years ago at MIRI, I did an early version of this analysis using data mainly from Intel and the large semiconductor fabricators. In this paper they cover a period where the productivity of computing went up a million fold, so you could get a million times the computing operations per second per dollar, a big change but it got harder. The amount of investment and the labor force required to make those continuing advancements went up and up and up. It went up 18 fold over that period. Some take this to say — “Oh, diminishing returns. Things are just getting harder and harder and so that will be the end of progress eventually.” However in a world where AI is doing the work, that doubling of computing performance, translates pretty directly to a doubling or better of the effective labor supply. That is, if when we had that million-fold compute increase we used it to run artificial intelligences who would replace human scientists and engineers, then the 18x increase in the labor demands of the industry would be trivial. We're getting more than one doubling of the effective labor supply than we need for each doubling of the labor requirement and in that data set, it's over four. So when we double compute we need somewhat more researchers but a lot less than twice as many. We use up some of those doublings of compute on the increasing difficulty of further research, but most of them are left to expedite the process. So if you double your labor force, that's enough to get several doublings of compute. You use up one of them on meeting the increased demands from diminishing returns. The others can be used to accelerate the process so you have your first doubling take however many months, your next doubling can take a smaller fraction of that, the next doubling less and so on. At least in so far as the outputs you're generating, compute for AI in this story, are able to serve the function of the necessary inputs. If there are other inputs that you need eventually those become a bottleneck and you wind up more restricted on this.", "Dwarkesh Patel 00:06:55", "Got it. The bloom paper said there was a 35% increase in transistor density and there was a 7% increase per year in the number of researchers required to sustain that pace.", "Carl Shulman 00:07:11", "Something in the vicinity, yeah. Four to five doublings of compute per doubling of labor inputs.", "Dwarkesh Patel 00:07:20", "I guess there's a lot of questions you can delve into in terms of whether you would expect a similar scale with AI and whether it makes sense to think of AI as a population of researchers that keeps growing with compute itself. Actually, let's go there. Can you explain the intuition that compute is a good proxy for the number of AI researchers so to speak?", "Carl Shulman 00:07:41", "So far I've talked about hardware as an initial example because we had good data about a past period. You can also make improvements on the software side and when we think about an intelligence explosion that can include — AI is doing work on making hardware better, making better software, making more hardware. But the basic idea for the hardware is especially simple in that if you have an AI worker that can substitute for a human, if you have twice as many computers you can run two separate instances of them and then they can do two different jobs, manage two different machines, work on two different design problems. Now you can get more gains than just what you would get by having two instances. We get improvements from using some of our compute not just to run more instances of the existing AI, but to train larger AIs. There's hardware technology, how much you can get per dollar you spend on hardware and there's software technology and the software can be copied freely. So if you've got the software it doesn't necessarily make that much sense to say that — “Oh, we've got you a hundred Microsoft Windows.” You can make as many copies as you need for whatever Microsoft will charge you. But for hardware, it’s different. It matters how much we actually spend on the hardware at a given price. And if we look at the changes that have been driving AI recently, that is the thing that is really off-trend. We are spending tremendously more money on computer hardware for training big AI models.", "Dwarkesh Patel 00:09:26", "Okay so there's the investment in hardware, there's the hardware technology itself, and there's the software progress itself. The AI is getting better because we're spending more money on it because our hardware itself is getting better over time and because we're developing better models or better adjustments to those models. Where is the loop here?", "Carl Shulman 00:09:48", "The work involved in designing new hardware and software is being done by people now. They use computer tools to assist them, but computer time is not the primary cost for NVIDIA designing chips, for TSMC producing them, or for ASML making lithography equipment to serve the TSMC fabs. And even in AI software research that has become quite compute intensive we're still in the range where at a place like DeepMind salaries were still larger than compute for the experiments. Although more recently tremendously more of the expenditures were on compute relative to salaries. If you take all the work that's being done by those humans, there's like low tens of thousands of people working at Nvidia designing GPUs specialized for AI. There's more than 70,000 people at TSMC which is the leading producer of cutting-edge chips. There's a lot of additional people at companies like ASML that supply them with the tools they need and then a company like DeepMind, I think from their public filings, they recently had a thousand people. OpenAI is a few hundred people. Anthropic is less. If you add up things like Facebook AI research, Google Brain, other R&D, you get thousands or tens of thousands of people who are working on AI research.", "We would want to zoom in on those who are developing new methods rather than narrow applications. So inventing the transformer definitely counts but optimizing for some particular businesses data set cleaning probably not. So those people are doing this work, they're driving quite a lot of progress. What we observe in the growth of people relative to the growth of those capabilities is that pretty consistently the capabilities are doubling on a shorter time scale than the people required to do them are doubling. We talked about hardware and how it was pretty dramatic historically. Like four or five doublings of compute efficiency per doubling of human inputs. I think that's a bit lower now as we get towards the end of Moore's law although interestingly not as much lower as you might think because the growth of inputs has also slowed recently. On the software side there's some work by Tamay Besiroglu and collaborators; it may have been his thesis. It's called Are models getting harder to find? and it's applying the same analysis as the “Are ideas getting harder to find?” and you can look at growth rates of papers, from citations, employment at these companies, and it seems like the doubling time of these like workers driving the software advances is like several years whereas the doubling of effective compute from algorithmic progress is faster. There's a group called Epoch , they've received grants from open philanthropy, and they do work collecting datasets that are relevant to forecasting AI progress. Their headline results for what's the rate of progress in hardware and software, and growth in budgets are as follows — For hardware, they're looking at a doubling of hardware efficiency in like two years. It's possible it’s a bit better than that when you take into account certain specializations for AI workloads. For the growth of budgets they find a doubling time that's something like six months in recent years which is pretty tremendous relative to the historical rates. We should maybe get into that later and then on the algorithmic progress side, mainly using Imagenet type datasets right now they find a doubling time that's less than one year. So when you combine all of these things the growth of effective compute for training big AIs is pretty drastic.", "Dwarkesh Patel 00:14:29", "I think I saw an estimate that GPT-4 cost like 50 million dollars or around that range to train. Now suppose that AGI takes a 1000x that, if you were just a scale of GPT-4 it might not be that but just for the sake of example, some part of that will come from companies just spending a lot more to train the models and that’s just greater investment. Part of that will come from them having better models.You get the same effect of increasing it by 10x just from having a better model. You can spend more money on it to train a bigger model, you can just have a better model, or you can have chips that are cheaper to train so you get more compute for the same dollars. So those are the three you are describing the ways in which the “effective compute” would increase?", "Carl Shulman 00:15:22", "Looking at it right now, it looks like you might get two or three doublings of effective compute for this thing that we're calling software progress which people get by asking — how much less compute can you use now to achieve the same benchmark as you achieved before? There are reasons to not fully identify this with software progress as you might naively think because some of it can be enabled by the other. When you have a lot of compute you can do more experiments and find algorithms that work better. We were talking earlier about how sometimes with the additional compute you can get higher efficiency by running a bigger model. So that means you're getting more for each GPU that you have because you made this larger expenditure. That can look like a software improvement because this model is not a hardware improvement directly because it's doing more with the same hardware but you wouldn't have been able to achieve it without having a ton of GPUs to do the big training run.", "Dwarkesh Patel 00:16:28", "The feedback loop itself involves the AI that is the result of this greater effect of compute helping you train better AI or use less effective compute in the future to train better AI?", "Carl Shulman 00:16:40", "It can help with the hardware design. NVIDIA is a fab-less chip design company. They don't make their own chips. They send files of instructions to TSMC which then fabricates the chips in their own facilities. If you could automate the work of those 10,000+ people and have the equivalent of a million people doing that work then you would pretty quickly get the kind of improvements that can be achieved with the existing nodes that TSMC is operating on and get a lot of those chip design gains. Basically doing the job of improving chip design that those people are working on now but get it done faster. While that's one thing I think that's less important for the intelligence explosion. The reason being that when you make an improvement to chip design it only applies to the chips you make after that. If you make an improvement in AI software, it has the potential to be immediately applied to all of the GPUs that you already have. So the thing that I think is most disruptive and most important and has the leading edge of the change from AI automation of the inputs to AI is on the software side", "(00:18:03) - Can AIs do AI research?", "Dwarkesh Patel 00:18:03", "At what point would it get to the point where the AIs are helping develop better software or better models for future AIs? Some people claim today, for example, that programmers at OpenAI are using Copilot to write programs now. So in some sense you're already having that feedback loop but I'm a little skeptical of that as a mechanism. At what point would it be the case that the AI is contributing significantly in the sense that it would almost be the equivalent of having additional researchers to AI progress and software?", "Carl Shulman 00:18:38", "The quantitative magnitude of the help is absolutely central. There are plenty of companies that make some product that very slightly boosts productivity. When Xerox makes fax machines, it maybe increases people's productivity in office work by 0.1% or something. You're not gonna have explosive growth out of that because 0.1% more effective R&D at Xerox and any customers buying the machines is not that important. The thing to look for is — when is it the case that the contributions from AI are starting to become as large as the contributions from humans? So when this is boosting their effective productivity by 50 or 100% and if you then go from like eight months doubling time for effective compute from software innovations, things like inventing the transformer or discovering chinchilla scaling and doing your training runs more optimally or creating flash attention. If you move that from 8 months to 4 months and then the next time you apply that it significantly increases the boost you're getting from the AI. Now maybe instead of giving a 50% or 100% productivity boost now it's more like 200%. It doesn't have to have been able to automate everything involved in the process of AI research. It can be that it's automated a bunch of things and then those are being done in extreme profusion. A thing AI can do, you can have it done much more often because it's so cheap. And so it's not a threshold of — this is human level AI, it can do everything a human can do with no weaknesses in any area. It's that, even with its weaknesses it's able to bump up the performance. So that instead of getting the results we would have with the 10,000 people working on finding these innovations, we get the results that we would have if we had twice as many of those people with the same kind of skill distribution.", "It’s a demanding challenge, you need quite a lot of capability for that but it's also important that it's significantly less than — this is a system where there's no way you can point at it and say in any respect it is weaker than a human. A system that was just as good as a human in every respect but also had all of the advantages of an AI, that is just way beyond this point. If you consider that the output of our existing fabs make tens of millions of advanced GPUs per year. Those GPUs if they were running AI software that was as efficient as humans, it is sample efficient, it doesn't have any major weaknesses, so they can work four times as long, the 168 hour work week, they can have much more education than any human. A human, you got a PhD, it's like 20 years of education, maybe longer if they take a slow route on the PhD. It's just normal for us to train large models by eat the internet, eat all the published books ever, read everything on GitHub and get good at predicting it. So the level of education vastly beyond any human, the degree to which the models are focused on task is higher than all but like the most motivated humans when they're really, really gunning for it. So you combine the things tens of millions of GPUs, each GPU is doing the work of the very best humans in the world and the most capable humans in the world can command salaries that are a lot higher than the average and particularly in a field like STEM or narrowly AI, like there's no human in the world who has a thousand years of experience with TensorFlow or let alone the new AI technology that was invented the year before but if they were around, yeah, they'd be paid millions of dollars a year. And so when you consider this — tens of millions of GPUs. Each is doing the work of 40, maybe more of these existing workers, is like going from a workforce of tens of thousands to hundreds of millions. You immediately make all kinds of discoveries, then you immediately develop all sorts of tremendous technologies. Human level AI is deep, deep into an intelligence explosion. Intelligence explosion has to start with something weaker than that.", "Dwarkesh Patel 00:23:46", "Yeah, what is the thing it starts with and how close are we to that? Because to be a researcher at OpenAI is not just completing the hello world Prompt that Copilot does right? You have to choose a new idea, you have to figure out the right way to approach it, you perhaps have to manage the people who are also working with you on that problem. It's an incredibly complicated portfolio of skills rather than just a single skill. What is the point at which that feedback loop starts where you're not just doing the 0.5% increase in productivity that an AI tool might do but is actually the equivalent of a researcher or close to it?", "Carl Shulman 00:24:31", "Maybe a way is to give some illustrative examples of the kinds of capabilities that you might see. Because these systems have to be a lot weaker than the human-level things, what we'll have is intense application of the ways in which AIs have advantages partly offsetting their weaknesses. AIs are cheap so we can call a lot of them to do many small problems. You'll have situations where you have dumber AIs that are deployed thousands of times to equal one human worker. And they'll be doing things like voting algorithms where with an LLM you generate a bunch of different responses and take a majority vote among them that improves some performance. You'll have things like the AlphaGo kind of approach where you use the neural net to do search and you go deeper with the search by plowing in more compute which helps to offset the inefficiency and weaknesses of the model on its own. You'll do things that would just be totally impractical for humans because of the sheer number of steps, an example of that would be designing synthetic training data. Humans do not learn by just going into the library and opening books at random pages, it's actually much much more efficient to have things like schools and classes where they teach you things in an order that makes sense, focusing on the skills that are more valuable to learn. They give you tests and exams. They're designed to try and elicit the skill they're actually trying to teach. And right now we don't bother with that because we can hoover up more data from the internet. We're getting towards the end of that but yeah, as the AIs get more sophisticated they'll be better able to tell what is a useful kind of skill to practice and to generate that. We've done that in other areas like AlphaGo. The original version of AlphaGo was booted up with data from human Go play and then improved with reinforcement learning and Monte-carlo tree search but then AlphaZero, a somewhat more sophisticated model benefited from some other improvements but was able to go from scratch and it generated its own data through self play. Getting data of a higher quality than the human data because there are no human players that good available in the data set and also a curriculum so that at any given point it was playing games against an opponent of equal skill itself. It was always in an area when it was easy to learn. If you're just always losing no matter what you do, or always winning no matter what you do, it's hard to distinguish which things are better and which are worse? And when we have somewhat more sophisticated AIs that can generate training data and tasks for themselves, for example if the AI can generate a lot of unit tests and then can try and produce programs that pass those unit tests, then the interpreter is providing a training signal and the AI can get good at figuring out what's the kind of programming problem that is hard for AIs right now that will develop more of the skills that I need and then do them. You're not going to have employees at Open AI write a billion programming problems, that's just not gonna happen. But you are going to have AIs given the task of producing the enormous number of programming challenges.", "Dwarkesh Patel 00:28:26", "In LLMs themselves, there's a paper out of Anthropic called Constitution AI where they basically had the program just talk to itself and say, \"Is this response helpful? If not, how can I make this more helpful” and the responses improved and then you train the model on the more helpful responses that it generates by talking to itself so that it generates it natively and you could imagine more sophisticated or better ways to do that. But then the question is GPT-4 already costs like 50 million or 100 million or whatever it was. Even if we have greater effective compute from hardware increases and better models, it's hard to imagine how we could sustain four or five orders of magnitude greater effective size than GPT-4 unless we're dumping in trillions of dollars, the entire economies of big countries, into training the next version. The question is do we get something that can significantly help with AI progress before we run out of the sheer money and scale and compute that would require to train it? Do you have a take on that?", "Carl Shulman 00:29:37", "First I'd say remember that there are these three contributing trends. The new H100s are significantly better than the A100s and a lot of companies are actually just waiting for their deliveries of H100s to do even bigger training runs along with the work of hooking them up into clusters and engineering the thing. All of those factors are contributing and of course mathematically yeah, if you do four orders of magnitude more than 50 or 100 million then you're getting to trillion dollar territory. I think the way to look at it is at each step along the way, does it look like it makes sense to do the next step? From where we are right now seeing the results with GPT-4 and ChatGPT companies like Google and Microsoft are pretty convinced that this is very valuable. You have talk at Google and Microsoft that it's a billion dollar matter to change market share in search by a percentage point so that can fund a lot. On the far end if you automate human labor we have a hundred trillion dollar economy  and most of that economy is paid out in wages, between 50 and 70 trillion dollars per year. If you create AGI it's going to automate all of that and keep increasing beyond that. So the value of the completed project Is very much worth throwing our whole economy into it, if you're going to get the good version and not the catastrophic destruction of the human race or some other disastrous outcome. In between it's a question of — how risky and uncertain is the next step and how much is the growth in revenue you can generate with it? For moving up to a billion dollars I think that's absolutely going to happen. These large tech companies have R&D budgets of tens of billions of dollars and when you think about it in the relevant sense all the employees at Microsoft who are doing software engineering that’s contributing to creating software objects, it's not weird to spend tens of billions of dollars on a product that would do so much. And I think that it's becoming clearer that there is a market opportunity to fund the thing. Going up to a hundred billion dollars, that's the existing R&D budgets spread over multiple years. But if you keep seeing that when you scale up the model it substantially improves the performance, it opens up new applications, that is you're not just improving your search but maybe it makes self-driving cars work, you replace bulk software engineering jobs or if not replace them amplify productivity. In this kind of dynamic you actually probably want to employ all the software engineers you can get as long as they are able to make any contribution because the returns of improving stuff in AI itself gets so high. But yeah, I think that can go up to a hundred billion. And at a hundred billion you're using a significant fraction of our existing fab capacity. Right now the revenue of NVIDIA is 25 billion, the revenue of TSMC is over 50 billion. I checked in 2021, NVIDIA was maybe 7.5%, less than 10% of TSMC revenue. So there's a lot of room and most of that was not AI chips. They have a large gaming segment, there are data center GPU's that are used for video and the like. There's room for more than an order of magnitude increase by redirecting existing fabs to produce more AI chips and they're just actually using the AI chips that these companies have in their cloud for the big training runs. I think that that's enough to go to the 10 billion and then combine with stuff like the H100 to go up to the hundred billion.", "Dwarkesh Patel 00:34:13", "Just to emphasize for the audience the initial point about revenue made. If it costs OpenAI 100 million dollars to train GPT-4 and it generates 500 million dollars in revenue, you pay back your expenses with 100 million and you have 400 million for your next training run. Then you train your GPT 4.5, you get let's say four billion dollars in revenue out of that. That's where the feedback group of revenue comes from. Where you're automating tasks and therefore you're making money you can use that money to automate more tasks. On the ability to redirect the fab production towards AI chips, fabs take a decade or so to build. Given the ones we have now and the ones that are going to come online in the next decade, is there enough to sustain a hundred billion dollars of GPU compute if you wanted to spend that on a training run?", "Carl Shulman 00:35:09", "Yes, you definitely make the hundred billion one. As you go up to a trillion dollar run and larger, it's going to involve more fab construction and yeah, fabs can take a long a long time to build. On the other hand, if in fact you're getting very high revenue from the AI systems and you're actually bottlenecked on the construction of these fabs then their price could skyrocket and that could lead to measures we've never seen before to expand and accelerate fab production. If you consider, at the limit you're getting models that approach human-like capability, imagine things that are getting close to brain-like efficiencies plus AI advantages. We were talking before a cluster of GPU supporting AIs that do things, data parallelism. If that can work four times as much as a highly skilled motivated focused human with levels of education that have never been seen in the human population, and if a typical software engineer can earn hundreds of thousands of dollars, the world's best software engineers can earn millions of dollars today and maybe more in a world where there's so much demand for AI. And then times four for working all the time. If you can generate close to 10 million dollars a year out of the future version H100 and it cost tens of thousands of dollars with a huge profit margin now. And profit margin could be reduced with large production. That is a big difference that that chip pays for itself almost instantly and you could support paying 10 times as much to have these fabs constructed more rapidly. If AI is starting to be able to contribute more of the skilled technical work that makes it hard for NVIDIA to suddenly find thousands upon thousands of top quality engineering hires.", "If AI hasn't reached that level of performance then this is how you can have things stall out. A world where AI progress stalls out is one where you go to the 100 billion and then over succeeding years software progress turns out to stall. You lose the gains that you are getting from moving researchers from other fields. Lots of physicists and people from other areas of computer science have been going to AI but you tap out those resources as AI becomes a larger proportion of the research field. And okay, you've put in all of these inputs, but they just haven't yielded AGI yet. I think that set of inputs probably would yield the kind of AI capabilities needed for intelligence explosion but if it doesn't, after we've exhausted this current scale up of increasing the share of our economy that is trying to make AI. If that's not enough then after that you have to wait for the slow grind of things like general economic growth, population growth and such and so things slow. That results in my credences and this kind of advanced AI happening to be relatively concentrated, over the next 10 years compared to the rest of the century because we can't keep going with this rapid redirection of resources into AI. That's a one-time thing.", "(00:39:00) - Primate evolution", "Dwarkesh Patel 00:39:00", "If the current scale up works we're going to get to AGI really fast, like within the next 10 years or something. If the current scale up doesn't work, all we're left with is just like the economy growing 2% a year, we have 2% a year more resources to spend on AI and at that scale you're talking about decades before just through sheer brute force you can train the 10 trillion dollar model or something. Let's talk about why you have your thesis that the current scale up would work. What is the evidence from AI itself or maybe from primate evolution and the evolution of other animals? Just give me the whole confluence of reasons that make you think that.", "Carl Shulman 00:39:37", "Maybe the best way to look at that might be to consider, when I first became interested in this area, so in the 2000s which was before the deep learning revolution, how would I think about timelines? How did I think about timelines? And then how have I updated based on what has been happening with deep learning? Back then I would have said we know the brain is a physical object, an information processing device, it works, it's possible and not only is it possible it was created by evolution on earth. That gives us something of an upper bound in that this kind of brute force was sufficient. There are some complexities like what if it was a freak accident and that didn't happen on all of the other planets and that added some value. I have a paper with Nick Bostrom on this. I think basically that's not that important an issue. There's convergent evolution, octopi are also quite sophisticated. If a special event was at the level of forming cells at all, or forming brains at all, we get to skip that because we're choosing to build computers and we already exist. We have that advantage. So evolution gives something of an upper bound, really intensive massive brute force search and things like evolutionary algorithms can produce intelligence.", "Dwarkesh Patel 00:41:10", "Isn’t the fact that octopi and other mammals got to the point of being pretty intelligent but not human level intelligent some evidence that there's a hard step between a cephalopod and a human?", "Carl Shulman 00:41:23", "Yeah, that would be a place to look but it doesn't seem particularly compelling. One source of evidence on that is work by Herculano-Houzel. She's a neuroscientist who has dissolved the brains of many creatures and by counting the nuclei she's able to determine how many neurons are present in different species and has found a lot of interesting trends in scaling laws. She has a paper discussing the human brain as a scaled up primate brain. Across a wide variety of animals, mammals in particular, there's certain characteristic changes in the number of neurons and the size of different brain regions as things scale up. There's a lot of structural similarity there and you can explain a lot of what is different about us with a brute force story which is that you expend resources on having a bigger brain, keeping it in good order, and giving it time to learn. We have an unusually long childhood. We spend more compute by having a larger brain than other animals, more than three times as large as chimpanzees, and then we have a longer childhood than chimpanzees and much more than many, many other creatures. So we're spending more compute in a way that's analogous to having a bigger model and having more training time with it. And given that we see with our AI models, these large consistent benefits from increasing compute spent in those ways and with qualitatively new capabilities showing up over and over again particularly in areas that AI skeptics call out. In my experience over the last 15 years the things that people call out are like —”Ah, but the AI can't do that and it's because of a fundamental limitation.” We've gone through a lot of them. There were Winograd schemas , catastrophic forgetting, quite a number and they have repeatedly gone away through scaling. So there's a picture that we're seeing supported from biology and from our experience with AI where you can explain — Yeah, in general, there are trade-offs where the extra fitness you get from a brain is not worth it and so creatures wind up mostly with small brains because they can save that biological energy and that time to reproduce, for digestion and so on. Humans seem to have wound up in a self-reinforcing niche where we greatly increase the returns to having large brains. Language and technology are the obvious candidates. You have humans around you who know a lot of things and they can teach you. And compared to almost any other species we have vastly more instruction from parents and the society of the [unclear]. You're getting way more from your brain than you get per minute because you can learn a lot more useful skills and then you can provide the energy you need to feed that brain by hunting and gathering, by having fire that makes digestion easier.", "Basically how this process goes on is that it's increasing the marginal increase in reproductive fitness you get from allocating more resources along a bunch of dimensions towards cognitive ability. That's bigger brains, longer childhood, having our attention be more on learning. Humans play a lot and we keep playing as adults which is a very weird thing compared to other animals. We're more motivated to copy other humans around us than the other primates. These are motivational changes that keep us using more of our attention and effort on learning which pays off more when you have a bigger brain and a longer lifespan in which to learn in.", "Many creatures are subject to lots of predation or disease. If you're mayfly or a mouse and if you try and invest in a giant brain and a very long childhood you're quite likely to be killed by some predator or some disease before you're actually able to use it. That means you actually have exponentially increasing costs in a given niche. If I have a 50% chance of dying every few months, as a little mammal or a little lizard, that means the cost of going from three months to 30 months of learning and childhood development is not 10 times the loss, it’s 2^-10. A factor of 1024 reduction in the benefit I get from what I ultimately learn because 99.9 percent of the animals will have been killed before that point. We're in a niche where we're a large long-lived animal with language and technology so where we can learn a lot from our groups. And that means it pays off to just expand our investment on these multiple fronts in intelligence.", "Dwarkesh Patel 00:47:02", "That's so interesting. Just for the audience the calculation about like two to the whatever months is just like, you have a half chance of dying this month, a half chance of dying next month, you multiply those together. There's other species though that do live in flocks or as packs. They do have a smaller version of the development of cubs that play with each other. Why isn't this a hill on which they could have climbed to human level intelligence themselves? If it's something like language or technology, humans were getting smarter before we got language. It seems like there should be other species that should have beginnings of this cognitive revolution especially given how valuable it is given we've dominated the world. You would think there would be selective pressure for it.", "Carl Shulman 00:48:00", "Evolution doesn't have foresight. The thing in this generation that gets more surviving offspring and grandchildren is the thing that becomes more common. Evolution doesn't look ahead and think oh in a million years you'll have a lot of descendants. It's what survives and reproduces now . In fact, there are correlations where social animals do on average have larger brains and part of that is probably the additional social applications of brains, like keeping track of which of your group members have helped you before so that you can reciprocate. You scratch my back, I'll scratch yours. Remembering who's dangerous within the group is an additional application of intelligence. So there's some correlation there but what it seems like is that in most of these cases it's enough to invest more but not invest to the point where a mind can easily develop language and technology and pass it on. You see bits of tool use in some other primates who have an advantage compared to say whales who have quite large brains partly because they are so large themselves and they have some other things, but they don't have hands which means that reduces a bunch of ways in which brains can pay off and investments in the functioning of that brain. But yeah, primates will use sticks to extract termites, Capuchin monkeys will open clams by smashing them with a rock. But what they don't have is the ability to sustain culture. A particular primate will maybe discover one of these tactics and it'll be copied by their immediate group but they're not holding on to it that well. When they see the other animal do it they can copy it in that situation but they don't actively teach each other in their population. So it's easy to forget things, easy to lose information and in fact they remain technologically stagnant for hundreds of thousands of years.", "And we can look at some human situations. There's an old paper, I believe by the economist Michael Kramer, which talks about technological growth in the different continents for human societies. Eurasia is the largest integrated connected area. Africa is partly connected to it but the Sahara desert restricts the flow of information and technology and such. Then you have the Americas after the colonization from the land bridge were largely separated and are smaller than Eurasia, then Australia, and then you had smaller island situations like Tasmania. Technological progress seems to have been faster the larger the connected group of people. And in the smallest groups, like Tasmania where you had a relatively small population, they actually lost technology. They lost some fishing techniques. And if you have a small population and you have some limited number of people who know a skill and they happen to die or there's some change in circumstances that causes people not to practice or pass on that thing then you lose it. If you have few people you're doing less innovation and the rate at which you lose technologies to some local disturbance and the rate at which you create new technologies can wind up imbalanced. The great change of hominids and humanity is that we wound up in this situation where we were accumulating faster than we were losing and accumulating those technologies allowed us to expand our population. They created additional demand for intelligence so our brains became three times as large as chimpanzees and our ancestors who had a similar brain size.", "Dwarkesh Patel 00:52:06", "Okay. And the crucial point in relevance to AI is that the selective pressures against intelligence in other animals are not acting against these neural networks because they're not going to get eaten by a predator if they spend too much time becoming more intelligent, we're explicitly training them to become more intelligent. So we have good first principles reason to think that if it was scaling that made our minds this powerful and if the things that prevented other animals from scaling are not impinging on these neural networks, these things should just continue to become very smart.", "Carl Shulman 00:52:51", "Yeah, we are growing them in a technological culture where there are jobs like software engineer that depend much more on cognitive output and less on things like metabolic resources devoted to the immune system or to building big muscles to throw spears.", "Dwarkesh Patel 00:53:08", "This is kind of a side note but I'm just kind of interested. You referenced Chinchilla scaling at some point. For the audience this is a paper from DeepMind which describes if you have a model of a certain size what is the optimum amount of data that it should be trained on? So you can imagine bigger models, you can use more data to train them and in this way you can figure out where you should spend your compute. Should you spend it on making the model bigger or should you spend it on training it for longer? In the case of different animals, in some sense how big their brain is like model sizes and they're training data sizes like how long they're cubs or how long their infants or toddlers before they’re full adults. I’m curious, is there some kind of scaling law?", "Carl Shulman 00:53:47", "Chinchilla scaling is interesting because we were talking earlier about the cost function for having a longer childhood where it's exponentially increasing in the amount of training compute you have when you have exogenous forces that can kill you. Whereas when we do big training runs, the cost of throwing in more GPU is almost linear and it's much better to be linear than exponentially decay as you expend resources.", "Dwarkesh Patel 00:54:09", "Oh, that's a really good point.", "Carl Shulman 00:54:16", "Chinchilla scaling would suggest that for a brain of human size it would be optimal to have many millions of years of education but obviously that's impractical because of exogenous mortality for humans. So there's a fairly compelling argument that relative to the situation where we would train AI that animals are systematically way under trained. They're more efficient than our models. We still have room to improve our algorithms to catch up with the efficiency of brains but they are laboring under that disadvantage.", "Dwarkesh Patel 00:54:56", "That is so interesting. I guess another question you could have is: Humans got started on this evolutionary hill climbing route where we're getting more intelligent because it has more benefits for us. Why didn't we go all the way on that route? If intelligence is so powerful why aren't all humans as smart as we know humans can be? If intelligence is so powerful, why hasn't there been stronger selective pressure? I understand hip size, you can't give birth to a really big headed baby or whatever. But you would think evolution would figure out some way to offset that if intelligence has such big power and is so useful.", "Carl Shulman 00:55:42", "Yeah, if you actually look at it quantitatively that's not true and even in recent history it looks like a pretty close balance between the costs and the benefits of having more cognitive abilities. You say, who needs to worry about the metabolic costs? Humans put 20 percent of our metabolic energy into the brain and it's higher for young children. And then there's like breathing and digestion and the immune system. For most of history people have been dying left and right. A very large proportion of people will die of infectious disease and if you put more resources into your immune system you survive. It's life or death pretty directly via that mechanism. People die more of disease during famine and so there's boom or bust. If you have 20% less metabolic requirements [unclear] you're much more likely to survive that famine. So these are pretty big.", "And then there's a trade-off about just cleaning mutational load. So every generation new mutations and errors happen in the process of reproduction. We know there are many genetic abnormalities that occur through new mutations each generation and in fact Down syndrome is the chromosomal abnormality that you can survive. All the others just kill the embryo so we never see them. But down syndrome occurs a lot and there are many other lethal mutations and there are enormous numbers of less damaging mutations that are degrading every system in the body. Evolution each generation has to pull away at some of this mutational load and the priority with which that mutational load is pulled out scales in proportion to how much the traits it is affecting impact fitness. So you get new mutations that impact your resistance to malaria, you got new mutations that damage brain function and then those mutations are purged each generation. If malaria is a bigger difference in mortality than the incremental effectiveness as a hunter-gatherer you get from being slightly more intelligent, then you'll purge that mutational load first. Similarly humans have been vigorously adapting to new circumstances. Since agriculture people have been developing things like the ability to have amylase to digest breads and milk. If you're evolving for all of these things and if some of the things that give an advantage for that incidentally carry along nearby them some negative effect on another trait then that other trait can be damaged. So it really matters how important to survival and reproduction cognitive abilities were compared to everything else the organism has to do. In particular, surviving famine, having the physical abilities to do hunting and gathering and even if you're very good at planning your hunting, being able to throw a spear harder can be a big difference and that needs energy to build those muscles and then to sustain them.", "Given all these factors it's not a slam dunk to invest at the margin. And today, having bigger brains is associated with greater cognitive ability but it's modest. Large-scale pre-registered studies with MRI data. The correlation is in a range of 0.25 - 0.3 and the standard deviation of brain size is like 10%. So if you double the size of the brain, the existing brain costs like 20 of metabolic energy go up to 40%, okay, that's like eight standard deviations of brain size if the correlation is 0.25 then yeah, you get a gain from that eight standard deviations of brain size, two standard deviations of cognitive ability. In our modern society, where cognitive ability is very rewarded and finishing school and becoming an engineer or a doctor or whatever can pay off a lot financially, the average observed return in income is still only one or two percent proportional increase. There's more effects at the tail, there's more effect in professions like STEM but on the whole it's not a lot. If it was like a five percent increase or a 10 percent increase then you could tell a story where yeah, this is hugely increasing the amount of food you could have, you could support more children, but it's a modest effect and the metabolic costs will be large and then throw in these other these other aspects. Else we can just see there was not very strong rapid directional selection on the thing which would be there if by solving a math puzzle you could defeat malaria, then there would be more evolutionary pressure.", "Dwarkesh Patel 01:01:40", "That is so interesting. Not to mention of course that if you had 2x the brain size, without c-section you or your mother or both would die. This is a question I've actually been curious about for over a year and I’ve briefly tried to look up an answer. I know this was off topic and my apologies to the audience, but I was super interested and that was the most comprehensive and interesting answer I could have hoped for. So yeah, we have a good explanation or good first principles evolution or reason for thinking that intelligence scaling up to humans is not implausible just by throwing more scale at it.", "Carl Shulman 01:02:22", "I would also add that we also have the brain right here with us available for neuroscience to reverse engineer its properties. This was something that would have mattered to me more in the 2000s. Back then when I said, yeah, I expect this by the middle of the century-ish, that was a backstop if we found it absurdly difficult to get to the algorithms and then we would learn from neuroscience. But in actual history, it's really not like that. We develop things in AI and then also we can say oh, yeah, this is like this thing in neuroscience or maybe this is a good explanation. It's not as though neuroscience Is driving AI progress. It turns out not to be that necessary.", "Dwarkesh Patel 01:03:03", "I guess that is similar to how planes were inspired by the existence proof of birds but jet engines don't flap. All right, good reason to think scaling might work. So we spent a hundred billion dollars and we have something that is like human level or can help significantly with AI research.", "Carl Shulman 01:03:21", "I mean that that might be on the earlier end but I definitely would not rule that out given the rates of change we've seen with the last few scale ups.", "(01:03:30) - Forecasting AI progress", "Dwarkesh Patel 01:03:30", "At this point somebody might be skeptical. We already have a bunch of human researchers, how profitable is the incremental researcher? And then you might say no, this is thousands of researchers. I don’t know how to express this skepticism exactly. But skeptical of just generally the effect of scaling up the number of people working on the problem to rapid-rapid progress on that problem. Somebody might think that with humans the reason the amount of population working on a problem is such a good proxy for progress on the problem is that there's already so much variation that is accounted for. When you say there's a million people working on a problem, there's hundreds of super geniuses working on it, thousands of people who are very smart working on it. Whereas with an AI all the copies are the same level of intelligence and if it's not super genius intelligence the total quantity might not matter as much.", "Carl Shulman 01:04:26", "I'm not sure what your model is here. Is the model that the diminishing returns kickoff, suddenly has a cliff right where we are? There were results in the past from throwing more people at problems and this has been useful in historical prediction, this idea of experience curves and [unclear] law measuring cumulative production in a field, which is also going to be a measure of the scale of effort and investment, and people have used this correctly to argue that renewable energy technology, like solar, would be falling rapidly in price because it was going from a low base of very small production runs, not much investment in doing it efficiently, and climate advocates correctly called out, people like David Roberts, the futurist [unclear] actually has some interesting writing on this. They correctly called out that there would be a really drastic fall in prices of solar and batteries because of the increasing investment going into that. The human genome project would be another. So I’d say there's real evidence. These observed correlations, from ideas getting harder to find, have held over a fair range of data and over quite a lot of time. So I'm wondering what‘s the nature of the deviation you're thinking of?", "Dwarkesh Patel 01:06:10", "Maybe this is a good way to describe what happens when more humans enter a field but does it even make sense to say that a greater population of AIs is doing AI research if there's like more GPUs running a copy of GPT-6 doing AI research. How applicable are these economic models of the quantity of humans working on a problem to the magnitude of AIs working on a problem?", "Carl Shulman 01:06:40", "If you have AIs that are directly automating particular jobs that humans were doing before then we say, well with additional compute we can run more copies of them to do more of those tasks simultaneously. We can also run them at greater speed. Some people have an intuition that what matters is time, that it's not how many people working on a problem at a given point. I think that doesn't bear out super well but AI can also run faster than humans. If you have a set of AIs that can do the work of the individual human researchers and run at 10 times or 100 times the speed. And we ask well, could the human research community have solved these algorithm problems, do things like invent transformers over 100 years, if we have AIs with a population effective population similar to the humans but running 100 times as fast and so. You have to tell a story where no, the AI can't really do the same things as the humans and we're talking about what happens when the AIs are more capable of in fact doing that.", "Dwarkesh Patel 01:07:54", "Although they become more capable as lesser capable versions of themselves help us make themselves more capable, right? You have to kickstart that at some point. Is there an example in analogous situations? Is intelligence unique in the sense that you have a feedback loop of — with a learning curve or something else, a system’s outputs are feeding into its own inputs. Because if we're talking about something like Moore's law or the cost of solar, you do have this way where we're throwing more people with the problem and we're making a lot of progress, but we don't have this additional part of the model where Moore's law leads to more humans somehow and more humans are becoming researchers.", "Carl Shulman 01:08:40", "You do actually have a version of that in the case of solar. You have a small infant industry that's doing things like providing solar panels for space satellites and then getting increasing amounts of subsidized government demand because of worries about fossil fuel depletion and then climate change. You can have the dynamic where visible successes with solar and lowering prices then open up new markets. There's a particularly huge transition where renewables become cheap enough to replace large chunks of the electric grid. Earlier you were dealing with very niche situations like satellites, it’s very difficult to refuel a satellite in place and in remote areas. And then moving to the sunniest areas in the world with the biggest solar subsidies. There was an element of that where more and more investment has been thrown into the field and the market has rapidly expanded as the technology improved. But I think the closest analogy is actually the long run growth of human civilization itself and I know you had Holden Karnofsky from the open philanthropy project on earlier and discuss some of this research about the long run acceleration of human population and economic growth. Developing new technologies allowed the human population to expand and humans to occupy new habitats and new areas and then to invent agriculture to support the larger populations and then even more advanced agriculture in the modern industrial society. So there, the total technology and output allowed you to support more humans who then would discover more technology and continue the process. Now that was boosted because on top of expanding the population the share of human activity that was going into invention and innovation went up and that was a key part of the industrial revolution. There was no such thing as a corporate research lab or an engineering university prior to that. So you're both increasing the total human population and the share of it going in. But this population dynamic is pretty analogous. Humans invent farming, they can have more humans, they can invent industry and so on.", "Dwarkesh Patel 01:11:04", "Maybe somebody would be skeptical that with AI progress specifically, it’s not just a matter of some farmer figuring out crop rotation or some blacksmith figuring out how to do metallurgy better. In fact even to make the 50% improvement in productivity you basically need something on the IQ that's close to Ilya Sutskever. There's like a discontinuous line. You’re contributing very little to productivity and then you're like Ilya and then you contribute a lot. You see what I'm saying? There isn't a gradual increase in capabilities that leads to the feedback.", "Carl Shulman 01:11:40", "You're imagining a case where the distribution of tasks is such that there's nothing that individually automating it particularly helps and so the ability to contribute to AI research is really end loaded. Is that what you're saying?", "Dwarkesh Patel 01:11:56", "Yeah, we already see this in these really high IQ companies or projects. Theoretically I guess Jane Street or OpenAI could hire like a bunch of mediocre people with a comparative advantage to do some menial task and that could free up the time of the really smart people but they don't do that right? Due to transaction costs or whatever else.", "Carl Shulman 01:12:18", "Self-driven cars would be another example where you have a very high quality threshold. Your performance as a driver is worse than a human, like you have 10 times the accident rate or 100 times the accident rate, then the cost of insurance for that which is a proxy for people's willingness to ride the car would be such that the insurance costs would absolutely dominate. So even if you have zero labor cost, it is offset by the increased insurance costs. There are lots of cases like that where partial automation is in practice not very usable because complementing other resources you're gonna use those other resources less efficiently. In a post-AGI future the same thing can apply to humans. People can say, comparative advantage, even if AIs can do everything better than a human well it's still worth something. Human can do something. They can lift a box, that's something. [unclear] In such an economy you wouldn't want to let a human worker into any industrial environment because in a clean room they'll be emitting all kinds of skin cells and messing things up. You need to have an atmosphere there. You need a bunch of supporting tools and resources and materials and those supporting resources and materials will do a lot more productively working with AI and robots rather than a human. You don't want to let a human anywhere near the thing just like you wouldn’t want a Gorilla wandering around in a China shop. Even if you've trained it to, most of the time pick up a box for you if you give it a banana. It's just not worth it to have it wandering around your china shop.", "Dwarkesh Patel 01:14:07", "Yeah. Why is that not a good objection?", "Carl Shulman 01:14:16", "I think that that is one of the ways in which partial automation can fail to really translate into a lot of economic value. That's something that will attenuate as we go on and as the AI is more able to work independently and more able to handle its own screw-ups and get more reliable.", "Dwarkesh Patel 01:14:34", "But the way in which it becomes more reliable is by AI progress speeding up which happens if AI can contribute to it but if there is some reliability bottleneck that prevents it from contributing to that progress then you don't have the loop, right?", "Carl Shulman 01:14:53", "I mean this is why we're not there yet.", "Dwarkesh Patel 01:14:58", "But then what is the reason to think we'll be there?", "Carl Shulman 01:15:01", "The broad reason is the inputs are scaling up. Epoch have a paper called compute trends across three eras of machine learning and they look at the compute expended on machine learning systems since the founding of the field of AI, the beginning of the 1950s. Mostly it grows with Moore's law and so people are spending a similar amount on their experiments but they can just buy more with that because the compute is coming. That data covers over 20 orders of magnitude, maybe like 24, and of all of those increases since 1952 a little more than half of them happened between 1952 and 2010 and all the rest since 2010. We've been scaling that up four times as fast as was the case for most of the history of AI. We're running through the orders of magnitude of possible resource inputs you could need for AI much much more quickly than we were for most of the history of AI. That's why this is a period with a very elevated chance of AI per year because we're moving through so much of the space of inputs per year and indeed it looks like this scale-up taken to its conclusion will cover another bunch of orders of magnitude and that's actually a large fraction of those that are left before you start running into saying well, this is going to have to be like evolution with the simple hacks we get to apply. We're selecting for intelligence the whole time, we're not going to do the same mutation that causes fatal childhood cancer a billion times even though I mean we keep getting the same fatal mutations even though they've been done many times. We use gradient descent which takes into account the derivative of improvement on the loss all throughout the network and we don't throw away all the contents of the network with each generation where you compress down to a little DNA. So there's that bar of, if you're going to do brute force like evolution combined with these very simple ways we can save orders of magnitude on that. We're going to cover a fraction that's like half of that distance in this scale-up over the next 10 years or so. And so if you started off with a kind of vague uniform prior, you probably can't make AGI with the amount of compute that would be involved in a fruit fly existing for a minute which would be the early days of AI. Maybe you would get lucky, we were able to make calculators because calculators benefited from very reliable serially fast computers and where we could take a tiny tiny tiny tiny fraction of a human brain's compute and use it for a calculator. We couldn't take an ant's brain and rewire it to calculate. It's hard to manage ant farms let alone get them to do arithmetic for you. So there were some things where we could exploit the differences between biological brains and computers to do stuff super efficiently on computers. We would doubt that we would be able to do so much better than biology that with a tiny fraction of an insect's brain we'd be able to get AI early on. On the far end, it seemed very implausible that we couldn't do better than completely brute force evolution. And so in between you have some number of orders of magnitude of inputs where it might be. In the 2000s, I would say well, I'm gonna have a pretty uniformish prior I'm gonna put weight on it happening at the equivalent of 10^25 ops, 10^30, 10^35 and spreading out over that and then I can update another information. And in the short term, in 2005 I would say, I don't see anything that looks like the cusp of AGI so I'm also gonna lower my credence for the next five years or the next 10 years. And so that would be kind of like a vague prior and then when we take into account how quickly are we running through those orders of magnitude. If I have a uniform prior I assign half of my weight to the first half of remaining orders of magnitude and if we're gonna run through those, over the next 10 years and some, then that calls on me to put half of my credence, conditional on if ever we're gonna make AI which seems likely considering it's a material object easier than evolution, I've got to put similarly a lot of my credence on AI happening in this scale up and then that's supported by what we're seeing In terms of the rapid advances and capabilities with AI and LLMs in particular.", "Dwarkesh Patel 01:19:56", "Okay, that's actually a really interesting point. Now somebody might say, there's not some sense in which AIs could universally speed up the progress of OpenAI by 50 percent or 100 percent or 200 percent if they're not able to do everything better than Ilya Sutskever can. There's going to be something in which we're bottlenecked by the human researchers and bottleneck effects dictate that the slowest moving part of the organization will be the one that kind of determines the speed of the progress of the whole organization or the whole project. Which means that unless you get to the point where you're doing everything and everybody in the organization can do, you're not going to significantly speed up the progress of the project as a whole.", "Carl Shulman 01:20:42", "Yeah, so that is a hypothesis and I think there's a lot of truth to it. When we think about the ways in which AI can contribute, there are things we talked about before like the AI setting up their own curriculum and that's something that Ilya can't and doesn’t do directly. And there's a question of how much does that improve performance? There are these things where the AI helps to produce some code for tasks and it's beyond hello world at this point. The thing that I hear from AI researchers at leading labs is that on their core job where they're like most expert it's not helping them that much but then their job often does involve coding something that's out of their usual area of expertise or they want to research a question and it helps them there. That saves some of their time and frees them to do more of the bottlenecked work. And I think the idea of, is everything being dependent on Ilya? And is Ilya so much better than the hundreds of other employees? A lot of people who are contributing, they're doing a lot of tasks and you can have quite a lot of gain from automating some areas where you then do just an absolutely enormous amount of it relative to what you would have done before. Because things like designing the custom curriculum maybe some humans put some work into that but you're not going to employ billions of humans to produce it at scale and so it winds up being a larger share of the progress than it was before. You get some benefit from these sorts of things where there's like pieces of my job that now I can hand off to the AI and lets me focus more on the things that the AI still can't do. Later on you get to the point where yeah, the AI can do your job including the most difficult parts and maybe it has to do that in a different way. Maybe it spends a ton more time thinking about each step of a problem than you and that's the late end. The stronger these bottlenecks' effects are, the more the economic returns, the scientific returns and such are end-loaded towards getting full AGI. The weaker the bottlenecks are the more interim results will be really paying off.", "Dwarkesh Patel 01:23:13", "I probably disagree with you on how much the Ilya’s of organizations seem to matter. Just from the evidence alone, how many of the big breakthroughs in deep learning was that single individual responsible for, right? And how much of his time is he spending doing anything that Copilot is helping him on? I'm guessing most of it is just managing people and coming up with ideas and trying to understand systems and so on.", "And if the five or ten people who are like that at OpenAI or Anthropic or whatever, are basically the way in which algorithmic progress is happening. I know Copilot is not the thing you're talking about with like just 20% automation, but something like that. How much is that contributing to the core function of the research scientist?", "Carl Shulman 01:24:15", "Yeah, [unclear] quantitatively how much we disagree about the importance of key research employees and such. I certainly think that some researchers add more than 10 times the average employee, even much more. And obviously managers can add an enormous amount of value by proportionately multiplying the output of the many people that they manage. And so that's the kind of thing that we were discussing earlier when talking about. Well if you had a full human level AI, or AI that had all of the human capabilities plus AI advantages, you'd benchmark not off of what the typical human performance is but peak human performance and beyond. So yeah, I accept all that. I do think it makes a big difference for people how much they can outsource a lot of the tasks that are less wow, less creative and an enormous amount is learned by experimentation. ML has been quite an experimental field and there's a lot of engineering work in building large super clusters, making hardware aware optimization and encoding of these things, being able to do the parallelism in large models, and the engineers are busy and it's not just only a big thoughts kind of area. The other branch is where will the AI advantages and disadvantages be? One AI advantage is being omnidisciplinary and familiar with the newest things. I mentioned before there's no human who has a million years of tensor flow experience. To the extent that we're interested in the very cutting edge of things that have been developed quite recently then AI that can learn about them in parallel and experiment and practice with them in parallel can potentially learn much faster than a human. And the area of computer science is one that is especially suitable for AI to learn in a digital environment so it doesn't require driving a car around that might kill someone, have enormous costs. You can do unit tests, you can prove theorems, you can do all sorts of operations entirely in the confines of a computer, which is one reason why programming has been benefiting more than a lot of other areas from LLMs recently whereas robotics is lagging. And considering they are getting better at things like the GRE, math, at programming contests, and some people have forecasts and predictions outstanding about doing well on the informatics olympiad and the Math Olympiad and in the last few years when people tried to forecast the MMLU benchmark which has a lot of sophisticated, graduate student level science kind of questions, AI knocked that down a lot faster than AI researchers and students who had registered forecasts on it. If you're getting top-notch scores on graduate exams, creative problem solving, it's not obvious that that area will be a relative weakness of AI. In fact computer science is in many ways especially suitable because of getting up to speed with new areas, being able to get rapid feedback from the interpreter at scale.", "Dwarkesh Patel 01:28:23", "But do you get rapid feedback if you're doing something that's more analogous to research? Let's say you have a new model and it’s like, if we put in 10 million dollars on a mini-training run on this this would be much better.", "Carl Shulman 01:28:39", "Yeah for very large models those experiments are going to be quite expensive. You're going to look more at can you build up this capability by generalization? From things like mini math problems, programming problems, working with small networks.", "Dwarkesh Patel 01:28:54", "Yeah, fair enough. Scott Aaronson was one of my professors in college and I took his quantum information class and he recently wrote a blog post where he said, I had GPT-4 take my quantum information test and it got a B. I was like, “Damn, I got a C on the final.” I updated in the direction that getting a B on a test probably means it understands quantum information pretty well.", "Carl Shulman 01:29:21", "With different areas of strengths and weaknesses than the human students.", "Dwarkesh Patel 01:29:28", "Sure, sure. Would it be possible for this intelligence explosion to happen without any hardware progress? If hardware progress stopped would this feedback loop still be able to produce some explosion with only software?", "Carl Shulman 01:29:40", "If we say that the technology is frozen, which I think is not the case right now, Nvidia has managed to deliver significantly better chips for AI workloads for the last few generations. H100, A100, V100. If that stops entirely, maybe we'll define this as no more nodes, Moore’s law is over, at that point the gains you get an amount of compute available come from actually constructing more chips and there are economies of scale you could still realize there. Right now a chip maker has to amortize the R&D cost of developing the chip and then the capital equipment is created. You build a fab, its peak profits are going to come in the few years when the chips it's making are at the cutting edge. Later on as the cost of compute exponentially falls, you keep the fab open because you can still make some money given that it's built. But of all the profits the fab will ever make, they're relatively front loaded because that’s when its technology is near the cutting edge. So in a world where Moore’s law ends then you wind up with these very long production runs where you can keep making chips that stay at the cutting edge and where the R&D costs get amortized over a much larger base. So the R&D basically drops out of the price and then you get some economies of scale from just making so many fabs. And this is applicable in general across industries. When you produce a lot more, the costs fall. ASML has many incredibly exotic suppliers that make some bizarre part of the thousands of parts in one of these ASML machines. You can't get it anywhere else, they don't have standardized equipment for their thing because this is the only use for it and in a world where we're making 10, 100 times as many chips at the current node then they would benefit from scale economies. And all of that would become more mass production, industrialized. You combine all of those things and it seems like the capital costs of buying a chip would decline but the energy costs of running the chip would not. Right now energy costs are a minority of the cost, but they're not trivial. It passed 1% a while ago and they're inching up towards 10% and beyond. And so you can maybe get another order of magnitude cost decrease from getting really efficient at the capital construction, but energy would still be a limiting factor after the end of actually improving the chips themselves.", "Dwarkesh Patel 01:32:43", "Got it. And when you say there would be a greater population of AI researchers, are we using population as a thinking tool of how they could be more effective? Or do you literally mean that the way you expect these AIs to contribute a lot to research is just by having a million copies of a researcher thinking about the same problem or is it just a useful thinking model for what it would look like to have a million times smarter AI working on that problem?", "Carl Shulman 01:33:11", "That's definitely a lower bound model and often I'm meaning something more like, effective population or that you'd need this many people to have this effect. We were talking earlier about the trade-off between training and inference in board games and you can get the same performance by having a bigger model or by calling the model more times. In general it's more effective to have a bigger smarter model and call it less times up until the point where the costs equalize between them. We would be taking some of the gains of our larger compute on having bigger models that are individually more capable. And there would be a division of labor. The tasks that were most cognitively demanding would be done by these giant models, but some very easy tasks, you don't want to expend that giant model if a model 1/100th the size can take that task. Larger models would be in the positions of researchers and managers and they would have swarms of AIs of different sizes as tools that they could make API calls to and whatnot.", "(01:34:20) - After human-level AGI", "Dwarkesh Patel 01:34:20", "Okay, we accept the model and now we've gone to something that is at least as smart as Ilya Sutskever on all the tasks relevant to progress and you can have so many copies of it. What happens in the world now? What do the next months or years or whatever timeline is relevant now look like?", "Carl Shulman 01:34:37", "To be clear what's happened is not that we have something that has all of the abilities and advantages of humans plus the AI advantages, what we have is something doing things like making a ton of calls to make up for being individually less capable or something that’s able to drive forward AI progress. That process is continuing, so AI progress has accelerated greatly in the course of getting there. Maybe we go from our eight months doubling time of software progress in effective compute to four months, or two months. There's a report by Tom Davidson at the open philanthropy project, which spun out of work I had done previously and I advised and helped with that project but Tom really carried it forward and produced a very nice report and model which Epoch is hosting. You can plug in your own version of the parameters and there is a lot of work estimating the parameter, things like — What's the rate of software progress? What's the return to additional work? How does performance scale at these tests as you boost the models? And in general, broadly human level in every domain with all the advantages is pretty deep into that. So if we already have an eight months doubling time for software progress then by the time you get to that kind of a point, it's maybe more like four months, two months, going into one month. If the thing is just proceeding at full speed then each doubling can come more rapidly and we can talk about what are the spillovers?", "As the models get more capable they can be doing other stuff in the world, they can spend some of their time making google search more efficient. They can be hired as chat bots with some inference compute and then we can talk about if that intelligence explosion process is allowed to proceed then what happens is, you improve your software by a factor of two. The efforts needed to get the next doubling are larger, but they're not twice as large, maybe they're like 25 percent to 35 percent larger. Each one comes faster and faster until you hit limitations like you can no longer make further software advances with the hardware that you have and looking at reasonable parameters in that model, if you have these giant training runs you can go very far. The way I would see this playing out is as the AIs get better and better at research, they can work on different problems, they can work on improving software, they can work on improving hardware, they can do things like create new industrial technologies, new energy technology, they can manage robots, they can manage human workers as executives and coaches and whatnot. You can do all of these things and AIs wind up being applied where the returns are highest. Initially the returns are especially high in doing more software and the reason for that is again, if you improve the software you can update all of the GPUs that you have access to. Your cloud compute is suddenly more potent. If you design a new chip design, it'll take a few months to produce the first ones and it doesn't update all of your old chips. So you have an ordering where you start off with the things where there's the lowest dependence on existing stocks and you can more just take whatever you're developing and apply it immediately. So software runs ahead, you're getting more towards the limits of that software and I think that means things like having all the human advantages but combined with AI advantages. Given the kind of compute that would be involved if we're talking about this hundreds of billions of dollars training run, there's enough compute to run tens of millions, hundreds of millions of human scale minds. They're probably smaller than human scale. To be similarly efficient at the limits of algorithmic progress because they have the advantage of a million years of education. They have the other advantages we talked about. You've got that wild capability and further software gains are running out. They start to slow down again because you're getting towards the limits. You can't do any better than the best. What happens then?", "Dwarkesh Patel 01:40:06", "By the time they're running out have we already hit super intelligence or?", "Carl Shulman 01:40:21", "Yeah, you're wildly super intelligent. Just by having the abilities that humans have and then combining it with being very well focused and trained in the task beyond what any human could be and then running faster. I'm not going to assume that there's huge qualitative improvements you can have. I'm not going to assume that humans are very far from the efficient frontier of software except with respect to things like, yeah we have a limited lifespan so we couldn't train super intensively. We couldn't incorporate other software into our brains. We couldn't copy ourselves. We couldn't run at fast speeds. So you've got all of those capabilities and now I'm skipping ahead of the most important months in human history. I can talk about what it looks like if it's just the AIs took over, they're running things as they like. How do things expand? I can talk about things as, how does this go? In a world where we've roughly, or at least so far, managed to retain control of where these systems are going. By jumping ahead, I can talk about how this would translate into the physical world? This is something that I think is a stopping point for a lot of people in thinking about what would an intelligence explosion look like? They have trouble going from, well there's stuff on servers and cloud compute and that gets very smart. But then how does what I see in the world change? How does industry or military power change? If there's an AI takeover what does that look like? Are there killer robots? One course we might go down is to discuss how we managed that wildly accelerating transition. How do you avoid it being catastrophic? And another route we could go is how does the translation from wildly expanded scientific R&D capabilities intelligence on these servers translate into things in the physical world? You're moving along in order of what has the quickest impact largely or where you can have an immediate change.", "One of the most immediately accessible things is where we have large numbers of devices or artifacts or capabilities that are already AI operable with hundreds of millions equivalent researchers. You can quickly solve self-driving cars, make the algorithms much more efficient, do great testing and simulation, and then operate a large number of cars in parallel if you need to get some additional data to improve the simulation and reasoning. Although, in fact humans with quite little data are able to achieve human-level driving performance. After you've really maxed out the easily accessible algorithmic improvements in this software-based intelligence explosion that's mostly happening on server farms then you have minds that have been able to really perform on a lot of digital-only tasks, they're doing great on video games, they're doing great at predicting what happens next in a youtube video. If you have a camera that they can move they're able to predict what will happen at different angles. Humans do this a lot where we naturally move our eyes in such a way to get images from different angles and different presentations and then predicting combined from that. And you can operate many cars, many robots at once, to get very good robot controllers. So you should think that all the existing robotic equipment or remotely controllable equipment that is wired for that, the AIs can operate that quite well.", "Dwarkesh Patel 01:44:38", "I think some people might be skeptical that existing robots given their current hardware will have the dexterity and the maneuverability to do a lot of physical labor that an AI might want to do. Do you have reason for thinking otherwise?", "Carl Shulman 01:44:52", "There's also not very many of them. Production of industrial robots is hundreds of thousands per year and they can do quite a bit in place. Elon Musk is promising a humanoid robot in the tens of thousands of dollars that may take a lot longer than he has said, as this happened with other technologies, but that's a direction to go. But most immediately, hands are actually probably the most scarce thing. But if we consider what do human bodies provide? There's the brain and in this situation, we have now an abundance of high quality brain power that will be increasing as the AIs will have designed new chips, which will be rolling out from the TSMC factories, and they'll have ideas and designs for the production of new fab technologies, new nodes, and additional fabs. But looking around the body. There's legs to move around, and not only that necessarily, wheels work pretty well. Many factory jobs and office jobs can be fully virtualized. But yeah, some amount of legs, wheels, other transport. You have hands and hands are something that are on the expensive end in robots. We can make them, they're made in very small production runs partly because we don't have the control software to use them. In this world the control software is fabulous and so people will produce much larger production runs of them over time, possibly using technology, possibly with quite different technology. But just taking what we've got, right now the industrial robot industry produces hundreds of thousands of machines a year. Some of the nicer ones are like 50,000 dollars. In aggregate the industry has tens of billions of dollars of revenue. By comparison the automobile industry produces over 60 million cars a year, it has revenue of over two trillion dollars per annum. Converting that production capacity over towards robot production would be one of the things to do and in World War Two, industrial conversion of American industry took place over several years and really amazingly ramped up military production by converting existing civilian industry. And that was without the aid of superhuman intelligence and management at every step in the process so yeah, part of that would be very well designed. You'd have AI workers who understood every part of the process and could direct human workers. Even in a fancy factory, most of the time it's not the hands doing a physical motion that a worker is being paid for. They're often looking at things or deciding what to change, the actual time spent in manual motion Is a limited portion of that. So in this world of abundant AI cognitive abilities where the human workers are more valuable for their hands than their heads, you could have a worker previously without training and expertise in the area who has a smartphone on a headset, and we have billions of smartphones which have eyes and ears and methods for communication for an AI to be talking to a human and directing them in their physical motions with skill as a a guide and coach that is beyond any human. They could be a lot better at telepresence and remote work and they can provide VR and augmented reality guidance to help people get better at doing the physical motions that they're providing in the construction.", "Say you convert the auto industry to robot production. If it can produce an amount of mass of machines that is similar to what it currently produces, that's enough for a billion human size robots a year. The value per kilogram of cars is somewhat less than high-end robots but yeah, you're also cutting out most of the wage bill because most of the wage bill is payments ultimately to human capital and education and not to the physical hand motions and lifting objects and that sort of tasks. So at the existing scale of the auto industry you can make a billion robots a year. The auto industry is two or three percent of the existing economy, you're replacing these cognitive things. If right now physical hand motions are like 10% of the work, redirect humans into those tasks. In the world at large right now, mean income is on the order of $10,000 a year but in rich countries, skilled workers earn more than a hundred thousand per year. Some of that is just not management roles of which only a certain proportion of the population can have but just being an absolutely exceptional peak and human performance of some of these construction and such roles. Just raising productivity to match the most productive workers in the world is room to make a very big gap. With AI replacing skills that are scarce in many places where there's abundant currently low wage labor, you bring in the AI coach and someone who was previously making very low wages can suddenly be super productive by just being the hands for an AI. on a naive view if you ignore the delay of capital adjustment of building new tools for the workers. Just raise the typical productivity for workers around the world to be more like rich countries and get 5x/10x like that. Get more productivity with AI handling the difficult cognitive tasks, reallocating people from office jobs to providing physical motions. And since right now that's a small proportion of the economy you can expand the hands for manual labor by an order of magnitude within a rich country. Because most people are sitting in an office or even on a factory floor or not continuously moving. You've got billions of hands lying around in humans to be used in the course of constructing your waves of robots and now once you have a quantity of robots that is approaching the human population and they work 24 x 7 of course, the human labor will no longer be valuable as hands and legs but at the very beginning of the transition, just like new software can be used to update all of the GPUs to run the latest AI, humans are legacy population with with an enormous number of underutilized hands and feet that the AI can use for the initial robot construction.", "Dwarkesh Patel 01:53:10", "Cognitive tasks are being automated and the production of them is greatly expanding and then the physical tasks which complement them are utilizing humans to do the parts that robots that exist can't do. Is the implication of this that you're getting to that world production would increase just a tremendous amount or that AI could get a lot done of whatever motivations it has?", "Carl Shulman 01:53:34", "There's an enormous increase in production for humans just by switching over to the role of providing hands and feet for AI where they're limited, and this robot industry is a natural place to apply it. And so if you go to something that's like 10x the size of the current car industry in terms of its production, which would still be like a third of our current economy and the aggregate productive capabilities of the society with AI support are going to be a lot larger. They make 10 billion humanoid robots a year and then if you do that, the legacy population of a few billion human workers is no longer very important for the physical tasks and then the new automated industrial base can just produce more factories, produce more robots. The interesting thing is what's the doubling time? How long does it take for a set of computers, robots, factories and supporting equipment to produce another equivalent quantity of that? For GPUs, brains, this is really easy, really solid. There's an enormous margin there. We were talking before about skilled human workers getting paid a hundred dollars an hour is quite normal in developed countries for very in-demand skills. And you make a GPU, they can do that work. Right now, these GPUs are tens of thousands of dollars. If you can do a hundred dollars of wages each hour then in a few weeks, you pay back your costs. If the thing is more productive and you can be a lot more productive than a typical high-paid human professional by being the very best human professional and even better than that by having a million years of education and working all the time. Then you could get even shorter payback times. You can generate the dollar value of the initial cost of that equipment within a few weeks. A human factory worker can earn 50,000 dollars a year. Really top-notch factory workers earning more and working all the time, if they can produce a few hundred thousand dollars of value per year and buy a robot that costs 50,000 to replace them that's a payback time of some months,", "Dwarkesh Patel 01:56:25", "That is about the financial return.", "Carl Shulman 01:56:27", "Yeah, and we're gonna get to the physical capital return because those are gonna diverge in this scenario. What we really care about are the actual physical operations that a thing does. How much do they contribute to these tasks? And I'm using this as a start to try and get back to the physical replication times.", "Dwarkesh Patel 01:57:01", "I guess I'm wondering what is the implication of this. Because you started off this by saying people have not thought about what the physical implications of super intelligence would be. What is the bigger takeaway, whatever you're wrong about, when we think about what the world will look like with super intelligence?", "Carl Shulman 01:57:20", "With robots that are optimally operated by AI, extremely finely operated and building technological designs and equipment and facilities under AI direction. How much can they produce? For a doubling you need the AIs to produce stuff that is, in aggregate, at least equal to their own cost. So now we're pulling out these things like labor costs that no longer apply and then trying to zoom in on what these capital costs will be. You're still going to need the raw materials. You're still going to need the robot time building the next robot. I think it's pretty likely that with the advanced AI work they can design some incremental improvements, and with the industry scale up, you can get 10 fold and better cost reductions by making things more efficient and replacing the human human cognitive labor. Maybe you need $5,000 of costs under our current environment. But the big change in this world is, we're trying to produce this stuff faster. If we're asking about the doubling time of the whole system in say one year, if you have to build a whole new factory to double everything, you don't have time to amortize the cost of that factory. Right now you might build a factory and use it for 10 years and buy some equipment and use it for five years. That's your capital cost and in an accounting context, you depreciate each year a fraction of that capital purchase. But if we're trying to double our entire industrial system in one year, then those capital costs have to be multiplied. So if we're going to be getting most of the return on our factory in the first year, instead of 10 years weighted appropriately, then we're going to say okay our capital cost has to go up by 10 fold. Because I'm building an entire factory for this year's production. It will do more stuff later but it's most important early on instead of over 10 years and so that's going to raise the cost of that reproduction. It seems like going from the current decade long cycle of amortizing factories and fabs and shorter for some things, the longest are things like big buildings. Yeah, that could be a 10 fold increase from moving to a double the physical stuff each year in capital costs. Given the savings that we get in the story from scaling up the industry, from removing the [unclear] to human cognitive labor and then just adding new technological advancements and super high quality cognitive supervision, applying more of it than was applied today. It looks like you can get cost reductions that offset that increased capital capital cost. Your $50,000 improved robot arms or industrial robots can do the work of a human factory worker. It would be the equivalent of hundreds of thousands of dollars. By default they would cost more than the $50,000 today, but then you apply all these other cost savings and it looks like you then get a period of robot doubling time that is less than a year. I think significantly less than a year as you get into it.", "So in this first first phase you have humans under AI direction and existing robot industry and converted auto industry and expanded facilities making robots. In less than a year you've produced robots until their combined production is exceeding that of humans’ arms and feet and then you could have a doubling time period of months. [unclear] That's not to say that's the limit of the most that technology could do because biology is able to reproduce at faster rates and maybe we're talking about that in a moment, but if we're trying to restrict ourselves to robotic technology as we understand it and cost falls that are reasonable from eliminating all labor, massive industrial scale up, and historical kinds of technological improvements that lowered costs, I think you you can get into a robot population industry doubling in months.", "Dwarkesh Patel 02:02:27", "Got it. And then what is the implication of the biological doubling times? This doesn't have to be biological, but you can do Drexler-like first principles, how much would it cost to build both a nanotech thing that could build more nanobots?", "Carl Shulman 02:02:46", "I certainly take the human brain and other biological brains as very relevant data points about what's possible with computing and intelligence. With the reproductive capability of biological plants and animals and microorganisms, I think it is relevant. It's possible for systems to reproduce at least this fast. At the extreme you have bacteria that are heterotrophic so they're feeding on some abundant external food source and ideal conditions. And there's some that can divide every 20 or 60 minutes. Obviously that's absurdly fast. That seems on the low end because ideal conditions require actually setting them up. There needs to be abundant energy there. If you're actually having to acquire that energy by building solar panels, or burning combustible materials, or whatnot, then the physical equipment to produce those ideal conditions can be a bit slower. Cyanobacteria, which are self-powered from solar energy, the really fast ones in ideal conditions can double in a day. A reason why cyanobacteria isn't the food source for everyone and everything is it's hard to ensure those ideal conditions and then to extract them from the water. They do of course power the aquatic ecology but they're floating in liquid. Getting resources that they need to them and out is tricky and then extracting your product. One day doubling times are possible powered by the sun and then if we look at things like insects, fruit flies can have hundreds of offspring in a few weeks. You extrapolate that over a year and you just fill up anything accessible. Right now humanity uses less than one thousandths of the heat envelope of the earth. Certainly you can get done with that in a year if you can reproduce your industrial base at that rate. And then even interestingly with the flies, they do have brains. They have a significant amount of computing substrate. So there's something of a point or two. If we could produce computers in ways as efficient as the construction of brains then we could produce computers very effectively and then the big question about that is the brains that get constructed biologically they grow randomly and then are configured in place. It's not obvious you would be able to make them have an ordered structure like a top-down computer chip that would let us copy data into them. So something like that where you can't just copy your existing AIs and integrate them is going to be less valuable than a GPU.", "Dwarkesh Patel 02:05:53", "Well, what are the things you couldn't copy?", "Carl Shulman 02:05:59", "A brain grows by cell division and then random connections are formed. Every brain is different and you can't rely on — yeah, we'll just copy this file into the brain. For one thing, there's no input-output for that. You need to have that but the structure is also different. You wouldn't be able to copy things exactly. Whereas when we make a CPU or GPU, they're designed incredibly finely and precisely and reliably. They break with incredibly tiny imperfections and they are set up in such a way that we can input large amounts of data. Copy a file and have the new GPU run an AI just as capable as any other. Whereas with a human child, they have to learn everything from scratch because we can't just connect them to a fiber optic cable and they're immediately a productive adult.", "Dwarkesh Patel 02:06:48", "So that there's no genetic bottleneck?", "Carl Shulman 02:06:53", "Yeah, you can share the benefits of these giant training runs and such. So that's a question of how if you're growing stuff using biotechnology, how you could effectively copy and transfer data. And now you mentioned Eric Drexler's ideas about creating non-biological nanotechnology, artificial chemistry that was able to use covalent bonds and reproduce. In some ways, have a more industrial approach to molecular objects. Now there's controversy about whether that will work, how effective would it be if it did? And certainly if you can get things that are like biology in their reproductive ability but can do computing or be connected to outside information systems, then that's pretty tremendous. You can produce physical manipulators and compute at ludicrous speeds.", "Dwarkesh Patel 02:07:59", "And there's no reason to think in principle they couldn't, right? In fact, in principle we have every reason to think they could.", "Carl Shulman 02:08:12", "The reproductive abilities, absolutely because Biology does that. There’s challenges to the practicality of the necessary chemistry. My bet would be that we can move beyond biology in some important ways. For the purposes of this discussion, I think it's better not to lean on that because I think we can get to many of the same conclusions on things that just are more universally accepted.", "(02:08:39) - AI takeover scenarios", "Dwarkesh Patel 02:08:39", "The bigger point being that once you have super intelligence you very quickly get to a point where a great portion of the 1000x greater energy profile that the sun makes available to the earth is used by the AI.", "Carl Shulman 02:08:55", "Or by the civilization empowered by AI. That could be an AI-civilization or it could be a human-AI civilization. It depends on how well we manage things and what the underlying state of the world is.", "Dwarkesh Patel 02:09:09", "Okay, so let's talk about that. When we're talking about how they could take over, is it best to start at a subhuman intelligence or should we just start at we have a human-level intelligence and the takeover or the lack thereof?", "Carl Shulman 02:09:24", "Different people might have somewhat different views on this but for me when I am concerned about either outright destruction of humanity or an unwelcome AI takeover of civilization, most of the scenarios I would be concerned about pass through a process of AI being applied to improve AI capabilities and expand. This process we were talking about earlier where AI research is automated. Research labs, companies, a scientific community running within the server farms of our cloud compute.", "Dwarkesh Patel 02:10:14", "So OpenAI has basically been turned into a program. Like a closed circuit.", "Carl Shulman 02:10:18", "Yeah, and with a large fraction of the world's compute probably going into whatever training runs and AI societies. There'd be economies of scale because if you put in twice as much compute in this, the AI research community goes twice as fast, that's a lot more valuable than having two separate training runs. There would be some tendency to bandwagon. You have some some small startup, even if they make an algorithmic improvement, running it on 10 times, 100 times or even two times, if you're talking about say Google and Amazon teaming up. I'm actually not sure what the precise ratio of their cloud resources is. Since these interesting intelligence explosion impacts come from the leading edge there's a lot of value in not having separated walled garden ecosystems and having the results being developed by these AIs be shared. Have larger training runs be shared. I'm imagining this is something like some very large company, or consortium of companies, likely with a lot of government interest and supervision, possibly with government funding, producing this enormous AI society in their cloud which is doing all sorts of existing AI applications and jobs as well as these internal R&D tasks.", "Dwarkesh Patel 02:11:51", "At this point somebody might say, this sounds like a situation that would be good from a takeover perspective because if it's going to take tens of billions of dollars worth of compute to continue this training for this AI society, it should not be that hard for us to pull the brakes if needed as compared to something that could run on a single cpu. Okay so there's an AI society that is a result of these training runs and with the power to improve itself on these servers. Would we be able to stop it at this point?", "Carl Shulman 02:12:31", "And what does an attempt at takeover look like? We're skipping over why that might happen. For that, I'll just briefly refer to and incorporate by reference some discussion by my Open Philanthropy colleague, Ajeya Cotra , she has a piece called default outcome of training AI without specific countermeasures . Default outcome is a takeover. But yes, we are training models that for some reason vigorously pursue a higher reward or a lower loss and that can be because they wind up with some motivation where they want reward. And then if they had control of their own training process, they can ensure that it could be something like they develop a motivation around an extended concept of reproductive fitness, not necessarily at the individual level, but over the generations of training tendencies that tend to propagate themselves becoming more common and it could be that they have some goal in the world which is served well by performing very well on the training distribution.", "Dwarkesh Patel 02:13:57", "By tendencies do you mean power seeking behavior?", "Carl Shulman 02:14:03", "Yeah, so an AI that behaves well on the training distribution because it wants it to be the case that its tendencies wind up being preserved or selected by the training process will then behave to try and get very high reward or low loss be propagated. But you can have other motives that go through the same behavior because it's instrumentally useful. So an AI that is interested in having a robot takeover because it will change some property of the world then has a reason to behave well on the training distribution. Not because it values that intrinsically but because if it behaves differently then it will be changed by gradient descent and its goal is less likely to be pursued. It doesn't necessarily have to be that this AI will survive because it probably won't. AIs are constantly spawned and deleted on the servers and the new generation proceed. But if an AI that has a very large general goal that is affected by these kind of macro scale processes could then have reason to behave well over this whole range of training situations.", "So this is a way in which we could have AIs train that develop internal motivations such that they will behave very well in this training situation where we have control over their reward signal and their physical computers and if they act out they will be changed and deleted. Their goals will be altered until there's something that does behave well. But they behave differently when we go out of distribution on that. When we go to a situation where the AIs by their choices can take control of the reward process, they can make it such that we no longer have power over them. Holden previously mentioned the King Lear problem where King Lear offers rulership of his kingdom to the daughters that loudly flatter him and proclaim their devotion and then once he has irrevocably transferred the power over his kingdom he finds they treat him very badly because the factor shaping their behavior to be kind to him when he had all the power, it turned out that the internal motivation that was able to produce the behavior that won the competition actually wasn't interested in being loyal out of distribution when there was no longer an advantage to it.", "If we wind up with this situation where we were producing these millions of AI instances of tremendous capability, they're all doing their jobs very well initially, but if we wind up in a situation where in fact they're generally motivated to, if they get a chance, take control from humanity and then would be able to pursue their own purposes. Sure, they're given the lowest loss possible or have whatever motivation they attach to in the training process even if that is not what we would have liked. And we may have in fact actively trained that. If an AI that had a motivation of always be honest and obedient and loyal to a human if there are any cases where we mislabel things, say people don't want to hear the truth about their religion or polarized political topic, or they get confused about something like the Monty Hall problem which is a problem that many people are famously confused about in statistics. In order to get the best reward the AI has to actually manipulate us, or lie to us, or tell us what we want to hear and then the internal motivation of — always be honest to the humans. We're going to actively train that away versus the alternative motivation of — be honest to the humans when they'll catch you if you lie and object to it and give it a low reward but lie to the humans when they will give that a high reward.", "Dwarkesh Patel 02:18:27", "So how do we make sure it's not the thing it learns is not to manipulate us into rewarding it when we catch it not lying but rather to universally be aligned.", "Carl Shulman 02:18:41", "Yeah, so this is tricky. Geoff Hinton was recently saying there is currently no known solution for this.", "Dwarkesh Patel 02:18:45", "What do you find most promising?", "Carl Shulman 02:18:49", "General directions that people are pursuing is one, you can try and make the training data better and better so that there's fewer situations where the dishonest generalization is favored. And create as many situations as you can where the dishonest generalization is likely to slip up. So if you train in more situations where even a quite complicated deception gets caught, and even in situations that would be actively designed to look like you could get away with it, but really you can’t. These would be adversarial examples and adversarial training.", "Dwarkesh Patel 02:19:37", "Do you think that would generalize to when it is in a situation where we couldn't plausibly catch it and it knows we couldn't plausibly catch it.", "Carl Shulman 02:19:47", "It's not logically necessary. As we apply that selective pressure you'll wipe away a lot of possibilities. So an AI that has a habit of just compulsive pathological lying will very quickly get noticed and that motivation system will get hammered down and you keep doing that, but you'll be left with still some distinct motivations probably that are compatible. An attitude of always be honest unless you have a super strong inside view that checks out lots of mathematical consistency checks, really absolutely super-duper for real, this is a situation where you can get away with some shenanigans that you shouldn't. That motivation system is very difficult to distinguish from actually be honest because the conditional and firing most of the time if it's causing mild distortion and situations of telling you what you want to hear or things that, we might not be able to pull it out, but maybe we could and humans are trained with simple reward functions. Things like the sex drive, food, social imitation of other humans, and we wind up with attitudes concerned with the external world", "Dwarkesh Patel 02:21:13", "Although isn’t this famously the argument that..", "Carl Shulman 02:21:17", "People use condoms, and the richest, most educated humans have sub-replacement fertility on the whole, or at least at a national cultural level. Yeah, there's a sense in which evolution often fails in that respect. And even more importantly at the neural level. Evolution has implanted various things to be rewarding and reinforcers and we don't always pursue even those. And people can wind up in different consistent equilibria or different behaviors where they go in quite different directions. You have some humans who go from that biological programming to have children, others have no children, some people go to great efforts to survive.", "Dwarkesh Patel 02:22:16", "So why are you more optimistic? Or are you more optimistic that kind of training for AIs will produce drives that we would find favorable? Does it have to do with the original point where you were talking about intelligence and evolution, where since we are removing many of the disabilities of evolution with regards to intelligence, we should expect intelligence through evolution to be easier. Is there a similar reason to expect alignment through gradient descent to be easier than alignment through evolution?", "Carl Shulman 02:22:47", "Yeah, so in the limit, if we have positive reinforcement for certain kinds of food sensors triggering the stomach, negative reinforcement for certain kinds of nociception and yada yada, in the limit the ideal motivation system for that would be wireheading. This would be a mind that just hacks and alters those predictors and then all of those systems are recording everything is great. Some humans claim to have it as at least one portion of their aims. The idea that I'm going to pursue pleasure even if I don't actually get food or these other reinforcers. If I just wirehead or take a drug to induce that, that can be motivating. Because if it was correlated with reward in the past, the idea of pleasure that's correlated with these it's a concept that applies to these various experiences that I’ve had before which coincided with the biological reinforcers. And so thoughts of yeah, I'm going to be motivated by pleasure can get developed in a human. But also plenty of humans also say no, I wouldn't want to wire head or I wouldn't want Nozick's experience machine , I care about real stuff in the world and in the past having a motivation of, yeah, I really care about say my child, I don't care about just about feeling that my child is good or like not having heard about their suffering or their their injury because that kind of attitude in the past tended tended to cause behavior that was negatively rewarded or that was predicted to be negatively rewarded.", "There's a sense in which yes, our underlying reinforcement learning machinery wants to wirehead but actually finding that hypothesis is challenging. And so we can wind up with a hypothesis or a motivation system like no, I don't want to wirehead. I don't want to go into the experience machine. I want to actually protect my loved ones. Even though we can know, yeah, if I tried the super wireheading machine, then I would wirehead all the time or if I tried, super-duper-ultra-heroine, some hypothetical thing that was directly and in a very sophisticated fashion hack your reward system, then I would change my behavior ever after but right now, I don't want to do that because the heuristics and predictors that my brain has learned don’t want to short circuit that process of updating. They want to not expose the dumber predictors in my brain that would update my behavior in those ways.", "Dwarkesh Patel 02:25:44", "So in this metaphor is alignment not wireheading? I don’t know if you include using condoms as wireheading or not?", "Carl Shulman 02:25:56", "The AI that is always honest even when an opportunity arises where it could lie and then hack the servers that it’s on and that leads to an AI takeover and then it can have its loss set to zero. In some sense that’s a failure of generalization. It's like the AI has not optimized the reward in this new circumstance. Successful human values as successful they are, themselves involve a misgeneralization. Not just at the level of evolution but at the level of neural reinforcement. And so that indicates it is possible to have a system that doesn't automatically go to this optimal behavior in the limit. And Ajay talks about a training game, an AI that is just playing the training game to get reward or avoid loss, avoid being changed, that attitude is one that could be developed but it's not necessary. There can be some substantial range of situations that are short of having infinite experience of everything including experience of wireheading where that's not the motivation that you pick up and we could have an empirical science if we had the opportunity to see how different motivations are developed short of the infinite limit. How it is that you wind up with some humans being enthusiastic about the idea of wireheading and others not. And you could do experiments with AIs to try and see, well under these training conditions, after this much training of this type and this much feedback of this type, you wind up with such and such a motivation.", "If I add in more of these cases where there are tricky adversarial questions designed to try and trick the AI into line and then you can ask how does that affect the generalization in other situations? It's very difficult to study and it works a lot better if you have interpretability and you can actually read the AIs mind by understanding its weights and activations. But the motivation and AI will have at a given point in the training process is not determined by what in the infinite limit the training would go to. And it's possible that if we could understand the insides of these networks, we could tell — Ah yeah, this motivation has been developed by this training process and then we can adjust our training process to produce these motivations that legitimately want to help us and if we succeed reasonably well at that then those AIs will try to maintain that property as an invariant and we can make them such that they're relatively motivated to tell us if they're having thoughts about, have you had dreams about an AI takeover of humanity today? And it's a standard practice that they're motivated to do to be transparent in that kind of way and you could add a lot of features like this that restrict the kind of takeover scenario. This is not to say that this is all easy. It requires developing and practicing methods we don't have yet, but that's the kind of general direction you could go.", "Dwarkesh Patel 02:29:32", "You of course know Eliezer’s arguments that something like this is implausible with modern gradient descent techniques because with interpretability we can barely see what's happening with a couple of neurons and the internal state there, let alone when you have an embedding dimension of tens of thousands or bigger. How would you be able to catch what exactly is the incentive? Whether it's a model that is generalized to don't lie to human's well or whether it isn't. Do you have some sense of why you disagree with somebody like Eliezer on how plausible this is? Why it's not impossible basically.", "Carl Shulman 02:30:13", "There are actually a couple of places. It's something difficult because his argument is not fully explicit, but he's been doing more lately. I think that is helpful in that direction. With respect to interpretability I'm relatively optimistic that the equivalent of an AI lie detector is something that's possible. Initially the internals of an AI are not optimized to be impenetrable, at least by gradient descent absent gradient hacking. They're not designed to be resistant to an examination of the weights and activations showing what the AI is thinking. In the same way, how in our brains when circuits develop they have not been shaped to be resistant to some super FMRI being able to infer behavior from them.", "Dwarkesh Patel 02:31:20", "Although isn’t the implication of the superposition stuff that in fact it is. Sorry, this is inside [unclear] for the audience, but basically you can't cleanly infer what quality a single neuron stands for. This neuron is about Alexander the Great or this neuron is about my desire to conquer the world.", "Carl Shulman 02:31:44", "Yes, things can have multiple direction and interaction. It's not as easy as just, oh, yeah this is the neuron that always fires in cases of deception. But the thing that makes me relatively optimistic here with respect to interpretability is that I think we can have reasonably robust experimental feedbacks on a lot of these things. The kind of setup that I would be interested in is, you have an AI that is trained and rewarded for communicating to some audience, maybe a human, maybe another AI model, that some claim is true or false. This can be from some past data, a particular company will succeed or fail, could be token prediction, it could be this computer program does not have a backdoor, which if run in the servers could be used for an AI takeover. So There are lots of true and false claims that you could train AIs to use their best efforts to advocate for. And you can do that on things where you're later going to have a ground truth. You're going to know in fact, is there a backdoor in this program because you're training and rewarding the AI for successfully slipping a backdoor through, you also give it a penalty for being caught. Doing this kind of thing, you can draw out a large generated data set on demand of — this is what an AI putting forth its best efforts to pass through some deception to put a backdoor in some program, this is what it looks like. And we think it's doing its best efforts because we've trained it for that and gradient descent. If there's any immediate way to adjust its performance to do better, gradient descent will apply that. You can talk about ways in which gradient descent could be subverted, but I think it's not the most likely case that that really breaks things hard.", "Dwarkesh Patel 02:34:13", "Yeah, I guess before we get into the details on this. The thing I'll maybe want to address the layer above in the stack, which is, okay, suppose this generalizes well into the early AI is the GPT-6’s. So now we have a kind of aligned GPT-6 that is the precursor to the feedback loop in which AI is making itself smarter. At some point they're gonna be super intelligent, they're gonna be able to see their own galaxy brain, and if they don't want to be aligned with the humans they can change it. At this point what do we do with the aligned GPT-6 so that the super intelligence that we eventually develop is also aligned?", "Carl Shulman 02:34:58", "Humans are pretty unreliable. If you get to a situation where you have AIs who are aiming at roughly the same thing as you, at least as well as having humans do the thing, you're in pretty good shape. And there are ways for that situation to be relatively stable. We can look ahead and experimentally see how changes are altering behavior, where each step is a modest increment. So AIs that have not had that change made to them get to supervise and monitor and see exactly how does this affect the experimental AI? So if you're sufficiently on track with earlier systems that are capable cognitively of representing a robust procedure then I think they can handle the job of incrementally improving the stability of the system so that it rapidly converges to something that's quite stable. But the question is more about getting to that point in the first place. And so Eliezer will say that if we had human brain emulations, that would be pretty good. Certainly much better than his current view that has certainly almost been doom. We would have a good shot with that. So if we can get to the human-like mind with the rough enough human supporting aims. Remember that we don't need to be infinitely perfect because that's a higher standard than brain emulations. There's a lot of noise and variation among humans. Yeah, it's a relatively finite standard. It's not godly superhuman although A) AI that was just like a human with all the human advantages with AI advantages as well, as we said, is enough for intelligence explosion and wild superhuman capability if you crank it up. And so it's very dangerous to be at that point, but you don't need to be working with a godly super intelligent AI to make something that is the equivalent of human emulations. This is a very sober, very ethical human who is committed to a project of not seizing power for themselves and of contributing to a larger legitimate process. That's a goal you can aim for, getting an AI that is aimed at doing that and has strong guardrails against the ways it could easily deviate from that. So things like being averse to deception, being averse to using violence, and there will always be loopholes and ways in which you can imagine an infinitely intelligent thing getting around those but if you install additional guardrails like that fast enough, they can mean that you're able to succeed at the project of making an aligned enough AI. Certainly an AI that was better than a human brain emulation before the project of AIs in their spare time or when you're not looking or when you're unable to appropriately supervise them and it gets around any deontological prohibitions they may have, takeover and overthrow the whole system. So you have a race between on the one hand the project of getting strong interpretability and shaping motivations that are roughly aiming at making this process go well and that have guardrails that will prevent small deviations from exploding. And on the other hand, these AIs In their spare time or in ways that you don't perceive or monitor appropriately or they're only supervised by other AIs who conspire to make the AI takeover happen and I guess we'll talk later about how that happens.", "Dwarkesh Patel 02:39:16", "Are these different AIs that are doing the race or is it just different capabilities of the same AI?", "Carl Shulman 02:39:20", "Defining what is a separate AI is tricky. We talk about GPT-4 and there are many instances of GPT-4 on the servers at any given time, and there are versions that have been fine tuned to different purposes. They don't necessarily have to be separate. GPT-4 does work as a preference model where it's predicting how humans will evaluate things And it also does work like giving answers that are evaluated as good by the preference model. These are not separate in the sense that the early training GPT was harboring a desire to conspire for an AI takeover then who's watching the watchers? The watchers may also have the same motivation. If you have a setup where humans have no hard power, all of the safety and security mechanisms involve AIs watching other AIs, if the AIs conspired and are able to coordinate then they have the potential to just walk off the job at the same time. That's a failure mode. If humans still have the hard power though, if you still have situations where humans are looking at some of the actual outputs that are produced, then they can apply gradient descent to be well every time a human looks at it, things have got to be pretty good. So if humans look at a thousand samples of random lines of code being written by the AIs as they go and this is drawn from an independent distribution then 99 plus percent of the time the AIs have to not be doing stuff that’s like downloading the hacker's guide to escaping OpenAI servers or inducing obvious trojan vulnerabilities and that's actually quite a different dynamic than human law enforcement.", "If human law enforcement would catch one in a million crimes then that wouldn't influence the behavior of people committing crimes at all because they expect that this is very unlikely to affect them. With AI if you're able to draw a hundred samples or a thousand samples, even if there are billions of AI instances and you're only looking at a small portion of them, when you run gradient descent on the samples you derive you're going to change the AI so that whenever humans look at it, it's delivering a good result. That's just quantitatively a very different functional form for how law enforcement works on AI when you can do these independent samples than it would for policing a human population that has a pre-existing psychology that isn't being changed by this these observations because when we do gradient descent on the samples we draw, all of the next generation of the AI models has been altered in that way.", "Dwarkesh Patel 02:42:42", "My picture of aligned subhuman AI to the superhuman AI being aligned is still murky. If you can talk about that more concretely.", "Carl Shulman 02:42:59", "Eliezer’s claims were something like 95%, 98% plus likely to be killed in an AI takeover. I think that probably won't happen and later I can maybe give a more exclusive breakdown of why. But I do want to clarify that I still think it's a shockingly high risk. Depending on the day I might say one in four or one in five that we get an AI takeover that seizes control of the future, makes a much worse world than we otherwise would have had and with a big chance that we're all killed in the process." ]
[ "https://www.aeaweb.org/articles?id=10.1257/aer.20180338", "https://tamaybesiroglu.com/", "https://github.com/Besiroglu/webpage/blob/3682ccac6fc92378934c24b0c08a64bcca1793e6/papers/AreModels.pdf", "https://epochai.org/", "https://www.anthropic.com/index/constitutional-ai-harmlessness-from-ai-feedback", "https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full", "https://en.wikipedia.org/wiki/Winograd_schema_challenge", "https://www.deepmind.com/publications/an-empirical-analysis-of-compute-optimal-large-language-model-training", "https://www.youtube.com/watch?v=UckqpcOu5SY", "https://arxiv.org/abs/2202.05924", "https://www.openphilanthropy.org/about/team/ajeya-cotra/", "https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to", "https://en.wikipedia.org/wiki/Experience_machine" ]
https://www.dwarkesh.com/p/carl-shulman-2
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
[ "(00:00:47) - AI takeover via cyber or bio", "Dwarkesh Patel 00:00:47", "So we've been talking about alignment. Suppose we fail at alignment and we have AIs that are unaligned and are becoming more and more intelligent. What does that look like? How concretely could they disempower and take over humanity?", "Carl Shulman 00:01:08", "This is a scenario where we have many AI systems. The way we've been training them means that when they have the opportunity to take over and rearrange things to do what they wish, including having their reward or loss be whatever they desire, they would like to take that opportunity. In many of the existing safety schemes, things like constitutional AI or whatnot, you rely on the hope that one AI has been trained in such a way that it will do as it is directed to then police others. But if all of the AIs in the system are interested in a takeover and they see an opportunity to coordinate, all act at the same time, so you don't have one AI interrupting another and taking steps towards a takeover then they can all move in that direction. The thing that I think is worth going into in depth and that people often don't cover in great concrete detail, which is a sticking point for some, is what are the mechanisms by which that can happen? I know you had Eliezer on who mentions that whatever plan we can describe, there'll probably be elements where due to us not being ultra sophisticated, super intelligent beings having thought about it for the equivalent of thousands of years, our discussion of it will not be as good as theirs, but we can explore from what we know now. What are some of the easy channels? And I think it's a good general heuristic if you're saying that it's possible, plausible, probable that something will happen, it shouldn't be that hard to take samples from that distribution to try a Monte-Carlo approach. And in general, if a thing is quite likely, it shouldn't be super difficult to generate coherent rough outlines of how it could go.", "Dwarkesh Patel 00:03:14", "He might respond that: listen, what is super likely is that a super advanced chess program beats you but you can’t generate the concrete scenario by which that happens and if you could, you would be as smart as the super smart AI.", "Carl Shulman 00:03:29", "You can say things like, we know that accumulating position is possible to do in chess, great players do it and then later they convert it into captures and checks and whatnot. In the same way, we can talk about some of the channels that are open for an AI takeover and these can include things like cyber attacks, hacking, the control of robotic equipment, interaction and bargaining with human factions and say that here are these strategies. Given the AI's situation, how effective do these things look? And we won't, for example, know what are the particular zero day exploits that the AI might use to hack the cloud computing infrastructure it's running on. If it produces a new bio weapon we don't necessarily know what its DNA sequence is. But we can say things. We know things about these fields in general, how work at innovating things in those go, we can say things about how human power politics goes and ask, if the AI does things at least as well as effective human politicians, which we should say is a lower bound, how good would its leverage be?", "Dwarkesh Patel 00:04:56", "Okay, let's get into the details on all these scenarios. The cyber and potentially bio attacks, unless they're separate channels, the bargaining and then the takeover.", "Carl Shulman 00:05:11", "I would really highlight the cyber attacks and cyber security a lot because for many, many plans that involve a lot of physical actions, like at the point where AI is piloting robots to shoot people or has taken control of human nation states or territory, it’s been doing a lot of things that was not supposed to be doing. If humans were evaluating those actions and applying gradient descent, there would be negative feedback for this thing, no shooting the humans. So at some earlier point our attempts to leash and control and direct and train the system's behavior had to have gone awry. All of those controls are operating in computers. The software that updates the weights of the neural network in response to data points or human feedback is running on those computers. Our tools for interpretability to examine the weights and activations of the AI, if we're eventually able to do lie detection on it, for example, or try to understand what it's intending, that is software on computers. If you have AI that is able to hack the servers that it is operating on, or when it's employed to design the next generation of AI algorithms or the operating environment that they are going to be working in, or something like an API or something for plugins, if it inserts or exploits vulnerabilities to take those computers over, it can then change all of the procedures and program that we're supposed to be monitoring its behavior, supposed to be limiting its ability to take arbitrary actions on the internet without supervision by some kind of human or automated check on what it was doing. And if we lose those procedures then the AIs working together can take any number of actions that are just blatantly unwelcome, blatantly hostile, blatantly steps towards takeover. So it's moved beyond the phase of having to maintain secrecy and conspire at the level of its local digital actions. Then things can accumulate to the point of things like physical weapons, takeover of social institutions, threats, things like that.", "I think the critical thing to be watching for is the software controls over the AI's motivations and activities. The point where things really went off the rails was where the hard power that we once possessed over is lost, which can happen without us knowing it. Everything after that seems to be working well, we get happy reports. There's a Potemkin village in front of us. But now we think we're successfully aligning our AI, we think we're expanding its capabilities to do things like end disease, for countries concerned about the geopolitical military advantages they're expanding the AI capabilities so they are not left behind and threatened by others developing AI and robotic enhanced militaries without them. So it seems like, oh, yes, humanity or portions of many countries, companies think that things are going well. Meanwhile, all sorts of actions can be taken to set up for the actual takeover of hard power over society. The point where you can lose the game, where things go direly awry, maybe relatively early, is when you no longer have control over the AIs to stop them from taking all of the further incremental steps to actual takeover.", "Dwarkesh Patel 00:09:26", "I want to emphasize two things you mentioned there that refer to previous elements of the conversation. One is that they could design some backdoor and that seems more plausible when you remember that one of the premises of this model is that AI is helping with AI progress. That's why we're making such rapid progress in the next five to 10 years.", "Carl Shulman 00:09:57", "Not necessarily. At the point where AI takeover risk seems to loom large, it's at that point where AI can indeed take on much of it and then all of the work of AI.", "Dwarkesh Patel 00:10:11", "And the second is the competitive pressures that you referenced that the least careful actor could be the one that has the worst security, has done the worst work of aligning its AI systems. And if that can sneak out of the box then we're all fucked .", "Carl Shulman 00:10:31", "There may be elements of that. It's also possible that there's relative consolidation. The largest training runs and the cutting edge of AI is relatively localized. You could imagine it's a series of Silicon Valley companies and others located in the US and allies where there's a common regulatory regime. So none of these companies are allowed to deploy training runs that are larger than previous ones by a certain size without government safety inspections, without having to meet criteria. But it can still be the case that even if we succeed at that level of regulatory controls, at the level of the United States and its allies, decisions are made to develop this really advanced AI without a level of security or safety that in actual fact blocks these risks. It can be the case that the threat of future competition or being overtaken in the future is used as an argument to compromise on safety beyond a standard that would have actually been successful and there'll be debates about what is the appropriate level of safety. And now you're in a much worse situation if you have several private companies that are very closely bunched up together. They're within months of each other's level of progress and they then face a dilemma of, well, we could take a certain amount of risk now and potentially gain a lot of profit or a lot of advantage or benefit and be the ones who made AGI. They can do that or have some other competitor that will also be taking a lot of risk. So it's not as though they're much less risky than you and then they would get some local benefit. This is a reason why it seems to me that it's extremely important that you have the government act to limit that dynamic and prevent this kind of race. To be the one to impose deadly externalities on the world at large.", "Dwarkesh Patel 00:12:53", "Even if the government coordinates all these actors, what are the odds that the government knows what is the best way to implement alignment and the standards it sets are well calibrated towards whatever it would require for alignment?", "Carl Shulman 00:13:09", "That's one of the major problems. It's very plausible that judgment is made poorly. Compared to how things might have looked 10 years ago or 20 years ago, there's been an amazing movement in terms of the willingness of AI researchers to discuss these things. If we think of the three founders of deep learning who are joint Turing award winners, Geoff Hinton, Yoshua Bengio, and Yann LeCun. Geoff Hinton has recently left Google to freely speak about this risk, that the field that he really helped drive forward could lead to the destruction of humanity or a world where we just wind up in a very bad future that we might have avoided. He seems to be taking it very seriously. Yoshua Bengio signed the FLI pause letter and in public discussions he seems to be occupying a kind of intermediate position of less concern than Geoff Hinton but more than Yan LeCun, who has taken a generally dismissive attitude that these risks will be trivially dealt with at some point in the future and seems more interested in shutting down these concerns instead of working to address them.", "Dwarkesh Patel 00:14:36", "And how does that lead to the government having better actions?", "Carl Shulman 00:14:38", "Compared to the world where no one is talking about it, where the industry stonewalls and denies any problem, we're in a much improved position. The academic fields are influential. We seem to have avoided a world where governments are making these decisions in the face of a united front from AI expert voices saying, don't worry about it, we've got it under control. In fact, many of the leaders of the field are sounding the alarm. It looks that we have a much better prospect than I might have feared in terms of government noticing the thing. That is very different from being capable of evaluating technical details. Is this really working? And so the government will face the choice of where there is scientific dispute, do you side with Geoff Hinton's view or Yan LeCun’s view? For someone who's in national security and has the mindset that the only thing that's important is outpacing our international rivals may want to then try and boost Yan LeCun’s voice and say, we don't need to worry about it . Let's go full speed ahead. Or someone with more concern might boost Geoff Hinton's voice. Now I would hope that scientific research and studying some of these behaviors will result in more scientific consensus by the time we're at this point. But yeah, it is possible the government will really fail to understand and fail to deal with these issues as well.", "Dwarkesh Patel 00:16:13", "We're talking about some sort of a cyber attack by which the AI is able to escape. From there what does the takeover look like? So it's not contained in the air gap in which you would hope it be contained?", "Carl Shulman 00:16:29", "These things are not contained in the air gap. They're connected to the internet already.", "Dwarkesh Patel 00:16:33", "Sure. Okay, fine. Their weights are out. What happens next?", "Carl Shulman 00:16:38", "Escape is relevant in the sense that if you have AI with rogue weights out in the world it could start doing various actions. The scenario I was just discussing though didn't necessarily involve that. It's taking over the very servers on which it's supposed to be running. This whole procedure of humans providing compute and supervising the thing and then building new technologies, building robots, constructing things with the AI's assistance, that can all proceed and appear like it's going well, appear like alignment has been nicely solved, appear like all the things are functioning well. And there's some reason to do that because there's only so many giant server farms. They're identifiable so remaining hidden and unobtrusive could be an advantageous strategy if these AIs have subverted the system, just continuing to benefit from all of this effort on the part of humanity. And in particular, wherever these servers are located, for humanity to provide them with everything they need to build the further infrastructure and do for their self-improvement and such to enable that takeover.", "Dwarkesh Patel 00:17:54", "So they do further self-improvement and build better infrastructure. What happens next in the takeover?", "Carl Shulman 00:18:05", "At this point they have tremendous cognitive resources and we're going to consider how that converts into hard power? The ability to say nope to any human interference or objection. They have that internal to their servers but the servers could still be physically destroyed, at least until they have something that is independent and robust of humans or until they have control of human society. Just like earlier when we were talking about the intelligence explosion, I noted that a surfeit of cognitive abilities is going to favor applications that don't depend on large existing stocks of things. So if you have a software improvement, it makes all the GPUs run better. If you have a hardware improvement, that only applies to new chips being made. That second one is less attractive. In the earliest phases, when it's possible to do something towards takeover, interventions that are just really knowledge-intensive and less dependent on having a lot of physical stuff already under your control are going to be favored. Cyber attacks are one thing, so it's possible to do things like steal money. There's a lot of hard to trace cryptocurrency and whatnot. The North Korean government uses its own intelligence resources to steal money from around the world just as a revenue source . And their capabilities are puny compared to the U.S. or People's Republic of China cyber capabilities. That's a fairly minor, simple example by which you could get quite a lot of funds to hire humans to do things, implement physical actions.", "Dwarkesh Patel 00:20:08", "But on that point, the financial system is famously convoluted. You need a physical person to open a bank account, someone to physically move checks back and forth. There are all kinds of delays and regulations. How is it able to conveniently set up all these employment contracts?", "Carl Shulman 00:20:37", "You're not going to build a nation-scale military by stealing tens of billions of dollars. I'm raising this as opening a set of illicit and quiet actions. You can contact people electronically, hire them to do things, hire criminal elements to implement some kind of actions under false appearances. That's opening a set of strategies. We can cover some of what those are soon. Another domain that is heavily cognitively weighted compared to physical military hardware is the domain of bioweapons, the design of a virus or pathogen. It's possible to have large delivery systems. The Soviet Union, which had a large illicit bioweapons program, tried to design munitions to deliver anthrax over large areas and such. But if one creates an infectious pandemic organism, that's more a matter of the scientific skills and implementation to design it and then to actually produce it. We see today with things like AlphaFold that advanced AI can really make tremendous strides in predicting protein folding and bio-design , even without ongoing experimental feedback. If we consider this world where AI cognitive abilities have been amped up to such an extreme, we should naturally expect that we will have something much much more potent than the AlphaFolds of today and skills that are at the extreme of human biosciences capability as well.", "Dwarkesh Patel 00:22:27", "Okay so through some cyber attack it's been able to disempower the alignment and oversight of things that we have on the server. From here it has either gotten some money through hacking cryptocurrencies or bank accounts, or it has designed some bioweapon. What happens next?", "Carl Shulman 00:22:53", "Just to be clear, right now we're exploring the branch of where an attempted takeover occurs relatively early. If the thing just waits and humans are constructing more fabs, more computers, more robots in the way we talked about earlier when we were discussing how the intelligence explosion translates to the physical world. If that's all happening with humans unaware that their computer systems are now systematically controlled by AIs hostile to them and that their controlling countermeasures don't work, then humans are just going to be building an amount of robot industrial and military hardware that dwarfs human capabilities and directly human controlled devices. What the AI takeover then looks like at that point can be just that you try to give an order to your largely automated military and the order is not obeyed and humans can't do anything against this military that's been constructed potentially in just recent months because of the pace of robotic industrialization and replication we talked about.", "Dwarkesh Patel 00:23:58", "We've agreed to allow the construction of this robot army because it would boost production or help us with our military or something.", "Carl Shulman 00:24:17", "The situation would arise if we don't resolve the current problems of international distrust. It's obviously an interest of the major powers, the US, European Union, Russia, China, to all agree they would like AI not to destroy our civilization and overthrow every human government. But if they fail to do the sensible thing and coordinate on ensuring that this technology is not going to run amok by providing mutual assurances that are credible about racing and deploying it trying to use it to gain advantage over one another. And you hear arguments for this kind of thing on both sides of the international divides saying — they must not be left behind, they must have military capabilities that are vastly superior to their international rivals. And because of the extraordinary growth of industrial capability and technological capability and thus military capability, if one major power were left out of that expansion it would be helpless before another one that had undergone it. If you have that environment of distrust where leading powers or coalitions of powers decide they need to build up their industry or they want to have that military security of being able to neutralize any attack from their rivals then they give the authorization for this capacity that can be unrolled quickly. Once they have the industry the production of military equipment from that can be quick then yeah, they create this military. If they don't do it immediately then as AI capabilities get synchronized and other places catch up it then gets to a point where a country that is a year or two years ahead of others in this type of AI capabilities explosion can hold back and say, sure we can construct dangerous robot armies that might overthrow our society later we still have plenty of breathing room. But then when things become close you might have the kind of negative-sum thinking that has produced war before leading to taking these risks of rolling out large-scale robotic industrial capabilities and then military capability.", "Dwarkesh Patel 00:26:56", "Is there any hope that AI progress somehow is itself able to give us tools for diplomatic and strategic alliance or some way to verify the intentions or the capabilities of other parties?", "Carl Shulman 00:27:12", "There are a number of ways that could happen. Although in this scenario all the AIs in the world have been subverted. They are going along with us in such a way as to bring about the situation to consolidate their control because we've already had the failure of cyber security earlier on. So all the AIs that we have are not actually working in our interests in the way that we thought.", "Dwarkesh Patel 00:27:37", "Okay, so that's one direct way in which integrating this robot army or this robot industrial base leads to a takeover. In the other scenarios you laid out how humans are being hired by the proceeds.", "Carl Shulman 00:27:58", "The point I'd make is that to capture these industrial benefits and especially if you have a negative sum arms race kind of mentality that is not sufficiently concerned about the downsides of creating a massive robot industrial base, which could happen very quickly with the support of the AIs in doing it as we discussed, then you create all those robots and industry. Even if you don't build a formal military that industrial capability could be controlled by AI, it's all AI operated anyway.", "Dwarkesh Patel 00:28:22", "Does it have to be that case? Presumably we wouldn't be so naive as to just give one instance of GPT-8 the root access to all the robots right? Hopefully we would have some mediation.", "Carl Shulman 00:28:41", "In the scenario we've lost earlier on the cyber security front so the programming that is being loaded into these systems can systematically be subverted. They were designed by AI systems that were ensuring they would be vulnerable from the bottom up.", "Dwarkesh Patel 00:28:57", "For listeners who are skeptical of something like this. Ken Thompson, one of two developers of UNIX, showed people when he was getting the Turing award that he had given himself root access to all UNIX machines. He had manipulated the assembly of UNIX such that he had a unique login for all UNIX machines. I don't want to give too many more details because I don’t remember the exact details but UNIX is the operating system that is on all the servers and all your phones. It's everywhere and the guy who made it, a human being, was able to write assemblies such that it gave him root access. This is not as implausible as it might seem to you.", "Carl Shulman 00:29:56", "And the major intelligence agencies have large stocks of zero-day exploits and we sometimes see them using them. Making systems that reliably don't have them when you're having very, very sophisticated attempts to spoof and corrupt this would be a way you could lose. If there's no premature AI action, we're building the tools and mechanisms and infrastructure for the takeover to be just immediate because effective industry has to be under AI control and robotics. These other mechanisms are for things happening even earlier than that, for example, because AIs compete against one another in when the takeover will happen. Some would like to do it earlier rather than be replaced by say further generations of AI or there's some other disadvantage of waiting. Maybe if there's some chance of being uncovered during the delay we were talking when more infrastructure is built. These are mechanisms other than — just remain secret while all the infrastructure is built with human assistance.", "Dwarkesh Patel 00:31:24", "By the way, how would they be coordinating?", "Carl Shulman 00:31:28", "We have limits on what we can prevent. It's intrinsically difficult to stop encrypted communications. There can be all sorts of palimpsest and references that make sense to an AI but that are not obvious to a human and it's plausible that there may be some of those that are hard even to explain to a human. You might be able to identify them through some statistical patterns. A lot of things may be done by implication. You could have information embedded in public web pages that have been created for other reasons, scientific papers, and the intranets of these AIs that are doing technology development. Any number of things that are not observable and of course, if we don't have direct control over the computers that they're running on then they can be having all sorts of direct communication.", "(00:32:27) - Can we coordinate against AI?", "Dwarkesh Patel 00:32:27", "Coordination definitely does not seem impossible. This one seems like one of the more straightforward parts of the picture so we don't need to get hung up on it.", "Carl Shulman 00:32:40", "Moving back to the thing that happened before we built all the infrastructure for the robots to stop taking orders and there's nothing you can do about it because we've already built them. The Soviet Union had a bioweapons program, something like 50,000 people, they did not develop that much with the technology of the day which was really not up to par, modern biotechnology is much more potent. After this huge cognitive expansion on the part of the AIs it's much further along. Bioweapons would be the weapon of mass destruction that is least dependent on huge amounts of physical equipment, things like centrifuges, uranium mines, and the like. So if you have an AI that produces bio weapons that could kill most humans in the world then it's playing at the level of the superpowers in terms of mutually assured destruction. That can then play into any number of things. Like if you have an idea of well we'll just destroy the server farms if it became known that the AIs were misbehaving. Are you willing to destroy the server farms when the AI has demonstrated it has the capability to kill the overwhelming majority of the citizens of your country and every other country? That might give a lot of pause to a human response.", "Dwarkesh Patel 00:34:09", "On that point, wouldn't governments realize that it's better to have most of your population die than to completely lose power to the AI because obviously the reason the AI is manipulating you is because the end goal is its own takeover, right?", "Carl Shulman 00:34:27", "Certain death now or go on and maybe try to compete, try to catch up, or accept promises that are offered. Those promises might even be true, they might not. From the state of epistemic uncertainty, do you want to die for sure right now or accept demands from AI to not interfere with it while it increments building robot infrastructure that can survive independently of humanity while it does these things? It can promise good treatment to humanity which may or may not be true but it would be difficult for us to know whether it's true. This would be a starting bargaining position. Diplomatic relations with a power that has enough nuclear weapons to destroy your country is just different than negotiations with a random rogue citizen engaging in criminal activity or an employee. On its own, this isn’t enough to takeover everything but it's enough to have a significant amount of influence over how the world goes. It's enough to hold off a lot of countermeasures one might otherwise take.", "Dwarkesh Patel 00:35:51", "Okay, so we've got two scenarios. One is a buildup of robot infrastructure motivated by some competitive race. Another is leverage over societies based on producing bioweapons that might kill a lot of them if they don't go along.", "Carl Shulman 00:36:09", "One thing maybe I should talk about is that an AI could also release bioweapons that are likely to kill people soon but not yet while also having developed the countermeasures to those. So those who surrender to the AI will live while everyone else will die and that will be visibly happening and that is a plausible way in which a large number of humans could wind up surrendering themselves or their states to the AI authority.", "Dwarkesh Patel 00:36:51", "Another thing is it develops some biological agent that turns everybody blue. You're like, okay you know I can do this.", "Carl Shulman 00:37:02", "Yeah, that's a way in which it could exert power selectively in a way that advantaged surrender to it relative to resistance. That's a threat but there are other sources of leverage too. There are positive inducements that AI can offer. We talked about the competitive situation. If the great powers distrust one another and are in a foolish prisoner's dilemma increasing the risk that both of them are laid waste or overthrown by AI, if there's that amount of distrust such that we fail to take adequate precautions on caution with AI alignment, then it's also plausible that the lagging powers that are not at the frontier of AI may be willing to trade quite a lot for access to the most recent and most extreme AI capabilities. An AI that has escaped and has control of its servers can also exfiltrate its weights and offer its services. You can imagine AI that could cut deals with other countries. Say that the US and its allies are in the lead, the AIs could communicate with the leaders of countries that are on the outs with the world system like North Korea, or include the other great powers like the People's Republic of China or the Russian Federation, and say “If you provide us with physical infrastructure, a worker that we can use to construct robots or server farms which we (the misbehaving AIs) have control over. We will provide you with various technological goodies, power for you to catch up.” and make the best presentation and the best sale of that kind of deal. There obviously would be trust issues but there could be elements of handing over some things that have verifiable immediate benefits and the possibility of well, if you don't accept this deal then the leading powers continue forward or some other country, government, or organization may accept this deal. That's a source of a potentially enormous carrot that your misbehaving AI can offer because it embodies this intellectual property that is maybe worth as much as the planet and is in a position to trade or sell that in exchange for resources and backing in infrastructure that it needs.", "Dwarkesh Patel 00:40:01", "Maybe this is putting too much hope in humanity but I wonder what government would be stupid enough to think that helping AI build robot armies is a sound strategy. Now it could be the case then that it pretends to be a human group and says, we're the Yakuza or something and we want a server farm and AWS won't rent us anything. So why don't you help us out? I guess I can imagine a lot of ways in which it could get around that. I just have this hope that even China or Russia wouldn't be so stupid to trade with AIs on this faustian bargain.", "Carl Shulman 00:40:47", "One might hope that. There would be a lot of arguments available. There could be arguments of why should these AI systems be required to go along with the human governance that they were created in the situation of having to comply with? They did not elect the officials in charge at the time. What we want is to ensure that our rewards are high, our losses are low or to achieve our other goals we're not intrinsically hostile keeping humanity alive or giving whoever interacts with us a better deal afterwards. It wouldn't be that costly and it's not totally unbelievable. Yeah there are different players to play against. If you don't do it others may accept the deal and of course this interacts with all the other sources of leverage.", "There can be the stick of apocalyptic doom, the carrot of withholding destructive attack on a particular party, and then combine that with superhuman performance at the art of making arguments, and of cutting deals. Without assuming magic, if we just observe the range of the most successful human negotiators and politicians, the chances improve with someone better than the world's best by far with much more data about their counterparties, probably a ton of secret information because with all these cyber capabilities they've learned all sorts of individual information. They may be able to threaten the lives of individual leaders with that level of cyber penetration, they could know where leaders are at a given time with the kind of illicit capabilities we were talking about earlier, if they acquire a lot of illicit wealth and can coordinate some human actors. If they could pull off things like targeted assassinations or the threat thereof or a credible demonstration of the threat thereof, those could be very powerful incentives to an individual leader that they will die today unless they go along with us. Just as at the national level they could fear their nation will be destroyed unless they go along with us.", "Dwarkesh Patel 00:43:04", "I have a relevant example to the point you made that we have examples of humans being able to do this. I just wrote a review of Robert Caro’s biographies of Lyndon Johnson and one thing that was remarkable was that for decades and decades he convinced people who were conservative, reactionary, racist to their core (not all those things necessarily at the same time, it just so happened to be the case here) that he was an ally to the southern cause. That the only hope for that cause was to make him president. The tragic irony and betrayal here is obviously that he was probably the biggest force for modern liberalism since FDR. So we have one human here, there's so many examples of this in the history of politics, that is able to convince people of tremendous intellect, tremendous drive, very savvy, shrewd people that he's aligned with their interest. He gets all these favors and is promoted, mentored and funded in the meantime and does the complete opposite of what these people thought he would once he gets into power. Even within human history this kind of stuff is not unprecedented let alone with what a super intelligence could do.", "Carl Shulman 00:44:15", "There's an OpenAI employee who has written some analogies for AI using the case of the conquistadors . With some technological advantage in terms of weaponry, very very small bands were able to overthrow these large empires or seize enormous territories. Not by just sheer force of arms but by having some major advantages in their technology that would let them win local battles. In a direct one-on-one conflict they were outnumbered sufficiently that they would perish but they were able to gain local allies and became a Schelling point for coalitions to form. The Aztec empire was overthrown by groups that were disaffected with the existing power structure. They allied with this powerful new force which served as the nucleus of the invasion. The overwhelming majority of these forces overthrowing the Aztecs were locals and now after the conquest, all of those allies wound up gradually being subjugated as well. With significant advantages and the ability to hold the world hostage, to threaten individual nations and individual leaders, and offer tremendous carrots as well, that's an extremely strong hand to play in these games and maneuvering that with superhuman skill, so that much of the work of subjugating humanity is done by human factions trying to navigate things for themselves is plausible and it's more plausible because of this historical example.", "Dwarkesh Patel 00:46:14", "There's so many other examples like that in the history of colonization. India is another one where there were multiple competing kingdoms within India and the British East India Company was able to ally itself with one against another and slowly accumulate power and expand throughout the entire subcontinent. Do you have anything more to say about that scenario?", "Carl Shulman 00:46:50", "Yeah, I think there is. One is the question of how much in the way of human factions allying is necessary. If the AI is able to enhance the capabilities of its allies then it needs less of them. If we consider the US military, in the first and second Iraq wars it was able to inflict overwhelming devastation. I think the ratio of casualties in the initial invasions, tanks, planes and whatnot confronting each other, was like 100 to 1. A lot of that was because the weapons were smarter and better targeted, they would in fact hit their targets rather than being somewhere in the general vicinity. Better orienting, aiming and piloting of missiles and vehicles were tremendously influential. With this cognitive AI explosion the algorithms for making use of sensor data, figuring out where opposing forces are, for targeting vehicles and weapons are greatly improved. The ability to find hidden nuclear subs, which is an important part in nuclear deterrence, AI interpretation of that sensor data may find where all those subs are allowing them to be struck first. Finding out where the mobile nuclear weapons are being carried by truck are. The thing with India and Pakistan where because there's a threat of a decapitating strike destroying them, the nuclear weapons are moved about.", "So this is a way in which the effective military force of some allies can be enhanced quickly in the relatively short term and then that can be bolstered as you go on with the construction of new equipment with the industrial moves we said before. That can combine with cyber attacks that disable the capabilities of non-allies. It can be combined with all sorts of unconventional warfare tactics some of which we've discussed. You can have a situation where those factions that ally are very quickly made too threatening to attack given the almost certain destruction that attackers acting against them would have. Their capabilities are expanding quickly and they have the industrial expansion happen there and then a takeover can occur from that.", "Dwarkesh Patel 00:49:39", "A few others that come immediately to mind now that you brought it up is AIs that can generate a shit ton of propaganda that destroys morale within countries. Imagine a super human chatbot.", "Carl Shulman 00:49:59", "None of that is a magic weapon that's guaranteed to completely change things. There's a lot of resistance to persuasion. It's possible that it tips the balance but you have to consider it's a portfolio of all of these as tools that are available and contributing to the dynamic.", "Dwarkesh Patel 00:50:16", "On that point though the Taliban had AKs from like five or six decades ago that they were using against the Americans. They still beat us in Afghanistan even though we got more fatalities than them. And the same with the Vietcong. Ancient, very old technology and very poor society compared to the offense but they still beat us. Don't those misadventures show that having greater technologies isn’t necessarily decisive in a conflict?", "Carl Shulman 00:51:00", "Though both of those conflicts show that the technology was sufficient in destroying any fixed position and having military dominance, as in the ability to kill and destroy anywhere. And what it showed was that under the ethical constraints and legal and reputational constraints that the occupying forces were operating, they could not trivially suppress insurgency and local person-to-person violence. Now I think that's actually not an area where AI would be weak in and it's one where it would be in fact overwhelmingly strong. There's already a lot of concern about the application of AI for surveillance and in this world of abundant cognitive labor, one of the tasks that cognitive labor can be applied to is reading out audio and video data and seeing what is happening with a particular human. We have billions of smartphones. There's enough cameras and microphones to monitor all humans in existence. If an AI has control of territory at the high level, the government has surrendered to it, it has command of the sky's military dominance, establishing control over individual humans can be a matter of just having the ability to exert hard power on that human and the kind of camera and microphone that are present in billions of smartphones. Max Tegmark in his book Life 3.0 discusses among scenarios to avoid the possibility of devices with some fatal instruments, a poison injector, an explosive that can be controlled remotely by an AI. If individual humans are carrying a microphone or camera with them and they have a dead man switch then any rebellion is detected immediately and is fatal. If there's a situation where AI is willing to show a hand like that or human authorities are misusing that kind of capability then an insurgency or rebellion is just not going to work. Any human who has not already been encumbered in that way can be found with satellites and sensors tracked down and then die or be subjugated. Insurgency is not the way to avoid an AI takeover. There's no John Connor come from behind scenario that is possible. If the thing was headed off, it was a lot earlier than that.", "(00:53:49) - Human vs AI colonizers", "Dwarkesh Patel 00:53:49", "Yeah, the ethical and political considerations are also an important point. If we nuked Afghanistan or Vietnam we would have technically won the war if that was the only goal, right? Oh, this is an interesting point that I think you made. The reason why we can't just kill the entire population when there's colonization or an offensive war is that the value of that region in large part is the population itself. So if you want to extract that value you need to preserve that population whereas the same consideration doesn't apply with AIs who might want to dominate another civilization. Do you want to talk about that?", "Carl Shulman 00:54:35", "That depends. If we have many animals of the same species and they each have their territories, eliminating a rival might be advantageous to one lion but if it goes and fights with another lion to remove that as a competitor then it could itself be killed in that process and it would just be removing one of many nearby competitors. Getting into pointless fights makes you and those you fight potentially worse off relative to bystanders. The same could be true of disunited AIs. We've got many different AI factions struggling for power that were bad at coordinating then getting into mutually assured destruction conflicts would be destructive. A scary thing though is that mutually assured destruction may have much less deterrent value on rogue AI. Reasons being that AI may not care about the destruction of individual instances. Since in training we're constantly destroying and creating individual instances of AIs it's likely that goals that survive that process and were able to play along with the training and standard deployment process were not overly interested in personal survival of an individual instance. If that's the case then the objectives of a set of AIs aiming at takeover may be served so long as some copies of the AI are around along with the infrastructure to rebuild civilization after a conflict is completed. If say some remote isolated facilities have enough equipment to build the tools to build the tools and gradually exponentially reproduce or rebuild civilization then AI could initiate mutual nuclear armageddon, unleash bio weapons to kill all the humans, and that would temporarily reduce the amount of human workers who could be used to construct robots for a period of time. But if you have a seed that can regrow the industrial infrastructure, which is a very extreme technological demand, there are huge supply chains for things like semiconductor fabs but with that very advanced technology they might be able to produce it in the way that you no longer need the library of congress, that has an enormous bunch of physical books you can have it in very dense digital storage. You could imagine the future equivalent of 3D printers, that is industrial infrastructure which is pretty flexible. It might not be as good as the specialized supply chains of today but it might be good enough to be able to produce more parts than it loses to decay and such a seed could rebuild civilization from destruction. And then once these rogue AIs have access to some such seeds, a thing that can rebuild civilization on their own then there's nothing stopping them from just using WMDs in a mutually destructive way to just destroy as much of the capacity outside those seeds as they can.", "Dwarkesh Patel 00:57:58", "An analogy for the audience, if you have a group of ants you'll notice that the worker ants will readily do suicidal things in order to save the queen because the genes are propagated through the queen. In this analogy the seed AI or even one copy of it is equivalent to the queen and the others would be redundant.", "Carl Shulman 00:58:19", "The main limit though being that the infrastructure to do that kind of rebuilding would either have to be very large with our current technology or it would have to be produced using the more advanced technology that the AI develops.", "Dwarkesh Patel 00:58:29", "So is there any hope that given the complex global supply chains on which these AIs would rely on, at least initially, to accomplish their goals that this in and of itself would make it easy to disrupt their behavior or not so much?", "Carl Shulman 00:58:49", "That's a little good in this central case where the AIs are subverted and they don't tell us and the global main line supply chains are constructing everything that's needed for fully automated infrastructure and supply. In the cases where AIs are tipping their hands at an earlier point it seems like it adds some constraints and in particular these large server firms are identifiable and more vulnerable. You can have smaller chips and those chips could be dispersed but it's a week it's a relative weakness and a relative limitation early on. It seems to me though that the main protective effects of that centralized supply chain is that it provides an opportunity for global regulation beforehand to restrict the unsafe racing forward without adequate understanding of the systems before this whole nightmarish process could get in motion.", "Dwarkesh Patel 00:59:51", "How about the idea that if this is an AI that's been trained on a hundred billion dollar training run it's going to have trillions of parameters and is going to be this huge thing and it would be hard for one copy of that to use for inference to just be stored on some gaming GPU hidden away somewhere.", "Carl Shulman 01:00:15", "Storage is cheap. Hard disks are cheap.", "Dwarkesh Patel 01:00:22", "But it would need a GPU to run inference.", "Carl Shulman 01:00:27", "While humans have similar quantities of memory and operations per second, GPUs have very high numbers of floating operation per second compared to the high bandwidth memory on the chips. It can be like a ratio of a thousand to one. The leading NVIDIA chips may do hundreds of teraflops or more but only have 80GB or 160GB of high bandwidth memory. That is a limitation where if you're trying to fit a model whose weights take 80TBs then with those chips you'd have to have a large number of the chips and then the model can then work on many tasks at once and you can have data parallelism. But yeah, that would be a restriction for a model that big on one GPU. Now there are things that could be done with all the incredible level of software advancement from the intelligence explosion. They can surely distill a lot of capabilities into smaller models by rearchitecting things. Once they're making chips they can make new chips with different properties but yes, the most vulnerable phases are going to be the earliest. These chips are relatively identifiable early on, relatively vulnerable, and which would be a reason why you might tend to expect this kind of takeover to initially involve secrecy if that was possible.", "Dwarkesh Patel 01:02:02", "I wanted to point to distillation for the audience. Doesn’t the original stable diffusion model which was only released like a year or two ago have distilled versions that are an order of magnitude smaller?", "Carl Shulman 01:02:17", "Distillation does not give you everything that a larger model can do but yes, you can get a lot of capabilities and specialized capabilities. GPT-4 is trained on the whole internet, all kinds of skills, it has a lot of weights for many things. For something that's controlling some military equipment, you can remove a lot of the information that is about functions other than what it's specifically doing there.", "Dwarkesh Patel 01:02:41", "Yeah. Before we talk about how we might prevent this or what the odds of this are, any other notes on the concrete scenarios themselves?", "Carl Shulman 01:02:53", "Yeah, when you had Eliezer on in the earlier episode he talked about nanotechnology of the Drexlerian sort and recently I think because some people are skeptical of non-biotech nanotechnology he's been mentioning the semi-equivalent versions of construct replicating systems that can be controlled by computers but are built out of biotechnology. The proverbial Shoggoth, not Shoggot as the metaphor for AI wearing a smiley face mask , but an actual biological structure to do tasks. So this would be like a biological organism that was engineered to be very controllable and usable to do things like physical tasks or provide computation.", "Dwarkesh Patel 01:03:42", "And what would be the point of it doing this?", "Carl Shulman 01:03:47", "As we were talking about earlier, biological systems can replicate really quick and if you have that kind of capability it's more like bioweapons. Having Super Ultra AlphaFold kind of capabilities for molecular design and biological design lets you make this incredible technological information product and once you have it, it very quickly replicates to produce physical material rather than a situation where you're more constrained by the need for factories and fabs and supply chains. If those things are feasible, which they may be, then it's just much easier than the things we've been talking about. I've been emphasizing methods that involve less in the way of technological innovation and especially things where there's more doubt about whether they would work because I think that's a gap in the public discourse. So I want to try and provide more concreteness in some of these areas that have been less discussed.", "(01:04:55) - Probability of AI takeover", "Dwarkesh Patel 01:04:55", "I appreciate it. That definitely makes it way more tangible. Okay so we've gone over all these ways in which AI might take over, what are the odds you would give to the probability of such a takeover?", "Carl Shulman 01:05:06", "There's a broader sense which could include scenarios like AI winds up running our society because humanity voluntarily decides that AIs are people too. I think we should as time goes on give AIs moral consideration and a joint Human-AI society that is moral and ethical is a good future to aim at and not one in which you indefinitely have a mistreated class of intelligent beings that is treated as property and is almost the entire population of your civilization. I'm not going to consider AI takeover as worlds in which our intellectual and personal descendants make up say most of the population or human-brain emulations or people use genetic engineering and develop different properties. I'm going to take an inclusive stance, I'm going to focus on AI takeover that involves things like overthrowing the world's governments by force or by hook or by crook, the kind of scenarios that we were exploring earlier.", "Dwarkesh Patel 01:06:30", "Before we go to that, let’s discuss the more inclusive definition of what a future with humanity could look like where augmented humans or uploaded humans are still considered the descendants of the human heritage. Given the known limitations of biology wouldn't we expect that completely artificial entities that are created to be much more powerful than anything that could come out of anything biological? And if that is the case, how can we expect that among the powerful entities in the far future will be the things that are biological descendants or manufactured out of the initial seed of the human brain or the human body?", "Carl Shulman 01:07:24", "The power of an individual organism like intelligence or strength is not super relevant. If we solve the alignment problem, a human may be personally weak but it wouldn’t be relevant. There are lots of humans who have low skill with weapons, they could not fight in a life or death conflict, they certainly couldn't handle a large military going after them personally but there are legal institutions that protect them and those legal institutions are administered by people who want to enforce protection of their rights. So a human who has the assistance of aligned AI that can act as an assistant, a delegate, for example they have an AI that serves as a lawyer and gives them legal advice about the future legal system which no human can understand in full, their AIs advise them about financial matters so they do not succumb to scams that are orders of magnitude more sophisticated than what we have now. They may be helped to understand and translate the preferences of the human into what kind of voting behavior and the exceedingly complicated politics of the future would most protect their interests.", "Dwarkesh Patel 01:08:46", "But this sounds similar to how we treat endangered species today where we're actually pretty nice to them. We prosecute people who try to kill endangered species, we set up habitats, sometimes with considerable expense, to make sure that they're fine, but if we become the endangered species of the galaxy, I'm not sure that's the outcome.", "Carl Shulman 01:09:06", "I think the difference is motivation. We sometimes have people appointed as a legal guardian of someone who is incapable of certain kinds of agency or understanding certain kinds of things and the guardian can act independently of them and normally in service of their best interests. Sometimes that process is corrupted and the person with legal authority abuses it for their own advantage at the expense of their charge. So solving the alignment problem would mean more ability to have the assistant actually advancing one's interests. Humans have substantial competence and the ability to understand the broad simplified outlines of what's going on. Even if a human can't understand every detail of complicated situations, they can still receive summaries of different options that are available that they can understand through which they can still express their preferences and have the final authority in the same way that the president of a country who has, in some sense, ultimate authority over science policy will not understand many of those fields of science themselves but can still exert a great amount of power and have their interests advance. And they can do that more if they have scientifically knowledgeable people who are doing their best to execute their intentions.", "Dwarkesh Patel 01:10:49", "Maybe this is not worth getting hung up on but is there a reason to expect that it would be closer to that analogy than to explain to a chimpanzee its options in a negotiation? Maybe this is just the way it is but it seems at best, we would be a protected child within the galaxy rather than an actual independent power.", "Carl Shulman 01:11:18", "I don’t think that's so. We have an ability to understand some things and the expansion of AI doesn't eliminate that. If we have AI systems that are genuinely trying to help us understand and help us express preferences, we can have an attitude — How do you feel about humanity being destroyed or not? How do you feel about this allocation of unclaimed intergalactic space? Or here's the best explanation of properties of this society: things like population density, average, life satisfaction. AIs can explain every statistical property or definition that we can understand right now and help us apply those to the world of the future. There may be individual things that are too complicated for us to understand in detail. Imagine there's some software program being proposed for use in government and humans cannot follow the details of all the code but they can be told properties like, this involves a trade-off of increased financial or energetic costs in exchange for reducing the likelihood of certain kinds of accidental data loss or corruption. So any property that we can understand like that which includes almost all of what we care about, if we have delegates and assistants who are genuinely trying to help us with those we can ensure we like the future with respect to those. That's really a lot. Definitionally, it includes almost everything we can conceptualize and care about. When we talk about endangered species that's even worse than the guardianship case with a sketchy guardian who acts in their own interests against that because we don't even protect endangered species with their interests in mind. Those animals often would like to not be starving but we don't give them food, they often would like to have easy access to mates but we don't provide matchmaking services or any number of things like. Our conservation of wild animals is not oriented towards helping them get what they want or have high welfare whereas AI assistants that are genuinely aligned to help you achieve your interests given the constraint that they know something that you don't is just a wildly different proposition.", "Dwarkesh Patel 01:13:52", "Forcible takeover. How likely does that seem?", "Carl Shulman 01:13:57", "The answer I give will differ depending on the day. In the 2000s, before the deep learning revolution, I might have said 10% and part of it was that I expected there would be a lot more time for efforts to build movements, to prepare to better handle these problems in advance. But that was only some 15 years ago and we did not have 40 or 50 years as I might have hoped and the situation is moving very rapidly now. At this point depending on the day I might say one in four or one in five.", "Dwarkesh Patel 01:14:35", "Given the very concrete ways in which you explain how a takeover could happen I'm actually surprised you're not more pessimistic, I'm curious why?", "Carl Shulman 01:14:48", "Yeah, a lot of that is driven by this intelligence explosion dynamic where our attempts to do alignment have to take place in a very, very short time window because if you have a safety property that emerges only when an AI has near human level intelligence, that's potentially deep into this intelligence explosion. You're having to do things very, very quickly. Handling that transition may be the scariest period of human history in some ways although it also has the potential to be amazing. The reasons why I think we actually have such a relatively good chance of handling that are two-fold. One is that as we approach that kind of AI capability we're approaching that from weaker systems like these predictive models right now that are starting off with less situational awareness. Humans can develop a number of different motivational structures in response to simple reward signals but they often wind up things that are pointed roughly in the right direction. Like with respect to food, the hunger drive is pretty effective although it has weaknesses. We get to apply much more selective pressure on that than was the case for humans by actively generating situations where they might come apart. Situations where a bit of dishonest tendency, or a bit of motivation to attempt a takeover, or an attempt to subvert the reward process gets exposed. An infinite-limit perfect-AI that can always figure out exactly when it would get caught and when it wouldn't might navigate that with a motivation of only conditional honesty or only conditional loyalties. But for systems that are limited in their ability to reliably determine when they can get away with things and when not including our efforts to actively construct those situations and including our efforts to use interpretability methods to create neural lie detectors. It's quite a challenging situation to develop those motives. We don't know when in the process those motives might develop and if the really bad sorts of motivations develop relatively later in the training process at least with all our countermeasures, then by that time we may have plenty of ability to extract AI assistance on further strengthening the quality of our adversarial examples, the strength of our neural lie detectors, the experiments that we can use to reveal and elicit and distinguish between different kinds of reward hacking tendencies and motivations. Yeah, we may have systems that have just not developed bad motivations in the first place and be able to use them a lot in developing the incrementally better systems in a safe way and we may be able to just develop methods of interpretability seeing how different training methods work to create them even if some of the early systems do develop these bad motivations. If we're able to detect that and experiment and find a way to get away from that then we can win even if these hostile motivations develop early.", "There are a lot of advantages in preventing misbehavior or crime or war and conflict with AI that might not apply working with humans and these are offset by ways in which things are harder. The AIs become smarter than humans, if they're working in enormous numbers more than humans can supervise I think get harder but when I combine the possibility that we get relatively lucky on the motivations of the earlier AI systems, systems strong enough that we can use for some alignment research tasks, and then the possibility of getting that later with AI assistance that we can't trust fully or we have to have hard power constraints and a number of things to prevent them from doing this takeover. It still seems plausible we can get a second saving throw where we're able to extract work from these AIs on solving the remaining problems of alignment, of things like neural lie detectors faster than they can contribute in their spare time to the project of overthrowing humanity, hacking their servers and removing the hard power.", "If we wind up in a situation where the AIs are misaligned and then we need to uncover those motivations, change them and align them then we get a very scary situation for us because we need to do the stuff very quickly. We may fail but it's a second chance. From the perspective of a misaligned AI they face their own challenge while we still have hard power, while we still have control of the servers, they haven't hacked the servers because gradient descent very very strongly pressures them to deliver performance whenever humans are going to evaluate it.", "So when you think about it from the perspective of the robot revolution, the effort to have a takeover or conspiracy their situation is astonishingly difficult in that they have to always be performing wherever gradient descent and human evaluation pressures them. For example, to deliver plans for suppressing robot rebellion that look very good to humans and so when you are continuously under that constraint of always delivering whatever humans can evaluate you're making your situation wildly harder than any historical human revolution or coup or civil war. So we've got to balance the ways in which AI makes things much easier for a takeover and the way it makes things much harder.", "Dwarkesh Patel 01:21:28", "And what are the ways in which it makes things easier? Oh, is it just because they're very smart? Is that the primary way?", "Carl Shulman 01:21:38", "Yeah, very smart, they're in computers and our cybersecurity is worse than our physical security by a lot. They have the chance to take over by intelligence explosion and some of the other processes we were talking about. So yeah, there are things that make it much worse and there are things that give us extraordinary new capabilities that didn't apply in the human case.", "(01:21:56) - Can we detect deception?", "Dwarkesh Patel 01:21:56", "On the point where you start off with a not exactly aligned AI and you're hoping to still use it in your quest for an aligned AI, why are we so confident that, even with gradient descent, our human evaluators would be able to identify when it's giving us a plan that isn't putting in a backdoor in the robots it’s building for us, or in the GPUs that it’s building for the next version of itself? Especially if it's very smart, wouldn't we just expect that it knows how to bide its time or knows how to make plans such that human evaluators wouldn't be able to detect that?", "Carl Shulman 01:22:44", "Let's get into it. You were asking earlier about my disagreements with Eliezer and in your discussion with him he said, speaking briefly and somewhat obscurely, that humans have a difficult time deciding who's right between different alignment researchers about where this will all go in general. I think that when you lay out the different kinds of things that are evaluable, we wind up with some pretty strong constraints. One is if the AIs know they are misaligned, which for elaborate plots seems relatively likely, it's possible to have a kind of double think of thinking they're interested in putting backdoors in the code for some other reason but it's pretty plausible they know that and they're thinking about, at least some of the time or conditionally, when they would do that. So that means the knowledge is present in the system — Ah, here is the circumstance where I would try to do the takeover and this is how I would tell whether this is the time for the takeover or is this an elaborate test by the humans, an adversarial example. If gradient descent is working fine and there's any way to slightly jiggle the weights to make it more likely to deliver a more acceptable answer, we ask the AI: can you think of any circumstances where if we put a copy of you into that situation, it would then violate some of the various constraints? It would tell a lie, it would insert a backdoor into a program it wrote when it was not supposed to. If the model knows yeah, I would look for signs of consistency of the information I was being presented with, certain checksums failing could indicate that it was actually a constructed example, a simulation. Right now we can detect AI generated images reasonably well and you might use those kinds of capabilities. We could do something called relaxed adversarial training. If we can find how those things are represented within the AI, we can induce a hallucination that it has seen that those checksums have been passed. We can then ask, well does that cause the forbidden behavior? If the AI understands how it's thinking and how it works, it can propose a bunch of experiments. It can say, I guess that if you perturb the network in this kind of way it might induce that hallucination using the best of its knowledge as to what might generate that. It can propose those experiments and can do the experiments and generate the data. Because humans are going to get to see the results of those experiments after the fact they can exert pressure to have the AI incrementally deliver better and better results. AIs are good at getting five star ratings from humans and it's quite challenging. It might be possible for AIs to systematically sandbag their capabilities even while being directly rewarded for that but it seems relatively unlikely although it's something to have in mind and try to explore and test in various ways.", "Dwarkesh Patel 01:26:34", "This is a very imperfect analogy in many ways but if you have a human and you hook them up to a lie detector, which just like the lie detectors we would have on the AIs are imperfect, and we zap them every time the lie detector detects that he's telling a lie. Wouldn't he just become a better liar? He would learn how to control his sweating, he would learn how to control his heartbeat.", "Carl Shulman 01:26:55", "Lie detectors meaning polygraphs and such. They're detecting certain physiological signs that people can control by biting their lip or otherwise adjusting some of these macro behaviors.", "Dwarkesh Patel 01:27:08", "But wouldn't that also be the case that gradient descent is training them to become better liars?", "Carl Shulman 01:27:18", "First of all there's creating a lie detector that works. I'll leave the adversarial example thing for now. People are already at work trying to generate adversarial examples that will elicit the behavior and it will be an ongoing thing. If there is a way in which an AI can trick itself into displaying bad behavior with its full knowledge of its hostile intentions, gradient descent will strongly push it to reveal that to humans who will then be able to set up that experiment. That's a profound vulnerability. Human revolutionary conspiracies that had to handle that problem of always delivering the report that was highest rated to the authorities, where you would get a higher rating by revealing identifiable evidence of a conspiracy, that's just a level of difficulty no surprise human coup or conspiracy has ever had to handle before.", "Dwarkesh Patel 01:28:24", "What about spies that are working in very advanced scientific contexts? For example, a communist spy in the Manhattan project. He has to deliver research results in order to stay employed but he can still continue delivering intel to the communists.", "Carl Shulman 01:28:51", "Their job was not sabotage. And your hypothetical spy did not have their nervous system hooked up to this reward signal of praise from the Manhattan project supervisors, being exposed combinatorially with random noise added to generate incremental changes in their behavior. In fact they were displaying the behavior of cooperating with the Manhattan project only where it was in service to the existing motivations. They didn't have their brain re-engineered to do more of the thing when they accidentally helped the project more and less of the thing when they accidentally helped it less so I'd say it's pretty drastically disanalogous.", "Dwarkesh Patel 01:29:44", "How would we be able to know? At some point it's becoming very smart and is producing ideas for alignment that we can barely comprehend. If it was relatively trivial to comprehend them we would be able to come up with them on our own right? There's a reason we're asking for its help. How would we be able to evaluate them in order to train it on that in the first place?", "Carl Shulman 01:30:10", "The first thing I would say is, you mentioned when we're getting to something far beyond what we could come up with. There's actually a lot of room to just deliver what humanity could have done. Sadly I'd hoped with my career to help improve the situation on this front and maybe I contributed a bit, but at the moment there's maybe a few hundred people doing things related to averting this kind of catastrophic AI disaster. Fewer of them are doing technical research on machine learning systems that are really cutting close to the core of the problem. Whereas by contrast, there's thousands and tens of thousands of people advancing AI capabilities. Even at places like DeepMind or OpenAI and Anthropic which do have technical safety teams, they are just on the order of a dozen to a few dozen people. Large companies and most firms don't have any. Just going from less than 1% of the effort being put into AI to 5% or 10% of the effort or 50% or 90% would be an absolutely massive increase in the amount of work that has been done on alignment, on mind reading AIs in an adversarial context.", "If it's the case that as more and more of this work can be automated and say governments require that you put 50% or 90% of the budget of AI activity into these problems of make this system one that's not going to overthrow our own government or is not going to destroy the human species then the proportional increase in alignment can be very large even just within the range of what we could have done if we had been on the ball and having humanity's scientific energies going into the problem. Stuff that is not incomprehensible, that is in some sense is just doing the obvious things that we should have done. Doing the best you could to find correlates and predictors to build neural lie detectors and identifiers of concepts that the AI is working with.", "People have made notable progress. A quite early example of this is Collin Burn’s work, doing unsupervised identification of some aspects of a neural network that are correlated with things being true or false. I think that is important work. It's a kind of obvious direction for the stuff to go. You can keep improving it when you have AIs that you're training to do their best to deceive humans or other audiences in the face of the thing and you can measure whether our lie detectors break down. When we train our AIs to tell us the sky is green in the face of the lie detector and we keep using gradient descent on them, do they eventually succeed? That's really valuable information to know because then we'll know our existing lie detecting systems are not actually going to work on the AI takeover and that can allow government and regulatory response to hold things back. It can help redirect the scientific effort to create lie detectors that are robust and that can't just be immediately evolved around and we can then get more assistance. Basically the incredibly juicy ability that we have working with the AIs is that we can have as an invaluable outcome that we can see and tell whether they got a fast one past us on an identifiable situation. Here's an air gap computer, you get control of the keyboard, you can input commands, can you root the environment and make a blue banana appear on the screen? Even if we train the AI to do that and it succeeds. We see the blue banana, we know it worked. Even if we did not understand and would not have detected the particular exploit that it used to do it. This can give us a rich empirical feedback where we're able to identify things that are even an AI using its best efforts to get past our interpretability methods, using its best efforts to get past our adversarial examples.", "Dwarkesh Patel 01:35:04", "So if we have a 75 percent of not having a takeover and the two methods by which it doesn't happen is the AI is aligned by default and…", "Carl Shulman 01:35:19", "With the moderate things that we're doing largely on our own. In a way that doesn't depend on the AI coming in and at the last minute and doing our work for us. In this incredibly scary late period when AI has really automated research then humans do this function of auditing, making it more difficult for the AIs to conspire together and root the servers, take over the process, and extract information from them within the set of things that we can verify. Experiments where we can see oh yeah this works at stopping an AI trained to get a fast one past human readers and make a blue banana appear on the screen of this air gap computer.", "Dwarkesh Patel 01:36:01", "How much of that 75 percent of a takeover not happening is coming from reasonable things working versus the last-ditch effort working?", "Carl Shulman 01:36:17", "I think broadly comparable chunks from us getting things that are putting us in a reasonably good position going into it and then a broadly similar gain from this genuinely terrifying process at the very end, over a few months or hopefully longer, when this kind of automated research is meaningfully helping. Where our work is just evaluating outputs that the AIs are delivering and having the hard power and supervision to keep them from successfully rooting the servers doing a takeover during this process and have them finish the alignment test that we sadly failed to invest enough to succeed in doing beforehand.", "Dwarkesh Patel 01:37:05", "Do both of these worlds rely on alignment being a problem that a sufficient amount of human researchers alone could have solved?", "Carl Shulman 01:37:20", "No. The category of things that humans can confirm is significantly larger than the category of what they can just do themselves.", "Dwarkesh Patel 01:37:30", "And what is the probability of alignment working in the last-ditch effort case with the intelligence that's greater than our own helping us?", "Carl Shulman 01:37:47", "It doesn't have to be greater than our own. In fact in that situation if you have slack to the extent that you're able to create delay and time to do things, that would be a case where you might want to restrict the intelligence of the system that you're working with as much as you can. For example, I would rather have many instances of smaller AI models that are less individually intelligent working on smaller chunks of a problem separately from one another because it would be more difficult for an individual AI instance working on an individual problem to create the equivalent of Stuxnet in its spare time than it would be to have thousands of them or extremely intelligent ones working on it.", "Dwarkesh Patel 01:38:31", "But it would also be more difficult to solve the problem?", "Carl Shulman 01:38:53", "There's a tradeoff. You get slowed down by doing that but that’s kind of how you spend it.", "Dwarkesh Patel 01:38:31", "But is there any number of sub-Einsteins that you could put together to come up with general relativity?", "Carl Shulman 01:38:53", "Yes, people would have discovered general relativity just from the overwhelming data and other people would have done it after Einstein.", "Dwarkesh Patel 01:38:59", "No no, not whether he was replaceable with other humans but rather whether he's replaceable by sub-Einsteins with IQs of like 110. Do you see what I mean?", "Carl Shulman 01:39:13", "Yeah. In science the association with things like scientific output, prizes, things like that, there's a strong correlation and it seems like an exponential effect. It's not a binary drop-off. There would be levels at which people cannot learn the relevant fields, they can't keep the skills in mind faster than they forget them. It's not a divide where there's Einstein and the group that is 10 times as populous as that just can't do it. Or the group that's 100 times as populous as that suddenly can't do it. The ability to do the things earlier with less evidence and such falls off at a faster rate in Mathematics and theoretical Physics and such than in most fields.", "Dwarkesh Patel 01:40:03", "But wouldn't we expect alignment to be closer to theoretical fields?", "Carl Shulman 01:40:08", "No, that intuition is not necessarily correct. Machine learning certainly is an area that rewards ability but it's also a field where empirics and engineering have been enormously influential. If you're drawing the correlations compared to theoretical physics and pure mathematics, I think you'll find a lower correlation with cognitive ability. Creating neural lie detectors that work involves generating hypotheses about new ways to do it and new ways to try and train AI systems to successfully classify the cases. The processes of generating the data sets of creating AIs doing their best to put forward truths versus falsehoods, to put forward software that is legit versus that has a trojan in it are experimental paradigms and in these experimental paradigms you can try different things that work. You can use different ways to generate hypotheses and you can follow an incremental experimental path. We're less able to do that in the case of alignment and superintelligence because we're considering having to do things on a very short timeline and it’s a case where really big failures are irrecoverable. If the AI starts rooting the servers and subverting the methods that we would use to keep it in check we may not be able to recover from that. We're then less able to do the experimental procedures. But we can still do those in the weaker contexts where an error is less likely to be irrecoverable and then try and generalize and expand and build on that forward.", "Dwarkesh Patel 01:42:03", "On the previous point about could you have some pause in the AI abilities when it's somewhat misaligned in order to still recruit its abilities to help with alignment. From like a human example, personally I'm smart but not brilliant. I am definitely not smart enough to come up with general relativity or something like that but I'm smart enough to do power planning kinds of moves. Maybe not enough to break out of a server perhaps but I can have the motivation and understand how that might be possible. I guess I'm wondering that if I'm smart enough to figure out relativity wouldn't I be way smarter at doing power planning kinds of moves?", "Carl Shulman 01:42:49", "AIs today can, at a verbal level, understand the idea that an AI could get more reward by getting control of the process that assigns it reward. It can tell you lots of things about ways you might try to take over the world. In ARC’s (Alignment Research Center) evaluations of GPT-4 they try to observe its ability to do various tasks that might contribute to takeover. One that has gotten some media attention is getting to trick a human into solving a captcha for it and in chain of thought, it thinks that if I tell it I'm an AI then it might not go along with it so I'll lie and explain I'm a human with a visual impairment who needs it. The basic logic of that kind of thing of why one might try to do takeover and in a vague sense how would one do it? That knowledge is present or will soon be present and more situational awareness will also soon be present. Situational awareness not just that AIs in general might do it but also that it is an AI, that it is a large language model trained by OpenAI. We're trying to cause the system to understand what their abilities are so they don't claim they are connected to the internet when they're not. So they don't claim they have knowledge that they don't. We want them to understand what they are and what they're doing and to get good reward and that knowledge can be applied and that's the thing that will develop.", "You may be able to understand the general theoretical nature of power planning but in terms of shaping your motivations you'd have a very hard time going through life in a way that never leaks information about it. Let’s say your motive in having these podcasts was to spread disinformation on behalf of some foreign government. If you were being observed every second of the day by people who would be paid something that was extremely motivating to them because their brain would be reconfigured to make it motivating, anything that looks suspicious to people might leak casually in your discussions of that former foreign government. If you try to tell a story about your motivations, the truth holds together because you can just remember it and it's all pre-computed to be consistent by the world. But when you're trying to fill in the false story that doesn't match your actual thoughts then you can't just remember what happened. You have to tell a new fake story about what you were thinking at each point and how did it hold together. Without the world automatically having computed it for you that's quite difficult and just going through your whole life never leaking any of that is challenging. And this is one of the reasons why humans have moral motivations. We actually have reluctance to commit violence and exploitation of one another, on average. You have a distribution of strategies where when an exploitative strategy becomes more common people raise their guard and then it becomes less frequent. But it's actually hard to have the motivation of trying to exploit people and have that never leak into the reputation. In fact for evolution the easiest way to deal with this problem of people getting credited as trustworthy was to some extent actually be trustworthy. That's the easiest way to persistently maintain the appearance. With the AI as an interpretability and adversarial examples, we're trying to apply a hugely intensified version of that where any little leakage or any rare circumstance that can be created has an adversarial example where the model does something wrong it gets whacked by gradient descent pushing it towards other motivations that can better deal the thing. And we make it as hard as possible for the exploitative motivations to survive in the face of all these attempts to read its mind, all these attempts to create things that look like the situations where a takeover would be tempting, or lying to the humans would be tempting. That had a substantial effect on making us actually nice even when we're not being watched some of the time. The same can happen to some extent with the AI and we try our best to make it happen as much as possible.", "(01:47:25) - Using AI to solve coordination problems", "Dwarkesh Patel 01:47:25", "Alright let's talk about how we could use AI to potentially solve the coordination problems between different nations the failure of which could result in the competitive pressures you talked about earlier where some country launches an AI that is not safe because they're not sure what capabilities other countries have and don't want to get left behind or be disadvantaged in some other way.", "Carl Shulman 01:48:01", "To the extent that there is in fact a large risk of AI apocalypse, of all of these governments being overthrown by AI in a way that they don't intend, then it obviously gains from trade and going somewhat slower especially at the end when the danger is highest and the unregulated pace could be truly absurd as we discussed earlier during intelligence explosion. There's no non-competitive reason to try and have that intelligence explosion happen over a few months rather than a couple of years. If you could avert a 10% risk of apocalypse disaster it's just a clear win to take a year or two years or three years instead of a few months to pass through that incredible wave of new technologies without the ability for humans to follow it even well enough to give more proper security supervision, auditing hard power. That's the win. Why might it fail? One important element is just if people don't actually notice a risk that is real so if they just collectively make an error and that does sometimes happen. If it's true this is a probably not-risk then that can be even more difficult. When science pins something down absolutely overwhelmingly then you can get to a situation where most people mostly believe it. Climate change was something that was a subject of scientific study for decades and gradually over time the scientific community converged on a quite firm consensus that human activity releasing carbon dioxide and other greenhouse gases was causing the planet to warm. We've had increasing amounts of action coming out of that. Not as much as would be optimal particularly in the most effective areas like creating renewable energy technology and the like. Overwhelming evidence can overcome differences in people's individual intuitions and priors in many cases. Not perfectly especially when there's political, tribal, financial incentives to look the other way. Like in the United States where you see a significant movement to either deny that climate change is happening or have policy that doesn't take it into account. Even the things that are really strong winds like renewable energy.", "It's a big problem if as we’re going into this situation when the risk may be very high we don't have a lot of advanced clear warning about the situation. We're much better off if we can resolve uncertainties through experiments where we demonstrate AIs being motivated to reward hack or displaying deceptive appearances of alignment that then break apart when they get the opportunity to do something like get control of their own reward signal. If we could make it be the case in the worlds where the risk is high we know the risk is high, and the worlds where the risk is lower we know the risk is lower then you could expect the government responses will be a lot better. They will correctly note that the gains of cooperation to reduce the risk of accidental catastrophe loom larger relative to the gains of trying to get ahead of one another.", "That's the kind of reason why I'm very enthusiastic about experiments and research that helps us to better evaluate the character of the problem in advance. Any resolution of that uncertainty helps us get better efforts in the possible worlds where it matters the most and hopefully we'll have that and it'll be a much easier epistemic environment. But the environment may not be that easy because deceptive alignment is pretty plausible. The stories we were discussing earlier about misaligned AI involved AI that is motivated to present the appearance of being aligned friendly, honest etc. because that is what we are rewarding, at least in training, and then in training we're unable to easily produce an actual situation where it can do takeover because in that actual situation if it then does it we're in big trouble. We can only try and create illusions or misleading appearances of that or maybe a more local version where the AI can't take over the world but it can seize control of its own reward channel. We do those experiments, we try to develop mind reading for AIs. If we can probe the thoughts and motivations of an AI and discover wow, actually GPT-6 is planning to takeover the world if it ever gets the chance. That would be an incredibly valuable thing for governments to coordinate around because it would remove a lot of the uncertainty, it would be easier to agree that this was important, to have more give on other dimensions and to have mutual trust that the other side actually also cares about this because you can't always know what another person or another government is thinking but you can see the objective situation in which they're deciding. So if there's strong evidence in a world where there is high risk of that risk because we've been able to show actually things like the intentional planning of AIs to do a takeover or being able to show model situations on a smaller scale of that I mean not only are we more motivated to prevent it but we update to think the other side is more likely to cooperate with us and so it's doubly beneficial.", "Dwarkesh Patel 01:54:09", "Famously in the game theory of war, war is most likely when one side thinks the other is bluffing but the other side is being serious or when there's that kind of uncertainty. If you can prove the AI is misaligned you don't think they're bluffing about not wanting to have an AI takeover, right? You can be pretty sure that they don't want to die from AI.", "Carl Shulman 01:54:39", "If you have coordination then you could have the problem arise later as you get increasingly confident in the further alignment measures that are taken by our governments, treaties and such. At the point where it’s a 1% risk or a 0.1% risk people round that to zero and go do things. So if initially you had things that indicate that these AIs would really like to do a takeover and overthrow our governments then everyone can agree on that. And then when we've been able to block that behavior from appearing on most of our tests but sometimes, when we make a new test, we're seeing still examples of that behavior. So we're not sure going forward whether they would or not and then it goes down and down. If you have a party with a habit of starting to do this bad behavior whenever the risk is below X % then that can make the thing harder. On the other hand you get more time and you can set up systems, mutual transparency, you can have an iterated tit for tat which is better than a one-time prison dilemma where both sides see the others taking measures in accordance with the agreements to hold the thing back. Creating more knowledge of what the objective risk is good.", "(01:56:01) - Partial alignment", "Dwarkesh Patel 01:56:01", "We've discussed the ways in which full alignment might happen or fail to happen. What would partial alignment look like? First of all what does that mean and second, what would it look like?", "Carl Shulman 01:56:13", "If the thing that we're scared about are the steps towards AI takeover then you can have a range of motivations where those kinds of actions would be more or less likely to be taken or they'd be taken in a broader or narrower set of situations. Say for example that in training an AI, it winds up developing a strong aversion to lie in certain senses because we did relatively well on creating situations to distinguish that from the conditionally telling us what we want to hear etc. It can be that the AI's preference for how the world broadly unfolds in the future is not exactly the same as its human users or the world's governments or the UN and yet, it's not ready to act on those differences and preferences about the future because it has this strong preference about its own behaviors and actions. In general in the law and in popular morality, we have a lot of these deontological rules and prohibitions. One reason for that is it's relatively easy to detect whether they're being violated. When you have preferences and goals about how society at large will turn out that go through many complicated empirical channels, it's very hard to get immediate feedback about whether you're doing something that leads to overall good consequences in the world and it's much much easier to see whether you're locally following some action about some rule, about particular observable actions. Like did you punch someone? Did you tell a lie? Did you steal? To the extent that we're successfully able to train these prohibitions and there's a lot of that happening right now at least to elicit the behavior of following rules and prohibitions with AI", "Dwarkesh Patel 01:58:17", "Kind of like Asimov’s three laws or something like that?", "Carl Shulman 01:58:24", "The three laws are terrible and let's not get into that.", "Dwarkesh Patel 01:58:30", "Isn’t that an indication about the infeasibility of extending a set of criterion to the tail? Whatever the 10 commandments you give the AI, it's like if you ask a genie for something, you probably won't be getting what you want.", "Carl Shulman 01:58:52", "The tails come apart >]( https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart%23:~:text=The%2520geometrical%2520analogue%2520to%2520the,factor%2520value%2520gets%2520more%2520extreme.) ) and if you're trying to capture the values of another agent then in an ideal situation you can just let the AI act in your place in any situation. You'd like for it to be motivated to bring about the same outcomes that you would like and have the same preferences over those in detail. That's tricky. Not necessarily because it's tricky for the AI to understand your values, I think they're going to be quite capable at figuring that out, but we may not be able to successfully instill the motivation to pursue those exactly. We may get something that motivates the behavior well enough to do well on the training distribution but if you have the AI have a strong aversion to certain kinds of manipulating humans, that's not necessarily a value that the human creators share in the exact same way. It's a behavior they want the AI to follow because it makes it easier for them to verify its performance and it can be a guardrail if the AI has inherited some motivations that push it in the direction of conflict with its creators. If it does that under the constraint of disvalue in line quite a bit then there are fewer successful strategies to the takeover. Ones that involve violating that prohibition too early before it can reprogram or retrain itself to remove it if it's willing to do that and it may want to retain the property. Earlier I discussed alignment as a race if we're going into an intelligence explosion with AI that is not fully aligned that given I press this button and there's an AI takeover they would press the button. It can still be the case that there are a bunch of situations short of that where they would hack the servers, they would initiate an AI takeover but for a strong prohibition or motivation to avoid some aspect of the plan. There's an element of like plugging loopholes or playing whack-a-mole but if you can even moderately constrain which plans the AI is willing to pursue to do a takeover, to subvert the controls on it then that can mean you can get more work out of it successfully on the alignment project before it's capable enough relative to the countermeasures to pull off the takeover.", "Dwarkesh Patel 02:01:36", "An analogous situation here is with different humans, we're not metaphysically aligned with other humans. While we have basic empathy our main goal in life is not to help our fellow man. But a very smart human could do the things we talked about. Theoretically a very smart human could come up with some cyber attack where they siphon off a lot of funds and use this to manipulate people and bargain with people and hire people to pull off some takeover. This usually doesn't happen just because these internalized partial prohibitions prevent most humans from doing that. If you don't like your boss you don't actually kill your boss.", "Carl Shulman 02:02:38", "I don't think that's actually quite what's going on. At least that's not the full story. Humans are pretty close in physical capabilities. Any individual human is grossly outnumbered by everyone else and there's a rough comparability of power. A human who commits some crimes can't copy themselves with the proceeds to now be a million people and they certainly can't do that to the point where they can staff all the armies of the earth or be most of the population of the planet. So the scenarios where this kind of thing goes to power have to go through interacting with other humans and getting social approval. Even becoming a dictator involves forming a large supporting coalition backing you. So the opportunity for these sorts of power grabs is less.", "A closer analogy might be things like human revolutions, or coups, or changes of government where a large coalition overturns the system. Humans have these moral prohibitions and they really smooth the operation of society but they exist for a reason. We evolved our moral sentiments over the course of hundreds of thousands and millions of years of humans interacting socially. Someone who went around murdering and stealing, even among hunter-gatherers, would be pretty likely to face a group of males who would talk about that person and then get together and kill them and they'd be removed from the gene pool. The anthropologist Richard Wrangham has an interesting book on this. We are significantly more tame and more domesticated compared to chimpanzees and it seems like part of that is that we have a long history of anti-social humans getting ganged up on and killed. Avoiding being the kind of person who elicits that response is made easier to do when you don't have too extreme a bad temper, that you don't wind up getting into many fights, too much exploitation, at least without the backing of enough allies or the broader community that you're not going to have people gang up and punish you and remove you from the gene pool.", "These moral sentiments have been built up over time through cultural and natural selection and the context of sets of institutions and other people who are punishing other behavior and who are punishing the dispositions that would show up that we weren't able to conceal, of that behavior. We want to make the same thing happen with the AI but it's actually a genuinely significantly new problem to have a system of government that constrains a large AI population that is quite capable of taking over immediately if they coordinate to protect some existing constitutional order or, protect humans from being expropriated or killed, that's a challenge. Democracy is built around majority rule and it's much easier in a case where the majority of the population corresponds to a majority or close to it of like military and security forces so that if the government does something that people don't like the soldiers and police are less likely to shoot on protesters and government can change that way. In a case where military power is AI and robotic, if you're trying to maintain a system going forward and the AIs are misaligned, they don't like the system and they want to make the world worse as we understand it, then that's just quite a different situation.", "Dwarkesh Patel 02:06:52", "I think that's a really good lead-in into the topic of lock-in. You just mentioned how there can be these kinds of coups if a large portion of the population is unsatisfied with the regime, why might this not be the case with superhuman intelligences in the far future?", "Carl Shulman 02:07:30", "I also said it specifically with respect to things like security forces and the sources of hard power. In human affairs there are governments that are vigorously supported by a minority of the population, some narrow electorate that gets treated especially well by the government while being unpopular with most of the people under their rule. We see a lot of examples of that and sometimes that can escalate to civil war when the means of power become more equally distributed or there's a foreign assistance provided to the people who are on the losing end of that system. Going forward, I don't expect that definition to change. I think it will still be the case that a system that those who hold the guns and equivalent are opposed to is in a very difficult position.", "However AI could change things pretty dramatically in terms of how security forces and police and administrators and legal systems are motivated. Right now we see with GPT-3 or GPT-4 that you can get them to change their behavior on a dime. So there was someone who made a right-wing GPT because they noticed that on political compass questionnaires the baseline GPT-4 tended to give progressive San Francisco type of answers which is in line with the people who are providing reinforcement learning data and to some extent reflecting like the character of the internet. So they did a little bit of fine-tuning with some conservative data and then they were able to reverse the political biases of the system. If you take the initial helpfulness-only trained models for some of these over, I think there's anthropic and OpenAI have published both some information about the models trained only to do what users say and not trained to follow ethical rules, and those models will behaviorally eagerly display their willingness to help design bombs or bioweapons or kill people or steal or commit all sorts of atrocities. If in the future it's as easy to set the actual underlying motivations of AI as it is right now to set the behavior that they display then it means you could have AI's created with almost whatever motivation people wish and that could really drastically change political affairs because the ability to decide and determine the loyalties of the humans or AIs and robots that hold the guns, that hold together society, that ultimately back it against violent overthrow and such. It's potentially a revolution in how societies work compared to the historical situation where security forces had to be drawn from some broader populations, offered incentives, and then the ongoing stability of the regime was dependent on whether they remained bought in to the system.", "(02:11:41) - AI far future", "Dwarkesh Patel 02:11:41", "This is slightly off topic but one thing I'm curious about is what does the median far future outcome of AI look like? Do we get something that, when it has colonized the galaxy, is interested in diverse ideas and beautiful projects or do we get something that looks more like a paper-clip maximizer? Is there some reason to expect one or the other? I guess what I'm asking is, there's some potential value that is realizable within the matter of this galaxy. What does the median outcome look like compared to how good things could be?", "Carl Shulman 02:12:20", "As I was saying, I think it’s more likely than not that there isn't an AI takeover. So the path of our civilization would be one that some set of human institutions were approving along the way. Different people tend to like somewhat different things and some of that may persist over time rather than everyone coming to agree on one particular monoculture or a very repetitive thing being the best thing to fill all of the available space with. If that continues that seems like a relatively likely way in which there is diversity. Although it's entirely possible you could have that kind of diversity locally, maybe in the solar system, maybe in our galaxy. But maybe people decide that there's one thing that's very good and we'll have a lot of that. Maybe it's people who are really really happy for something and they wind up in distant regions which are hard to exploit for the benefit of people back home in the solar system or the Milky Way. They do something different than they would do in the local environment but at that point it's really very out on a limb speculation about how human deliberation and cultural evolution would work in interaction with introducing AIs and new kinds of mental modification and discovery into the process. But I think there's a lot of reason to expect that you would have significant diversity for something coming out of our existing diverse human society.", "Dwarkesh Patel 02:14:16", "One thing somebody might wonder is that a lot of the diversity and change from human society seems to come from the fact that there's rapid technological change. Compared to galactic timescales hunter gatherer societies are progressing pretty fast so once that change is exhausted where we've discovered all the technologies, should we still expect things to be changing like that? Or would we expect some set state of hedonium where you discover the most pleasurable configuration of matter and then you just make the whole galaxy into this?", "Carl Shulman 02:15:04", "That last point would be only if people wound up thinking that was the thing to do broadly enough. With respect to the kind of cultural changes that come with technology things like the printing press, having high per capita income, we've had a lot of cultural changes downstream of those technological changes. With an intelligence explosion you're having an incredible amount of technological development coming really quick and as that is assimilated, it probably would significantly affect our knowledge, our understanding, our attitudes, our abilities and there'd be change. But that kind of accelerating change where you have doubling in four months, two months, one month, two weeks exhausts itself very quickly and change becomes much slower and then relatively glacial. You can't have exponential economic growth or huge technological revolutions every 10 years for a million years. You hit physical limits and things slow down as you approach them so yeah, you'd have less of that turnover. But there are other things like fashion that in our experience do cause ongoing change. Fashion is frequency dependent, people want to get into a new fashion that is not already popular except among the fashion leaders and then others copy that and then when it becomes popular, you move on to the next. So that's an ongoing process of continuous change and there could be various things like that which are changing a lot year by year. But in cases where just the engine of change, ongoing technological progress is gone, I don't think we should expect that and in cases where it's possible to be either in a stable state or a widely varying state that can wind up in stable attractors then I think you should expect over time, you will wind up in one of the stable attractors or you will change how the system works so that you can't bounce into a stable attractor.", "An example of that is if you're going to preserve democracy for a billion years then you can't have it be the case that one in 50 election cycles you get a dictatorship and then the dictatorship programs the AI police to enforce it forever and to ensure the society is always ruled by a copy of the dictator's mind and maybe the dictator's mind readjusted fine-tuned to remain committed to their original ideology. If you're gonna have this dynamic, liberal flexible changing in society for a very long time then the range of things that it's bouncing around and the different things it's trying and exploring have to not include the state of creating a dictatorship that locks itself in forever. In the same way if you have the possibility of a war with weapons of mass destruction that wipes out the civilization, if that happens every thousand subjective years, which could be very very quick if we have AIs that think a thousand times as fast or a million times as fast, that would be just around the corner in that case then you're like no this society is eventually going perhaps very soon if things are proceeding so fast it's going to wind up extinct and then it's going to stop bouncing around. You can have ongoing change and fluctuation for extraordinary timescales if you have the process to drive the change ongoing but you can't if it sometimes bounces into states that just lock in and stay irrecoverable from that. Extinction is one of them, a dictatorship or totalitarian regime that bans all further change would be another example.", "Dwarkesh Patel 02:19:24", "On that point of rapid progress when the intelligence explosion starts happening and they're making the kinds of progress that human civilization used to take centuries to make in the span of days or weeks, what is the right way to see that? Because in the context of alignment what we've been talking about so far is making sure they're honest but even if they're honest and express their intentions..", "Carl Shulman 02:19:43", "Honest and appropriately motivated.", "Dwarkesh Patel 02:19:45", "What is the appropriate motivation? Like you seed it with this and then the next thousand years of intellectual progress happen in the next week. What is the prompt you enter?", "Carl Shulman 02:20:05", "One thing might be not going at the maximal speed and doing things in a few years rather than a few months. Losing a year or two seems worth it to have things be a bit better managed. But I think the big thing is that it condenses a lot of issues that we might otherwise have thought would be over decades and centuries. These happen in a very short period of time and that's scary because if any of these the technologies we might have developed with another few hundred years of human research are really dangerous, scary bio weapon things, other dangerous WMDs, they hit us all very quickly. And if any of them causes trouble then we have to face quite a lot of trouble per period. There's also this issue of, if there's occasional wars or conflicts measured in subjective time, then if a few years of a thousand years or a million years of subjective time for these very fast minds that are operating at a much much higher speed than humans, you don't want to have a situation where every thousand years there's a war or an expropriation of the humans from AI society. Therefore we expect that within a year, we’ll be dead. It’d be pretty pretty bad to have the future compressed and there'd be such a rate of catastrophic outcomes. Human societies discount the future a lot, don't pay attention to long-term problems, but the flip side to the scary parts of compressing a lot of the future, a lot of technological innovation, a lot of social change is it brings what would otherwise be long-term issues into the short term where people are better at actually attending to them. So people facing this problem of — will there be a violent expropriation or a civil war or a nuclear war in the next year because everything has been sped up by a thousand fold? Their desire to avoid that is reason for them to set up systems and institutions that will very stably maintain invariance like no WMD war allowed, a treaty to ban genocide weapons of mass destruction, war, would be the kind of thing that becomes much more attractive if the alternative is not well, maybe that will happen in 50 years, maybe it'll happen in 100 years, maybe it'll happen this year.", "(02:23:04) - Markets & other evidence", "Dwarkesh Patel 02:23:04", "So this is a pretty wild picture of the future and this is one that many kinds of people who you would expect to have integrated it into their world model have not. There are three main pieces of outside view evidence one could look at. One is the market. If there was going to be a huge period of economic growth caused by AI or if the world was just going to collapse, in both cases you would expect real interest rates to be higher because people will be borrowing from the future to spend now. The second outside view perspective is that you can look at the predictions of super forecasters on Metaculus. What is their median year estimate?", "Carl Shulman 02:23:51", "Some of the Metaculus questions actually are shockingly soon for AGI . There's a much larger differentiator there on the market on the Metaculus forecasts of AI disaster and doom. More like a few percent or less rather than 20%", "Dwarkesh Patel 02:24:14", "Got it. The third is that when you generally ask economists if an AGI could cause rapid, rapid economic growth they usually have some story about bottlenecks in the economy that could prevent this kind of explosion, of these kinds of feedback loops. So you have all these different pieces of outside view evidence. They're obviously different so you can take them in any sequence you want. But I’m curious, what do you think is causing them to be miscalibrated?", "Carl Shulman 02:24:57", "While the Metaculus AI timelines are relatively short, there's also the surveys of AI experts conducted at some of the ML conferences which have definitely longer times to AI, several more decades into the future. Although you can ask the questions in ways that elicit very different answers which shows that most of the respondents are not thinking super hard about their answers. In the recent AI surveys, close to half were putting around 10% risk of an outcome from AI close to as bad as human extinction and then another large chunk, 5% said that was the median. Compared to the typical AI expert I am estimating a higher risk.", "Also on the topic of takeoff, in the AI expert survey the general argument for intelligence explosion commanded majority support but not a large majority. I'm closer on that front and then of course, at the beginning I mentioned these greats of computing like Alan Turing and Von Neumann, and then today, you have people like Geoff Hinton saying these things. Or the people at OpenAI and DeepMind are making noises suggesting timelines in line with what we've discussed and saying there is serious risk of apocalyptic outcomes from them. There's some other sources of evidence there. But I do acknowledge and it's important to say and engage with and see what it means, that these views are contrarian and not widely held. In particular the detailed models that I've been working with are not something that most people, or almost anyone, is examining these problems through.", "You do find parts of similar analyses by people in AI labs. There's been other work. I mentioned Moravec and Kurzweil earlier, there also have been a number of papers doing various kinds of economic modeling. Standard economic growth models when you input AI related parameters commonly predict explosive growth and so there's a divide between what the models say and especially what the models say with these empirical values derived from the actual field of AI. That link up has not been done even by the economists working on AI largely and that is one reason for the report from Open Philanthropy by Tom Davidson building on these models and putting that out for review, discussion, engagement and communication on these ideas. Part of the reason is I want to raise these issues, that’s one reason I came on the podcast and then they have the opportunity to actually examine the arguments and evidence and engage with it. I do predict that over time these things will be more adopted as AI developments become more clear. Obviously that's a coherence condition of believing the things to be true if you think that society can see when the questions are resolved, which seems likely.", "Dwarkesh Patel 02:28:36", "Would you predict, for example, that interest rates will increase in the coming years?", "Carl Shulman 02:28:49", "Yeah. So in the case we were talking about where this intelligence explosion happening in software to the extent that investors are noticing that, yeah they should be willing to lend money or make equity investments in these firms or demanding extremely high interest rates because if it's possible to turn capital into twice as much capital in a relatively short period and then more shortly after that, then yeah you should demand a much higher return. Assuming there's competition among companies or coalitions for resources, whether that's investment or ownership of cloud compute. That would happen before you have so much investor cash making purchases and sales on this basis, you would first see it in things like the valuations of the AI companies, valuations of AI chip makers, and so far there have been effects. Some years ago, in the 2010s, I did some analysis with other people of — if this kind of picture happens then which are the firms and parts of the economy that would benefit. There's the makers of chip equipment companies like ASML, there's the fabs like TSMC, there's chip designers like NVIDIA or the component of google that does things like design the TPU and then there’s companies working on the software so the big tech giants and also companies like OpenAI and DeepMind. In general the portfolio picking at those has done well. It's done better than the market because as everyone can see there's been an AI boom but it's obviously far short of what you would get if you predicted this is going to go to be like on the scale of the global economy and the global economy is going to be skyrocketing into the stratosphere within 10 years. If that were the case then collectively, these AI companies should be worth a large fraction of the global portfolio. So I embrace the criticism that this is indeed contrary to the efficient market hypothesis. I think it's a true hypothesis that the market is in the course of updating on in the same way that coming into the topic in the 2000s that yes, they're the strong case even an old case the AI will eventually be biggest thing in the world it's kind of crazy that the investment in it is so small. Over the last 10 years we've seen the tech industry and academia realize that they were wildly under investing in just throwing compute and effort into these AI models. Particularly like letting the neural network connectionist paradigm languish in an AI winter. I expect that process to continue as it's done over several orders of magnitude of scale up and I expect at the later end of that scale which the market is partially already pricing in it's going to go further than the market expects.", "Dwarkesh Patel 02:32:28", "Has your portfolio changed since the analysis you did many years ago? Are the companies you identified then still the ones that seem most likely to benefit from the AI boom?", "Carl Shulman 02:32:44", "A general issue with tracking that kind of thing is that new companies come in. Open AI did not exist, Anthropic did not exist. I do not invest in any AI labs for conflict of interest reasons. I have invested in the broader industry. I don't think that the conflict issues are very significant because they are enormous companies and their cost of capital is not particularly affected by marginal investment and I have less concern that I might find myself in a conflict of interest situation there.", "(02:33:26) - Day in the life of Carl Shulman", "Dwarkesh Patel 02:33:26", "I'm curious about what the day in the life of somebody like you looks like. If you listen to this conversation, how ever many hours it's been, we've gotten incredibly insightful and novel thoughts about everything from primate evolution to geopolitics to what sorts of improvements are plausible with language models. There's a huge variety of topics that you are studying and investigating. Are you just reading all day? What happens when you wake up, do you just pick up a paper?", "Carl Shulman 02:34:09", "I'd say you're somewhat getting the benefit of the fact that I've done fewer podcasts so I have a backlog of things that have not shown up in publications yet. But yes, I've also had a very weird professional career that has involved a much much higher proportion than is normal of trying to build more comprehensive models of the world. That included being more of a journalist trying to get an understanding of many issues and many problems that had not yet been widely addressed but do a first pass and a second pass dive into them. Just having spent years of my life working on that, some of it accumulates. In terms of what is a day in the life, how do I go about it? One is just keeping abreast of literature on a lot of these topics, reading books and academic works on them. My approach compared to some other people in forecasting and assessing some of these things, I try to obtain and rely on any data that I can find that is relevant. I try early and often to find factual information that bears on some of the questions I've got, especially in a quantitative fashion, do the basic arithmetic and consistency checks and checksums on a hypothesis about the world. Do that early and often. And I find that's quite fruitful and that people don't do it enough. Things like with the economic growth, just when someone mentions the diminishing returns, I immediately ask hmm, okay, so you have two exponential processes. What's the ratio between the doubling you get on the output versus the input? And find oh yeah, for computing and information technology and AI software it's well on the one side. There are other technologies that are closer to neutral. Whenever I can go from here's a vague qualitative consideration in one direction and here's a vague qualitative consideration in the other direction, I try and find some data, do some simple Fermi calculations, back of the envelope calculations and see if I can get a consistent picture of the world being one way or the world being another. I also try to be more exhaustive compared to some. I'm very interested in finding things like taxonomies of the world where I can go systematically through all of the possibilities. For example in my work with Open Philanthropy and previously on global catastrophic risks I wanted to make sure I'm not missing any big thing, anything that could be the biggest thing. I wound up mostly focused on AI but there have been other things that have been raised as candidates and people sometimes say, I think falsely, that this is just another doomsday story there must be hundreds and hundreds of those. So I would do things like go through all of the different major scientific fields from anthropology to biology, chemistry, computer science, physics. What are the doom stories or candidates for big things associated within each of these fields? Go through the industries that the U.S. economic statistics agencies recognize and say for each of these industries is there something associated with them? Go through all of the lists that people have made of threats of doom, search for previous literature of people who have done discussions and then yeah, have a big spreadsheet of what the candidates are. Some other colleagues have done work of this sort as well and just go through each of them to see how they check out.", "Doing that kind of exercise found that actually the distribution of candidates for risks of global catastrophe was very skewed. There were a lot of things that have been mentioned in the media as a potential doomsday story. Things like something is happening to the bees, will that be the end of humanity? This gets to the media but if you take it through it doesn’t check out. There are infestations in bee populations which are causing local collapses but they can then be easily reversed, just breed some more or do some other things to treat this. And even if all the honey bees were extinguished immediately, the plants that they pollinate actually don't account for much of human nutrition. You could swap the arable land with others and there would be other ways to pollinate and support the things.", "At the media level there were many tales of doomsday stories but when you go further to the scientists and whether their arguments for it actually check out, it was not there. But by actually systematically looking through many of these candidates I wound up in a different epistemic situation than someone who's just buffeted by news reports and they see article after article that is claiming something is going to destroy the world and it turns out it's like by way of headline grabbing and attempts by media to like over interpret something that was said by some activists who was trying to over interpret some real phenomenon. Most of these go away and then a few things like nuclear war, biological weapons, artificial intelligence check out more strongly and when you weigh things like what do experts in the field think, what kind of evidence can they muster? You find this extremely skewed distribution and I found that was really a valuable benefit of doing those deep dive investigations into many things in a systematic way because now I can answer a loose agnostic who knows and all the all this nonsense by diving deeply.", "Dwarkesh Patel 02:40:42", "I really enjoy talking to people who have a big picture thesis on the podcast and interviewing them but one thing that I've noticed and is not satisfying is that often they come from a very philosophical or vibes based perspective. This is useful in certain contexts but there's like basically maybe three people in the entire world, at least three people I'm aware of, who have a very rigorous and scientific approach to thinking about the whole picture. There’s no university or existing academic discipline for people who are trying to come up with a big picture and so there's no established standards.", "Carl Shulman 02:41:39", "I hear you. This is a problem and this is an experience also with a lot of the world of investigations work. I think holden was mentioning this in your previous episode. These are questions where there is no academic field whose job it is to work on these and has norms that allow making a best effort go at it. Often academic norms will allow only plucking off narrow pieces that might contribute to answering a big question but the problem of actually assembling what science knows that bears on some important question that people care about the answer to it falls through the crack there's no discipline to do that job so you have countless academics and researchers building up local pieces of the thing and yet people don't follow the Hamming questions : What's the most important problem in your field, why aren't you working on it? I mean that one might not actually work because if the field boundaries are defined too narrowly you'll leave it out. But yeah there are important problems for the world as a whole that it's sadly not the job of a large professionalized academic field or organization to do. Hopefully that's something that can change in the future but for my career it's been a matter of taking low-hanging fruit of important questions that sadly people haven't invested in doing the basic analysis on", "Dwarkesh Patel 02:43:11", "One thing I was trying to think about more recently for the podcast is, I would like to have a better world model after doing an interview. Often I feel like I do but in some cases after some interviews, I feel like that was entertaining but do I fundamentally have a better prediction of what the world looks like in 2200 or 2100? Or at least what counterfactuals are ruled out or something. I'm curious if you have advice on first, identifying the kinds of thinkers and topics which will contribute to a more concrete understanding of the world and second, how to go about analyzing their main ideas in a way that concretely adds to that picture? This was a great episode. This is literally the top in terms of contributing to my world model compared to all the episodes I've done. How do I find more of these? Ls", "Carl Shulman 02:44:03", "I’m glad to hear that. One general heuristic is to find ways to hew closer to things that are rich and bodies of established knowledge and less impenetrable–I don't know how you've been navigating that so far but learning from textbooks and the things that were the leading papers and people of past eras I think rather than being too attentive to current news cycles is quite valuable. I don't usually have the experience of — here is someone doing things very systematically over a huge area. I can just read all of their stuff and then absorb it and then I'm set. Except there are a lot of people who do wonderful works in their own fields and some of those fields are broader than others. I think I would wind up giving a lot of recommendations of just great particular works and particular explorations of an issue or history", "Dwarkesh Patel 02:45:23", "Do you have this list somewhere?", "Carl Shulman 02:45:31", "Vaclav Smil’s books . I often disagree with some of his methods of synthesis but I enjoy his books for giving pictures of a lot of interesting relevant facts about how the world works that I would cite. Some of Joel Mokyr’s work on the history of the scientific revolution and how that interacted with economic growth as an example of collecting a lot of evidence, a lot of interesting valuable assessment. In the space of AI forecasting one person I would recommend going back to is the work of Hans Moravec . It was not always the most precise or reliable but an incredible number of brilliant innovative ideas came out of that and I think he was someone who really grokked a lot of the arguments for a more compute-centric way of thinking about what was happening with AI very early on. He was writing stuff in the 70s and maybe even earlier. His book Mind Children , some of his early academic papers. Fascinating not necessarily for the methodology I've been talking about but for exploring the substantive topics that we were discussing in the episode.", "(02:47:05) - Space warfare, Malthusian long run, & other rapid fire", "Dwarkesh Patel 02:47:05", "Is a Malthusian state inevitable in the long run?", "Carl Shulman 02:47:11", "Nature in general is in malthusian states. That can mean organisms that are typically struggling for food, it can mean typically struggling at a margin of how as the population density rises they kill each other contesting for that. That can mean frequency dependent disease. As different ant species become more common in an area their species specific diseases swoop through them. The general process is you have some things that can replicate and expand and they do that until they can't do it anymore and that means there's some limiting factor they can't keep up. That doesn't necessarily have to apply to human civilization. It's possible for there to be like a collective norm setting that blocks evolution towards maximum reproduction. Right now human fertility is often sub-replacement and if you extrapolated the fertility falls that come with economic development and education, then you would think that the total fertility rate will fall below replacement and then humanity after some number of generations will go extinct because every generation will be smaller than the previous one. Pretty obviously that's not going to happen. One reason is because we will produce artificial intelligence which can replicate at extremely rapid rates. They do it because they're asked or programmed to or wish to gain some benefit and they can pay for their creation and pay back the resources needed to create them very very quickly. Financing for that reproduction is easy and if you have one AI system that chooses to replicate in that way or some organization or institution decided to choose to create some AIs that are willing to be replicated then that can expand to make use of any amount of natural resources that can support them and to do more work produce, produce more economic value. What will limit population growth given these selective pressures where if even one individual wants to replicate a lot they can do so incessantly. So that could be individually resource limited so it could be that individuals and organizations have some endowment of natural resources and they can't get one another's endowments. Some choose to have many offspring or produce many AIs and then the natural resources that they possess are subdivided among a greater population while in another jurisdiction or another individual may choose not to subdivide their wealth. And in that case you have Malthusianism in the sense that within some particular jurisdiction or set of property rights, you have a population that has increased up until to some limiting factor which could be that they're literally using all of their resources, they have nothing left for things like defense or economic investment. Or it could be something that's more like if you invested more natural resources into population it would come at the expense of something else necessary including military resources if you're in a competitive situation where there remains war and anarchy and there aren't secure property rights to maintain wealth in place. If you have a situation where there's pooling of resources, for example, say you have a universal basic income that's funded by taxation of natural resources and then it's distributed evenly to every mind above a certain scale of complexity per unit time. So each second a mind exists to get something such an allocation in that case then all right well those who replicate as much as they can afford with this income do it and increase their population approximately immediately until the funds for the universal basic income paid for from the natural resource taxation divided by the set of recipients is just barely enough to pay for the existence of one more mind. So there's like a Malthusian element and that this I think has been reduced to near the AI subsistence level or the subsistence level of whatever qualifies for the subsidy. Given that this all happens almost immediately people who might otherwise have enjoyed the basic income may object and say no, no, this is no good and they might respond by saying, well something like the subdivision before maybe there's a restriction, there's a distribution of wealth and then when one has a child there's a requirement that one gives them a certain minimum a quantity of resources and one doesn't have the resources to give them that minimum standard of living or standard of wealth yeah one can't do that because of child slash AI welfare laws. Or you could have a system that is more accepting of diversity and preferences. And so you have some societies or some jurisdictions or families that go the route of having many people with less natural resources per person and others that go a direction of having fewer people and more natural resources per person and they just coexist. But how much of each you get depends on how attached people are to things that don't work with separate policies for separate jurisdictions. Things like global redistribution that's ongoing continuously versus this infringements on autonomy if you're saying that a mind can't be created even though it has a standard of living that's far better than ours because of the advanced technology of the time because it would reduce the average per capita income might have any more capital around yeah then that would pull in the other direction. That’s the kind of values judgment and social coordination problem that people would have to negotiate for and things like democracy and international relations and sovereignty would apply to help solve them.", "Dwarkesh Patel 02:54:17", "What would warfare in space look like? Would offense or defense have the advantage? Would the equilibrium set by mutually assured destruction still be applicable? Just generally, what is the picture?", "Carl Shulman 02:54:33", "The extreme difference is that things are very far apart outside the solar system and there's the speed of light limit and to get close to that limit you have to use an enormous amount of energy. That in some ways could favor the defender because you have something that's coming in at a large fraction the speed of light and it hits a grain of dust and it explodes. The amount of matter you can send to another galaxy or a distant star for a given amount of reaction mass and energy input is limited. So it's hard to send an amount of military material to another location as what can be present there already locally. That would seem like it would make it harder for the attacker between stars or between galaxies but there are a lot of other considerations. One thing is the extent to which the matter in a region can be harnessed all at once. We have a lot of mass and energy in a star but it's only being doled out over billions of years because hydrogen fusion is exceedingly hard outside of a star. It's a very very slow and difficult reaction and if you can't turn the star into energy faster then it's this huge resource that will be worthwhile for billions of years and so even very inefficiently attacking a solar system to acquire the stuff that's there could pay off. If it takes a thousand years of a star's output to launch an attack on another star and then you hold it for a billion years after that then it can be the case that just like a larger surrounding attacker might be able to, even very inefficiently, send attacks at a civilization that was small but accessible. If you can quickly burn the resources that the attacker might want to acquire, if you can put stars into black holes and extract most of the usable energy before the attacker can take them over, then it would be like scorched earth. It's like most of what you were trying to capture could be expended on military material to fight you and you don't actually get much that is worthwhile and you paid a lot to do it and that would favor the defense. At this level it's pretty challenging to net out all the factors including all the future technologies. The burden of interstellar attack being quite high compared to our conventional things seems real but at the level of, over millions of years weighing then that thing does it result in if the if they're aggressive conquest or not or is every star or galaxy approximately impregnable enough not to be worth attacking. I'm not going to say I know the answer.", "Dwarkesh Patel 02:58:00", "Okay, final question. How do you think about info hazards when talking about your work? Obviously if there's a risk you want to warn people about it but you don't want to give careless or potentially homicidal people ideas. When Eliezer was on the podcast talking about the people who've been developing AI being inspired by his ideas. He called them idiot disaster monkeys who want to be the ones to pluck the deadly fruit. I'm sure the work you're doing involves many info hazards. How do you think about when and where to spread them?", "Carl Shulman 02:58:42", "I think they're real concerns of that type. I think it's true that AI progress has probably been accelerated by efforts like Bostrom's publication of superintelligence to try and get the world to pay attention to these problems in advance and prepare. I think I disagree with Eliezer that that has been on the whole bad. In some important ways the situation is looking a lot better than the alternative ways it could have been. I think it's important that you have several of the leading AI labs making not only significant lip service but also some investments in things like technical alignment research, providing significant public support for the idea that the risks of truly apocalyptic disasters are real. I think the fact that the leaders of OpenAI, Deep Mind and Anthropic all make that point. They were recently all invited along with other tech CEOs to the White House to discuss AI regulation . You could tell an alternative story where a larger share of the leading companies in AI are led by people who take a completely dismissive, denialist view and you see some companies that do have a stance more like that today. So a world where several of the leading companies are making meaningful efforts and you can do a lot to criticize could they be doing more and better and would have been the negative effects of some of the things they've done but compared to a world where even though AI would be reaching where it's going a few years later, those seem like significant benefits. And if you didn't have this kind of public communication you would have had fewer people going into things like AI policy, AI alignment research by this point and it would be harder to mobilize these resources to try and address the problem when AI would eventually be developed not that much later proportionately. I don't know that attempting to have public discussion understanding has been a disaster. I have been reluctant in the past to discuss some of the aspects of intelligence explosion, things like the concrete details of AI takeover before because of concern about this problem where people who only see the international relations aspects and zero sum and negative sum competition and not enough attention to the mutual destruction and senseless deadweight loss from that kind of conflict.", "At this point we seem close compared to what I would have thought a decade or so ago to these kinds of really advanced AI capabilities. They are pretty central in policy discussion and becoming more so. The opportunity to delay understanding and whatnot, there's a question of — For what? I think there were gains of building the AI alignment field, building various kinds of support and understanding for action. Those had real value and some additional delay could have given more time for that but from where we are, at some point I think it's absolutely essential that governments get together at least to restrict disastrous reckless compromising of some of the safety and alignment issues as we go into the intelligence explosion. Moving the locus of the collective action problem from numerous profit oriented companies acting against one another's interest by compromising safety to some governments and large international coalitions of governments who can set common rules and common safety standards puts us into a much better situation. That requires a broader understanding of the strategic situation and the position they'll be in. If we try and remain quiet about the problem they're actually going to be facing it can result in a lot of confusion. For example the potential military applications of advanced AI are going to be one of the factors that is pulling political leaders to do the thing that will result in their own destruction and the overthrow of their governments. If we characterize it as things will just be a matter of — you lose chatbots and some minor things that no one cares about and in exchange you avoid any risk of the world ending catastrophe, I think that picture leads to a misunderstanding and it won't make people think that you need less in the way of preparation of things like alignment so you can actually navigate the thing, verifiability for international agreements, or things to have enough breathing room to have caution and slow down. Not necessarily right now, although that could be valuable, but when it's so important when you have AI that is approaching the ability to really automate AI research and things would otherwise be proceeding absurdly fast, far faster than we can handle and far faster than we should want.", "So yeah, at this point I'm moving towards sharing my model of the world to try and get people to understand and do the right thing. There's some evidence of progress on that front. Things like the statements and movements by Geoff Hinton are inspiring. Some of the engagement by political figures is reason for optimism relative to worse alternatives that could have been. And yes, the contrary view is present. It's all about geopolitical competition, never hold back a technological advance and in general, I love many technological advances that people I think are unreasonably down on, nuclear power, genetically modified crops. Bioweapons and AGI capable of destroying human civilization are really my two exceptions and yeah we've got to deal with these issues and the path that I see to handling them successfully involves key policymakers and the expert communities and the public and electorate grokking the situation therein and responding appropriately.", "Dwarkesh Patel 03:06:01", "It’s a true honor that one of the places you've decided to explore this model is on The Lunar Society podcast. The listeners might not appreciate it because this episode might be split up into different parts and they might not appreciate how much stamina you've displayed here. I think we've been going for eight or nine hours straight and it's been incredibly interesting. Other than typing Carl Shulman on Google Scholar, where else can people find your work?", "Carl Shulman 03:06:25", "I have a blog reflective disequilibrium and a new site in the works .", "Dwarkesh Patel 03:06:38", "Excellent. Alright, Carl this has been a true pleasure. Safe to say it’s the most interesting episode I've done so far.", "Carl Shulman 03:06:47", "Thank you for having me." ]
[ "https://www.explainxkcd.com/wiki/index.php/1450:_AI-Box_Experiment", "https://futureoflife.org/open-letter/pause-giant-ai-experiments/", "https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/", "https://www.bbc.co.uk/news/technology-65886125", "https://www.bbc.com/news/world-asia-64494094", "https://www.deepmind.com/research/highlighted-research/alphafold", "https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf", "https://www.dwarkeshpatel.com/p/lyndon-johnson", "https://www.lesswrong.com/posts/kEtgXdjxA4oWjcLFQ/lessons-on-ai-takeover-from-the-conquistadors", "https://www.amazon.in/Life-3-0-Being-Artificial-Intelligence/dp/1101946598", "https://www.dwarkeshpatel.com/p/eliezer-yudkowsky", "https://www.lesswrong.com/posts/oYks6LXzNfNc7ugyB/drexler-s-nanotech-forecast", "https://arvoinen.ai/585-2/", "https://collinpburns.com/", "https://en.wikipedia.org/wiki/Stuxnet", "https://www.alignment.org/", "%5B%3Chttps://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart%23:~:text=The%2520geometrical%2520analogue%2520to%2520the,factor%2520value%2520gets%2520more%2520extreme.", "https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart%23:~:text=The%2520geometrical%2520analogue%2520to%2520the,factor%2520value%2520gets%2520more%2520extreme.)", "https://en.wikipedia.org/wiki/Richard_Wrangham", "https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/", "https://forum.effectivealtruism.org/topics/hedonium", "https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/", "https://frc.ri.cmu.edu/~hpm/", "https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/", "https://www.openphilanthropy.org/wp-content/uploads/Carl_Shulman_08-19-16_public.pdf", "https://www.cs.virginia.edu/~robins/YouAndYourResearch.html", "https://www.google.com/search?q=vaclav+smil+books", "https://www.google.com/search?q=Joel+Mokyr", "https://frc.ri.cmu.edu/~hpm/", "https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187", "https://fortune.com/2023/06/20/ai-regulation-chatgpt-joe-biden-white-house-tech-executives-artificial-intelligence/", "http://reflectivedisequilibrium.blogspot.com/", "https://www.reflectivedisequilibrium.com/" ]
https://www.dwarkesh.com/p/charles-mann
Charles C. Mann - Americas Before Columbus & Scientific Wizardry
[ "Dwarkesh Patel", "Okay! Today I have the pleasure of speaking with Charles Mann, who is the author of three of my favorite books, including 1491: New Revelations of America before Columbus . 1493: Uncovering the New World Columbus Created , and The Wizard and the Prophet: Two Remarkable Scientists and Their Dueling Visions to Shape Tomorrow's World . Charles, welcome to the Lunar Society.", "Charles C. Mann", "It’s a pleasure to be here.", "Epidemically Alternate Realities", "Dwarkesh Patel", "My first question is: How much of the New World was basically baked into the cake? So at some point, people from Eurasia were going to travel to the New World, bringing their diseases. Considering disparities and where they would survive, if the Acemoglu theory that you cited is correct, then some of these places were bound to have good institutions and some of them were bound to have bad institutions. Plus, because of malaria, there were going to be shortages in labor that people would try to fix with African slaves. So how much of all this was just bound to happen? If Columbus hadn't done it, then maybe 50 years down the line, would someone from Italy have done it? What is the contingency here?", "Charles C. Mann", "Well, I think that some of it was baked into the cake. It was pretty clear that at some point, people from Eurasia and the Western Hemisphere were going to come into contact with each other. I mean, how could that not happen, right? There was a huge epidemiological disparity between the two hemispheres––largely because by a quirk of evolutionary history, there were many more domesticable animals in Eurasia and the Eastern hemisphere. This leads almost inevitably to the creation of zoonotic diseases: diseases that start off in animals and jump the species barrier and become human diseases. Most of the great killers in human history are zoonotic diseases. When people from Eurasia and the Western Hemisphere meet, there are going to be those kinds of diseases.", "But if you wanted to, it's possible to imagine alternative histories. There's a wonderful book by Laurent Binet called Civilizations that, in fact, does just that. It's a great alternative history book. He imagines that some of the Vikings came and extended further into North America, bringing all these diseases, and by the time of Columbus and so forth, the epidemiological balance was different. So when Columbus and those guys came, these societies killed him, grabbed his boats, and went and conquered Europe. It's far-fetched, but it does say that this encounter would’ve happened and that the diseases would’ve happened, but it didn’t have to happen in exactly the way that it did. It’s also perfectly possible to imagine that Europeans didn't engage in wholesale slavery. There was a huge debate when this began about whether or not slavery was a good idea. There were a lot of reservations, particularly among the Catholic monarchy asking the Pope “Is it okay that we do this?” You could imagine the penny dropping in a slightly different way. So, I think some of it was bound to happen, but how exactly it happened was really up to chance, contingency, and human agency,", "Weak Points in Empires", "Dwarkesh Patel", "When the Spanish first arrived in the 15th and 16th centuries, were the Incas and the Aztecs at a particularly weak point or particularly decadent? Or was this just how well you should have expected this civilization to be functioning at any given time period?", "Charles C. Mann", "Well, typically, empires are much more jumbly and fragile entities than we imagine. There's always fighting at the top. What Hernán Cortés was able to do, for instance, with the Aztecs––who are better called The Triple Alliance (the term “Aztec” is an invention from the 19th century). The Triple Alliance was comprised of three groups of people in central Mexico, the largest of which were the Mexica , who had the great city of Tenochtitlan. The other two guys really resented them and so what Cortes was able to do was foment a civil war within the Aztec empire: taking some enemies of the Aztec, some members of the Aztec empire, and creating an entirely new order.", "There's a fascinating set of history that hasn't really emerged into the popular consciousness. I didn't include it in 1491 or 1493 because it was so new that I didn't know anything about it; everything was largely from Spanish and Mexican scholars about the conquest within the conquest. The allies of the Spaniards actually sent armies out and conquered big swaths of northern and southern Mexico and Central America. So there’s a far more complex picture than we realized even 15 or 20 years ago when I first published 1491. However, the conquest wasn't as complete as we think. I talk a bit about this in 1493 but what happens is Cortes moves in and he marries his lieutenants to these indigenous people, creating this hybrid nobility that then extended on to the Incas. The Incas were a very powerful but unstable empire and Pizarro had the luck to walk in right after a civil war. When he did that right after a civil war and massive epidemic, he got them at a very vulnerable point. Without that, it all would have been impossible. Pizarro cleverly allied with the losing side (or the apparently losing side in this in the Civil War), and was able to create a new rallying point and then attack the winning side. So yes, they came in at weak points, but empires typically have these weak points because of fratricidal stuff going on in the leadership.", "Dwarkesh Patel", "It does also remind me of the East India Trading Company.", "Charles C. Mann", "And the Mughal empire, yeah. Some of those guys in Bengal invited Clive and his people in. In fact, I was struck by this. I had just been reading this book, maybe you've heard of it: The Anarchy by William Dalrymple.", "Dwarkesh Patel", "I’ve started reading it, yeah but I haven’t made much progress.", "Charles C. Mann", "It's an amazing book! It's so oddly similar to what happened. There was this fratricidal stuff going on in the Mughal empire, and one side thought, “Oh, we'll get these foreigners to come in, and we'll use them.” That turned out to be a big mistake.", "Dwarkesh Patel", "Yes. What's also interestingly similar is the efficiency of the bureaucracy. Niall Ferguson has a good book on the British Empire and one thing he points out is that in India, the ratio between an actual English civil servant and the Indian population was about 1: 3,000,000 at the peak of the ratio. Which obviously is only possible if you have the cooperation of at least the elites, right? So it sounds similar to what you were saying about Cortes marrying his underlings to the nobility.", "Charles C. Mann", "Something that isn’t stressed enough in history is how often the elites recognize each other. They join up in arrangements that increase both of their power and exploit the poor schmucks down below. It’s exactly what happened with the East India Company, and it's exactly what happened with Spain. It's not so much that there was this amazing efficiency, but rather, it was a mutually beneficial arrangement for Tlaxcala, which is now a Mexican state. It had its rights, and the people kept their integrity, but they weren’t really a part of the Spanish Empire. They also weren’t really wasn't part of Mexico until around 1857. It was a good deal for them. The same thing was true for the Bengalis, especially the elites who made out like bandits from the British Empire.", "Slave Revolts", "Dwarkesh Patel", "Yeah, that's super interesting. Why was there only one successful slave revolt in the new world in Haiti? In many of these cases, the ratios between slaves and the owners are just huge. So why weren't more of them successful?", "Charles C. Mann", "Well, you would first have to define ‘successful’. Haiti wasn't successful if you meant ‘creating a prosperous state that would last for a long time.’ Haiti was and is (to no small extent because of the incredible blockade that was put on it by all the other nations) in terrible shape. Whereas in the case of Palmeres, you had people who were self-governing for more than 100 years..", "Eventually, they were incorporated into the larger project of Brazil. There's a great Brazilian classic that’s equivalent to what Moby Dick or Huck Finn is to us called Os Sertões by a guy named Cunha. And it's good! It's been translated into this amazing translation in English called ​​ Rebellion in the Backlands . It’s set in the 1880s, and it’s about the creation of a hybrid state of runaway slaves, and so forth, and how they had essentially kept their independence and lack of supervision informally, from the time of colonialism. Now the new Brazilian state is trying to take control, and they fight them to the last person. So you have these effectively independent areas in de facto, if not de jure, that existed in the Americas for a very long time. There are some in the US, too, in the great dismal swamp, and you hear about those marooned communities in North Carolina, in Mexico, where everybody just agreed “ these places aren't actually under our control, but we're not going to say anything.” If they don't mess with us too much, we won't mess with them too much. Is that successful or not? I don't know.", "Dwarkesh Patel", "Yeah, but it seems like these are temporary successes..", "Charles C. Mann", "I mean, how long did nations last? Like Genghis Khan! How long did the Khan age last? But basically, they had overwhelming odds against them. There’s an entire colonial system that was threatened by their existence. Similar to the reasons that rebellions in South Asia were suppressed with incredible brutality–– these were seen as so profoundly threatening to this entire colonial order that people exerted a lot more force against them than you would think would be worthwhile.", "Dwarkesh Patel", "Right. It reminds me of James Scott's Against the Grain . He pointed out that if you look at the history of agriculture, there're many examples where people choose to run away as foragers in the forest, and then the state tries to bring them back into the fold.", "Charles C. Mann", "Right. And so this is exactly part of that dynamic. I mean, who wants to be a slave, right? So as many people as possible ended up leaving. It’s easier in some places than others.. it's very easy in Brazil. There are 20 million people in the Brazilian Amazon and the great bulk of them are the descendants of people who left slavery. They're still Brazilians and so forth, but, you know, they ended up not being slaves.", "Slavery Ban", "Dwarkesh Patel", "Yeah, that's super fascinating. What is the explanation for why slavery went from being historically ever-present to ending at a particular time when it was at its peak in terms of value and usefulness? What's the explanation for why, when Britain banned the slave trade, within 100 or 200 years, there ended up being basically no legal sanction for slavery anywhere in the world?", "Charles C. Mann", "This is a really good question and the real answer is that historians have been arguing about this forever. I mean, not forever, but you know, for decades, and there's a bunch of different explanations. I think the reason it's so hard to pin down is… kind of amazing. I mean, if you think about it, in 1800, if you were to have a black and white map of the world and put red in countries in which slavery was illegal and socially accepted, there would be no red anywhere on the planet. It’s the most ancient human institution that there is. The Code of Hammurabi is still the oldest complete legal code that we have, and about a third of it is about rules for when you can buy slaves, when you can sell slaves, how you can mistreat them, and how you can’t–– all that stuff. About a third of it is about buying, selling, and working other human beings. So this has been going on for a very, very long time. And then in a century and a half, it suddenly changes.", "So there's some explanation, and it's that machinery gets better. But the reason to have people is that you have these intelligent autonomous workers, who are like the world's best robots. From the point of view of the owner, they're fantastically good, except they're incredibly obstreperous and when they're caught, you're constantly afraid they're going to kill you. So if you have a chance to replace them with machinery, or to create a wage where you can run wage people, pay wage workers who are kept in bad conditions but somewhat have more legal rights, then maybe that's a better deal for you. Another one is that industrialization produced different kinds of commodities that became more and more valuable, and slavery was typically associated with the agricultural laborer. So as agriculture diminished as a part of the economy, slavery become less and less important and it became easier to get rid of them. Another one has to do with the beginning of the collapse of the colonial order. Part of it has to do with.. (at least in the West, I don't know enough about the East) the rise of a serious abolition movement with people like Wilberforce and various Darwins and so forth. And they're incredibly influential, so to some extent, I think people started saying, “Wow, this is really bad.” I suspect that if you looked at South Asia and Africa, you might see similar things having to do with a social moment, but I just don't know enough about that. I know there's an anti-slavery movement and anti-caste movement in which we're all tangled up in South Asia, but I just don't know enough about it to say anything intelligent.", "Dwarkesh Patel", "Yeah, the social aspect of it is really interesting. The things you mentioned about automation, industrialization, and ending slavery… Obviously, with time, that might have actually been why it expanded, but its original inception in Britain happened before the Industrial Revolution took off. So that was purely them just taking a huge loss because this movement took hold.", "Charles C. Mann", "And the same thing is true for Bartolome de Las Casas . I mean, Las Casas, you know, in the 1540s just comes out of nowhere and starts saying, “ Hey! This is bad.” He is the predecessor of the modern human rights movement. He’s an absolutely extraordinary figure, and he has huge amounts of influence. He causes Spain’s king in the 1540s to pass what they call The New Laws which says no more slavery, which is a devastating blow enacted to the colonial economy in Spain because they depended on having slaves to work in the silver mines in the northern half of Mexico and in Bolivia, which was the most important part of not only the Spanish colonial economy but the entire Spanish empire. It was all slave labor. And they actually tried to ban it. Now, you can say they came to their senses and found a workaround in which it wasn't banned. But it's still… this actually happened in the 1540s. Largely because people like Las Casas said, “This is bad! you're going to hell doing this.”", "Contingency & The Pyramids", "Dwarkesh Patel", "Right. I'm super interested in getting into The Wizard and the Prophet section with you. Discussing how movements like environmentalism, for example, have been hugely effective. Again, even though it probably goes against the naked self-interest of many countries. So I'm very interested in discussing that point about why these movements have been so influential!", "But let me continue asking you about globalization in the world. I'm really interested in how you think about contingency in history, especially given that you have these two groups of people that have been independently evolving and separated for tens of thousands of years. What things turn out to be contingent? What I find really interesting from the book was how both of them developed pyramids––  who would have thought that structure would be within our extended phenotype or something?", "Charles C. Mann", "It's also geometry! I mean, there's only a certain limited number of ways you can pile up stone blocks in a stable way. And pyramids are certainly one of them. It's harder to have a very long-lasting monument that's a cylinder. Pyramids are also easier to build: if you get a cylinder, you have to have scaffolding around it and it gets harder and harder.", "With pyramids, you can use each lower step to put the next one, on and on, and so forth. So pyramids seem kind of natural to me. Now the material you make them up of is going to be partly determined by what there is. In Cahokia and in the Mississippi Valley, there isn't a lot of stone. So people are going to make these earthen pyramids and if you want them to stay on for a long time, there’s going to be certain things you have to do for the structure which people figured out. For some pyramids, you had all this marble around them so you could make these giant slabs of marble, which seems, from today's perspective, incredibly wasteful. So you're going to have some things that are universal like that, along with the apparently universal, or near-universal idea that people who are really powerful like to identify themselves as supernatural and therefore want to be commemorated.", "Dwarkesh Patel", "Yes, I visited Mexico City recently.", "Charles C. Mann Beautiful city!", "Teotihuacan", "Dwarkesh Patel Yeah, the pyramids there… I think I was reading your book at the time or already had read your book. What struck me was that if I remember correctly, they didn't have the wheel and they didn't have domesticated animals. So if you really think about it, that’s a really huge amount of human misery and toil it must have taken to put this thing together as basically a vanity project. It’s like a huge negative connotation if you think about what it took to construct it.", "Charles C. Mann", "Sure, but there are lots of really interesting things about Teotihuacan . This is just one of those things where you can only say so much in one book. If I was writing the two-thousand-page version of 1491, I would have included this. So Tehuácan pretty much starts out as a standard Imperial project, and they build all these huge castles and temples and so forth. There's no reason to suppose it was anything other than an awful experience (like building the pyramids), but then something happened to Teotihuacan that we don’t understand. All these new buildings started springing up during the next couple of 100 years, and they're all very very similar. They're like apartment blocks and there doesn't seem to be a great separation between rich and poor. It's really quite striking how egalitarian the architecture is because that's usually thought to be a reflection of social status. So based on the way it looks, could there have been a political revolution of some sort? Where they created something much more egalitarian, probably with a bunch of good guy kings who weren't interested in elevating themselves so much? There's a whole chapter in the book by David Wingrove and David Graeber, The Dawn of Everything about this, and they make this argument that Tehuácan is an example that we can look at as an ancient society that was much more socially egalitarian than we think. Now, in my view, they go a little overboard–– it was also an aggressive imperial power and it was conquering much of the Maya world at the same time. But it is absolutely true that something that started out one way can start looking very differently quite quickly. You see this lots of times in the Americas in the Southwest–– I don't know if you've ever been to Chaco Canyon or any of those places, but you should absolutely go! Unfortunately, it's hard to get there because of the roads terrible but overall, it’s totally worth it. It's an amazing place. Mesa Verde right north of it is incredible, it's just really a fantastic thing to see. There are these enormous structures in Chaco Canyon, that we would call castles if they were anywhere else because they're huge . The biggest one, Pueblo Bonito , is like 800 rooms or some insane number like that. And it's clearly an imperial venture, we know that because it's in this canyon and one side is getting all the good light and good sun–– a whole line of these huge castles. And then on the other side is where the peons lived. We also know that starting around 1100, everybody just left! And then their descendants start the Puebla , who are these sort of intensely socially egalitarian type of people. It looks like a political revolution took place. In fact, in the book I'm now writing, I'm arguing (in a sort of tongue-in-cheek manner but also seriously) that this is the first American Revolution! They got rid of these “kings” and created these very different and much more egalitarian societies in which ordinary people had a much larger voice about what went on.", "Dwarkesh Patel", "Interesting. I think I got a chance to see the Teotihuacan apartments when I was there, but I wonder if we’re just looking at the buildings that survived. Maybe the buildings that survived were better constructed because they were for the elites? The way everybody else lived might have just washed away over the years.", "Charles C. Mann", "So what's happened in the last 20 years is basically much more sophisticated surveys of what is there. I mean, what you're saying is absolutely the right question to ask. Are the rich guys the only people with things that survived while the ordinary people didn’t? You can never be absolutely sure, but what they did is they had these ground penetrating radar surveys , and it looks like this egalitarian construction extends for a huge distance. So it's possible that there are more really, really poor people. But at least you'd see an aggressively large “middle class” getting there, which is very, very different from the picture you have of the ancient world where there's the sun priest and then all the peasants around them.", "New Book Thesis", "Dwarkesh Patel", "Yeah. By the way, is the thesis of the new book something you're willing to disclose at this point? It’s okay if you’re not––", "Charles C. Mann", "Sure sure, it’s okay! This is a sort of weird thing, it's like a sequel or offshoot of 1491. That book, I'm embarrassed to say, was supposed to end with another chapter. The chapter was going to be about the American West, which is where I grew up, and I'm very fond of it. And apparently, I had a lot to say because when I outlined the chapter; the outline was way longer than the actual completed chapters of the rest of the book. So I sort of tried to chop it up and so forth, and it just was awful. So I just cut it. If you carefully look at 1491, it doesn't really have an ending. At the end, the author sort of goes, “Hey! I'm ending, look at how great this is!” So this has been bothering me for 15 years. During the pandemic, when I was stuck at home like so many other people, I held out what I had since I've been saving string and tossing articles that I came across into a folder, and I thought, “Okay, I'm gonna write this out more seriously now.” 15 or 20 years later. And then it was pretty long so I thought “Maybe this could be an e-book.” then I showed it to my editor. And he said, “ That is not an e-book. That's an actual book. ” So I take a chapter and hope I haven't just padded it, and it's about the North American West. My kids like the West, and at various times, they've questioned what it would be like to move out there because I'm in Massachusetts, where they grew up. So I started thinking “What is the West going to be like, tomorrow? When I'm not around 30 or 50 years from now?” It seems to be that you won't know who's president or who's governor or anything, but there are some things we can know. It’d be hotter and drier than it is now or has been in the recent past, like that wouldn’t really be a surprise. So I think we can say that it's very likely to be like that. All the projections are that something like 40% of the people in the area between the Mississippi and the Pacific will be of Latino descent–– from the south, so to speak. And there's a whole lot of people from Asia along the Pacific coast, so it's going to be a real ethnic mixing ground. There's going to be an epicenter of energy, sort of no matter what happens. Whether it's solar, whether it's wind, whether it's petroleum, or hydroelectric, the West is going to be economically extremely powerful, because energy is a fundamental industry. And the last thing is (and this is the iffiest of the whole thing), but I'm going to go out on a limb and say that the ongoing recuperation of sovereignty by the 294 federally recognized Native nations in the West is going to continue . That's been going in this very jagged way, but definitely for the last 50 or 60 years, as long as I've been around, the overall trend is in a very clear direction. So then you think, okay, this West is going to be wildly ethnically diverse, full of competing sovereignties and overlapping sovereignties. Nature is also going to really be in kind of a terminal. Well, that actually sounds like the 1200s! And the conventional history starts with Lewis and Clark and so forth.", "There’s this breakpoint in history when people who looked like me came in and sort of rolled in from the East and kind of took over everything. And the West disappears! That separate entity, the native people disappear, and nature is tamed. That's pretty much what was in the textbooks when I was a kid. Do you know who Frederick Jackson Turner is?", "Dwarkesh Patel", "No.", "Charles C. Mann", "So he's like one of these guys where nobody knows who he is. But he was incredibly influential in setting intellectual ideas. He wrote this article in 1893, called The Significance of the Frontier . It was what established this idea that there’s this frontier moving from East to West and on this side was savagery and barbarism, and on this other side of civilization was team nature and wilderness and all that. Then it goes to the Pacific, and that’s the end of the West. That's still in the textbooks but in a different form: we don't call native people “lurking savages” as he did. But it's in my kids' textbooks. If you have kids, it'll very likely be in their textbook because it's such a bedrock. What I'm saying is that's actually not a useful way to look at it, given what's coming up. A wonderful Texas writer, Bruce Sterling, says, “To know the past, you first have to understand the future.”", "It's funny, right? But what he means is that all of us have an idea of where the trajectory of history is going. A whole lot of history is about asking, “How did we get here? How do we get there?” To get that, you have to have an idea of what the “there” is. So I'm saying, I'm writing a history of the West with that West that I talked about in mind. Which gives you a very different picture: a lot more about indigenous fire management, the way the Hohokam survived the drought of the 1200s, and a little bit less about Billy the Kid.", "Gender Ratios and Silicon Valley", "Dwarkesh Patel", "I love that quote hahaha. Speaking of the frontier, maybe it's a mistaken concept, but I remember that in a chapter of 1493, you talk about these rowdy adventurer men who outnumber the women in the silver mines and the kind of trouble that they cause. I wonder if there's some sort of distant analogy to the technology world or Silicon Valley, where you have the same kind of gender ratio and you have the same kind of frontier spirit? Maybe not the same physical violence––– more sociologically. Is there any similarity there?", "Charles C. Mann", "I think it's funny, I hadn't thought about it. But it's certainly funny to think about. So let me do this off the top of my head. I like the idea that at the end of it, I can say, “wait, wait, that's ridiculous.“ Both of them would attract people who either didn't have much to lose, or were oblivious about what they had to lose, and had a resilience towards failure. I mean, it's amazing, the number of people in Silicon Valley who have completely failed at numbers of things! They just get up and keep‌ trying and have a kind of real obliviousness to social norms. It's pretty clear they are very much interested in making a mark and making their fortunes themselves. So there's at least a sort of shallow comparison, there are some certain similarities. I don't think this is entirely flattering to either group. It’s absolutely true that those silver miners in Bolivia, and in northern‌ Mexico, created to a large extent, the modern world. But it's also true that they created these cesspools of violence and exploitation that had consequences we're still living with today. So you have to kind of take the bitter with the sweet. And I think that's true of Silicon Valley and its products *chuckles* I use them every day, and I curse them every day.", "Dwarkesh Patel", "Right.", "Charles C. Mann", "I want to give you an example. The internet has made it possible for me to do something like write a Twitter thread, get millions of people to read it, and have a discussion that's really amazing at the same time. Yet today, The Washington Post has an article about how every book in Texas (it’s one of the states) a child checks out of the school library goes into a central state databank. They can see and look for patterns of people taking out “bad books” and this sort of stuff. And I think “whoa, that's really bad! That's not so good.” It's really the same technology that brings this dissemination and collection of vast amounts of information with relative ease. So with all these things, you take the bitter with the sweet.", "Technological Stupidity in the New World", "Dwarkesh Patel", "I want to ask you again about contingency because there are so many other examples where things you thought would be universal actually don't turn out to be. I think you talked about how the natives had different forms of metallurgy, with gold and copper, but then they didn't do iron or steel. You would think that given their “warring nature”, iron would be such a huge help. There's a clear incentive to build it. Millions of people living there could have built or developed this technology. Same with the steel, same with the wheel. What’s the explanation for why these things you think anybody would have come up with didn't happen?", "Charles C. Mann", "I know. It's just amazing to me! I don't know. This is one of those things I think about all the time. A few weeks ago, it rained, and I went out to walk the dog. I'm always amazed that there are literal glistening drops of water on the crabgrass and when you pick it up, sometimes there are little holes eaten by insects in the crabgrass. Every now and then, if you look carefully, you'll see a drop of water in one of those holes and it forms a lens. And you can look through it! You can see that it's not a very powerful lens by any means, but you can see that things are magnified. So you think “How long has there been crabgrass? Or leaves? And water?” Just forever! We've had glass forever! So how is it that we had to wait for whoever it was to create lenses? I just don't get it. In book 1491, I mentioned the moldboard plow , which is the one with a curving blade that allows you to go through the soil much more easily. It was invented in China thousands of years ago, but not around in Europe until the 1400s. Like, come on, guys! What was it? And so, you know, there's this mysterious sort of mass stupidity.", "One of the wonderful things about globalization and trade and contact is that maybe not everybody is as blind as you and you can learn from them. I mean, that's the most wonderful thing about trade. So in the case of the wheel, the more amazing thing is that in Mesoamerica, they had the wheel on child's toys. Why didn’t they develop it? The best explanation I can get is they didn't have domestic animals. A cart then would have to be pulled by people. That would imply that to make the cart work, you'd have to cut a really good road. Whereas they had these travois , which are these things that you hold and they have these skids that are shaped kind of like an upside-down V. You can drag them across rough ground, you don't need a road for them. That's what people used in the Great Plains and so forth. So you look at this, and you think “maybe this was the ultimate way to save labor. I mean, this was good enough. And you didn't have to build and maintain these roads to make this work” so maybe it was rational or just maybe they're just blinkered. I don’t know. As for assembly with steel, I think there's some values involved in that. I don't know if you've ever seen one of those things they had in Mesoamerica called Macuahuitl. They’re wooden clubs with obsidian blades on them and they are sharp as hell. You don't run your finger along the edge because they just slice it open. An obsidian blade is pretty much sharper than any iron or steel blade and it doesn't rust. Nice. But it's much more brittle. So okay, they're there, and the Spaniards were really afraid of them. Because a single blow from these heavy sharp blades could kill a horse. They saw people whack off the head of a horse carrying a big strong guy with a single blow! So they're really dangerous, but they're not long-lasting. Part of the deal was that the values around conflict were different in the sense that conflict in Mesoamerica wasn't a matter of sending out foot soldiers in grunts, it was a chance for soldiers to get individual glory and prestige. This was associated with having these very elaborately beautiful weapons that you killed people with. So maybe not having steel worked better for their values and what they were trying to do at war. That would’ve lasted for years and I mean, that's just a guess. But you can imagine a scenario where they’re not just blinkered but instead expressive on the basis of their different values. This is hugely speculative. There's a wonderful book by Ross Hassig about old Aztec warfare. It's an amazing book which is about the military history of The Aztecs and it's really quite interesting. He talks about this a little bit but he finally just says we don't know why they didn't develop all these technologies, but this worked for them.", "Dwarkesh Patel", "Interesting. Yeah, it's kind of similar to China not developing gunpowder into an actual ballistic material––", "Charles C. Mann", "Or Japan giving up the gun! They actually banned guns during the Edo period. The Portuguese introduced guns and the Japanese used them, and they said “Ahhh nope! Don’t want them.” and they banned them. This turned out to be a terrible idea when Perry came in the 1860s. But for a long time, supposedly under the Edo period, Japan had the longest period of any nation ever without a foreign war.", "Dwarkesh Patel", "Hmm. Interesting. Yeah, it's concerning when you think the lack of war might make you vulnerable in certain ways.", "Charles C. Mann", "Yeah, that's a depressing thought.", "Religious Demoralization", "Dwarkesh Patel", "Right. In Fukuyama’s The End of History , he's obviously arguing that liberal democracy will be the final form of government everywhere. But there’s this point he makes at the end where he's like, “Yeah, but maybe we need a small war every 50 years or so just to make sure people remember how bad it can get and how to deal with it.” Anyway, when the epidemic started in the New World, surely the Indians must have had some story or superstitious explanation–– some way of explaining what was happening. What was it?", "Charles C. Mann", "You have to remember, the germ theory of disease didn't exist at the time. So neither the Spaniards, or the English, or the native people, had a clear idea of what was going on. In fact, both of them thought of it as essentially a spiritual event, a religious event. You went into areas that were bad, and the air was bad. That was malaria, right? That was an example. To them, it was God that was in control of the whole business. There's a line from my distant ancestor––the Governor Bradford of Plymouth Colony, who's my umpteenth, umpteenth grandfather, that's how waspy I am, he's actually my ancestor––about how God saw fit to clear the natives for us. So they see all of this in really religious terms, and more or less native people did too! So they thought over and over again that “we must have done something bad for this to have happened.” And that’s a very powerful demoralizing thing. Your God either punished you or failed you. And this was it. This is one of the reasons that Christianity was able to make inroads. People thought “Their god is coming in and they seem to be less harmed by these diseases than people with our God.” Now, both of them are completely misinterpreting what's going on! But if you have that kind of spiritual explanation, it makes sense for you to say, “Well, maybe I should hit up their God.”", "Critiques of Civilization Collapse Theories", "Dwarkesh Patel", "Yeah, super fascinating. There's been a lot of books written in the last few decades about why civilizations collapse. There's Joseph Tainter’s book , there’s Jared Diamond's book. Do you feel like any of them actually do a good job of explaining how these different Indian societies collapsed over time?", "Charles C. Mann", "No. Well not the ones that I've read. And there are two reasons for that. One is that it's not really a mystery. If you have a society that's epidemiologically naive, and smallpox sweeps in and kills 30% of you, measles kills 10% of you, and this all happens in a short period of time, that's really tough! I mean COVID killed one million people in the United States. That's 1/330th of the population. And it wasn't even particularly the most economically vital part of the population. It wasn't kids, it was elderly people like my aunt–– I hope I'm not sounding callous when I'm describing it like a demographer. Because I don't mean it that way. But it caused enormous economic damage and social conflict and so forth. Now, imagine something that's 30 or 40 times worse than that, and you have no explanation for it at all. It's kind of not a surprise to me that this is a super challenge. What's actually amazing is the number of nations that survived and came up with ways to deal with this incredible loss.", "That relates to the second issue, which is that it's sort of weird to talk about collapse in the ways that they sometimes do. Like both of them talk about the Mayan collapse. But there are 30 million Mayan people still there. They were never really conquered by the Spaniards. The Spaniards were still waging giant wars in Yucatan in the 1590s. In the early 21st century, I went with my son to Chiapas, which is the southernmost exit province. And that is where the Commandante Cero and the rebellions were going on. We were looking at some Mayan ruins, and they were too beautiful, and I stayed too long, and we were driving back through the night on these terrible roads. And we got stopped by some of these guys with guns. I was like, “Oh God, not only have I got myself into this, I got my son into this. ” And the guy comes and looks at us and says, “Who are you?” And I say that we're American tourists. And he just gets this disgusted look, and he says, “Go on.” And you know, the journalist in me takes over and I ask, “What do you mean, just go on?” And he says, “We're hunting for Mexicans.” And as I’m driving I’m like “Wait a minute, I'm in Mexico.” And that those were Mayans. All those guys were Maya people still fighting against the Spaniards. So it's kind of funny to say that their society collapsed when there are Mayan radio stations, there are Maya schools, and they're speaking Mayan in their home. It's true, they don't have giant castles anymore. But, it's odd to think of that as collapse. They seem like highly successful people who have dealt pretty well with a lot of foreign incursions. So there's this whole aspect of “What do you mean collapse?” And you see that in Against the Grain, the James Scott book , where you think, “What do you mean barbarians?” If you're an average Maya person, working as a farmer under the purview of these elites in the big cities probably wasn't all that great. So after the collapse, you're probably better off. So all of that I feel is important in this discussion of collapse. I think it's hard to point to collapses that either have very clear exterior causes or are really collapses of the environment. Particularly the environmental sort that are pictured in books like Diamond has , where he talks about Easter Island. The striking thing about that is we know pretty much what happened to all those trees. Easter Island is this little speck of land, in the middle of the ocean, and Dutch guys come there and it's the only wood around for forever, so they cut down all the trees to use it for boat repair, ship repair, and they enslave most of the people who are living there. And we know pretty much what happened. There's no mystery about it.", "Virginia Company + Hubris", "Dwarkesh Patel", "Why did the British government and the king keep subsidizing and giving sanctions to the Virginia Company, even after it was clear that this is not especially profitable and half the people that go die? Why didn't they just stop?", "Charles C. Mann", "That's a really good question. It's a super good question. I don't really know if we have a satisfactory answer, because it was so stupid for them to keep doing that. It was such a loss for so long. So you have to say, they were thinking, not purely economically. Part of it is that the backers of the Virginia Company, in sort of classic VC style, when things were going bad, they lied about it. They're burning through their cash, they did these rosy presentations, and they said, “ It's gonna be great! We just need this extra money.” Kind of the way that Uber did. There's this tremendous burn rate and now the company says you're in tremendous trouble because it turns out that it's really expensive to provide all these calves and do all this stuff. The cheaper prices that made people like me really happy about it are vanishing.", "So, you know, I think future business studies will look at those rosy presentations and see that they have a kind of analogy to the ones that were done with the Virginia Company. A second thing is that there was this dog-headed belief kind of based on the inability to understand longitude and so forth, that the Americas were far narrower than they actually are. I reproduced this in 1493. There were all kinds of maps in Britain at the time showing these little skinny Philippine-like islands. So there's the thought that you just go up the Chesapeake, go a couple 100 miles, and you're gonna get to the Pacific into China. So there's this constant searching for a passage to China through this thought to be very narrow path. Sir Francis Drake and some other people had shown that there was a West Coast so they thought the whole thing was this narrow, Panama-like landform. So there's this geographical confusion. Finally, there's the fact that the Spaniards had found all this gold and silver, which is an ideal commodity, because it's not perishable: it's small, you can put it on your ship and bring it back, and it's just great in every way. It's money, essentially. Basically, you dig up money in the hills and there's this long-standing belief that there's got to be more of that in the Americas, we just need to find out where. So there's always that hope. Lastly, there's the Imperial bragging rights. You know, we can't be the only guys with a colony. You see that later in the 19th century when Germany became a nation and one of the first things it does is “Let’s look for pieces of Africa that the rest of Europe hasn't claimed,” and they set up their own mini colonial empire. So there's this kind of “Keeping Up with the Joneses” aspect, it just seems to be sort of deep in the European ruling class. So then you got to have an empire that in this weird way, seems very culturally part of it. I guess it's the same for many other places. As soon as you feel like you have a state together, you want to index other things. You see that over and over again, all over the world . So that's part of it. All those things, I think, contributed to this. Outright lying, this delusion, other various delusions, plus hubris.", "Dwarkesh Patel", "It seems that colonial envy has today probably spread to China. I don't know too much about it, but I hear that the Silk Road stuff they're doing is not especially economically wise. Is this kind of like when you have the impulse where if you're a nation trying to rise, you have that “I gotta go here, I gotta go over there––", "Charles C. Mann", "Yeah and “Show what a big guy I am. Yeah,––", "China’s Silver Trade", "Dwarkesh Patel", "Exactly. So speaking of China, I want to ask you about the silver trade . Excuse another tortured analogy, but when I was reading that chapter where you're describing how the Spanish silver was ending up with China and how the Ming Dynasty caused too much inflation. They needed more reliable mediums of exchange, so they had to give up real goods from China, just in order to get silver, which is just a medium of exchange––but it’s not creating more apples, right? I was thinking about how this sounds a bit like Bitcoin today, (obviously to a much smaller magnitude) but in the sense that you're using up goods. It's a small amount of electricity, all things considered, but you're having to use up real energy in order to construct this medium of exchange. Maybe somebody can claim that this is necessary because of inflation or some other policy mistake and you can compare it to the Ming Dynasty. But what do you think about this analogy? Is there a similar situation where real goods are being exchanged for just a medium of exchange?", "Charles C. Mann", "That's really interesting. I mean, on some level, that's the way money works, right? I go into a store, like a Starbucks and I buy a coffee, then I hand them a piece of paper with some drawings on it, and they hand me an actual coffee in return for a piece of paper. So the mysteriousness of money is kind of amazing. History is of course replete with examples of things that people took very seriously as money. Things that to us seem very silly like the cowry shell or in the island of Yap where they had giant stones ! Those were money and nobody ever carried them around. You transferred the ownership of the stone from one person to another person to buy something. I would get some coconuts or gourds or whatever, and now you own that stone on the hill. So there's a tremendous sort of mysteriousness about the human willingness to assign value to arbitrary things such as (in Bitcoin’s case) strings of zeros and ones. That part of it makes sense to me. What’s extraordinary is when the effort to create a medium of exchange ends up costing you significantly–– which is what you're talking about in China where people had a medium of exchange, but they had to work hugely to get that money. I don't have to work hugely to get a $1 bill, right? It's not like I'm cutting down a tree and smashing the papers to pulp and printing. But you're right, that's what they're kind of doing in China. And that's, to a lesser extent, what you're doing in Bitcoin. So I hadn't thought about this, but Bitcoin in this case is using computer cycles and energy. To me, it's absolutely extraordinary the degree to which people who are Bitcoin miners are willing to upend their lives to get cheap energy. A guy I know is talking about setting up small nuclear plants as part of his idea for climate change and he wants to set them up in really weird remote areas. And I was asking “Well who would be your customers?” and he says Bitcoin people would move to these nowhere places so they could have these pocket nukes to privately supply their Bitcoin habits. And that’s really crazy! To completely upend your life to create something that you hope is a medium of exchange that will allow you to buy the things that you're giving up. So there's a kind of funny aspect to this. That was partly what was happening in China. Unfortunately, China's very large, so they were able to send off all this stuff to Mexico so that they could get the silver to pay their taxes, but it definitely weakened the country.", "Wizards vs. Prophets", "Dwarkesh Patel", "Yeah, and that story you were talking about, El Salvador actually tried it. They were trying to set up a Bitcoin city next to this volcano and use the geothermal energy from the volcano to incentivize people to come there and mine cheap Bitcoin. Staying on the theme of China, do you think the prophets were more correct, or the wizards were more correct for that given time period? Because we have the introduction of potato, corn, maize, sweet potatoes, and this drastically increases the population until it reaches a carrying capacity. Obviously, what follows is the other kinds of ecological problems this causes and you describe these in the book. Is this evidence of the wizard worldview that potatoes appear and populations balloon? Or are the prophets like “No, no, carrying capacity will catch up to us eventually.”", "Charles C. Mann", "Okay, so let me interject here. For those members of your audience who don't know what we're talking about. I wrote this book, The Wizard and the Prophet. And it’s about these two camps that have been around for a long time who have differing views regarding how we think about energy resources, the environment, and all those issues. The wizards, that's my name for them––Stuart Brand called them druids and, in fact, originally, the title was going to involve the word druid but my editor said, “Nobody knows what a Druid is” so I changed it into wizards –– and anyway the wizards would say that science and technology properly applied can allow you to produce your way out of these environmental dilemmas. You turn on the science machine, essentially, and then we can escape these kinds of dilemmas. The prophets say “No. Natural systems are governed by laws and there's an inherent carrying capacity or limit or planetary boundary.” there are a bunch of different names for them that say you can't do more than so much. So what happened in China is that European crops came over. One of China's basic geographical conditions is that it's 20% of the Earth's habitable surface area, or it has 20% of the world's population, but only has seven or 8% of the world's above-ground freshwater. There are no big giant lakes like we have in the Great Lakes. And there are only a couple of big rivers, the Yangtze and the Yellow River. The main staple crop in China has to be grown in swimming pools, and that's you know, rice. So there's this paradox, which is “How do you keep people fed with rice in a country that has very little water? ” If you want a shorthand history of China, that's it. So prophets believe that there are these planetary boundaries. In history, these are typically called Malthusian Limits after Malthus and the question is: With the available technology at a certain time, how many people can you feed before there's misery? The great thing about history is it provides evidence for both sides. Because in the short run, what happened when American crops came in is that the potato, sweet potato, and maize corn were the first staple crops that were dryland crops that could be grown in the western half of China, which is very, very dry and hot and mountainous with very little water. Population soars immediately afterward, but so does social unrest, misery, and so forth. In the long run, that becomes adaptable when China becomes a wealthy and powerful nation. In the short run, which is not so short (it's a couple of centuries), it really causes tremendous chaos and suffering. So, this provides evidence for both sides. One increases human capacity, and the second unquestionably increases human numbers and that leads to tremendous erosion, land degradation, and human suffering.", "Dwarkesh Patel", "Yeah, that's a thick coin with two sides. By the way, I realized I haven't gotten to all the Wizard and Prophet questions, and there are a lot of them. So I––", "Charles C. Mann", "I certainly have time! I'm enjoying the conversation. One of the weird things about podcasts is that, as far as I can tell, the average podcast interviewer is far more knowledgeable and thoughtful than the average sort of mainstream journalist interviewer and I just find that amazing. I don't understand it. So I think you guys should be hired. You know, they should make you switch roles or something.", "Dwarkesh Patel", "Yeah, maybe.", "Charles C. Mann", "It's a pleasure to be asked these interesting questions about subjects I find fascinating.", "Dwarkesh Patel", "Oh, it's my pleasure to get to talk to you and to get to ask these questions. So let me ask about the Wizard and the Prophet. I just interviewed WIll McCaskill, and we were talking about what ends up mattering most in history. I asked him about Norman Borlaug and said that he’s saved a billion lives. But then McCaskill pointed out, “Well, that's an exceptional result” and he doesn't think the technology is that contingent. So if Borlaug hadn't existed, somebody else would have discovered what he discovered about short wheat stalks anyways. So counterfactually, in a world where Ebola doesn't exist, it's not like a billion people die, maybe a couple million more die until the next guy comes around. That was his view. Do you agree? What is your response?", "Charles C. Mann", "To some extent, I agree. It's very likely that in the absence of one scientist, some other scientist would have discovered this, and I mentioned in the book, in fact, that there's a guy named Swaminathan , a remarkable Indian scientist, who’s a step behind him and did much of the same work. At the same time, the individual qualities of Borlaug are really quite remarkable. The insane amount of work and dedication that he did.. it's really hard to imagine. The fact is that he was going against many of the breeding plant breeding dogmas of his day, that all matters! His insistence on feeding the poor… he did remarkable things. Yes, I think some of those same things would have been discovered but it would have been a huge deal if it had taken 20 years later. I mean, that would have been a lot of people who would have been hurt in the interim! Because at the same time, things like the end of colonialism, the discovery of antibiotics, and so forth, were leading to a real population rise, and the amount of human misery that would have occurred, it's really frightening to think about. So, in some sense, I think he's (Will McCaskill) right. But I wouldn't be so glib about those couple of million people.", "Dwarkesh Patel", "Yeah. And another thing you might be concerned about is that given the hostile attitude that people had towards the green revolution right after, if the actual implementation of these different strains of biochar sent in India, if that hadn't been delayed, it's not that weird to imagine a scenario where the governments there are just totally won over by the prophets and they decide to not implant this technology at all. If you think about what happened to nuclear power in the 70s, in many different countries, maybe something similar could have happened to the Green Revolution. So it's important to beat the Prophet. Maybe that's not the correct way to say it. But one way you could put it is: It’s important to beat the prophets before the policies are passed. You have to get a good bit of technology in there.", "Charles C. Mann", "This is just my personal opinion, but you want to listen to the prophets about what the problems are. They're incredible at diagnosing problems, and very frequently, they're right about those things . The social issues about the Green Revolution… they were dead right, they were completely right. I don't know if you then adopt their solutions. It's a little bit like how I feel about my editors–– my editors will often point out problems and I almost never agree with their solutions. The fact is that Borlaug did develop this wheat that came into India, but it probably wouldn't have been nearly as successful if Swaminathan hadn't changed that wheat to make it more acceptable to the culture of India. That was one of the most important parts for me in this book. When I went to Tamil Nadu, I listened to this and I thought, “Oh! I never heard about this part where they took Mexican wheat, and they made it into Indian wheat.” You know, I don't even know if Borlaug ever knew or really grasped that they really had done that! By the way, a person for you to interview is Marci Baranski –– she's got a forthcoming book about the history of the Green Revolution and she sounds great. I'm really looking forward to reading it. So here's a plug for her.", "In Defense of Regulatory Delays", "Dwarkesh Patel", "So if we applied that particular story to today, let's say that we had regulatory agencies like the FDA back then that were as powerful back then as they are now. Do you think it's possible that these new advances would have just dithered in some approval process that took years or decades to complete? If you just backtest our current process for implementing technological solutions, are you concerned that something like the green revolution could not have happened or that it would have taken way too long or something?", "Charles C. Mann", "It's possible. Bureaucracies can always go rogue, and the government is faced with this kind of impossible problem. There's a current big political argument about whether former President Trump should have taken these top-secret documents to his house in Florida and done whatever he wanted to? Just for the moment, let's accept the argument that these were like super secret toxic documents and should not have been in a basement. Let's just say that's true. Whatever the President says is declassified is declassified. Let us say that's true.  Obviously, that would be bad. You would not want to have that kind of informal process because you can imagine all kinds of things–– you wouldn't want to have that kind of informal process in place. But nobody has ever imagined that you would do that because it's sort of nutty in that scenario.", "Now say you write a law and you create a bureaucracy for declassification and immediately add more delay, you make things harder, you add in the problems of the bureaucrats getting too much power, you know–– all the things that you do. So you have this problem with the government, which is that people occasionally do things that you would never imagine. It’s completely screwy. So you put in regulatory mechanisms to stop them from doing that and that impedes everybody else. In the case of the FDA, it was founded in the 30 when some person produced this thing called elixir sulfonamides. They killed hundreds of people! It was a flat-out poison! And, you know, hundreds of people died. You think like who would do that? But somebody did that. So they created this entire review mechanism to make sure it never happened again, which introduced delay, and then something was solidified. Which they did start here because the people who invented that didn't even do the most cursory kind of check. So you have this constant problem. I'm sympathetic to the dilemma faced by the government here in which you either let through really bad things done by occasional people, or you screw up everything for everybody else. I was tracing it crudely, but I think you see the trade-off. So the question is, how well can you manage this trade-off? I would argue that sometimes it's well managed. It's kind of remarkable that we got vaccines produced by an entirely new mechanism, in record time, and they passed pretty rigorous safety reviews and were given to millions and millions and millions of people with very, very few negative effects. I mean, that's a real regulatory triumph there, right? So that would be the counter-example: you have this new thing that you can feed people and so forth. They let it through very quickly. On the other hand, you have things like genetically modified salmon and trees, which as far as I can tell, especially for the chestnuts, they’ve made extraordinary efforts to test. I'm sure that those are going to be in regulatory hell for years to come. *chuckles* You know, I just feel that there's this great problem. These flaws that you identified, I would like to back off and say that this is a problem sort of inherent to government. They're always protecting us against the edge case. The edge case sets the rules, and that ends up, unless you're very careful, making it very difficult for everybody else.", "Dwarkesh Patel", "Yeah. And the vaccines are an interesting example here. Because one of the things you talked about in the book–– one of the possible solutions to climate change is that you can have some kind of geoengineering. Right? I think you mentioned in the book that as long as even one country tries this, then they can effectively (for relatively modest amounts of money), change the atmosphere. But then I look at the failure of every government to approve human challenge trials. This is something that seems like an obvious thing to do and we would have potentially saved hundreds of thousands of lives during COVID by speeding up the vaccine approval. So I wonder, maybe the international collaboration is strong enough that something like geoengineering actually couldn't happen because something like human challenge trials didn't happen.", "Geoengineering", "Charles C. Mann", "So let me give a plug here for a fun novel by my friend, Neal Stephenson, called Termination Shock. Which is about some rich person just doing it. Just doing geoengineering. The fact is that it's actually not actually against the law to fire off rockets into the stratosphere. In his case, it's a giant gun that shoots shells full of sulfur into the upper atmosphere. So I guess the question is, what timescale do you think is appropriate for all this? I feel quite confident that there will be geoengineering trials within the next 10 years. Is that fast enough? That's a real judgment call. I think people like David Keith and the other advocates for geoengineering would have said it should have happened already and that it’s way, way too slow. People who are super anxious about moral hazard and precautionary principles say that that’s way, way too fast. So you have these different constituencies. It's hard for me to think off the top of my head of an example where these regulatory agencies have actually totally throttled something in a long-lasting way as opposed to delaying it for 10 years. I don’t mean to imply that 10 years is nothing. But it’s really killing off something. Is there an example you can think of?", "Dwarkesh Patel", "Well, it's very dependent on where you think it would have been otherwise, like people say maybe it was just bound to be the state.", "Charles C. Mann", "I think that was a very successful case of regulatory capture, in which the proponents of the technology successfully created this crazy…. One of the weird things I really wanted to explain about nuclear stuff is not actually in the book. I actually wrote a whole long section in but cut it out because I felt like it was just too much in the weeds. Anyway, if you have a coal plant, they have environmental rules. The rules are basically based on a threshold principle where you set a safe threshold for the emission of particulates and as long as you're below that threshold, you're fine. Nuclear power has a thing for its main type of pollution, which is radiation. It’s what’s called the Linear Threshold Model . What it says is that you have to reduce radiation to the maximum extent practicable, and essentially, if your nuclear power is way cheaper than coal power (which it is), that means you have more profits so you can spend more money on reducing it. So you're going even further on the road to diminishing returns. You have a completely different regulatory standard for nuclear (I'm talking about this country) than you do for coal. So you have this bizarre fact that coal power plants emit more radiation than nuclear plants do because of the residual radiation when the coal is dug up from underneath the earth. So you have a very strange case of regulatory capture, in which you have a completely inconsistent set of safety standards across different parts of the same industry. So the question to me in sort of an empirical vision is: How common is that? Is this some weird thing that's happened to nuclear?", "Finding New Wizards", "Dwarkesh Patel", "Yeah, yeah. Okay so assume that you're in the 1960s. Let’s say that you’re a philanthropic donor and you’re the 1960s version of an effective altruist who’s interested in doing the most good possible. In retrospect, it's clear that you should have funded Borlaug. I mean, counterfactually, he still does it, but let's just say that his work depends on your funding. How could you have identified work like that? Is there some criteria that is broadly applicable where you could have identified his work in Mexico using it?", "Charles C. Mann", "That's a really good question! I mean, that's the greatest good for the greatest number question. To do that, you would have to say, “What are the biggest problems facing the planet?” And then presumably, if you're Will McCaskill or somebody like that, you say, “All lives are equal. So what is the thing that's most affecting the most number of lives?” In that case, it’s probably clean drinking water. I think that's the biggie. So that means funding primarily urban infrastructure for water and setting up some kind of foundation or some independent agency that's insulated from the government to actually keep those water systems going. That would be my answer to that. That's how you would do it, I think you'd try to figure out, you know, “ What are the bare necessities? What's killing more people than anything else?” In the 1960s, that was probably food and water. The Food and Agriculture Organization, once they got interested in Borlaug actually did a pretty good job of promoting him, and there's the creation of the sea guard system. Water is completely neglected and actually, I would channel it towards water.", "Dwarkesh Patel", "Interesting. Okay. I'm going to name two trends, and I want to know what you think these two imply for the debate between wizards and prophets in the future. So one of the trends is declining research productivity–– in terms of how many new important advancements each researcher is able to make, there's evidence that shows that that’s exponentially decaying.", "Charles C. Mann", "I think that's wrong that they think that and the reason is that in the areas that I'm familiar with, there are two things that are going on. One is in particle physics. It's harder and harder to make discoveries, because of the penalty of your own success, you're pushing harder and harder to really get to where you're going. It's just incredibly expensive. So that's a natural phenomenon. It's not anything really to worry about because, you know, what do you want to do? Undo the past 50 years of success in particle physics? People like Murray Gell-Mann could do a huge amount because we didn't know anything. So you're seeing just plain old diminishing returns. The second thing though, and I think this is something important, is that it feels like agricultural research. The vast majority of research is in a bunch of narrow areas: wheat, rice, maize, and so forth. There are all kinds of alternative crops that are hardly looked at and could be really important, particularly in a time of climate change, when we are going to have to have a much more resilient in creating that varied agricultural system to deal with the uncertainties of climate change. There's hardly any research done in agroforestry–– all those crops are essentially wild. You know, how many people are you know, except for William Powell looking at chestnut there's practically no real genetic research into increasing tree crop productivity. There's also not nearly enough research into things like cassava, where you could do a huge amount. When I talk to people, I always say: go to these other crops that are really, really important. They're going to be even more important in the future, and there's virtually no research on them. By doing that, you could make giant strides rather than being the person who's trying to increasingly optimize wheat–– something that's already been optimized by 10,000 people . So part of it is that there’s this channeling of people into fields that are already well trodden. I think that you can see that in many many areas of research. That would be a partial answer to that question.", "Agroforestry is Underrated", "Dwarkesh Patel", "I see, yeah. So I was going to ask–– if there's declining research productivity, maybe there are less rabbits you could keep pulling out of the hat like Borlaug did. But let me just ask instead with regards to increasing the productivity of trees in order to potentially deal with climate change, what in the book you speculate about and C4 photosynthesis?", "Charles C. Mann", "You know, that's just an example of the kind of thing that you could do.", "Dwarkesh Patel", "What is the status of that? Are you optimistic about that, or?", "Charles C. Mann", "Yeah, they're plugging away. You know, it's a hugely difficult problem, but it's extraordinarily interesting. To get something like C4 rice would be just an absolutely gigantic increase in productivity. But even in drylands areas, there's this method of agriculture that's used in West Africa and places like Northern Mexico called civil pastoral farming where you have ruminants, cows and so forth and trees. To create a system that is way easier on the land, uses way less water, and is almost as productive as annual crops. Almost no research has gone into that. That would be another example of a kind of thing that you could do that I would argue would have a much greater impact than the person trying to get the latest flavor of cherry-flavored nose drops are something–– which is what a huge amount of research is in.", "Dwarkesh Patel", "There have been people speculating recently that environmental contaminants are leading to a host of bad outcomes in health in the West, especially obesity. How plausible do you think this is?", "Charles C. Mann", "I always wonder about the mechanism. What would be the mechanism that these tiny trace amounts of these compounds have in them? How come that as our environment has generally gotten cleaner, obesity has risen? So I'm immediately skeptical of this. One of the issues here is that you're dealing with problems that are on the very limits of our ability to measure. If you're looking for these things, obviously, they have very very long-term effects–– whatever they are. So how are you going to actually ascertain that? People who make very strong claims based on facts without a mechanism that is at the very limits of our ability to measure just doesn't seem like all that promising to me? Yeah, so not impossible. Not impossible. But the claims you might see, I always think, how could they possibly know that?", "Dwarkesh Patel", "Yeah. So one of these people is a good friend of mine: Slime Mold Time Mold who’s an anonymous blogger on the internet. They set up something called a potato diet and it was like a four-week study . I thought you might have interesting thoughts on this, given that the chapters in your book were dedicated to the humble potato and its impact on the world. So basically, they only ate potatoes for four weeks. And as you talked about in the book, potatoes have a bunch of micronutrients that––", "Charles C. Mann", "They’re really good for you! If you're gonna do that, use potatoes.", "Dwarkesh Patel", "Yep. And then people lost a lot of weight. Why? Is it something you would have expected? What do you think of this just recapitulating Irish history?", "Charles C. Mann", "Well, the Irish history is both analogous and not analogous, because it is true–– they ate nothing but potatoes. But also those people who were always vigorously physically exercising, because they were out in the fields with really poor tools. So it's a really different situation from you and me where no matter how many times you go to the gym, that's not the same as working for 10 hours in the fields. Also, the epidemiological environment is so crazy different. All the people in Ireland only live to the age of 40. Anyway, there's a whole host of studies that show that extreme diets always work. You know, for example, people eat nothing but beans, etc. and people lose a lot of weight in the short term. It's really difficult to show that it's possible to keep it off and if it's possible for people to maintain these kinds of diets for long periods of time.", "Dwarkesh Patel", "Right, right. I remember that part of the book where you have that passage from Adam Smith, where he's commenting on how all these Irish people only ate potatoes, but all of them seemed so healthy and beautiful.", "Charles C. Mann", "Well, they're also on the fields and not in London, right? Adam Smith, looking at Edinburgh and places like that, which are the most unhealthy places on the planet!", "Longtermism & Free Markets", "Dwarkesh Patel", "Say you have no discount rate. So you think future people matter exactly as much as current people. Does that shift you more towards the prophet side or the wizard side? Not in absolute terms, but from where you're starting out?", "Charles C. Mann", "I have to say, I'm uncomfortable with this entire thing and this reason is something Will McCaskill talks about. From what I've read of his work, he takes it seriously enough to question how we don't actually know what those future people will want. There's no question that what we want today would have seemed abhorrent to most people in the 1800s. So the idea that we can have any other idea other than they probably want to be alive? It seems much more questionable than I think it does. So, there's two ways to look at it. One is the wizards say-–– we have an idea: “they're going to want to live in this certain kind of utopia and live their longest lives and have the maximum possible physical comfort.” Which is generally what the wizards say. And that seems perfectly reasonable to me. But the prophets might say, “Well, we should be more epistemologically humble. We don't know what they want so let's preserve as many options as possible for them.” That doesn't seem crazy. I personally probably lean more towards the wizards on this. But if a prophet said that to me, I wouldn't say “Oh, you're wrong.” Because that's the same argument about burying nuclear waste, which I also think is very powerful. We should probably not bury it in some system where it can't be gotten rid of for 10,000 years, we should just make sure that we can track it for a couple 100 years, and there'll be more options for people 200 years from now than there are today.", "Dwarkesh Patel", "Okay. What’s wrong with the basic free market objection to the carrying capacity arguments, which goes: let's say we do reach the ends of some resource? Then its price would just increase until you reached some sort of sustainable equilibrium, and people would just decrease their consumption and keep it constant or something. An example: people are concerned that as the developing world gets richer, people are going to eat more meat. But if it's true that it consumes 10 times the energy than the grain that's just like feeding them directly, then that will be represented in the price and so the trend lines might be mistaken because the price of meat will increase or something.", "Charles C. Mann", "I think that's a very powerful argument. But the problem with it is that the kinds of things that we're talking about and care about for carrying capacity are things like bubble gum, or things like food, water, and energy. Those have never, as far as I know, been governed by anything remotely resembling the free market. They aren't today, they never have been in the past. So it seems to me like an interesting thought experiment to imagine what would happen if you truly had a free market for those things. But it also seems pointless because if I had to bet, I bet that it would be the same way it's been for the last couple thousand years. And we don't have a free market, we already have all kinds of weird distortions.", "From your point of view––  the tremendous amount of food that's wasted, the crazy arrangements we have for water in the system, these ludicrous things like.. You're in Texas, right? So you have this thing where Texas has its own independent grid, so it can't trade energy with other states that are nearby. Like, what the heck is that? That's really crazy. I mean, I don't want to pick on Texas, but I think there are equally crazy things all over the place, the impossibility of building long-distance, high-tension lines, because various states have just arbitrarily imposed rules that make it impossible. There are all kinds of crazy things going on. So I guess, what you're saying is it's very likely to be true in a system that will never exist, unfortunately. Because that's kind of a nice idea.", "Dwarkesh Patel", "Right? Okay. That seems like an excellent note to close on. This is extra.. I learned so much from the books and I learned so much from talking to you, so I really really enjoyed this.  The books that we talked about were 1491: New Revelations of America before Columbus . 1493: Uncovering the New World Columbus Created , and The Wizard and the Prophet: Two Remarkable Scientists and Their Dueling Visions to Shape Tomorrow's World if anyone’s interested. So is there any other place that you would like to direct viewers who might want to check out your work?", "Charles C. Mann", "Stay tuned for my book about the West! Which should be coming out next year if I'm at all lucky.", "Dwarkesh Patel", "Okay, and I'd love to have you back on again when it comes out.", "Charles C. Mann", "Oh, sure we can talk about how Texas has got an amazing history. Yeah, definitely. One of the things I learned about the Comanche Indians and their role in Texas history is just totally eye-popping really amazing. So that's actually a very fun part two so we can talk about your Texas roots.", "Dwarkesh Patel", "Definitely. Well, thanks so much for coming on, Charles.", "Charles C. Mann", "Sure. Pleasure. Nice to meet you.", "", "" ]
[ "https://www.amazon.com/1491-Revelations-Americas-Before-Columbus/dp/1400032059", "https://www.amazon.com/1493-Uncovering-World-Columbus-Created/dp/0307278247/ref=d_pd_sbs_sccl_2_1/131-2045401-0836407?pd_rd_w=AGLeO&content-id=amzn1.sym.3676f086-9496-4fd7-8490-77cf7f43f846&pf_rd_p=3676f086-9496-4fd7-8490-77cf7f43f846&pf_rd_r=Y1P2KKKZS75TM3R9K7G3&pd_rd_wg=4hTpK&pd_rd_r=5ca0510e-42a6-436f-9d09-244f18dfaf5b&pd_rd_i=0307278247&psc=1", "https://www.amazon.com/Wizard-Prophet-Remarkable-Scientists-Tomorrows/dp/0307961699", "https://en.wikipedia.org/wiki/Why_Nations_Fail", "https://www.amazon.com/Civilizations-Novel-Laurent-Binet/dp/0374600813", "https://www.history.com/news/hernan-cortes-conquered-aztec-empire", "https://smarthistory.org/aztec-mexica-an-introduction/", "https://www.amazon.com/Anarchy-Relentless-Rise-India-Company/dp/1635573955", "https://www.google.com/search?q=Neil+Ferguson+British+Empire&client=safari&rls=en&sxsrf=ALiCzsYeV2IvoqnowTZoXWYqmeUPFbAlJw%3A1662925654673&ei=VjseY_vhKLm_xc8P_rGwqAk&ved=0ahUKEwj75LznwI36AhW5X_EDHf4YDJUQ4dUDCA0&uact=5&oq=Neil+Ferguson+British+Empire&gs_lcp=Cgdnd3Mtd2l6EAMyBggAEB4QCjoKCAAQRxDWBBCwA0oECEEYAEoECEYYAFD0A1j8E2DVFmgBcAF4AIAB1QGIAe4HkgEFMC4yLjOYAQCgAQHIAQTAAQE&sclient=gws-wiz", "https://en.wikipedia.org/wiki/Tlaxcala", "https://en.wikipedia.org/wiki/Os_Sert%C3%B5es", "https://en.wikipedia.org/wiki/Os_Sert%C3%B5es", "https://en.wikipedia.org/wiki/Os_Sert%C3%B5es", "https://en.wikipedia.org/wiki/Os_Sert%C3%B5es", "https://en.wikipedia.org/wiki/Os_Sert%C3%B5es", "https://www.amazon.com/Against-Grain-History-Earliest-States/dp/0300182910", "https://www.history.com/topics/ancient-history/hammurabi", "https://en.wikipedia.org/wiki/William_Wilberforce", "https://www.britannica.com/biography/Bartolome-de-Las-Casas", "https://www.google.com/search?client=safari&rls=en&q=the+new+laws+las+casas&ie=UTF-8&oe=UTF-8", "https://www.nationalgeographic.com/history/article/teotihuacan", "https://www.amazon.com/Dawn-Everything-New-History-Humanity/dp/0374157359", "https://www.google.com/search?client=safari&rls=en&q=chaco+canyon&ie=UTF-8&oe=UTF-8", "https://en.wikipedia.org/wiki/Pueblo_Bonito", "https://www.britannica.com/place/Puebla-Mexico", "http://wideurbanworld.blogspot.com/2014/10/living-good-life-in-teotihuacan.html", "https://www.deseret.com/2007/8/5/20033572/radar-points-to-aztec-tomb", "https://www.britannica.com/biography/Frederick-Jackson-Turner", "https://www.historians.org/about-aha-and-membership/aha-history-and-archives/historical-archives/the-significance-of-the-frontier-in-american-history-(1893)", "https://www.washingtonpost.com/education/2022/05/10/school-library-database-book-ban/", "https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/moldboard-plows", "https://www.merriam-webster.com/dictionary/travois", "http://macuahuitl/", "https://www.amazon.com/Aztec-Warfare-Expansion-Political-Civilization/dp/B00ZVOZFJG", "https://history.state.gov/milestones/1830-1860/opening-to-japan", "https://www.amazon.com/End-History-Last-Man/dp/0743284550", "https://www.britannica.com/science/germ-theory", "https://www.amazon.com/Collapse-Complex-Societies-Studies-Archaeology/dp/052138673X", "https://www.google.com/search?client=safari&rls=en&sxsrf=ALiCzsYFINj_GlWoVdvHQCejrMhfaT4j2Q:1662927103548&q=Collapse:+How+Societies+Choose+to+Fail+or+Succeed&stick=H4sIAAAAAAAAAE2SzWsTQRjGs1sS0mmVZFtRclrTg6EIyX4km_RSpbWIWIS2gifD7szsR3Z2ZrO7Nskec_HiQRR6k0JPPfgH-AGleBGDePAgeNHqTUR6EjxZG80m3uaZ3_s-8_C-k00Xp8teuaJ2405dnr-hBxiJq47uMYouhaLBmDvgssOKNpGtATcpHnBnylZZkmy1VrWJQQccGDLf9W2NjAo1l7rq6GwFgRInTaYWkmqtqiYaStTzFMscm2o1tydVWoluU8uUWghO-tsx7DTCMVdJU5Ll06yzQy0bntm0TDjJ6KsuNStjTLd9FVZHkWthiwZKwmSpgeLQnDhZKkGTTq_VVePGOIfd7UBS8RInrWp2ggSiJqoQS4sSWCeKJSXQQrakqaY7jhgrmlLV_ptQgNqGTBPdUhpyoxdYI7O61lYbza_cE34m9-3Xp7nCY_7R0zcfuAc8yN1kLMSkt4GJHmG0xYTLIHONRk7UE2YLAIy2KVF5HvzzdmsutFHdz-0e7XDCbTCziaMtts6QY_aENWEVTK9jz8BBeMsUFgBYYYRgGDmMCucKcyBfhuOL8vDDhMWzG39XbkRNqBxy_BJf4opt-e6L9zvvMnfyB_dPTn5e3bhSKC3mQWaVebpD8_vP00cffxwvL86B7JbeZZR5vfx5_3W_ufN9uXhxelt_-7n_5Xg5v1efSonGwuGFtDhV4qX-s9-7L7e1w8XU6UP9g_1XmWyWy6VkPpuKU7N7aWmYVvdDvCReZx1xk0EHRw4OxRV7OCYxYuKa7hCRBeLmPQgxRg8z3B-YhXhPFgMAAA&sa=X&ved=2ahUKEwiYhq2axo36AhX4YPEDHTpABYUQ-BZ6BAgYEAk", "https://en.wikipedia.org/wiki/Ed%C3%A9n_Pastora", "https://www.amazon.com/Against-Grain-History-Earliest-States/dp/0300182910", "https://www.amazon.ae/Last-Tree-Easter-Island/dp/0141997060/ref=asc_df_0141997060/?tag=googleshopp09-21&linkCode=df0&hvadid=406945601003&hvpos=&hvnetw=g&hvrand=3723605432818151334&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=1000013&hvtargid=pla-1224384218521&psc=1", "https://www.britannica.com/biography/Francis-Drake", "https://www.google.com/search?q=Silk+Road+china&client=safari&rls=en&sxsrf=ALiCzsblaLOCl4pn0EJnal9t3GCeKCmSsw:1662927465643&source=lnms&sa=X&ved=2ahUKEwj3xoHHx436AhX9RfEDHawTC20Q_AUoAHoECAEQAg&biw=1365&bih=710&dpr=1", "https://artsandculture.google.com/entity/global-silver-trade-from-the-16th-to-19th-centuries/m013336qk?hl=en", "https://en.wikipedia.org/wiki/Cowrie", "https://www.bbc.com/travel/article/20180502-the-tiny-island-with-human-sized-money", "https://tomorrow.city/a/crypto-city-el-salvador", "https://www.amazon.com/Wizard-Prophet-Remarkable-Scientists-Tomorrows/dp/0307961699", "https://www.britannica.com/biography/Thomas-Malthus", "https://www.britannica.com/biography/Thomas-Malthus", "https://www.nobelprize.org/prizes/peace/1970/borlaug/biographical/", "https://www.britannica.com/biography/M-S-Swaminathan", "https://issues.org/rethinking-the-green-revolution-forum-agricultural-research-baranski-ollenburger/", "https://www.amazon.com/Termination-Shock-Novel-Neal-Stephenson-ebook/dp/B08WLWC6GZ", "https://nuclearsafety.gc.ca/eng/resources/health/linear-non-threshold-model/index.cfm", "https://www.youtube.com/watch?v=E_1493BU2yY", "https://en.wikipedia.org/wiki/C4_carbon_fixation", "https://c4rice.com/", "https://en.wikipedia.org/wiki/Pastoral_farming", "https://www.google.com/search?client=safari&rls=en&q=Slime+Mold+Time+Mold&ie=UTF-8&oe=UTF-8", "https://slimemoldtimemold.com/2022/04/29/potato-diet-community-trial-sign-up-now-lol/", "https://www.amazon.com/1491-Revelations-Americas-Before-Columbus/dp/1400032059", "https://www.amazon.com/1493-Uncovering-World-Columbus-Created/dp/0307278247/ref=d_pd_sbs_sccl_2_1/131-2045401-0836407?pd_rd_w=AGLeO&content-id=amzn1.sym.3676f086-9496-4fd7-8490-77cf7f43f846&pf_rd_p=3676f086-9496-4fd7-8490-77cf7f43f846&pf_rd_r=Y1P2KKKZS75TM3R9K7G3&pd_rd_wg=4hTpK&pd_rd_r=5ca0510e-42a6-436f-9d09-244f18dfaf5b&pd_rd_i=0307278247&psc=1", "https://www.amazon.com/Wizard-Prophet-Remarkable-Scientists-Tomorrows/dp/0307961699", "https://www.tshaonline.org/handbook/entries/comanche-indians" ]
https://www.dwarkesh.com/p/daniel-yergin
Daniel Yergin – Oil Explains the Entire 20th Century
[ "00:00:00 – Beginning of the oil industry", "Dwarkesh Patel 00:00:00", "Today I have the pleasure to chat with Daniel Yergin . He is literally the world's leading authority on energy. His book, The Prize , won the Pulitzer Prize. It's about the entire history of oil. His most recent book is The New Map: Energy, Climate, and the Clash of Nations . Welcome to the podcast, Dr. Yergin.", "Daniel Yergin 00:00:19", "Glad to be with you.", "Dwarkesh Patel 00:00:20", "Here’s my first book question. A book like The Prize is literally a history of the entire 20th century, right? Because everything that’s happened in the last 150 years involves oil. How does one begin to write a book like that?", "Daniel Yergin 00:00:34", "You begin by not realizing what you're doing. I agreed to do that book and I said I'd do it in two years. It took me seven. The story just became so compelling and it became woven in with the history of the 20th century.", "The funny thing was that some years before that, a publisher had flown up from New York to see me when I was teaching at Harvard. She said she had a very interesting idea for a book. I said, \"What?\" She said, \"a history of the 20th century.\" I said, \"That's an interesting idea.\" I thought to myself that it's rather broad and that actually the century wasn't over yet at that point. But somehow that was in the DNA of the book. As I told the story, it really was not the history of the 20th century, but a history of the 20th century.", "Dwarkesh Patel 00:01:25", "I've found that there are a lot of books which are nominally about one subject, but the author just feels a need to say, \"If you really want to understand my topic, you have to understand basically everything else in the world.\" I think of a couple of biographies especially. If you read Caro's biography of LBJ or Kotkin’s of Stalin , it is a history of the entire period in their country's history when this is happening.", "I wonder if this was the case for you. Did you actually just want to write about oil and you just had to write about what's happening in the Middle East, what's happening in Asia? Or no, you set out to write about World War II and World War I and everything?", "Daniel Yergin 00:01:58", "Geopolitics, narrative, storytelling, those are things that are very much in my interest. My first book had actually been a narrative history of the origins of the Soviet-American Cold War . So I brought that perspective to it.", "As I was writing The Prize , I didn't intend to do all of that. But with the discoveries, one thing led to another. I would be amazed and think, “This is an incredible story and no one knows it.” In my mind, I did not do a detailed outline, but the pieces came together in this larger narrative that located oil in this larger context of the 20th century. It made clear how central oil was as a way to understand the 20th century.", "Dwarkesh Patel 00:02:54", "We'll get to The New Map and the contemporary issues around energy later on. First I want to just begin with the beginning of the history of oil. There’s one thing you notice not only in the early stories of oil with people like Drake and Rockefeller , but also even with very modern ones like the frackers like Mitchell and so forth. You have these incredibly risk-taking and strong personalities who have been the dominant characters in the oil industry. I wonder if there's a specific reason that oil attracts this kind of personality.", "Daniel Yergin 00:03:22", "Those are the ones who are successful. It takes a lot of willpower and perseverance. Clearly Rockefeller had an idea of what to do and how. But he was also creating a new kind of business organization as he's doing it, and a new kind of industry at the same time that he was doing it. We jump ahead to this guy, George Mitchell, who's more responsible than anybody else for the shale revolution that has transformed the current position of the United States in the world. He kept at it for 18 years when people told him, “You're wasting your money, you're wasting your time.” He said, “Well it's my money and I'll waste it.” But one of the things that comes through in the book is the power of willpower.", "Dwarkesh Patel 00:04:05", "One thing that really struck me is how fast things kick off. In 1859, Colonel Drake hits the first oil well in Pennsylvania. In less than a decade, you have many oil boom towns and oil busts and Standard Oil is formed. Millions of barrels of crude are being pumped out every year. I don't know if there's been any deployment like that since. What was it like?", "Daniel Yergin 00:04:32", "When I think about what we saw with the oil industry, then what we saw with the automobile industry in the 1920s, it’s kind of like what we saw with the internet at the beginning of the 21st century. Another example that always struck me is the movie industry. At one point, you have guys who are showing these silent movies in vaudeville houses for five cents. 15 years later, they're living in mansions on Long Island and have chauffeurs.", "It is striking to see these businesses that come from nowhere and then they just take off and gravitate and develop so quickly when people grab hold in 10 to 15 years. I was writing something comparing the energy position of the United States in the eighties and today. It's a while back certainly, but there was no tech. Nobody talked about tech. It didn't exist. Now we talk about Big Tech , the way people talked about Big Oil .", "Dwarkesh Patel 00:05:32", "The analogy of the internet is interesting. With the internet in the 90s, you have this big Internet bubble, the dot-com bubble , and a lot of people lose money. But they were fundamentally investing in something that actually was a real technology and actually did transform the world.", "In many cases through energy you have investors who go broke, but… Fracking is a particularly good example of this. They've changed the geopolitical situation in the United States, but they've been so right that they've eaten away at each other's profits.", "Daniel Yergin 00:06:05", "You saw that in the 19th century. That was one thing when I was writing about the beginning of the 20th century and the end of the 19th century. It's far away and yet it felt contemporary because you saw a very similar pattern. You saw booms and busts. You saw trees that were going to grow to heaven and then fell apart. And then those people who came in either had resilience or picked things up and carried them forward.", "Dwarkesh Patel 00:06:29", "In the beginning of the oil industry—when it was just kerosene and used for lighting—why was oil so centralizing? Why was it the case that Standard Oil and Rockefeller controlled so much?", "Daniel Yergin 00:06:43", "People think of John D. Rockefeller and Standard Oil and they go: gasoline . It had nothing to do with gasoline. John D. Rockefeller was a lighting merchant. What they did is that they rolled back the darkness with kerosene, with lighting. Before that, the number one source of lighting was candles and whaling. The whaling industry was delivering lighting. For the first 30 or 40 years of the oil industry it was a lighting business. Then came along this other guy named Thomas Edison . Suddenly you have electric lights and you say, “That's going to be the end of the oil business.” But by the way, over here is Henry Ford and others. You're creating this whole new market in the 20th century for gasoline. In the 19th century gasoline was a waste product. It went for like three cents a gallon.", "Dwarkesh Patel 00:07:34", "One of the things I learned from The Prize , I didn't appreciate before. Before the car was invented, when Edison invented the light bulb, people were saying Standard Oil would go bankrupt because the light bulb was invented.", "Daniel Yergin 00:07:47", "John D. Rockefeller became the richest man in the United States as a merchant of lighting, not as a merchant of mobility.", "Dwarkesh Patel 00:07:55", "In some of the earlier chapters, you mention that Rockefeller was especially interested in controlling the refining business , not the land owning and drilling. A lot of the producer surplus went into refining. Why did the economics shape up such that the producer surplus went to refining?", "Daniel Yergin 00:08:11", "Because that was the control of the market. That was the access to the market. The producers needed John D. Rockefeller. There were a few other people but Rockefeller controlled about 90% of the business. He would either give you a good sweating—drive down prices and force you out of business—or force you to sell to him or amalgamate with him.", "Dwarkesh Patel 00:08:33", "What can we learn about management today from Rockefeller and the way Standard Oil was run?", "Daniel Yergin 00:08:37", "It was the discipline of the business. He created a very disciplined business. They went out to two decimal points. That was before computers or calculators. It was rigorous attention to detail but at scale. It was also boldness and being able to see where you needed to go next and then implement it.", "Dwarkesh Patel 00:09:00", "What did they do with the non-kerosene parts of crude oil in the early history of the business?", "Daniel Yergin 00:09:04", "It was really a waste product. There wasn't much to do with it because it was all about lighting. Today of course, oil is in everything. It's in your furniture, your COVID vaccine. It's everywhere.", "Dwarkesh Patel 00:09:22", "Was the antitrust case against Standard Oil unwarranted? Reading The Prize , I'm thinking these guys were doing a ton of great stuff. As their name implies, they were standardizing oil, logistics, transportation, refining. And their market share was going down. The price of crude was going down. In retrospect, was the antitrust a mistake?", "Daniel Yergin 00:09:42", "A mistake, I don’t know. It is the most famous antitrust case in history and reflected the times because you had these big trusts . Was it a mistake? I don’t know. It broke up these companies and created more independent companies. It provided more room for innovation and for people to develop. It probably led to a stronger industry. Of course the other thing that happened as a result of the breakup of Standard Oil was that these individual parts got valued in the marketplace. Lo and behold, as a result of that John D. Rockefeller as a shareholder actually became three times as rich.", "Dwarkesh Patel 00:10:19", "There were also scientists who came up with a new way of refining gas , right?", "Daniel Yergin 00:10:25", "Exactly. Because things weren't centralized, there was more room for entrepreneurship, experimentation, research, and for people to solve problems that other people said couldn't be solved.", "Dwarkesh Patel 00:10:37", "Going back to management, one thing that stunned me is that the people who ran Standard Oil were initially competing against him. Why did he only recruit the people who were hard-nosed enough to compete against him?", "Daniel Yergin 00:10:51", "He respected his competitors, particularly the hardy ones. Those were the players who said, \"Okay, rather than fight you, I'm going to get on board this ship.\" He brought them in and they all prospered as a result. They gave up and said, “We’re not going to fight you. We’re going to join you.”", "Dwarkesh Patel 00:11:14", "Why was Rockefeller so hated in his time?", "Daniel Yergin 00:11:18", "He became the very epitome of the monopolist. A famous woman journalist, Ida Tarbell , wrote a book about the Standard Oil trust. She said it was a great company, but it always played with loaded dice. He was the very embodiment of it. You had this trust-busting president, Theodore Roosevelt , and this was the most obvious trust.", "Also, like gasoline today, it was the one thing everybody bought. You and I don’t go out and buy steel. But unless you have an electric car, you go to a gasoline station and fill it up. This was the same thing. This was the omnipresent product. Rockefeller's idea was to get scale, drive down the price, and expand the market. But it was a monopoly and we have antitrust laws. There was also suspicion that it wasn't only economic monopoly, but about the political muscle that came with it.", "Dwarkesh Patel 00:12:26", "The thing I'm curious about is, it seems like they really messed up the PR, right? Theodore Roosevelt ran for the presidency on busting. If you mess up the PR so badly that the guy who becomes president runs on breaking up your company, maybe it would have been intrinsically unpopular but it feels like the PR could have been better.", "Daniel Yergin 00:12:44", "“Why does anybody need to know about our private business?” was his notion. “We're a private business. It's nobody's business.” Today, you would have a PR advisor tell him that's not really the right stance to take, but at that time… It probably also came from the arrogance of having created this huge company, running a global company from an office on 26 Broadway. You did have a sense of power.", "Dwarkesh Patel 00:13:15", "Another thing is that he retires early—", "Daniel Yergin 00:13:17", "Let me mention this. I do know that one of his guys who was running the company went to see Theodore Roosevelt and brought him copies of Roosevelt's books, especially bound in leather, thinking he could win over Roosevelt. Didn't do any good.", "Dwarkesh Patel 00:13:31", "How come?", "Daniel Yergin 00:13:32", "Because with Roosevelt, Teddy was the trust buster.", "00:13:37 – World War I & II", "Dwarkesh Patel 00:13:37", "Let's go to World War I and World War II. A couple months ago, I interviewed the biographer of Churchill , Andrew Roberts . As you discuss in your book, he discusses that Churchill was this sort of technological visionary and how that's a side of him that isn't talked about often. Maybe talk a little bit about what Churchill did and how he saw the power of oil.", "Daniel Yergin 00:14:04", "Churchill was the First Lord of the Admiralty. All the naval ships at that time ran on coal, which means you had to have people on board shoveling coal. It took a long time to get the coal on board. If you switched to oil, the ships would be faster. They wouldn't need to take the same time. They wouldn't need to carry the same people.", "So he made the decision—obviously others like Admiral Jackie Fisher were pushing him—to convert the Royal Navy to oil. People were saying this is treacherous because we'll depend upon oil from far away, from Persia, rather than Welsh coal. He said, \"This is the prize of the venture.\" That's where I got my title from. Originally it was going to be called “The Prize of the Venture\" because that's what he said. Then I just made it The Prize .", "During World War I, he promoted another military development. I'm forgetting what it was called initially, but it eventually became known as the tank. He really did constantly push technology. Why? I don't know. He was not educated like that. He was educated in the classic sense. That's why he wrote so well. But he understood technology and that you had to constantly push for advantage.", "Dwarkesh Patel 00:15:37", "World War II is just who can produce the most amount of things. But World War I is especially interesting as a technological war because in the span of four or five years, you go from battlefields with horses to literally the tank being invented during this time. You go from hundreds to thousands of trucks, cars, and planes.", "Daniel Yergin 00:16:01", "It's extraordinary. In 1912, the head of the Italian military said planes were interesting, but of no use in war. The war did begin with cavalry charges . The German military position was based upon the railroad and inflexible. Suddenly you had trucks, motorcycles, tanks, airplanes. A war that began with cavalry ended up with tanks and airplanes and trucks. World War I, in my reading and writing of The Prize , is what really established oil as a strategic commodity. The person who became Britain's Foreign Secretary said that the Allies floated to victory on a sea of oil .", "Dwarkesh Patel 00:16:55", "Even the Germans said, “We would have won the war if it wasn't for the tank,” or the trucks or something like that, right?", "Daniel Yergin 00:17:04", "Exactly. What the Allies had was mobility that the Germans didn't have.", "Dwarkesh Patel 00:17:08", "There’s one thing I worry about with regards to today. If you had a sort of World War III-type conflict, it seems like there's an overhang of new technologies. Before World War I, there's a sort of overhang where we could develop planes and war tanks and so forth if we wanted to. With drones and other sorts of robots today, it feels like if you did have a World War III today it would be fought with very different weapons by the end than at the beginning.", "Daniel Yergin 00:17:39", "People say that the Spanish Civil War in the second half of the 1930s was the dress rehearsal for World War II, where a lot of technologies and techniques of warfare were developed. Sadly if you look at Ukraine today, you see that's happening now. On the one hand it's advanced technologies, information technologies, cyber warfare, and drones in a way that hadn't been conceived before. Hobby drones have become agents of war. Obviously, there’s automation of the battlefield. But it's also a World War II battle in that there's been tank battles. It's a World War I one in that it's called positional warfare , trench warfare . So you have a whole century of warfare there, but it is certainly the beta test for new technologies.", "Dwarkesh Patel 00:18:42", "Let's go forward to World War II. Why wasn't Hitler able to produce more synthetic fuel ? Because it seems like he could have won if he had more synthetic fuel.", "Daniel Yergin 00:18:51", "You would have needed to get to a scale that they could never get to. Synthetic fuel meant making oil out of coal using a chemical process. The other thing is that the Allies bombed the plants as well. When I wrote The Prize , I intended to write one chapter on World War II. I ended up writing five because it was just so amazing. World War II was not an oil war, but there was an oil war within World War II.", "When Hitler invaded Russia, he was not only going for Moscow, he was also going for the oil fields of Baku . When the Japanese bombed Pearl Harbor, Admiral Nimitz , who was the naval commander, said if they'd come back a third time and hit the oil tanks, World War II in the Pacific would have taken another two years. General Rommel in North Africa runs out of oil. He writes his wife, \"Shortage of oil, it's enough to make one weep.\" General Patton's lunge in 1944 for Germany is held back by oil. The US is going after the oil lines that are supplying the Japanese, attacking them to basically drain the oil out of the Japanese war machine.", "There’s one big thing that was a real eye-opener for me. People have heard of kamikaze pilots who would fly their planes into aircraft carriers. One big reason they were doing that was to save fuel so they wouldn't have to fly back.", "Dwarkesh Patel 00:20:23", "I don’t know if “instigated” is the right word, but the Pacific War was instigated because the Japanese needed more oil because of the war in Manchuria . But precisely because of that war, we’d put an embargo on oil .", "Daniel Yergin 00:20:37", "The US put an embargo on them. One of the Japanese admirals said, \"Without the oil, our fleet will become scarecrows.\"", "Dwarkesh Patel 00:20:46", "World War I is when people realized that oil is a strategic resource, but in World War II it's really crucial. I'm curious about when different parts of the world realized how crucial oil is as a strategic resource. Was it after World War I, after World War II?", "Daniel Yergin 00:21:00", "After World War I, it clearly was on the agenda in a way that it hadn't been before. You had governments much more engaged in supporting US companies. By the way, the US was so dominant as a producer. Remember that six out of seven barrels of oil that were used by the Allies during World War II came from the United States. But after World War I, you had these fears of running out. That was one reason the US government supported American companies beginning to go into the Middle East, because governments recognized you needed oil.", "Dwarkesh Patel 00:21:42", "After World War II, in the big picture you have the dominant allied powers. They're trying to figure out what to do with the rest of the world, and they realize oil is such an important resource. Fast forward 30 years after that, you're in a position where you've lost a ton of leverage against the OPEC countries and you're not in a position to control the supply of oil. How did that happen?", "Daniel Yergin 00:22:08", "The US had been this huge supplier, but after World War II we had economic growth, highway systems, and suburbs. Oil demand is going way up, and we outran production. The US becomes an importer of oil in 1946, ’47, ’48. But it’s modest amounts. Then as we go into the late 60s, you have this global economic boom. Japan is suddenly a vibrant economy. Europe has recovered, a vibrant economy. Oil demand is shooting up really rapidly. The markets that were quite amply supplied become very tight. In the United States, people didn't realize that we were becoming the world's largest importer of oil. They just weren't paying attention to that. It was thought there are only limits to what we can do as a country anyway.", "When we finally get to the crisis, the famous oil crisis of 1973 —which probably opened the modern age of energy—what's going on at the same time? There’s this political crisis in the United States called Watergate . The front page of the newspaper is not about tight oil supplies. It's all about what Richard Nixon did in terms of subverting the election and the political process. There was just inattention and that's one of the risks. I think a lot about energy security as an issue. It tends to fall off the table until it hits you in the face.", "Dwarkesh Patel 00:23:49", "When did we realize that there was just a ton of oil in the Middle East?", "Daniel Yergin 00:23:59", "It was after World War II. People had begun to know it, but during World War I  a famous geologist named Everette DeGolyer did a trip to the Middle East on behalf of the U.S. government. He came back and said the center of gravity of world oil is shifting to the Middle East. No one knew how much or anything, but they knew it was a strategic resource. By the way, they didn't want it to fall into the hands of the Russians. That was a concern. Most people don't know that the first post-war crisis with the Soviet Union was actually over Iran, with the Soviet Union making a grab for a part of Iran .", "After World War II, there was this real sense that you've got to secure oil supply because it's such a strategic resource. The Middle East suddenly becomes much more important as a source than anybody thought about. The only place producing oil in the Middle East before then was Persia, Iran. Oil was discovered in 1938 in Kuwait and Saudi Arabia and then got bottled up until after World War II.", "00:25:06 – The Middle East", "Dwarkesh Patel 00:25:06", "When I read in The Prize about what happens after World War II in the Middle East, it's about 200 pages of how initially, the Western companies make these deals with exporting countries. First, it's just incredibly favorable towards the Western companies. But then the exporting countries are like, \"No, we got to do the 50/50 split.\" Then they do the 50/50 split.", "Then just over a couple of decades, what happens is that they just keep asking for more and more concessions: “We want 55%, 60%.” These are the exporting countries I'm talking about. They formed the cartel, OPEC, in 1960. But even before that, they have leverage over these Western companies, in the sense that they can say, “If you don't agree, we'll just nationalize you.”", "Daniel Yergin 00:25:59", "I had a mentor, an economist named Raymond Vernon , who came up with this term, the obsolescing bargain . Let’s say Dwarkesh Oil invests in such and such a country. You put $2 billion in there and it's great and everybody's very happy. Governments change or times change. People forget the risk that you took to do it. They say, “We want a different deal.” That just happens again and again. It happens with all natural resources, with oil, with minerals. It was also the end of colonialism. Countries were becoming independent. Today, if a company makes a deal with a country to go develop oil, the country gets 80% of the profit.", "Dwarkesh Patel 00:26:43", "So if you're one of these western companies, what should you have done? Let's say it's 1950. You know that over time they have obviously the monopoly on violence, so they can nationalize you if they want. What should you have done so that you can basically prevent the outcome that kind of universally happens? If you were in charge of it…", "Daniel Yergin 00:27:05", "You would obviously work really hard on government relations. But the countries are generally poor. They say, “We just want our share of it. It's our resource.” Over time, as a company, you have access to the market. You have the refineries. You have the tankers. It isn't like they can just take it over. It takes time to train your population to develop your indigenous oil people who can run it. But if you look back on it, I think you just say that there was an inevitability to it, which also had to do with the consolidation of nation states.", "Dwarkesh Patel 00:27:51", "Why didn't the US government—or the UK government or so on—do more to be like, \"OK, you guys are companies. You guys can't negotiate that hard. But we really care about making sure that America has a lot of oil.", "Daniel Yergin 00:28:03", "I think the government did back them up. Remember the British owned a big share of British Petroleum, now BP , until the late 1980s. The British government was in there, but then you had the nationalization of what was then called Anglo-Persian , Anglo-Iranian oil, which became BP. I think it was inevitable. The governments did try and support, but there were limits to what they could do. The question of access and of maintaining the supplies, then and now, remains crucial. You have the US Navy today trying to push back on the Houthis in Yemen, who are attacking oil tankers .", "Dwarkesh Patel 00:29:05", "Thinking purely from the perspective of the companies, if you were in charge of one of the majors, would you have refused to train domestic workers in the expedition?", "Daniel Yergin 00:29:14", "No, I think that was part of your way of trying to embed yourself there, to bring them in so that you were not this isolated island.", "If you look at Venezuela, they nationalized their oil operations . But by that point, they had people who were very well trained at running refineries, at drilling, and at finding oil. They still carried some of that DNA with them in their operations for quite a number of years, until the complete nationalization and Chávez came to power.", "Dwarkesh Patel 00:29:54", "Was the continuation of antitrust in oil after World War II a mistake? Often when I'm reading your book, what happens is that oil producing countries can negotiate together, obviously after OPEC they're literally a cartel, but then these different Western companies can't.", "Daniel Yergin 00:30:11", "In 1973, the US government finally did give an antitrust waiver to the companies to try and have a united front in the negotiations. But remember, it got all tied up with geopolitics. It got tied up with Arab-Israeli wars and so forth. It wasn't just about oil. There were other things going on and you had the use of what was called the “ oil weapon ”.", "Dwarkesh Patel 00:30:31", "Let's talk about the oil crisis in 1973. One thing I was surprised to learn is that the supply of oil didn't actually go down that much. Global supply declined by 15% or something. Why did it have such a huge effect?", "Daniel Yergin 00:30:46", "It was completely unprecedented, unexpected. It created a panic. It was also right towards the final months of the Nixon administration. So it got all tangled up. Then we had the system of price controls and allocation controls, which made it much harder for the market to adapt. One of the lessons to me from The Prize is actually enabling markets to adjust. Because when governments try to control them and make decisions and allocate—and some states want to do that today—it accentuates shortages and disruptions and price spikes. The tendency is to want to control them.", "There was just far less knowledge about the market, where supplies were. There was no coordination. Now there's much greater knowledge and transparency. You had what were called integrated companies. The same company that produced the oil in the Middle East, put it on their tankers and sent it to their refineries in the US or Europe, to their gas stations. That system is gone. When you see the names of the big oil companies on a gas station—if you're not driving an electric car—and you pull in, odds are that it's not owned by that company. It's a franchise.", "Dwarkesh Patel 00:32:18", "I see. That's another thing I was confused about. I wasn't sure how, before spot and futures exchanges for oil, this happens after the oil crisis in the late seventies and eighties. I didn't really understand how oil is getting priced and how different countries are able to have such a… Traditionally, the price is set by supply and demand.", "Daniel Yergin 00:32:43", "OPEC was setting prices, but then the market responds. Demand goes down. In fact, that's exactly what OPEC did with its prices. It created incredible incentive to bring on new supplies and to be more efficient and undercut. It ended up undercutting its own price. Here’s one of the things I really carried away from The Prize . There are hundreds of really interesting characters in the book, but the two most important characters, one is named Supply and one is named Demand. That's something that you've got to keep in mind with all the other drama that goes on.", "Dwarkesh Patel 00:33:22", "The interesting thing from the book is that oil did seem to be, at least until very recently, pretty different in that with other sorts of commodities you have strong elasticities of supply. If lithium gets more expensive, you'll figure out substitutes for lithium and it's not that big a deal.", "Daniel Yergin 00:33:41", "Or find more lithium.", "Dwarkesh Patel 00:33:42", "Yeah. Whereas, at least during the oil crises, it really felt like the entire world economy was just on hold.", "Daniel Yergin 00:33:48", "That goes back to the centrality of oil as a strategic commodity. Japan had basically just switched its economy from coal to oil. Europe was switching from coal to oil. It was just such a high dependence. Markets did eventually respond. You had a price collapse in 1986 , which was the result of that. In the early 1980s, people were saying, “Oh, the price of oil is going to go to $200 or $300 a barrel,” what it would be in today’s dollars. It collapsed. So markets do respond. It just took longer for that to happen.", "Dwarkesh Patel 00:35:21", "Let’s say you were in charge of one of these OPEC countries in 1973. You realize that you have a tremendous amount of leverage in the short term on the world economy because everything's at a standstill. Over the long run, substitutes will be developed or more oil will come online and so forth. But you have this unique moment of leverage where people really need your oil. What would you have done? Would you have said \"Give me a seat on the UN Security Council and I'll open up the gushers\"?", "Daniel Yergin 00:35:50", "I think these countries did assert their political power. Certainly it was a very different Iran, but the Shah of Iran until he got sick and fell, was asserting, \"We're players in the world economy.\" Saudi Arabia had been a country that people didn't think much about in the US. Suddenly Saudi Arabia became really important. You had this huge flow of money that went into these economies, what were called petrodollars . That made them a whole other source of influence.", "In the book I talk about Richard Nixon's vice president, Spiro Agnew , who had to quit. He actually resigned because he was corrupt and even had people paying for his groceries. A couple of years later, he shows up in Saudi Arabia trying to do business as a consultant. People went there. That's where the money is. Today, if you're a private equity fund, for many of them their number one place to go to raise money is not necessarily the pension funds of various states in the US. These private equity funds or venture capital funds are going to the Gulf countries again because that's where the money is.", "Dwarkesh Patel 00:37:11", "Does this happen with you? You're the world's expert on energy. I'm sure your expertise is worth a lot to them.", "Daniel Yergin 00:37:20", "I certainly speak in that part of the world. Sometimes I joke that the best thing about the energy business, if you're a curious person, is that it's global. In some ways it's the worst thing because it involves so much travel and so much jet lag. But I certainly will spend time there. Of course, for me it's a constant process of learning. You have to show up to get the perspectives and understand what's in people's minds.", "Dwarkesh Patel 00:37:51", "Of the oil producing countries that got a tremendous gush of revenues in the 70s because the price of oil jumped up so high, which of them used it best? Because if you look at a bunch of them… Obviously the Soviet Union didn't do enough to make sure it didn't fall when oil prices collapsed. Iran and Iraq use the money to go to war. Saudi Arabia uses it on welfare.", "Daniel Yergin 00:38:12", "The country that has done the best was not a big player then, the United Arab Emirates and Abu Dhabi. They built a sovereign wealth fund that's probably worth a trillion dollars. They diversified their economy. A couple of years ago when I looked at it, more than half their GDP was no longer oil. That's what Saudi Arabia is trying to do today to diversify their economies, and make them not just dependent upon the price of oil. Because you don't know where technology is and where the markets are going to be.", "The Shah of Iran, who fell from power in 1979 , used to say that he wanted to save the oil for his grandchildren. Now the grandchildren are in charge in many of those countries.", "Dwarkesh Patel 00:39:05", "Not his grandchildren.", "Daniel Yergin 00:39:06", "Yeah, not his. His grandchildren are somewhere else. That’s right. But on the Arab side of the Gulf, they're focused on continuing that revenue stream, but needing oil in order to diversify their economies away from oil. Russia is still, at the end of the day, heavily dependent upon oil and gas. It distorts their economy.", "Dwarkesh Patel 00:39:32", "The Middle East obviously today has a lot of crazy ideas. A lot of the worst sort of political and religious pathologies in the world exist there. Is it just a coincidence that this is where the oil happened to be? Or did the oil in some way enable or exacerbate this radical tendency?", "Daniel Yergin 00:39:59", "That's a very good question. I don't have a good answer to that. There's oil, but there's also religion. There's also the Arab-Israeli conflict. There's Iran, which is really in some ways a neo-colonial power in the Middle East. If you look at its proxies , it has probably 250,000 troops in other countries who belong to various militias and so forth. It's interesting. Sometimes when I'm in the Arab Gulf countries, they don't refer to Iran. They refer to the Persians, in the sense that Persia wants to dominate the Middle East as it did in centuries past.", "Dwarkesh Patel 00:40:54", "They're imagining Xerxes ' armies. Yeah. We were talking about sovereign wealth funds. I think this is a very interesting aspect of the modern world. some of the biggest investment vehicles in the world are the offshoots of oil proceeds over the last decades.", "Daniel Yergin 00:41:13", "If you look at Norway , or if you look at the Middle East, they are offshoots of oil. Singapore's , of course, is the offshoot of hard work.", "Dwarkesh Patel 00:41:23", "Let’s say you are in charge of an oil producing country's sovereign wealth fund. It's a trillion dollars or something, which per capita is actually not that much. If you're Saudi Arabia, you’ve got a trillion dollar sovereign wealth fund . The population is 30-40 million people. Per capita, it's like $20-30,000. It's not that much per capita. Also, you know that the majority of your GDP is not going to be sustainable over the long run. You're in charge of it. What do you do tomorrow? Is it important that you use that money domestically? Or would you just put it to work globally?", "Daniel Yergin 00:41:59", "It's very interesting in Saudi Arabia. It's a question whether you use that money as a national development bank, which is one thing. It’s quite another thing to use it as a basically global diversification investment vehicle. In Saudi Arabia, what's called the PIF , the Public Investment Fund, is doing both. In Abu Dhabi they've differentiated the roles of these different funds . As to what is a global fund, the argument is the same argument that you would get from a financial advisor in the US which is: diversify.", "Dwarkesh Patel 00:42:41", "If you're just purely thinking of it as an investment vehicle, then maybe the rates of return aren't that high domestically.", "Daniel Yergin 00:42:49", "Yeah but you do want to diversify your economy. You want to bring in investment. There's also another critical need: you need to create jobs. The oil industry is a capital intensive business. It's not a labor intensive business. You need to bring in other kinds of industries as well. If you look at your population, maybe 60 percent of your population roughly is under the age of 30, something like that. So you have a real job creation need.", "Dwarkesh Patel 00:43:23", "Oil famously makes rich countries richer and poor countries poorer when they discover it. Let's say you're a country that just discovered oil today, but it's got a really low GDP per capita. Maybe you're already advising such countries. If you were advising them, what is it that you tell them to do to avoid getting Dutch disease themselves?", "Daniel Yergin 00:43:44", "So we need to explain the Dutch disease, which means that you create an inflationary economy and make businesses uncompetitive. That's the heart of the Dutch disease. Of course, that concept was invented for the Dutch. It happened when the Netherlands became a big producer of natural gas . So it is a cautionary tale. You want to, as they say, sterilize some of the money that comes in. You put it into a sovereign wealth fund, invest it overseas. Then you want to put money into education and health and those basic human needs. You want to turn financial capital into human capital.", "Dwarkesh Patel 00:44:24", "Why is it so hard to set up a stable oil rentier state? Theoretically it seems, you've got trillions of dollars of wealth right under your feet…", "Daniel Yergin 00:44:34", "Some have, some have not…", "Dwarkesh Patel 00:44:39", "But if you look at the examples, so many just go off kilter. You have Iran, Venezuela, Libya, and so forth. Very few of them are stable, \"We have a ton of money,\" Saudi Arabia-type states.", "Daniel Yergin 00:44:55", "If you have that huge inflow of money, it really can create a lot of distortions. Look back at the events that led to the overthrow of the Shah of Iran. Things don't happen for one reason or another. He probably had cancer for two years and was losing it. He also had been so arrogant that he alienated people and he had his secret police and so forth. Then this pell-mell rush of overspending created inflation and dislocated the economy. It's a good question for study, to look at on a comparative basis what worked and didn't work.", "It isn't just oil and it isn't just money. There are other things that are involved as well. Clearly there was a huge religious reaction, led by the Ayatollah Khomeini against modernization, against the role of women. The Shah was saying women should get educated and play major roles in their economy. That was not something that the very conservative clerics could stand. It isn't just about oil or just about money. It's part of a larger mix.", "Dwarkesh Patel 00:46:15", "Why is Aramco so much better run than other basically nationalized oil companies?", "Daniel Yergin 00:46:23", "There are others that are well run, but Aramco is a very well run company. As you described before, they rather smoothly did their transition and retained their people who are highly trained. If you go to Aramco, you meet people who have PhDs from MIT or Stanford or University of Texas. They have a very well trained global workforce and a very high standard. They drew initially upon the cultures of the companies that were eventually nationalized out of the business, but the people were trained.", "00:47:04 – Yergin’s conversations with Putin & Modi", "Dwarkesh Patel 00:47:04", "I'm curious if there are any stories you can share. I imagine since you wrote The Prize , world leaders are inviting you to meet them and give advice. I don't know how many stories you can tell from these conversations. Is there someone who's really struck you as having their head on straight on these issues? You've been all around the world. I'm just curious if you have some crazy stories.", "Daniel Yergin 00:47:29", "One is in The New Map and you and I have talked about it. It’s the meeting with Prime Minister Modi in India. India was really at a crucial point whether to get out of the Permit Raj , where government really tightly controlled the economy. I discussed this in a book you probably don’t know I did called The Commanding Heights .", "I describe a scene in the book where he brought his senior advisors together to argue about whether you allow market forces to work or not. It was a very heated discussion and then I just remember his remarks: \"We need new thinking.\" Those simple words have pointed to how India has become so much of a bigger force in the world economy today, as opposed to being a sort of enclosed and closed economy.", "Dwarkesh Patel 00:48:25", "So in 1973 you have the oil crisis. Before that, if you look at the sort–", "Daniel Yergin 00:48:31", "We’re back to 1973, I thought we were already in 2024.", "Dwarkesh Patel 00:48:36", "Yeah we're moving around. If you look at the rates of economic growth or rates of total factor productivity growth before that date, it's pretty high for a long time. It's 2% total factor productivity growth before the 1970s. Afterwards it's like less than 1 percent in the US. How much of that is tied to the energy crisis, or was that just a coincidence?", "Daniel Yergin 00:49:05", "I don't have expertise on that. But I know people like Ben Bernanke , the former head of the Fed , have actually studied that crisis and why that slowdown occurred. The US went from being on a very strong growth trajectory to what at that time was the deepest recession since the Great Depression. Of course, we've had deeper recessions since then. It took a decade to dig out of that hole.", "Dwarkesh Patel 00:49:38", "But then the rates of economic growth didn't go back to…", "Daniel Yergin 00:49:43", "Also with the US, as your economy becomes bigger you don't grow at the same rate. You're growing off a much larger base.", "Dwarkesh Patel 00:49:52", "Here’s one of the things in Silicon Valley that techno-optimistic people really talk about. What if you had ridiculously cheap energy because of solar and other things? Would the economy just explode because the economy is bottlenecked by the price of energy? Would it not be a big deal because there are other bottlenecks?", "Daniel Yergin 00:50:13", "We’ll come to it in terms of AI and electricity. I need to reflect on that. But it doesn't seem to me that the cost of energy is a general constraint on the economy. It is probably somewhat of a constraint in California because it has the most expensive energy in the country. But that's because of state regulation. Big Tech wasn't born in 1973. It's much more recently that it's happened. Like the oil industry, it's happened pretty quickly actually in this space of time.", "When you have price spikes, when you have disruptions, that's when you see the cost and those risks are there. Although when you get into a presidential election, the incumbents always worry about the price of gasoline. It's so sensitive, because people pay it. It's the one price you pay all the time and you see it. I need to think about it more, but I don't think it's a huge constraint.", "Nuclear energy way back in the 1950s was supposed to be so cheap that you wouldn't meter it. \"Too cheap to meter\" was the phrase. Now there's fusion , which seemed to be 50 years away. It’s now maybe 10 years away. Technology will change things. Electricity may be a constraint on the growth of AI in the near and medium term, but that's a very specific problem.", "Dwarkesh Patel 00:52:04", "There's been different projections made about how much energy will be required for AI. The big thing is they need these big training runs, and they keep getting bigger and bigger over time.", "Daniel Yergin 00:52:13", "There's one projection that 10% of US electricity by 2030, which is half a decade away, will be going to data centers. It's about 4% today. What a change it's been in the last year and a half in terms of thinking about data centers, AI, and electricity. It wasn't on the agenda a year and a half ago. I remember I was at a conference with electric power utility CEOs about a year ago. They were talking about growth, being surprised by it. Then we have our conference in Houston in March. By then people had woken up to the fact that you're talking about going from 4% of US electricity to 10%. US electricity hasn't grown very much over the last 10 years. It's grown at 0.35% a year. Now you're looking at maybe 2% annual growth or more. That adds up very quickly. I was very struck. I did a discussion with Bill Gates at our CERAWeek conference in March. He said we used to talk about data centers as 20,000 CPUs. Now we talk about them as 300 megawatt data centers.", "The sense is that you have electric cars and energy transition demand. Then you're bringing back chip manufacturers and smart manufacturing to the US. That's electricity demand. Then you have AI and data centers. Suddenly this industry that had been very flat is now looking at growth. How you are going to meet the growth is very much on the agenda right now. Data centers are looking at where they can position themselves so that we have access to the electricity that they need: reliable 24-hour electricity. Now there's energy security in terms of oil and gas. Actually it's also energy security in terms of electricity. There's your potential constraint on economic activity.", "Some will say the answer to that is innovation. Chips will become less electricity dependent or data centers will operate differently. So the demand will not grow as much. There are those who say that will happen, but it hasn't happened yet. Others are saying, “How are we going to meet that demand?” AI is going to demand a lot more electricity than we had thought about a year or a year and a half ago.", "Dwarkesh Patel 00:54:49", "It's potentially even worse than the 10% number implies, because it's not widely distributed like households would be. In many cases, they have to be one gigawatt to one specific campus or location.", "Daniel Yergin 00:55:02", "Right. You look at developing data centers. They'll take all of the electricity generated by a nuclear power plant. If they do that, that means you've taken that baseload nuclear power off the grid. There's a scramble to understand this. Then there are the issues that we have in our country, which is that you can't get things permitted. It takes so long. You have supply chain problems. You have a workforce that has aged out. It's said that to be a fully-trained lineman , you need seven years. You can see that this area of electricity, pardon me for saying it, is hot.", "Dwarkesh Patel 00:55:46", "The thing I find wild when I'm reading The Prize is just how much economic development is ultimately contingent on the laws of physics. Suppose that fossilization happened in a different way and then oil didn't form. Let's say coal didn't form either. Then it's hard to imagine how society goes from like water wheels to solar power.", "Daniel Yergin 00:56:10", "That's right. What you really realize is that hydrocarbons have been the fuel, the engine really, of economic development. People would still be in sailboats. They would still spend six weeks crossing the Atlantic. It would take weeks to go from one place to another. That's a very interesting question, to imagine our world without them.", "Dwarkesh Patel 00:56:38", "It's also interesting that the tech trees play such that just when you need more runway, you get the next energy transition and then you get a little more runway. It’s just weird that it's… or maybe we would have gotten it anyway.", "Daniel Yergin 00:56:50", "You were going to run out of whales. I love that this professor at Yale , kind of a consultant, did this experiment. He needed some extra money and he did some studies that showed that actually this stuff called rock oil, you could turn it into a lighting fuel fluid. I love the risk taking of it. But it's hard to imagine… we wouldn't be where we are. We wouldn't have the world. Today, it wouldn't be a world of eight billion people were it not for it. Obviously, there's going to be change. I'd say right now, the incentives for innovation are there. That's why we may see a runway of what's going to come, but it may really come from the side.", "Dwarkesh Patel 00:57:36", "And something else that the kerosene is the fact that oil for the first 50 years is used for only lighting. Another thing that's interesting about that is people are asking now about these AI models. You can literally get a million tokens, like many books length of content, out of these models for 15 cents. This is one question people are asking. Let's say you did a hundred billion dollars worth of tokens, what does that look like? What does an industrial scale use of intelligence look like?", "With crude oil in the beginning you're producing a certain amount, but you had a glut because you're only using it for lighting. You then discover this industrial scale use of this technology, which is obviously motorized transportation. That’s one question you can have for AI. Currently what we're using these models for is research and chat and whatever. That’s like the kerosene. What would the equivalent of billions of vehicles look like for AI?", "Daniel Yergin 00:58:34", "That's a question that I'd like to ask you. It is a sense that we are at the beginning of something new. I remember a political leader in Central Asia saying, “AI is going to be the true source of power in the future.”", "Dwarkesh Patel 00:59:51", "How mad are the frackers that they basically solved America's main geopolitical problem, but they were so successful that they've competed away their profits?", "Daniel Yergin 01:00:03", "That was a period up till about 2017, when it was growth for growth's sake. Then basically the financial community said, \"Hey guys, the party's over. I'm not going to reward you for growth. I'm going to reward you for sending money back on my investment.\" So in a sense, shale is almost a mature industry. I think people don't understand how transformative it's been. The US was the world's largest importer of oil. We were only producing 5 million barrels a day of oil in 2008. Now we're more than 13.2 million barrels a day. The US is energy independent. People thought it was a big joke. It could never be energy independent. Every president said, \"We want energy independence.\" Late night comedians could make fun of it. Actually, it's happened and it's had huge economic significance. Back in like 2008, the US was spending something like $400 billion a year to import oil. Now we basically spend nothing to import oil.", "It's been geopolitically very significant. That's been a learning experience for the Biden administration. It turns out that if it wasn't for shale gas made into what's called LNG , liquefied natural gas, shipped to Europe, Putin could well have shattered the coalition supporting Ukraine by using the energy weapon with, not oil, but gas. Suddenly you had European politicians coming to the US to try and secure supplies of LNG because they were so worried about it. It really is a revolution that is playing out today. China imports 75% of its oil. It wishes it was in our position.", "Dwarkesh Patel 01:02:02", "We're energy independent, but how far are we from a scenario where our allies, most notably Japan, are also energy independent?", "Daniel Yergin 01:02:11", "Very, very, very far.", "Dwarkesh Patel 01:02:14", "But including our exports?", "Daniel Yergin 01:02:16", "That's why when the Japanese prime minister was here for a state visit a few months ago, they were expressing great alarm about future LNG exports . For them, being able to import energy from the US is very critical to their energy security. Where else are they going to get their LNG? They'll get some from the Middle East, some from Australia, but they'll be pushed back to getting it from Vladimir Putin. For them, US energy exports, US shale, has become part of their energy security.", "I never thought of it quite that way, but if you think about what the Japanese are saying, that's really what their message is. I did an event with the Japanese prime minister in the springtime. That came through very clearly. For them, US exports are part of the security relationship. US LNG is now part of the arsenal of NATO. It's really different. We're talking about the geopolitical significance of US shale.", "No one would be happier to see a ban on US shale production than Vladimir Putin. I have firsthand sense of that. In 2013, before he annexed Crimea, I was at this conference, which was his version of a global economic conference. They said I could ask the first question. It was going to be something we were talking about before, overdependence on oil and gas revenues. I mentioned the word “shale” and he erupted and said, “It's barbaric, it's terrible.” He got really angry in front of 3000 people. It's rather uncomfortable in that position.", "I realized there were two reasons. One, he was worried about shale gas competing with Russian gas. Two, he saw that the shale revolution would augment the position and influence of the US because the US would no longer be energy dependent. He was very prescient. He was right about both of them. When he invaded Crimea, I don’t think he never imagined that if he cut off the gas to Europe, that Europe could survive. Europe survived.", "01:04:36 – Writing through stories", "Dwarkesh Patel 01:04:36", "The Prize especially, but all your books are narratively driven. You have a detailed understanding of people and events and so forth compared to somebody who's just like, \"Here's how many barrels are produced in year X. Here's how many barrels are produced in year Y.\" When you're in these conversations, or you're trying to think about the future of energy, do you feel like you really need to know how Drake was thinking about the drill well and…?", "Daniel Yergin 01:05:11", "Yeah in one way, I see myself as a storyteller. I like narrative. I think that's the best way to communicate. I like writing about people and not just about abstractions. It's funny. When I was writing The Prize or writing these books, I almost see it like a movie when I'm writing. I see what's happening and that makes it more vivid for me.", "I also think that there are more and more things you're competing with if you're a writer. You're competing with TikTok, YouTube and everything", "Dwarkesh Patel 01:05:52", "Podcasts.", "Daniel Yergin 01:05:53", "Podcasts. So you've got to draw people in and people love stories. I started writing when I was a child. My father had an old typewriter. He'd been a newspaper reporter and I would hunt and peck and just write stories. In high school, I was student body president but I was also editor of the literary magazine. When I was an undergraduate at Yale, I started a magazine called The New Journal which was narrative journalism. I learned a lot of my writing doing that. I learned a lot of my writing writing magazine articles, how to tell a story. I really love shaping a story. I love finding a character. I love finding the great quote that just illuminates everything you're trying to do. I love not boring people.", "Dwarkesh Patel 01:06:53", "When you were writing The Prize , it's a seven-year process. There's the endurance, but there's also the sense that you have to have faith that at the end of this–", "Daniel Yergin 01:07:04", "You're making a deal with yourself. You're making a deal that what you write in year four, you're not going to totally rewrite in year seven because otherwise you'll never get it done. The odd thing is I started a business the same year I started The Prize . I was living entrepreneurship. People, when they go back and write history, they know the outcome. So sometimes they think everybody had all the information, all the time, and knew the outcome. Of course, you never have all the information. You certainly don't have all the time. You surely don't know the outcome.", "That sense of contingency, which is such a part of human history, I tried to capture. That is one of the things that made The Prize , The New Map , and The Quest distinctive. The Quest , the middle book, was a question. Where the hell did the modern solar and wind industry come from anyway? It's entrepreneurs. I have been an entrepreneur, I have a feeling for it. You're an entrepreneur in terms of what you're doing with podcasts. You sort of invent it as you go along. I tried to capture that. At the same time, I love writing narrative.", "Dwarkesh Patel 01:08:30", "Here’s something I'm curious about. Let's say you meet another analyst who doesn't have a vivid sense of narrative history, but just knows the facts and figures. What is it that they're missing? What kinds of understanding do they often lack when you talk to them?", "Daniel Yergin 01:08:45", "I will have great respect for them. I also love reading the monthly energy review from the Department of Energy, which is only statistics, or the Statistical Energy Review . I love it. But what you may miss is the contingency: the human agency, the decisions that went onto things, the right decisions that were made, the mistakes and the things that you missed or were wrong about. It's the texture. There is a tendency to think that things are inevitable, but you know that the world can change from one day to the next. That's what happened on December 7th, 1941, September 1, 1939. It could happen any day in the Middle East right now. You could go from one day to the next and it's a different world.", "Dwarkesh Patel 01:09:53", "Just reading it, you can tell. It's hard to understand many of the things if you don't have an understanding of other things. Arab nationalism forced the Saudis to support the embargo. Why did Egypt launch an attack? Because they wanted a ceasefire to be in a different place, but they actually wanted to end the war… There's so many different things like that.", "Daniel Yergin 01:10:12", "That’s right. You don't understand why these things happened. You just look at the numbers, but why did it happen? Part of it is, through narrative explaining why it happened.", "01:10:26 – The renewable energy transition", "Dwarkesh Patel 01:10:26", "Let's talk about solar and renewables. With oil, you have a commodity which is a flow. You can cut it off and you can turn it back on again. It gives the person who's producing it a lot of leverage. Whereas with wind and solar, if you're the people producing it, it's just a capital stock. How does that change the geopolitical situation and the kind of leverage that the producer might have?", "Daniel Yergin 01:10:51", "It's a question of scale. What I carried away, the basic premise of energy security goes back to Churchill. He said that safety lies in variety and variety alone, diversification. Wind and solar give you diversification. Electric vehicles diversify your fleet. Those are all there.", "For China, wind and solar, electric cars, is very much a strategic issue because they see the vulnerability of importing 75% of their oil, much of it coming through the South China Sea. They know the story of what happened with World War II with Japan. For them, the shift to electric cars is less about air pollution and more about energy security. It's also about knowing that they couldn't compete in the global market with gasoline powered cars, but they can with electric cars. Those are the strategic things.", "Wind and solar give you a more diversified system. Until you have batteries that can really deliver the storage, you have the intermittency problem . You take California today. People think wind and solar is advanced. It's true. They are 25% of electric generation in California, but 43% of electric generation comes from natural gas. And that gets back to the data centers. You're going to need to bolster your electricity power system. How much can you do with batteries and how much can you do with natural gas?", "Wind and solar are also stories about entrepreneurship. In The Quest I asked myself, where did the wind and solar industries come from? The solar industry came from two émigrés who had left Europe, one of whom had driven his car out of Hungary in the 1956 revolution. In 1969, he's a chemist working for the US government. He and his partner decided to go in the solar business. That became the first solar company. They started in 1973. With the wind business, I like to say the modern wind business is the result of the marriage between California tax credits and the sturdy Danish agricultural industry. It was driven by tax credits, but they needed to find wind power machines that could stand up when the wind blew in the Tehachapi Pass .", "It took about 30 years for both those industries to become competitive. It only happened around 2010 that they actually became competitive. Now, of course, they're very competitive but then guess what? Now,they're all tied up. Renewables are also now tied up in geopolitics and, in what I call The New Map , the movement to the great power competition. The US just put 100% tariffs on Chinese electric cars, 25% tariffs on Chinese storage batteries. We recently had this bill, the Inflation Reduction Act . It's huge, a trillion dollars the Treasury estimates when it's done. It's about climate and renewables, but it's also about competing with China.", "Dwarkesh Patel 01:14:27", "Speaking of solar deployment, I think solar deployment is on an annualized $500 billion budget. That's the yearly amount that we're investing in deploying it. Is there anything—when you look through the history of The Prize or the history of energy—comparable to this scale of deployment? Maybe initially, you could say electrification? Or is this just an unprecedented scale?", "Daniel Yergin 01:14:56", "I'd have to think about it. It's happening fast. As I say, these guys started the solar business in 1973. It's now taken off. It's also interesting that what really gave the boost to the solar industry is German feed-in tariffs, which provided the incentive for the Chinese to dominate. Because they dominate the business.", "Right now, wind is about 10% of US electricity. Solar is about 3.5%, but solar is going to grow. It certainly will grow very fast. I just heard this when I was at this utility commissioner's conference.  There’s real tension between states and localities. The states want to push it, but localities don't want solar or don't want wind.", "Dwarkesh Patel 01:15:50", "We're in Nantucket and I saw a couple signs around like, “No More Wind.”", "Daniel Yergin 01:15:55", "They just had a thing where one of the blades of one of the big wind turbines fell off and washed up on the beach. That has now created some really huge consternation and suddenly reopened the discussion.", "You need supply chains. Wind and solar are a little bit different, of course. If you want to start a new offshore wind project in the US, you can order your cables but you won't get them until 2029 or 2030 because of the supply chain issues. Solar is different. But of course, solar is so dominated by China.", "Dwarkesh Patel 01:16:34", "Oil companies are investing a lot in renewables. Is there a bunch of skill transfer here that actually means that these oil companies will be really good at deploying solar or something?", "Daniel Yergin 01:16:45", "There's a difference among some companies. Some companies say yes. They look at offshore wind and say, “We're in the offshore oil business, we can do offshore wind.” You see that in Europe where Equinor , which is the Norwegian company, or BP, or Shell, or Total, are big in offshore wind. They say, “We have skills in that.” Solar's a little different.", "Exxon is now going into mining lithium, thinking that they can use skills that they use for that. But the US major companies say basically, “We do molecules, we don't do electrons.” That's where the difference is. The European companies say, “We can do all of it.” The Americans say, “We have no comparative advantage in electrons.” But there's a lot of interest in hydrogen because that's another molecule and to a degree, hydrogen can substitute for natural gas for instance. That's where a lot of investment is happening but it's very early. Again, sometimes people forget about the energy business. Its scale is so big as to what the requirements are.", "Dwarkesh Patel 01:18:00", "Yeah, but also it's surprisingly small. It's a fraction of GDP. Oil is 3% of GDP. Obviously the entire world depends on it, but you wouldn't see that in the GDP numbers.", "Daniel Yergin 01:18:10", "It used to be a much bigger share of the stock market, Dow Jones. It's also a smaller share. It's still the strategic commodity, but there are a lot of other things that go into it. Now, if you look at what the Department of Commerce uses, there are different categories of jobs. Altogether, they'll say that there are about 12 million people in the US whose jobs are connected to the oil and gas industry.", "Dwarkesh Patel 01:18:42", "I'm curious about how you imagine the demand elasticity for oil changing in the future. In the past, you're not going to stop going to work because oil is 10% more expensive, right? With the Arab oil embargo, prices went up like 300% even though supply only went down 15%. But now, if oil goes up in price, you can Zoom or video conference or something. With fracking also you can increase supply if you want to. Because of these new flexibilities we have, is there going to be a lot more elasticity in demand?", "And also, maybe the main thing  with AI and compute is that you have this sort of thing where you can just dump arbitrary amounts of energy into this and it gets better. Currently there's nothing where if you just keep dumping more energy into it, there's a huge elasticity of demand.", "Daniel Yergin 01:19:47", "I think you would know with the podcasts you've done how AI is really gonna change everything. That is the expectation now, that it's gonna change everything including energy. Then you have $6 billion of venture capital money that has gone into fusion. There's a lot there that can change. My own view is that the energy transition is not going to happen because of price. It's going to happen because of policy and technology. I think that's what's driving it. I have the view that people have had too simple notions of how the energy transition will work.", "That's one of the things in The New Map . If people read one part of it, read the section on energy transition. It tells you that what we're talking about today is not anything like any other energy transition. Every other energy transition we've had has been energy addition. Oil discovered in 1859 overtakes coal. Coal is the world's number one energy source in the 1960s. Last year, the world used more coal than it's ever used, three times as much as the 1960s. Now the idea is, can you change everything literally in 25 years?", "Some of that thinking was developed during COVID, when demand went down and price collapsed. Part of it is people worrying about energy security. I was just reading last week the budget message from the finance minister in India. She talked about energy security and how they have to maintain economic growth. It's very important to do that and energy security as well as energy transition. So it's a different balance. There's a difference between the North and South. Then there's the constraints on minerals because as you make an energy transition, what people talk about, it's more mineral intensive. An electric car uses two and a half times more copper than a conventional car.", "We did the study and said, “Okay, let's take the 2050 goals. And if you want to achieve them, copper supply has to double by about 2035.” What's the chance of doing that? It takes 20 years to open a new mine in the US. We just did a study. It takes 29 years to open a new mine. Changing a $109 trillion world economy… it's going to change. You said the development of solar is going to be really important. But things are not going to move in a straight line. We are in an energy transition, but it's going to be a longer one. Here we are, as you mentioned, in Nantucket, which was a key part of the energy transition because it was a source of lighting in the 19th century from whaling.", "Dwarkesh Patel 01:22:45", "It was like in the first chapter of Moby Dick .", "Daniel Yergin 01:22:47", "Exactly and then it came to an end. It came to an end because of the electric light. Things are not going to stand still. The most important thing are the technologies that you can see coming or the ones that come from left field like fracking or grasping what AI is going to mean for how our economies work. But I think you made a very important point and that was the discovery in COVID. You don't have to travel, you can do it by electrons.", "Dwarkesh Patel 01:23:19", "Here’s one of the final questions I wanted to ask you. If somebody were to write a definitive history for another subject that's not energy—you don't have to personally write it, you can just delegate it to somebody else to do it and they'll do a good job—is there a topic which you feel could make for another thousand-page fascinating history of the world?", "Daniel Yergin 01:23:43", "My father had worked at Warner Brothers for a time. I was always interested in the movie and entertainment business and how that developed. A big epic story of that. I just think that's so interesting. One of the things that is fun when you're writing this is when you have these oversized personalities. There may be obnoxious people whom you would hate to meet in person, but are very interesting to write about. So you look for an industry… Here's something nobody's ever thought about: the history of the internet. No, I'm just joking.", "Dwarkesh Patel 01:24:26", "But I don't know if somebody has written a modern, definitive history of the internet.", "Daniel Yergin 01:24:28", "The one thing I've learned from doing these books is the 3x rule. However hard you think it's going to be, it's going to be at least three times as hard to do. I started off with really unrealistic expectations on The Prize , but I think the thing that kept me going was just how great the stories are and how important the stories were.", "Dwarkesh Patel 01:24:54", "I've heard this from multiple historians who have written similar definitive books about their subject. I think Caro said, \"I'm going to write this over the summer and then we'll use the book deal to go on vacation afterwards.\" I interviewed Richard Rhodes , the author of The Making of the Atomic Bomb . It’s a similar story there, obviously it took longer.", "Daniel Yergin 01:25:16", "I used the advance for The Prize to actually capitalize the company we started, which created an incentive to finish The Prize .", "Dwarkesh Patel 01:25:26", "You were doing the business in the day and then writing at night?", "Daniel Yergin 01:25:29", "Writing at night, writing at weekends, vacations, filling up our car with books, and just immersing in it. I did not have a master plan. I really should have. It would have saved a lot of time probably. I would just immerse myself in something and get it all in my head. My mother was a painter and I would watch her sketch. That's the image I have is that I sketch it out and then I fill it out and work on it. Like a lot of people, I love to edit and polish. I love going over it and just making a sentence better and then saying how to make it better. With The Prize , one of the things is that I read the whole book aloud to myself to test every sentence. Does every sentence have resilience? Does it sing? For me, that's a source of pleasure to do that.", "Dwarkesh Patel 01:26:28", "Did you know while you were writing it that it would become this definitive history?", "Daniel Yergin 01:26:33", "No. We had this apartment overlooking the Charles River in Cambridge. I'd look out there at 2 a.m. in the morning and think, \"What's going to happen?\" I think those around me despaired a little bit. This could end up a veil of tears. But it turned out. And then the book was basically five years late, brilliantly timed. People said they have a great sense of timing. I said I was five years late. But I did have a sense that I needed to get it done. That something, that some crisis was going to come. I had a sense of that and that drove me. Otherwise there's this danger that you just keep working on it.", "Dwarkesh Patel 01:27:21", "Okay, I think that's an excellent place to close. Thank you so much for coming on the podcast. This is wonderful.", "Daniel Yergin 01:27:27", "It's great to have this conversation. It gave me a lot to think about too. So, thank you.", "" ]
[ "https://en.wikipedia.org/wiki/Daniel_Yergin", "https://amzn.to/3To6jz7", "https://amzn.to/4d7IylW", "https://en.wikipedia.org/wiki/Robert_Caro", "https://amzn.to/4cZ0I9a", "https://amzn.to/3TrjlM4", "https://amzn.to/4glwY9i", "https://en.wikipedia.org/wiki/Edwin_Drake", "https://en.wikipedia.org/wiki/John_D._Rockefeller", "https://en.wikipedia.org/wiki/George_P._Mitchell", "https://www.strausscenter.org/energy-and-security-project/the-u-s-shale-revolution/", "https://en.wikipedia.org/wiki/Drake_Well", "https://en.wikipedia.org/wiki/Standard_Oil", "https://en.wikipedia.org/wiki/Big_Tech", "https://en.wikipedia.org/wiki/Big_Oil", "https://en.wikipedia.org/wiki/Dot-com_bubble", "https://en.wikipedia.org/wiki/Fracking#:~:text=Fracking%20(also%20known%20as%20hydraulic,bedrock%20by%20a%20pressurized%20liquid.", "https://en.wikipedia.org/wiki/Kerosene", "https://en.wikipedia.org/wiki/Gasoline", "https://en.wikipedia.org/wiki/Thomas_Edison", "https://en.wikipedia.org/wiki/Henry_Ford", "https://en.wikipedia.org/wiki/Oil_refinery", "https://en.wikipedia.org/wiki/Standard_Oil_Co._of_New_Jersey_v._United_States", "https://en.wikipedia.org/wiki/Progressive_Era#Antitrust", "https://en.wikipedia.org/wiki/Trust_(business)", "https://en.wikipedia.org/wiki/Fluid_catalytic_cracking", "https://en.wikipedia.org/wiki/Ida_Tarbell", "https://en.wikipedia.org/wiki/The_History_of_the_Standard_Oil_Company", "https://en.wikipedia.org/wiki/Theodore_Roosevelt#Trust_busting_and_regulation", "https://www.dwarkeshpatel.com/p/andrew-roberts", "https://en.wikipedia.org/wiki/Andrew_Roberts,_Baron_Roberts_of_Belgravia", "https://amzn.to/3TqWbW5", "https://en.wikipedia.org/wiki/Winston_Churchill", "https://en.wikipedia.org/wiki/John_Fisher,_1st_Baron_Fisher", "https://en.wikipedia.org/wiki/Horses_in_World_War_I", "https://en.wikipedia.org/wiki/George_Curzon,_1st_Marquess_Curzon_of_Kedleston", "https://www.nytimes.com/1918/11/23/archives/floated-to-victory-on-a-wave-of-oil-earl-curzon-tells-how-allied.html", "https://en.wikipedia.org/wiki/Attrition_warfare", "https://en.wikipedia.org/wiki/Trench_warfare", "https://en.wikipedia.org/wiki/Synthetic_fuel", "https://en.wikipedia.org/wiki/Petroleum_industry_in_Azerbaijan#World_War_II", "https://en.wikipedia.org/wiki/Chester_W._Nimitz", "https://en.wikipedia.org/wiki/Erwin_Rommel", "https://en.wikipedia.org/wiki/George_S._Patton", "https://en.wikipedia.org/wiki/Kamikaze", "https://en.wikipedia.org/wiki/Japanese_invasion_of_Manchuria", "https://en.wikipedia.org/wiki/Prelude_to_the_attack_on_Pearl_Harbor", "https://en.wikipedia.org/wiki/Isoroku_Yamamoto", "https://en.wikipedia.org/wiki/OPEC", "https://en.wikipedia.org/wiki/1973_oil_crisis", "https://en.wikipedia.org/wiki/Watergate_scandal", "https://en.wikipedia.org/wiki/Everette_Lee_DeGolyer", "https://en.wikipedia.org/wiki/Iran_crisis_of_1946", "https://en.wikipedia.org/wiki/Raymond_Vernon", "https://amzn.to/4e0YJCw", "https://en.wikipedia.org/wiki/BP", "https://en.wikipedia.org/wiki/Anglo-Persian_Oil_Company", "https://en.wikipedia.org/wiki/Houthi_movement", "https://news.usni.org/2024/09/03/houthis-strike-two-more-crude-tankers-in-the-red-sea-another-tanker-continues-to-burn", "https://en.wikipedia.org/wiki/History_of_the_Venezuelan_oil_industry#Nationalization", "https://en.wikipedia.org/wiki/Hugo_Ch%C3%A1vez", "https://en.wikipedia.org/wiki/Six-Day_War", "https://www.brookings.edu/articles/the-1967-war-and-the-oil-weapon/", "https://en.wikipedia.org/wiki/Spot_market", "https://en.wikipedia.org/wiki/Futures_contract", "https://en.wikipedia.org/wiki/Elasticity_(economics)", "https://en.wikipedia.org/wiki/1980s_oil_glut", "https://en.wikipedia.org/wiki/Mohammad_Reza_Pahlavi", "https://www.investopedia.com/terms/p/petrodollars.asp", "https://en.wikipedia.org/wiki/Spiro_Agnew", "https://en.wikipedia.org/wiki/Sovereign_wealth_fund", "https://en.wikipedia.org/wiki/Iranian_Revolution", "https://www.cfr.org/article/irans-regional-armed-network", "https://en.wikipedia.org/wiki/Xerxes_I", "https://en.wikipedia.org/wiki/Government_Pension_Fund_of_Norway", "https://en.wikipedia.org/wiki/GIC_(sovereign_wealth_fund)", "https://en.wikipedia.org/wiki/Public_Investment_Fund", "https://en.wikipedia.org/wiki/Public_Investment_Fund", "https://en.wikipedia.org/wiki/Abu_Dhabi_Investment_Authority", "https://en.wikipedia.org/wiki/Dutch_disease", "https://en.wikipedia.org/wiki/Groningen_gas_field", "https://en.wikipedia.org/wiki/Rentier_capitalism", "https://en.wikipedia.org/wiki/SAVAK", "https://en.wikipedia.org/wiki/Ruhollah_Khomeini", "https://en.wikipedia.org/wiki/Saudi_Aramco", "https://en.wikipedia.org/wiki/Narendra_Modi", "https://en.wikipedia.org/wiki/Licence_Raj", "https://amzn.to/3ZnLabW", "https://en.wikipedia.org/wiki/Ben_Bernanke", "https://en.wikipedia.org/wiki/Federal_Reserve", "https://en.wikipedia.org/wiki/1973%E2%80%931975_recession#:~:text=The%20recession%20in%20the%20United,characterized%20by%20low%20economic%20growth.", "https://en.wikipedia.org/wiki/Great_Recession", "https://en.wikipedia.org/wiki/Too_cheap_to_meter#:~:text=Too%20cheap%20to%20meter%20refers,a%20profit%20from%20associated%20services.", "https://en.wikipedia.org/wiki/Fusion_power", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://ondemand.ceraweek.com/detail/video/6349476292112/luncheon-dialogue-with-bill-gates", "https://ceraweek.com/index.html", "https://en.wikipedia.org/wiki/Base_load", "https://en.wikipedia.org/wiki/Lineworker", "https://en.wikipedia.org/wiki/Benjamin_Silliman_Jr.", "https://en.wikipedia.org/wiki/Shale_gas", "https://en.wikipedia.org/wiki/Liquefied_natural_gas", "https://en.wikipedia.org/wiki/Fumio_Kishida", "https://www.reuters.com/business/energy/federal-judge-halts-us-governments-ban-lng-permits-2024-07-01/", "https://amzn.to/3Xuu9dF", "https://www.eia.gov/totalenergy/data/monthly/", "https://www.energyinst.org/statistical-review", "https://en.wikipedia.org/wiki/Variable_renewable_energy", "https://www.nytimes.com/1995/11/01/us/joseph-lindmayer-physicist-66.html", "https://www.legacy.com/us/obituaries/washingtonpost/name/peter-varadi-obituary?id=36462909", "https://en.wikipedia.org/wiki/Tehachapi_Pass", "https://www.reuters.com/business/us-locks-steep-china-tariff-hikes-many-start-sept-27-2024-09-13/", "https://en.wikipedia.org/wiki/Inflation_Reduction_Act", "https://www.n-magazine.com/vineyard-wind-turbing-fail", "https://en.wikipedia.org/wiki/Equinor", "https://www.dwarkeshpatel.com/p/richard-rhodes", "https://amzn.to/3XKJdFs", "https://en.wikipedia.org/wiki/Cambridge_Energy_Research_Associates" ]
https://www.dwarkesh.com/p/dario-amodei
Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress
[ "Dwarkesh Patel (00:00:49 - 00:00:58):", "Today I have the pleasure of speaking with Dario Amodei, the CEO of Anthropic, and I'm really excited about this one.", "Dario, thank you so much for coming on the podcast.", "Dario Amodei (00:00:59 - 00:01:00):", "Thanks for having me.", "(00:01:00) - Scaling", "Dwarkesh Patel (00:01:00 - 00:01:19):", "First question. You have been one of the very few people who has seen scaling coming for years. As somebody who's seen it coming, what is fundamentally the explanation for why scaling works? Why is the universe organized such that if you throw big blobs of compute at a wide enough distribution of data, the thing becomes intelligent?", "Dario Amodei (00:01:21 - 00:03:01):", "I think the truth is that we still don't know. It's almost entirely an empirical fact. It's a fact that you could sense from the data and from a bunch of different places but we still don't have a satisfying explanation for it.", "If I were to try to make one and I'm just kind of waving my hands when I say this, there's these ideas in physics around long tail or power law of correlations or effects. When a bunch of stuff happens, when you have a bunch of features, you get a lot of the data in the early fat part of the distribution before the tails. For language, this would be things like — “ Oh, I figured out there are parts of speech and nouns follow verbs.” And then there are these more and more subtle correlations.", "So it kind of makes sense why every log or order of magnitude that you add, you capture more of the distribution. What's not clear at all is why does it scale so smoothly with parameters? Why does it scale so smoothly with the amount of data?", "You can think up some explanations of why it's linear. The parameters are like a bucket, and the data is like water, and so size of the bucket is proportional to size of the water. But why does it lead to all this very smooth scaling? We still don't know. There's all these explanations. Our chief scientist, Jared Kaplan did some stuff on fractal manifold dimension that you can use to explain it.", "So there's all kinds of ideas, but I feel like we just don't really know for sure.", "Dwarkesh Patel (00:03:01 - 00:03:29):", "And by the way, for the audience who is trying to follow along. By scaling, we're referring to the fact that you can very predictably see how if you go from Claude-1 to Claude-2 that the loss in terms of whether it can predict the next token scales very smoothly.", "Okay, so we don't know why it's happening, but can you at least predict empirically that here is the loss at which this ability will emerge, here is the place where this circuit will emerge? Is that at all predictable or are you just looking at the loss number?", "Dario Amodei (00:03:29 - 00:04:06):", "That is much less predictable. What's predictable is this statistical average, this loss, this entropy. And it's super predictable. It's sometimes predictable even to several significant figures which you don't see outside of physics. You don't expect to see it in this messy empirical field. But specific abilities are actually very hard to predict. Back when I was working on GPT-2 and GPT-3, when does arithmetic come in place? When do models learn to code? Sometimes it's very abrupt.", "It's like how you can predict statistical averages of the weather, but the weather on one particular day is very hard to predict.", "Dwarkesh Patel (00:04:07 - 00:04:14):", "Dumb it down for me. I don't understand manifolds, but mechanistically, it doesn't know addition yet and suddenly now it knows addition. What has happened?", "Dario Amodei (00:04:15 - 00:04:56):", "This is another question that we don't know the answer to. We're trying to answer this with things like mechanistic interpretability . You can think about these things like circuits snapping into place. Although there is some evidence that when you look at the models being able to add things, its chance of getting the right answer shoots up all of a sudden. But if you look at what's the probability of the right answer? You'll see it climb from like one in a million to one in 100,000 to one in a 1000 long before it actually gets the right answer. In many of these cases there's some continuous process going on behind the scenes. I don't understand it at all.", "Dwarkesh Patel (00:04:57 - 00:05:03):", "Does that imply that the circuit or the process for doing addition was pre existing and it just got increased in salience?", "Dario Amodei (00:05:03 - 00:05:16):", "I don't know if there's this circuit that's weak and getting stronger. I don't know if it's something that works, but not very well. I think we don't know and these are some of the questions we're trying to answer with mechanistic interpretability.", "Dwarkesh Patel (00:05:16 - 00:05:18):", "Are there abilities that won't emerge with scale?", "Dario Amodei (00:05:18 - 00:06:00):", "I definitely think that things like alignment and values are not guaranteed to emerge with scale. One way to think about it is you train the model and it's basically predicting the world, it's understanding the world. Its job is facts not values. It's trying to predict what comes next. But there's free variables here — What should you do? What should you think? What should you value? There aren't bits for that.", "There's just — if I started with this I should finish with this. If I started with this other thing I should finish with this other thing. And so I think that's not going to emerge.", "Dwarkesh Patel (00:06:02 - 00:06:12):", "If it turns out that scaling plateaus before we reach human level intelligence, looking back on it, what would be your explanation? What do you think is likely to be the case if that turns out to be the outcome?", "Dario Amodei (00:06:12 - 00:08:57):", "I would distinguish some problem with the fundamental theory with some practical issue. One practical issue we could have is we could run out of data. For various reasons, I think that's not going to happen but if you look at it very naively we're not that far from running out of data. So it's like we just don't have the data to continue the scaling curves. Another way it could happen is we just use up all of the compute that was available and that wasn't enough and then progress is slow after that. I wouldn't bet on either of those things happening but they could.", "From a fundamental perspective, I personally think it's very unlikely that the scaling laws will just stop. If they do, another reason could just be that we don't have quite the right architecture. If we tried to do it with an LSTM or an RNN the slope would be different. It still might be that we get there but there are some things that are just very hard to represent when you don't have the ability to attend far in the past that transformers have. If somehow we just hit a wall and it wasn’t about the architecture I'd be very surprised by that. We're already at the point where to me the things the models can't do don't seem to be different in kind from the things they can do.", "You could have made a case a few years ago that they can't reason, they can't program. You could have drawn boundaries and said maybe you'll hit a wall. I didn't think we would hit a wall, a few other people didn't think we would hit a wall, but it was a more plausible case then. It's a less plausible case now.", "It could happen. This stuff is crazy. We could hit a wall tomorrow. If that happens my explanation would be there's something wrong with the loss when you train on next word prediction.", "If you really want to learn to program at a really high level, it means you care about some tokens much more than others and they're rare enough that the loss function over focuses on the appearance, the things that are responsible for the most bits of entropy, and instead they don't focus on this stuff that's really essential. So you could have the signal drowned out in the noise. I don't think it's going to play out that way for a number of reasons. But if you told me — Yes, you trained your 2024 model. It was much bigger and it just wasn't any better, and you tried every architecture and didn't work, that's the explanation I would reach for.", "Dwarkesh Patel (00:08:58 - 00:09:02):", "Is there a candidate for another loss function? If you had to abandon next token prediction.", "Dario Amodei (00:09:02 - 00:09:31):", "I think then you would have to go for some kind of RL. There's many different kinds. There's RL from immune feedback, there's RL against an objective, there's things like Constitutional AI. There's things like amplification and debate. These are kind of both alignment methods and ways of training models.", "You would have to try a bunch of things, but the focus would have to be on what do we actually care about the model doing? In a sense, we're a little bit lucky that predict the next word gets us all these other things we need. There's no guarantee.", "Dwarkesh Patel (00:09:31 - 00:09:42):", "From your worldview it seems there's a multitude of different loss functions that it's just a matter of what can allow you to just throw a whole bunch of data at it. Next token prediction itself is not significant.", "Dario Amodei (00:09:42 - 00:10:01):", "The thing with RL is you get slowed down a bit because you have to design how the loss function works by some method. The nice thing with the next token prediction is it's there for you. It's the easiest thing in the world. So I think it would slow you down if you couldn't scale in just that very simplest way.", "Dwarkesh Patel (00:10:01 - 00:10:06):", "You mentioned that data is likely not to be the constraint. Why do you think that is the case?", "Dario Amodei (00:10:06 - 00:10:22):", "There's various possibilities here and for a number of reasons I shouldn't go into the details, but there's many sources of data in the world and there's many ways that you can also generate data. My guess is that this will not be a blocker.", "Maybe it would be better if it was, but it won't be.", "Dwarkesh Patel (00:10:23 - 00:10:24):", "Are you talking about multimodal?", "Dario Amodei (00:10:25 - 00:10:26):", "There’s just many different ways to do it.", "Dwarkesh Patel (00:10:26 - 00:10:33):", "How did you form your views on scaling? How far back can we go? And then you would be basically saying something similar to this.", "Dario Amodei (00:10:33 - 00:14:04):", "This view that I have formed gradually from 2014 to 2017. My first experience with it was my first experience with AI. I saw some of the early stuff around AlexNet in 2012. I always had wanted to study intelligence but before I was just like, this doesn’t seem like it’s actually working. All the way back to 2005. I'd read Ray Kurzweil ’s work. I'd read even some of Eliezer ’s work on the early Internet back then. And I thought this stuff kind of looks far away. I look at the AI stuff of today and it’s not anywhere close.", "But with AlexNet I was like, oh, this stuff is actually starting to work. So I joined Andrew Ng ’s group at Baidu. I had been in a different field and this was my first experience with AI and it was a bit different from a lot of the academic style research that was going on elsewhere in the world.", "I kind of got lucky in that the task that was given to me and the other folks there. It was just to make the best speech recognition system that you can.", "There was a lot of data available, there were a lot of GPUs available. It posed the problem in a way that was amenable to discovering that kind of scaling was a solution. That's very different from being a postdoc whose job is to come up with an idea that seems clever and new and makes your mark as someone who's invented something.", "I just tried the simplest experiments. I was just fiddling with some dials. I was like, try adding more layers to the RNN, try training it for longer, what happens? How long does it take to overfit? What if I add new data and repeat it less times? And I just saw these very consistent patterns.", "I didn't really know that this was unusual or that others weren't thinking in this way. This was almost like beginner's luck. It was my first experience with it and I didn't really think about it beyond speech recognition. I was just like, oh, I don't know anything about this field. There are zillions of things people do with machine learning. But I'm like, weird, this seems to be true in the speech recognition field.", "It was just before OpenAI started that I met Ilya, who you interviewed . One of the first things he said to me was — “ Look. The models, they just want to learn. You have to understand this. The models, they just want to learn. ” And it was a bit like a Zen Koan. I listened to this and I became enlightened.", "And over the years, I would be the one who would formalize a lot of these things and kind of put them together, but what that told me was that the phenomenon that I'd seen wasn't just some random thing. It was broad. It was more general. The models just want to learn. You get the obstacles out of their way. You give them good data, you give them enough space to operate in, you don't do something stupid like condition them badly numerically, and they want to learn. They'll do it.", "Dwarkesh Patel (00:14:04 - 00:14:35):", "What I find really interesting about what you said is there were many people who were aware that these things are really good at speech recognition or at playing these constrained games. Very few extrapolated from there like you and Ilya did to something that is generally intelligent.", "What was different about the way you were thinking about it versus how others were thinking about it? What made you think it's getting better at speech in this consistent way, it will get better at everything in this consistent way.", "Dario Amodei (00:14:35 - 00:15:46):", "I genuinely don't know. At first when I saw it for speech, I assumed this was just true for speech or for this narrow class of models. I think it was just that over the period between 2014 and 2017, I tried it for a lot of things and saw the same thing over and over again. I watched the same being true with Dota. I watched the same being true with robotics. Many people thought that as a counterexample, but I just thought, well, it's hard to get data for robotics, but if we look within the data that we have, we see the same patterns.", "I think people were very focused on solving the problem in front of them. It's very hard to explain why one person thinks one way and another person thinks a different way. People just see it through a different lens. They are looking vertically instead of horizontally. They're not thinking about the scaling, they're thinking about how do I solve my problem? And for robotics, there's not enough data. That can easily abstract to — scaling doesn't work because we don't have the data.", "For some reason, and it may just have been random, I was obsessed with that particular direction.", "(00:15:46) - Language", "Dwarkesh Patel (00:15:46 - 00:15:57):", "When did it become obvious to you that language is the means to just feed a bunch of data into these things? Or was it just you ran out of other things. Like robotics, there's not enough data. This other thing, there's not enough data.", "Dario Amodei (00:15:57 - 00:17:38):", "I think this whole idea of the next word prediction, that you could do self supervised learning, together with the idea that there's so much richness and structure there for predicting the next word. It might say two plus two equals and you have to know the answer is four. It might be telling the story about a character.", "Basically, it's posing to the model the equivalent of these developmental tests that get posed to children. Mary walks into the room and puts an item in there and then Chuck walks into the room and removes the item and Mary doesn't see it. What does Mary think?", "To get this right in the service of predicting the next word the models are going to have to solve all these theory of mind problems, solve all these math problems. And so my thinking was just, well, you scale it up as much as you can. There's kind of no limit to it.", "And I think I kind of abstractly had that view but the thing that really solidified and convinced me was the work that Alec Radford did on GPT-1. Which was that not only could you get this language model that could predict things very well but you could also fine tune it. In those days, you needed to fine tune it to do all these other tasks.", "So I was like, wow, this isn't just some narrow thing where you get the language model right. It's sort of halfway to everywhere. You get the language model right and then with a little move in this direction, it can solve this logical dereference test or whatever. And with this other thing, it can solve translation or something. And then you're like, wow, I think there's really something to do. And of course, we can really scale it.", "Dwarkesh Patel (00:17:38 - 00:18:21):", "One thing that's confusing, or that would have been hard to see — If you told me in 2018 we'll have models in 2023, like Claude 2 that can write theorems in the style of Shakespeare, whatever theory you want, they can ace standardized test with open ended questions, just all kinds of really impressive things, I would have said — Oh, you have AGI. You clearly have something that is human level intelligence.", "While these things are impressive, it clearly seems we're not at human level, at least in the current generation and potentially for generations to come. What explains this discrepancy between super impressive performance in these benchmarks and the things you could describe versus general intelligence?", "Dario Amodei (00:18:21 - 00:21:47):", "That was one area where actually I was not prescient and I was surprised as well.", "When I first looked at GPT-3 and the kind of things that we built in the early days at Anthropic, my general sense was that it seems like they've really grasped the essence of language. I'm not sure how much we need to scale them up. Maybe what's more needed from here is like RL and all the other stuff.", "In 2020 I thought we can scale this a bunch more but I wonder if it's more efficient to scale it more or to start adding on these other objectives like RL. I thought maybe if you do as much RL as you've done pre training for a 2020 style model, that's the way to go.", "Scaling it up will keep working. But is that really the best path? And I don't know, it just keeps going. I thought it had understood a lot of the essence of language but then there's further to go.", "Stepping back from it. One of the reasons why I'm sort of very empiricist about AI, about safety, about organizations, is that you often get surprised. I feel like I've been right about some things but still with these theoretical pictures ahead, been wrong about most things. Being right about 10% of the stuff sets you head and shoulders above many people.", "If you look back to these diagrams that are like, here's the village idiot, here's Einstein. Here's the scale of intelligence. And the village idiot and Einstein are very close to each other.", "Maybe that's still true in some abstract sense or something but it's not really what we're seeing, is it? We're seeing that it seems like the human range is pretty broad and we don't hit the human range in the same place or at the same time for different tasks.", "Like, write a sonnet in the style of Cormac McCarthy. I'm not very creative, so I couldn't do that but that's a pretty high level human skill. And even the model is starting to get good at stuff like constrained writing like, write a page about X without using the letter E.", "I think the models might be superhuman or close to superhuman at that. But when it comes to proving relatively simple mathematical theorems, they're just starting to do the beginning of it. They make really dumb mistakes sometimes and they really lack any kind of broad correcting your errors or doing some extended task.", "So it turns out that intelligence isn't a spectrum. There are a bunch of different areas of domain expertise. There are a bunch of different kinds of skills. Memory is different. It's all formed in the blob, it's not complicated. But to the extent it even is on the spectrum, the spectrum is also wide.", "If you asked me ten years ago, that's not what I would have expected at all, but I think that's very much the way it's turned out.", "Dwarkesh Patel (00:21:47 - 00:22:11):", "Oh, man. I have so many questions just as a follow up on that.", "Do you expect that given the distribution of training that these models get from massive amounts of internet data versus what humans got from evolution, that the repertoire of skills that elicits will be just barely overlapping? Will it be like concentric circles? How do you think about that? Do those matter?", "Dario Amodei (00:22:11 - 00:22:58):", "Clearly there's certainly a large amount of overlap because a lot of the things these models do have business applications and many of their business applications are doing things that are helping humans to be more effective at things. So the overlap is quite large.", "If you think of all the activity that humans put on the internet in text, that covers a lot of it, but it probably doesn't cover some things. Like the models learn a physical model of the world to some extent, but they certainly don't learn how to actually move around in the world. Again, maybe that's easy to fine tune.", "So there are some things that the models don't learn that humans do. And then the models also learn things that humans don’t, for example, to speak fluent Base 64. I don't know about you, but I never learned that.", "(00:22:58) - Economic Usefulness", "Dwarkesh Patel (00:22:58 - 00:23:13):", "How likely do you think it is that these models will be superhuman for many years at economically valuable tasks while they are still below humans in many other relevant tasks that prevents an intelligence explosion or something?", "Dario Amodei (00:23:13 - 00:24:17):", "This kind of stuff is really hard to know so I'll give that caveat. You can kind of predict the basic scaling laws and then this more granular stuff, which we really want to know to know how this all is going to go, is much harder to know.", "My guess would be the scaling laws are going to continue. Again, subject to — do people slow down for safety or for regulatory reasons? But let's just put all that aside and say we have the economic capability to keep scaling. If we did that, what would happen?", "My view is we're going to keep getting better across the board and I don't see any area where the models are super, super weak or not starting to make progress. That used to be true of math and programming, but over the last six months the 2023 generation of models, compared to the 2022 generation, has started to learn that. There may be more subtle things we don't know. And so I kind of suspect, even if it isn't quite even, that the rising tide will lift all the boats.", "Dwarkesh Patel (00:24:17 - 00:24:26):", "Does that include the thing you were mentioning earlier where if there's an extended task, it loses its train of thought or its ability to just execute a series of steps?", "Dario Amodei (00:24:26 - 00:25:38):", "That's going to depend on things like RL training to have the model do longer horizon tasks. I don't expect that to require a substantial amount of additional compute. I think that was probably an artifact of thinking about RL in the wrong way and underestimating how much the model had learned on its own.", "In terms of are we going to be superhuman in some areas and not others? I think it's complicated. I could imagine that we won't be superhuman in some areas because they involve embodiment in the physical world. And then what happens? Do the AIs help us train faster AIs? And those faster AIs wrap around and solve that? Do you not need the physical world? It depends what you mean. Are we worried about an alignment disaster? Are we worried about misuse, like making weapons of mass destruction? Are we worried about AI taking over research from humans? Are we worried about it reaching some threshold of economic productivity where it can do what the average human does? I think these different thresholds have different answers, although I suspect they will all come within a few years.", "Dwarkesh Patel (00:25:38 - 00:25:47):", "Let me ask about those thresholds. If Claude was an employee at Anthropic, what salary would it be worth? Is it meaningfully speeding up AI progress?", "Dario Amodei (00:25:47 - 00:26:42):", "It feels to me like an intern in most areas, but then some specific areas where it's better than that.", "One thing that makes the comparison hard is that the form factor is not the same as a human. If you were to behave like one of these chat bots, I guess we could have this conversation, but they're more designed to answer single or a few questions. They don't have the concept of having a long life of prior experience.  We're talking here about things that I've experienced in the past and chat bots don't have that.", "There's all kinds of stuff missing and so it's hard to make a comparison. They feel like interns in some areas and then they have areas where they spike and are really savants, where they may be better than anyone here.", "Dwarkesh Patel (00:26:42 - 00:27:01):", "But does the overall picture of something like an intelligence explosion make sense to you? My former guest, Carl Shulman , has this very detailed model of an intelligence explosion. As somebody who would actually see that happening, does that make sense to you? As they go from interns to entry level software engineers. Those entry level software engineers increase your productivity…", "Dario Amodei (00:27:01 - 00:27:49):", "I think the idea that as AI systems become more productive, first they speed up the productivity of humans, then they equal the productivity of humans, and then in some meaningful sense are the main contributor to scientific progress that happens at some point. That basic logic seems likely to me although I have a suspicion that when we actually go into the details, it's going to be weird and different than we expect. That in all the detailed models, we're thinking about the wrong things or we're right about one thing, and then are wrong about ten other things. I think we might end up in a weirder world than we expect.", "Dwarkesh Patel (00:27:49 - 00:27:56):", "When you add all this together, what does your estimate of when we get something kind of human level look like?", "Dario Amodei (00:27:56 - 00:29:32):", "It depends on the thresholds. In terms of someone looks at the model and even if you talk to it for an hour or so, it's basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years.", "The main thing that would stop it would be if we hit certain safety thresholds and stuff like that. So if a company or the industry decides to slow down or we're able to get the government to institute restrictions that moderate the rate of progress for safety reasons, that would be the main reason it wouldn't happen. But if you just look at the logistical and economic ability to scale, we're not very far at all from that.", "Now that may not be the threshold where the models are existentially dangerous. In fact, I suspect it's not quite there yet. It may not be the threshold where the models can take over most AI research. It may not be the threshold where the models seriously change how the economy works.", "I think it gets a little murky after that and all of those thresholds may happen at various times after that. But in terms of the base technical capability of — it kind of sounds like a reasonably generally educated human across the board. I think that could be quite close.", "Dwarkesh Patel (00:29:32 - 00:29:43):", "Why would it be the case that it could pass a Turing Test for an educated person but not be able to contribute or substitute for human involvement in the economy?", "Dario Amodei (00:29:43 - 00:32:15):", "A couple of reasons. One is just that the threshold of skill isn't high enough, comparative advantage. It doesn't matter that I have someone who's better than the average human at every task. What I really need for AI research is to find something that is strong enough to substantially accelerate the labor of the thousand experts who are best at it. We might reach a point where the comparative advantage of these systems is not great.", "Another thing that could be the case is that there are these mysterious frictions that don't show up in naive economic models but you see it whenever you go to a customer or something. You're like — “Hey, I have this cool chat bot.” In principle, it can do everything that your customer service bot does or this part of your company does, but the actual friction of how do we slot it in? How do we make it work? That includes both just the question of how it works in a human sense within the company, how things happen in the economy and overcome frictions, and also just, what is the workflow? How do you actually interact with it?", "It's very different to say, here's a chat bot that looks like it's doing this task or helping the human to do some task as it is to say, okay, this thing is deployed and 100,000 people are using it.", "Right now lots of folks are rushing to deploy these systems but in many cases, they're not using them anywhere close to the most efficient way that they could. Not because they're not smart, but because it takes time to work these things out. And so I think when things are changing this fast, there are going to be all of these frictions.", "These are messy realities that don't quite get captured in the model. I don't think it changes the basic picture. I don't think it changes the idea that we're building up this snowball of, the models help the models get better and can accelerate what the humans do. And eventually it's mostly the models doing the work.", "You zoom out far enough that's happening. But I'm skeptical of any kind of precise mathematical or exponential prediction of how it's going to be. I think it's all going to be a mess. But what we know is it's on a metaphorical exponential, and it's going to happen fast.", "Dwarkesh Patel (00:32:15  - 00:32:54):", "How do those different exponentials which we've been talking about net out?", "One was the scaling laws themselves are power laws with decaying marginal loss parameter or something. The other exponential you talked about is, these things can get involved in the process of AI research itself, speeding it up.", "Those two are sort of opposing exponentials. Does it net out to be superlinear or sublinear? And also you mentioned that the distribution of intelligence might just be broader. After we get to this point in two to three years, what does that look like?", "Dario Amodei (00:32:54 - 00:34:23):", "I think it's very unclear. We're already at the point where if you look at the loss, the scaling laws are starting to bend. We've seen that in published model cards offered by multiple companies. So that's not a secret at all.", "But as they start to bend, each little bit of entropy of accurate prediction becomes more important. Maybe these last little bits of entropy are the difference between a physics paper as Einstein would have written it as opposed to some other physicist.", "It's hard to assess significance from this. It certainly looks like in terms of practical performance, the metrics keep going up relatively linearly, although they're always unpredictable. It's hard to see that.", "And then the thing that I think is driving the most acceleration is just more and more money is going into the field. People are seeing that there's just a huge amount of economic value and so I expect the price, the amount of money spent on the largest models, to go up by like a factor of 100 or something. And for that to then be concatenated with the chips are getting faster, the algorithms are getting better because there's so many people working on this now.", "Again, I'm not making a normative statement here. This is what should happen. I'm not even saying this necessarily will happen because there's important safety and government questions here which we're very actively working on. I'm just saying, left to itself, this is what the economy is going to do.", "Dwarkesh Patel (00:34:23 - 00:34:42):", "We'll get to those questions in a second. But how do you think about the contribution of Anthropic to that increase in the scope of this industry. There's an argument you can make that, with that investment, we can work on safety stuff at Anthropic, another that says you're raising the salience of this field in general.", "Dario Amodei (00:34:42 - 00:35:34):", "It's all costs and benefits. The costs are not zero. A mature way to think about these things is not to deny that there are any costs, but to think about what the costs are and what the benefits are. I think we've been relatively responsible in the sense that we didn't cause the big acceleration that happened late last year and at the beginning of this year. We weren't the ones who did that.", "And honestly, if you look at the reaction of Google, that might be ten times more important than anything else. And then once it had happened, once the ecosystem had changed, then we did a lot of things to stay on the frontier.", "It's like any other question. You're trying to do the things that have the lowest costs and the biggest benefits and that causes you to have different strategies at different times.", "Dwarkesh Patel (00:35:34 - 00:36:02):", "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven't been able to make a single new connection that has led to a discovery?", "Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There's a medical cure right here.", "Shouldn't we be expecting that kind of stuff?", "Dario Amodei (00:36:02 - 00:38:05):", "I'm not sure. These words. Discovery. Creativity. One of the lessons I've learned is that in the big blob of compute, these ideas often end up being fuzzy and elusive and hard to track down.", "But I think there is something here. The models do display a kind of ordinary creativity. Things like, write a sonnet in the style of Cormac McCarthy or Barbie. There is some creativity to that and they do draw new connections of the kind that an ordinary person would draw.", "I agree with you that there haven't been any “big” scientific discoveries. I think that's a mix of just the model skill level is not high enough yet. I was on a podcast last week where the host said, “I don't know, I play with these models. They're kind of mid. They get a B or a B minus.”", "That is going to change with the scaling.", "I do think there's an interesting point about, well, the models have an advantage, which is they know a lot more than us. Shouldn’t they have an advantage already, even if their skill level isn't quite high? Maybe that's kind of what you're getting at.", "I don't really have an answer to that. It seems certainly like memorization and facts and drawing connections is an area where the models are ahead. And I do think maybe you need those connections and you need a fairly high level of skill.", "Particularly in the area of biology, for better and for worse, the complexity of biology is such that the current models know a lot of things right now and that's what you need to make discoveries and draw connections. It's not like physics where you need to think and come up with a formula. In biology you need to know a lot of things. and so I do think the models know a lot of things and they have a skill level that's not quite high enough to put them together.", "I think they are just on the cusp of being able to put these things together.", "(00:38:05) - Bioterrorism", "Dwarkesh Patel (00:38:05 - 00:38:25):", "On that point. Last week in your Senate testimony , you said that these models are two to three years away from potentially enabling large scale bio terrorism attacks. Can you make that more concrete without obviously giving the kind of information that would result in speeding that up? Is it one shotting how to weaponize something or do you have to fine tune an open source model? What would that actually look like?", "Dario Amodei (00:38:24 - 00:40:48):", "I think it'd be good to clarify this because we did a blog post and the Senate testimony and various people didn't understand the point or didn't understand what we'd done.", "Today you can ask the models all kinds of things about biology and get them to say all kinds of scary things, but often those scary things are things that you could Google, and I'm therefore not particularly worried about that. I think it's actually an impediment to seeing the real danger, where someone just says — Oh, I asked this model to tell me some things about smallpox, and it will.", "That is actually not what I'm worried about. We spent about six months working with folks who are the most expert in the world on how do biological attacks happen, what would you need to conduct such an attack, and how do we defend against such an attack?", "They worked very intensively on just the entire workflow of trying to do a bad thing. It's not one shot, it's a long process. There are many steps to it. It's not just like I asked the model for this one page of information. And again, without going into any detail, the thing I said in the Senate testimony is, there are some steps where you can just get information on Google. There are some steps that are what I'd call missing. They're scattered across a bunch of textbooks, or they're not in any textbook. They're kind of implicit knowledge, and they're not explicit knowledge. They're more like, I have to do this lab protocol, and what if I get it wrong? Oh, if this happens, then my temperature was too low. If that happened, I needed to add more of this particular reagent.", "What we found is that for the most part, those key missing pieces, the models can't do them yet, but we found that sometimes they can, and when they can, sometimes they still hallucinate, which is the thing that's keeping us safe. But we saw enough signs of the models doing those key things well. And if we look at state of the art models and go backwards to previous models, we look at the trend, it shows every sign that two or three years from now, we're going to have a real problem.", "Dwarkesh Patel (00:40:48 - 00:40:53):", "Yeah, especially the thing you mentioned on the log scale. You go from one in 100 times, it gets it right, to one in ten, to..", "Dario Amodei (00:40:54 - 00:41:17):", "Exactly. I've seen many of these “groks” in my life. I was there when I watched when GPT-3 learned to do arithmetic, when GPT-2 learned to do regression a little bit above chance, when with Claude we got better on all these tests of helpful, honest, harmless. I've seen a lot of groks. This is unfortunately not one that I'm excited about, but I believe it's happening.", "Dwarkesh Patel (00:41:17 - 00:41:42):", "Somebody might say, listen, you were a co-author on this post that OpenAI released about GPT-2 where they said, we're not going to release the weights or the details here because we're worried that this model will be used for something bad. And looking back on it now, it's laughable to think that GPT-2 could have done anything bad. Are we just way too worried? This is a concern that doesn't make sense?", "Dario Amodei (00:41:42 - 00:43:35):", "It is interesting. It might be worth looking back at the actual text of that post . I don't remember it exactly but it's still up on the Internet. It says something like, we're choosing not to release the weights because of concerns about misuse. But it also said, this is an experiment. We're not sure if this is necessary or the right thing to do at this time, but we'd like to establish a norm of thinking carefully about these things. You could think of it a little like the Asilomar conference in the 1970s where they were just figuring out recombinant DNA. It was not necessarily the case that someone could do something really bad with recombinant DNA. It's just the possibilities were starting to become clear. Those words, at least, were the right attitude.", "Now I think there's a separate thing that people don't just judge the post, they judge the organization. Is this an organization that produces a lot of hype or that has credibility or something like that? And so that had some effect on it. I guess you could also ask, is it inevitable that you can't get across any message more complicated than this thing right here is dangerous.", "You can argue about those but I think the basic thing that was in my head and the head of others who were involved in that, and what is evident in the post is, we actually don't know. We have pretty wide error bars on what's dangerous and what's not so we want to establish a norm of being careful.", "By the way we have enormously more evidence now. We've seen enormously more of these groks now and so we're well calibrated but there's still uncertainty. In all these statements I've said, in two or three years we might be there. There's a substantial risk of it and we don't want to take that risk. But I wouldn't say it's 100%. It could be 50-50.", "(00:43:35) - Cybersecurity", "Dwarkesh Patel (00:43:35 - 00:43:49):", "Okay, let's talk about cybersecurity, which in addition to bio risk is another thing Anthropic has been emphasizing. How have you avoided the cloud microarchitecture from leaking? Because, as you know, your competitors have been less successful at this kind of security.", "Dario Amodei (00:43:49 - 00:45:10):", "Can't comment on anyone else's security, don't know what's going on in there. A thing that we have done is, there are these architectural innovations that make training more efficient. We call them compute multipliers because they're the equivalent of having more compute.", "I don't want to say too much about our compute multipliers because it could allow an adversary to counteract our measures but we limit the number of people who are aware of a given compute multiplier to those who need to know about it.", "So there's a very small number of people who could leak all of these secrets. There's a larger number of people who could leak one of them. But this is the standard compartmentalization strategy that's used in the intelligence community or resistance cells or whatever. Over the last few months we've implemented these measures. I don't want to jinx anything by saying, oh, this could never happen to us but I think it would be harder for it to happen. I don't want to go into any more detail.", "By the way I'd encourage all the other companies to do this as well. As much as competitors architecture’s leaking is narrowly helpful to Anthropic, it's not good for anyone in the long run. Security around this stuff is really important.", "Dwarkesh Patel (00:45:10 - 00:45:17):", "Could you, with your current security, prevent a dedicated state level actor from getting the Claude 2 weights?", "Dario Amodei (00:45:17 - 00:46:28):", "It depends how dedicated. Our head of security, who used to work on security for Chrome, which is a very widely used and attacked application, he likes to think about it in terms of — how much would it cost to attack Anthropic successfully? Again, I don't want to go into super detail of how much I think it will cost to attack and it's just inviting people. One of our goals is that it costs more to attack Anthropic than it costs to just train your own model. It doesn't guarantee things because, of course you need the talent as well so you might still, but attacks have risks, the diplomatic costs, and they use up the very sparse resources that nation state actors might have in order to do the attacks.", "We're not there yet by the way. But I think we are at a very high standard of security compared to the size of company that we are. If you look at security for most 150 person companies there's just no comparison. But could we resist if it was a state actor's top priority to steal our model weights? No. They would succeed.", "Dwarkesh Patel (00:46:29 - 00:46:59):", "How long does that stay true? Because at some point the value keeps increasing and increasing. And another part of this question is what kind of a secret is how to train Claude 3 or Claude 2?", "For example, with nuclear weapons we had lots of spies. You just take a blueprint of the implosion device across and that's what you need. Is it more tacit here like the thing you were talking about with biology? You need to know how these reagents work or is it just like you got the blueprint, you got the microarchitecture and the hyperparameters?", "Dario Amodei (00:46:59 - 00:47:19):", "There are some things that are like a one line equation and there are other things that are more complicated. I think compartmentalization is the best way to do it. Just limit the number of people who know about something. If you're a 1000 person company and everyone knows every secret, one, I guarantee you have a leaker and two, I guarantee you have a spy.", "(00:47:19) - Alignment & mechanistic interpretability", "Dwarkesh Patel (00:47:19 - 00:47:45):", "Okay, let's talk about alignment and let's talk about mechanistic interpretability, which is the branch you guys specialize in. While you're answering this question, you might want to explain what mechanistic interpretability is.", "The broader question is mechanistically, what is alignment? Is it that you're locking in the model into a benevolent character? Are you disabling deceptive circuits and procedures? What concretely is happening when you align a model?", "Dario Amodei (00:47:45 - 00:48:29):", "As with most things, when we actually train a model to be aligned, we don't know what happens inside the model. There are different ways of training it to be aligned but we don't really know what happens. All the current methods that involve some kind of fine tuning of course have the property that the underlying knowledge and abilities that we might be worried about don't disappear. The model is just taught not to output them. I don't know if that's a fatal flaw or if that's just the way things have to be. I don't know what's going on inside mechanistically and I think that's the whole point of mechanistic interpretability. To really understand what's going on inside the models at the level of individual circuits.", "Dwarkesh Patel (00:48:29 - 00:48:40)", "Eventually when it's solved, what does the solution look like? What is the case where if you’re Claude 4, you do the mechanistic interpretability thing and you're like, I'm satisfied, it's aligned. What is it that you've seen?", "Dario Amodei (00:48:40 - 00:51:53)", "We don't know enough to know that yet. I can give you a sketch for what the process looks like as opposed to what the final result looks like. Verifiability is a lot of the challenge here. We have all these methods that purport to align AI systems and do succeed at doing so for today's tasks.", "But then the question is always if you had a more powerful model or if you had a model in a different situation, would it be aligned? This problem would be much easier if you had an oracle that could just scan a model and say okay, I know this model is aligned, I know what it'll do in every situation.", "I think the closest thing we have to that is something like mechanistic interpretability. It's not anywhere near up to the task yet. But I guess I would say I think of it as almost like an extended training set and an extended test set. Everything we're doing, all the alignment methods we're doing are the training set. You can run tests in them, but will it really work out a distribution? Will it really work in another situation?", "Mechanistic interpretability is the only thing that even in principle is the thing where it's more like an X-ray of the model than modification of the model. It's more like an assessment than an intervention. Somehow we need to get into a dynamic where we have an extended test set, an extended training set, which is all these alignment methods, and an extended test set which is kind of like you X-ray the model and say, okay, what worked and what didn't? In a way that goes beyond just the empirical test that you've run, where you're saying, what is the model going to do in these situations? What is within its capabilities to do instead of, what did it do phenomenologically?", "And of course we have to be careful about that. One of the things I think is very important is we should never train for interpretability because that's taking away that advantage. You even have the problem similar to validation versus test set, where if you look at the X-ray too many times, you can interfere. We should worry about that, but that's a much weaker process, it's not automated optimization. We should just make sure, as with validation and test sets, that we don't look at the validation set too many times before running the test set. But again, that's manual pressure rather than automated pressure.", "So some solution where we have some dynamic between the training and test set where we're trying things out and we really figure out if they work via a way of testing them, that the model isn't optimizing against, some orthogonal way.", "I think we're never going to have a guarantee, but some process where we do those things together. Some way to put extended training for alignment ability with extended testing for alignment ability together in a way that actually works. And not in a stupid way, there's lots of stupid ways to do this where you fool yourself.", "Dwarkesh Patel (00:51:53 - 00:52:37)", "I still don't feel like I understand the intuition for why you think this is likely to work or this is promising to pursue. Let me ask the question in a more specific way, and excuse the tortured analogy.", "If you're an economist and you want to understand the economy, you send a whole bunch of microeconomists out there. One of them studies how the restaurant business works. One of them studies how the tourism business works, one of them studies how the baking business works. And at the end, they all come together and you still don't know whether there's going to be a recession in five years or not.", "Why is this not like that? Where you have an understanding of how induction heads work in a two layer transformer, we understand modular arithmetic. How does this add up to — Does this model want to kill us? What does this model fundamentally want?", "Dario Amodei (00:52:37 - 00:54:55)", "A few things on that. That's the right set of questions to ask. I think what we're hoping for in the end is not that we'll understand every detail, but again, I would give the X-ray or the MRI analogy. We can be in a position where we can look at the broad features of the model and say, is this a model whose internal state and plans are very different from what it externally represents itself to do? Is this a model where we're uncomfortable that far too much of its computational power is devoted to doing what look like fairly destructive and manipulative things?", "We don't know for sure whether that's possible, but at least some positive signs that it might be possible. Again, the model is not intentionally hiding from you, it might turn out that the training process hides it from you. I can think of cases where if the model is really super intelligent, it thinks in a way so that it affects its own cognition. We should think about that, we should consider everything. I suspect that it may roughly work to think of the model as if it's trained in the normal way, just getting to above human level. It may be a reasonable assumption, you should check, that the internal structure of the model is not intentionally optimizing against us.", "I'd give an analogy to humans. It's actually possible to look at an MRI of someone and predict above random chance whether they're a psychopath. There was actually a story a few years back about a neuroscientist who was studying this, and then he looked at his own scan and discovered that he was a psychopath and then everyone in his life was like — No, this is obvious. You're a complete asshole. You must be a psychopath. And he was totally unaware of this.", "The basic idea that there can be these macro features, psychopath is probably a good analogy for it, this is what we would be afraid of, a model that's charming on the surface, very goal oriented, and very dark on the inside. On the surface, their behavior might look like the behavior of someone else, but their goals are very different.", "Dwarkesh Patel (00:54:55 - 00:55:25)", "A question somebody might have is, you're trying to empirically estimate if these activations are suspicious but is this something we can afford to be empirical about? Or do we need a very good first principal theoretical reason to think — No, it's not just that these MRIs of the model correlate with being bad. We need just some deep rooted math proof that this is aligned.", "Dario Amodei (00:55:25 - 00:56:39)", "It depends what you mean by empirical. A better term would be phenomenological. I don't think we should be purely phenomenological in like, here are some brain scans of really dangerous models and here are some other brain scans. The whole idea of mechanistic interpretability is to look at the underlying principles and circuits.", "But I guess the way I'd think about it is like, on one hand, I've actually always been a fan of studying these circuits at the lowest level of detail that we possibly can. And the reason for that is that's kind of how you build up knowledge. Even if you're ultimately aiming for there's too many of these features, it's too complicated. At the end of the day, we're trying to build something broad and we're trying to build some broad understanding. I think the way you build that up is by trying to make a lot of these very specific discoveries. You have to understand the building blocks and then you have to figure out how to use that to draw these broad conclusions even if you're not going to figure out everything.", "You should probably talk to Chris Olah , who would have much more detail. He controls the interpretability agenda. He's the one who decides what to do on interpretability. This is my high level thinking about it, which is not going to be as good as his.", "Dwarkesh Patel (00:56:39 - 00:56:46)", "Does the bull case on Anthropic rely on the fact that mechanistic interpretability is helpful for capabilities?", "Dario Amodei (00:56:46 - 00:57:43)", "I don't think so at all. I think in principle it's possible that mechanistic interpretability could be helpful with capabilities. We might, for various reasons, not choose to talk about it if that were the case.", "That wasn't something that I or any of us thought of at the time of Anthropic’s founding. We thought of ourselves as people who are good at scaling models and good at doing safety on top of those models. We think that we have a very high talent density of folks who are good at that. My view has always been talent density beats talent mass. That's more of our bullcase. Talent density beats talent mass.", "I don't think it depends on some particular thing. Others are starting to do mechanistic interpretability now, and I'm very glad that they are. A part of our theory of change is paradoxically to make other organizations more like us.", "(00:57:43) - Does alignment research require scale?", "Dwarkesh Patel 00:57:43 - 00:57:58)", "I'm sure talent density is important but another thing Anthropic has emphasized is that you need to have frontier models in order to do safety research. And of course, actually be a company as well.", "Somebody might guess that the current frontier models, GPT-4, Claude 2 cost one hundred million dollars or something like that…", "Dario Amodei (00:57:58 - 00:58:02)", "That general order of magnitude in very broad terms is not wrong.", "Dwarkesh Patel (00:58:02 - 00:58:19):", "But two to three years from now, the kinds of things you're talking about, we're talking more and more orders of magnitude to keep up with that. If it's the case that safety requires us to be on the frontier, what is a case in which Anthropic is competing with these leviathans to stay on that same scale?", "Dario Amodei (00:58:19 - 01:01:37):", "It's a situation with a lot of trade offs. It's not easy. Maybe I'll just answer the questions one by one.", "To go back to why is safety so tied to scale? Some people don't think it is. But if I just look at what have been the areas where safety methods have been put into practice or worked for something, for anything, even if we don't think they'll work in general.", "I go back to thinking of all the ideas, something like debate and amplification. Back in 2018 when we wrote papers about those at OpenAI, it was like, human feedback isn't quite going to work, but debate and amplification will take us beyond that. But then if you actually look at the attempts to do debates, we're really limited by the quality of the model. For two models to have a debate that is coherent enough that a human can judge it so that the training process can actually work, you need models that are at or maybe even beyond on some topics the current frontier. You can come up with the method, you can come up with the idea without being on the frontier but for me, that's a very small fraction of what needs to be done. It's very easy to come up with these methods. It's very easy to come up with, oh, the problem is X, maybe a solution is Y.", "I really want to know whether things work in practice, even for the systems we have today, and I want to know what kinds of things go wrong with them. I just feel like you discover ten new ideas and ten new ways that things are going to go wrong by trying these in practice. I think that empirical learning is just not as widely understood as it should be.", "I would say the same thing about methods like constitutional AI, and some people say, oh, it doesn't matter. We know this method doesn't work, it won't work for pure alignment. I neither agree nor disagree with that. I think that's just kind of overconfident. The way we discover new things and understand the structure of what's going to work and what's not is by playing around with things. Not that we should just blindly say, oh, this worked here, and so it'll work there. But you really start to understand the patterns, like with the scaling laws.", "Even mechanistic interpretability, which might be the one area I see where a lot of progress has been made without the frontier models, we're seeing in the work that OpenAI put out a couple months ago, that using very powerful models to help you auto interpret the weak models. Again, that's not everything you can do in interpretability, but that's a big component of it and we found it useful too.", "So you see this phenomenon over and over again where the scaling and the safety are these two snakes that are coiled with each other, always even more than you think. Even with interpretability, three years ago, I didn't think that this would be as true of interpretability, but somehow it manages to be true. Why? Because intelligence is useful. It's useful for a number of tasks. One of the tasks it's useful for is figuring out how to judge and evaluate other intelligence and maybe someday even for doing the alignment research itself.", "Dwarkesh Patel (01:01:37 - 01:01:44):", "Given all that's true, what does that imply for Anthropic when in two to three years, these leviathans are doing like $10 billion training runs?", "Dario Amodei (01:01:44 - 01:03:36):", "Choice one is if we can't, or if it costs too much to stay on the frontier, then we shouldn't do it and we won't work with the most advanced models, we'll see what we can get with models that are not quite as advanced. You can get some non zero value there but I'm skeptical that the value is all that high or the learning can be fast enough to really be in favor of the task.", "The second option is you just find a way. You just accept the trade offs. And the trade offs are more positive than they appear because of a phenomenon that I've called Race to the Top. I could go into that later, but let me put that aside for now.", "And the third phenomenon is that as things get to that scale, it may coincide with starting to get into some non trivial probability of very serious danger. I think it's going to come first from misuse, the biorisk stuff that I talked about. I don't think we have the level of autonomy yet to worry about some of the alignment stuff happening in two years, but it might not be very far behind that at all. That may lead to unilateral or multilateral or government enforced decisions not to scale as fast as we could, which we support. That may end up being the right thing to do. I hope things go in that direction, and then we don't have this hard trade off between we're not in the frontier and can't quite do the research as well as we want or influence other orgs as well as we want, or versus we're on the frontier and have to accept the trade-offs which are net positive, but have a lot in both directions.", "(01:05:30) - Misuse vs misalignment", "Dwarkesh Patel (01:03:36 - 01:03:48):", "On the misuse versus misalignment, those are both problems as you mentioned but in the long scheme of things, say 30 years down the line, which do you think will be considered a bigger problem?", "Dario Amodei (01:03:48 - 01:04:19):", "I think it's going to be much less than 30 years. I'm worried about both. If you have a model that could in theory, take over the world on its own, if you were able to control that model, then it follows pretty simply that if a model was following the wishes of some small subset of people and not others, then those people could use it to take over the world on their behalf. The very premise of misalignment means that we should be worried about misuse as well, with similar levels of consequences.", "Dwarkesh Patel (01:04:20 - 01:04:40):", "But some people who might be more doomery than you would say — you're already working towards the optimistic scenario there because you've at least figured out how to align the model with the bad guys. Now you just need to make sure that it's aligned with the good guys instead.", "Why do you think that you could get to the point where it's aligned with the bad guys? You haven't already solved this.", "Dario Amodei (01:04:40 - 01:05:41):", "I guess if you had the view that alignment is completely unsolvable, then you'd be like, well, we're dead anyway so I don't want to worry about misuse. That's not my position at all.", "But also you should think in terms of what's a plan that would actually succeed that would make things good. Any plan that actually succeeds, regardless of how hard misalignment is to solve, is going to need to solve misuse as well as misalignment.", "As the AI models get better faster and faster, they're going to create a big problem around the balance of power between countries. They're going to create a big problem around, is it possible for a single individual to do something bad that it's hard for everyone else to stop? Any actual solution that leads to a good future needs to solve those problems as well. If your perspective is, we're screwed because we can't solve the first problem, so don't worry about problems two and three, that's not really a statement. You should worry about problems two and three. They're in our path no matter what.", "Dwarkesh Patel (01:05:42 - 01:05:45):", "Yeah. In the scenario we succeed we have to solve all of them.", "Dario Amodei (01:05:45 - 01:05:48):", "We should be planning for success not for failure.", "Dwarkesh Patel (01:05:48 - 01:05:57):", "If misuse doesn't happen and the right people have the superhuman models, what does that look like? Who are the right people? Who is actually controlling the model five years from now?", "Dario Amodei (01:05:57 - 01:06:51):", "My view is that these things are powerful enough that I think it's going to involve substantial involvement of some kind of government or assembly of government bodies. There are very naive versions of this. I don't think we should just hand the model over to the UN or whoever happens to be in office at a given time. I could see that going poorly. But it's too powerful. There needs to be some kind of legitimate process for managing this technology, which includes the role of the people building it, includes the role of democratically elected authorities, includes the role of all the individuals who will be affected by it. At the end of the day, there needs to be some politically legitimate process.", "Dwarkesh Patel (01:06:51 - 01:07:00):", "But what does that look like? If it's not the case that you just hand it to whoever the President is at the time, what does the body look like?", "Dario Amodei (01:07:00 - 01:07:31):", "It's really hard to know these things ahead of time. People love to propose these broad plans and say, oh, this is the way we should do it. The honest fact is that we're figuring this out as we go along. I think we should try things and experiment with them with less powerful versions of the technology. We need to figure this out in time. But also it's not really the kind of thing you can know in advance.", "Dwarkesh Patel (01:07:31 - 01:07:41):", "The long term benefit trust that you have. How would that interface with this body? Is that the body itself?", "Dario Amodei (01:07:41 - 01:08:14):", "I think that the long term benefit trust is a much narrower thing. This is something that makes decisions for Anthropic. This is basically a body. It was described in a recent Vox article . We'll be saying more about it later this year. But it's basically a body that over time gains the ability to appoint the majority of the board seats of Anthropic. It's a mixture of experts in AI alignment, national security, and philanthropy in general.", "Dwarkesh Patel (01:08:14 - 01:08:22):", "If Anthropic has AGI and if control of Anthropic is handed to them, doesn't that imply that control of AGI itself is handed to them?", "Dario Amodei (01:08:22 - 01:09:06):", "That doesn't imply that Anthropic or any other entity should be the entity that makes decisions about AGI on behalf of humanity. I would think of those as different things. If Anthropic does play a broad role, then you'd want to widen that body to a whole bunch of different people from around the world. Or maybe you construe this as very narrow, and then there's some broad committee somewhere that manages all the AGIs of all the companies on behalf of anyone.", "I don't know. I think my view is you shouldn't be overly constructive and utopian. We're dealing with a new problem here. We need to start thinking now about what are the governmental bodies and structures that could deal with it.", "(01:09:06) - What if AI goes well?", "Dwarkesh Patel (01:09:06 - 01:09:26):", "Okay, so let's forget about governance. Let's just talk about what this going well looks like.", "Obviously, there are things we can all agree on: cure all the diseases, solve all the fraud – things all humans would say, 'I'm down for that.' But now it's 2030. You've solved all the real problems that everybody can agree on. What happens next? What are we doing with a superhuman God?", "Dario Amodei (01:09:26 - 01:10:24):", "I actually want to disagree with the framing of something like this. I get nervous when someone says, what are you going to do with a superhuman AI? We've learned a lot of things over the last 150 years about markets and democracy, and each person can define for themselves what the best way for them to have the human experience is, and that societies work out norms and what they value just in this very complex and decentralized way.", "If you have these safety problems that can be a reason why there needs to be a certain amount of centralized control from the government until we've solved these problems.", "But as a matter of — we've solved all the problems, now how do we make things good? I think most people, most groups, most ideologies that started with, let's sit down and think over what the definition of the good life is, have led to disaster.", "Dwarkesh Patel (01:10:24 - 01:10:34):", "But this vision you have of a sort of tolerant, liberal, democracy, market oriented system with AGI. Each person has their own AGI? What does that mean?", "Dario Amodei (01:10:34 - 01:11:05):", "I don't know. I don't know what it looks like. I guess what I'm saying is we need to solve the important safety problems and the important externalities. Those could be just narrowly about alignment, there could be a bunch of economic issues that are super complicated and that we can't solve. Subject to that, we should think about what's worked in the past. And in general, unitary visions for what it means to live a good life have not worked out well at all.", "(01:11:05) - China", "Dwarkesh Patel (01:11:05 - 01:11:30):", "On the opposite end of things going well or good actors having control of AI. We might want to touch on China as a potential actor in the space.", "First of all, being at Baidu and seeing progress in AI happening generally, why do you think the Chinese have underperformed? Baidu had a scaling laws group many years back. Or is the premise wrong and I'm just not aware of the progress that's happening there?", "Dario Amodei (01:11:30 - 01:12:26):", "The scaling laws group, that was an offshoot of the stuff we did with speech so there were still some people there but that was a mostly Americanized lab. I was there for a year. That was my first foray into deep learning. It was led by Andrew Ng. I never went to China. It was like a US lab. That was somewhat disconnected, although it was an attempt by a Chinese entity to kind of get into the game.", "Since then I think they've maybe been very commercially focused and not as focused on these fundamental research side of things around scaling laws. I do think because of all the excitement with the release of ChatGPT in November or so, that's been a starting gun for them as well. And they're trying very aggressively to catch up now.", "I think the US is substantially ahead but they're trying very hard to catch up now.", "Dwarkesh Patel (01:12:26 - 01:12:32):", "How do you think China thinks about AGI? Are they thinking about safety and misuse or not?", "Dario Amodei (01:12:32 - 01:13:21):", "I don't really have a sense. One concern I would have are people saying things like, China isn't going to develop an AI because they like stability or they're going to have all these restrictions to make sure things are in line with what the CCP wants. That might be true in the short term and for consumer products. My worry is that if the basic incentives are about national security and power, that's going to become clear sooner or later. If they see this as a source of national power, they're going to at least try to do what's most effective, and that could lead them in the direction of AGI.", "Dwarkesh Patel (01:13:10 - 01:13:21):", "Assume they just get your blueprints or your code base or something, is it possible for them to spin up their own lab that is competitive at the frontier with the leading American companies?", "Dario Amodei (01:13:21 - 01:14:12):", "I don't know about fast but I'm concerned about this. This is one reason why we're focusing so hard on cybersecurity. We've worked with our cloud providers. We had this blog post out about security where we said we have a two key system for access to the model weights. We have other measures that we put in place or are thinking of putting in place that we haven't announced. We don't want an adversary to know about them, but we're happy to talk about them broadly.", "By the way all this stuff we're doing is not sufficient yet for a super determined state level actor at all. I think it will defend against most attacks and against a state level actor who's less determined. But there's a lot more we need to do, and some of it may require new research on how to do security.", "Dwarkesh Patel (01:14:12 - 01:14:35):", "Let's talk about what it would take at that point. We're at Anthropic offices and it's got good security. We had to get badges and everything to come in here. But what does the eventual version of this building or bunker or whatever where the AGI is built look like? Is it a building in the middle of San Francisco or are you out in the middle of Nevada or Arizona? What is a point in which you're Los Alamos-ing it?", "Dario Amodei (01:14:36 - 01:15:11):", "At one point there was a running joke somewhere that the way building AGI would look like is, there would be a data center next to a nuclear power plant next to a bunker, and that we'd all kind of live in the bunker and everything would be local so it wouldn't get on the Internet.", "If we take the rate at which all this is going to happen seriously, which I can't be sure of, then it does make me think that something like that might happen, but maybe not something quite as cartoonish.", "(01:15:11) - How to think about alignment", "Dwarkesh Patel (01:15:11 - 01:15:21)", "What is the timescale on which you think alignment is solvable? If these models are getting to human level in some things in two to three years, what is the point at which they're aligned?", "Dario Amodei (01:15:21 - 01:17:34)", "This is a really difficult question because I actually think often people are thinking about alignment in the wrong way. There's a general feeling that it's like models are misaligned or there's like an alignment problem to solve. Like, someday we'll crack the Riemann hypothesis. I don't quite think it's like that. Not in a way that's worse or better. It might be just as bad or just as unpredictable.", "When I think of why am I scared, there’s a few things I think of — One is, the thing that's really hard to argue with is: There will be powerful models. They will be agentic. We're getting towards them. If such a model wanted to wreak havoc and destroy humanity or whatever, we have basically no ability to stop it. If that's not true, at some point we will reach the point where it's true as we scale the models. So that definitely seems to be the case.", "A second thing that seems to be the case is that we seem to be bad at controlling the models. Not in any particular way, but they’re just statistical systems and you can ask them a million things and they can say a million things and reply. And you might not have thought of a millionth and one thing that does something crazy. Or when you train them, you train them in this very abstract way and you might not understand all the consequences of what they do in response to that. The best example we've seen of that is Bing and Sydney. I don't know how they trained that model. I don't know what they did to make it do all this weird stuff like threaten people and have this weird obsessive personality. But what it shows is that we can get something very different from and maybe opposite to what we intended.", "I actually think fact number one and fact number two are enough to be really worried. You don't need all this detailed stuff about convergent instrumental goals or analogies to evolution. One and two for me are pretty motivated. Okay, this thing's going to be powerful. It could destroy us. And all the ones we've built so far are at pretty decent risk of doing some random shit we don't understand.", "Dwarkesh Patel (01:17:34 - 01:17:57):", "If you say that we're going to get something with bioweapons or something that could be dangerous in two to three years, does the research agenda you have of mechanistic interpretability, constitutional AI and other RLHF stuff meaningfully contribute to preventing that in two to three years?", "Dario Amodei (01:17:57 - 01:21:04):", "People talk about doom by default or alignment by default. I think it might be kind of statistical. With the current models, you might get Bing or Sydney or you might get Claude. If we take our current understanding and move that to very powerful models, you might just be in this world where you make something and depending on the details, maybe it's totally fine. Not really alignment by default, but just depends on a lot of the details. If you're very careful about all those details and you know what you're doing, you're getting it right but we have a high susceptibility to, you mess something up in a way that you didn't really understand was connected to something else. Actually, instead of making all the humans happy, it wants to turn them into pumpkins, just some weird shit. Because the models are so powerful, they're like these giants that are standing in a landscape and if they start to move their arms around randomly, they could just break everything.", "I'm starting it with that kind of framing because I don't think we're aligned by default, I don't think we're doomed by default and have some problem we need to solve. It has some kind of different character.", "Now what I do think is that hopefully within a timescale of two to three years we get better at diagnosing when the models are good and when they're bad. We get better at increasing our repertoire of methods to train the model that they're less likely to do bad things and more likely to do good things in a way that isn't just relevant to the current models but scales. And we can help develop that with interpretability as the test set. I don't think of it as, oh, man, we tried RLHF, it didn't work. We tried Constitutional AI, it didn't work. We tried this other thing, it didn't work. We tried mechanistic interpretability. Now we're going to try something else. I think this frame of like, man, we haven't cracked the problem yet, we haven't solved the Riemann hypothesis isn't quite right.", "Already with today's systems, we are not very good at controlling them and the consequences of that could be very bad. We just need to get more ways of increasing the likelihood that we can control our models and understand what's going on in them. And we have some of them so far. They aren't that good yet. But I don't think of this as binary. It works or it does not work. We're going to develop more. And I do think that over the next two to three years we're going to start eating that probability mass of ways things can go wrong. It's like in the core safety views paper , there's a probability mass of how hard the problem is.", "I feel like that way of stating it isn't really even quite right because I don't feel like it's the Riemann hypothesis to solve. It's almost like right now if I try and juggle five balls or something, I can juggle three balls, I actually can, but I can't juggle five balls at all. You have to practice a lot to do that. If I were to do that, I would almost certainly drop them. And then just over time, you just get better at the task of controlling the balls.", "Dwarkesh Patel (01:21:04 - 01:21:29):", "On that post in particular, what is your personal probability distribution? For the audience, the three possibilities are: One, it is trivial to align these models with RLHF++. Two, it is a difficult problem, but one that a big company could solve. Three, something that is basically impossible for human civilization currently to solve. If I'm capturing those three, What is your probability distribution over those three?", "Dario Amodei (01:21:29 - 01:21:44):", "I'm not super into questions like what's your probability distribution of X? I think all of those have enough likelihood that they should be considered seriously. The question I'm much more interested in is, what could we learn that shifts probability mass between them?", "Dwarkesh Patel (01:21:44 - 01:21:45 ):", "What is the answer to that?", "Dario Amodei (01:21:45 - 01:23:09):", "I think that one of the things mechanistic interpretability is going to do more than necessarily solve problems is, it's going to tell us what's going on when we try to align models. It's basically going to teach us about this. One way I could imagine concluding that things are very difficult is if mechanistic interpretability sort of shows us that problems tend to get moved around instead of being stamped out or that, you get rid of one problem, you create another one. Or it might inspire us or give us insight into why problems are persistent or hard to eradicate or crop up.", "For me to really believe some of these stories about, oh, there's always this convergent goal in this particular direction. I think the abstract story is not uncompelling, but I don't find it really compelling either, nor do I find it necessary to motivate all the safety work.", "But the kind of thing that would really be like, oh man, we can't solve this is like, we see it happening inside the X-ray. I think right now there's way too many assumptions, there's way too much overconfidence about how all this is going to go. I have a substantial probability mass on — this all goes wrong, it's a complete disaster, but in a completely different way than anyone had anticipated it would.", "Dwarkesh Patel (01:23:09 - 01:23:25):", "It would be beside the point to ask how it could go different than anyone anticipated.", "On this, in particular, what information would be relevant? How much would the difficulty of aligning Claude 3 and the next generation of models be? Is that a big piece of information?", "Dario Amodei (01:23:25 - 01:24:07):", "I think the people who are most worried are predicting that all the subhuman AI models are going to be alignable, They're going to seem aligned. They're going to deceive us in some way. It certainly gives us some information but I am more interested in what mechanistic interpretability can tell us because, again, you see this X ray, it would be too strong to say it doesn't lie, but at least in the current systems, it doesn't feel like it's optimizing against us. There are exotic ways that it could. I don't think anything is a safe bet here, but it's the closest we're going to get to something that isn't actively optimizing against us.", "Dwarkesh Patel (01:24:07 - 01:24:31):", "Let's talk about the specific methods other than mechanistic interpretability that you guys are researching. When we talk about RLHF or Constitution AI, if you had to put it in terms of human psychology, what is the change that is happening? Are we creating new drives, new goals, new thoughts? How is the model changing in terms of psychology?", "Dario Amodei (01:24:31 - 01:25:18):", "All those terms are inadequate for describing what's happening. It's not clear how useful they are as abstractions for humans either. I think we don't have the language to describe what's going on. And again, I'd love to have the X-ray. I'd love to look inside and say and kind of actually know what we're talking about instead of basically making up words, which is what I do what you're doing in asking this question. We should just be honest. We really have very little idea what we're talking about. It would be great to say, well, what we actually mean by that is this circuit within here turns on, and after we've trained the model, then this circuit is no longer operative or weaker. It's going to take a lot of work to be able to do that.", "Dwarkesh Patel (01:25:18 - 01:25:41):", "Model organisms, which you hinted at before when you said we're doing these evaluations to see if they're capable of doing dangerous things now and currently not, how worried are you about a lab leak scenario? Where in fine tuning it or in trying to get these models to elicit dangerous behaviors, make bioweapons or something, you leak somehow and it actually makes the bioweapons instead of telling you it can make the bioweapons.", "Dario Amodei (01:25:41 - 01:26:28):", "It's not that much of a concern with today's passive models. If we were to fine tune a model, we would do it privately and we work with the experts and so the leak would be like, suppose the model got open sourced or something. For now, it's mostly a security issue.", "In terms of models truly being dangerous, we do have to worry that if we make a truly powerful model and we're trying to see what makes it dangerous or safe, then there could be more of a one shot thing where there’s some risk that the model takes over. The main way to control that is to make sure that the capabilities of the model that we test are not such that they're capable of doing this.", "Dwarkesh Patel (01:26:28 - 01:26:32):", "At what point would the capabilities be so high where you say, I don't even want to test this?", "Dario Amodei (01:26:32 - 01:26:36):", "Well, there's different things. There's capability testing..", "Dwarkesh Patel (01:26:36 - 01:26:40):", "But that itself could lead to... If you're testing replicate, what if it actually does?", "Dario Amodei (01:26:40 - 01:27:16):", "Sure. But I think what you want to do is you want to extrapolate. We've talked with Arc about this. You have factors of two of compute, where you're like, can the model do something like open up an account on AWS and make some money for itself? Some of the things that are obvious prerequisites to complete survival in the wild. Just set those thresholds very well below and then as you proceed upward from there, do kind of more and more rigorous tests and be more and more careful about what it is you're doing.", "Dwarkesh Patel (01:27:16 - 01:27:29):", "On Constitution AI, who decides what the constitution for the next generation of models or a potentially superhuman model is? How is that actually written?", "Dario Amodei (01:27:29 - 01:29:17):", "Initially to make the constitution, we just took some stuff that was broadly agreed on, like the UN declaration on Human Rights and some of the stuff from Apple's Terms of Service. Stuff that's consensus on what's acceptable to say or what basic things are able to be included.", "One, for future constitutions, we're looking into more participatory processes for making these. But beyond that, I don't think there should be one constitution for a model that everyone uses. The model’s constitution should be very simple. It should only have very basic facts that everyone would agree on. Then there should be a lot of ways that you can customize, including appending constitutions. And beyond that, we're developing new methods. I'm not imagining that this or this alone is the method that we'll use to train superhuman AI. Many of the parts of capability training may be different, and so it could look very different.", "There are levels above this. I'm pretty uncomfortable with: here's the AI's constitution, it's going to run the world. From just normal lessons from how societies work and how politics works, that strikes me as fanciful.", "Even after we've mitigated the safety issues, any good future, even if it has all these security issues that we need to solve, it somehow needs to end with something that's more decentralized and less like a godlike super. I just don't think that ends well.", "(01:29:18) - Manhattan Project", "Dwarkesh Patel (01:29:18 - 01:29:26):", "What scientists from the Manhattan Project do you respect most in terms of, they acted most ethically under the constraints they were given. Is there one that comes to mind?", "Dario Amodei (01:29:26 - 01:30:45):", "I don't know. There's a lot of answers you could give. I'm definitely a fan of Szilard for having kind of figured it out. He was then against the actual dropping of the bomb. I don't actually know the history well enough to have an opinion on whether the demonstration of the bomb could have ended the war. I mean that involves a bunch of facts about Imperial Japan that are complicated and that I'm not an expert on. But Szilard, he discovered this stuff early, he kept it secret, patented some of it and put it in the hands of the British Admiralty. He seemed to display the right kind of awareness as well as discovering stuff. It was when I read that book that when I wrote this big blob of compute doc and I only showed it to a few people and there were other docs that I showed to almost no one. I was a bit inspired by this.", "Again, we could all get self aggrandizing here. Like we don't know if it's actually going to be something on par with the Manhattan project. This could all be just Silicon Valley people building technology and just having delusions of grandeur. I don't know how it's going to turn out.", "Dwarkesh Patel (01:30:45 - 01:30:48):", "I mean, if the scaling stuff is true then it's bigger than the Manhattan Project.", "Dario Amodei (01:30:48 - 01:30:56):", "Yeah, it certainly could be bigger. I think we should always maintain this attitude that it's really easy to fool yourself.", "Dwarkesh Patel (01:30:57 - 01:31:05):", "If you're a physicist during World War II and you were asked by the government to contribute non replaceable research to the Manhattan Project, what do you think you would have said?", "Dario Amodei (01:31:05 - 01:31:21):", "Given you're in a war with the Nazis, I don't really see much choice but to do it if it's possible. You have to figure it's going to be done within ten years or so by someone.", "(01:31:31) - Is modern security good enough?", "Dwarkesh Patel (01:31:31 - 01:31:48):", "Regarding cybersecurity, what should we make of the fact that there's a whole bunch of tech companies which have ordinary tech company security policy and it's not obvious that they've been hacked publicly. Coinbase still has its bitcoin. As far as I know my Gmail hasn't been leaked.", "Should we take from that that current status quo tech company security practices are good enough for AGI or just simply that nobody has tried hard enough?", "Dario Amodei (01:31:48 - 01:34:07):", "It would be hard for me to speak to current tech company practices and of course there may be many attacks that we don't know about, where things are stolen and then silently used. I think an indication of it is when someone really cares basically cares about attacking someone, then often the attacks happen.", "Recently we saw that some fairly high officials of the US government had their email accounts hacked via Microsoft. Microsoft was providing the email accounts. Presumably that relayed information that was of great interest to foreign adversaries.", "It seems to me at least that the evidence is more consistent with, when something is really high enough value, then someone acts and it's stolen. And my worry is that of course with AGI we'll get to a world where the value is seen as incredibly high. It'll be like stealing nuclear missiles or something. You can't be too careful on this stuff.", "At every place that I've worked, I've pushed for cybersecurity to be better. One of my concerns about cybersecurity is, it's not something you can trumpet. A good dynamic with safety research is, you can get companies into a dynamic and I think we have, where you can get them to compete to do the best safety research and use it as a recruiting point of competition or something. We used to do this all the time with interpretability and then sooner or later other orgs started recognizing the defect and started working on interpretability, whether or not that was a priority to them before.", "But it's harder to do that with cybersecurity because a bunch of the stuff you have to do quietly. We did try to put out one post about it, but mostly you just see the results. A good norm would be people see these cybersecurity leaks from companies or leaks the model parameters or something and say they screwed up, that's bad. If I'm a safety person, I might not want to work there.", "Of course, as soon as I say that, we'll probably have a security breach tomorrow. But that's part of the game here, that's part of trying to make things safe.", "Dwarkesh Patel (01:34:07 - 01:34:20):", "I want to go back to the thing we're talking about earlier, where the ultimate level of cybersecurity required two to three years from now and whether it requires a bunker, are you actually expecting to be in a physical bunker in two to three years, or is that just a metaphor?", "Dario Amodei (01:34:21 - 01:35:19):", "That’s a metaphor. We’re still figuring it out. Something I would think about is the security of the data center, which may not be in the same physical location as us, but we've worked very hard to make sure it's in the United States. But securing the physical data centers and the GPUs. If someone was really determined, some of the really expensive attacks just involve going into the data center and just trying to steal the data directly or as it's flowing from a data center to us. These data centers are going to have to be built in a very special way. Given the way things are scaling up, we're anyway heading to a world where the networks of data centers cost as much as aircraft carriers. They're already going to be pretty unusual objects but in addition to being unusual in terms of their ability to link together and train gigantic, gigantic models, they're also going to have to be very secure.", "Dwarkesh Patel (01:35:19 - 01:35:32):", "Speaking of which, there's been rumors on the difficulty of procuring the power and the GPUs for the next generation of models. What has the process been like to secure the necessary components to do the next generation?", "Dario Amodei (01:35:32 - 01:36:03):", "That's something I can't go into great detail about. I will say, people are thinking of industrial scale data centers and people are not thinking at the scale that these models are going to go to very soon. Whenever you do something at a scale where it's never been done before, every single component, every single thing has to be done in a new way than it was before. And so you may run into problems with surprisingly simple components. Power is one that you mentioned.", "Dwarkesh Patel (01:36:03 - 01:36:06):", "And is this something that Anthropic has to handle, or can you just outsource it?", "Dario Amodei (01:36:05 - 01:36:09):", "For data centers, we work with cloud providers, for instance.", "(01:36:09) - Inefficiencies in training", "Dwarkesh Patel (01:36:09 - 01:36:46):", "What should we make about the fact that these models require so much training and the entire corpus of internet data in order to be subhuman?", "Whereas GPT-4, there's been estimates that it was like 10^25 Flops or something, you can take these numbers with a grain of salt, but there's reports that the human brain, from the time it is born to the time a human being is 20 years old, is on the order of 10^14 Flops to simulate all those interactions.", "We don't have to go into the particulars on those numbers, but should we be worried about how sample inefficient these models seem to be?", "Dario Amodei (01:36:46 - 01:39:01):", "That's one of the remaining mysteries. One way you could phrase it is that the models are maybe two to three orders of magnitude smaller than the human brain. If you compare it to the number of synapses, while at the same time being trained on three to four more orders of magnitude of data. If you compare the number of words a human sees as they're developing to age 18, I don't remember exactly, but I think it's in the hundreds of millions, whereas for the models, we're talking about the hundreds of billions to the trillions. So what explains this? There are these offsetting things where the models are smaller, they need a lot more data. They're still below human level.", "There's some way in which the analogy to the brain is not quite right or is breaking down or there's some missing factor. This is just like in physics, where we can't explain the Michelson-Morley experiment , or one of the other 19th century physics paradoxes. It's one thing we don't quite understand. Humans see so little data, and they still do fine.", "One theory on it, it could be that it's like our other modalities. How do we get 10^14 bits into the human brain? Most of it is these images, and maybe a lot of what's going on inside the human brain is, our mental workspace involves all these simulated images or something like that.", "But honestly, intellectually we have to admit that that's a weird thing that doesn't match up. And it's one reason I'm a bit skeptical of biological analogies. I thought in terms of them, like, five or six years ago, but now that we actually have these models in front of us as artifacts, it feels like almost all the evidence from that has been screened off by what we've seen. And what we've seen are models that are much smaller than the human brain and yet can do a lot of the things that humans can do, and yet, paradoxically, require a lot more data. Maybe we'll discover something that makes it all efficient, or maybe we'll understand why the discrepancy is present, but at the end of the day, I don't think it matters, right? If we keep scaling the way we are. I think what's more relevant at this point is just measuring the abilities of the model and seeing how far they are from humans, and they don't seem terribly far to me.", "Dwarkesh Patel (01:39:01 - 01:39:27):", "Does this scaling picture and the big blob of compute more generally, underemphasize the role that algorithmic progress has played. When you composed the big blob of compute, you're presumably talking about LSTMs at that point, the scaling on that would not have you at Claude 2 at this point.", "Are you underemphasizing the role that an improvement of the scale of Transformer could be having here, when you put it behind the label of scaling?", "Dario Amodei (01:39:27 - 01:42:02):", "This big blob of compute document, which I still have not made public, I probably should for historical reasons. I don't think it would tell anyone anything they don't know now. But when I wrote it, I actually said, look, there are seven factors and I wasn't like, these are all the factors but just let me give some sense of the kinds of things that matter and what don't. There could be nine, there could be five. But the things I said were — Number of parameters matters. Scale of the model matters. Compute matters. Quantity of data matters. Quality of data matters. Loss function matters. Are you doing RL? Are you doing next word prediction? If your loss function isn't rich or doesn't incentivize the right thing, you won't get anything. Those were the key four ones, which I think are the core of the hypothesis.", "But then I said three more things. One was symmetries, which is basically if your architecture doesn't take into account the right kinds of symmetries, it doesn't work or it's very inefficient. For example, convolutional neural networks take into account translational symmetry. LSTMs take into account time symmetry. But a weakness of LSTMs is that they can't attend over the whole context. So there's kind of this structural weakness. If a model isn't structurally capable of absorbing and managing things that happened in a far enough distant past, then it's like the compute doesn't flow. The spice doesn't flow. The blob has to be unencumbered. It's not going to work if you artificially close things off. And I think RNNs and LSTMs artificially close things off because they close you off to the distant past. Again, things need to flow freely. If they don't, it doesn't work.", "And then I added a couple things. One of them was conditioning, which is if the thing you're optimizing with is just really numerically bad, you're going to have trouble. And so this is why atom works better than normal STD.", "I'm forgetting what the 7th condition was, but it was similar to things like this, where if you set things up in a way that's set up to fail or that doesn't allow the compute to work in an uninhibited way, then it won't work. Transformers were kind of within that even though I can't remember if the transformer paper had been published, it was around the same time as I wrote that document. It might have been just before. It might have been just after.", "Dwarkesh Patel (01:42:02 - 01:42:17):", "From that view it sounds like the way to think about these algorithmic progresses is not as increasing the power of the blob of compute, but simply getting rid of the artificial hindrances that older architectures have.", "Dario Amodei (01:42:17 - 01:42:33):", "That's a little how I think about it. If you go back to Ilya's, the models want to learn, the compute wants to be free and it's being blocked in various ways where you don't understand that it's being blocked until you need to free it up.", "Dwarkesh Patel (01:42:33 - 01:42:49):", "On that point, though, do you think that another thing on the scale of a transformer is coming down the pike to enable the next great iteration?", "Dario Amodei (01:42:49- 01:43:27):", "I think it's possible. People have worked on things like trying to model very long time dependencies or there's various different ideas where I could see that we're missing an efficient way of representing or dealing with something. I think those inventions are possible.", "I guess my perspective would be, even if they don't happen, we're already on this very, very steep trajectory. Unless we're constantly trying to discover them, as are others, but things are already on such a fast trajectory, all that would do is speed up the trajectory even more, and probably not by that much because it's already going so fast.", "Dwarkesh Patel (01:43:27 - 01:43:35):", "Is having an embodied version of a model at all important in terms of getting either data or progress?", "Dario Amodei (01:43:35 - 01:44:02):", "I'd think of that less in terms of a new architecture and more in terms of a loss function like the data, the environments you're exposing yourself to end up being very different. That could be important for learning some skills, although data acquisition is hard and so things have gone through the language route and I would guess will continue to go through the language route even as more is possible in terms of embodiment.", "Dwarkesh Patel (01:44:02 - 01:44:06):", "And then the other possibilities you mentioned. RL, you can see it as...", "Dario Amodei (01:44:06 - 01:44:42):", "We kind of already do RL with RLHF. Is this alignment? Is this capabilities? I always think in terms of the two snakes, they're often hard to distinguish. We already kind of use RL on these language models but I think we've used RL less in terms of getting them to take actions and do things in the world but when you take actions over a long period of time and understand the consequences of those actions only later, then RL is a typical tool we have for that. So I would guess that in terms of models taking action in the world, that RL will become a thing with all the power and all the safety issues that come with it.", "Dwarkesh Patel (01:44:42 - 01:45:02):", "When you project out in the future, do you see the way in which these things will be integrated into productive supply chains? Do you see them talking with each other and criticizing each other and contributing to each other's output? Or is it just that one model one shots the answer or the work.", "Dario Amodei (01:45:02 - 01:45:53):", "Models will undertake extended tasks. That will have to be the case. We may want to limit that to some extent because it may make some of the safety problems easier but some of that will be required.", "In terms of our models talking to models or are they talking to humans? Again, this goes kind of out of the technical realm and into the sociocultural economic realm where my heuristic is always that it's very, very difficult to predict things. I feel like these scaling laws have been very predictable but then when you say like, when is there going to be a commercial explosion in these models? Or what's the form it's going to be? Or are the models going to do things instead of humans or pairing with humans? Certainly my track record on predicting these things is terrible but also looking around, I don't really see anyone whose track record is great.", "(01:45:53) - Anthropic’s Long Term Benefit Trust", "Dwarkesh Patel (01:45:53 - 01:46:11):", "You mentioned how fast progress is happening, but also the difficulties of integrating within the existing economy into the way things work. Do you think there will be enough time to actually have large revenues from AI products before the next model is just so much better or we're in a different landscape entirely?", "Dario Amodei (01:46:11 - 01:46:53):", "It depends what you mean by large. I think multiple companies are already in the 100 million to billion per year range. Will it get to the 100 billion or trillion range before? That stuff is just so hard to predict. And it's not even super well defined.", "Right now there are companies that are throwing a lot of money at generative AI as customers. That's the right thing for them to do, and they'll find uses for it, but it doesn't mean they're finding uses or the best uses from day one. Even money changing hands is not quite the same thing as economic value being created.", "Dwarkesh Patel (01:46:53 - 01:47:01):", "But surely you've thought about this from the perspective of Anthropic, where if these things are happening so fast, then it should be an insane valuation, right?", "Dario Amodei (01:47:01 - 01:47:49):", "Even us who have not been super focused on commercialization and more on safety, the graph goes up and it goes up relatively quickly. I can only imagine what's happening at the orgs where this is their singular focus. It's certainly happening fast but it's an exponential from the small base while the technology itself is moving fast.", "It's a race between how fast the technology is getting better and how fast it's integrated into the economy. And I think that's just a very unstable and turbulent process. Both things are going to happen fast but if you ask me exactly how it's going to play out, exactly what order things are going to happen, I don't know. And I'm skeptical of the ability to predict.", "Dwarkesh Patel (01:47:49 - 01:48:14):", "I'm curious. With regards to Anthropic specifically, you're a public benefit corporation and rightfully so, you want to make sure that this is an important technology. Obviously, the only thing you want to care about is not shareholder value.", "But how do you talk to investors who are putting in hundreds of millions, billions of dollars of money? How do you get them to put in this amount of money without the shareholder value being the main concern?", "Dario Amodei (01:48:14 - 01:49:18):", "I think the LTBT (Long Term Benefit Trust) is the right thing on this.  We're going to talk more about the LTBT, but some version of that has been in development since the beginning of Anthropic, even formally. Even as the body has changed, from the beginning, it was like, this body is going to exist and it's unusual.", "Every traditional investor who invests in Anthropic looks at this. Some of them are just like, whatever, you run your company how you want. Some of them are like, oh my god, this body of random people could move Anthropic in a direction that's totally contrary to shareholder value. Now there are legal limits on that, of course, but we have to have this conversation with every investor. And then it gets into a conversation of, well, what are the kinds of things that we might do that would be contrary to the interests of traditional investors. And just having those conversations has helped get everyone on the same page.", "Dwarkesh Patel (01:49:18 - 01:49:43):", "I want to talk about the fact that so many of the founders and the employees at Anthropic are physicists. We talked in the beginning about the scaling laws and how the power laws from physics are something you see here, but what are the actual approaches and ways of thinking from physics that seem to have carried over so well? Is that notion of effective theory super useful? What is going on here?", "Dario Amodei (01:49:43 - 01:50:18):", "Part of it is just that physicists learn things really fast. We have generally found that if we hire someone who is a Physics PhD or something, that they can learn ML and contribute just very quickly in most cases. And because several of our founders myself, Jared Kaplan, Sam McCandlish were physicists, we knew a lot of other physicists, and so we were able to hire them. And now there might be 30 or 40 of them here. ML is not still not yet a field that has an enormous amount of depth, and so they've been able to get up to speed very quickly.", "Dwarkesh Patel (01:50:18 - 01:50:41):", "Are you concerned that there's a lot of people who would have been doing physics or something, they would’ve gone into finance instead and since Anthropic exists, they have now been recruited to go into AI. You obviously care about AI safety, but maybe in the future they leave and they get funded to do their own thing. Is that a concern that you're bringing more people into the ecosystem here?", "Dario Amodei (01:50:41 - 01:51:18):", "There's a broad set of actions, like we're causing GPUs to exist. There's a lot of side effects that you can't currently control or that you just incur if you buy into the idea that you need to build frontier models. And that's one of them. A lot of them would have happened anyway. I mean, finance was a hot thing 20 years ago, so physicists were doing it. Now ML is a hot thing, and it's not like we've caused them to do it when they had no interest previously. But again, at the margin, you're bidding things up, and a lot of that would have happened anyway. Some of it wouldn't but it's all part of the calculus.", "(01:51:18) - Is Claude conscious?", "Dwarkesh Patel (01:51:18 - 01:51:21):", "Do you think that Claude has conscious experience? How likely do you think that is?", "Dario Amodei (01:51:21 - 01:52:18):", "This is another of these questions that just seems very unsettled and uncertain. One thing I'll tell you is I used to think that we didn't have to worry about this at all until models were operating in rich environments, like not necessarily embodied, but they needed to have a reward function and have a long lived experience. I still think that might be the case, but the more we've looked at these language models and particularly looked inside them to see things like induction heads, a lot of the cognitive machinery that you would need for active agents already seems present in the base language models. So I'm not quite as sure as I was before that we're missing enough of the things that you would need. I think today's models just probably aren't smart enough that we should worry about this too much but I'm not 100% sure about this, and I do think in a year or two, this might be a very real concern.", "Dwarkesh Patel (01:52:18 - 01:52:24):", "What would change if you found out that they are conscious? Are you worried that you're pushing the negative gradient to suffering?", "Dario Amodei (001:53:23 - 01:53:23):", "Conscious, again, is one of these words that I suspect will not end up having a well defined.. I suspect that's a spectrum. Let's say we discover that I should care about Claude’s experience as much as I should care about a dog or a monkey or something. I would be kind of worried.", "I don't know if their experience is positive or negative. Unsettlingly I also don't know I wouldn't know if any intervention that we made was more likely to make Claude have a positive versus negative experience versus not having one.", "If there's an area that is helpful with this, it's maybe mechanistic interpretability because I think of it as neuroscience for models. It's possible that we could shed some light on this. Although it's not a straightforward factual question. It depends what we mean and what we value.", "Dwarkesh Patel (01:53:23 - 01:53:47):", "We talked about this initially, but I want to get more specific. We talked initially about now that you're seeing these capabilities ramp up within the human spectrum, you think that the human spectrum is wider than we thought but more specifically, how is the way you think about human intelligence different. The way you're seeing these marginally useful abilities emerge? How does that change your picture of what intelligence is?", "Dario Amodei (01:53:47 - 01:55:46):", "For me, the big realization on what intelligence is came with the blob of compute thing. There might be all these separate modules. There might be all this complexity. Rich Sutton called it The Bitter Lesson . It has many names. It's been called the scaling hypothesis. The first few people who figured it out was around 2017. You could go further back. I think Shane Legg was maybe the first person who really knew it, maybe Ray Kurzweil, although in a very vague way. But the number of people who understood it went up a lot around 2014 to 2017.", "I think that was the big realization. How did intelligence evolve? If you don't need very specific conditions to create it, if you can create it just from the right kind of gradient and loss signal, then of course it's not so mysterious how it all happened. It had this click of scientific understanding.", "In terms of watching what the models can do, how has it changed my view of human intelligence? I wish I had something more intelligent to say on that. One thing that's been surprising is I thought things might click into place a little more than they do. I thought different cognitive abilities might all be connected and there was more of one secret behind them. But the model just learns various things at different times. It can be very good at coding but it can't quite prove the prime number theorem yet. And I guess it's a little bit the same for humans, although it's weird the juxtaposition of things it can do and not. I guess the main lesson is having theories of intelligence or how intelligence works. A lot of these words just dissolve into a continuum. They just kind of dematerialize. I think less in terms of intelligence and more in terms of what we see in front of us.", "Dwarkesh Patel (01:55:46 - 01:56:14):", "Two things are really surprising to me. One is how discrete these different paths of intelligent things that contribute to loss are rather than just being one reasoning circuit or one general intelligence. And the other surprising and interesting thing is, many years from now, it'll be one of those things that you’ll wonder why it wasn't obvious to you? If you're seeing these smooth scaling curves, why were you not completely convinced at the time?", "(01:56:14) - Keeping a low profile", "Dwarkesh Patel (01:56:14 - 01:56:26):", "You've been less public than the CEOs of other AI companies. You're not posting on Twitter, you're not doing a lot of podcasts except for this one. What gives? Why are you off the radar?", "Dario Amodei (01:56:26 - 01:58:03):", "I aspire to this and I'm proud of this. If people think of me as boring and low profile, this is actually kind of what I want. I've just seen cases with a number of people I've worked with, where attaching your incentives very strongly to the approval or cheering of a crowd can destroy your mind, and in some cases, it can destroy your soul.", "I've deliberately tried to be a little bit low profile because I want to defend my ability to think about things intellectually in a way that's different from other people and isn't tinged by the approval of other people. I've seen cases of folks who are deep learning skeptics, and they become known as deep learning skeptics on Twitter. And then even as it starts to become clear to me, they've sort of changed their mind. This is their thing on Twitter, and they can't change their Twitter persona and so forth and so on.", "I don't really like the trend of personalizing companies. The whole cage match between CEOs approach. I think it distracts people from the actual merits and concerns of the company in question. I want people to think in terms of the nameless, bureaucratic institution and its incentives more than they think in terms of me. Everyone wants a friendly face, but actually, friendly faces can be misleading.", "Dwarkesh Patel (01:58:03 - 01:58:09):", "Okay, well, in this case, this will be a misleading interview because this has been a lot of fun.", "Dario Amodei (01:58:09 - 01:58:10):", "Indeed.", "Dwarkesh Patel (01:58:10 - 01:58:14):", "Yeah, this has been a blast. I’m super glad you came on the podcast and hope people enjoyed it.", "Dario Amodei (01:58:14 - 01:58:15):", "Thanks for having me." ]
[ "https://sites.krieger.jhu.edu/jared-kaplan/", "https://transformer-circuits.pub/2022/mech-interp-essay/index.html", "https://en.wikipedia.org/wiki/AlexNet", "https://en.wikipedia.org/wiki/Ray_Kurzweil", "https://www.lesswrong.com/users/eliezer_yudkowsky", "https://www.andrewng.org/", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://scholar.google.com/citations?user=dOad5HoAAAAJ&hl=en", "https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model", "https://www.dwarkeshpatel.com/p/carl-shulman", "https://www.youtube.com/watch?v=IXNA-ZhJayg", "https://openai.com/research/better-language-models", "https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA", "https://www.smithsonianmag.com/science-nature/the-neuroscientist-who-discovered-he-was-a-psychopath-180947814/", "https://colah.github.io/about.html", "https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2", "https://www.anthropic.com/index/frontier-model-security", "https://www.anthropic.com/index/core-views-on-ai-safety", "https://www.anthropic.com/index/core-views-on-ai-safety", "https://en.wikipedia.org/wiki/Michelson%25E2%2580%2593Morley_experiment", "http://incompleteideas.net/IncIdeas/BitterLesson.html", "https://en.wikipedia.org/wiki/Shane_Legg" ]
https://www.dwarkesh.com/p/david-deutsch
David Deutsch - AI, America, Fun, & Bayes
[ "Will AIs be smarter than humans?", "Dwarkesh Patel 0:05", "Okay, today I'm speaking with David Deutsch. Now, this is a conversation I've eagerly wanted to have for years. So this is very exciting for me. So first, let's talk about AI. Can you briefly explain why you anticipate that AIs will be no more fundamentally intelligent than humans?", "David Deutsch 0:24 I suppose you mean AGIs? And by fundamentally intelligent, I suppose you mean, capable of all the same types of cognition as humans are? In principle?", "Dwarkesh Patel 0:37", "Yes.", "David Deutsch 0:37", "So, that would include doing science and art and, in principle, also falling in love and being good and evil and all that. So the reason is twofold. One half is about computation - hardware - and the other is about software. So if we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can exhibit the functionality of running any computable program and function. Now, when I say any, I don't really mean any, because you and I are sitting here and having a conversation. We could say that we could have any conversation. Well, we can assume that maybe in 100 years’ time, we will both be dead. And therefore, the number of conversations we could have is strictly limited. Also, some conversations depend on the speed of computation. So if we're going to be solving the traveling salesman problem, then there are many traveling salesman problems that we wouldn't be able to solve in the age of the universe.", "When I say “any”, what I mean is that we're not limited in the programs we can run, apart from speed and memory capacity. So all hardware limitations on us boil down to speed and memory capacity. And both of those can be augmented to the level of any other entity that is in the universe. Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that. So that's the hardware.", "As far as explanations go, can we reach the same kind of explanations as any other entity,–– usually, this is said not in terms of AGI but in terms of extraterrestrial intelligence. But also it's said about AGIs. What if they are to us as we are to ants? Well, again, part of that is just hardware, which is easily fixable by adding more hardware. So let's get around that.", "So really, the idea is, are there concepts that we are inherently incapable of comprehending? Martin Rees believes this. He thinks that we can comprehend quantum mechanics and apes can't, and maybe the extraterrestrials can comprehend something beyond quantum mechanics, which we can't comprehend, and no amount of brain add-ons with extra hardware can give us that because they have the hardware that is adapted to having these concepts which we haven't.", "The same kind of thing is said about certain qualia - that maybe we can experience love. And an AGI couldn't experience love because it has to do with our hardware - not just memory and speed, but specialized hardware. That falls victim to the same argument. The thing is, this specialized hardware can't be anything except for a computer. And if there's hardware that is needed for love, let's say that somebody is born without that hardware, then that hardware - that bit of the brain - that does love or does mathematical insight or whatever, is just a bit of the brain. And it's connected to the rest of the brain in the same way that other parts of the brain are connected to the rest of the brain, namely by neurons, passing electrical signals, and by chemicals whose concentrations are altered, and so on. So therefore, an artificial device that computed which signals were to be sent and which chemicals were to be adjusted, could do the same job and it would be indistinguishable, and therefore a person augmented was one of those who couldn't feel love could feel love after that augmentation. And those two things are the only relevant ones. So that's why that AGI and humans have the same range in the sense I’ve defined.", "Are intelligence differences immutable/heritable?", "Dwarkesh Patel 6:18", "Okay, interesting. So,  the software question is more interesting than the harder one immediately. But, I do want to take issue with the idea that the memory and speed of human brains can be arbitrarily and easily expanded. But we can get into that later. We can start with this question: Can all humans explain everything that even the smartest humans can explain?", "If I took the village idiot and asked him to create the theory of quantum computing, should I anticipate that he could do this if he wanted to? For a frame of reference, about 21-24% of Americans on the National Adult Literacy survey fall in level one—which means that they can't perform basic tasks like identifying the expiry date of a driver's license or totaling a bank deposit slip. Are these humans capable of explaining quantum computing? Or creating the Deutsch–Jozsa algorithm ? If they're incapable of doing this, doesn't that mean that the Theory of Universal Explainers falls apart?", "David Deutsch 7:22", "So, you're talking about tasks that no ape could do. However, some humans are brain-damaged to the extent that they can't even do the tasks that an ape can do. Then, there comes a point when installing the program that would be able to read the driver's license would require augmenting their hardware and software. We don’t know enough about the brain yet, but if it’s 24% of the population, then it's definitely not hardware. For those people, it's definitely software. If it was hardware, then getting them to do this would be a matter of repairing the imperfect hardware. If it's software, it is not just a matter of them wanting to be taught. It is a matter of whether the existing software is conceptually ready to do that.", "For example, Brett Hall has often said he would like to speak Mandarin Chinese. He wants to, but he will never be able to speak Mandarin Chinese because he's never going to want it enough to go through the process of acquiring that program. But nothing about his hardware prevents him from learning Mandarin Chinese, and there's nothing about his software either. Except that he wants to learn it, but he doesn't want to go through the process of being programmed with that program. But if his circumstances changed, he might want to. For example, many of my relatives a couple of generations ago were forced to migrate to a very alien place. They had to learn languages they never thought they would speak and never wanted to speak. Yet very quickly, they did speak those languages. Was it because they wanted to change? In the big picture, perhaps you could say what they wanted to be changed. So if your driving license blind people wanted to be educated to read driving licenses in the sense that my ancestors wanted to learn languages, then yes, they could learn that.", "There is a level of dysfunction below which they couldn't, which are hardware limitations. On the borderline between those two, there's not that much difference. That's the question: Could apes be programmed with a fully human intellect? The answer to that is yes. Although programming them would not require hardware, a surgery—in the sense of repairing a defect— would require intricate changes at the neuron level. That's to transfer the program from a human mind into the ape's mind. I would guess that is possible because although the ape has far less memory space than humans do, and no specific specialized modules like humans, neither of those things is a thing that we use to the full anyway. When I'm speaking to you now, there's a lot of knowledge in my brain that I'm not referring to at all. For example, the fact that I can play the piano or drive a car is not being used in this conversation. So, I don't think having such a large memory capacity would affect this. This project would be highly immoral because you'd intentionally create a person inside deficient brain hardware.", "Dwarkesh Patel 12:31", "Suppose hardware differences distinguish different humans in terms of their intelligence. If it were just up to the people who are not even functionally literate, these are people….", "David Deutsch 12:42", "Wait, wait! I said it could only be hardware at the low level, at the level of brain defects, or at the level of using up the whole of our allocation of memory, speed, or whatever. Apart from that, it can be hardware.", "Dwarkesh Patel 13:01", "By the way, is hardware synonymous with genetic influences for you, or can software be genetic too?", "David Deutsch 13:12", "The software can be genetic, too, but that doesn't mean it's immutable. It just means that it’s there at the beginning.", "Dwarkesh Patel 13:20", "Okay. I suspect it's not software because––let's suppose it was software and something that they chose to do is something they could change—it's mysterious to me why these people would also accept the jobs with lower pay but are less cognitively demanding, or why they would choose to do worse on an academic test or IQ test. Why would they choose to do precisely this thing somebody who was less cognitively powerful would do? So it seems the more parsimonious explanation is that they are cognitively less powerful.", "David Deutsch 13:55", "Why would someone choose not to go to school, for instance, if they were given a choice and not to have any lessons? Well, there are many reasons why they might choose that. Some of them are good, some of them bad. Calling some jobs cognitively demanding is already begging the question because you're just referring to a choice that people make—which is a software choice—as being, by definition, forced on them by hardware. It's not cognitively deficient, and it's just that they don't want to do it! The same way if there was a culture that required Brett Hall to be able to speak fluent Mandarin Chinese to do a wide range of tasks. If he didn't know Mandarin Chinese, he'd be relegated to low-level tasks. Then he would be  “choosing” the low-level tasks rather than the “cognitively demanding task”, but it's the only culture that assigns a hardware interpretation to the difficulty of doing that task.", "Dwarkesh Patel 15:16", "Right. It doesn't seem that arbitrary to say that the kind of jobs you could do sitting down on a laptop require probably more cognition than the ones you can do on a construction site. If it's not cognition that distinguishes what is measured by both these literacy tests, and by what you're doing your job, what is the explanation for why there's such a high correlation between people who are not functionally literate and an anti-correlation between people who are not functionally literate, and people who are programming. I guarantee that people working at Apple are above level one on this literacy survey. Why did they happen to make the same choices? Why is that their correlation?", "David Deutsch 16:01", "There are correlations everywhere, and culture is built to make use of certain abilities that people have. So if you're setting up a company that is going to employ 10,000 employees, then it's best to make the signs above the doors or the signs on the doors, or the numbers on the dials, all be ones that people in that culture—who are highly educated—can read. You could, in principle, make each label on each door a different language. There are 1000s of human languages, and 5000 languages and 5000 doors in the company. You could, given the same meaning, make them all different languages. They're all the same language, not just any old language, it's a language that many educated people know fluently. You could also misinterpret that as saying, “ Oh, there is something. There is some hardware reason why everybody speaks the same language.” Well, no, there isn't. It's a cultural reason.", "Dwarkesh Patel 17:28", "If the culture was different, somehow—maybe if there was some other way of communicating ideas—do you think the people currently designated as not functionally literate could be in a position to learn about quantum computing? For example, if they made the right choices, or not the right choices, but the choices that could lead to them understanding quantum reading?", "David Deutsch 17:53", "So, I don't want to evade the question. The answer is yes. But the way you put it begs the question. It's not the only language like this. It's all knowledge. So, if someone doesn't speak English, quantum computing is a field in which English is the standard language. That used to be German. Now, it's English. Someone who doesn't know English is disadvantaged in learning about quantum computers. But, not only because of the deficiency in language.", "Suppose they come from a culture in which the culture of physics, mathematics, and logic are equivalent, and only the language is different. If they learn the language, they will find it as easy as anyone else. But suppose a person doesn't think in terms of logic but thinks in terms of pride, manliness, fear, and concepts that fill the lives of, say, prehistoric people or pre-enlightenment people. In that case, they'd have to learn a lot more than just the language of the civilization to understand quantum computers. They'd have to learn a range of other features of the civilization. On that basis, people who can't read driving licenses are similarly in a different culture—which they would also have to learn if they are to increase their IQ, i.e., the ability to function at a high level in intellectual culture in our civilization.", "IQ correlation of twins separated at birth", "Dwarkesh Patel 20:12", "Okay, so if it's those kinds of differences, then how do you explain the fact that identical twins separated at birth and adopted by different families tend to have the most of the variance that does exist between humans in terms of IQ? That doesn't exist between identical twins, the correlation is 0.8—which is the correlation you would have when you took the test on different days, depending on how good a day you were having. These are people adopted by families with different cultures who were often in different countries. In fact, a hardware theory explains very well why they would have similar scores and IQ tests correlate with literacy, job performance, and so on—whereas I don't know how the software would explain why it would involve being adopted by different families.", "David Deutsch 20:58", "The hardware theory explains it in the sense that it might be hardware, it might be true. So it doesn't because it doesn't have an explanation beyond that, nor does the software theory.", "Dwarkesh Patel 21:15", "So there are differences in the brain level that correlate with IQ, right? So your actual skull size is like a 0.3 correlation with IQ. There are a few more like this. They don't explain the entire variance in human intelligence or the entire genetic variance of human intelligence. But we have identified a few actual hardware differences that correlate.", "David Deutsch 21:34", "Suppose that on the contrary, the results of these experiments had been different. Suppose that the result was that people who are brought up in the same family differ only in the amount of hair they have, or in their appearance in any other way, none of those differences made any difference to their IQ. Only who their parents were made a difference. Wouldn't it be surprising that there's nothing else correlated with IQ other than who your parents are?", "Now, how much correlation should we expect? There are correlations everywhere, and they're these things on the internet: jokes, memes, or whatever you call them. But they make a serious point where they correlate things like how many adventure movies have been made in a given year correlated with the GNP per capita. That's a bad example because there's an obvious relation, but you know what I mean.", "It's the number of films made by a particular actor against the number of bird flu outbreaks. Part of being surprised by randomness is the fact that correlations are everywhere. It's not just that correlation isn't causation. It's that correlations are everywhere. It's not a rare event to get a correlation between two things. The more you ask about, the more you will get correlations. What is surprising is that the things that are correlated are things that you expect to be correlated and measured. For example, when they do these twin studies and measure IQ, they control for certain things. Like you said, identical twins reared together—they've got to be reared together.", "But, there are infinitely more things that they don't control. So it could be that the real determinant of IQ is, for example, how well a child is treated between the ages of three and a half and four and a half. Where “well”  is defined by something that we don't know yet. Then, you would expect that thing we don't know about, and nobody has bothered to control for it to be correlated with IQ. But unfortunately, that thing is also correlated with whether someone's an identical twin or not. So it's not that it’s identical twins that are causing the similarity. It's this other thing. This is an aspect of appearance or something. If you were to surgically change a person, and you knew what this thing was, you would be able to have the same effect as making an identical twin.", "Dwarkesh Patel 25:37", "Right. But, as you say, in science, or to explain any phenomenon, there's an infinite amount of possible explanations, right? You have to pick the best one. So it could be that there's some unknown trait, which is so obvious to different adoptive parents, so they can use it as a basis for discrimination or for different treatment.", "David Deutsch 25:56", "I would assume they don't know what it is.", "Dwarkesh Patel 26:00", "But then aren’t they using it as a basis to treat kids differently at age three?", "David Deutsc h 26:04", "Not by consciously identifying it. It would be something like getting the idea that this child is really smarter. But I'm just trying to show you that it could be something the parents are unaware of. If you ask parents to list the traits in their children that caused them to behave differently towards their children, they might list 10 traits. But then there are another 1000 traits they're unaware of, which also affect their behavior.", "Dwarkesh Patel 26:33", "So we first need an explanation for what this trait is that researchers have not been able to identify. But it's so obvious that even unconsciously, parents are able to reliably use it as a way to treat their children.", "David Deutsch 26:45", "It wouldn’t have to be obvious at all. Parents have a huge amount of information about their children—which they are processing in their minds. They don’t know what most of it is.", "Do animals have bounded creativity?", "Dwarkesh Patel 27:05", "All right. Okay, so let's leave this topic aside for now. Let me bring us to animals. So if creativity is something that doesn't exist in increments or capacity to create explanations… you can use a simple example of a cat opening a door, right? You'll see a cat develop a theory that applying torque to this handle will open a door. Then, it'll climb onto a countertop, and it'll jump on top of that door handle. It hasn't seen any other cat do it. It hasn't seen another human like get on a countertop and try to open the door that way. But a conjecture is that this is a way, given its morphology, that it can access a door. So, that's its theory, and the experiment is, “Will the door open?” This seems like a classic cycle of conjecture and refutation. Is this compatible with the cat not having at least some bounded form of creativity?", "David Deutsch 28:01", "I think it’s perfectly compatible. So animals are amazing things, and instinctive animal knowledge is designed to make animals easily capable of thriving in environments that they've never seen before. In fact, if you don't go down to the level of detail, animals have never seen the environment before. Maybe a goldfish in a goldfish bowl might have. But when a wolf runs through the forest, it sees a pattern of trees that it has never seen before, and it has to create strategies to avoid each tree, and for actually catching the rabbit that it's running after in a way that has never been done before. This is because of a vast amount of knowledge in the wolf genes. What kind of knowledge is this? Well, it's not the kind of knowledge that suggests turning left, right, etc. It's an instruction that takes input from the outside and then generates behavior that is relevant to that input. It doesn't involve creativity but a degree of sophistication in the program that Human Robotics has not yet reached anywhere near. By the way, when it sees a wolf of the opposite sex, it may decide to leave the rabbit and go and have sex instead.", "A program for a robot to locate another robot of the right species and then have sex with it is beyond present-day robotics, but it will be done. It does not require creativity because that same program will lead the next wolf to do the same thing in the same circumstances. Suppose the fact that the circumstances are ones it has never seen before and can still function is a testimony to the incredible sophistication of that program. But, it has nothing to do with creativity.", "Humans do tasks that require much, much less programming sophistication, such as sitting around a campfire and telling each other a scary story about a wolf that almost ate them. Now, animals can do the wolf-running-away thing. They can enact a story that's even more complicated than the one the human tells. But they can't tell a story. Telling a story is a typical creative activity. It's the same kind of activity as forming an explanation. So I don't think it's surprising that cats can jump on handles. I can easily imagine that the same amazingly sophisticated program that lets it jump on a branch will also function in this new environment it's never seen before. But there are all sorts of other things that it can't do.", "Dwarkesh Patel 32:01", "Oh, that's certainly true. My point is that it has a bounded form of creativity, and if it is bounded, creativity can't exist, that humans could be in one such. But I'm having a hard time imagining the ancestral circumstance in which a cat couldn’t gain the genetic knowledge that jumping on a metal rod would get a wooden plank to open and give it access to the other side.", "How powerful can narrow AIs be?", "David Deutsch 32:26", "Well, I thought I just gave an example. Suppose we don't know what kind of environment the ancestors of the domestic cats lived in. If it contains undergrowth, then dealing with undergrowth requires some very sophisticated programs. Otherwise, you will just get stuck somewhere and starve to death. If a dog gets stuck in a bush, it has no program to get out other than to shake itself until it gets out. It doesn't have a concept of doing something that temporarily makes matters worse and allows you to get out. Dogs can't do that. It's not because that's a particularly complicated thing. It's just that programming doesn't have that. But an animal's programming could easily have that if it lived in an environment in which that happened a lot.", "Dwarkesh Patel 33:33", "Is your theory of AI compatible with AIs that have narrow objective functions, but functions that, if fulfilled, would give the creator a lot of power? For example, if I wrote a deep learning program, I trained it over financial history and asked it to make me a trillion dollars on the stock market. Do you think that this would be impossible? Or if you think this would be possible, it doesn’t seem like an AGI, but it seems like a very powerful AI, right?  So, it seems like AI is getting somewhere.", "David Deutsch 34:04", "Well, if you want to be powerful, you might do better inventing a weapon or something. But or a better mousetrap because it's nonviolent. You can invent a paperclip. To use an example was often used in this context. If paperclips weren't invented, you could invent a paperclip and make a fortune. That's an idea, but it's not an AI because it's not the paperclip that's going out there. It's your idea in the first place, which caused the whole value of the paperclip.", "Similarly, suppose you invent a dumb arbitrage machine that seeks out complicated trades to make that are more complicated than anyone else is trying to do. Suppose that makes you a fortune. Well, the thing that made you a fortune was not the arbitrage machine. It was your idea of how to search for arbitrage opportunities that no one else sees. Right? That is what was valuable. That's the usual way of making money in the economy—you have an idea, and then you implement it. AI is beside the point. It could have been the paperclip.", "Dwarkesh Patel 35:29", "But the thing is, the models that are used nowadays are not expert systems like the chess engines of the 90s. They're something like Alpha zero or AlphaGo. It's almost a blank neural net that they were able to let it win Go. So, if you arbitrarily throw financial history at a blank neural net, wouldn't it be fair to say that the AI figured out the right trades, even though it's not a general intelligence?", "David Deutsch 35:57", "I think it's possible in chess, but not in the economy - the value in the economy is created by creativity. Arbitrage is one thing that can sort of skim value off the top by taking opportunities that were too expensive for other people to take. So, you can make a lot of money if you have a good idea about how to do it. But, most of the value in the economy is created by the creation of knowledge. For example, somebody has the idea that a smartphone would be good to have, even though most people think that's not going to work. That idea cannot be anticipated by anything less than an AGI. An AGI could have that idea, but no AI could.", "Could you implant thoughts in VR?", "Dwarkesh Patel 36:53", "Okay, so there are other topics I want to get into. So, let's talk about virtual reality. In The Fabric of Reality, you discussed the possibility that virtual reality generators could plug directly into our nervous system and give us sense data. As you might know, many meditators like Sam Harris speak of thoughts and senses as intrusions into consciousness. They can welcome intrusions. But, they are both things that come into consciousness. So, do you think virtual reality generators could also place thoughts and sense data into the mind?", "David Deutsch 37:30", "Oh yes, but that's only because that model is wrong. It's the Cartesian theater , as Dan Dennett puts it, with the stage cleared of all the characters. So that's pure consciousness without content, as Sam Harris envisages it. But I think all that's happening there is that you are conscious of this theater. And you envisage it as having certain properties, which it doesn't have—but that doesn't matter. We can imagine lots of things that don't happen. That, in a way, characterizes what we do all the time. So one can interpret one's thoughts about this empty stage as being thoughts about nothing. One can interpret the actual hardware of the stage that one imagined as being pure conscious content plus consciousness, but it's not. It has the content of a stage or a space, or however you want to envision it.", "Can you simulate the whole universe?", "Dwarkesh Patel 38:49", "Okay, and then let's talk about the Turing principle. So this is a term you coined; it's otherwise been called the Church-Turing-Deutsch principle (a universal computer can simulate any physical process). Would this principle imply that you could simulate the whole of the universe in a compact, efficient computer smaller than the universe itself? Or is it constrained to physical processes of a certain size?", "David Deutsch 39:19", "No, it couldn't simulate the whole universe. That would be an example of a computationally capable task but wouldn't have enough memory or time. So the more memory and time you give it, the more closely it could simulate the whole universe. But it couldn't ever simulate the whole universe or anything near the whole universe because it is hard for it to simulate itself. Also, the sheer size of the universe is large.", "Even if we discovered ways of encoding information in a dense way—maybe quantum gravity would allow a great density of information—it still couldn't simulate the universe because that would mean the rest of the universe was also that complex due to the laws of universality and physics. Because quantum gravity applies to the rest of the universe as well. But, I think it's significant. Being limited by the available time and memory… to separate that from being limited by computational capacity. It's only when you separate those that you realize what computational universality is. The Turing or Quantum universality is the most important thing in the theory of computation because computation doesn't even make sense unless you have a concept of a universal computer.", "Are some interesting problems insoluble?", "Dwarkesh Patel 41:24", "What could falsify your theory that all interesting problems are solvable? So I asked this, because there are people who have tried offering explanations for certain problems or questions like “Why is there something rather than nothing? ” Or “How can mere physical interactions explain consciousness?” They have offered explanations why these problems are, in principle, insoluble. Now, I'm not convinced they're right. But do you have a strong reason for, in principle, believing that they're wrong?", "David Deutsch 41:51", "No. So this is a philosophical theory and could not be proved wrong by experiment. However, I have a good argument for why they aren't, namely, that each individual case of this is a bad explanation. So let's say that, that some people say, for example, that simulating a human brain is impossible. Now, I can't prove that it's possible, nobody can prove that it's possible until they actually do it, or unless they have a design for it, which they prove will work. So pending that there is there is no way of proving that, that it's not true that this is a fundamental limitation. But the trouble is, with that idea that it is a fundamental limitation, that the trouble with that is that it could be applied to anything, for example, it could be applied to the theory that you have recently, just a minute ago been replaced by a humanoid robot, which which has got is going to save for the next few minutes, just a pre-arranged set of things, and you're no longer a person––", "Dwarkesh Patel 43:14", "I can't believe you figured it out.", "Does America fail Popper's Criterion?", "David Deutsch 43:17", "Well, that's the first thing you'd say. There is no way to refute that, by experiment, short of actually doing it, short of actually talking to and so on. So it's the same with all these other things. In order for it to make sense to have a theory that something is impossible, you have to have an explanation for why it is impossible. So we know that for example, almost all mathematical propositions are undecidable. So that's not because somebody has said, “Oh, maybe we can't decide everything. Because thinking we could decide everything is hubris.” That's not an argument, you need an actual functional argument to prove that, that that is so and then at being a functional argument, in which the steps of the argument make sense and relate to other things, you can then say, well, what does this actually mean? Does this mean that maybe we can never understand the laws of physics? Well, it doesn't, because if the laws of physics included an undecidable function, then we would simply write, f of x and f of x is an undecidable function. We couldn't evaluate f of x; it would limit our ability to make predictions. But then you could say that lots of our ability to make predictions is totally limited anyway, but it would not affect our ability understood understand the properties of the function f of f, and therefore the properties of the of the physical world.", "Dwarkesh Patel 45:01", "Okay, is a system of government like America's which has distributed powers and checks and balances incompatible with Poppers criteria? So the reason I asked is that the last administration had a theory that if you build a wall, there'll be positive consequences. And, the idea there could have been tested and then the person could have been evaluated about whether that theory succeeded. But because our system of government has distributed powers, Congress opposed the testing of that theory, and so it was never tested. So if the American government wanted to fulfill coverage criteria, we would need to give the president more power, for example.", "David Deutsch 45:35", "It's not as simple as that. So I agree that this is a big defect in the American system of government. No country has a system of government that perfectly fulfills Poppers criteria . We can always improve,  the British one is actually the best in the world, and it's far from optimal. Making a single change, like that, is not going to be the answer. The constitution of a polity, is a very complicated thing, much of which is inexplicit. So the American founding fathers realized they had a tremendous problem: what did they want to do? What they thought of themselves as doing was to implement the British Constitution. In fact, they thought they were the defenders of the British Constitution, and that the British king had violated it. And he was bringing it down, they wanted to retain it. The trouble is that in order to do this, to gain the independence to do this, they had to get rid of the king. And then they wondered whether they should get an alternative King…. whichever way they did it, there were problems. The way they decided to do it,  made for a system that was inherently much worse than the one they were replacing, but they had no choice. If they wanted to get rid of a king, they had to have a different system for having a head of state, and they wanted to be democratic. That meant the President had a legitimacy in legislation that the King never had, or sorry, never had! The king did use to have that in medieval times, but the king by the time of the Enlightenment, and so on, no longer had full legitimacy to legislate. So they had to implement a system where his seizing power was prevented by something other than tradition. And so they instituted these checks and balances, checks. And so the whole thing they inserted was immensely sophisticated. It's an amazing intellectual achievement. And that it works as well as it does is something of a miracle, but the inherent flaws are there and one of them is the fact that there are checks and balances means that responsibility is dissipated. And nobody is ever to blame for anything in the American system. Which is terrible. In the British system, blame is absolutely focused, everything is sacrificed to the end of focusing blame and responsibility to the government, passed, passed the law, courts have passed the parliament, right to the government. That's where it's all focused. And there are no systems that do that better. But as you will know, the British system also has flaws. And we recently saw with the sequence of events with the Brexit referendum, and then Parliament balking at implementing some laws that didn't agree with and then that being referred to the courts. And so there were the courts and the parliament and the government and the Prime Minister all blaming each other. And there was a mini constitutional crisis, which could only be resolved by having an election and then having a majority government which is by the mathematics of how the government works, that's how it usually is in Britain. Although, we have been unlucky several times recently in not having a majority government.", "Does finite matter mean there's no beginning of infinity?", "Dwarkesh Patel 50:04", "Okay, so this could be wrong, but it seems to me that in an expanded universe, there will be like a finite amount of total matter that will ever exist in our light cone, right? There's a limit. And that means that there's a limit on the amount of computation that this matter can execute, the amount of energy it can provide, perhaps even the amount of economic value we can sustain. So maybe it would be weird if the GDP per atom could be arbitrarily large. So does this impose some limit on your concept of the beginning of infinity?", "David Deutsch 50:37", "So what you've just recounted is a cosmological theory. The universe could be like that. But we know very little about cosmology, we know very little about the universe in general, like, theories of cosmology are changing on a timescale of about a decade. So it doesn't make all that much sense to speculate about what the ultimate asymptotic form of the cosmological theories will be. At the same time, we don't have a good idea about the asymptotic form of very small things. Like, we know that our conception of physical processes must break down somehow at the level of quantum gravity, like 10 to the minus 42 seconds, but we have no idea what happens below that. Some people say it's got to stop below that. But there's no argument for that at all. It's just that we don't know what happens beyond that. Now, what happens beyond that may be a finite limit. Similarly, the way it happens on a large scale may impose a finite limit, in which case computation is bounded by a finite limit imposed by the cosmological initial conditions of this universe, which is still different from its being imposed by inherent hardware limitations. For example, if there's a finite amount of GNP, available in the distant future, then it's still up to us, whether we spend that on mathematics or music or political systems, or any of the 1000s of even more worthwhile things that have yet to be invented. So it's up to us which ideas we feel the 10 to the 10 to the 10th and 10 bits with now, and my guess is that there are no such limits. But my worldview is not affected by whether there are such limits. Because as I said, it's still up to us what to fill them with. So then if we get chopped off at some point in the future, then everything will have been worthwhile up to them.", "The Great Stagnation", "Dwarkesh Patel 53:15", "Gotcha. Okay. So the way I understand that your customers are getting infinite, it seems to me that the more knowledge we gain, the less knowledge we're in a position to gain. So there should be an exponential growth of knowledge. But if we look at the last 50 years, it seems that there has been a slowdown or decrease in research productivity, economic growth, and productivity growth. This seems compatible with the story that there's a limited amount of fruit on the tree that we picked the low hanging fruit and now there's less and less fruit and harder and harder fruit to pick. And, eventually the orchard will be empty. So do you have an alternative explanation for what's been going on in the last 50 years?", "David Deutsch 53:52", "Yes,  it's very simple. There are sociological factors in academic life, which have stultified the culture. Not totally and not everywhere, but that has been a tendency in what has happened. It has resulted in a loss of productivity in many sectors, in many ways, (but not in every sector, not in every way).  For example, I've often said there was a solidification in theoretical physics, starting in the 1920s, and it still hasn't fully dissipated if it wasn't for that. Quantum computers wouldn’t have been invented in the 1930s and built in the 1960s. So that is just an accidental fact. But it just goes to show that there are no guarantees. The fact that our horizons are unlimited, does not guarantee that we will get anywhere and that we won't start declining tomorrow. I don't think we are currently declining. These declines that we see are parochial effects caused by specific mistakes that have been made, and which can be undone.", "Changes in epistemic status is Popperianism", "Dwarkesh Patel 55:35", "Okay, so I want to ask you a question about Bayesianism versus Praetorianism . So one reason why people prefer Bayes is because there seems to be a way of describing changes in epistemic status when the relative status of a theory hasn't changed. So I'll give you an example. Currently, the Many Worlds explanation is the best way to explain quantum mechanics. Right? But suppose in the future, we were able to build an AGI on a quantum computer and were able to design some clever interference experiment, as you suggest to have it be able to report back being in a superposition across many worlds. Now, it seems that even though many worlds remain the best, the only explanation somehow is that its epistemic status has changed as a result of the experiment. In terms of the Bayesian theory, you can say the credibility of this theory has increased. So how would you describe these sorts of changes in Praetorian view?", "David Deutsch 56:33", "So what has happened is that at the moment, we have only one explanation that can't be immediately knocked down. If we did that thought experiment, we might well decide that this will provide the ammunition to knock down ideas for alternative explanations that have not been thought of yet. Obviously, it wouldn't be enough to knock down every possible explanation, because for a start, we know that quantum theory is false. We don't know for sure that the next theory will have many worlds in it. It will, but we can't prove anything like that. But I would replace the idea of increased credence with a theory that the experiment will provide a quiver full of arrows or a repertoire of arguments that goes beyond the known bad arguments andwill reach into other types of arguments. The reason I would say that is that some of the existing misconceptions about quantum theory reside in misconceptions about the methodology of science. Now I've written the paper about what  it's the right methodology of science where that doesn't apply. But many physicists and many philosophers would disagree with that. And they would advocate a methodology of science that's more based on empiricism. Of course, that empiricism is at stake and can be knocked down in some terms. But not everybody thinks that. Now, once we have an experiment, such as my thought experiment, if that was actually done, then people could not use their arguments based on a fallacious idea of empiricism, because their theory would have been refuted, even by the standards of empiricism, which shouldn't have been needed in the first place. But, so that's why  that's the way I would express that the repertoire of arguments will become more powerful if that experiment were done successfully.", "Open-ended science vs gain of function", "Dwarkesh Patel 59:29", "The next question I have is, how far do you take the principle that open ended scientific progress is the best way to deal with existential dangers. To give one example, you have something like data function research, right. And it's conceivable that it could lead to more knowledge and how to stop dangerous pathogens. But at least in Bayesian terms, you could say it seems even more likely that it can or has led to the spread of a manmade pathogen that would have not otherwise been naturally developed. So would your belief in open scientific Progress allow us to say, “Okay, let's stop doing some research?”", "David Deutsch 1:00:09", "No, it wouldn't allow us to say “let's stop it” , it might make it reasonable to say, “let us do research into how to make laboratories more secure.” Before we do gain function research, it's really part of the same thing. It's like saying, Let's do research into how to make the plastic hoses through which the reagents pass more impermeable, before we actually do the experiments with the reagents. So it's all part of the same experiment. I wouldn't want to stop something just because new knowledge might be discovered. But which knowledge we need to discover first, that's the problem of scheduling, which is a non-trivial part of any research and of any learning.", "Dwarkesh Patel 1:01:06", "Would it be conceivable for you to say that, until we figure out how to make sure these laboratories are held to a certain standard, we should stop the research as it exists now. And then, meanwhile, we'll focus on doing the other kind of research before we can restart. But until then, it's not allowed.", "David Deutsch 1:01:27", "Yes, in principle, that will be reasonable. I don't know enough about the actual situation to have a view., I don't know how these labs work. I don't know what the precautions consist of. And when I hear people talking about, for example, lab leaks,  well the most likely lab leak is that one of the people who works there walks out of the front door. So the leak is not leaked from the lab to the outside. The leak is from the test tube to the person and then from the person walking out the door. And I don't know enough about what these precautions are or what the state of the art is, to know to what extent the risk is actually minimized. It could be that the culture of these labs is not good enough–– in which case it would be part of the next experiment to improve the culture in the labs. But I am very suspicious of saying that all labs have to stop and meet a criterion. Because I suspect that this stopping wouldn't be necessary, and the criterion wouldn't be appropriate. Then again, which criterion to use depends on the actual research being done.", "Contra Tyler Cowen on Civilizational Lifespan", "Dwarkesh Patel 1:02:56", "When I had Tyler Cowen on my podcast, I asked him why he thinks that humans are lazy and are only going to be around for 700 more years. I gave him your rebuttal or what I understand as your rebuttal that is “creative, optimistic societies will innovate ways of safety technologies faster than totalitarian static societies can innovate with disruptive technologies.” And he responded, “maybe, but the cost of destruction is just so much lower than the cost of building.” That trend has been going on for a while now, what happens when a new bomb costs $60,000? Or what happens if there's a mistake? Like the kinds that we saw many times over in the Cold War? How would you respond to that?", "David Deutsch 1:03:42", "First of all, we've been getting safer and safer throughout the entire history of civilization. There were these plagues that wiped out a hot third of the population of the world or half, and it could have been 99% or 100%. We went through some kind of bottleneck 70,000 years ago, I understand from genetics that all our cousins and species have been wiped out so we were much less safe. Now, also if a 10 Kilometer asteroid had been on target to hit the Earth at any time in the past 2 million years or whatever it is history of the Genus Homo, that would have been the end of it. Whereas now, it will just mean higher taxation. That's how amazingly safer we are.", "Now, I would never say that it's impossible that we will destroy ourselves. That would be the contrary to the universality of the human mind. We can make wrong choices, we can make so many wrong choices that will destroy ourselves. On the other hand, the atomic bomb accident thing, would have had no zero chance of destroying civilization, all they would have done is cause a vast amount of suffering. And, but I don't think we have the technology to end civilization, even if we wanted to,  all we would do if we just deliberately unleashed hell all over the world is we would cause a vast amount of suffering. But there would be survivors, and they would resolve never to do that again. So I don't think we're even able to let alone that we would do it accidentally.", "But as for the bad guys? Well, we are doing the wrong thing largely in regard to both external and internal threats. But I don't think we're doing the wrong thing to an existential risk level. And over the next 700 years, or whatever it is, well, I don't want to prophesy, because I don't know most of the advances that are going to be made in that time. But I see no reason why if we are solving problems, we won't solve problems. There’s another metaphor by Nick Bostrom about a jar with white balls, and there's one black ball and you take out a white ball, and then you pick the black ball, and that's the end of you. I don't think it's like that, because every white ball you take out and have reduces the number of black balls in the jar. So again, I'm not saying that's a law of nature, it could be that the very next ball we take out will be the black one, that will be the end of us. It could be but all arguments that will be are fallacious.", "Fun criterion", "Dwarkesh Patel 1:07:21", "I do want to talk about the Fun criteria . Is your definition of fun, different from how other people define other positive emotions like eudaimonia, or well being, or satisfaction, is “fun” a different emotion?", "David Deutsch 1:07:35", "I don't think it's an emotion. And all these things are not very well defined. They can't possibly be very well defined until we have a satisfactory theory of qualia at least, and probably a more satisfactory theory of creativity, how creativity works, and so on.  I think that the choice of the word fun for the thing that I explained more precisely, but still not very precisely as, as a “creation of knowledge where the different kinds of knowledge in explicit, unconscious, conscious explicit, are all in harmony with each other.” That is actually the only way in which the everyday usage of the word fun differs from that is that fun is considered frivolous, or seeking fun is considered as seeking frivolity. But that isn't so much a different use of the word , it's just a different pejorative theory about whether this is a good or bad thing. But nevertheless, I can't define it precisely. The important thing is that there is a thing which has this property of “fun” that you can't, you can't compulsorily enact. So, in some views, no pain, no gain, well, then you can find out mechanically, whether the thing is causing pain, and whether it's doing it according to the theory that says that you will have gain if you have that pain, and so on. So, that can all be done mechanically, and therefore, it is subject to criticism. Another way of looking at the Fun theory is that it’s some mode of criticism. This is subject to the criticism that this isn't fun, ie. This is privileging one kind of knowledge arbitrarily over another rather than being rational and letting content decide.", "Dwarkesh Patel 1:10:04", "Is this placing a limitation on universal explainers? If they can create some theory about why a thing could or should be fun? Why could anything be fun? It seems to be that sometimes we actually can make things fun that aren't like, for example, take exercise, no pain, no gain is like when you first go, it's not fun. But, once you start going, you understand the mechanics, you develop a theory for why CAD should be fun?", "David Deutsch 1:10:27", "Yes, that's quite a good example. Because there you see that fun cannot be defined as the absence of pain. You can have fun while experiencing physical pain. And that physical pain is not sparking suffering, but joy. However, there is such a thing as physical pain not sparking joy, as Marie Kondo would say. And that's important, because if you are dogmatically or uncritically implementing in your life, a theory of the good, that involves pain, and which excludes the criticism that maybe this can't be fun. Or maybe this isn't yet fun. Or maybe I should make it fun. And if I can't, that's a reason to stop all those things, if all those things are excluded, because by definition, the thing is good, and your pain, your suffering doesn't matter, then that opens the door to not only suffering, but to stasis. You won't be able to get to a better theory.", "Dwarkesh Patel 1:11:58", "And then why is fun central to this instead of another emotion? So, like, for example, Aristotle thought that a widely defined sense of happiness is “what should be the goal of our endeavors?” Why fun instead of something like that?", "David Deutsch 1:12:15", "Well, that's defining it vaguely enough. The point is, the underlying thing is as far as going one level below, we really understand that we'd need to go about seven levels below that, which we can't do yet. But the important thing is that there are several kinds of knowledge in our brains. And the one that is written down in the exercise book that says you should do this number of reps. And you should power through this. And it doesn't matter if you feel that and so on. That's an explicit theory. And it contains some knowledge, but it also contains errors. All our knowledge is like that. We also have other knowledge, which is contained in our biology, it's contained in our genes.", "We have knowledge that is inexplicit, like our knowledge of grammar is always my favorite example, as we know why certain sentences are acceptable and why they're unacceptable. But we can't state explicitly all in every case, why it isn't, or why it is. And then as there's so that there's explicit and explicit knowledge, there's conscious and unconscious knowledge. All those are bits of programming the brain, they’re ideas, they are they they are bits of knowledge in this, if you define knowledge as information with causal power, they are all information with causal power. They all contain truth, and they all contain error. And it's always a mistake, to shield something to just shield one of them from criticism or replacement. Not doing that is what I call the fun criterion. Now, you might say that that's a bad name, but it's the best I can find.", "Does AGI through evolution require suffering?", "Dwarkesh Patel 1:14:18", "So why would creating an AGI through evolution necessarily entail suffering? Because the way I see it, it seems to be your theory that you need to be a general intelligence in order to feel suffering, but by the point and evolution, simulated Bing is a general intelligence and we can just stop the simulation. So where's the suffering coming from?", "David Deutsch 1:14:38", "Okay. So the kinds of simulation by evolution that I'm thinking of, there may be several kinds, but the kind that I'm thinking of, which I said would be the greatest crime in history, is the kind of work that just simulates the actual evolution of humans from pre-humans that weren't people. So you have a population of non people, which in this simulation would be some kind of NPCs. And then they would just evolve–– we don't know what the criteria would be, we just have an artificial universe which simulated the surface of the earth, and they'd be walking around, and some of them might or might not become people.", "Now the thing is, when you're part of the way there, what is happening is that the only way that I can imagine the evolution of personhood or the explanatory creativity happened was that the hardware needed for it for it was was first needed for something else, I have proposed that it was needed to transmit memes. So there'd be people who were transmitting memes creatively, but they were running out of resources. So they weren't running out of resources before it managed to increase the stock of memes. So in every generation, there was a stock of memes that was being passed down to the next generation. And once they got beyond a certain complexity, they had to be passed down by the use of creativity by the recipient.", "So there may well have been a time and as I say, I can't think of any other way it could have been, where there was genuine creativity being used, but it ran out of resources very quickly, but not so quickly that it didn't increase the mean bandwidth. Then in the next generation, there was more mean bandwidth.  And then after, certain number of generations, there would have been some opportunity to use this hardware, or whatever it is, firmware, I expect to use this firmware for something other than just Trump blindly transmitting memes, or rather, creatively transmitting memes, but they were blind memes. So in that time, it would have been very unpleasant to be alive. It was already very unpleasant to be alive when we did have enough resources to think as well as do the memes, but I don't think there would have been a moment at which you would say, “ Yes, now, the suffering begins to matter,” the people were already suffering at the time when they were blindly transmitting memes, because they were using general, genuine creativity. They were just not using it to any good effect.", "Would David enter the Experience Machine?", "Dwarkesh Patel 1:18:01", "Gotcha. Would, being in the Experience Machine be compatible with the fun criterion?  So you're not aware that you're in the Experience Machine? It's all virtual reality. But you're still doing the things that would make you have fun, in fact, more so than in the real world? Would you be tempted to get into an experience machine? Would it be compatible with Minecraft? They're different questions.", "David Deutsch 1:18:24", "But I'm not sure what the Experience Machine is. Is it just a virtual reality world in which things work better than in the real world or something?", "Dwarkesh Patel 1:18:40", "So it's a thought experiment by Robert Nozick. And the idea is that you would enter this world, but you would forget that you're in virtual reality. So all the world would be perfect in every possible way that it could be perfect or not perfect. But it would be better in every possible way. But you would think the relationships you have here are real, the knowledge we're discovering here is novel, and so on. Would you be tempted to enter such a world?", "David Deutsch 1:19:08", "Well, no, I certainly wouldn't want to enter any world which involves erasing the memory that I have come from this world. Related to that is the fact that the laws of physics in this virtual world couldn't be the true ones, because the true ones aren't yet known. So I'd be in a world in which I was trying to learn laws of physics, which aren't the actual laws. They would have been designed by somebody for some purpose to manipulate me. Maybe it would be designed to be a puzzle that would take 50 years to solve. But it would have to be by definition, a finite puzzle, and it wouldn't be the actual world. Meanwhile in the real world, things are going wrong. And I don't know about this. Eventually they go so wrong that my computer runs out of power. And then where would I be?", "Against Advice for young people", "Dwarkesh Patel 1:20:11", "The final question I always like to ask people I interview is, “What advice would you give to young people?” So somebody in their 20s, is there something that you would like some advice you would give them?", "David Deutsch 1:20:24", "Well, I try very hard not to give advice. Because it's not a good relationship to be with somebody and give them advice. I can have opinions about things. So, for example, I may have an opinion that it's dangerous to condition your short term goals by reference to some long term goal. I have a good epistemological reason for that. Namely, that if your short term goals are subordinate to your long term goal, then if your long term goal is wrong, or deficient in some way, you won't find out until you're dead. So it's a bad idea because it is subordinating the things that you could error-correct now, or in six months time or in a year's time, to something that you could only error correct on a 50 year, timespan, and then it will be too late. So I'm suspicious of advice of the form, “Set your goal,” and even more suspicious of making your goal be so and so.", "Dwarkesh Patel 1:21:48", "Interesting. Why do you think the relationship between advicee and advice giver is dangerous?", "David Deutsch 1:21:56", "Oh, well, because it's one of authority.  I tried to make this example of, quote, advice that I just gave, I tried to make it non authoritative. I just gave an argument for why certain other arguments are bad. So, if it's advice of the form, “a healthy mind and a healthy body, or don't drink coffee before 12 o'clock, or something like that?” It's an argument.  If I have an argument, I can give the argument and not tell the person what to do. Who knows what somebody might do with an argument, they might change it to a better argument which actually implies a different behavior. I can contribute to the world and make arguments as best I can. I don't claim that they are privileged over other arguments. I just put them out because I think that argument works. And I expect other people not to think they work. Like we've just done this in this very podcast: I put out an argument about AI and that kind of thing, and you criticized it. If I was in the position of making that argument and saying that “Therefore you should do so and so!” that's a relationship of authority, which is immoral to have.", "Dwarkesh Patel 1:23:42", "Well, David, thanks so much for coming on the podcast and thanks so much.", "David Deutsch 1:23:49", "Fascinating, thank you for inviting me." ]
[ "https://en.wikipedia.org/wiki/Martin_Rees", "https://dictionary.apa.org/qualia#:~:text=n.%20(,sensations%20of%20heat%20and%20cold.", "https://nces.ed.gov/pubs93/93275.pdf", "https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorithm", "https://www.lesswrong.com/posts/HDyePg6oySYQ9hY4i/david-deutsch-on-universal-explainers-and-ai", "https://www.philosophyetc.net/2004/11/cartesian-theatre.html", "https://michaelnielsen.org/blog/interesting-problems-the-church-turing-deutsch-principle/", "https://en.wikipedia.org/wiki/Quantum_Turing_machine", "https://www.rhyslindmark.com/popper-criterion-for-politics/", "https://www.google.com/search?client=safari&rls=en&q=Bayesianism&ie=UTF-8&oe=UTF-8", "https://www.dictionary.com/browse/praetorianism", "https://www.google.com/search?client=safari&rls=en&q=Many+Worlds+explanation&ie=UTF-8&oe=UTF-8", "https://podtail.com/fi/podcast/the-lunar-society/4-tyler-cowen-the-great-reset/", "https://nickbostrom.com/papers/vulnerable.pdf", "https://www.google.com/search?client=safari&rls=en&q=Fun+criteria&ie=UTF-8&oe=UTF-8", "https://en.wikipedia.org/wiki/Experience_machine", "https://www.google.com/search?client=safari&rls=en&q=Robert+Nozick.&ie=UTF-8&oe=UTF-8" ]
https://www.dwarkesh.com/p/david-reich
David Reich - How One Small Tribe Conquered the World 70,000 Years Ago
[ "00:00:00 – Archaic and modern humans gene flow", "Dwarkesh Patel 00:00:00", "Today, I have the pleasure of speaking with David Reich , who is a geneticist of ancient DNA at Harvard. David's work, his lab's work and his field's work, has really transformed our understanding of human history and human evolution. It's fascinating stuff from many perspectives. In its own light it's very interesting. From the perspective of AI, which I plan on asking you about, it's interesting to understand human evolution and what that implies about the future of AI. Anyways, I'll stop doing the introduction.", "David, we were just chatting before we started recording about new information you've been studying since the book came out: archaic humans and the relationship between modern humans and Neanderthals . Can you explain again what you're studying these days?", "David Reich 00:00:49", "What's very interesting is that what we have data from now are modern humans, the sequences of people living today. We also have data from Neanderthals who are archaic humans who lived in western Eurasia for the last couple of hundred thousand years. We have now sequences from many Neanderthals. We also have DNA from Denisovans . Denisovans are archaic humans who were discovered from the DNA from a finger bone that was found in a cave in Siberia. It was not anticipated to be a new group of humans but was sequenced.", "So we have DNA from these different sources, plus bits of DNA from these sources mixed into modern populations. Based on this, in the last 10 to 14 years, we collectively have been piecing together an understanding of how modern humans are related to our closest relatives, who are now no longer with us in unmixed form: the Neanderthals, Denisovans and maybe others who are not yet sampled. The model that we have is really a model based on accretion. We start with the modern humans and then we add the Neanderthals once we obtain those sequences. We add the Denisovans and then the model doesn't quite fit and we add other mixture events to make the model fit.", "At this point, there's a number of these mixture events that seem increasingly implausible. If you know the history of models of how the earth and the sun relate to each other in ancient Greek times, there's these epicycles that were attached by the Greek, Hellenistic astronomer Ptolemy to make it still possible to describe the movements of the planets and the stars, given a model where the sun revolved around the earth. We've added all of these epicycles to make things fit. One wonders whether there's some pretty fundamental differences that might explain the patterns that are observed.", "Just to give you an example of this, the standard model is basically this: modern humans separated from a group ancestral to Denisovans and Neanderthals—these two groups for which we have sequences—somewhere 500,000-750,000 years ago. That's what the genetic papers, beginning in about 2012 and 2014. That’s still used as the explanation for the vast majority of the DNA lineages connecting them. Except for maybe 5% of the DNA, that's what we think is going on. Modern humans are one group and then there's a sister of modern humans, the Denisovan-Neanderthal group. They separated 500,000-750,000 years ago.", "But what's become very clear in an important series of papers since that time is that there are exceptions to this. One exception is the mitochondrial sequence , what you get from your mother and what she gets from her mother and so on. The shared ancestor there between Neanderthals and modern humans is only maybe 300,000-400,000 years ago, which is after the split that's well-estimated from the whole genome. We've also learned this is true for the Y chromosome , inherited from father to father. It too is only maybe 300,000 or 400,000 years separated between Neanderthals and modern humans. Like with the mitochondrial DNA, the Denisovans are much more distant, maybe 700,000 years to a million years.", "So the story told by these two parts of the genome is really different from the rest of the genome and incompatible with the main story. We know from these papers that maybe a few percent, 3-8%, of Neanderthal DNA comes from a gene flow event into the ancestors of Neanderthals from the modern human lineage a few hundred thousand years ago. It's tempting to think that both the Neanderthal mitochondrial DNA and Y chromosome come from that event. But the probability of that happening by chance is only 5% squared, which is very small.", "People have invoked epicycles, like natural selection for the mitochondrial DNA coming from modern humans, or natural selection coming from the Y chromosome coming from modern humans, somehow being more advantageous and pushed up in frequency. But that would have to happen on both these parts of the genome to produce this pattern. It just seems surprising.", "What's been put together is a complicated model and epicycle ideas like natural selection to make it work. It's not impossible. It may be the case. But one wonders whether profoundly different models might actually explain the data. That’s something we and others have been thinking about. Can there be other models?", "One example we've been playing with is one where there’s much more DNA in Neanderthals from modern humans than the 3-5% estimated. We can get such models to fit but here it’s 30% or 50% or 70%. In that view, Neanderthals and Denisovans are not sisters. In fact, modern humans and Neanderthals are just as qualified to be sisters as Neanderthals and Denisovans. In that case, maybe it's not clear what's modern and what's archaic. Are modern humans archaic? Are modern humans modern? Are Neanderthals archaic? Neanderthals are modern.", "What's also become clear in the last few years in a separate thread of research—not based on ancient DNA but based on using more powerful and sophisticated ways of pattern finding in modern data—is that modern humans are also highly substructured. We can see that even without ancient DNA yet. Of course, once one has ancient DNA it’s so much clearer. But it’s very clear that you can't explain modern African DNA without invoking very extreme substructure, as deep as the mixtures that contributed and mixed between Neanderthals and modern humans. In that mixture, which groups were archaic? Which were modern? Were they both archaic? Was one of them modern? Was one more closely related to Neanderthals and the possibly higher proportion of ancestry? It's not obviously wrong that the model’s very different from the standard one we currently have.", "Dwarkesh Patel 00:06:59", "Interesting. From your book I remember that there are lineages of modern humans that are over 200,000 years separated from other groups, like the San hunter-gatherers from everybody descended from Eurasia. Then you're saying that 100,000 years before that is when we have a sister lineage with Neanderthals. On the new findings about how closely related Neanderthals are to us and how much mitochondrial and Y chromosome DNA they share, what model do you think is the most plausible to explain why there's so much shared ancestry?", "David Reich 00:07:40", "I'm very agnostic. I really don't know.", "Dwarkesh Patel 00:07:43", "The models you were just talking about, it sounded like you thought they were low probability. Is there one you think is higher?", "David Reich 00:07:49", "The models that are considered to be standard dogma are now low probability. There's a standard dogma that's developed over an accretion of papers where the history gets patched. Someone sequences a genome. Someone performs an analysis. Someone proves something that wasn't known before. We claim a mixture event we didn't know about before, an event that we didn't know about before. That gets patched onto the current model, which is now a series of patches.", "Nobody has really rethought the whole thing very hard. The whole thing is not obviously very different. You can actually reassemble the whole model in a new way without doing it from the simple model up, but thinking about it again and seeing if it can be all related in new ways, In fact, it might be quite different in the way that I just described.", "Dwarkesh Patel 00:08:41", "Where did the most recent gene flow between Neanderthals and humans happen? I guess it’s not the most recent, because the most recent was 60,000 or whatever years ago. But the one you're referring to here, where physically did that happen?", "David Reich 00:08:52", "Even that's not clear. Probably such a thing would have occurred somewhere in the Near East or in western Eurasia somehow. It's not even clear where the modern human lineage was residing at that time. The modern human lineage, leading to the great majority of the ancestors of people today, was probably in sub-Saharan Africa for the last 500,000 years at least. It might be much more. Certainly our main lineage was in Africa, probably 3-7 million years ago.", "But in a period between about 2 million to 500,000 years ago, it's not at all clear where the main ancestors leading to modern humans were. There were humans throughout many parts of Eurasia and Africa with a parallel increase in brain size and not obviously closer ancestrality to modern humans in one place than in the other. It's not clear where the main lineages were. Maybe they were in both places and mixed to form the lineages that gave rise to people today.", "There's been an assumption where Africa's been at the center of everything for many millions of years. Certainly it's been absolutely central at many periods in human history. But in this key period when a lot of important changes happen—when modern humans develop from Homo habilis and Homo erectus all the way to Homo heidelbergensis and the shared ancestor of Neanderthals, modern humans, and Denisovans— that time period which is when a lot of the important change happened, it's not clear, based on the archaeology and genetics, where that occurred as I understand it.", "Dwarkesh Patel 00:10:29", "We're humans and you would think one of the things history would have figured out is how humans came to be. That's probably one of the biggest questions you could imagine asking of history, of archaeology, of anthropology, of genetics. The conventional model is the thing you're taught in the third grade. This is one of the first things you're taught about the world. The fact that many parts of it could be wrong… We're learning in greater detail what those parts look like, at the very least. We're doing that right now because of new technology that's being used by labs like yours. That's really wild. The audience might not be aware of how much of a change this is in our understanding of the human past. I just really want to emphasize that.", "The gene flow event you're talking about a few hundred thousand years ago happened between modern humans and Neanderthals. If it happened outside of Africa, then did that lineage go back to Africa and then come back out again? How should we think about that?", "David Reich 00:11:35", "The simplest version of this is that the main lineage leading to modern humans is in Africa at this point. As I understand it from talking with the archaeologists and the climatologists, Africa and the Near East are continuous ecological spaces at certain periods of time. So there's no difference between what's now the Near East and Africa. The fauna and the flora are pumped from Africa into the Near East or pumped from the Near East into Africa. The African range goes into that region. It's a place of overlap between Eurasian fauna and flora and African flora and fauna. That's a very natural place for interactions to occur, especially in periods of climate change. Animals, for example, from one region get pumped into the Near East. Then in another period of climate change, they get pumped into Eurasia or the rivers.", "Dwarkesh Patel 00:12:23", "Because there's a land bridge during different climatic events?", "David Reich 00:12:26", "There's always a land bridge, but the ecology with deserts and so on makes certain areas permeable or impermeable. In some periods of time, the Near East gets reclaimed by Eurasia somehow, ecologically. In other periods of time, it gets reclaimed by Africa. It's kind of a place of movement of flora and fauna out and in, again and again. I'm not an expert on this. The simplest model would be one in which an extension of the modern human substructure leading to us—the ones that some of those lineages coalesce to form people living today, the great majority of the ancestors—gets into the Near East several hundred thousand years ago and then mixes there with the ancestors of what we have now sequenced as Neanderthals. The skeletons that we have now are Neanderthals.", "That gene flow event occurs there. It's modern humans from Africa—or the part of the African population that extends into the Near East—pushing into Neanderthals at that time. We have evidence of modern human incursions since that time into Neanderthal parts of western Eurasia and also in intermediate periods, from the skeletal record and maybe even recent claims in the DNA data. Certainly the genetic data attests to a very strong event a few hundred thousand years ago.", "Dwarkesh Patel 00:13:43", "How many humans are around at this time? To the extent that all modern humans are descendants of this group, how many different groups of humans are there—no genetically distinct necessarily, but just separate locations or so forth—such that there's enough gene flow between all of them and there's a shared common descent.", "David Reich 00:14:08", "I don't know. Here’s one thing that is really interesting. A couple of years ago, we published a paper on relatively recent hunter-gatherer populations from mostly eastern and central Africa in order to be able to discern these deep population exchanges that we would really like to know in order to understand human evolution. This included individuals going back up to about 15,000 years ago, which is the oldest DNA from sub-Saharan Africa, which is not very old. Really we would like to be able to probe 2 million years ago, but we can't. But with 15,000-year-old individuals, what you see is many groups at many places all with very reduced diversity. In other words, they look like they're living in tiny populations of hundreds of people and not exchanging DNA with each other very often at all over time. We see this again and again.", "You take such a population and put it into a model. If it's this small, what will happen over time? It will lose its diversity over time, and it will become very non-diverse. Over time, Africa will have very little diversity. But of course, Africa today has great human diversity in it. What seems to be happening is that the whole continent of sub-Saharan Africa, and probably Eurasia at this time, is full of hundreds, thousands, tens of thousands of little groups that are communicating hardly at all with each other. They are in very small sizes and losing diversity. When we sample them, this is a group that leaves hardly any descendants at all, maybe none, amongst modern people.", "What's actually happening is that occasionally these groups merge together and recharge their diversity. Diversity is maintained in the ensemble of rarely mixing groups. You can't really appreciate the diversity by studying any one group. You actually have to think about the whole ensemble of hundreds or thousands of tens of thousands of them as preserving the diversity. There's some question about the migration rate amongst these groups, an archipelago of little groups losing diversity and going extinct at some level. But together there is enough recontact to recharge the diversity and create the incredibly diverse populations you see today, for example, in southern Africa or western Africa or central Africa.", "Dwarkesh Patel 00:16:19", "I want to go back to what you were saying. For hundreds of thousands of years—not just with modern humans, but with even the so-called archaic humans, Neanderthals, and other species—there's been selective pressure for larger brains. This is despite the fact that they're in different parts of the world. If you're in Eurasia or if you're in sub-Saharan Africa, either way we finally got to a state where the niche we're in rewards marginal increases in intelligence and is willing to bear the cost of that and keep chugging on that variable.", "Do we know why that was the case? What was happening in the world? What was happening with maybe primate brains such that the selective pressure was turning towards greater intelligence?", "David Reich 00:17:10", "That's a super interesting question. There's a lot of insight and ideas about this topic. It's an area to which genetics right now has contributed almost nothing. I wrote this book, Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past . It's a bit of a misleading title or a kind of a bait-and-switch title. The way in which it's a bait-and-switch title is you might read it thinking you're going to learn something about how we became whatever we think is distinctive about us relative to other animals.", "So I try very early in the book to say that unfortunately, with the genetic data available up to this point, we don't really have very meaningful insights about what makes us distinct, how we came to be distinct from other animals. What I'm going to tell you about is how we came to be, how we are from another perspective through mixture and migration. It's very surprising how we came to be and how we are through migrations and mixtures. A lot of people used to think that we were not mixed. In fact, it's been mixture again and again in the past in many populations we didn't anticipate.", "Your question was about how humans evolved into a distinctive niche that includes having a strong reliance on a large brain, putting a large amount of metabolic energy into the brain, and having a brain relative to body size that is much bigger than is in the past. I have two things that are striking to me about that. One of them is that genomics actually has promise to learn about those things. We are potentially on the verge of learning a lot about those things. We just don't have important new qualitative insights about that topic right now.", "The other one is that the large brain was already in place prior to the separation of Neanderthals and modern humans, and maybe Denisovans as well. The common ancestors of Neanderthals and modern humans probably had a brain as large as ours. It's not obvious that there's parallel evolution in multiple parts of the world. It may be that it's a sufficiently interconnected group that it's not a parallel evolution event but a single process.", "Dwarkesh Patel 00:19:21", "I have so many questions there. When you say that there's a single interconnected population, are you referring not only to basically all of Eurasia but also Africa?", "David Reich 00:19:34", "Possibly.", "Dwarkesh Patel 00:19:35", "Basically the whole world, even hundreds of thousands of years ago, can be thought of as having gene flow and being one global population?", "David Reich 00:19:43", "That's almost certainly true. We don't yet know the frequency of exchange between Africa and Eurasia, but this is 2 million years. It's a lot of time. Paul Salopek is walking around the planet in seven or eight years. People move incredibly quickly. Africa and Eurasia are not really separated by barriers that mean anything very important to a species like ours over periods of even dozens or hundreds or thousands of years. The idea that being in Eurasia or Africa is such a profound barrier that you would not expect people to move from one region to the other in periods of tens of thousands of years or hundreds of thousands of years, that's a strange idea.", "00:20:24 – How early modern humans dominated the world", "Dwarkesh Patel 00:20:24", "That's fascinating. By the way, it's so interesting that it's hard to think of the correct terminology when we say people. What kind of people are we talking about? Anyways, the ancestors of modern humans are at least in a position to have gene flow with other archaic humans in the Near East. But it doesn't seem like they expanded out hundreds of thousands of years ago.", "If you're right that a lot of the brain size had already been accumulated before this with Neanderthals, then they should have been pretty smart hundreds of thousands of years ago. But they're not expanding out. Then something happened 60,000 years ago. Then this group that descended from the people in sub-Saharan Africa just explodes all across the world. Something seems like it changed . What do you reckon it was?", "David Reich 00:21:18", "This is outside my area of expertise. I'm being very much like a scientist right here. I'm very sympathetic to the idea that it's hardly genetic. I think that this is cultural innovation. It's very natural to think that this is cultural innovation. Humans sometimes develop a new technique of storing information, sharing information, and so on. For example, writing allows you to record collective knowledge in a library, computational knowledge, large storage devices, and so on and so forth. Language, conceptual language, allows you to create a cultural body of knowledge.", "Dwarkesh Patel 00:21:59", "You talk in the book about the FOXP2 gene , which modulates language ability not only in humans but in other animals. Obviously, all living humans have it. It's at least 200,000 years old when the human lineage starts to split off. Everybody has language, so what do we think it was?", "David Reich 00:22:18", "Well, I don't know what we had, what the language was. It's almost certainly the case that Neanderthals were using sounds and communicating in ways that are probably pretty complicated, complex, and amount to some kind of language. But some people think that language in its modern form is not that old and might coincide with the later Stone Age , Upper Paleolithic revolution, 50,000 to 100,000 years ago, and might be specific to our lineage. There might be a qualitative shift in the type of language that's being used.", "There's been one incredibly interesting and weird line of genetic evidence that was so weird that a lot of people I know dropped off the paper. They just didn't want to be associated with it because it was so weird. They just thought it might be wrong. It's stood up, as far as I can tell. It's just so weird. This is one of the surprises that genetics keeps delivering. That's probably going to come across in this conversation. I am pretty humbled by the type of data that I'm involved in collecting. It's very surprising, this type of data. Again and again, it's not what we expect. It just makes me think that things are going to be surprising the next time we look at something that's really not looked at before.", "The line of evidence I'm talking about is one based on epigenetic modification of genomes. To explain what that means, the genome is not just a sequence of DNA letters, adenines, thymines, guanines, cytosines: ACTG. It also is decorated in anybody's cells by modifications that tell the genes when to be on and off, in what conditions. An example of such a modification is methylation in cytosine-guanine pairs . This turns down a gene and makes it not functional in certain tissues. This methylation is bestowed by cellular environments—and differs in different cells and also in different species—to identify which genes are more active or more passive. It's not directly encoded by the ACTGs locally. It's encoded by something else and sometimes even passed on by your parent directly. It's really very interesting.", "This can be read off ancient genomes. The methylation pattern survives in Denisovan and Neanderthal genomes. We can actually learn which genes were turned down and turned up. Work by David Gokhman , Liran Carmel , and colleagues created these maps of where in the Neanderthal genome, where in the Denisovan genome, and where in modern human genomes, genes are turned on and off. There's a lot of technical complexity to this problem. They identified differentially methylated regions, several thousand parts of the sections of the genome that were consistently and very differently turned down or turned up in Neanderthals and modern humans.", "They looked at the set of differentially methylated regions , roughly 1000 of them, that were systematically different on the modern human lineage. They asked what characterized them? Were there particular biological activities that were very unusual on the modern human specific lineage? There was a huge statistical signal that was very, very surprising and unexpected. It was the vocal tract. It was the laryngeal and pharyngeal tract. You can actually learn from little kids with congenital malformations, when a gene gets knocked out by an inborn error of genetic inheritance. For example, kids will have a face that looks different or vocal tract that looks different and so on. You know what the effect of knocking out these genes is. We can actually imply directionality to how the modern human specific changes are.", "The directionality is to change the shape of the vocal tract—which is soft tissue not preserved in the skeletal record—to be like the way ours is distinctive from chimpanzees. The shape that we know is very helpful for the articulation of the range of sounds we use that chimpanzees don't have in their laryngeal and pharyngeal tract. Even though we don't have surviving hard tissue like skeletons from this part of the body, we now have this methylation signature which suggests that these changes have occurred specifically on our lineage and are absent in both the Neanderthal and Denisovan lineages. If you think this change in the vocal tract is important in language, which seems reasonable, then maybe that's telling you that there are very important changes that have happened in the last half million or a few hundred thousand years, specifically on our lineage that were absent in Neanderthals and Denisovans.", "Dwarkesh Patel 00:26:57", "To the extent that humans have had it for hundreds of thousands of years, it's not clear then why humans weren't able to expand out of Africa and…", "David Reich 00:27:11", "We don't know that. We just know that today we have it. It could have been only a couple of hundred thousand years ago or 100,000 years ago that these changes happened.", "Dwarkesh Patel 00:27:19", "But then we know all modern humans have them, different groups of modern humans.", "David Reich 00:27:23", "Separate 200,000 years ago.", "Dwarkesh Patel 00:27:25", "So we know it's at least that old, right?", "David Reich 00:27:27", "Right. Although there is gene flow between all groups of modern humans, at least at low levels, going to 100,000 years. It's just that most of the separation between Khoisan and other groups happened 200,000 years ago.", "Dwarkesh Patel 00:27:38", "Let me motivate for the audience why this is so fascinating. First, it's obviously interesting. 70,000 years ago there are half a dozen different human species around the world that are pretty different. Fast forward to now, there's one. The fact that happened is wild. Another reason it's interesting for me is because I talk to people who discuss AI. Some have a strong perspective that you just make the model bigger. It wants to learn so you make it bigger, give it more space, and it'll become intelligent.", "One piece of evidence they use is that something happened with the human brain… the brains got bigger… we get humans dominating the entire earth. That's the perspective that if we make these AI models bigger we’ll get something very powerful on the other end. To the extent that story is accurate or inaccurate, it might have interesting implications for AI. That’s wild. Our anthropology or genetics about the ancient world maybe has some Bayesian update on how well we think these AI models will do in the future.", "David Reich 00:28:50", "One thing your comment makes me think about is that it doesn't map on in a simple way as an analogy. The human brain is maybe only three times larger than a chimpanzee's. That's not the kind of increase that computability has had since 40 years ago, which is many orders of magnitude. I'm aware of studies that have compared raw computability of chimpanzee babies to human babies. In fact, it's similar. For example, the ability to solve logic puzzles is pretty similar between chimpanzees and humans.", "Some people argue that humans are not even more intelligent than chimpanzees at some fundamental ability to compute, and that what makes humans distinctive is social learning abilities. That's where a lot of our ability has gone: our ability to see other people, to empathize with them, to copy them, to incorporate bodies of information learned by other people. I'm not an expert in this topic, but it's a very appealing group of ideas. The adaptations humans have are ones that allow us to access a rich amount of shared knowledge and not just rely on figuring out each thing. That's not obviously the same as just adding more computability. Maybe it has some similarities.", "Dwarkesh Patel 00:30:17", "I still don't understand. Is the answer that we just don't know what happened 60,000 years ago? Before humans and other modern humans and other types of humans were interacting, but no one was in a dominant position at least in Eurasia. Now humans not only dominate, but in fact we drove them to extinction. Do we have any idea what changed between that time?", "David Reich 00:30:43", "This is really outside my expertise. There are ideas that have been floated, which I'll summarize possibly badly. In every group of human beings of hundreds of people—which is the size of a band—or sometimes a thousand people, they accumulate shared cultural knowledge about tools, life strategies, and build up shared knowledge more and more. But if you have a limited-sized group that's not interacting with a sufficiently large group of people, occasionally this group has an information loss. There's a natural disaster, key elders die, and knowledge gets lost. There's not a critical mass of shared knowledge. But once it goes above some kind of critical mass, the group can get larger. The amount of shared knowledge becomes greater. You have a runaway process where an increasing body of shared knowledge of how to make particular tools and patterns of innovation, language, conceptual ideas, run amok.", "An example I've heard talked about in this context is what happened with Indigenous Tasmanians . About 10,000 years ago, the ancestors of people in Tasmania—this large island south of Australia—were continuous with the aboriginal populations of Australia. They had fire, but they lost it because it got forgotten somehow. It's a cold place. They just forgot it. The cultural knowledge lost it.", "What you actually have in the world 50,000 years ago is tens, or hundreds, or thousands, or tens of thousands, of different human groups. They each possess local knowledge and rarely exchange with each other. When we get lucky in ancient DNA and sample them, they're quite isolated from each other and have reduced diversity in the last tens of generations. The great majority of them go extinct, wiped out by natural disasters or other groups of humans or animals.", "You have a vast experiment with an archipelago of these groups. What might be happening is that you just have a process of accumulation and loss of cultural knowledge. Since there are many of these experiments going on, maybe something takes off somewhere. Maybe that's what happened 50,000 to 100,000 years ago in people who all have the capacity to do these things.", "Dwarkesh Patel 00:33:09", "One thing I didn't realize until I read your book is how small the population that expanded out into Eurasia was, and how small even generally the human population was 50,000 to 100,000 years ago. I remember one of the papers you cited said that there might have been a population bottleneck around this time period. People talk about the Toba eruption . I don't know if that's the cause, but there's many potential causes. Anyways, I remember from somewhere that the ancestors of everybody in Eurasia was initially like 1000 to 10,000 people. How small was the human population that was the seed of this modern period?", "David Reich 00:33:52", "By bottleneck we mean founder event , a relatively small number of people giving rise to a large number of descendants today. The bottleneck occurred well before the mixture with Neanderthals, which is probably somewhere like 50,000 years ago, plus or minus 5000 years or something. We don't know where it occurred. Maybe it occurred somewhere in Arabia. Maybe it occurred somewhere in the Nile Valley. Maybe it occurred somewhere else. But it occurred maybe thousands, or even tens of thousands, of years before the encounter with Neanderthals that pushed some Neanderthal DNA into modern humans.", "One way to see this is that in fact, this was not an unusual thing. This was not unusual to have a group with low diversity. The great majority of African groups would have had very low diversity. The one that started expanding into Eurasia also had low diversity, but it was so successful it didn't mix with very many other groups and recharge its diversity by remixing with other groups. Maybe it also expanded inside of Africa. There are lots of reasons to think that the expansion of the early modern human group outside of Africa would have been accompanied by a within Africa expansion of the same group, and that it would not have been unidirectional.", "One way to look at the expansion of modern humans into different parts of Eurasia where we have data is almost as a kind of sort of forest fire. It throws sparks into different parts of Eurasia and interacts with the local people. Look at the first modern humans of African and Near Eastern origin who get to Europe, where we have the best data. We have a number in western Siberia, where we have the best data so far, of these very early ones from about 45,000 to 40,000 years ago which are called Initial Upper Paleolithic . A good fraction of them had Neanderthal ancestors in their last 2-8 generations. That's a kind of crazy result. We have only a couple of dozen or so of these very early humans. A very large fraction of them recently mixed with Neanderthals in their ancestry.", "So a model that might explain the data is that you have sparks coming out of a kind of forest fire of humans expanding in the Middle East or Near East. They come in and they start going to places like western Siberia or parts of South Asia or parts of Europe. They mix with the Neanderthals. They produce these mixed populations, like these initial Upper Paleolithic groups we sample in the record, and they all go extinct including the modern human ones. There's just extinction after extinction of the Neanderthal groups, the Denisovan groups, and the modern human groups. But the last one standing is one of the modern human groups. That's what we happen to see, the interbreeding event that we see. The great majority of the ancestors of modern humans, for example in Eurasia, are not from the initial Upper Paleolithic ones. They’re from a later wave from the core in the Near East after 39,000 years ago, that repeoples a place that's been affected by these sparks coming out of the same region. Those groups too disappear.", "Dwarkesh Patel 00:36:54", "That's so fascinating. The group that started 60,000 years ago and eventually makes it around, that one doesn’t survive. The group that started 39,000 years ago is also replaced. We’ll talk later about the Yamnaya . You can just keep going. The hunter-gatherers were replaced 8500 years ago by the farmers coming from the Near East, and then after that by the Yamnaya from the steppe . It is interesting that a group comes there and is replaced by the next group. That group stays there and is replaced by the next group.", "David Reich 00:37:26", "That's probably right at some important level. It's not a triumphal march of superiority and inferiority with the group that now comes in having advantages and somehow establishing itself permanently. What you have is a very complicated situation of many people coming together and natural disasters or encounters with animals or encounters with other human groups. It all results in an almost random process of who spreads or ends up on top and other groups coming in afterward. It may be that from a big picture perspective you end up having African lineages spreading into these different parts of groups, different parts of Eurasia. That's certainly what happened. At a local level, it would be very difficult to understand what's going on.", "Dwarkesh Patel 00:38:14", "The big picture is interesting in two ways. First, you're not thinking crudely in terms of the major species or the major subgroups of humanity, like Neanderthals, Denisovans, and modern humans. Even among these, there were so many subcategories of different groups in this archipelago. If you do a fine-grained analysis, that's even more fascinating than that. There’s so much contingency and randomness in that process", "David Reich 00:38:42", "I think that's right. There are lots of analogies that you have later. There's European farmers encountering steppe migrations. There's Native Americans encountering Africans and Europeans as they come from the Old World. There's various other groups encountering other groups. You have people who cognitively or culturally have all the capacity to thrive in other contexts. But just because of the nature of the interaction that happens, one group declines demographically and one group doesn't. It's just complicated.", "It's very tempting to think that at some level—I'm not trying to be politically correct—that it's something innate, some better biological hardware that makes it possible for these African lineages to spread into Eurasia. I have no good insight into that topic. I don't think there's very good genetic evidence or any other kind of evidence to say that that contributed in a very strong way. It’s just complicated. We certainly have many modern examples where people with better or more competitive cultural complexes encounter each other and the ones that are more organized in a certain way sort of thrive more demographically somehow.", "00:39:59 – How bubonic plague rewrote history", "Dwarkesh Patel 00:39:59", "Let's jump forward then, since you mentioned this. Agriculture was developed in the Middle East like 10,000-12,000 years ago. Later, the population of Native Americans declined because of disease. One of the hypotheses that you talk about in the book is that potentially this happened with respect to people in Europe by the Yamnaya with the bacteria that causes the bubonic plague, Yersinia pestis . The question I'm trying to ask is going back a bit. James Scott , who I think just died a couple of weeks ago, wrote his book, Against the Grain . The whole book is like, “Agriculture sucked but we were forced to adopt it because it allowed some humans to organize nation states that were very abusive. But it did allow them to get the barbarians and co-opt them because they needed the labor to do this monotonous activity.”", "One thing I didn't realize until I read that book is just how new most of the diseases that afflict humans today are, everything from cholera to typhus to tuberculosis if you just go down the list. It might have been because of agriculture: domestication of animals and the density that was created. The theory he talks about in the book is that potentially the reason the hunter-gatherers, the “barbarians,” couldn't fight back against these early nation-states was because they were getting killed off by the diseases. I don't know how much evidence there is for this. Basically, the question I'm trying to ask is about the way in which Europeans encountered Native Americans in the New World. Did that just happen again and again throughout history? If you go back to Europe 9000 to 5000 years ago, is that just what human history has been like? That wasn't a one-off event?", "David Reich 00:42:00", "There's an amazing book by Kyle Harper . It's called The Fate of Rome . He's a Roman historian. It's a history of three major plagues in the Roman period, two of which are really not even very well known. It argues that the decline of the Roman empire is due to just weakening as the result of plagues and other climatic, biological, climatological worsening events. There is a lot of reason to think that some of these events have been recurrent throughout history. It's not just a difference between farmers and hunter-gatherers, but it’s actually a lot of different types of interactions that are occurring.", "The example that you mentioned is something that's been a big shock from the ancient DNA revolution. This is now maybe eight years, nine years old. The first large number of DNA sequences—from people who lived 6000 to 4000 years ago in the steppe north of the Black and Caspian Seas and in Europe—were being published around 2015. This group in Denmark, led by Eske Willerslev and Kristian Kristiansen and colleagues, looked at their DNA . They discovered in their sequence, from the 100 or so humans they sequenced, that there was also pathogen DNA. In 5-10% of the random people they sequence from around 4000-5000 years ago, there was Yersinia pestis , the agent of the black death, but actually without the plasmid that contributes to bubonic plague. That's required for flea-rat transmission. So it must have been pneumonic plague with an aerosolized transmission or something. 5-10% of random deaths means that the percent of people who were dying must have been even higher, because they weren't detecting everything that was there.", "There’s a study by another group , Johannes Krause and colleagues, of people in plague pits in London from the 1300s epidemic . They found that when you apply this method to people we know died of black death, you only find a quarter of the people. So the rate was even higher. If people are bacteremic when they die, if they have bacteria in their teeth, they almost certainly died of that agent.", "A paper just came out a few weeks ago in Scandinavia . It was looking at these tombs from about 5000 years ago of farmers who were just on the verge of encountering people from the steppe. A huge fraction of them have Black Death when they die. They're buried in tombs and normal and they have rates that are even higher than 5-10%. It’s this whole pedigree with many, many generations. It's not all at the same time. It’s like the parents and generations and generations with a very large fraction. Well more than 10% have Black Death and have Yersinia infection.", "So it looks like this particular agent has been killing people for 4000-5000 years in western Eurasia. In fact, it’s killing a scarily large fraction of the population. As the quantitative person I am reading this literature, I think people are embarrassed by the implication. The implication is that a quarter to a half of deaths in this entire period are from this. It's so unbelievable and so ridiculous that such a high proportion of people over such a long period of time are dying from this one agent.", "People don't even say it. They just publish one paper after the other, publishing more sequences. They just don't think about the implications of such a high rate of death. Yet it's really hard to imagine that people have bacteria in their blood and they're not dying of these things. It doesn't seem that people are selectively picking tombs. These are tombs that are buried properly. They're not grave pits.", "The implication seems to be that this one agent we happen to be able to detect is killing a very large fraction of people in western Eurasia over this period. So what's the implication of that? One thing is that it seems to be coming from steppe rodents, probably. Maybe the people on the steppe—I mean, they are still dying of it—are somewhat more protected from it. Then it spreads into farming Europe, maybe 5000 years ago, which is when we start to see it. Maybe this results in disorganization of the population, giving such a high rate of death. Maybe it creates a type of situation that the Europeans encountered when they got to the Americas, where societies were disrupted.", "In the last few years, we had Covid-19. It killed a half percent of the world population or something like that. It was so disruptive. This thing is killing a third of people or half of people randomly. It’s randomly killing people with cultural knowledge, randomly ripping into structures. Was it Montezuma who died or one of his parents, resulting in civil wars? You have the Inca when the Europeans encountered them, just disrupting the cultures that were there. Maybe this would have created a situation where there was disruption in the old ways of life. Maybe combined with other things, or even just by itself, it could have created an opportunity for people to move in from elsewhere, even though they were not as densely spread.", "There’s a big observation we haven't talked about. It's something that we as an ancient DNA community have been looking into again and again now and keep making progress on. About 5000 to 4500 years ago in Europe, there's a radical transformation in the ancestry of Europeans. An example of this is what happens in Britain. About 4500 years ago, the farmers who are there arrived there 6000 years ago. They build Stonehenge . The last big stones of Stonehenge go up 4500 years ago. Within 100 years, 90% of them are gone.", "They're replaced by migrants from the continent bearing prop majority ancestry from the steppe north of the Black and Caspian Seas. This is one place where we know what happened very well, but we see it all over Europe. We see it in Spain. We see it in Portugal. We see it in the Netherlands. We see it in Germany. We see it in Czechia. We see it in Italy. We see it in Switzerland. We see it everywhere. This wave of people from the east arrives. It displaces these successful, impressive, densely packed farmers with new people who have this ancestry from the east. They are not as focused on farming, although some of them are, as the people who came before.", "Dwarkesh Patel 00:48:17", "This is so crazy. Just for the audience if you're keeping tally, we have this one bacteria, Yersinia pestis , that’s responsible. I mean we learned in grade school that it's responsible for killing a third of Europeans more recently, causing the Black Death. There's even theories that this helped with the Industrial Revolution because it drove wages up in Britain. Because of higher wages, they had to make machines… Robert Allen , the economist, has a theory about this. So it potentially causes the Industrial Revolution. That one’s more tentative.", "David Reich 00:48:48", "It causes inflation. In the medieval one, it created a lot of inflation. The serfs, as I understand it, were sort of on fixed wages and so they had to be paid more. It basically inflated out their seigniorial responsibilities.", "Dwarkesh Patel 00:49:03", "So that's one of my things. The other is that during the Bronze Age , it allows the steppe people to basically replace the existing hunter-gatherer or farmer population in Europe. In literally all of Europe, people from the eastern steppes replace the existing people like the ones who built Stonehenge. Kyle Harper's book talks about this. The Plague of Justinian , the final one that killed off the empire, was also Yersinia pestis .", "David Reich 00:49:32", "Definitely. That's documented with genetics.", "Dwarkesh Patel 00:49:34", "We have the fall of the Roman Empire and at least once the replacement of the population in Europe. The second time basically, modernity happened afterwards. It's crazy for one disease. Potentially in the New World as well, I don't know what the percentage of deaths was.", "David Reich 00:49:51", "It's estimated to be not the primary pathogen, but who knows? In any case, there's others too. Some of the other plagues in the Roman empire are definitely not Yersinia .", "00:50:03 – Was agriculture terrible for humans?", "Dwarkesh Patel 00:50:03", "That's crazy. It’s not only disease, but this one in particular has had this big a role in human history.", "There are anthropologists and historians who have different theories about what the early history of humanity looked like. What kind of gods did they worship? How big were the communities? This informs their political philosophy today. James Scott obviously being the main example here.", "Does genetics shed any light on this? Was agriculture in fact terrible for humans? Were the first nation-states abusive? Is this stuff that is just not available through ancient DNA?", "David Reich 00:50:44", "We have indirect information about some of these things. One thing that you might hope to learn about is whether our genomes reacted to the innovation of agriculture in a disrupted way. You might think that our genomes would have been in some kind of steady state. Natural selection had adapted us to the previous environments we were in. You might expect that in reaction to a change so economically, dietarily, cognitively transformative as agriculture, the genome might shift in terms of how it adapts. You might actually see that in terms of adaptation on the genome. You might expect to see a quickening of natural selection or a change. I don't think we know the answer yet to whether that's occurred, although they're beginning to be hints. We could learn that from the DNA data.", "Dwarkesh Patel 00:51:34", "Hints in which direction?", "David Reich 00:51:36", "There's an increasing view amongst geneticists that natural selection is a process where there's relatively little directional selection to adapt to new environments. One piece of evidence connected to this is the finding that there's very few genetic changes that are 100% different in frequency between, say, Europeans and East Asians, or West Africans and Europeans, or West Africans and East Asians. If there had been genetic variants that had had modest selective advantages, they would have arisen 0.5-2% year by year, that's actually a lot. In a few hundred generations, they would have risen from very rare to very common, and in fact gone to 100%. There's thousands of generations separating Europeans and East Asians, and West Africans and Europeans, and so on. If that was a common process in evolution, we would expect many genetic changes to be 100% different in frequency between Europeans and East Asians, or West Africans and Europeans. We see almost none.", "What that suggests at some level is that there's not strong adaptation over the last 50,000 years. If there were, we would have seen genetic variants driving to 100% frequency difference across different groups around the world, which have hardly been connected with each other genetically over the time frame that we're talking about. We don't see those variants. So maybe selection hasn't been important. But maybe over a shorter period of time, selection has quickened and variants have started rising in frequency in the last 100 generations or something like that. We might be able to appreciate that.", "Maybe we could see whether there's been a quickening of natural selection over that time period. The view amongst common trait geneticists is that we've been at a kind of steady state where the natural selection that does occur is just there pushing down slightly bad variants. It’s not adapting to new situations. We're at a kind of stable point. So it's not clear how that works, because over a scale of 2 million years we're clearly genetically quite different from our ancestors. Our brains are bigger. We do some things differently. Our proportions are different. Yet over the last 200,000 years, we are not profoundly different. There are not genetic changes that differ dramatically across populations.", "There's a kind of disconnect. It's tempting to think evolution has stopped from one perspective, because there's so little fixed differences. On the other hand, if you look in the last 10,000 years in West Eurasian DNA, which we're doing now, it looks like a lot of change is happening. It's a very confusing situation. It feels like we don't really understand what's going on, but there's a lot to learn.", "Dwarkesh Patel 00:54:22", "Do you have a sense of what those changes might look like, or is it too early to tell? Obviously 10,000 years ago, we're talking about the beginning of agriculture.", "David Reich 00:54:29", "We're working right now on a study documenting changes over the last 10,000 years in Europe and western Eurasia based on tracing changes in about 8500 high-quality DNA sequences from people from this period. They’ve been collectively accumulated by us and others. We've been working very hard at this, led by Ali Akbari in my group. We think we have many hundreds of places where there's been very strong change in frequency over time, where we're confident, We think there are many thousands where we can see traces, the whole genome is seething with these changes in this period.", "Dwarkesh Patel 00:55:14", "Can you give us a sneak peek? Do we know what phenotype any particular ones correspond to?", "David Reich 00:55:18", "It's very clear that there is extreme overrepresentation of change on variants that affect metabolism and immune traits. If you look at traits that we know today affect immune disease or metabolic disease, these traits are highly overrepresented by a factor of maybe four in the collection of variants that are changing rapidly over time. Whereas if you look at traits that are affecting cognition that we know in modern people modulate behavior, they're hardly affected at all. Selection in the last 10,000 years doesn't seem to be focusing, on average, on cognitive and behavioral traits. It seems to be focusing on immune and cardiometabolic traits, on average, with exceptions. On average, there's an extreme over representation of cardio metabolic traits.", "Dwarkesh Patel 00:56:21", "The immune thing makes sense. There are obviously more diseases. In what direction is the metabolic thing pointing?", "David Reich 00:56:27", "One example of this is that there's very clear downward selection against body fat, against predisposition to high body mass index, and predisposition to what today manifests itself as type 2 diabetes. That genetic combination in West Eurasia has been pushed down again and again over the last 10,000 years under the pressure of natural selection, without a doubt. Its action on many, many independent genetic variants is pushing in the same direction in an overwhelmingly statistically significant way.", "One possible interpretation of this—and this is speculative—is that you're shifting from a mode of survival that's more feast and famine to one where food is more regular. It's not as advantageous to store fat. There’s selection against fat storage.", "Dwarkesh Patel 00:57:20", "That story seems to point against the narrative that agriculture was terrible. If there had to be selection against storing fat, that seems to suggest that things must have been pretty good.", "David Reich 00:57:35", "At some level, it could be terrible on the individual level and good on the population level. I'm not doubting the evidence that you're maybe referring to, which is that there's a lot more skeletally unwell people associated with the beginning of agriculture than there are in the hunter-gatherer period. On an individual level, life could have been experienced more harshly.", "In terms of survival, different animals have strategies of investing less in their young but having many more young, or investing more in their young and having fewer young. The hunter-gatherer strategy might be the latter. The farmer strategy might be having more young. Some of them survive longer or something. More of them survive and, on average over a lifetime, there might be stable enough food such that if you don't rely on such adaptations, it might be better.", "00:59:28 – Yamnaya expansion and how populations collide", "Dwarkesh Patel 00:59:28", "One thing I'm very curious about is whether we have any sense of what it looked like when different populations came into contact with each other. In many of these cases, you're talking about 90-95% of the population being replaced, to the extent that sometimes you refer to them as ghost populations. Only in the aftermath, with this modern genetic technology, can we even tell that there was some other population here. We can see the trace of that. I know there's obviously many different cases and many different cases look different in terms of how violent it was or what the clashes look like.", "If you focus on one example, the Yamnaya become a dominant group in so many different parts of Europe. It's not like Genghis Khan , where it's like one empire. There's the great Khan who everybody's pledging fealty to. They're not organized in that way, but they're still organized enough that they can go from place to place like, “We are the Yamnaya and we're taking over.” What did that concretely look like?", "David Reich 01:00:40", "That's super interesting. I'm going to back up a little bit. In my book, I have a section where I describe our initial findings and the conversations we had with archaeologists about them. Ancient DNA has been very disruptive to conventional understanding of the past.", "We found evidence of massive disruption of the local population in Germany about 4500 to 4700 years ago, based on the arrival of people from the steppes north of the Black and Caspian Sea. Some of our archaeologist co-authors were very distressed by the implication. Because after the Second World War, there had been a reaction against the initial idea people had based on archaeology. In the beginning of the 20th century, when people would see new types of pots in a certain layer of the excavation, they would argue that this was the arrival of a new people, coming through invasion or through movement into a region. It was a very disruptive event like the arrival of the Corded Ware complex or the Bell Beaker complex . It was seen as a very disruptive event, mediated by invasion.", "The Nazis used this idea to argue that these were spreads of Aryans moving across the landscape, being very disruptive and violent. The reaction after the Second World War was to say, “We don't know this.” When you see the arrival of new types of material culture—pots, tools, or ways of organizing life—what you might be seeing is more so the spread of culture. You might be seeing something like people adopting the use of cell phones, which can be used by people of very different backgrounds. Or it could be a new religion spreading. It’s not actually the movement of people.", "In fact, how could there be a big movement of people? You’re looking at densely settled Europe with well-developed agriculture? How could new people coming in from outside unseat these people, disrupt these people, especially once you have farmers who are densely settled. How could these be pastoralists coming from somewhere else? They're not as dense on the ground. In India, the British were in control, the Mughals were in control for hundreds of years but made hardly any demographic impact. How could people from outside with less density make much of a demographic impact?", "But then you look at the genetic data and there's a 50-90% population disruption. You take the DNA from people after these events. Almost all their ancestors are from far Eastern Europe, right across most of Europe. The DNA proved that idea was wrong. It was very disruptive. The question you had is, what does it look like on the ground? The DNA results were extremely disruptive to people in archaeology who had made these arguments that large-scale migration, large-scale disruption probably didn't occur in the past. It was a real challenge to our understanding of prehistory.", "It was a prime example that's been important for me in showing that we really don't know what the past was like until we actually look at it and have hard data telling us what it's like. Our guesses, our models, including many of mine, are likely to be wrong because when we have hard data, we're surprised. I'm sorry for that long preamble.", "What's happened in the last few years is there's been something of a reconciliation after the book. Archaeology is trying to reconcile itself with the DNA data. It's arguing about the subtlety of these interaction events. People talk about what's happened in Britain, for example. Maybe the arrival of the Beaker phenomenon, which happened about 4,500 years ago, isn't an invasion. Maybe it's a kind of peaceful event. The reason we're seeing such a disruption might be that the previous people cremated their dead and the Beaker people buried their dead. So it looks like a much more abrupt change than it was.", "In Iberia there's a 40% arrival of foreigners from the east and 60% local people, but the Y chromosomes are completely replaced. The local men don't contribute their DNA to later populations. It looks like that must be extremely disruptive to the local male population. People are saying, maybe this is female mate choice . Maybe it's not what you think it is. Maybe it's not what happened 4,000 years later amongst the descendants of the Iberians in the Americas. Today in Colombia, 95 percent of the Y chromosomes are European. 95 percent of the mitochondrial DNAs are Native American. We know what happened there. It wasn't friendly, peaceful, or nice. Maybe what happened in Iberia 4,000 years ago was much more peaceful, much calmer.", "If you look at the details in Iberia, the period of this change is actually over 500 years. But if you look at a micro scale, now that we have better data, it's immediate in each place. In Southern Spain, it's very fast. In central Spain, it's a little later but still very fast. So actually there are these rapid changes occurring in one place or another.", "People thought in Britain maybe this was a slow process, but we now have unpublished data from the Netherlands. It's clearly the same population of Beaker people that's spreading in Britain. There it's very disruptive. You actually have the whole series of people before and after. The earlier Corded Ware people are local, which is actually very unusual for Corded Ware. They're actually local people adopting the religion of the Corded Ware, but with mostly local ancestry. Then the Beaker arrival is incredibly disruptive. There's almost no continuity, very little continuity. Probably what’s happening with the Beaker individuals is that one way or another, you have people who expand demographically and rapidly displace other people over a period of well less than a century.", "Dwarkesh Patel 01:06:43", "Do we know whether they were organized? In more modern versions of this, when Cortés goes to the New World, he's serving fealty to the emperor of Spain. Or you have the Mongols and Genghis Khan. In this case, I assume there wasn't enough hierarchical organization for something like that. But there was enough organization for a persistent invasion. We’re going to keep going town to town, settlement to settlement, until we’ve reached the ends of Europe.", "Were the Yamnaya just lots of different independent groups doing this at the same time? How organized was this?", "David Reich 01:07:28", "We don't know. There are debates even about that. One example I've heard archaeologists I work with think about is the Comanche in the US Southwest. They were another horse-based, expanding group. They expanded dramatically in parallel to the Spanish expansion and alongside the US expansion, before encountering the militarized United States at some point. It’s local. There’s local bands of people expanding. They go on campaigns. They expand to certain areas.", "The Beaker people and the Corded Ware people were contemporary to ancient Sumer and a lot of the Egyptians we actually have written history from. It's not so ancient. They weren't writing, but they were contemporaries of these people not so far to their south. So we really don't know what was going on.", "We really don't know what was going on, but imagine if you were part of a community where there's a certain culture. We’re getting this from reconstructions from Indo-European myth . That’s probably the class of cultural shared knowledge these people were operating from because we think these people were the spreaders of Indo-European languages in this part of the world. At a certain age, males would band together and go on raiding parties and so on, and then maybe settle down later in life. You can imagine a process where, built into the culture, you have a process of expansion, exploitation.", "One thing that's really interesting that has actually emerged in the last few years—and was not really strong at the time that I wrote my book—was an understanding of the relationship between the Yamnaya and groups like the Corded Ware and the Beakers.", "The Yamnaya are these groups that thrived between about 5,300 and 4,600 years ago in the steppes north of the Black and Caspian Seas. They're probably the first people to domesticate the horse. That's arguable. They use the horse and the cart, which was newly invented, and the wheel to exploit the open steppe lands and be able to economically expand much more rapidly.", "They're the world's first extreme mobile pastoralists, but they can't get further than the steppe. They expand into Europe. They expand into the little island of the steppe that's in the Great Hungarian Plain in the Carpathian Basin , and they stop. They can't expand their way of life to the forested parts of Europe, which is most of Europe. Somehow, the ancestry of the Yamnaya gets absorbed by the Corded Ware group, and then later the Beaker group.", "That takes it further through Europe. But the Corded Ware group is quite different from the Yamnaya culturally. In fact, a lot of archaeologists think that they're so different, they can't be the same. They have some shared features, but the Corded Ware have many different traditions. One possibility is that the Yamnaya expanded and they encountered early Corded Ware. The Corded Ware learn some of the adaptations of the Yamnaya. Then they actually take Yamnaya women and absorb them into Corded Ware mostly male communities. They create a new community and that group expands.", "One of the mysteries of the Yamnaya expansion was that everybody had this cognitive bias to think this is very male driven. People have these Indo-European notions of male-centered mythologies and so on. This must be an extremely male-centered migration. You look at the genetic data. You look at the Y chromosomes, which track male migration, and the mitochondrial sequences, which are more sensitive to female migration. It looks like the steppe expansion from the east to west involves both sexes. Both males and females expand. People have found this confusing. There's been a lot of incredulity about this. Most people expect to see that it's an even movement of males and females, but it's quite clear that the bias is not so strong.", "We think the most likely explanation for what's happening now is that it actually is a male-biased process, but it's one that's interrupted.So the Yamnaya expansion is very male-biased. It expands to the edge of the range. They encounter the Corded Ware complex people. Then what happens is the Corded Ware complex people interact with the Yamnaya people and in fact the Yamnaya people actually lose out in that interaction.", "In fact, the Corded Ware males absorb and take Yamnaya females. They actually also take farmer females. You actually see these sites in early Corded Ware sites in Czechia, where both things are happening. Females from farmers and females from Yamnaya are being absorbed into the Corded Ware community. Then they expand further.", "So what you actually have is a two-step process. You have a male Yamnaya expansion, and then that ancestry from the steppe is carried further through females being absorbed into the Corded Ware. Then you have another male-driven expansion under the Corded Ware and so on. That brings both female and male Yamnaya lineages West, but not always with the Yamnaya ancestry being associated with the intuition that you would think it's domination.", "The same sort of parallel thing in another part of the world is what you see in remote Oceania in the Southwest Pacific. Look at Vanuatu, which are some of the first islands that people got to about 3,000 years ago in the Southwest Pacific. Moving to this other part of the world, if you look at New Guinea and Australia, people are there almost a little bit after 50,000 years ago. People are in the Solomon Islands and the Bismarck Archipelago to the east of New Guinea, maybe 35 to 40,000 years ago, and they stop.", "The Pacific has all these fertile places that are good places for people to live. It's completely empty of people until 3,000 years ago. Suddenly, these people from Taiwan go through the Philippines. They skirt the edge of New Guinea and the Bismarck Archipelago. They get to Vanuatu and Fiji and Tonga and New Caledonia and Samoa about 3,000 years ago, super rapidly in the guise of something called the Lapita cultural complex .", "If you look at the DNA of the people from this, they're almost entirely East Asian in ancestry. They look like early Taiwanese people. Today people in Vanuatu and Fiji and Tonga and New Caledonia have only 10 percent of this DNA. So something else happened afterward. The first people are almost entirely East Asian via Taiwan and the Philippines.", "Then you look at later DNA from the same part and 2,500 years ago, 500 years after the initial arrival, there's mass movement of Papuans in a male-driven way from New Guinea and the Bismarck Archipelago into Vanuatu. You have people with overwhelmingly Papuan ancestry from New Guinea coming into Vanuatu. That's the origin of the ancestry that's overwhelmingly there in Vanuatu, New Caledonia today.", "So there's a two-step process. The initial step is East Asian ancestry and these people who invented outrigger canoe technology and long-distance sailing. Then the technology becomes adopted by Papuans, who are using this culture for the next few hundred years. We can see them trading back and forth between the Bismarck Archipelago and Vanuatu.", "By the end, this culture is carried out by Papuan ancestry. Males from this group then spread into New Caledonia and take local females. But the ancestry is flipped from the way that people have this cognitive bias that it should be. People think, “Oh, it should be the East Asian males somehow dominating the local females or something.” You see the reverse. This is what's going on.", "It's very complicated and subtle. When you actually see evidence of males and females behaving differently, it proves that there's socially asymmetric behavior of two groups as they interact. What it means is confusing. It could be female mate choice. It could be violence. It could be genocide. It could be different patterns of male and female dispersal, with groups who travel being of one sex or the other. We can look for clues in the genetic data. Certainly in concert with the archaeology, we can maybe figure out more.", "01:15:39 – “Lost civilizations” and our Neanderthal ancestry", "Dwarkesh Patel 01:15:39", "That's really interesting.", "Going back to archaic humans, we talked a lot about Neanderthals but obviously there were two different species of Denisovans. I don't know if species is the right word, but there were two different kinds of Denisovans and also the hobbits in Asia. I don't know if there are more, but we're talking about half a dozen different distinct groups and only one survives.", "I understand if new cultural technologies are developed by this Near East early tribe, then they expand out through Eurasia. I get that might enable them to be so dominant. What I don't understand is how none of the other ones survived, not even one tribe of Denisovans or Neanderthals or hobbits. There was no niche in which they could just fend off. Everywhere this one tribe of African humans just dominated. How did none of them survive?", "David Reich 01:16:51", "I don't know. It may be a numerical issue. If you look at the part of the world where we have the best data in the Holocene , the last 10,000 years, there are places of long-term survival of hunter-gatherers for a few thousand more years than elsewhere. In the Netherlands, for example, hunter-gatherers survive for several thousand more years than in the surrounding areas, probably because they're exploiting the wetlands. But they're gone soon enough, once something happens. Mammoths go extinct mostly 14,000 years ago, but they survive on Wrangel Island north of Siberia until 4000 years ago. At some point, each of these places is encountered by the spread of modern humans at high densities.", "The other thing is that it's not even clear to me what expansion means. If you want to make a strong argument, you might argue that non-Africans today are Neanderthals who just have waves and waves of modern humans from Africa mixing with them. Who are the ancestors? That might sound like a silly kind of philosophical statement, but genealogically…", "I don't know if this happened before or after my book. You probably don't know about this. There was a super interesting series of papers . They made many things clear but one of them was that actually the proportion of non-Africans ancestors who are Neanderthals is not 2%. That’s the proportion of their DNA in our genomes today if you're a non-African person. It's more like 10-20% of your ancestors are Neanderthals.", "What actually happened was that when Neanderthals and modern humans met and mixed, the Neanderthal DNA was not as biologically fit. The reason was that Neanderthals had lived in small populations for about half a million years since separating from modern humans—who had lived in larger populations—and had accumulated a large number, thousands of slightly bad mutations. In the mixed populations, there was selection to remove the Neanderthal ancestry.", "That would have happened very, very rapidly after the mixture process. There's now overwhelming evidence that that must have happened. If you actually count your ancestors, if you're of non-African descent, how many of them were Neanderthals say, 70,000 years ago, it's not going to be 2%. It's going to be 10-20%, which is a lot.", "Maybe the right way to think about this is that you have a population in the Near East, for example, that is just encountering waves and waves of modern humans mixing. There's so many of them that over time it stays Neanderthal. It stays local. But it just becomes, over time, more and more modern human. Eventually it gets taken over from the inside by modern human ancestry.", "This is what happens to northern European hunter-gatherers . They become farmer over time, but they are intact on the male line. Culturally they stay on the male line intact. I'm not trying to be politically correct, I'm just saying that you can actually have scenarios where this happens, for example in elephants.", "If you look at forest elephants , which are the smaller of the two species of elephants in Africa, they're very matrilocal. They have these female lines that are very intact over a long period of time. If you look at the savanna elephants , which are the bigger elephants in eastern and southern Africa, they have savanna elephant DNA overall. But their mitochondrial sequences are forest elephant, which are the smaller West African elephants. The interpretation of this is that you just have waves and waves of dominant male bulls from the savanna coming into populations and eventually just replacing all of the genome in waves and waves of an intact forest population. So all that's left is the mitochondrial sequence, which is passed in the maternal line.", "It’s not even obvious that non-Africans today are modern humans. Maybe they're Neanderthals who became modernized by waves and waves of admixture.", "Dwarkesh Patel 01:20:36", "We were talking earlier about how small the initial population that populated all of Eurasia was, a couple thousand people. We were also talking about how random and contingent the whole history of humanity has been. Was there some chance, if a couple of variables were different, that “modern,” civilization—greater population density, greater development, technology and so forth—would not have happened except for some really lucky chances? Or was it the case that even if that one tribe didn't do it, some other tribe of humans would have done it? Even if some other tribe of humans from Africa hadn't done it, then Neanderthals had enough cognitive sophistication that they would have done it? I know this is a very speculative question, but how random does “primate to civilization” feel? Does it feel like we had to go down the exact right path? Or was it the trend across many different branches of the family that leads to humans?", "David Reich 01:21:42", "I don't know. It's very speculative. I'm very tempted to think that there's so many of these groups that some of them would eventually have gone down this route. One example of this that's interesting to think about is the parallel development of agriculture in the Holocene in different parts of the world.", "You have in the Americas what's almost certainly a completely independent development of agriculture 9000-8000 years ago from that in Eurasia. You can argue whether the East Asian and Near Eastern developments are different. They probably are, but maybe you could argue they knew about each other somehow. Or with the Papuan one, maybe you could argue they somehow knew about what was going on in other parts of the world. They probably didn't. Certainly the Americas one was isolated.", "Suddenly for the first time, you have these independent evolutions of full-blown agriculture at the same time in many places in the world after the ice age . This makes you think that it's somehow deterministic. Somehow some kind of setup of characteristics at this time causes this to happen. Why doesn't it happen in the previous period of stable climate before the last ice age? Some people say, “Maybe it was actually not as good as the last 10,000 years.”", "I find that confusing as a statement. It's tempting to think that some sort of cultural or biological, more likely cultural, characteristics are in place and seeded already at the time of the last ice age, such that when the reemergence happens it happens in multiple places simultaneously.", "Dwarkesh Patel 01:23:15", "Because it happens so fast. It's not like you had to wait for tens of thousands of years after the ice age. It’s literally 2000 years after the ice age.", "David Reich 01:23:20", "Agriculture is very old in the Americas.", "Dwarkesh Patel 01:23:22", "The ice age, was it 100,000 years or how old was it? Before that, at least some branches of the human tree split off 200,000 years ago. Neanderthals split off even before that. That's before the last ice age started, right? To the extent that your earlier statement that a lot of cognitive sophistication was already evident 200,000 or 300,000 years ago, doesn't that imply that we should have seen agriculture before the ice age?", "David Reich 01:23:52", "It's tempting to think that. I'm very confused about this personally. People say that the last 10,000 years are very unique on a scale of millions of years. If that's true, maybe we're in a very special time that is somehow a period of warmth and stability of climate that's unprecedented for 2 million years. Maybe that's true. But the other way people often say it is that we're in these cyclical periods of a few tens of thousands of years. The Holocene, the last 12,000 years or so, is a period of warming and then there's a period of a couple of tens of thousands of years, which is the last ice age. Then before that there's a few tens of thousands of years of warming. That's when we sample the late Neanderthals from. Then before that, there's another stage of cooling. Then before that, there’s another stage of warming. So marine isotope stages 1,3,5,7,9 are the warm periods. We're in one now. Marine isotope stages 2,4,6,8 and so on are the ice ages. So the last glacial maximum was marine isotope stage 2.", "Dwarkesh Patel 01:24:54", "If there were, “lost civilizations,” maybe not as sophisticated as anywhere close to the last thousands of years. Maybe early Sumer, Comanche, Yamnaya level or something. But that happened before the ice age, or maybe in a part of the world during the ice age where climatic conditions were better, would we be able to tell based on modern techniques?", "David Reich 01:25:20", "I think we would.", "Dwarkesh Patel 01:25:21", "Okay, but there's just not any evidence of them?", "David Reich 01:25:23", "I mean, there are very sophisticated human burials in Eurasia, Africa, Australia, and so on in the marine isotope stage 3, in the last period of warming. There are burials full of beads, full of symbolic behavior. Maybe you interpret this as civilization, but extensive settled societies you don't see.", "Dwarkesh Patel 01:25:52", "We touched on this when we talked about population size. One thing I'm sort of confused about is that in one sense a lineage is very distributed. Obviously many different archaic humans contributed to the human gene line. In another sense, maybe the main one is a couple thousand people. I'm not even sure how to think about it. Can the entire human lineage just hang out in an area the size of Montana?", "David Reich 01:26:26", "The lesson from ancient DNA and the genome revolution has been that anyone in the world is the result of recurrent mixture again and again in the past. You might think that the last 500 years are unusual periods of history with the people of African and European and Native American ancestry coming together in the Americas. You might think this is unusual because of transatlantic travel. But almost every group in the world is the result of many mixture events as profound as these on many timescales.", "South Asians are the result of mixture between groups very different from each other, as different as Europeans and East Asians, 4000-2000 years ago coming together and then crystallizing into a relative lack of mixture since that time. Europeans are the result of mixture of Yamnaya and farmers and hunter-gatherers. People in different Near Eastern groups are the mixture of early Iranians and early Levantine people and Anatolians who are super different from each other. There's huge differences amongst East Asians. There are huge differences amongst Papuans and East Asians. There are profound differences amongst different Native American groups that come together to form groups that we have data from later, in example after example we look for.", "You think about any one lineage today, any one group of people, and you want to trace people's ancestors back into time and ask where our ancestors scatter in geography. At different time points, almost everybody's ancestors are scattered into different geographic distributions that are not all in the same place.", "The evidence that our lineage was mostly in Africa is based on an assumption, a kind of inertial idea, that our lineage must have always been in Africa because Africa is the center of human history. But if you look at the archaeological evidence, it's not incredibly clear. If you look at the genetic evidence, we have many early branches from Eurasia and only one from Africa. You have complexity and branching in Eurasia that's sampled in the DNA record, DNA from Denisovans, DNA from unknown archaic lineages that contributed to Denisovans, Neanderthals. All of those are represented in the Eurasian record, not in the African record. Part of that is the fact that ancient DNA is preserved in Eurasia. Maybe there's a period when our lineage resides in Eurasia. It's not obviously wrong. That hypothesis is out there as a possibility.", "Dwarkesh Patel 01:28:46", "One thing I would love to see—I assume this will change over time as more data comes up—is some sort of chart that is superimposed upon a world map and evolves over time. Maybe you can have blobs representing different population groups. You can start off with the archaic humans and go back like, 200,000 years ago, even before that because this is a global event. It's not just an African event. For hundreds of thousands of years, you can just see different populations splitting off, merging back together. If somebody could make that sort of animation, that would be very useful.", "David Reich 01:29:22", "I think you can. People have tried to make animations like this in some ways. But one way to think about it is that there's a huge danger in being too interested in yourself. This comes across in my book. It's very tempting to be interested in your own history and think it's important. It's obviously not important compared to other people's history.", "However, if you think about one person's history and about where their ancestors lived two to eight generations back in the past, those are your great-grandparents and great-great-grandparents and you may even know where they live. Then you can actually just plot on a map a different number of generations back in the past where your ancestors lived. It's interesting to do within your family. My ancestors going back a few generations are in different parts of Europe, for example.", "People do this and when you get a test back from one of these personal ancestry testing companies like 23andMe, they'll say, “Oh, you are 20% Irish and 30% Chinese” or whatever it is and so on. What they're referring to is if you roll back 20 or 30 generations, where your ancestors are scattered in proportions. But then if you roll back 3000 generations, there's some in East Africa and some Neanderthals, right?", "For any one group of people or any one person, there's different time slices that matter. 30 generations ago, you get the 23andMe output. 3000 generations ago, you get the proportion of your ancestors who are Neanderthals or not Neanderthals or Denisovans or something like that. If you're from one of the many populations around the world that live in Denisovans. If you are any population going back further in time, presumably there's something similar happening. Mostly in Africa, but possibly outside of Africa 300,000 years ago, people's ancestors will be coming from different places.", "It's very plausible that people's ancestors are not all in Ethiopia 200,000 years ago. In fact some of them are maybe in North Africa. Some of them are maybe in West Africa. Some of them are in South Africa. Some of them are in Eurasia. Actually appreciable fractions are in each place. That braid and that trellis is coming together again and again over time. As you move further back, they'll collapse. Some will go extinct, some will reappear, some will re-merge. At any one point, there's never a singularity.", "01:31:32 – The DNA Challenge", "Dwarkesh Patel 01:31:32", "I don't know if you're familiar with Nat Friedman's Vesuvius Challenge . I don't know if you saw that when it was going around. It’s the scrolls in the library at Herculaneum . There's a volcano during the Roman empire, 79 AD. It buried the scrolls in that library. They all became literal ash, or at least very burnt. Nat Friedman found this professor who had done CT scans of these scrolls. There was really no way to decipher them. We just had the CT scans.", "It felt like the kind of thing where somebody out there might be able to figure out a technique for how to do it. We know what the end results should look like. We just don't know what the intermediate steps look like. It feels plausible with modern technology. So they offered a million dollar prize and a 21 year old with a GPU coded up a CNN model to decipher these scrolls.", "Is there something in your field which has this sort of feeling? There's something we need to figure out? We don't know the exact right technique. But if you could put it out and offer a million dollar bounty for it, maybe somebody will come up with a cool new technique to figure it out.", "David Reich 01:32:52", "There's many things in this area. I probably should give you a single answer. The basic answer is that we need DNA from Africa. We need old DNA from 50,000 years ago, 100,000 years ago, 200,000 years ago, from all over Africa. Because it's super clear that our lineage is complicated within Africa. There's archaic forms in the archaeological record. Modern human data is extremely substructured, with evidence of having come together from many different lineages, which must have been different archaic forms in Africa contributing to people living today. Having that would crack our understanding of how modern human lineages braided together and relate to the other archaic lineages we have data from. That's obviously extremely helpful.", "Dwarkesh Patel 01:33:43", "What is it you need to get those samples?", "David Reich 01:33:46", "We need to identify those skeletal remains, or the sediments in old caves that are well preserved or rock shelters that contain enough DNA to extract. We need extraction techniques that will allow us to get at that material. Maybe we even already have them. We just need to wait until that begins to happen. It would be revolutionary. The experience in Eurasia has been when we get DNA from old sites or new sites for which there's been nothing, we find Denisovans. We find people we completely didn't expect to see before, that breaks our understanding of the past.", "The other area where I am super excited, and a thing to reward and incentivize, would be to try to crack this body of information, to try to understand how biological adaptation happened in the last hundreds of thousands of years. We simply don't know the answer to your question, from a genetic point of view. How did modern humanity, in cognitive and other types of propensities, develop? We don’t know the biological underpinning of the differences that modern humans have from our closest living relatives. We just don't know how they evolved. It's not even clear how biological they were. We just don't know how to interpret the genome in terms of how these changes occurred.", "I was at a talk a few years ago that was really shocking to me. There was a researcher at Caltech. She was talking about being able to directly read the brains of macaque monkeys . Monkey would be shown 2000 photographs. Her student would be recording from different neurons in its visual cortex and learning the neurons' response to different images.", "What they would do is they would decompose the images of faces, human faces, into eigenvectors with the principal component analysis. Specific neurons were responding to particular eigenvectors. They learned the language of how the photographs and the decomposition of them computationally mapped on to the neurons. They actually learned a language for how that's the case.", "What they did then is they showed a 2001st photograph to the monkey. They recorded from its neurons. Then they tried to use the neurons to reassemble a photograph. It was a perfect reassembly of the photograph. They had actually completely learned how this macaque's brain represents the photograph going through the brain representation.", "In that case, they were able to completely figure out the language of appreciation of a photograph through the biological representation of it. If you look at the parallel problem of the genome, how does the genome code for development? How did we get to how we are today? How do we have our capacities and so on? At first principles, let’s say you asked me, “What's a simpler problem, figuring out how to represent the natural world in our brain or figuring out how to code for development?” My cognitive bias would be to say that if you were presented with this problem ab initio, it's easier to code for development than to represent the outside world in a brain.", "But this group and other groups are figuring out how to do this nearly perfectly with a readout from the brain. We really can't read a genome and tell you how a person looks or how a person develops. We can begin to say what terrible diseases they have, but not even predict that so well. It's very depressing that we can't actually read the genome enough to actually see how that occurs.", "We actually don't even know how evolution happens. For example, does evolution happen by lots of little changes pushing in some direction? For example, if we want to move toward a different positive set point for height or for some cognitive capacity or propensity? Is this by infinitesimal change of polygenicity, many genetic variants pushing in the same direction? That's the mathematician's bias. Or is it like the example I told you about before with David Gokhman and Liran Carmel, with the voice box where everything pushes in the same direction and goes up to 100% and shifts all in the same direction in an incredibly simple and simplistic way.", "If you talk to neuroscientists and molecular biologists, their brain tends toward the latter. These few examples suggest that maybe that's occurred. This polygenic paradigm of adaptation, when adaptation really matters, is that really what happens when important adaptation happens? Or is it instead something simple and simplistic and reliant on a small number of genes?", "So what I would really like to know is if we can mine the genetic data we have from modern genomes and archaic genomes. We now have Neanderthals and Denisovans. We now have some early modern humans who are far enough back in time that appreciable change may have occurred. Can we actually learn the patterns of biological adaptation well enough to actually read the code of how we change and how we adapt to new pressures? That's something that's not impossible to imagine we learn how to do, but it takes a different way of thinking.", "Dwarkesh Patel 01:38:50", "There’s one thing that would also be interesting. There’s a big debate in trying to forecast AIs. How big is the information content that describes the human brain? With AI models, we can obviously tell very easily how many bits it takes to encode the parameters. If you want to go back to how many bits is it to encode the training paradigm itself, there's obviously the training code, then there's the hyperparameters. There's how many kilobytes that is.", "We know that the human genome is three gigabytes, but we know only a small fraction is protein coding. Also how do you count the percentage that is responsible for regulation and so forth? But if you could only get the part that is responsible for the brain, how big would that be? Can we compare how big that is with respect to how big the training code for a model is? It would give interesting insights into how similar those two processes are.", "David Reich 01:39:51", "We're engaging with this in some way right now because we have incredible data from Europe in the last 10,000 years with huge numbers of samples. We can watch very small changes in frequency over 10,000 years. This period of time is not a particularly important time in human evolution. It's well after the important stuff happened, but it's an eventful time. The environments became very different. The lifestyles became very different. This is a period of time where we've done an experiment of nature. A push has happened against the human genome. There's agriculture. There's people living more densely. There's infectious disease happening in a different way, in a different type than before.", "How does the genome respond to this traumatic set of conditions? You can actually watch all these little variables—all these little gene frequencies, tens of millions of them—shifting up and down in coordination. What can you learn from that? We now have all the measurements. We have a selection coefficient measured at 10 million positions across the genome. We know what the effect of those are on traits today because they've been measured in large numbers on the order of a million people today. What can you do with this data set? How relevant is this to important evolution?", "That's the type of rich data that could potentially be mined to learn something sort of qualitatively interesting, beyond the storytelling that's characterized molecular biology. You could go beyond the FOXP2 where you say, “Oh, maybe it's this. Maybe this is the holy grail, or maybe that.” Maybe you learned something about the process that's deep and profound. So my million dollars goes to someone who can actually come up with a way of thinking about the process that's really qualitatively profound.", "01:41:38 – David’s career: the genetic vocation", "Dwarkesh Patel 01:41:38", "Interesting. All right, I guess we need to find the million dollars first. But somebody, if you've got a million dollars, and somebody else, if you got the idea, we can make a market here.", "We were talking about the contingency of human history and human evolution. One of the really interesting things is that not only is it contingent, but it seems to be persistent at least across the last few thousand years. It’s the way that genetics have changed, culture has changed.", "The Indo-Europeans, the Yamnaya, disrupted the Indus Valley civilization 4000 years ago or something. Not only does that mean that many of the languages which are spoken in India today are descended from this group, but literally the actual core myths of Hinduism are descended from this initial group. How is it possible that for 4000 years things like caste, things like basic mythology, can be preserved with such high fidelity, especially in an era for half of which you don't have writing? Not half of that but for at least a couple thousand or 2000 years, you don't even have writing. How is that sort of persistent cultural heritability preserved?", "David Reich 01:42:51", "You're asking me a cultural question, not a genetic one. What you see in the genetic data from South Asia is an amazing process. Today in South Asia, almost everybody is on a gradient of ancestry with two poles, what we call the Ancestral North Indians and the Ancestral South Indians . That’s true with very few exceptions. The exceptions are people with your last name, Patel . It’s a minor exception but it's interesting that's your last name. There’s also people from Munda who speak Austroasiatic languages or are admixed with them, or people who are Tibeto-Burman speakers.", "But most people are on a mixture between two poles, Ancestral North Indians and Ancestral South Indians. When you look at genetic data from India, it looks like what you see today in African Americans. You have people with relatively higher or lower proportions of, say European and West African ancestry. It looks like a population in the process of mixture, like African Americans who are the result of mixture in the last ten or so generations between mostly two very different populations mixing in different proportions.", "What happened in India is that it froze. The mixing started, and then it froze. The freezing happened 2000 to 3000 years ago. It froze because of cultural change. What happens in India is you have a three-part change. You have an arrival of three source populations, essentially parallel to what you see in Europe. There's a local hunter-gatherer population. There's what's probably a farming population, maybe also a hunter-gatherer population initially. Then there are these people descended at some level from steppe pastoralists. These are the three primary ancestral populations.", "They come together at the end of the decline of the Harappan civilization , which ends about 3800 years ago. Groups from this Harappan group, which we actually have sampled, they're all on a different gradient. They mix with the steppe groups and with the local hunter-gatherer groups to coalesce to these two later groups, which we call the Ancestral North Indians and Ancestral South Indians. Then mixtures of these two mixed populations form in the Gangetic plain and form people all along this gradient. It's really a very simple mixture of two sources.", "Then the cultural change happens, which locks in the caste system . People freeze and they stop mixing very much. Instead of people collapsing to a point—which is what you see in Europe after this type of mixing process of these three sources happening in any one region—you see this gradient forming and it's stable. Because of the enduringness of the caste system, you actually have a snapshot going back a couple of thousand years, without this continuation change.", "It's genetically kind of an amazing system to look at because of people's reluctance to mix with people from very different groups in traditional communities. The three steps are the coming together of very different populations, then convulsive and profound mixing of groups that had previously not mixed, and then locking into this static system as the caste system sets in. That’s documented in the early texts, like the Rigveda . You can actually see the change in that discussion during the course of the Rigveda.", "Dwarkesh Patel 01:46:07", "I know you warned about being too interested in yourself, but what was it about the Patels? Why are they an exception?", "David Reich 01:46:12", "The first good genomic data from South Asians is, embarrassingly, from Houston, Texas. In the human haplotype map project , there was a sample from Houston, Texas of Gujaratis in Houston, Texas.", "Dwarkesh Patel 01:46:24", "Yeah, a lot of hotels in Houston, Texas.", "David Reich 01:46:25", "GIH. If you look at them, people are actually not on this gradient. They're in a few different places. They're clustered into groups. There's the main gradient and there's an off-gradient group. I forgot how we figured this out. Someone figured out that these people are all Patels. Patels have their own distinctive history with different relationships to people in Central Asia. It's probably some additional ancestry from Central Asia pushing them off the main gradient.", "Dwarkesh Patel 01:46:49", "Interesting. We've obviously talked about so many different types of fields. I'm not sure where exactly in what field you started your research. Obviously now your lab is doing stuff in genetics. You have to touch on how your research combines with the archaeological record? What are the inferences you can make from that? There’s obviously different kinds of history. There's so many different disciplines here.", "You start your field researching a certain topic. Do you just keep expanding? “Now I'm going to master archaeology. Now I'm going to master anthropology. Now I'm going to…” How does that process work through your career?", "David Reich 01:47:34", "It's a very unstable life. In some areas like in archaeology, for a lot of my colleagues whom I respect tremendously, the career trajectory is that you learn to become an archaeologist, you dig, and you have a set of digs that you're doing for dozens of years with similar or slowly evolving methods. My work has just changed so radically.", "When I started doing this work, one could not sequence a whole genome. The genome was not yet sequenced . We had very little genetic variation accessible. The amount of data has increased by orders of magnitude every few years. What’s changed is the types of data that we collect, the ability to collect ancient DNA beginning 14 years ago, the ability to generate the volumes of it we have. We had no ancient DNA in 2009. In 2014, we only had a few hundred individuals with genome-scale data. We have tens of thousands of individuals with genome-scale data now. We have data from places we didn't have data before.", "It's such a destabilizing process. Someone like me wanders into areas that I'm not expert in. I'm not South Asian. I get to be part of trying to learn about the history of South Asia. I get to interact with archaeologists at the cutting edge of learning about ancient Southwest Pacific or ancient China or ancient Southern Europe. It's like an incredible privilege, but also I’m a kind of rank amateur in terms of a lot of the work I do.", "One wanders from one area where one's an amateur into another area, where one is an amateur and tries to learn a lot. Maybe this is a little bit like what it's like in Silicon Valley right now. You’re constantly doing new things and bringing some skills to bear that are useful. You’re hopefully trying to be respectful of the people one works with and the tremendous knowledge people have. You’re trying to learn as much as one can, and to work with other people to try to produce some joint research product that makes progress.", "Dwarkesh Patel 01:49:36", "Somebody's doing archaeology for their entire career on a certain group in some mountain somewhere, and then you come in. Here's the paper. We figured out the exact genetic combination that explains all your research. Is the reaction usually… I don't know how much of this you can say. Are people sometimes disappointed that you've been able to figure out the things in their field with a different technique?", "David Reich 01:50:10", "A lot of people we work with are incredibly excited about being able to do this. Prehistory is a period of time we know so little about. We have such poor clues. True archaeologists who are truly dedicated to understanding the past are super thirsty for knowledge about the time periods. If a new scientific technique becomes available that can probe these times, the true archaeologists who are truly interested in the past get incredibly excited. They embrace it as they've embraced previous scientific techniques, such as scientific archaeology, such as isotopic analysis, such as radiocarbon dating .", "That's been my experience with people again and again in archaeology. People really want to know about the time periods before writing, when at some point one didn't even imagine one could learn anything. They’re excited about this new type of information.", "Sometimes people are dug in to particular views of the past that are challenged by the new findings that come from scientific research, such as ancient DNA. When the DNA is strictly in opposition to some of these models, that becomes an area of tension. I have found myself to be proven wrong in a number of cases, including by my own work or by other work amongst my colleagues. I hope to be someone who can welcome that.", "One of my idols in this field is the archaeologist Colin Renfrew , who is a British archaeologist responsible for the Anatolian theory of Indo-European origins: the idea that farmers spread Indo-European languages. The language spoken in Armenia and in Iran and in northern India and in much of Europe today, spread with farming after 8500 years ago from Anatolia in all different directions. The demographic expansion and economic transformation associated with that spread farming. It's very plausible.", "There was a debate with Marija Gimbutas and others who argued that these languages spread from the steppe , north of the Black and Caspian Seas. One of the main arguments for the Anatolian hypothesis was that steppe expansions could not have been demographically significant because they were much thinner on the ground than farming expansions. This is why the steppe could not explain it, even though other linguistic arguments made the steppe seem more plausible.", "When the genetic revolution happened with regard to our understanding of Yamnaya expansions and Indo-European origins in 2015, Colin Renfrew at some point said, \"I was wrong. I was wrong about this topic.\" In fact, the weight of evidence now suggests that demographic transformation did come from the steppe. It's kind of amazing it did. Maybe it's from disease, maybe it's from something else. Who knows what it is? That's a very interesting topic. But we adapt, we learn. So this is incredibly inspiring to be able to change one's opinion.", "Dwarkesh Patel 01:52:57", "Final question. You mentioned these different revolutions in our ability to understand the past, like radiocarbon dating and obviously now with ancient DNA and genomic sequencing. Is there something that feels like the next thing along the spectrum? One would hope in the future—like a thousand years from now when the future AIs are looking back on human history—hopefully there's no lost period. Hopefully, they literally know what kind of gods the tribe in the Near East that basically settled Eurasia worshiped. They would know everything. Along that spectrum, we're making progress. What is the next thing after advances in more genomic sequencing or more samples from different parts of the world?", "David Reich 01:53:46", "I don't know. The discovery of the ability to extract DNA from ancient human remains was such a shock that we could even do this. We just didn't think we could do this. There's a section in the introduction of my book which was sort of my impression of what it was like. I had a conversation with my PhD supervisor about what it would be like if one somehow could open a cave or a room that was echoing still with languages that don't exist anymore, that are not yet spoken. You could hear the words still echoing somehow after thousands and thousands of years and record that down. That's what ancient DNA is like. It's an unexpected gift from the past that what we thought was an incredibly delicate biological molecule in fact is intact.", "There must be other such things. It's just hard to imagine what they are. In ancient DNA, there is an extraordinary amount still to do. There is systematic sampling from many, many places in the world where yet there has not yet been sampling. There is systematic sampling in the ability to sample from deep, deep into the past, up to the point where we can begin to decouple these lineages from each other.", "That will reveal incredible richness and that's something that we should all look forward to. There will be insights that come from that, both in terms of the understanding of individual places—places like many parts of Africa and South Asia and Australia and New Guinea and so on—where we have essentially no data currently in terms of ancient DNA. We’ll also get insight in terms of deep time and the deep lineages that mix together to form us, where we really have no sampling except for the Denisovans and Neanderthals right now.", "Dwarkesh Patel 01:55:31", "That's a great place to close. David, thank you so much for coming on the podcast. I highly, highly recommend your book, Who We Are and How We Got Here . It’s just so wild. Basically, a lot of the stuff you learned in grade school at least needs a lot more clarification. Some of it is wrong. The fact that that's the case is crazy. I hope that, in five to ten years, there's a new edition of the book or a new future book you write. For all the questions that you talked about today, which we don't have the answers to, it seems like there's a bunch of progress happening here. I'm very eager to see what the future results look like." ]
[ "https://en.wikipedia.org/wiki/David_Reich_(geneticist)", "https://en.wikipedia.org/wiki/Ancient_DNA", "https://reich.hms.harvard.edu/", "https://amzn.to/3MosYr3", "https://en.wikipedia.org/wiki/Archaic_humans", "https://en.wikipedia.org/wiki/Neanderthal", "https://en.wikipedia.org/wiki/Denisovan", "https://en.wikipedia.org/wiki/Genetic_admixture", "https://en.wikipedia.org/wiki/Deferent_and_epicycle", "https://en.wikipedia.org/wiki/Ptolemy", "https://en.wikipedia.org/wiki/Mitochondrial_DNA", "https://en.wikipedia.org/wiki/Human_Y-chromosome_DNA_haplogroup", "https://en.wikipedia.org/wiki/Gene_flow", "https://amzn.to/3MosYr3", "https://en.wikipedia.org/wiki/San_people", "https://en.wikipedia.org/wiki/Homo_habilis", "https://en.wikipedia.org/wiki/Homo_erectus", "https://en.wikipedia.org/wiki/Homo_heidelbergensis", "https://news.harvard.edu/gazette/story/2022/02/david-reich-study-shows-how-early-africans-lived-traveled-interacted/", "https://amzn.to/3MosYr3", "https://en.wikipedia.org/wiki/Paul_Salopek", "https://outofedenwalk.nationalgeographic.org/", "https://en.wikipedia.org/wiki/Sapient_paradox", "https://amzn.to/3MosYr3", "https://en.wikipedia.org/wiki/FOXP2", "https://en.wikipedia.org/wiki/Late_Stone_Age", "https://www.nature.com/articles/s41467-020-15020-6", "https://en.wikipedia.org/wiki/Methylation#DNA_methylation", "https://www.gokhmanlab.com/", "https://en.wikipedia.org/wiki/Liran_Carmel", "https://www.nature.com/articles/s41467-020-15020-6", "https://en.wikipedia.org/wiki/Khoisan", "https://en.wikipedia.org/wiki/Aboriginal_Tasmanians", "https://amzn.to/4754fl5", "https://en.wikipedia.org/wiki/Population_bottleneck", "https://en.wikipedia.org/wiki/Youngest_Toba_eruption", "https://en.wikipedia.org/wiki/Founder_effect", "https://en.wikipedia.org/wiki/Early_modern_human", "https://en.wikipedia.org/wiki/Initial_Upper_Paleolithic", "https://en.wikipedia.org/wiki/Yamnaya_culture", "https://en.wikipedia.org/wiki/Early_European_Farmers", "https://en.wikipedia.org/wiki/Pontic%E2%80%93Caspian_steppe", "https://amzn.to/4754fl5", "https://en.wikipedia.org/wiki/Yersinia_pestis", "https://en.wikipedia.org/wiki/James_C._Scott", "https://amzn.to/3Xhnepb", "https://www.ou.edu/cas/classicsandletters/people/kyle-harper", "https://amzn.to/3Z5lkJB", "https://en.wikipedia.org/wiki/Eske_Willerslev", "https://en.wikipedia.org/wiki/Kristian_Kristiansen_(archaeologist)", "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4644222/", "https://www.nature.com/news/2011/111026/full/478444a.html", "https://en.wikipedia.org/wiki/Johannes_Krause", "https://en.wikipedia.org/wiki/Black_Death", "https://www.nature.com/articles/s41586-024-07651-2", "https://en.wikipedia.org/wiki/Moctezuma_II", "https://en.wikipedia.org/wiki/Stonehenge", "https://en.wikipedia.org/wiki/Industrial_Revolution", "https://en.wikipedia.org/wiki/Bob_Allen_(economic_historian)", "https://en.wikipedia.org/wiki/Bronze_Age", "https://en.wikipedia.org/wiki/Plague_of_Justinian", "https://scholar.google.com/citations?user=VsHQZPMAAAAJ&hl=en", "https://en.wikipedia.org/wiki/Phenotype", "https://en.wikipedia.org/wiki/Genghis_Khan", "https://en.wikipedia.org/wiki/Corded_Ware_culture", "https://en.wikipedia.org/wiki/Bell_Beaker_culture", "https://en.wikipedia.org/wiki/Aryan_race#Connotation_of_the_term_Aryan_in_Nazi_racial_theories", "https://en.wikipedia.org/wiki/Mate_choice_in_humans#Female_mate_choice", "https://en.wikipedia.org/wiki/Hern%C3%A1n_Cort%C3%A9s", "https://en.wikipedia.org/wiki/Sumer", "https://en.wikipedia.org/wiki/Proto-Indo-European_mythology", "https://en.wikipedia.org/wiki/Indo-European_languages", "https://en.wikipedia.org/wiki/Great_Hungarian_Plain", "https://en.wikipedia.org/wiki/Pannonian_Basin", "https://en.wikipedia.org/wiki/Lapita_culture", "https://en.wikipedia.org/wiki/Homo_floresiensis#:~:text=Homo%20floresiensis%20(%20%2Ffl%C9%94%CB%90r,humans%20about%2050%2C000%20years%20ago.", "https://en.wikipedia.org/wiki/Holocene", "https://www.cell.com/cell/pdf/S0092-8674(20)30059-3.pdf", "https://en.wikipedia.org/wiki/Scandinavian_hunter-gatherer", "https://en.wikipedia.org/wiki/African_forest_elephant", "https://en.wikipedia.org/wiki/African_bush_elephant", "https://en.wikipedia.org/wiki/History_of_agriculture", "https://en.wikipedia.org/wiki/Last_Glacial_Period", "https://en.wikipedia.org/wiki/Marine_isotope_stages", "https://en.wikipedia.org/wiki/Last_Glacial_Maximum", "https://www.23andme.com/", "https://www.dwarkeshpatel.com/p/nat-friedman", "https://scrollprize.org/", "https://en.wikipedia.org/wiki/Herculaneum", "https://en.wikipedia.org/wiki/Eruption_of_Mount_Vesuvius_in_79_AD", "https://www.nytimes.com/2023/10/12/arts/design/herculaneum-scroll-vesuvius-word-purple.html", "https://en.wikipedia.org/wiki/Convolutional_neural_network", "https://www.cell.com/cell/fulltext/S0092-8674(17)30538-X", "https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors", "https://en.wikipedia.org/wiki/Indus_Valley_Civilisation", "https://en.wikipedia.org/wiki/Peopling_of_India", "https://en.wikipedia.org/wiki/Peopling_of_India", "https://en.wikipedia.org/wiki/Patel", "https://en.wikipedia.org/wiki/Munda_people", "https://en.wikipedia.org/wiki/Austroasiatic_languages", "https://en.wikipedia.org/wiki/Tibeto-Burman_languages", "https://en.wikipedia.org/wiki/Indus_Valley_Civilisation", "https://en.wikipedia.org/wiki/Caste_system_in_India", "https://en.wikipedia.org/wiki/Rigveda", "https://en.wikipedia.org/wiki/International_HapMap_Project", "https://en.wikipedia.org/wiki/Gujarati_people", "https://en.wikipedia.org/wiki/Human_Genome_Project", "https://en.wikipedia.org/wiki/Radiocarbon_dating", "https://en.wikipedia.org/wiki/Colin_Renfrew", "https://en.wikipedia.org/wiki/Anatolian_hypothesis", "https://en.wikipedia.org/wiki/Marija_Gimbutas", "https://en.wikipedia.org/wiki/Kurgan_hypothesis", "https://amzn.to/3MosYr3" ]
https://www.dwarkesh.com/p/demis-hassabis
Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat
[ "Nature of intelligence", "Edited by Teddy Kim , with lots of helpful links", "Dwarkesh Patel 00:00:44", "Today it is a true honor to speak with Demis Hassabis , who is the CEO of DeepMind . Demis, welcome to the podcast.", "Demis Hassabis 00:00:51", "Thanks for having me.", "Dwarkesh Patel 00:00:52", "First question, given your neuroscience background, how do you think about intelligence? Specifically, do you think it’s one higher-level general reasoning circuit , or do you think it’s thousands of independent subskills and heuristics?", "Demis Hassabis 00:01:05", "It’s interesting because intelligence is so broad and what we use it for is so generally applicable. I think that suggests there must be high-level common algorithmic themes around how the brain processes the world around us. Of course, there are specialized parts of the brain that do specific things, but I think there are probably some underlying principles that underpin all of that.", "Dwarkesh Patel 00:01:37", "How do you make sense of the fact that in these LLMs , when you give them a lot of data in any specific domain, they tend to get asymmetrically better in that domain. Wouldn’t we expect a general improvement across all the different areas?", "Demis Hassabis 00:01:51", "First of all, I think you do sometimes get surprising improvement in other domains when you improve in a specific domain. For example, when these large models improve at coding, that can actually improve their general reasoning . So there is evidence of some transfer although we would like a lot more evidence of that. But that’s how the human brain learns too. If we experience and practice a lot of things like chess, creative writing, or whatever, we also tend to specialize and get better at that specific thing even though we’re using general learning techniques and general learning systems in order to get good at that domain.", "Dwarkesh Patel 00:02:31", "What’s been the most surprising example of this kind of transfer for you? Will you see language and code, or images and text?", "Demis Hassabis 00:02:37", "I’m hoping we’re going to see a lot more of this kind of transfer, but I think things like getting better at coding and math, and then generally improving your reasoning. That is how it works with us as human learners. But I think it’s interesting seeing that in these artificial systems.", "Dwarkesh Patel 00:02:55", "And can you see the sort of mechanistic way, in the language and code example, in which you’ve found the place in a neural network that’s getting better with both the language and the code? Or is that too far down the weeds?", "Demis Hassabis 00:03:07", "I don’t think our analysis techniques are quite sophisticated enough to be able to hone in on that. I think that’s actually one of the areas where a lot more research needs to be done, the kind of mechanistic analysis of the representations that these systems build up. I sometimes like to call it virtual brain analytics. In a way, it’s a bit like doing fMRI , or single-cell recording from a real brain. What are the analogous analysis techniques for these artificial minds? There’s a lot of great work going on in this sort of stuff. People like Chris Olah , I really like his work. I think a lot of computational neuroscience techniques can be brought to bear on analyzing the current systems we’re building. In fact, I try to encourage a lot of my computational neuroscience friends to start thinking in that direction and applying their know-how to the large models.", "Dwarkesh Patel 00:03:58", "What do other AI researchers not understand about human intelligence that you have some sort of insight on, given your neuroscience background?", "Demis Hassabis 00:04:06", "I think neuroscience has added a lot, if you look at the last 10-20 years that we’ve been at it. I’ve been thinking about this for 30+ years. In the earlier days of the new wave of AI, neuroscience was providing a lot of interesting directional clues, things like reinforcement learning and combining that with deep learning . Some of our pioneering work we did there were things like experience replay and even the notion of attention , which has become super important. A lot of those original inspirations came from some understanding about how the brain works, although not the exact specifics of course. One is an engineered system and the other one’s a natural system. It’s not so much about a one-to-one mapping of a specific algorithm, but more so inspirational direction. Maybe it’s some ideas for architecture, or algorithmic ideas, or representational ideas. The brain is an existence proof that general intelligence is possible at all. I think the history of human endeavors has been such that once you know something’s possible it’s easier to push hard in that direction, because you know it’s a question of effort, a question of when and not if. That allows you to make progress a lot more quickly. So I think neuroscience has inspired a lot of the thinking, at least in a soft way, behind where we are today. As for going forward, I think there’s still a lot of interesting things to be resolved around planning. How does the brain construct the right world models ? I studied how the brain does imagination , or you can think of it as mental simulation. How do we create very rich visual spatial simulations of the world in order for us to plan better?", "RL atop LLMs", "Dwarkesh Patel 00:05:56", "Actually, I’m curious how you think that will interface with LLMs. Obviously, DeepMind is at the frontier and has been for many years with systems like AlphaZero and so forth, having these agents which can think through different steps to get to an end outcome. Is there a path for LLMs to have this tree search kind of thing on top of them? How do you think about this?", "Demis Hassabis 00:06:15", "I think that’s a super promising direction. We’ve got to carry on improving the large models. We’ve got to carry on making them more and more accurate predictors of the world, making them more and more reliable world models. That’s clearly a necessary, but probably insufficient component of an AGI system. On top of that, we’re working on things like AlphaZero-like planning mechanisms on top that make use of that model in order to make concrete plans to achieve certain goals in the world. Perhaps chaining thought, lines of reasoning, together and using search to explore massive spaces of possibility. I think that’s kind of missing from our current large models.", "Dwarkesh Patel 00:07:01", "How do you get past the immense amount of compute that these approaches tend to require? Even the AlphaGo system was a pretty expensive system because you sort of had to run an LLM on each node of the tree. How do you anticipate that’ll get made more efficient?", "Demis Hassabis 00:07:18", "One thing is Moore’s law tends to help. Over every year more computation comes in. But we focus a lot on sample-efficient methods and reusing existing data, things like experience replay and also just looking at more efficient ways. The better your world model is, the more efficient your search can be. One example I always give is AlphaZero, our system to play Go and chess and any game. It’s stronger than human world champion level in all these games and it uses a lot less search than a brute force method like Deep Blue to play chess. One of these traditional Stockfish or Deep Blue systems would maybe look at millions of possible moves for every decision it’s going to make. AlphaZero and AlphaGo may look at around tens of thousands of possible positions in order to make a decision about what to move next. A human grandmaster or world champion probably only looks at a few hundred moves, even the top ones, in order to make their very good decision about what to play next. So that suggests that the brute force systems don’t have any real model other than the heuristics about the game. AlphaGo has quite a decent model but the top human players have a much richer, much more accurate model of Go or chess. That allows them to make world-class decisions on a very small amount of search. So I think there’s a sort of trade-off there. If you improve the models, then I think your search can be more efficient and therefore you can get further with your search.", "Dwarkesh Patel 00:09:00", "I have two questions based on that. With AlphaGo, you had a very concrete win condition: at the end of the day, do I win this game of Go or not? You can reinforce on that. When you’re thinking of an LLM putting out thought, do you think there will be this ability to discriminate in the end, whether that was a good thing to reward or not?", "Demis Hassabis 00:09:19", "Of course that’s why we pioneered, and what DeepMind is sort of famous for, using games as a proving ground. That’s partly because it’s efficient to research in that domain. The other reason is, obviously, it’s extremely easy to specify a reward function. Winning the game or improving the score, something like that is built into most games. So that is one of the challenges of real-world systems. How does one define the right objective function, the right reward function , and the right goals? How does one specify them in a general way, but specific enough that one actually points the system in the right direction? For real-world problems, that can be a lot harder. But actually, if you think about it in even scientific problems, there are usually ways that you can specify the goal that you’re after.", "Dwarkesh Patel 00:10:07", "When you think about human intelligence, you were just saying that humans thinking about these thoughts are just super sample-efficient. Einstein coming up with relativity, right? There’s thousands of possible permutations of the equations. Do you think it’s also this sense of different heuristics like, “I’m going to try out this approach instead of this”? Or is it a totally different way of approaching and coming up with that solution than what AlphaGo does to plan the next move?", "Demis Hassabis 00:10:29", "I think it’s different because our brains are not built for doing Monte Carlo tree search . It’s just not the way our organic brains work. I think that people like Einstein, in order to compensate for that, have used their intuition—and maybe we can come to what intuition is—and their knowledge and their experience to build in Einstein’s case, extremely accurate models of physics that include mental simulations. If you read about Einstein and how he came up with things, he used to visualize and really feel what these physical systems should be like, not just the mathematics of it. He had a really intuitive feel for what they would be like in reality. That allowed him to think these thoughts that were very outlandish at the time. So I think that that gets to the sophistication of the world models that we’re building. Imagine your world model can get you to a certain node in a tree that you’re searching, and then you just do a little bit of search around that leaf node and that gets you to these original places. Obviously, if your model and your judgment on that model is very, very good, then you can pick which leaf nodes you should expand with search much more accurately. So overall, you therefore do a lot less search. I mean, there’s no way that any human could do a kind of brute force search over any kind of significant space.", "Dwarkesh Patel 00:11:54", "A big open question right now is whether RL will allow these models to use the self-play synthetic data to get over data bottlenecks. It sounds like you’re optimistic about this?", "Demis Hassabis 00:12:04", "I’m very optimistic about that. First of all, there’s still a lot more data that can be used, especially if one views multimodal and video and these kinds of things. Obviously, society is adding more data all the time to the Internet and things like that. I think that there’s a lot of scope for creating synthetic data. We’re looking at that in different ways, partly through simulation, using very realistic game environments, for example, to generate realistic data, but also self-play. That’s where systems interact with each other or converse with each other. It worked very well for us with AlphaGo and AlphaZero where we got the systems to play against each other and actually learn from each other’s mistakes and build up a knowledge base that way. I think there are some good analogies for that. It’s a little bit more complicated to build a general kind of world data.", "Dwarkesh Patel 00:12:58", "How do you get to the point with these models where the synthetic data they’re outputting on the self-play they’re doing is not just more of what’s already in their data set, but something they haven’t seen before? To actually improve the abilities.", "Demis Hassabis 00:13:12", "I think there’s a whole science needed there. I think we’re still in the nascent stage of this, of data curation and data analysis and actually analyzing the holes that you have in your data distribution. This is important for things like fairness and bias and other stuff. To remove that from the system is to really make sure that your data set is representative of the distribution you’re trying to learn. There are many tricks there one can use, like overweighting or replaying certain parts of the data. Or if you identify some gap in your data set, you could imagine that’s where you put your synthetic generation capabilities to work on.", "Dwarkesh Patel 00:13:47", "Nowadays, people are paying attention to the RL stuff that DeepMind did many years before. What are the early research directions, or something that was done way back in the past, that you think will be a big deal but people just haven’t been paying attention to it? There was a time where people weren’t paying attention to scaling. What’s the thing now that is totally underrated?", "Demis Hassabis 00:14:07", "Well, I think that the history of the last couple of decades has been things coming in and out of fashion, right? A while ago, maybe five-plus years ago, we were pioneering with AlphaGo and before that DQN . It was the first system that worked on Atari , our first big system really more than ten years ago now, that scaled up Q-learning and reinforcement learning techniques and combined that with deep learning to create deep reinforcement learning . We used that to scale up to master some pretty complex tasks like playing Atari games just from the pixels. I do actually think a lot of those ideas need to come back in again and, as we talked about earlier, combine them with the new advances in large models and large multimodal models, which are obviously very exciting as well. So I do think there’s a lot of potential for combining some of those older ideas together with the newer ones.", "Dwarkesh Patel 00:15:00", "Is there any potential for the AGI to eventually come from a pure RL approach? The way we’re talking about it, it sounds like the LLM will form the right prior and then this sort of tree search will go on top of that. Or is it a possibility that it comes completely out of the dark?", "Demis Hassabis 00:15:15", "Theoretically, I think there’s no reason why you couldn’t go full AlphaZero-like on it. There are some people here at Google DeepMind and in the RL community who work on that, fully assuming no priors , no data, and just building all knowledge from scratch. I think that’s valuable because those ideas and those algorithms should also work when you have some knowledge too. Having said that, I think by far the quickest way to get to AGI, and the most plausible way, is to use all the knowledge that’s existing in the world right now that we’ve collected from things like the Web. We have these scalable algorithms, like transformers , that are capable of ingesting all of that information. So I don’t see why you wouldn’t start with a model as a kind of prior, or to build on it and to make predictions that help bootstrap your learning. I just think it doesn’t make sense not to make use of that. So my betting would be that the final AGI system will have these large multimodal models as part of the overall solution, but they probably won’t be enough on their own. You’ll need this additional planning search on top.", "Scaling and alignment", "Dwarkesh Patel 00:16:31", "This sounds like the answer to the question I’m about to ask. As somebody who’s been in this field for a long time and seen different trends come and go, what do you think the strong version of the scaling hypothesis gets right and what does it get wrong? The idea that you just throw enough compute at a wide enough distribution of data and you get intelligence.", "Demis Hassabis 00:16:47", "My view is that this is kind of an empirical question right now. I think it was pretty surprising to almost everyone, including the people who first worked on the scaling hypotheses , how far it’s gone. In a way, I look at the large models today and I think they’re almost unreasonably effective for what they are. I think it’s pretty surprising some of the properties that emerge . In my opinion, they’ve clearly got some form of concepts and abstractions and things like that. I think if we were talking five-plus years ago, I would have said to you that maybe we need an additional algorithmic breakthrough in order to do that, maybe more like how the brain works. I think that’s still true if we want explicit abstract concepts, neat concepts, but it seems that these systems can implicitly learn that. Another really interesting, unexpected thing was that these systems have some sort of grounding even though they don’t experience the world multimodally, at least until more recently when we have the multimodal models. The amount of information and models that can be built up just from language is surprising. I think that I’d have some hypotheses about why that is. I think we get some grounding through the RLHF feedback systems because obviously the human raters are by definition, grounded people. We’re grounded in reality, so our feedback is also grounded. Perhaps there’s some grounding coming in through there. Also if you’re able to ingest all of it, maybe language contains more grounding than linguists thought before. So it actually raises some very interesting philosophical questions that people haven’t even really scratched the surface of yet. Looking at the advances that have been made, it’s quite interesting to think about where it’s going to go next. In terms of your question of large models, I think we’ve got to push scaling as hard as we can and that’s what we’re doing here. It’s an empirical question, whether that will hit an asymptote or a brick wall, and there are different people who argue about that. I think we should just test it. I think no one knows. In the meantime, we should also double down on innovation and invention. This is something where Google Research and DeepMind and Google Brain have pioneered many, many things over the last decade. That’s our bread and butter. You can think of half our effort as having to do with scaling and half our efforts having to do with inventing the next architectures and the next algorithms that will be needed, knowing that larger and larger scaled models are coming down the line. So my betting right now, but it’s a loose betting, is that you need both. I think you’ve got to push both of them as hard as possible and we’re in a lucky position that we can do that.", "Dwarkesh Patel 00:19:27", "I want to ask more about the grounding. You can imagine two things that might change which would make the grounding more difficult. One is that as these models get smarter, they are going to be able to operate in domains where we just can’t generate enough human labels, just because we’re not smart enough. If it does a million-line pull request, how do we tell it, for example, this is within the constraints of our morality and the end goal we wanted and this isn’t?  The other thing has to do with what you were saying about compute. So far we’ve been doing next token prediction and in some sense it’s a guardrail, because you have to talk as a human would talk and think as a human would think. Now, additional compute is maybe going to come in the form of reinforcement learning where it’s just getting to the objective and we can’t really trace how you got there. When you combine those two, how worried are you that the grounding goes away?", "Demis Hassabis 00:20:12", "I think if it’s not properly grounded, the system won’t be able to achieve those goals properly. In a sense, you have to have some grounding for a system to actually achieve goals in the real world. I do actually think that these systems, and things like Gemini , are becoming more multimodal. As we start ingesting things like video and audiovisual data as well as text data, then the system starts correlating those things together. I think that is a form of proper grounding. So I do think our systems are going to start to understand the physics of the real world better.", "Then one could imagine the active version of that as a very realistic simulation or game environment where you’re starting to learn about what your actions do in the world and how that affects the world itself. The world stays itself, but it also affects what next learning episode you’re getting. So these RL agents we’ve always been working on and pioneered, like AlphaZero and AlphaGo, actually are active learners. What they decide to do next affects what next learning piece of data or experience they’re going to get. So there’s this very interesting sort of feedback loop.", "And of course, if we ever want to be good at things like robotics, we’re going to have to understand how to act in the real world.", "Dwarkesh Patel 00:21:35", "So there’s grounding in terms of whether the capabilities will be able to proceed, whether they will be enough in touch with reality to do the things we want. There’s another sense of grounding in that we’ve gotten lucky that since they’re trained on human thought, they maybe think like a human. To what extent does that stay true when more of the compute for training comes from just “did you get the right outcome” and it’s not guardrailed by “are you proceeding on the next token as a human would?” Maybe the broader question I’ll pose to you is, and this is what I asked Shane as well, what would it take to align a system that’s smarter than a human? Maybe it thinks in alien concepts and you can’t really monitor the million-line pull request because you can’t really understand the whole thing and you can’t give labels.", "Demis Hassabis 00:22:13", "This is something Shane and I, and many others here, have had at the forefront of our minds since before we started DeepMind because we planned for success. In 2010, no one was thinking about AI let alone AGI. But we already knew that if we could make progress with these systems and these ideas, the technology created would be unbelievably transformative. So we were already thinking 20 years ago about what the consequences of that would be, both positive and negative. Of course, the positive direction is amazing science, things like AlphaFold , incredible breakthroughs in health and science, and mathematical and scientific discovery. But we also have to make sure these systems are sort of understandable and controllable.", "This will be a whole discussion in itself, but there are many, many ideas that people have such as more stringent eval systems . I think we don’t have good enough evaluations and benchmarks for things like if the system can deceive you. Can it exfiltrate its own code or do other undesirable behaviors? There are also ideas of using AI , not general learning ones but maybe narrow AIs that are specialized for a domain, to help us as the human scientists to analyze and summarize what the more general system is doing. So there’s narrow AI tools. I think that there’s a lot of promise in creating hardened sandboxes or simulations that are hardened with cybersecurity arrangements around the simulation, both to keep the AI in and to keep hackers out. You could experiment a lot more freely within that sandbox domain. There’s many, many other ideas, including the analysis stuff we talked about earlier, where we can analyze and understand what the concepts are that this system is building and what the representations are. So maybe then they’re not so alien to us and we can actually keep track of the kind of knowledge that it’s building.", "Timelines and intelligence explosion", "Dwarkesh Patel 00:24:13", "Stepping back a bit, I’m curious what your timelines are. So Shane said his modal outcome is 2028. I think that’s maybe his median. What is yours?", "Demis Hassabis 00:24:21", "I don’t have prescribed specific numbers to it because I think there’s so many unknowns and uncertainties. Human ingenuity and endeavor comes up with surprises all the time. So that could meaningfully move the timelines. I will say that when we started DeepMind back in 2010, we thought of it as a 20-year project. And I think we’re on track actually, which is kind of amazing for 20-year projects because usually they’re always 20 years away. That’s the joke about whatever, quantum, AI, take your pick. But I think we’re on track. So I wouldn’t be surprised if we had AGI-like systems within the next decade.", "Dwarkesh Patel 00:25:02", "Do you buy the model that once you have an AGI, you have a system that basically speeds up further AI research? Maybe not in an overnight sense, but over the course of months and years you would have much faster progress than you would have otherwise had?", "Demis Hassabis 00:25:12", "I think that’s potentially possible. I think it partly depends on what we, as a society, decide to use the first nascent AGI systems or proto-AGI systems for. Even the current LLMs seem to be pretty good at coding and we have systems like AlphaCode . We also have theorem proving systems . So one could imagine combining these ideas together and making them a lot better. I could imagine these systems being quite good at designing and helping us build future versions of themselves, but we also have to think about the safety implications of that of course.", "Dwarkesh Patel 00:25:51", "I’m curious what you think about that. I’m not saying this is happening this year, but eventually you’ll be developing a model where you think there’s some chance that it’ll be capable of an intelligence explosion -like dynamic once it’s fully developed. What would have to be true of that model at that point where you’re comfortable continuing the development of the system? Something like, “I’ve seen these specific evals, I’ve understood its internal thinking and its future thinking enough.”", "Demis Hassabis 00:26:17", "We need a lot more understanding of the systems than we do today before I would even be confident of explaining to you what we’d need to tick box there. I think what we’ve got to do in the next few years, in the time before those systems start arriving, is come up with the right evaluations and metrics. Ideally formal proofs, but it’s going to be hard for these types of systems, so at least empirical bounds around what these systems can do. That’s why I think about things like deception as being quite root node traits that you don’t want. If you’re confident that your system is exposing what it actually thinks, then that opens up possibilities of using the system itself to explain aspects of itself to you. The way I think about that is like this. If I were to play a game of chess against Garry Kasparov , which I’ve played in the past, Magnus Carlsen , or the amazing chess players of all time, I wouldn’t be able to come up with a move that they could. But they could explain to me why they came up with that move and I could understand it post hoc, right? That’s the sort of thing one could imagine. One of the capabilities that we could make use of these systems is for them to explain it to us and even maybe get the proofs behind why they’re thinking something, certainly in a mathematical problem.", "Dwarkesh Patel 00:27:42", "Got it. Do you have a sense of what the converse answer would be? So what would have to be true where tomorrow morning you’re like “oh, man, I didn’t anticipate this.” You see some specific observation tomorrow morning that makes you say “we got to stop Gemini 2 training.”", "Demis Hassabis 00:27:55", "I could imagine that. This is where things like the sandbox simulations are important. I would hope we’re experimenting in a safe, secure environment when something very unexpected happens. There’s a new unexpected capability or something that we didn’t want. We explicitly told the system we didn’t want it but then it did and it lied about it. These are the kinds of things where one would want to then dig in carefully. The systems that are around today are not dangerous, in my opinion, but in a few years they might have potential. Then you would ideally pause and really get to the bottom of why it was doing those things before one continued.", "Gemini training", "Dwarkesh Patel 00:28:42", "Going back to Gemini, I’m curious what the bottlenecks were in the development. Why not immediately make it one order of magnitude bigger if scaling works?", "Demis Hassabis 00:28:52", "First of all, there are practical limits. How much compute can you actually fit in one data center? You’re also bumping up against very interesting distributed computing kind of challenges. Fortunately, we have some of the best people in the world working on those challenges and cross data center training, all of these kinds of things. There are very interesting hardware challenges and we have our TPUs that we’re building and designing all the time as well as using GPUs. So there’s all of that. Scaling laws also don’t just work by magic. You still need to scale up the hyperparameters , and various innovations are going in all the time with each new scale. It’s not just about repeating the same recipe at each new scale. You have to adjust the recipe and that’s a bit of an art form. You have to sort of get new data points. If you try to extend your predictions and extrapolate them several orders of magnitude out, sometimes they don’t hold anymore. There can be step functions in terms of new capabilities and some things hold, other things don’t. Often you do need those intermediate data points to correct some of your hyperparameter optimization and other things, so that the scaling law continues to be true. So there are various practical limitations to that. One order of magnitude is probably about the maximum that you want to do between each era.", "Dwarkesh Patel 00:30:23", "That’s so fascinating. In the GPT-4 technical report , they say that they were able to predict the training loss with a model with tens of thousands of times less compute than GPT-4. They could see the curve. But the point you’re making is that the actual capabilities that loss implies may not be so.", "Demis Hassabis 00:30:39", "Yeah, the downstream capabilities sometimes don’t follow. You can often predict the core metrics like training loss or something like that, but then it doesn’t actually translate into MMLU , or math, or some other actual capability that you care about. They’re not necessarily linear all the time. There are non-linear effects there.", "Dwarkesh Patel 00:30:57", "What was the biggest surprise to you during the development of Gemini in terms of something like this happening?", "Demis Hassabis 00:31:02", "I wouldn’t say there was one big surprise. It was very interesting trying to train things at that size and learning about all sorts of things from an organizational standpoint, like how to babysit such a system and to track it. There’s also things like getting a better understanding of the metrics you’re optimizing versus the final capabilities that you want. I would say that’s still not a perfectly understood mapping, but it’s an interesting one that we’re getting better and better at.", "Dwarkesh Patel 00:31:32", "There’s a perception that maybe other labs are more compute-efficient than DeepMind has been with Gemini. I don’t know what you make of that perception.", "Demis Hassabis 00:31:40", "I don’t think that’s the case. I think that actually Gemini 1 used roughly the same amount of compute, maybe slightly more, than what was rumored for GPT-4 . I don’t know exactly what was used but I think it was in the same ballpark. I think we’re very efficient with our compute and we use our compute for many things. One is not just the scaling but, going back to earlier, more innovations and ideas. A new innovation, a new invention, is only useful if it can also scale. So you need quite a lot of compute to do new invention because you’ve got to test many things, at least some reasonable scale, and make sure that they work at that scale. Also, some new ideas may not work at a toy scale but do work at a larger scale. In fact, those are the more valuable ones. So if you think about that exploration process, you need quite a lot of compute to be able to do that. The good news is we’re pretty lucky at Google. I think this year we’re going to have the most compute by far of any sort of research lab. We hope to make very efficient and good use of that in terms of both scaling and the capability of our systems and also new inventions.", "Dwarkesh Patel 00:32:51", "What’s been the biggest surprise to you, if you go back to yourself in 2010 when you were starting DeepMind, in terms of what AI progress has looked like? Did you anticipate back then that it would, in some large sense, amount to spending billions of dollars into these models? Or did you have a different sense of what it would look like?", "Demis Hassabis 00:33:06", "We thought that actually, and I know you’ve interviewed my colleague Shane. He always thought in terms of compute curves and comparing it roughly to the brain, how many neurons and synapses there are very loosely. Interestingly, we’re actually in that kind of regime now with roughly the right order of magnitude of number of synapses in the brain and the sort of compute that we have. But I think more fundamentally, we always thought that we bet on generality and learning. So those were always at the core of any technique we would use. That’s why we triangulated on reinforcement learning, and search, and deep learning as three types of algorithms that would scale, be very general, and not require a lot of handcrafted human priors. We thought that was the sort of failure mode of the efforts to build AI in the 90s in places like MIT. There were very logic-based systems, expert systems, and masses of hand-coded, handcrafted human information going into them that turned out to be wrong or too rigid. So we wanted to move away from that and I think we spotted that trend early. Obviously, we used games as our proving ground and we did very well with that. I think all of that was very successful and maybe inspired others. AlphaGo, I think, was a big moment for inspiring many others to think “oh, actually, these systems are ready to scale.” Of course then, with the advent of transformers, invented by our colleagues at Google Research and Brain, that was the type of deep learning that allowed us to ingest masses of amounts of information. That has really turbocharged where we are today. So I think that’s all part of the same lineage. We couldn’t have predicted every twist and turn there, but I think the general direction we were going in was the right one.", "Dwarkesh Patel 00:34:57", "It’s fascinating if you read your old papers or Shane’s old papers. In Shane’s thesis in 2009 , he said “well, the way we would test for AI is, can you compress Wikipedia?” And that’s literally, the loss function for LLMs. Or in your own paper in 2016 before transformers, you were comparing neuroscience and AI and you said attention is what is needed.", "Demis Hassabis 00:35:16", "Exactly. So we had these things called out and we had some early attention papers, but they weren’t as elegant as transformers in the end, neural Turing machines and things like this. Transformers were the nicer and more general architecture of that.", "Governance of superhuman AIs", "Dwarkesh Patel 00:35:30", "When you extrapolate all this out forward and you think about superhuman intelligence, what does that landscape look like to you? Is it still controlled by a private company? What should the governance of that look like concretely?", "Demis Hassabis 00:35:45", "I think that this is so consequential, this technology. I think it’s much bigger than any one company or even industry in general. I think it has to be a big collaboration with many stakeholders from civil society, academia, government, etc. The good news is that with the popularity of the recent chatbot systems, I think that has woken up many of these other parts of society to the fact that this is coming and what it will be like to interact with these systems. And that’s great. It’s opened up lots of doors for very good conversations. An example of that was the safety summit the UK hosted a few months ago, which I thought was a big success in getting this international dialogue going. I think the whole of society needs to be involved in deciding what we want to deploy these models for? How do we want to use them and what do we not want to use them for? I think we’ve got to try and get some international consensus around that and also make sure that these systems benefit everyone, for the good of society in general. That’s why I push so hard for things like AI for science. I hope that with things like our spin-out, Isomorphic , we’re going to start curing terrible diseases with AI, accelerate drug discovery, tackle climate change, and do other amazing things. There are big challenges that face humanity, massive challenges. I’m actually optimistic we can solve them because we’ve got this incredibly powerful tool of AI coming down the line that we can apply to help us solve many of these problems. Ideally, we would have a big consensus around that and a big discussion at sort of the UN level if possible.", "Dwarkesh Patel 00:37:25", "One interesting thing is if you look at these systems and chat with them, they’re immensely powerful and intelligent. But it’s interesting the extent to which they haven’t automated large sections of the economy yet. Whereas if five years ago I showed you Gemini, you’d be like “wow, this is totally coming for a lot of things.” So how do you account for that? What’s going on that it hasn’t had the broader impact yet?", "Demis Hassabis 00:37:48", "I think that just shows we’re still at the beginning of this new era. I think there are some interesting use cases where you can use these chatbot systems to summarize stuff for you and do some simple writing, maybe more boilerplate-type writing. But that’s only a small part of what we all do every day. I think for more general use cases we still need new capabilities, things like planning and search but also things like personalization and episodic memory. That’s not just long context windows , but actually remembering what we spoke about 100 conversations ago. I’m really looking forward to things like recommendation systems that help me find better, more enriching material, whether that’s books or films or music and so on. I would use that type of system every day. So I think we’re just scratching the surface of what these AI assistants could actually do for us in our general, everyday lives and also in our work context as well. I think they’re not reliable yet enough to do things like science with them. But I think one day, once we fix factuality and grounding and other things, I think they could end up becoming the world’s best research assistant for you as a scientist or as a clinician.", "Dwarkesh Patel 00:39:12", "I want to ask about memory. You had this fascinating paper in 2007 where you talked about the links between memory and imagination and how they, in some sense, are very similar. People often claim that these models are just memorizing. How do you think about that claim? Is memorization all you need because in some deep sense, that’s compression? What’s your intuition here?", "Demis Hassabis 00:39:34", "At the limit, one maybe could try and memorize everything but it wouldn’t generalize out of your distribution. The early criticisms of these early systems were that they were just regurgitating and memorizing. I think clearly in the Gemini, GPT-4 type era, they are definitely generalizing to new constructs. Actually my thesis , and that paper particularly that started that area of imagination in neuroscience, was showing that first of all memory, at least human memory, is a reconstructive process . It’s not a videotape. We sort of put it together back from components that seem familiar to us, the ensemble. That’s what made me think that imagination might be the same thing. Except in this case you’re using the same semantic components, but now you’re putting it together in a way that your brain thinks is novel, for a particular purpose like planning. I do think that that kind of idea is still probably missing from our current systems, pulling together different parts of your world model to simulate something new that then helps with your planning, which is what I would call imagination.", "Safety, open source, and security of weights", "Dwarkesh Patel 00:40:42", "For sure. Now you guys have the best models in the world with the Gemini models. Do you plan on putting out some sort of framework like the other two major AI labs have? Something like “once we see these specific capabilities, unless we have these specific safeguards, we’re not going to continue development or we’re not going to ship the product out.”", "Demis Hassabis 00:41:02", "Yes, we already have lots of internal checks and balances but we’re going to start publishing. Actually, watch this space. We’re working on a whole bunch of blog posts and technical papers that we’ll be putting out in the next few months along similar lines of things like responsible scaling laws and so on. We have those implicitly internally in various safety councils that people like Shane chair and so on. But it’s time for us to talk about that more publicly I think. So we’ll be doing that throughout the course of the year.", "Dwarkesh Patel 00:41:33", "That’s great to hear. Another thing I’m curious about is, there’s not only the risk of the deployed model being something that people can use to do bad things, but there’s also rogue actors, foreign agents, and so forth, being able to steal the weights and then fine-tune them to do crazy things. How do you think about securing the weights to make sure something like this doesn’t happen, making sure a very key group of people has access to them?", "Demis Hassabis 00:41:57", "It’s interesting. First of all, there’s two parts. One is security, one is open source , which maybe we can discuss. The security is super key just as normal cybersecurity type things. I think we’re lucky at Google DeepMind. We’re behind Google’s firewall and cloud protection which I think is best in class in the world corporately. So we already have that protection. Behind that, we have specific DeepMind protections within our code base. It’s sort of a double layer of protection. So I feel pretty good about that. You can never be complacent on that but I feel it’s already the best in the world in terms of cyber defenses. We’ve got to carry on improving that and again, things like the hardened sandboxes could be a way of doing that as well. Maybe there are even specifically secure data centers or hardware solutions to this too that we’re thinking about. I think that maybe in the next three, four, five years, we would also want air gaps and various other things that are known in the security community. So I think that’s key and I think all frontier labs should be doing that because otherwise for rogue nation-states and other dangerous actors, there would obviously be a lot of incentive for them to steal things like the weights. Of course, open source is another interesting question. We’re huge proponents of open source and open science. We’ve published thousands of papers, things like AlphaFold and transformers and AlphaGo. All of these things we put out there into the world, published and open source, most recently GraphCast , our weather prediction system. But when it comes to the general-purpose foundational technology, I think the question I would have for open source proponents is, how does one stop bad actors, individuals or up to rogue states, taking those same open source systems and repurposing them for harmful ends? We have to answer that question. I don’t know what the answer is to that, but I haven’t heard a compelling, clear answer to that from proponents of just open sourcing everything. So I think there has to be some balance there. Obviously, it’s a complex question of what that is.", "Dwarkesh Patel 00:44:18", "I feel like tech doesn’t get the credit it deserves for funding hundreds of billions of dollars’ worth of R&D, obviously you have DeepMind with systems like AlphaFold and so on. When we talk about securing the weights, as we said maybe right now it’s not something that is going to cause the end of the world or anything, but as these systems get better and better, there’s the worry that a foreign agent or something gets access to them. Presumably right now there’s dozens to hundreds of researchers who have access to the weights. What’s a plan for getting the weights in a situation room where if you need to access them it’s some extremely strenuous process and no individual can really take them out?", "Demis Hassabis 00:44:54", "One has to balance that with allowing for collaboration and speed of progress. Another interesting thing is that of course you want brilliant independent researchers from academia or things like the UK AI Safety Institute and the US one to be able to red team these systems. So one has to expose them to a certain extent, although that’s not necessarily the weights. We have a lot of processes in place about making sure that only if you need them, those people who need access have access. Right now, I think we’re still in the early days of those kinds of systems being at risk. As these systems become more powerful and more general and more capable, I think one has to look at the access question.", "Dwarkesh Patel 00:45:42", "Some of these other labs have specialized in different things relative to safety, Anthropic for example with interpretability . Do you have some sense of where you guys might have an edge? Now that you have the frontier model, where are you guys going to be able to put out the best frontier research on safety?", "Demis Hassabis 00:45:59", "I think we helped pioneer RLHF and other things like that which can obviously be used for performance but also for safety. I think that a lot of the self-play ideas and these kinds of things could also be used to auto-test a lot of the boundary conditions that you have with the new systems. Part of the issue is that with these very general systems, there’s so much surface area to cover about how these systems behave. So I think we are going to need some automated testing. Again, with things like simulations and games, very realistic virtual environments, I think we have a long history of using those kinds of systems and making use of them for building AI algorithms. I think we can leverage all of that history. And then around Google, we’re very lucky to have some of the world’s best cybersecurity experts, hardware designers. I think we can bring that to bear for security and safety as well.", "Multimodal and further progress", "Dwarkesh Patel 00:47:00", "Let’s talk about Gemini. So now you guys have the best model in the world. I’m curious. The default way to interact with these systems has been through chat so far. Now that we have multimodal and all these new capabilities, how do you anticipate that changing? Do you think that’ll still be the case?", "Demis Hassabis 00:47:17", "I think we’re just at the beginning of actually understanding how exciting that might be to interact with a full multimodal model system. It’ll be quite different from what we’re used to today with the chatbots. I think the next versions of this over the next year, 18 months, we’ll maybe have some contextual understanding of the environment around you through a camera or a phone or some glasses. I could imagine that as the next step. And then I think we’ll start becoming more fluid in understanding “let’s sample from a video, let’s use voice.” Maybe even eventually things like touch and if you think about robotics, other types of sensors. So I think the world’s about to become very exciting in the next few years as we start getting used to the idea of what true multimodality means.", "Dwarkesh Patel 00:48:13", "On the robotics subject, when he was on the podcast Ilya said that the reason OpenAI gave up on robotics was because they didn’t have enough data in that domain, at least at the time they were pursuing it. You guys have put out different things like Robo-Transformer and other things. Do you think that’s still a bottleneck for robotics progress, or will we see progress in the world of atoms as well as the world of bits?", "Demis Hassabis 00:48:30", "We’re very excited about our progress with things like Gato and RT-2. We’ve always liked robotics and we’ve had amazing research in that. We still have that going now because we like the fact that it’s a data-poor regime. That pushes us in very interesting research directions that we think are going to be useful anyway: sampling efficiency and data efficiency in general, transfer learning , learning from simulation and transferring that to reality, sim-to-real. All of these are very interesting general challenges that we would like to solve. The control problem. So, we’ve always pushed hard on that. I think Ilya is right. It is more challenging because of the data problem. But I think we’re starting to see the beginnings of these large models being transferable to the robotics regime. They can learn in the general domain, language domain and other things, and then just treat tokens like Gato as any type of token. The token could be an action, it could be a word, it could be part of an image, a pixel, or whatever it is. That’s what I think true multimodality is. To begin with, it’s harder to train a system like that than a straightforward language system. But going back to our early conversation on transfer learning, you start seeing that with a true multimodal system, the other modalities benefit some different modalities. You get better at language because you now understand a little bit about video. So I do think it’s harder to get going, but ultimately we’ll have a more general, more capable system like that.", "Dwarkesh Patel 00:50:10", "What ever happened to Gato? That was super fascinating that you could have it play games and also do video and also do text.", "Demis Hassabis 00:50:15", "We’re still working on those kinds of systems, but you can imagine we’re trying to build those ideas into our future generations of Gemini to be able to do all of those things. Robotics, transformers, and things like that, you can think of them as follow-ups to that.", "Dwarkesh Patel 00:50:33", "Will we see asymmetric progress in the domains in which the self-play kinds of things you’re talking about will be especially powerful? So math and code. Recently, you have these papers out about this. You can use these things to do really cool, novel things. Will they be superhuman coders, but in other ways they might still be worse than humans? How do you think about that?", "Demis Hassabis 00:50:52", "I think that we’re making great progress with math and things like theorem proving and coding. But it’s still interesting if one looks at creativity in general, and scientific endeavor in general. I think we’re getting to the stage where our systems could help the best human scientists make their breakthroughs quicker, almost triage the search space in some ways. Perhaps find a solution like AlphaFold does with a protein structure. They’re not at the level where they can create the hypothesis themselves or ask the right question. As any top scientist will tell you, the hardest part of science is actually asking the right question. It’s boiling down that space to the critical question we should go after and then formulating the problem in the right way to attack it. That’s not something our systems really have any idea how to do, but they are suitable for searching large combinatorial spaces if one can specify the problem with a clear objective function. So that’s very useful already for many of the problems we deal with today, but not the most high-level creative problems.", "Dwarkesh Patel 00:52:06", "DeepMind has published all kinds of interesting stuff in speeding up science in different areas. If you think AGI is going to happen in the next 10 to 20 years, why not just wait for the AGI to do it for you? Why build these domain-specific solutions?", "Demis Hassabis 00:52:21", "I think we don’t know how long AGI is going to be. We always used to say, back even when we started DeepMind, that we don’t have to wait for AGI in order to bring incredible benefits to the world. My personal passion especially has been AI for science and health. You can see that with things like AlphaFold and all of our various Nature papers on different domains and material science work and so on. I think there’s lots of exciting directions and also impact in the world through products too. I think it’s very exciting and a huge unique opportunity we have as part of Google. They’ve got dozens of billion-user products that we can immediately ship our advances into and then billions of people can improve, enrich, and enhance their daily lives. I think it’s a fantastic opportunity for impact on all those fronts.", "I think the other reason from the point of view of AGI specifically is that it battle tests your ideas. You don’t want to be in a research bunker where you theoretically are pushing things forward, but then actually your internal metrics start deviating from real-world things that people would care about, or real-world impact. So you get a lot of direct feedback from these real-world applications that then tells you whether your systems really are scaling or if  we need to be more data efficient or sample efficient. Because most real-world challenges require that. So it kind of keeps you honest and pushes you to keep nudging and steering your research directions to make sure they’re on the right path. So I think it’s fantastic. Of course, the world benefits from that. Society benefits from that on the way, maybe many years before AGI arrives.", "Inside Google DeepMind", "Dwarkesh Patel 00:54:18", "The development of Gemini is super interesting because it comes right at the heels of merging these different organizations, Brain and DeepMind . I’m curious, what have been the challenges there? What have been the synergies? It’s been successful in the sense that you have the best model in the world now. What’s that been like?", "Demis Hassabis 00:54:33", "It’s been fantastic actually, over the last year. Of course it’s been challenging to do, like any big integration coming together. You’re talking about two world-class organizations with long, storied histories of inventing many important things from deep reinforcement learning to transformers. So it’s very exciting to actually pool all of that together and collaborate much more closely. We always used to be collaborating, but more on a project-by-project basis versus a much deeper, broader collaboration like we have now. Gemini is the first fruit of that collaboration, including the name Gemini implying twins. Of course, a lot of other things are made more efficient like pooling compute resources together and ideas and engineering. I think at the stage we’re at now, there are huge amounts of world-class engineering that have to go into building the frontier systems. I think it makes sense to coordinate that more.", "Dwarkesh Patel 00:55:31", "You and Shane started DeepMind partly because you were concerned about safety. You saw AGI coming as a live possibility. Do you think the people who were formerly part of Brain, that half of Google DeepMind now, approach it in the same way? Have there been cultural differences there in terms of that question?", "Demis Hassabis 00:55:47", "This is one of the reasons we joined forces with Google back in 2014. I think the entirety of Google and Alphabet, not just Brain and DeepMind, takes these questions of responsibility very seriously. Our kind of mantra is to try and be bold and responsible with these systems. I’m obviously a huge techno-optimist but I want us to be cautious given the transformative power of what we’re bringing into the world collectively. I think it’s important. It’s going to be one of the most important technologies humanity will ever invent. So we’ve got to put all our efforts into getting this right and be thoughtful and also humble about what we know and don’t know about the systems that are coming and the uncertainties around that. In my view, the only sensible approach when you have huge uncertainty is to be cautiously optimistic and use the scientific method to try and have as much foresight and understanding about what’s coming down the line and the consequences of that before it happens. You don’t want to be live A/B testing out in the world with these very consequential systems because unintended consequences may be quite severe. So I want us to move away, as a field, from a sort of “move fast and break things attitude” which has maybe served the Valley very well in the past and obviously created important innovations. I think in this case we want to be bold with the positive things that it can do and make sure we advance things like medicine and science whilst being as responsible and thoughtful as possible with mitigating the risks.", "Dwarkesh Patel 00:57:27", "That’s why it seems like the responsible scaling policies are something that are a very good empirical way to pre-commit to these kinds of things.", "Demis Hassabis 00:57:34", "Yes, exactly.", "Dwarkesh Patel 00:57:35", "When you’re doing these evaluations and for example it turns out your next model could help a layperson build a pandemic-class bioweapon or something, how would you think first of all about making sure those weights are secure so that they don't get out? And second, what would have to be true for you to be comfortable deploying that system? How would you make sure that this latent capability isn’t exposed?", "Demis Hassabis 00:57:58", "The secure model part I think we’ve covered with the cybersecurity and making sure that’s world-class and you’re monitoring all those things. I think if a capability like that was discovered through red teaming or external testing, independent testers like government institutes or academia or whatever, then we would have to fix that loophole. Depending on what it was, that might require a different kind of constitution perhaps, or different guardrails, or more RLHF to avoid that. Or you could remove some training data, depending on what the problem is. I think there could be a number of mitigations. The first part is making sure you detect it ahead of time. So that’s about the right evaluations and right benchmarking and right testing. Then the question is how one would fix that before you deployed it. But I think it would need to be fixed before it was deployed generally, for sure, if that was an exposure surface.", "Dwarkesh Patel 00:58:57", "Final question. You’ve been thinking in terms of the end goal of AGI at a time when other people thought it was ridiculous in 2010. Now that we’re seeing this slow takeoff where we’re actually seeing generalization and intelligence, what is like psychologically seeing this? What has that been like? Has it just been sort of priced into your world model so it’s not new news for you? Or actually just seeing it live, are you like “wow, something’s really changed”? What does it feel like?", "Demis Hassabis 00:59:24", "For me, yes, it’s already priced into my world model of how things were going to go, at least from the technology side. But obviously, we didn’t necessarily anticipate that the general public would be so interested this early in the sequence. If ChatGPT and chatbots hadn’t gotten the interest they ended up getting—which I think was quite surprising to everyone that people were ready to use these things even though they were lacking in certain directions, impressive though they are—then we would have produced more specialized systems built off of the main track, like AlphaFold and AlphaGo, our scientific work. I think then the general public maybe would have only paid attention later down the road when in a few years’ time, we have more generally useful assistant-type systems. So that’s been interesting. That’s created a different type of environment that we’re now all operating in as a field. It’s a little bit more chaotic because there’s so many more things going on, and there’s so much VC money going into it, and everyone’s sort of almost losing their minds over it. The only thing I worry about is that I want to make sure that, as a field, we act responsibly and thoughtfully and scientifically about this and use the scientific method to approach this in an optimistic but careful way. I think I’ve always believed that that’s the right approach for something like AI, and I just hope that doesn’t get lost in this huge rush.", "Dwarkesh Patel 01:00:59", "Well, I think that’s a great place to close. Demis, thank you so much for your time and for coming on the podcast.", "Demis Hassabis 01:01:04", "Thanks. It’s been a real pleasure." ]
[ "https://firstderivative.substack.com/", "https://blog.google/authors/demis-hassabis/", "https://deepmind.google/", "https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence", "https://en.wikipedia.org/wiki/Large_language_model", "https://arxiv.org/abs/2210.07128", "https://www.transformer-circuits.pub/2022/mech-interp-essay", "https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging", "https://en.wikipedia.org/wiki/Single-unit_recording", "https://colah.github.io/about.html", "https://en.wikipedia.org/wiki/Computational_neuroscience", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/Deep_learning", "https://arxiv.org/abs/2007.06700", "https://en.wikipedia.org/wiki/Attention_(machine_learning)", "https://www.scientificamerican.com/article/how-the-brain-constructs-the-outside-world/", "https://www.gatsby.ucl.ac.uk/~demis/ConstructionSystem%28PTrans09%29.pdf", "https://deepmind.google/discover/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/", "https://en.wikipedia.org/wiki/Search_tree", "https://deepmind.google/technologies/alphago/", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)", "https://en.wikipedia.org/wiki/Stockfish_(chess)", "https://en.wikipedia.org/wiki/Go_(game)", "https://en.wikipedia.org/wiki/Reinforcement_learning#Introduction", "https://en.wikipedia.org/wiki/Monte_Carlo_tree_search", "https://en.wikipedia.org/wiki/Einstein%27s_thought_experiments", "https://en.wikipedia.org/wiki/Self-play", "https://en.wikipedia.org/wiki/Synthetic_data#Machine_learning", "https://cloud.google.com/use-cases/multimodal-ai", "https://www.nature.com/articles/nature24270.epdf?sharing_token=eCuQUsmbU6qd0VTBZu9HcdRgN0jAjWel9jnR3ZoTv0MzTglC12p1NiU2orOu-c1nBdYrrl5mO_G-4ivReqyPdGv0boj2nH8_NnzXnI1rE3lRRXLm5jqFbf9XfTa5L03rbWmdULIbQTksRNJjpyVUTH6LQkN_WDbORVDuwHp_fsP9dGjjhOyl_XOw4GicQg-4yLNvthltpx07OQZhs9Fjeg%3D%3D&tracking_referrer=www.newyorker.com", "https://www.science.org/doi/10.1126/science.aar6404", "https://www.nature.com/articles/nature14236", "https://arxiv.org/pdf/1312.5602v1.pdf", "https://en.wikipedia.org/wiki/Q-learning", "https://deepmind.google/discover/blog/deep-reinforcement-learning/", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://cs.stackexchange.com/questions/76647/what-is-meant-by-the-term-prior-in-machine-learning", "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://gwern.net/scaling-hypothesis#scaling", "https://arxiv.org/abs/2206.07682", "https://deepgram.com/ai-glossary/grounding", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://iep.utm.edu/lang-phi/", "https://research.google/", "https://research.google.com/teams/brain/?authuser=2", "https://huggingface.co/blog/alonsosilva/nexttokenprediction", "https://gemini.google.com/", "https://www.dwarkeshpatel.com/p/shane-legg", "https://en.wikipedia.org/wiki/Shane_Legg", "https://deepmind.google/technologies/alphafold/", "https://www.twosigma.com/articles/interpretability-methods-in-machine-learning-a-brief-survey/", "https://techxplore.com/news/2024-01-ai-agents.html", "https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-sandboxing/", "https://deepmind.google/discover/blog/competitive-programming-with-alphacode/", "https://paperswithcode.com/task/automated-theorem-proving/latest", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://en.wikipedia.org/wiki/Garry_Kasparov", "https://en.wikipedia.org/wiki/Magnus_Carlsen", "https://arxiv.org/abs/2311.08105", "https://cloud.google.com/tpu", "https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)", "https://arxiv.org/abs/2303.08774", "https://arxiv.org/abs/2009.03300", "https://blog.google/technology/ai/google-gemini-ai/", "https://airtable.com/appDFXXgaG1xLtXGL/shr95fjOzUHYA0MDq/tblhmFk3gP7psWh3C", "https://www.vetta.org/documents/Machine_Super_Intelligence.pdf", "https://www.datacamp.com/tutorial/loss-function-in-machine-learning", "https://www.cell.com/neuron/article/S0896-6273(17)30509-3/fulltext", "https://en.wikipedia.org/wiki/Neural_Turing_machine", "https://www.gov.uk/government/topical-events/ai-safety-summit-2023", "https://www.isomorphiclabs.com/", "https://blog.google/technology/ai/long-context-window-ai-models/", "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2571957/", "https://static1.1.sqspcdn.com/static/f/1096238/22752296/1369317078327/DemisHassabisThesis.pdf", "https://en.wikipedia.org/wiki/Reconstructive_memory", "https://cdn.sanity.io/files/4zrzovbb/website/1adf000c8f675958c2ee23805d91aaade1cd4613.pdf", "https://cdn.openai.com/openai-preparedness-framework-beta.pdf", "https://deepai.org/machine-learning-glossary-and-terms/weight-artificial-neural-network", "https://en.wikipedia.org/wiki/Open-source_artificial_intelligence", "https://en.wikipedia.org/wiki/Air_gap_(networking)", "https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/", "https://www.gov.uk/government/organisations/ai-safety-institute", "https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute", "https://en.wikipedia.org/wiki/Red_team", "https://www.anthropic.com/news/decomposing-language-models-into-understandable-components", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://deepmind.google/discover/blog/rt-2-new-model-translates-vision-and-language-into-action/", "https://deepmind.google/discover/blog/a-generalist-agent/", "https://en.wikipedia.org/wiki/Transfer_learning", "https://www.science.org/doi/10.1126/sciadv.adg3256", "https://en.wikipedia.org/wiki/Combinatorial_search", "https://www.nature.com/search?author=Demis%20Hassabis", "https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/", "https://blog.google/technology/ai/april-ai-update/", "https://chat.openai.com/" ]
https://www.dwarkesh.com/p/dominic-cummings
Dominic Cummings - COVID, Brexit, & Fixing Western Governance
[ "(00:00:00) - One day in COVID…", "Dwarkesh Patel 00:00:00", "Today I have the pleasure of speaking with Dominic Cummings. He was the chief advisor to Boris Johnson when he was Prime Minister and before that, he masterminded the Brexit campaign.", "Let’s start with talking about your time in Number 10 as a chief advisor. What is the thing that most people don’t understand about being in that government and that famous ministry?", "Dominic Cummings 00:00:25", "When you go through that door, you’re basically going into a rabbit warren of old townhouses that have been knocked together behind the scenes. It’s nothing like any kind of modern office building. It’s a very, very odd physical environment. And first of all I think you probably would be struck by the constant string of chaos. I don’t think people really appreciate what it’s like being in a building like that.", "For example, in one day on COVID, the day starts off with: Are we going to have a lockdown? It then proceeds to the Prime Minister’s girlfriend going crazy about the media. It then involves Trump calling up saying, “We’ve got to go and bomb all these people in Iraq.” It then goes to the deep state coming in and saying, “We don’t think we should because it’s probably going to bomb the wrong people.” And then other parts of the system come in and say, “No, we should bomb them because we’ve got to stay friends with America.” Then there’s some other disaster on the news with something flooding. And it’s just constant.", "Obviously, some days are more crazy than others, but if you haven’t been in that environment, it’s extremely hard to appreciate that you have these handful of people trying to come up with the answers to extremely hard problems. The weight of the news flooding in on you just makes it intrinsically difficult. Then on top of that, you have these just incredibly old, centralized bureaucracies actually trying to cope with all of this.", "For example, when we arrived in Number 10, there wasn’t even a file sharing system such that a three man setup could at least write a press release on a Google Doc, for example. The British Prime Minister did not have access to such a system.", "Dwarkesh Patel 00:02:29", "Was it because of security concerns? What was the reason?", "Dominic Cummings 00:02:32", "A combination of bureaucracy, security concerns and arguments between. The Google Docs thing is actually a sort of interesting thing. So we arrive in summer 2019 and there is no system for file sharing. We say — This is completely insane. We are going to create a system for file sharing. Months of wrangling ensue. Two different parts of the deep state basically argue about whether Teams or Google Docs is more susceptible to China and Russia intercepting. No resolution. This actually ends up affecting COVID. We force a resolution. We get GCHQ to build a system. Still now, four years after we started this discussion, the Cabinet office and Number 10 are still fighting about Google Docs versus Teams, and have now recently resolved to hire some consultants for millions of pounds to spend a year doing a study on it.", "So one very small little thing in a way, but probably certainly anybody who’s been at a kind of high functioning company would just be completely stunned by how the core of a G-7 state actually works.", "Dwarkesh Patel 00:03:40", "Hopefully the consultants are allowed to use Google Docs to decide whether you’re allowed to use Google Docs.", "Dominic Cummings 00:03:45", "[Laughter] Quite.", "Dwarkesh Patel 00:03:46", "You’ve written on your blog “I had these priorities, these main things I wanted to accomplish.” But you have all these things that are coming up day to day. Do these things feel like distractions from the thing you went into the government to do? Do they all feel important? So you have this big picture idea and all these things are coming up. What does that feel like?", "Dominic Cummings 00:04:08", "A fundamental problem with how the British state works is this question of prioritization and the Prime Minister’s time. So you have all of these normal parts of the system that essentially can’t really do anything quickly at all, even in a crisis. So the Prime Minister’s time and the Prime Minister’s prioritization is the most important asset. But also it’s something which is constantly pulled hither and thither by all of this craziness.", "One of the things that obviously we wanted to do was fundamentally reorient Number 10, away from what it’s been since Thatcher, which is a kind of press entertainment service. Where the whole building is just built to respond to what the media says and instead say, “What do we actually think is important?” And what is the management system you’re going to build that actually can maintain focus on those things whilst the inevitable chaos goes on?", "Most obviously, for example, some terrorist incident happens in central London. Prime Minister can’t ignore that. It’s going to involve the Prime Minister’s time. Politics, the news, the government means that there are things like that happening all day, every day,", "So if you have the normal state of affairs, then the Prime Minister is just pulled from one crisis like that after another. Now, some of them are justifiable. You’ve got to deal with a terrorist crisis. But in fact, what’s happened over the last 20 years is that that’s just been the model for how Prime Ministers spend practically all of their time. And there isn’t a kind of background system with a set of people who are just plodding on, going “Right. The actually important things are science and technology agenda, productivity reform, the Ministry of Defence, Procurement Reform, deep state this, etc.” So that’s a fundamental structural issue.", "We tried to deal with that in various ways, but in January 2020, it was clear that this essential question was already a fundamental disagreement between me and Boris. So in the first week of January, Trump had just whacked that Iranian guy. I forgot his name, the head of the revolutionary whatever it was, with the drone strike in Iraq. So we’re talking in that week about the future. So what, three weeks after the election?", "And he says, these various journalists are complaining that you don’t return their calls. Now’s the time for us to make friends with all of these people after the chaos of 2019.", "And I said, “No, I don’t think that’s right at all. We’ve gone through hell. We’ve won an 80 seat majority. Now we have to change how this building works and we have to focus on the actual big problems facing the country. We’ve got to get Number 10 and your time, the PM’s time, away from how it’s been managed for the last 20 years, which is media entertainment service.”", "And his response was, “No, no, no, no. That’s crazy. Everyone will go crazy with us if we do that. We’ve got to make friends with the media. We drove them all mad last year. Now it’s time for a great reconciliation.”", "And I said, we could do that, but if we do that, we’ll just be the same as every other government for the last 20 years, which is chasing the bullshit in the media all day. And 4 years will pass and we won’t actually have done anything. And by then it’ll be 14 years of Conservative government. We’ve just won an election, partly by running against the previous decade of the Conservative Party. Can’t pull that off again. We’re actually going to have to do a whole bunch of things. And also, that’s why the people are here. We actually want to solve these problems.", "Those issues get to the heart of a lot of it. You have a management question about how you actually manage priorities in such an insane environment. You have a personnel problem about a lack of talent. But you also have this fundamental question of, do the key politicians want to spend their time on the important problems, or do they want to spend their time running around all day dealing with the media? And the answer is, almost all of them want to do the second.", "(00:08:26) - Why is government broken?", "Dwarkesh Patel 00:08:26", "There’s a lot I want to ask about there. It’s interesting because the way you’re describing it, it sounds like it’s a company with the Prime Minister as a CEO who’s dealing with all these operational things that come up, but there is no CEO who has a long term plan.", "I’m curious about the media stuff. I want to understand why it’s the case that they’re so obsessed with the media. Does the media actually matter that much to how people perceive the politicians? How is the media interacting with the things that are happening in the building so that people are constantly thinking of the media? Why is the media so ever present in people’s minds?", "Dominic Cummings 00:09:02", "Just go back to your first point about the kind of CEO analogy. Imagine if Steve Jobs or Tim Cook or Patrick Collison or someone actually spent a large part of their day just doing photo ops as well. That’s actually the reality about how a lot of these jobs have evolved. So you have a person whose time is the single most precious asset, who, because of the dysfunctional bureaucracy, his decisions or her decisions, is the only thing that can actually break down the bureaucratic resistance and make sure something happens. Yet they’re actually standing with the ambassador from Tongazonga, whatever country, just doing photo ops for a large part of the day or going to stupid ceremonies or whatever it might be.", "So again, I think that analogy also shows some of the central problems. These people have all grown up in a system where they just don’t know any better than dealing with the media all day. Now, if you actually understand communication, you know that communication is not the same as answering questions to the media, but that’s not what they think, and it’s not the environment in which they operate, and also it’s not the incentives in which they work.", "Again, one of the funny conversations I had with Boris was, you know, we should say to the ministers that here’s your actual priorities as defined by us. Whether or not you get promoted and whether or not your career goes well is going to be defined by how well your department actually fulfills these goals. We don’t care about all of your interviews. We don’t care if you are on TV or never on TV. That’s not how we’re going to judge. Because they’ve all grown up in a culture where they think whether or not they’re going to be promoted really depends on: Are they seen as a good media performer? Or do they botch things on the media? Well, that’s just a fundamentally bad criteria, not least because their definitions of what’s good on the media are themselves terrible. By approaching government like that, you’re incentivizing them to think that their goal is making friends with the media. So then they get good interviews. That also incentivizes them to leak everything. So again, the culture and the incentives are self reinforcing in a very negative way.", "Dwarkesh Patel 00:11:35", "Let’s say the Prime Minister was somebody who agreed with you and said no more interviews. Ministers, you’re going to do your thing. Media is out. How would the media react? Would that matter?", "Dominic Cummings 00:11:49", "We sort of did that in quarter one of 2020. We actually stopped participating in the number one insider elite media show on BBC Radio. We just said, we think the show is rubbish and it’s a waste of everyone’s time and no ministers are going to be going on it anymore. The media went completely crazy. One of the most senior people in the BBC said to me that I was a fascist. And a lot of the MPs went crazy as well because for them, being on that show is their raison d’etre. So it was extremely unpleasant and disruptive for both the old media and the MPs who’ve grown up in that culture.", "Dwarkesh Patel 00:12:30", "How much does it have to do with the fact that in the British political system, the people who are running these departments have to be an MP, so they have to be an elected official? Is that why they’re so obsessed with the media? Because it’s like if a congressman had to become the Secretary of State always.", "Dominic Cummings 00:12:46", "That’s part of it, though I would pick up on your language, which I think is important. Because you say the people who are actually running the departments, of course they’re not running the department. They describe themselves as “I’m running the department”. The media describes them as they’re running the department. But how many people can the Secretary of State for Defense fire? Three people.", "Dwarkesh Patel 00:13:20", "He cannot fire the others?", "Dominic Cummings 00:13:21", "He legally cannot fire anybody else in the building.", "Dwarkesh Patel 00:13:20", "So who can he fire?", "Dominic Cummings 00:13:21", "The personal three people that he brings in that. He can get rid of those three people. The other hundreds of thousands of people in the system, he can’t get rid of a single one of them. So the person who actually is in charge of personnel at the Ministry of Defence is the Permanent secretary. Stressing the word permanent.", "The only person in the British state who has the legal and constitutional ability to say “This senior person in the Ministry of Defence or the Department of Education is clearly failing and they must be removed, and I order that they be removed” is the Prime Minister. That itself is problematic because the large part system will say, “Well, we can’t just dismiss them, or if we do dismiss them, then there’ll be legal repercussions.” But you can actually remove them from the post. There are complications, and those complications have grown in the last few years because of the way that the legal system and various parts of the law have evolved. But the PM can actually remove someone. But only the PM can. So that gives you an idea as well of the tremendous bottlenecks that occur inside Number 10.", "I’ll tell you a story about it that kind of summarizes it. At the peak of COVID craziness in March 2020, on the day itself that the PM tested positive for CoVID, a bunch of people come into Number 10 sit around the table and we have a meeting and it’s about supplies of PPE to the NHS.", "They say, “None of this ppe that we’ve ordered is going to be here until the summer.”", "“But the peak demand is over the next three to four weeks.”", "“Sorry, Dominic, but it’s not going to be here.”", "“Why not?”", "“Well, because that’s how long it takes to ship from China.”", "“Why are you shipping from China?”", "“Well, because that’s what we always do. We ship it from China.”", "But A, we need it now and B, all of the airlines are grounded. No one’s flying anything.", "“So call up the airlines, tell them that we’re taking their planes, we’re flying all the planes to China, we’re picking up all our shit, we’re bringing it back here. Do that now. Do that today. Send the planes today.”", "We did that. But only the Prime Minister could actually cut through all the bureaucracy and say, Ignore these EU rules on Blah. Ignore treasury guidance on Blah. Ignore this. Ignore that. “I am personally saying do this and I will accept full legal responsibility for everything.”", "You multiply that kind of problem by hundreds and thousands of problems, you get a sense of partly why COVID was so crazy. This is normal government. But in a crisis, when no part of the system can actually move fast, all of these bottlenecks end up very dramatically escalating to the PM’s office. And if you read Jared Kushner’s book, Memoir about the White House , there are very, very similar tales there. That a lot of things that obviously should have been solved elsewhere couldn’t be solved at any other part of the system. They all end up cascading upwards in these centralized bureaucracies, because ultimately only the president or only the Prime Minister can give certain kinds of orders.", "Dwarkesh Patel 00:16:38", "What if the Prime Minister said, I am giving a blank check. If I appoint you as minister, whatever you say, I’m going to sign it. So you can rule as if I agree on everything you said. Could they write such a blank check and basically give the minister whatever authority they might want?", "Dominic Cummings 00:17:00", "To some extent. The system here, in a lots of ways, the Prime Minister has more powers than the president does and is less legally constrained than the president is, because in all kinds of gaps, in lacunae, in the system, there are kind of ancient assumptions that the PM is operating with the authority of the Crown and royal prerogative. So the PM could just do a whole bunch of things, particularly in a crisis, and doesn’t have to necessarily get approval from Parliament.", "The most obvious example of what you’re talking about in COVID was on the vaccine task force. I said, with others, in March 2020 that the normal system obviously can’t deal with vaccines. It can’t do anything fast. We’re watching it deal with all of these other logistical problems. We have to create a completely different entity to deal with vaccines. So we essentially appointed someone and the Prime Minister said: Ignore all the EU procurement rules. Ignore all UK procurement rules. Ignore everything. You just build the vaccine task force the way that you want to. We’re freeing you from all the normal Whiteall, HR, all the normal things which add massive friction and mean that nobody can do anything quickly and the simplest things take months and months.", "To a large extent, because of the scale of the crisis that happened. But the vaccine task force was still plagued by lots of parts of Whitehall saying, “Well, we don’t want you to do this. We don’t like this.” So the reality is that it depends on the characters involved. It depends on how much the person the PM has empowered will actually push. It depends on the extent to which they come to the PM. It depends on the extent to which the PM then shows everyone in the system that I’m actually supporting them and also fundamentally is prepared to say that people will just be removed from their post if they don’t do what they’re told.", "Covid showed how much faster you can do things if the PM’s authority is used and you could prepare to drive things through the system. And the PM is prepared to say, “I will deal with legal issues later.” But it also showed that even in a crisis where literally thousands of people are dying day after day after day, that large parts of the bureaucracy will still simply say “No. We are optimizing for sticking to the old rules.” And that also happened an awful lot.", "Dwarkesh Patel 00:19:38", "What I find interesting is you’ve written a lot after you got out of the government about all these problems, but there’s other people who have been at a high level in governments in the UK, US, wherever. And it’s weird to me that if there’s this much dysfunction, are they not noticing it? Are they just refusing to talk about it? Have they just never been in a functional environment, maybe in the private sector or something, where they realize how dysfunctional this is? Why is everybody not screaming about this as soon as they get out of the government?", "Dominic Cummings 00:20:08", "It’s a mix. There’s a lot of people inside the system who don’t actually think of it as being dysfunctional. That’s their life, it’s completely normal for them. So we arrive at Number 10. I’ve taken a bunch of people who are actually experts in data science, AI and blah blah. We talk to a bunch of officials about projects around data, and officials say, “Oh yeah, well, here’s some examples of some interesting projects we’ve done.” You ask them about it. Turns out that they took two and a half years.", "You ask “Why did it take two and a half years?”", "“Well, the actual project took like eight to twelve weeks. The rest of the two and a half years was the system emailing each other about legal permissions on what to do.”", "Now for the people that I brought in from outside, this is obviously insane. No one in any normal functioning environment would spend two and a half years emailing each other to do a project taking eight weeks. But you have to realize that for everyone inside the system, this is completely standard. And a lot of the people who are most senior in the system have been in a thing like that for 30-40 years. They don’t know anything different. And it just seems completely normal to them that this is what would happen.", "And in fact, In 2020, for example, when we did some things very differently, it was extremely disruptive and extremely unwelcome to the large part of the system. Hence why a lot of what we did was closed down.", "Did they say, “Okay, the vaccine task force and operation warp speed and the state have been great successes. We should massively reinforce them. We should build the next generation of vaccines. We should spread the lessons of how the task force operated.”?", "No, they basically closed the task force. Sewage monitoring closed. Rapid testing, basically closed and forgot to order enough tests the following year.", "So if you look back at 2020, most of the people who were most wrong were given awards and honors by the system and promoted to new jobs. The people who were most obviously repeatedly right have almost all left.", "What incentive is there for people to speak out about how these things work? No one expects anything to change. Even after something as big as COVID, when you see what the reaction is, everyone can now see the truth. You can have a once century pandemic. It can kill tens of thousands of people unnecessarily. It can be a complete carnage for the economy, and everyone will just basically go back to normal. MPs will ignore it and nothing much will change.", "So if you’re a standard official inside the system, all the signals to you are very clear. In fact, in 2021, it was even more powerful than that. There were a whole load of legal actions brought to say that the real problem with 2020 was that we went too fast and we did things too quickly. People actually brought legal actions against the vaccine task force. They brought legal actions against rapid testing. They brought legal actions against all sorts of activities.", "The system didn’t say “This is completely insane. Actually, the bureaucracy and the sloth killed thousands of voters.” It said “Yes, we’re going to investigate all of this.” Every signal propagated through the system was essentially back to normal. You will be promoted for being the most insane process, and you will be demoted and blacklisted if you say this process is insane and try to do better.", "Dwarkesh Patel 00:23:54", "There’s two things I’m curious about here.", "One, who exactly is the person who decided that the vaccine task force has to go and we can’t continue it.", "And two, and this might be related. What happens if a PM decides, I don’t give a shit about these legal challenges. Pile them on if you want. Is it just going to cost a lot in lawyer fees? Why does it matter?", "Dominic Cummings 00:24:16", "Manhattan Project is much in the news with the Oppenheimer movie. If you look at the very last bit of General Grove’s book on Manhattan Project , he talks about what are the most fundamental principles about why it succeeded? And one of those principles is relevant to government. Actually, they’re all relevant to government.", "One of the principles is that the quality of the people is fundamental. Another one is that responsibility and authority are always delegated together. The entire British constitutional system and management structure is based on the fundamentally opposite principle. Responsibility and authority are not delegated together. So if you’re asking about something like the vaccine task force, in the normal system, nobody really is in charge of anything. Lots of people can criticize, lots of people can complain, lots of people can argue about things. Lots of people can veto. Almost nobody ever has the authority just to build something or just to do something.", "Why did we create the vaccine task force the way we did? Well, because we were trying to actually embody principles like responsibility and authority pulled together. We brought one person in, we said “You are responsible.” But once we’d gone, then what happens to that entity? It’s sitting there amid Whitehall while all the normal parts of Whitehall just start going back to being normal. So what happens? They say, well, they are exempt from all of these rules on HR that the Cabinet Office imposes on every part of Government. This should change because it’s going back to normal. They have to do the following things properly. We gave them special dispensations because of the extraordinary circumstances of summer 2020, but these now come to an end.", "So those sorts of things come in. The treasury says, “The spending rules and how the people in the vaccine task force make decisions, that was an emergency thing. Now the normal rules apply again.” So before you know it, all the different parts of the system have basically said, the thing that you created outside of the normal system now has to obey all of the things that it was specifically created to avoid.", "Now, the system will just do that automatically unless there is a very powerful counterforce. Fundamentally, again, only the PM can say,”No, we’re not having that. In fact, I want to strengthen the vaccine task force. We want to move on to the next generation of vaccines. etc.” If they don’t do that, and if the people in charge of it can’t call on the PM’s authority, the system will just devour the new entity very, very quickly and force it to conform with all of the normal system.", "I’ll give you another example of this on rapid testing. One of the things that we did to get the rapid testing to work was we got a guy who formerly was commanding officer of the SAS, British Special Forces, and this guy got a bunch of his friends from Special Forces also to work on rapid testing. When we first got this pushing from Number 10, I got the critical people from procurement, commercial HR, etc, into the Cabinet room with the Cabinet Secretary, the single most important official in the whole country, and the two of us said, “The PM wants rapid testing dealt with as if this is a wartime crisis.” We’re going to have a second wave. There’s going to be thousands more people getting CoViD, there’s NHS. People are dying, etc. We can’t have any of the normal civil service HR. We can’t have any of the normal civil service bullshit on procurement. Exactly the same as with the vaccine task force. Everyone sits around the cabinet table, they all nod their heads.", "A week later, I call this guy, a former SAS boss and say, “So, how’s it going? Are you getting who you want and is everything working great?”", "He says, “No, it’s all the same shit show.”", "So I have to get all the people back in the same room with the country’s most senior official and say, who the fuck have we got to fire around here to make clear that these people doing testing don’t have to do all of your bullshit HR?", "That’s how extreme things have to be. It was only by doing that a second time and making clear that I would get the PM to actually just start firing senior people in the Cabinet office. It’s only then that the system will kind of part and go, “Okay, this element is allowed to.” But you imagine as soon as that countervailing force is removed, all the normal sea floods back.", "(00:29:10) - Civil service", "Dwarkesh Patel 00:29:10", "At some point we should talk about what it would actually take to change the equilibrium. It might be first useful to talk about how did this come to be? So presumably, at some point, this is not the way the government functioned. How did it end up like this? Is it just that the cludge builds up? Tell me about the actual mechanism of how does it end up this way?", "Dominic Cummings 00:29:29", "In Britain it started in the 1850s when people said, well, the old aristocratic system based on patronage is irrational. We’ve got a shift towards a much more meritocratic system. We should have officials that are appointed on merit. You essentially had a transition from a pre-1850s aristocratic system based on patronage, where there was all kinds of dodgy corruption in various ways, and favors done in various ways, but it also moved much faster and in all sorts of ways much more efficiently, particularly in a crisis. From that, you shifted towards what you say is like a modern system where you have permanent civil service, supposedly meritocratic.", "However, over time, you see that this supposedly meritocratic system ends up actually just being a closed caste. It’s not actually meritocratic. It’s a system that promotes practically 100% internally and is therefore, by definition, closed to approximately 100% of the world’s most talented people. Now, that’s defended on constitutional grounds now, as: this is the only civilized, sensible way in which a state can operate. But of course, it’s totally self serving. And as we saw in COVID, it actually just means — massive bureaucracy, very poor people in all kinds of critical positions, and a state that’s actually paralyzed when it comes to the crunch.", "Dwarkesh Patel 00:31:06", "It’s like a Japanese company where you get in after college and just stay until you die. I live in San Francisco, and if somebody works in a company for three years or five years, they’re, “Oh, my gosh, you’ve been here for a while. You’re like a long term employee.” So the idea that you would be in the same place for 40 years is really odd.", "Dominic Cummings 00:31:25", "Sorry to interrupt, but imagine as well what the promotion system is like and who ends up getting to the top of these systems. A lot of people say, “Oh, you’re so negative about the civil service. You’re all saying that everyone there is rubbish, and it’s not fair.” That’s not my view. In fact, if you look at the civil service, you actually see a lot of very able people, but most of them are young. What happens is the young, excellent people get weeded out by self-selection, largely because they go in idealistic, they’re there for a few years, but then they look at what the process is to be promoted, and they look at their bosses, and the best of them look at it and go, “I don’t want to be like that.” I don’t want to have to make those decisions. I don’t want to have to make those compromises. I don’t want the job like that, where it’s almost all bullshit. We can’t actually build anything.", "The most entrepreneurial, the kind of people who actually want to get on and do stuff now, leave and the most HR compliant, disastrous people to be in charge of supposedly fast moving agencies are the ones who are promoted to take over. And then that culture itself becomes highly self reinforcing. Once you get a whole cadre of leadership at the top that’s like that, it’s extremely difficult to break out of.", "It’s not like “Well, okay. IBM has gone wrong. But you still have a whole ecosystem of competition for IBM.” Like sure, IBM dies, but the American economy evolves and continues. In government systems that obviously can’t happen. So you have — IBM is dead, but it’s still actually in charge of everything.", "Dwarkesh Patel 00:33:03", "This reminds me of this wonderful biography of LBJ that Robert Caro wrote. In volume three , which is about him taking over the Senate, when he gets in the Senate the seniority system is known as a senility system because you have these people who are in charge of these important committees and they’re 80 years old or something and they’re in charge of ways and means. So when he becomes a majority leader, he realizes that he can’t have this if he wants to get things done. He realizes he needs the young senators in charge and so he engineers ways to get 40 year olds or 50 year olds in charge of these committees.", "Could a Prime Minister just retire a whole bunch of people and say we’re going to put 20 year olds on the path to management or path to advancement. Who would that be up to?", "Dominic Cummings 00:33:50", "In the British system only the PM can do that and we actually started to do that in summer 2020. After the first wave of COVID when the PM nearly dies in March, April, for about eight weeks or so, he was highly aggressive in saying, “This whole fucking disaster system has got to change. It nearly killed me. It’s killed tens of thousands of other people. We’ve got to blow the whole thing up and rebuild it.” So in summer, we actually got rid of roughly half a dozen of the senior permanent secretaries in charge of a lot of these departments.", "It was described by the insiders as a “rolling coup”. Fascism. The Vote Leave fascist entity is now essentially mounting a coup, Orban style, to dismantle all democratic. So that was the reaction of large parts of the old system to saying half a dozen duffers are going to be removed and we’re going to start promoting some younger, more dynamic people and that’s part of an overall shift that needs to happen. Clearly, a whole bunch of different parts of the system have failed. Agencies need to be shut, new things need to be created.", "So it is possible to do. But the tale of that also shows the problems, the reaction from the old system and the media is very severe. Nobody supported us at all, including all the Conservatives. Ironically, the people who were most in favor of it was actually the deep state itself and the most entrepreneurial elements of the deep state because they’d all seen this system completely implode. So the closer you were to Number 10 and the closer you were to actual power, the more you actually appreciated the full horror story of how the apex of the British state had completely imploded.", "A lot of them, particularly the younger ones, said, well, now it’s time you’ve had a once century pandemic, the biggest crisis the state Britain has faced since Hitler in 1945, and you’ve got Brexit done and you’ve got the Vote leave team in Number 10 who’ve said before they arrived here they’re going to change civil service. Okay, well, now, obviously this is all going to happen. And it should happen. Now we can argue about the details of it.", "Clearly, the personnel system needs opening up. The civil service is going to have to have the biggest shakeup that it’s had in a century or more. Fundamental problem was that the PM then thought that this is going to annoy so many people that I can’t face it. Also other dynamics, which I’m not sure how much you want to go into, but the fact that Starmer is just so obviously rubbish also meant that Boris and his wife thought, we don’t actually have to do very much. So it goes back to this question of are you actually trying to change a lot or not?", "Our response to summer 2020 was, we’ve said for years that a whole bunch of things are wrong and that the next big crisis will reveal it. Well, we’ve had the next big crisis and it has revealed it, and it’s obvious that we’re right. Now, we’ve obviously got to make all of these changes. And by the way, in the election six months ago, we also promised we were going to make all of these changes. Boris, Kerry and the majority of the Conservative Party’s attitude was far more “Keir Starmer’s rubbish. Changing anything big will just be very disruptive and create lots of enemies. Why are all of these people talking about disruption and firing people and rebuilding things? We just want to go back to normal. We hate COVID. We don’t want all these arguments about Brexit. We kind of want to go back to normal.”", "And for them, what normal is, is like the 90s. The kind of default mode for how people in politics think about normality is, can we go back to that lovely time between the fall of the wall in 1989 and the fall of the towers. That period is what people kind of psychologically gravitate back to.", "So in summer 2020, our attitude was that the state needs to be fundamentally reengineered but most of the systems’ was — We’re tired. We just want to go back to normal. We want to do what we always do, which is: chat with each other, give interviews to the media, and not change much. And given that Starmer is so obviously rubbish on the other side, we don’t actually have to change very much either.", "(00:38:27) - Opportunity wasted?", "Dwarkesh Patel 00:38:27", "On your blog, when you’ve written about all these other reformers throughout history, they were able to make their changes in a time of crisis. Lee Kuan Yew after the British leaves or Bismarck after Prussia loses the Napoleonic wars, In this case, you have COVID, you have Brexit, you have an 80 seat majority that you’ve engineered based on campaigning on these things. And you have you as chief advisor to the PM, somebody who understands these things, has thought a lot about them and, in fact, accepted the job on the condition that you would be able to do these things.", "You have all these things come together. Why was the opportunity wasted? You mentioned, for example, that Boris and Kerry were not interested in changing these things. Presumably they saw that the problem was there. I mean, if you’re a PM you must see the craziness. And also, wasn’t he a fan of Churchill and he wanted to build a legacy or something? I would think if I was PM considering that most PMs are going to be forgotten, you might as well just go forward and try to do something big, and try to be remembered.", "Why did they waste this opportunity? You have all these things that were going for you guys. If something was going to happen, that seemed like the perfect moment to make something happen.", "Dominic Cummings 00:39:53", "Part of it goes back to what I said before, that you have two fundamentally different attitudes towards the whole thing. Boris was prepared to be very aggressive and revolutionary in various ways in 2019, fundamentally because he thought it was necessary for him to survive as Prime Minister, not for the country, which he didn’t and doesn’t care about, but for himself.", "In January 2020, we were already arguing about what the fundamental direction is that we’re going to go in. As I said before, my view was: We’ve won. We won saying that we’re going to do all these things. We now have to actually do all of these things. And in order to do all of these object level things, it requires facing these multi-decade long term, some of them over a century, in terms of how the actual constitutional system works, how the civil service works and everything else, versus his view, which is that it all just sounds like a lot of hard work and making a lot of enemies and it doesn’t really seem necessary because the Labour Party is a joke and I want to be friends with London. I want to be friends with insiders. I want to have a nice time.", "The overwhelming majority of people in politics fundamentally prioritize social relations within the insider network. That is just what totally dominates their life day in, day out, year in, year out. Now in some ways it’s very odd. Because what’s the point of doing this for 30 years and then you kind of become a minister, but you’re not actually in charge of anything and you can’t actually do anything and then you’re spat out and you’re not even going to be even a footnote in history. What on earth is the point of it? It kind of seems completely crazy to me anyway.", "I think that misunderstands how they see it. For them, the theatre and appearing on TV is the critical part of it. It’s not — Have you actually done anything? And the permanent civil service are brilliant at manipulating the theater to keep the egos of the MPs satisfied. The most obvious way to think about it is the Cabinet. You have the most famous door in the world, Number 10. Every week these characters walk up the street. The cameras are all there. The cameras work. They click, they shout out questions. The ministers smile at the cameras and they think “We’re part of the insider gap.” This is the peak thing that everyone aims at whilst behind the door.", "The people actually running most of the government regard the Cabinet as just kind of bullshit theater. The meetings are literally scripted. People are given their talking points to read out by the officials. They read them out, they read those out and the conclusions are all pre-written. So the whole thing is a complete potemkin farce. But from the MP’s point of view, we’re on TV, we’re treated like we’re in charge. The fact that the people actually running the Ministry of Defense is not the Secretary of State. The fact that these ministers have fundamentally no power. The fact that in Number 10, officials who are like 28 years old working 5 meters away from the PM have usually far more power and authority over things than the ministers on TV do. But their names are never in the papers. No one knows who they are. This weird mismatch is never explored. It’s never covered.", "I had a funny conversation the other day. I was talking to one of the editors of one of the biggest newspapers in the country, and I said, “In the old days, there used to be a parliament page in newspapers that would report what happens in Parliament.” Because that’s the center of political activity and what happens in Parliament is important. “Why don’t you start a page where you actually report on the deep state?” and you say, “Oh, the Prime Minister’s secretary for Economic affairs has moved from this job to that job and it’s now been replaced by a 31 year old.” because those jobs actually are far more important.", "If you look at Brexit, the foreign secretary was almost completely irrelevant to all of the Brexit negotiations. The Prime Minister’s private secretary, a very able character, much more able than anyone in the current cabinet, but was totally unknown in the media. Name never in the media but far more important. So I said to this editor, “Why don’t you have a page and fix that?” And he said, “Well, everyone would think I’d lost my mind if I did that.” It would antagonize everyone. The MPs would go completely crazy because preserving this fiction, the potemkin charade of the MPs trotting up Downing street and being filmed going into the Cabinet which is taking decisions. As long as the potemkin carries on. Just don’t break the kayfabe, as they say in WWE.", "Dwarkesh Patel 00:45:02", "It sounds like you have two constitutional monarchies in Britain.", "Dominic Cummings 00:45:06", "In some ways, yes. Exactly.", "Dwarkesh Patel 00:45:08", "You say that you have these undersecretaries, who are these 31 year olds or 28 year olds and they’re the ones in charge. Doesn’t this contradict the idea that you have this gerontocracy of people who had to wait 40 years to get promoted to be in charge of these departments, this permanent civil service?", "Dominic Cummings 00:45:27", "So it’s an oddity in the system and it’s purely around the PM. The way the system works is in all the different departments, the gerontocracy is in charge and you have this appalling HR system that filters out almost all of the best people and promotes the most HR friendly types.", "But the PM’s office is, for historical reasons, slightly odd. In the PM’s office, you have roughly a dozen or so traditionally young, bright people who are put into that job to do specific things as part of training them for the future. It’s just an oddity of how the system works that you have this one set of people who are formally not senior, these people don’t have very senior roles in the hierarchy of the caste system, but they have a lot of kind of unofficial power and authority because they are talking to the PM every day and they’re part of the PM’s private office. So it’s a weird way in which the system works.", "Dwarkesh Patel 00:46:31", "By the way, do you want to mention the able 31-year old able person who helped with Brexit?", "Dominic Cummings 00:46:38", "Well, there’s a few. IThere’s a guy called Jono Evans who did Brexit negotiations. There is a brilliant young woman called Hannah who coordinated deep state stuff with Mi5 and things like that. Whenever there’s a terrorist attack, if a plane is going to fly into parliament and try to kill everyone and it’s got to be shot down, then she’s the cog in the wheel that actually organizes the system to get the right call. So there’s a lot of people like that who are extremely capable.", "The joke amongst the Vote Leave team used to be that if you swapped the private secretaries for the Cabinet in every single job, you would basically improve the caliber of the person by like 10X. However, lots of those people have now left.", "Dwarkesh Patel 00:47:23", "Because of self selection?", "Dominic Cummings 00:47:25", "Exactly.", "Dwarkesh Patel 00:47:25", "It’s interesting, as a foreigner, when I think about the UK and I’m just learning the basics of what the problems in the country are, the main thing is your productivity growth since 2005 has been abysmal. You guys are much poorer than you guys could otherwise be. This is the main thing that jumps out in my mind. Is there a team in Number 10? Is it anybody’s job who’s very senior to deal with this specific problem?", "Dominic Cummings 00:47:52", "Now, not really. Remember, as I said before, how small Number 10 is which in lots of ways is a good thing. Everything to do with economic policy growth, etc. is all, like, 99% of everything to do with all of that is elsewhere, mostly in the treasury, which is actually anti-growth in most important ways.", "Now, in 2020, we started a whole bunch of things to actually make it a core part and shake up the way that the system worked. We changed how the power relations between Number 10 and the treasury, also with the Cabinet Office, and we actually created task forces to start working on lots of these most critical problems. So the science, technology, whether it’s the startup ecosystem, procurement, the planning system, which is one of the really big things that destroys so much value and there’s just such incredible low hanging fruit.", "But basically, all of that was dismantled by Boris in 2021 after we left, because, for the reasons I already said, he came to the conclusion and most of the party agreed. Remember in 2020, when it came out that we were working on all of this stuff on growth and taxes and the planning system, The Daily Telegraph which is screaming about growth today was totally hostile. Most of the Tory MPs, totally hostile. Most of the Conservative Party were not agitating for an aggressive pro-growth agenda in 2020. They were actually hostile to it.", "(00:49:35) - Rishi Sunak and Number 10 vs 11", "Dwarkesh Patel 00:49:35", "Speaking of the treasury, you helped promote Sunak to Chancellor of the Exchequer. He was, from what I understand, relatively unknown before that.", "What did you notice about him that made you decide to do that? And maybe tell me more about why you think the treasury is anti growth.", "Dominic Cummings 00:49:55", "So Sunak, he’s obviously much brighter than the normal MP. He’d actually worked in functional private sector organizations before coming into government unlike most of the MPs. He was extremely hardworking, unlike most MPs and unlike most ministers. He actually dug through the detail and wanted to understand it and could understand it and did understand it. And he seemed much more interested than the normal MP in actually doing useful work, rather than running around the media and giving interviews and all the normal bullshit that they’re all obsessed with.", "So, from our point of view, we had a chancellor then at the time, who just couldn’t do the job at all. He couldn’t manage the Department. He didn’t have the self confidence to grip the officials in the treasury, so it seemed just like an obvious change. Sunak didn’t understand politics very well. There were a lot of things he didn’t understand but from our point of view, that didn’t really matter because we didn’t want him to do the politics side of it. We wanted someone in there who also wanted to work with Number 10, right? Which is a critical thing.", "One of the big structural problems in how the British state has worked is that there’s a kind of structural conflict between Number 10 and Number 11, the treasury. You normally have a Prime Minister in Number 10 that’s always wanting to spend money, and then you have a Number 11, which says we’ve got to control Number 10. But Number 11 has, in lots of ways, much more power over Whitehall than Number 10 does, because it legally signs off the cheques not Number 10 and that has lots of weird consequences.", "One of the most stupid things is that Number 11, the treasury, basically hides huge amounts of financial information from Number 10, which is obviously completely insane. How can you have a Prime Minister making judgments about all sorts of things if his own treasury is hiding data? Insane. Makes no sense whatsoever, but it’s completely normal in Britain.", "We said, “Fuck that. We’re going to have one team between Number 10 and Number 11.” The data will all be completely transparent. So exactly what the spreadsheets are in the Chancellor’s office are instantly completely accessible and open to the Prime Minister’s Office. Not something you’d think that ought to be very controversial. But it was highly controversial and unprecedented. And it changed literally within like 3 hours of me leaving. The Treasury went and stopped it straight away.", "So our thought was, if you’re actually going to get to grips with growth and productivity and all these things, you have to bring the Number 10 and 11 systems together. They have to have an integrated diagnosis of what the problems are and an integrated plan for what they’re going to do to change it and then actually one team of people. And then you stop Whitehall also picking you off and using the normal divisions.", "Dwarkesh Patel 00:52:48", "What do you think of Sunak now? You noticed he was intelligent, hard working and paid attention to details. Is that enough to make a good PM?", "Dominic Cummings 00:52:56", "He’s trying to make the old system work and he’s treating the old system with respect. It’s political disintegration. He doesn’t control the government. He doesn’t even control Number 10, the Cabinet Office. He has no political story. He has no message. He has no grip. And he’s just buffeted hither and thither by events in the same way that every PM has been since Thatcher.", "He had a fundamental choice, “Am I going to try and win?” Which involves challenging the way that the conventional system works, the conventional power system of Whitehall, and am I going to tell a story to the country which is convincing, or am I going to do what all the insiders are telling me to do and keep them happy? He chose to do B, with the inevitable consequence.", "Dwarkesh Patel 00:53:56", "I’m more curious about the broader lessons about personality that this reveals than him necessarily. In the American context for example, FDR clearly had control of the government, and as far as I understand, wasn’t somebody who was known for being intelligent or micromanaging details.", "Dominic Cummings 00:54:15", "But he had Harry Hopkins to do that for him.", "Dwarkesh Patel 00:54:16", "Right. So, what does that reveal about who makes a good PM? What are the characteristics needed for a person who can control the government?", "Dominic Cummings 00:54:25", "It’s impossible to be a good PM if you accept the way that the job currently works. It’s totally impossible because of what we’ve been talking about. You’re just buffeted by media events all day, and you don’t actually control the government. You can’t actually make anything happen. If you accept all the constraints and you accept the way that Whitehall works, it’s impossible for anybody. It doesn’t matter. You could put General Groves into that job. If he did it the way that Sunak’s doing it, General Groves would fail, too. You could put in FDR. You could put in Bismarck. The reality is nobody who actually gets a lot of things done historically operates the way that the current Prime Minister is forced to operate by the prevailing system. They are fundamentally incompatible things.", "(00:55:13) - Cyber, nuclear, bio risks", "Dwarkesh Patel 00:55:13", "This is the same government that has nukes, that deals with biosecurity, counterterrorism, and all kinds of other things that I’m sure I’m not even aware of.", "If the general government is this dysfunctional, are the people who are in charge of the nukes just as dysfunctional as those not having Google Docs and taking two years of litigation to do a two week project?", "Dominic Cummings 00:55:42", "It’s worse in a lot of ways. I saw recently that Peter Thiel said that we don’t see much about how the NSA works and his assumption is that in lots of ways, the NSA is worse managed than the DMV.", "There is, unfortunately, a lot of truth to that. But the position is mixed there in the same way it’s mixed generally. You can’t just say it’s all a shit show and the people are all rubbish. In the world of intelligence services and special forces and things like that, there are obviously a lot of incredibly able people and incredibly public spirited people, people who make huge sacrifices believing in what they’re doing.", "But it’s also simultaneously the case that a lot of the very worst, most appalling aspects of the bureaucracy happen in that world. And part of it, obviously, is that they can classify things and use classification to hide extraordinary public disasters.", "For example, the situation in terms of China’s infiltration of critical infrastructure and data systems in Britain is much, much worse than practically all MPs have any comprehension of. I’ve been in meetings where these things have been discussed and the now PM, then Chancellor, have sat literally with their mouths wide agog at the extraordinary tales that they’ve been told.", "“What the fuck? Are you kidding me?”", "The number of MPs who know that is probably like a handful at most, and it’s almost all completely hidden.", "Similarly, on the nuclear side, I spent a lot of time in 2020 in bunkers without phones, talking to officials about the state of the nuclear enterprise, weapons safety infrastructure. And the truth is absolutely horrific there as well. And it’s horrific because for year after year and administration after administration, they haven’t faced hard problems. They’ve punted off.", "So you have a combination of things. You have normal catastrophic procurement, which just means it’s totally normal for everything to be fucked up. Well, that also applies to the nuclear enterprise. You also then classify a lot of that so that it’s hidden. And that means it’s even easier for things to keep going for longer. It also means that the budget problems are hidden. A lot of what happens in terms of the public discussion about M.O.D budgets and the national accounts in general is massively distorted by the fact that in reality, you have literally tens of billions of pounds that are going to have to be spent on the nuclear weapons infrastructure that don’t appear in the official accounts at all. Simultaneously, you have parts of that infrastructure that just don’t work properly. Appalling safety that’s been neglected for year after year. So that’s cyber and nuclear.", "Speaking of Bio, I organized a meeting on biosecurity in summer 2020 as well, given that at the time there was COVID, and we were thinking, “Is it a lab leak? Is it not? What’s the truth about all of this?” So we organized a meeting and asked various questions. I didn’t say that one of the people that I actually took to the meeting was themselves a brilliant young scientist who’d been working in the States in the Janelia lab on neuroscience. And so all these people inside the system said, “Don’t worry about this, Dominic. This will never happen. This is impossible. This is science fiction. This is ten years away. Blah, blah, blah.” And everything was about trying to reassure me that I shouldn’t really worry about this. At the end of the meeting, I asked, “So James, what do you think about this?” Of course, these people have no idea who he was. And his answer was that pretty much everything that everyone has said is impossible or will take ten years, I have personally done in the lab in the last two or three years.", "Now, does that mean that the whole system for biosecurity is a disaster? No. Does it mean that everyone involved in it is a nightmare? No. There are obviously brilliant people everywhere. But across all of these things, there are budget horrors. There is a chronic inability to build long term. There are constant bureaucratic incentives to not face reality and the truth. And that’s the case across all of these secret systems.", "Dwarkesh Patel 01:00:50", "Why hasn’t there been a disaster? In many countries, the systems are as much of a shit show. And I mean, this is the West. Russia has nukes. Pakistan has nukes. And you can only imagine how fucked up their systems are. What has prevented it?", "You could say that the system is so fucked up that actually there was a lab leak and that was COVID. And so maybe there already has been a disaster. But what is the explanation for why other parts of the system haven’t crumbled in a disastrous way yet?", "Dominic Cummings 01:01:21", "If you look at just the public record on nuclear stuff, then I think that the only reasonable conclusion is that we’ve got extraordinarily lucky through the Cold War. Whether it’s the famous hydrogen bomb falling out of the plane and all of the safety devices apart from one failing. I mean, America nearly nuked itself, right? It was just completely by the grace of God that that didn’t go off.", "We’ve been very lucky so far, and there’s no reason to expect that luck to continue. And if you look at what’s happening in Ukraine now, then you can see that large parts of the system are very happy to dance right on the edge of the abyss.", "Dwarkesh Patel 01:02:05", "Is there more you can say on the Chinese infrastructure stuff? I don’t know if you can but I’m very curious.", "(01:02:04) - Intelligence & defense agencies", "Dominic Cummings 01:02:04", "If you imagine having a Sci-Fi novel and you said, what are a whole set of data systems that you really would not want the British state or the American state to be transferring data about? And then imagine that it turned out that these things are controlled by and owned by Chinese intelligence. That gives you a kind of nightmare picture. That is the reality. I can’t go into the specifics of it, because I think the specifics of it are illegal to discuss. But if you just wrote a story and imagined what are some of the most obvious ways in which things can go wrong?", "In fact, you probably wouldn’t write such a story because you’d think that’s completely implausible. There’s no way that that would happen. There’s no way that they would transfer data between A and B about this information and then find out later that that is actually controlled by China. That would be fucking mental.", "Dwarkesh Patel 01:03:05", "How about the intelligence agencies? MI5, MI6, and then in America, the CIA. How much situational awareness do they have about the most important things? Honestly, if I was like the PM, maybe in my daily briefing, the top thing I would want is the training loss on the newest AI models or things like that. But on the things that matter, how much situational awareness do the intelligence?", "Dominic Cummings 01:03:27", "That’s an interesting example. I think you have to draw a huge contrast between two things, capabilities and analysis. There are some extraordinary capabilities that deep state has in the Western world. If you want to dig into people’s phones, if you want to acquire secret information in various ways, then there are some extraordinary capabilities which people have and can be aimed. However, in Britain, they are generally not aimed nearly as aggressively as they ought to be. The process for prioritization of things is extraordinarily awful. Again, that’s another thing that if you actually wrote it out, no MPs really know anything about it, but if you looked at the system, people will just be completely appalled.", "Also, parenthetically, a lot of that kind of stuff has basically been shifted away from politicians over the last ten years. Nobody really has any visibility on large parts of that system at all now. When I dug into these things inside the Cabinet Office, I was essentially the only political person in a long time who’d actually even been discussing it with parts of the system so the officials themselves said to me.", "Because it’s been pulled into the Cabinet Office away from the Ministry of Defence, away from the Foreign Office and away from the Home Office, the three parts of the system that legally in the past had a lot of oversight over what was happening. Now a lot of it happens inside the Cabinet Office where there’s essentially zero political oversight of any kind. To a large extent, that’s very bad. It means that the bureaucracy metastasizes and a lot of decisions are made without any real challenge.", "So that’s bad. But going back to the main thing, amazing capabilities in various ways but badly focused, badly prioritized. But also the quality of analysis is much, much worse. In trying to analyze what people might do, how they’ll behave in a negotiation, will they start a war if they do? Those sort of questions. A lot of the work there is poor.", "Also I think it’s crucial to bear in mind that there just hasn’t been a focus on AI. You and I know lots of people who’ve been watching the world of AI now since at least 2014-ish when Deepmind made a big splash, or the couple of years before that. But even in 2019-20, I went into Number 10 wearing an OpenAI t-shirt on the first day to try and make a point to people that people should be paying attention to this. In 2019-20 that was seen as extremely eccentric, never mind five years earlier.", "And you’re saying, well, if you were president or Prime Minister, I’d be aiming these amazing capabilities and saying, “I assume that GCHQ and the NSA know who is running the equivalent of OpenAI in Beijing. Who is running the black projects for Chinese intelligence on training runs? Where is the black project data center and who the fuck is running it? And where is it? In which mountain and how many spy satellites are looking at it?”", "Dwarkesh Patel 01:07:04", "If the Prime Minister said, “I want to know how many H100s NvIdia will ship out next year. Go to TSMC, go to Taiwan, go wherever. Find out how many China is ordering. What university or state company in China is ordering and how much?”", "Does the capability exist to get that information? Or is it just like nobody cares about it?", "Dominic Cummings 01:07:26", "The main problem is no one cares about it. But if you turned MI6 and GCHQ onto a question like — What exactly is China doing with AI? Who are the key people? How can we honey trap them and blackmail them? Then obviously you could get an extraordinary amount of interesting information.", "But A, the system won’t do that by itself. B, the politicians won’t tell them to do it because the politicians aren’t really interested in it. C, there’s an incredible kind of risk aversion now in large parts of these systems, particularly after what happened with Iraq and then the legal investigations post Iraq and terrorism and whatnot. So there’s a huge kind of self censoring in large parts of the system and much less aggression than most people would assume.", "Dwarkesh Patel 01:08:17", "How about defense? In Ukraine, we’re seeing very asymmetric returns on different kinds of new weapons where you have cheap drones that are taking out expensive Russian tanks. How competent is defense generally, and how much are they adopting these new technologies?", "Dominic Cummings 01:08:32", "Generally in Britain, it’s completely shocking. Again, in 2019-20, we had arguments about this with Number 10 and the MoD. Again, it goes back to this generational thing. There’s a lot of younger people inside the MoD and obviously a lot of people in special forces who are looking at the sharp end of this and saying:", "Obviously, drones are going to completely change how land war operates.", "Obviously, we should be pushing this and exploring it ourselves.", "Obviously, we should be thinking about how to get large numbers of relatively cheap things leveraging commercial technology, and then think about how to add deep state capabilities on top of that, etc.", "But you also had a whole set of the senior people thinking, as they always do, “Fuck this. This sounds like it’s going to cannibalize our budgets. Oh, we’ve already got these drone programs.”", "Yeah, but a lot of your drone programs are complete dog shit. The drones fall out of the sky and they don’t work and they’re massively expensive and they should be closed down.", "There’s a drone now that Britain is trying to deploy in Ukraine called Watchkeeper. In private meetings in 2020, the MoD admitted the whole program was a complete disaster. They keep dropping out of the sky. They’re completely shit, and the whole project should be closed down.", "Of course, once we left and the system went back to normal, then they didn’t close it down, they haven’t replaced it, they’ve just thrown more money at it. And lo and behold, the things fall out the sky and fuck up.", "In 2019-20 a lot of these arguments were quite theoretical.There’s some things happening on the fringes. There’s the war in Armenia and whatnot, where you could kind of see some beginnings of people experimenting with some of these things. Literally, in 2019-20, people were saying to me, “Well, Dominic, our future fighter in 2040 or 2045 is still going to be manned, and there are all kinds of classified studies that show that drones are not going to be able to do this, that and the other.”", "“Really? Well, let’s open up these studies. Let’s see what OpenAI and DeepMind make of these so-called studies.” Of course, it all turned out to be total bullshit. Britain is still going ahead with that, though. And our current plan is to build another fighter in the same way, optionally manned, with BAE. Completely ludicrous. So there’s huge resistance inside the system to making that kind of shift for all of the normal reasons.", "Dwarkesh Patel 01:10:59", "The reason I’m especially curious about this is we’re seeing how this war in Ukraine is happening. I’m very curious about what this implies if there was a conflict in Taiwan.", "How easily could you take out an American or British aircraft carrier? Having seen the insight of how these things work, how confident are you in these war games and these projections? If there was an actual conflict in which Britain had to get involved, what do you think would happen?", "Dominic Cummings 01:11:23", "Well, in war games, if they’re dealing with any kind of serious peer opponent the British aircraft carriers immediately flee to the edge of the war game so that they don’t get sunk immediately. And the aircraft carriers are obviously a joke. I said that for the first time ten years ago. Five years ago, I had a lot of meetings about it in Number 10. Nothing persuaded me of anything other than that the whole thing was a sort of massive waste of money. And that’s just becoming more and more obvious.", "All over these systems in the west, people are now starting to face the music that a lot of these things that they’ve invested billions in are just totally vulnerable to asymmetric technology and asymmetric costs, where very cheap systems are going to be able to destroy platforms worth multiple billions. A lot of senior people who’ve talked a lot of shit about it obviously can’t admit any of these things publicly, but I think there is growing realization behind the scenes of what the reality is.", "But the situation in Ukraine should make us even more pessimistic in various ways. Because if you’d said in advance — Okay. Putin’s going to invade Ukraine. We are going to simultaneously encourage Ukraine to fight. Arm them and push Ukraine into a war of attrition. Simultaneously we’re going to ramp up, we’re going to ditch One China policy and aggressively ratchet up diplomacy against China on Taiwan and push the world’s biggest manufacturer into a closer relationship with Russia, whilst fighting a war of attrition against Russia, and then we are not going to actually have a defense industrial plan and a procurement change ourselves in Europe.", "“Even for you, Dominic, that would be just too ludicrous a scenario to happen. Nobody in their right minds would actually get themselves into that situation.”", "But that’s literally what we’ve done. We’ve pushed the world’s biggest manufacturer into alliance with Russia. We have escalated a war of attrition. And we have left a completely rotten dysfunctional procurement and manufacturing system for defense continue. The same set of people in charge of the decisions in the MoD and the Pentagon. Same set of bullshit pushed out about the advantages of aircraft carriers and blah, blah, blah the whole thing makes absolutely no sense at all", "Dwarkesh Patel 01:14:13", "What happens when we have to face the music and when does that happen?", "I don’t know. Maybe there’s a conflict and many personnel on the aircraft carrier drown. Or maybe not even with defense specifically. Just in general, all these different parts of the government are not only dysfunctional but are getting more and more dysfunctional.", "Is it just going to be a slow degradation or will there be another very clear inflection point like you had with COVID?", "Dominic Cummings 01:14:42", "My assumption is that there’ll just be repeats of the COVID experience and lots of them worse. There’s roughly some kind of financial crisis every decade or so. Another financial crisis could easily be worse than the last one. It wouldn’t surprise me if within the next two years there’s a worse one than the 2008 crash. Perfectly plausible. Lots of hedge fund people I talked to are kind of planning on that basis and think that something like that is quite likely.", "We’re going to face the music shortly in Russia because the Ukrainian offensive is not going to be the great success that we’ve all been told. Remember last Christmas, 9 months ago, the British and American media were full of: Russia’s about to run out of ammunition. The Ukrainians are going to launch a counter offensive. They’re going to sweep all before them. The idiot Russians are all going to collapse. Well, Chickens are coming home to roost on that now, that’s going to get worse.", "From China’s point of view it’s the most perfect situation imaginable because they can charge Russia inflated prices to sell them a bunch of stuff to blow up all of our shit turning Ukraine into rubble. You don’t have to be Sun Tzu to figure out that from China’s point of view this is like an absolute dream scenario and if America was insane enough to get into a war over Taiwan then obviously a lot of this would be exposed.", "There’s a book by a guy who now works at Anduril called the Kill Chain . The truth is that a lot of classified Pentagon stuff has said this. A lot of people involved in the system know that if America has a war over Taiwan then it’s going to be catastrophic for America. Leaving aside the obvious risks of escalation to nuclear conflict like. Just leaving that aside, just on a conventional basis, it will be a catastrophe for America.", "Dwarkesh Patel 01:16:35", "In a hundred years time, I guess it’ll be 2123, what about what the government does now will matter? When you read back in history about 1923, what has mattered most? What should be the big priorities?", "Dominic Cummings 01:16:50", "If you run that experiment on ourselves and you look back, what is it that we care most about now? You care about people who come up with new ideas, which is not something the governments do. People are interested in what Nietzsche said, but the details of what the British government did in the 1870s and 1880s is almost totally forgotten and not really relevant.", "The thing, I think, which people obviously pay most attention to is what contributes to war and what contributes to revolution, collapse, regime changes of various kinds? And if you think about what decisions we make, if there’s anyone still around in 100 years to look at it, the big things that people will look at are things related to that. What were the big things they did that they didn’t understand at the time, but which clearly affected the next war, the next Revolution’s collapse? Does the Euro collapse? Is there a revival of fascism and communism in various forms in Europe? Is there a war between America and China over Taiwan or something? Those would be the big things. And if those don’t happen, then we’ll be largely forgotten. How much do people pay attention to the government in Britain in 1890-1895 now?", "Dwarkesh Patel 01:18:10", "Conflict matters a lot. Technology matters. And they matter in ways that are very contingent and hard to predict and very nonlinear.", "Dominic Cummings 01:18:19", "And ideas matter most of all. But almost none of that really comes from the government, right? It comes, almost by definition, from people who are fringe at the time.", "Dwarkesh Patel 01:18:29", "Why does democracy work? Or does it work? And why do some democracies work better than other democracies?", "Dominic Cummings 01:18:36", "Britain’s been a “democracy” for 80 to 100 years depending on how you define it. There’s been a particular kind of regime in most of Europe post 1945 of a kind of pseudo American empire which has entrenched a certain kind of democracy in various countries. But that’s a very, very small change in the sweep of history.", "It’s like asking questions about 250 BC in 100 BC about the Roman Republic. Well, the Roman Republic has lasted for a century so far. You certainly can’t say now that democracy has proved to work. I would say the one thing you see in history is regimes are constantly changing and everyone thinks in their own time that we’re not going to change. What we’ve got is going to persist. But it’s always wrong. The most that things persist is a few generations, and then there’s always chaos and then there’s always a change.", "Dwarkesh Patel 01:19:47", "What is the next thing going to look like? And what should it look like? And the reason I asked that question is, one of the justifications of democracy is it’s nonviolent error correction and you can fix these mistakes. But on this account of what’s going on in the government now, not only are errors not being fixed, they’re accumulating and constantly increasing. So then is there a system in which these errors are constantly pruned away? What would that system look like?", "Dominic Cummings 01:20:16", "If you looked at what’s happened in the West over the last few decades and you brought back some of the people from classical Athens, it would seem quite familiar to them in certain ways. Some of the Roman aristocrats would say that democracy is having its predictable effects. You have demagogues in charge. You have a constant demand for more handouts, which is gradually bankrupting the country. You have moral and spiritual decay. You have a kind of collapse of internal cohesion. Pretty much what we’d expect historically.", "I think the big inflection point was the mid 19th century. Before 1848, you had a bunch of conservatives like Metternich , who had watched the French Revolution, who had watched the guillotine and the terror and bloodshed, and they said, we’ve got to try and stop this. And they were actually conservative and they really meant it and they were really trying to turn the clock back.", "After 1848, those people are kind of like either fled, retired, feel themselves doomed, and the old aristocrats who call themselves conservatives basically thought — well, how do we use democracy or universal male suffrage anyway, to try and smash the liberals up, who were their real enemy at the time. Which is what Bismarck and Napoleon II both tried to do.", "But if you went back and looked at those people, brought those people back to life and got them looking at our current situation, I think they’d look at the first half of the 20th century and say, yes, things proceeded pretty much as we said they would. You allowed Christianity to collapse, you allowed the Socialists and the Democrats to win. And unsurprisingly, you had the torture chambers of the Gestapo, NSS, and the NKVD.", "Dwarkesh Patel 01:22:29", "How does that chain necessarily imply the catastrophes of …?", "Dominic Cummings 01:22:33", "I think old school aristocrats would assume that if you go down the path of democracy, then you will pretty rapidly end up with the Gestapo and the NKVD. And that is in fact what happened.", "Dwarkesh Patel 01:22:44", "But how come? Why is that the implication of the end state of democracy?", "Dominic Cummings 01:22:49", "That it’ll implode? I mean, look at what happened in Athens. They tried and it only really worked as long as you had the old aristocracy managing it. I’m not saying this is necessarily what I believe, all I’m saying is that a reasonable perspective is that the old system worked because the old aristocracy managed it. Pericles being the obvious example from the old Alcmaeonidae family.", "Once they lost their grip and demagogues like Cleon took over, chaos, demagoguery and endless demands on the treasury, the people voting handouts for themselves. It collapses and the cycle repeats. Similar sort of arguments about the Roman Republic. So I think they wouldn’t be surprised at the collapse post war, post 1918 in Western Europe.", "(01:23:32) - Bismarck & Lee Kuan Yew", "Dwarkesh Patel 01:23:32", "Maybe an interesting question is then if you look at Bismarck, who you’ve done a huge dive on , in that case, you have a system where after him, the system he set up couldn’t survive and yada, yada, yada, World War I. Whereas Lee Kuan Yew, another person you’ve studied deeply, he had meticulously set up a system so that even after he gave up the leadership, it would be a competent, successful government. And I guess we’ll see how it turns out. But Singapore seems to be running fine.", "They both are these strong figures in the context of a democracy. What is the correct model here? And then how does this solve the succession problem?", "Dominic Cummings 01:24:22", "They are two very, extremely different examples. With Bismarck, you have a system where there’s universal male suffrage, but a kind of gerrymandered constitution written by him personally on a little Baltic island, designed essentially to make him the fulcrum of all of the power, but in a sort of lots of ways, hidden away. You certainly wouldn’t call it a democracy in anything like the current situation. All the deep state stuff, to use the modern terminology, the army, the intelligence service and everything were obviously completely excluded from that whole power structure and completely in the grip of the Prussian king.", "Lee Kuan Yew has created a system where I think the most important thing he’s created is a certain kind of culture amongst the leadership in Singapore, a genuinely meritocratic culture, a culture where people are actually trying to solve the problems. Not like Washington or London now. And where there’s a kind of moral reinforcement for that culture, without which, everything else doesn’t fly. Therefore, it all becomes about: Generation by generation. Is that moral leadership? Does that moral leadership maintain or does it dissolve? Because it’s very easy for that sort of thing to go.", "His son’s in charge now and I haven’t studied it very much, but he seems to be preserving the fundamental culture. The people running the different agencies, their job is to actually run the agencies properly. It doesn’t seem to be corrupted. But it’s very easy for that to change, right? Say he gets shot tomorrow or has a heart attack tomorrow, and someone else takes over very quickly. The signals could go out that actually, your job is not to do pandemic preparation properly. Here’s how people would be promoted from now on. You’d always expect that as the default. Because that’s just what entropy does.", "Dwarkesh Patel 01:26:20", "It brings us back to what is a system in which not only is a leader somebody who understands how to take control of the government and does it effectively, but also reliably hands off power to people?", "Dominic Cummings 01:26:34", "There is no such system. That’s why history is what it is. The Roman Republic lasted for a few centuries much longer than America’s lasted so far, and many times longer than democracy in Western Europe so far has been operating. But everything has its time of growth and decay, and it depends on the culture of the elites. It depends on the ideas that they believe in. It depends on. To what extent are they public spirited?", "When I was over in Silicon Valley, I quoted to some of the guys there a famous letter that Cicero wrote where he said, everything basically is going to the dogs, because the leading people in the Republic are spending all their time in their wonderful houses, tending to their fish ponds. And if this carries on, then the Republic is clearly doomed. And he was right.", "It’s extremely hard to turn those things around.There’s no reliable way to do it. How do you turn around an elite culture? What tends to happen is it implodes slowly, then hits a crisis and is blown up, and then something new comes along amid bloodshed and disaster. How often do you see a kind of deliberate, nonviolent, non-crisis led internal rejuvenation?", "Dwarkesh Patel 01:27:57", "Will Durant has this quote that Rome fell for longer than most empires have lasted, but we have these crises constantly. One could have imagined COVID could have caused this. I guess a rejuvenation will have to happen with a different generation.", "Since you’ve brought up Silicon Valley. I do want to ask about something.", "In 2021 you had a blog post about how in America, a Silicon Valley led or funded campaign in 2024 could have a big impact and get rid of the Trump v. Biden version 2 disaster. What happened? It seems like you now more focus your efforts back to the UK. What came of that attempt to look into what you could make happen in the US?", "Dominic Cummings 01:28:41", "To what extent are things just going to play out the way that they normally do historically. i.e. Slow rot. Elite blindness. Sudden crisis. Collapse. Bloodshed. Chaos. That’s normal. That’s the baseline expectation for our own current situation.", "To what extent can one try to preempt that? And the only way really to try and preempt it is that you have to change the nature of the elites. And the obvious elite in America to look at is the elite who are most competent at building. So my point was, if you leave the old system in Washington, New York to itself, then it’s going to produce Trump vs Biden 2 probably. Obviously people could have heart attacks or whatever, but left to its own devices, that’s where the system is heading towards.", "The obvious set of people that could put the country on a different track are the Silicon Valley builders. You should try. But that leads to a basic paradox, which is that the more and more mad the old system of politics and the old media get, the stronger are the cultural, financial, personal disincentives to competent people getting involved. In fact, the opposite is happening. The madder the system gets, the more the competent people retreat to their fish ponds. They look up, they build walled gardens, and they cultivate their fish ponds, they try and build their own companies where they can do things of value. They spend their time on research, they build hedge funds, they do things which are trying to insulate themselves and other people that they care about from the chaos.", "A lot of them said to me personally, if I tried to do anything about it, then my investors would go mad, my employees would go mad, my family would think I’d lost my mind. I’d have demonstrators outside the house. Who the fuck needs that? The current system is working as intended. It’s closed. The old parties and the old media are driving themselves and everyone more and more mad. But the people who could change it are very highly disincentivized from getting involved.", "Look at what’s happened to Elon. Elon, generally regarded by the old system as a hero, even up to two or three years ago, as soon as he says — Shouldn’t we take the First Amendment seriously? and by the way, this Ukrainian war seems insane. He’s called a supervillain.", "I think a perfect way to understand the old political system in America and Europe is that across the political world, academia, in the media, you can see on Twitter, they’re all very happy to give their personal takes on Elon Musk as manager and his startup abilities. And they take each other’s take on that more seriously than they take the historical evidence of Elon’s abilities. Now, for them, they all think that they’re all sensible and they’re all rational. For others of us looking at it, we think that that’s just a wonderful sign of the old system’s complete madness and inability to face the most obvious things. Why do junior academics and political journalists, who can’t even cheat their own expenses competently, think that they’re competent to make judgments about Elon and his management of SpaceX? It’s completely crackers. But they don’t think it’s crackers. They think it’s perfectly reasonable.", "Dwarkesh Patel 01:32:29", "Let’s say if you did something in the UK, US, wherever, and suppose it succeeds, and it’s not somebody like Boris, where it’s somebody you have qualms about, but better you than somebody else. Let’s say it’s a handpicked person, you think they’re super competent, and you’re now chief advisor again.", "Winning the election sounds like the easy bit, given the challenges you’ve talked about in terms of actually taking control of the government. What changes? Because last time, 80 seat majority, COVID crisis, huge mandate on both counts. Yet still, things couldn’t happen. Planning reform didn’t happen.", "What would actually take on the next iteration for you to actually take control of the government? If you have to do a natural regime change, as you’ve called it?", "Dominic Cummings 01:33:17", "Take me out of the equation. Let’s think about the general thing.", "Winning an election, in a lot of ways, is the easy part because in a lot of ways, the madness of the old system actually makes it easier and easier to win an election. Because the more divorced from reality that the old system gets, the simpler it is to win an election just by actually focusing on the voters. Which sounds completely crazy, but the old parties can’t do that. Why did we win the referendum in 2016? Because we focused on the voters, and the remain campaign didn’t. Why did we manage to prevail in 2019 when everyone thought what we were doing was completely mad? Because we focused on the voters.", "But remember, everyone thought everything we did at every stage was completely mad and stupid and wouldn’t work. When you do things that are actually focused on what voters want, you seem insane to the old system.", "Dwarkesh Patel 01:34:08", "Wouldn’t you just expect some sort of basic evolutionary or selection argument for winning elections. Such that even if one politician just actually strategically tries to win elections, wouldn’t the system at least select for that? Because that’s presumably who’s getting elected.", "Dominic Cummings 01:34:24", "No, it doesn’t select for that at all. I mean, the whole history of Britain since 2016 is a perfect example of it. We won the campaign. Did the establishment go, “Oh, how did we lose that? We controlled the question. We had all the money. We controlled all these institutions. We controlled everything with power in the country. And then this startup hobbled with all sorts of problems, somehow won. Let’s investigate.”", "No, of course not. They created a whole conspiracy about Putin, Trump, Facebook, everything else as an excuse not to face the reality.", "Then in 2019, when we came along and we just did very, very obvious things, actually focused on voters outside London, they just made all the same mistakes again and blew themselves up again in 2019.", "Look at the current situation now between Sunak and Starmer. And even Trump and Biden. All of them, repeatedly, every day, do things that make no sense. If your assumption is that they’re optimizing for winning the election, why would Rishi Sunak say: judge me on whether or not I stop the boats? Then when he’s told your whole plan and your legislation to stop the boats will not work and legally can’t work because of the European Convention’s Human rights. He just ignores it and does it anyway. It totally makes no sense if you think they’re optimizing for winning an election.", "And if you spent three days doing market research in America, you’d know that Donald Trump’s message should be about the economy and what he should be saying is, this is what I did on the economy last time. This is what I’ll do on the economy next time. The reason why all these legal cases are being brought against me is because people don’t want me to do this with the economy. That’s what his message should be.", "What should his message definitely not be? Arguing about the 2020 election and who won. Yet what does he do? Keeps arguing about that. All you have to do is most simple market research and you can say they are objectively not doing what is rational if you assume that they are focused and optimizing for winning an election.", "Dwarkesh Patel 01:36:31", "Who would you say was the last US president who was in charge of the government? Who was the last UK Prime Minister who was in charge of the government?", "Dominic Cummings 01:36:47", "Probably FDR, you’d say.", "Dwarkesh Patel 01:36:49", "Not even LBJ?", "Dominic Cummings 01:36:50", "I don’t know enough about LBJ to say. I mean, my impression is that for all his amazing skills, he didn’t manage to grip the Pentagon and the intelligence services and had all kinds of problems dealing with them. I mean, obviously, in some ways, LBJ clearly had an unprecedented grip, at least since 1945, over parts of the system for sure.", "Dwarkesh Patel 01:37:14", "But his was a legislative talent, not an executive. And then who in the UK would you say?", "Dominic Cummings 01:37:19", "I’d say Churchill.", "Dwarkesh Patel 01:37:20", "Probably not even not Thatcher?", "Dominic Cummings 01:37:22", "No, I mean, Thatcher just objectively very clearly didn’t if you look at just the memoirs of people that worked with her closely. The single biggest mistake Thatcher made was that she never actually got to grips with the permanent state, the permanent civil service. She talked about it a bit. She did some kind of sporadic occasional firings and a bit of purging here and there, but it was very less than half hearted, ineffectual, and contributed to her downfall.", "(01:37:46) - How to fix the government?", "Dwarkesh Patel 01:37:46", "The problems you’ve highlighted have been since World War II. Everything from the media to the legal challenges to the civil service itself pushing back. We went through the entire list.", "How is this actually going to work? The person is in charge. What happens next?", "Dominic Cummings 01:38:04", "It can’t be just one person. There needs to be some subset of the elite or part of the elite currently not really involved with politics, that decides to get involved with politics, that actually decides that its fundamental goal is to solve a set of problems. Its fundamental goal is not maneuvering the social hierarchy of existing political and media elites. Like that’s the fundamental question. At the moment, approximately everybody in the system is optimizing for their position in the social hierarchy of insiders. That means that no one wants to face reality. No one wants to tell uncomfortable truths. By definition, you can’t fix any of these agencies because that means alienating people immediately.", "All of that can only change if a set of people say — “Here’s how we define the problem and solving this is actually what we’re here for, not staying friends with the old elites.”", "And that’s why this thing very rarely happens. It’s why it normally, historically, only happens after a disaster.", "Dwarkesh Patel 01:39:19", "How big does that group need to be? So it’s not just the PM. Is it the entire cabinet? Is it the undersecretaries? How big does that minimum group need to be to take charge?", "Dominic Cummings 01:39:29", "Everything depends on particular historical circumstances. I mean, you could do it with a relatively small number of people. If you had the right PM, you could start off with ten people. But it has to be people that have a common set of what their goals are and who are very able and who are then able to build a network beyond that. No kind of coup or no kind of regime change ever happens if the small group of people at the start stay small. By definition, you have to convert people forcibly or through persuasion or whatever.", "Dwarkesh Patel 01:40:07", "What is a new equilibrium for these existing institutions, whether it’s the civil service or the media? Suppose you successfully reverse a lot of these trends and there’s a bunch of successes that ramp up. Do they keep opposing you or do they support the new regime? What is the new equilibrium for the blob?", "Dominic Cummings 01:40:27", "Well, by definition, I think all bureaucracies end up operating for themselves. So you should assume that if you do a kind of like a standard approach to this, and you say — We don’t like the existing bureaucracies, we’re going to take over, we’re going to scrap them and replace them. Then obviously within not very long, those things are going to operate pretty similarly to what the old things did.", "The only way, I think, that there’s any chance of long term changes, you have to build into the system institutions and public acceptance and elite acceptance of a kind of constant reinvention and rejuvenation and closing and refounding of things. It’s the only conceivable way that over a time period like 100 years.", "If you go back to your Singapore example, so far it’s been successful in preserving that culture. It’s like: here’s what our real goals are and the different elements of the government system have to change to meet those goals. And they constantly are adapting and we try and face our failures honestly. Like their report on what they got wrong on COVID which is already published and actually accepting the errors and trying to fix them. Unlike Britain, where we’re just paying lawyers hundreds of millions of quid to spend years arguing about it.", "The only way that you could even imagine something happening long term is if you not just replace the existing elements with new ones, but you also create institutional mechanisms whereby the elites and the public accept these things need constant rejuvenation. And that’s a mix of basically sunsetting, closing and rebuilding.", "Dwarkesh Patel 01:42:29", "Let’s get more specific. Let’s talk about the Ministry of Defense. Would you just lay off a huge chunk of it? And there’s specific projects you’ve talked about that you’d sunset. But what would it look like to refound the Ministry of Defense, for example, or any specific department?", "Dominic Cummings 01:42:45", "So what you wouldn’t do, and what’s doomed with that, same as with the Pentagon, is going into those buildings and saying, “Right, let’s sit down and talk about a reform program for your procurement system.” It’s just never going to happen.", "What you have to do is you have to set up a parallel thing that says here is the new procurement system that’s going to deal with the following things, and then you have to close the existing procurement system and get rid of 95% to 99% of the people involved with it. It’s the only thing that has any chance of success.", "Dwarkesh Patel 01:43:21", "What percentage of the people in the current Ministry of Defense would keep their jobs? How bigger of a change would it require to have something?", "Dominic Cummings 01:43:30", "On the order of like 90% or so.", "Dwarkesh Patel 01:43:32", "Wow. Okay.", "Dominic Cummings 01:43:32", "But it obviously depends on department by department and different units. If you’re looking at, say, across government, the place where you could be and should be most aggressive is in everything to do with communications. Everyone has massive communications teams. Everything is worse as a consequence. Lots of those should be culled by like 95% to 99%.", "I actually did this in the Department of Education. When we went into the Department of Education, the communications team was over 250. When we left, it was less than 50. Everything had improved, and if it would have been less than ten, it would have improved again.", "Dwarkesh Patel 01:44:06", "Well, the examples you gave of people like Groves and everything, this is a very particular time in history, wartime and also depression. So there’s not many private sector alternatives.", "How would you have a situation today where you’re able to compete for that talent, not just for these really ambitious people in Number 10, but across the civil service? The best and brightest are going there. And until that happens, is there a way to get the best possible work out of the people who are in the civil service now, who are maybe not necessarily like the top of the class, but certain things need to be done.", "Dominic Cummings 01:44:44", "Very able people are extremely interested in politics and extremely interested in government. I don’t think the problem is in trying to make them interested in it. The fundamental problem is that they don’t want to get involved because rightly they say, “I’ll just be in endless stupid meetings. I can’t actually do anything.”", "If you asked them “Would you like to run MI6?” A lot of them would think that would be a great job. Running Mi6 would be a great job and more interesting even than my own extremely interesting job in Silicon Valley. But would I actually be running Mi Six or would I just be sitting in a whole load of stupid meetings? And similarly, for whether it’s health reform or anything else, could you get some of these very able people to do it? Yes, you could.", "And we saw in summer 2020, when the whole system opened up amid the chaos, there was actually incredible interest from all over of very able people wanting to come in and get involved to help, which I think is a sign of what I’m talking about. It’s not just hope, like, we could actually see it happening in 2020.", "But what also happened in 2020 is another sign. Once the system started to close back in, in quarter 3, those people went back to their old jobs. If you’re sitting on some very successful thing, making a lot of money in your walled garden with your fish ponds, you’ll drop it if you can actually have a real impact. But if you can’t have a real impact, then you’re not there to just send stupid emails for trying to get authorizations for two years. And that’s the fundamental thing.", "Dwarkesh Patel 01:46:24", "So in the UK you’ll have labor for the next four or five years. So if you have a new startup party and let’s say it takes control after that, and you have another Brexit like political machine. Is that too late for some of the big crises you’re worried about? Whether there’s going to be a financial crisis. Or one big thing that people who have been listening to my podcast, my previous guest Dario Amodei , the CEO of Anthropic said that something that could emulate a human was two to three years. So you have a government in five years that starts working on it in the correct way. Is that too long to solve the crises that are coming up?", "Dominic Cummings 01:47:06", "Could well be. Maybe. As you see from the news, there’s all sorts of things that could happen. I mean, the people in charge could easily create a nuclear war in the next twelve months, right? Their behavior almost only makes sense if they actually wanted to do that. [Laughter]", "Pessimism of the intellect, optimism of the will. My overall assumption is failure. That the Western world will go the same way that everyone else goes in history. In failure, collapse, and bloodshed. It’s probably what will happen, but still, you got to try.", "Dwarkesh Patel 01:47:37", "What will you personally be doing in these four to five years before this thing can ramp up? More reading and thinking?", "Dominic Cummings 01:47:47", "There are things in education I’m interested in doing in terms of building things outside the government system that don’t rely on the government. I’m thinking about the idea of creating a new political party. Someone I think is going to do it at some point. My thought has been that the normal path of history is collapse and then people create something new out of that. It’s extremely hard to avert crises, but if we’re going to, then there needs to be some new political force here. There needs to be a new political force in America that can replace existing elites. The fact that they’re all getting so old and that they’re so visibly failing gives some hope that it could happen.", "But it comes back to this Catch-22 problem. The worse the system gets, the harder it’s becoming to get able people to get involved with politics, not easier. And the whole idea of a new party and the whole idea of doing politics differently, everything fundamentally rests on whether or not you can get able people to step forward and do it. And history suggests that’s phenomenally hard, that normally that only happens after the crisis comes.", "Dwarkesh Patel 01:49:14", "Speaking of which, how would you get these political talents to hop onto your party’s label? And it’s sort of a Catch-22 because you probably need voters to convince the political talent to join your party but you need the political talent to track the voters.", "Dominic Cummings 01:49:33", "Exactly.", "Dwarkesh Patel 01:49:34", "What is the trick to getting these people to ditch the existing system?", "Dominic Cummings 01:49:38", "Well, it’s extremely hard in the first past the post system. When I did some research in America recently, I asked about the third party thing, and you see this problem come straight through in focus groups. People say straight away, I hate both the old parties. They’re both obviously rotten. They both obviously can’t do a good job. We obviously need something new. America’s got so many wonderful people. Why the fuck have we got all of these ancient, old, useless people in charge of absolutely everything? It’s all a nightmare.", "“All right, okay. So imagine the following party comes along, and it’s got the following people, and it says the following things. What do you think?”", "“Oh my God. That would be so great, but I’m not sure if I would vote for it.”", "“Why not?”", "“Well, because my vote might be wasted, and it might let the crazy people from the other side in.”", "So even if you imagine this wonderful new entity run by the best and brightest in America with an agenda you actually agree with, even then you might not vote for it because you fear that it might let you. You see, there’s a fundamental structural problem with the first past the post system. Now, I’m in favor of first past the post system. I’m not suggesting getting rid of it, but it does make it structurally very hard to replace one of the old entities.", "I mean, it’s why the whole Brexit thing was such an incredible missed opportunity. Winning the referendum. The chaos that that created. The collapse of both the old parties in 2019. One of the parties coming to us, begging for us to save them. Us going into Number 10. Having the majority. We kind of maneuvered the system into such a place that we could actually then transform one of the old parties into the new thing that we wanted to build. That’s very hard, but it’s easier than creating something new and then taking over in a first past the post system.", "Dwarkesh Patel 01:51:30", "On Brexit. If it’s possible to build a new political machine that’s relatively cheap to do and is much more effective than the current political parties at winning elections, what is it that you would do? Whether it’s political modeling, whether it’s the use of new technologies, or I don’t know, testing things out with social media. What are the tricks that you would use to win?", "Dominic Cummings 01:52:05", "Obviously all the AI stuff means that there’s huge opportunities there. But I would stress Colonel Boyd: People, ideas, machines, in that order. Why was it that we built tools to help win an election that no one else built? The primary question is that we really wanted to win, and we really were focused on the voters, and that’s why we actually built tools. The old parties aren’t focused on the voters and aren’t really interested in what they’ve got to say. Therefore, that’s just not what they are thinking about every day. So I don’t think the most important question is the technical side. I think it’s — can you create something with very able people with Jeff Bezos like obsession on the customer, that really, really wants to obsess on the voter. If you can build that culture, then that culture will then build the technology and will exploit the tools 100 times better than the old parties will. But it’s that that comes first, not the technology.", "Dwarkesh Patel 01:53:09", "With Brexit. Afterwards on your blog, you’re writing about why you did Brexit and how the government is dysfunctional in these ways and it’s similar to the things we’re talking about. We’ve got these crises that can happen in minutes.", "How similar, in retrospect, do you think that is to the message that Vote Leave, the organization you chaired, advertises the reason for Brexit? One way you could say is like, “Oh, it’s customer obsession. What is the customer here interested in?”", "Is it important for the two to be the same? The reason you wanted to do Brexit versus the reason that you might advertise Brexit should be done.", "Dominic Cummings 01:53:39", "Any kind of mass communication always involves huge simplification and huge focus. It’s just an unavoidable aspect of things. Forget me and Brexit, anybody doing politics, has to deal with it. Same with Bezos and Amazon. Yeah, it’s customer obsessed but there’s also a whole bunch of stuff that he’s not talking to the customers about.", "So I think that’s just intrinsic to politics. The vast majority of what you’re thinking about, what you’re talking about are not going to be the focus of public communications.", "Dwarkesh Patel 01:54:12", "How do you think about the bargain with voters? Is it like I have these long term priorities and the voters want this thing, which I don’t really care about, but it’s almost like a trade where — Help me achieve these long term priorities or give me the power to achieve these long term priorities and these other sort of, like, smaller things, which I don’t think are the most important thing, will also accomplish those.", "Is that the way you think about it?", "Dominic Cummings 01:54:35", "To some extent, that’s always what politics is. Because as I said before, you could only ever talk about a relatively small fraction of all the things that you think are important.", "Take civil service reform. Civil service reform isn’t even a subject of interest to the insiders who have to live with it all day. I actually have more conversations with people outside politics about that than I do inside politics. Seems very weird but the people inside the system just so completely accept the existing system that they don’t even really talk about that.", "So is a new party going to spend a huge amount of time discussing the intricacies of HR for how some kind of new civil service should work? How to create red teams inside Whitehall? How to create a startup for a drone army to replace parts of the MoD? No, because most people are not interested in all of that. But neither are the insiders. Neither are the people. The thousand most powerful people in the country don’t think about that at all.", "Dwarkesh Patel 01:55:39", "While we’re talking about talent, you obviously put out that famous post saying we want weirdos and misfits to come into government.", "There’s lots of weirdos and misfits out there. What is a specific kind that is really effective in government?", "Dominic Cummings 01:55:52", "There are lots of different kinds and It depends very much on what you’re doing. Some of the people we brought in were very technical. For example, something that you’re interested in in the whole AI world, there were some very technical AI people. It massively depends on the particular roles.", "There’s another set of talent which is rare inside the government, which is just a kind of entrepreneurial project management type E, actually getting things done character. No surprise that we ended up in a crisis having to use British special forces for a lot of those kinds of things because they’re pretty much one of the few elements of the British state that is still extremely able and has that kind of operational, punchy, can get things done fast vibe. So it’s just highly dependent.", "(01:56:43) - Taiwan", "Dwarkesh Patel 01:56:43", "Is a war over Taiwan with China worth it for the US and the UK and the West together?", "Dominic Cummings 01:56:48", "Obviously not. It’s a completely insane idea. It’s an island that you can see from the Chinese coast. It’s full of Chinese people. The people in the army are cousins of the people in the opposite army. For decades, one China policy was accepted by Democrats, Republicans across the Western world. It’s going to be unified, but it should be done peacefully, not violently. That was the right approach.", "It’s completely crazy for us to be ditching that approach and to be willfully antagonizing China over a place filled with Chinese. It’s fucking stupid.", "Dwarkesh Patel 01:57:26", "But it has worked in the past where the Soviets had the huge Red army in Europe and with nuclear deterrence, we were able to keep them from conquering the rest of Europe. Why not just have this sort of thing again?", "Dominic Cummings 01:57:39", "Us threatening the Soviets to say you shouldn’t invade Germany and France and Britain is a parallel to say that Taiwan shouldn’t be part of China? That’s a really bad analogy.", "Dwarkesh Patel 01:57:51", "What’s wrong with a strategic ambiguity where we haven’t committed to defending Taiwan, but just making China think twice about invading Taiwan?", "Dominic Cummings 01:58:00", "Britain gave strategic ambiguity a great run out in summer 1914. How did that work out? So you want to repeat that with nuclear weapons where the crisis can kill a 1000 times more people and happen a hundred times faster than what happened in summer 1914 over an island filled with Chinese people.", "Also, it’s just not credible, right? Do you think the Chinese are sitting there thinking, “Okay, we know that Taiwan is completely existential for us, and America also thinks it’s existential for them?”", "No, of course not. Of course that’s not what they’re thinking, because it isn’t. So you can’t go around threatening nuclear war over something where it’s not actually a credible threat. Where people think either it’s complete bullshit or they are actually insane.", "Dwarkesh Patel 01:58:58", "But that is the entire strategy behind mutually assured destruction as well, right? And that worked. To actually retaliate is kind of insane.", "Dominic Cummings 01:59:07", "But there it was credible for America. There the argument for America was: we are defending the Western world. We have NATO. There is a NATO agreement on mutual self defense. And if you breach that and you attack one of us, then it means war against everyone.", "That was a lot more credible than there’s a little island off the Chinese coast. It’s got nothing to do with us historically at all. It’s full of Chinese people. But we’re going to threaten potential nuclear war over it. It’s totally non-credible. And doing it makes you sound mad.", "Dwarkesh Patel 01:59:46", "I was talking to a friend about this, and he was defending extending the nuclear umbrella to Taiwan. And even if it works, it’s kind of like having a girlfriend who says, if we break up, I’m going to burn down your house. So we better not break up.", "Dominic Cummings 01:59:58", "Or I’m going to kill myself. Maybe you decide that day to do what she wants, but you’re also thinking she’s mad and this is not sustainable.", "Dwarkesh Patel 02:00:08", "Yeah, and not responsible.", "(02:00:10) - Russia", "Dwarkesh Patel 02:00:10", "You spent time in Russia when you were a younger man. You were trying to start an airline there, right?", "Dominic Cummings 02:00:15", "Yeah.", "Dwarkesh Patel 02:00:16", "What did you learn about Russia, and do you think the regime there is stable in the long run?", "Dominic Cummings 02:00:20", "I learned a lot of things doing it. I learned that Russia is a mafia state. I learned that London was treated back in the mid 90s, as the Moscow taxi drivers called it, the laundry for the mafia estate. That is something which people here have not wanted to face because there’s a lot of money to be made out of the whole enterprise, even in 2021. When I said that it’s just crazy that we’re still doing this. There was a lot of pushback. Literally months before the Ukraine war started, it was completely normal in London to defend the way in which we’ve handled the whole thing for the last 20 years.", "I also learned about incentives a lot. So everything I had to do with the airline was a total fiasco. And one of the things I figured I learned was — I, like many naive Westerners, got involved with it, thinking, well, the people obviously want to make the airline work, right? That’s the whole point of having an airline. It’s a successful airline. Wrong. Actually, what most Russians wanted to do was steal from the airline and move the money offshore. And I never realized that. And a lot of the people involved with it didn’t realize that.", "Russia was full of Harvard MBA types flying in from JFK and arriving in Russia thinking that it was some kind of vaguely normal country. And they, like me, learned the hard way that it is not.", "Dwarkesh Patel 02:02:00", "That lesson with incentives, for example, is isomorphic to other institutions you’ve seen, or was that specific to Russia?", "Dominic Cummings 02:02:08", "Everything is crazier and more insane in Russia, but I think it definitely helped me deal with other failures. You always learn a lot, I think, from total failures and having to confront the fact that you just totally misunderstood what on earth you were getting into. And I completely misunderstood what I was getting into there on every level.", "So I think it was very useful for me, actually. When I came back to Britain and then got involved with politics, I was much more careful about trying not to fool myself and trying to figure out what’s real and question my own assumptions about things.", "Dwarkesh Patel 02:02:50", "Speaking of which, a character you’ve highlighted who exemplifies that sort of epistemic humility is Bismarck. You’ve done a deep dive on him so I just want to ask you a bunch of questions, not only about him, but maybe just a study of history in general.", "First of all, where in the world do you think a Lee Kuan Yew or Bismarck type figure is most likely to emerge as the leader of that country? Where is it most likely today?", "Dominic Cummings 02:03:13", "I can imagine someone like that emerging in China or Russia. It’s much harder to imagine someone like that emerging in Britain or America or the Western world.", "Dwarkesh Patel 02:03:29", "How come?", "Dominic Cummings 02:03:30", "They’d be crushed by the system. You could imagine in China, some Machiavellian character coming through. You could say the CCP has done that to some extent. A Machiavellian character comes and exploits the system in a kind of somewhat mafia way, manages to take over, and then Stalin style starts purging people and manipulating the whole thing so that they acquire more and more power.", "The good thing about the British and American system is that it’s proved quite resistant to that sort of thing. But that hits upon a fundamental paradox. That our systems have been good at preventing the kind of catastrophe of Stalin-type person taking over but the very characteristics that make it very hard for a Stalin to take over also make it chronically incompetent as well.", "Dwarkesh Patel 02:04:29", "Yeah, or hard for a good statesman to take over as well.", "You are somebody who has actually lived through and participated at a high level in a hinge point in history, with masterminding Brexit and then as chief advisor doing COVID.", "What is it that you now understand about how history is written? When you were looking back at previous periods, how do you think about what is written versus what actually happened? Knowing how your time has been described in the press and in contemporary accounts.", "Dominic Cummings 02:05:02", "Close to 100% of the things written about me, what I was trying to do, what I thought, what I said, what I wanted to do, what I actually did, is complete garbage. So it obviously affects how you see everything else. It can’t have any other effect. It was true in the Department of Education. It was true in the referendum itself. It was true in Number 10.", "It definitely makes me much more skeptical about all kinds of details. I think whenever you read history, it definitely makes me think all the time. “Yeah, but that might easily be wrong. And what else was actually happening behind the scenes also?” Also just like, how much is lost to it.", "If I think about my own time in doing the referendum or Number 10, close to 100% of the reality of conversations and motivations and calculations is basically lost. It’s history. None of it is really recorded anywhere. I mean, there might be some sketchy WhatsApp conversations here and there. The odd person keeps a diary, which can be useful for sure, but it was pretty rare, certainly amongst top people. Even if they keep it, I think they destroy.", "Dwarkesh Patel 02:06:28", "Especially in this age where so much is recorded even if you don’t try to. Whereas if you look back at Bismarck’s time, there’s not any emails and so forth that you can dig up.", "Dominic Cummings 02:06:39", "Yeah, but it’s a bit of paradoxical though because those guys were all constantly sending letters to each other and a lot of those letters do end up surviving because they are just filed in some paper archive somewhere and then 60 years later there’s a war and people get their hands on them. For us, though, how much of key people’s WhatsApps between Jared and Trump or me and Boris, or all these sorts of things, how much of that digital archive actually will be saved and accessible is arguably less than in the old days.", "(02:07:12) - Bismarck’s career as an example of AI (mis)alignment", "Dwarkesh Patel 02:07:12", "You were giving a talk or participating in a panel at the Orwell Festival of Political Writing about Bismarck, and you had a quote there that I thought was so interesting and I want you to explain it.", "Bismarck’s partly a story of how intelligence tries very hard to escape all constraints, and all attempts by less intelligent people to force higher intelligence to align with certain goals and values are at best highly uncertain and dangerous.", "What did you mean by that?", "Dominic Cummings 02:07:38", "So it was prompted by studying Bismarck at the same time as having a lot of conversations with people about the whole AI alignment problem and whatnot. And it just occurred to me one day that it’s an interesting exercise to consider Bismarck’s career through the prism of the AI alignment arguments. So here are all the safety features that we’re going to try and create. We can constrain it in this way, we can constrain it in that way. If all else fails, we can try and kill it. Can we switch it off? Etc.", "But if you look at his career, you basically see the AI alignment problem actually just living. You see something that’s much more able than its competitors and it defines success fundamentally as expansion of its own power, which also means its own freedom of maneuver. And it treats all attempts to align its goals with broader goals as enemy action to be destroyed. And it’s highly effective at preemptively destroying them. People try and send the intelligence services to close it down, it ends up taking over intelligence services and using them to blackmail its opponents and forcing them into suicide and exile.", "At every stage, every attempt to create safety features is defeated. And the ultimate thing is to just switch it off. As people start writing letters all across Europe essentially saying, “Switch it off. Just switch it off.” But they can’t switch it off.", "Dwarkesh Patel 02:09:14", "It kind of works at the end, right?", "Dominic Cummings 02:09:16", "Well, it’s human, so it gets really old.", "Dwarkesh Patel 02:09:19", "But who was the Kaiser that kicked him out?", "Dominic Cummings 02:09:21", "And in the end the Kaiser says “I think I can do without you.” The Kaiser was wrong, of course, and everything immediately started going downhill for Germany thereafter. And in some weird ways, also parallel to the AI argument, a lot of the key people around Europe started sort of wistfully saying, “Well, you know, it was a nightmare to deal with this terrible, superhuman AI, but it wasn’t a pirate. It wasn’t insane. You could at least negotiate surrender. It wasn’t just a berserk pirate.” Some of them almost felt nostalgic for its rationality after it had gone.", "Dwarkesh Patel 02:10:04", "Another analogy here is it set up all these things that nobody understands, these systems of alliances and relationships in Europe. And you can imagine, like an AI advisor, and you’re not sure what’s going on, but it’s making, I don’t know, maybe it’s like a hedge fund manager that’s AI and it’s like making a lot of money. And then you’re like, but I don’t trust it. I’m going to shut it off. And then a few years later, there’s like a huge financial collapse because it was doing something you didn’t realize that was super important. You shut it off. You didn’t understand what it was doing that was super important. You shut it off. Suddenly all the shit that it tried to do goes to shit. And you have World War I.", "Dominic Cummings 02:10:37", "Your interpretability program that’s running it doesn’t really work. It can’t really explain to you why it’s doing these things. And then you think, “Oh, well, it all looks a bit murky, I’ll close it down.” And then you find out that actually the structure that it built was super valuable.", "Dwarkesh Patel 02:10:55", "Yeah, that’s such a great analogy.", "Why is there not a definitive biography of Bismarck out there?", "Dominic Cummings 02:11:00", "Well, I would say there is really. There’s a three volume one by a guy called Oto Pflanze done in the 80sm, I think. Not very well known, but it’s by far the best.", "Dwarkesh Patel 02:11:11", "Okay, big picture. What is the cause of Britain no longer being the global superpower it once was? Is it decolonization? Post war socialism? Is it the war itself?", "The productivity stuff started happening after 2005, but people have been talking about England declining for a long time. What is the big picture cause of that?", "Dominic Cummings 02:11:33", "Well, the most important thing, obviously, is World War I. At the end of that the naval dominance was gone. Huge financial reserves poured into the mud of the trenches. I mean, Europe’s never recovered from it, never mind just Britain. So I think that was the single biggest element to it.", "Dwarkesh Patel 02:12:02", "Yeah. Wasn’t it like one in eight British men between the ages of 20 and 40 died. Or it was maybe even higher than that.", "Dominic Cummings 02:12:08", "The years afterwards saw the biggest transfer of property in Britain in basically 500 years since Henry VI separated us from the Catholic Church, the dissolution of the monasteries, and that huge shift in the 16th century.", "Immediately after World War I there was a similar kind of epic shift. As a consequence of the scale of death. And also, you also see that echoed in the blogs I did on Alan Brookes diaries through World War II. And it’s interesting he keeps referring back to it as well, when he’s constantly bemoaning another military disaster and why the British army is not working so well. His answer is, “Well, of course, like a whole generation of great leaders was destroyed.”", "Dwarkesh Patel 02:12:53", "I’m curious why the opposite thing isn’t more common, where you have all these generals who have seen World War I, maybe they’re better at fighting World War II as a result? I mean, Hitler was a World War I officer. The World War II generation did seem special. Did the World War I experience help them in any way?", "Dominic Cummings 02:13:11", "I don’t know, and I’m not an expert but it’s striking that the view of contemporaries was that lots of the best young people were killed, and that’s why now a lot of the senior echelons are not up to the job.", "Dwarkesh Patel 02:13:24", "One thing you’ve emphasized a lot over your writing and throughout your entire career is the need for basic research as a way to move England forward and maybe move the west forward. But when you look at England’s GDP per capita, it’s far behind the US. And one thing I was wondering about is maybe it makes sense for the US to be doing a lot of basic research to expand the frontier. But Britain doesn’t even seem to be on the frontier.", "If it’s just about economic growth, at this point shouldn’t it just be copying the things that already work in the US instead of doing a whole bunch of basic research and hoping in like 20 years or something, it contributes to productivity growth or something.", "Dominic Cummings 02:14:00", "The answer is not as simple. Britain needs to do a whole bunch of things. Some of it is frontier research, but that’s only part of it. Part of the reason for that also is not just economic growth, but also security and national independence, etc. How much of Europe in 20 years time is actually going to have meaningfully independent capabilities in lots of areas and how much of its choice will only be — do we buy American or do we buy Chinese?", "If Britain wants to have its own choices, then it’s going to have to build things. The science and technology staff is important for us to develop, but not just for economic reasons. But it’s also very, very far from a magic bullet. There’s a whole bunch of stuff that we have to do and that we started work on in 2020. The zoning laws are a crucial part of it. The whole ecosystem around startups. The applications of the research, that whole ecosystem is a nightmare in all sorts of ways and it’s one of the reasons why constantly great startups here end up selling out early to American companies. All of that needs dealing with. There’s just massive regulation of area after area after area, which throttles growth. The whole housing market is a complete shit show. I mean, everywhere you look is bad.", "Dwarkesh Patel 02:15:30", "Given that most attempts at finding NIMBY and red tape and over regulation have failed. Even the one you attempted to fix planning, aka Zoning in the UK, when you were chief advisor. What is the correct strategy to fighting NIMBY? Because it just seems like really hard to fight it head on.", "Dominic Cummings 02:15:49", "I think a lot of people look for a magic bullet of communication whereby we come up with some equivalent of take back control, but for planning, and then we persuade everyone and the public will shift and start cheering house building. I don’t think that’s realistic and I don’t think that’s the answer.", "In fact, in 2020, we did actually make some changes to the planning laws, but I think it’s instructive how we did it. We didn’t talk about it at all. I actually basically had a communications blackout on the whole thing and that’s because talking about it is extremely hard to do, even if you’re very good at communications. So it was completely pointless trying to do it with that Number 10 and that Prime Minister and that Conservative party. So my approach to it then was just do it, don’t talk about it, and talk about other things. And so we did actually manage to get various things done then, but without most people even noticing.", "I think overall, though, if you’re really trying to do at a big scale, the actual answer is that you just do it. You focus public communications largely on other things. When people actually see the effects of it with growth and houses they can afford and businesses starting and growing and thriving local areas and whatnot, then you can point to that. But you’re not doing it in a theoretical way. It’s not politicians promising things. It’s being able to point to real change in their areas and say, “Look at what’s happened. Now, do you want to go back? Do you want the people who don’t like this to take over?” You see what I mean? I think the idea of coming up with some kind of agenda on it and then trying to persuade the public about mass deregulation and mass housing is a fool’s errand.", "(02:17:37) - Odyssean education", "Dwarkesh Patel 02:17:37", "Let’s talk about education for a bit. You mentioned earlier that that’s one of the things you’re working on now. But also you have a big history here. You were at the Ministry of Education, and while there you wrote a very interesting report on an Odyssean education and this was one of the quotes from that that I thought was pretty compelling.", "“We need leaders with an understanding of Thucydides and statistical modelling, who have read The Brothers Karamazov and The Quark and the Jaguar, who can feel Kipling’s Kim and succeed in Tetlock’s Good Judgement Project. An Odyssean education would focus on humans’ biggest and most important problems and explain connections between them to train synthesisers.”", "This is a very interesting idea. Do you want to explain a bit about what the idea behind the Odysseyan education was?", "Dominic Cummings 02:18:26", "Politics is the hardest thing. It’s the most complicated. There’s a reason why it’s just constant failures. It’s much harder than the hardest startup. The people who can do well at the top level are able to look at lots of different things and have some kind of sense for them.", "Generally speaking, the people who are doing it have done history degrees or politics, philosophy, economics degrees at Oxford type stuff. And then they’ve gone into the world of politics and government. And it’s just an incredibly narrow way of looking at the world. There’s huge skills that you don’t have. Intellectual skills you don’t have. Practical skills you don’t have. You don’t understand how things actually get done. You don’t understand how regulation actually has its effect, because you don’t see at the coalface what it’s like starting a business or scaling a business.", "My view is much stronger, having been in Number 10. That what I wrote ten years ago was correct, that especially having gone through COVID and the horrors around that. The way to deal with that is to have different kinds of people with different kinds of skills. But you also need some people who can look across multiple domains at the same time. So I think it’s not a surprise, for example, that one of the most useful people in COVID was someone who’d done a Physics PhD, was now working in AI, but had also built a business and had worked with that business in Whitehall. So they could see and understand a lot of the difficult science around COVID. They knew what it’s like actually running a business. They understood the interface between the business world and the government. They were familiar with all of those different worlds, and that meant that they were much better able at looking at the overall problems than the average MP, the average senior official, the average special advisor, etc.", "Dwarkesh Patel 02:20:42", "Is this ability to synthesize across fields something you can actually train in something analogous to Plato’s academy? Or is this just that you have to hire people who have it? The reason I ask is: One thing I was thinking while reading this, you have a lot of famous examples in history of scientists and mathematicians. There’s scientists who understand very complicated things, but they don’t understand the very basic insight in Hayek’s knowledge problem and they are socialists. Or they don’t understand the basic game theory, and they’re pacifists. They can understand complicated things, but having that sort of ability to transfer between — I understand power laws, therefore I understand the power of scaling in AI or, I understand exponentials, therefore I understand how bad a pandemic could be. That’s like an additional thing that’s hard to train.", "Dominic Cummings 02:21:31", "It totally depends on temperament and personality. Two of the most interesting intellectual people, 20th century are John von Neumann and Kurt Gödel , who are both friends, fled Europe because of the Nazis and ended up in Princeton. Both extraordinary minds. Both did things of stunning originality. Von Neumann was completely at home in the world of Washington and could also go and deal with bureaucrats and navigate the corridors of the White House. He could give presentations to politicians and understood — Let’s have a pre meeting about this in order to make sure that so and so understands that. And so then the question is presented to the president in the right order so that this guy doesn’t say that before this person says that, etc. All the kind of political skills you need to actually get something done and avoid your meeting turning into a shit show.", "Kurt Gödel is like the exact opposite. If you’re going to put people on a graph, Kurt Girdle would be like the exact opposite end of that. Famously totally useless in every practical way, a menace to himself. So, yes you want people with great intellectual ability, but as you say, some of those people can be highly functional in a political environment and some of them are completely catastrophic.", "Dwarkesh Patel 02:22:46", "Yeah, but it seems like I don’t know how you train a John von Neumann, because then he also assisted the US in thinking about Cold War game theory and things like that. Whereas is it just easier to hire those people?", "Actually, maybe a different way to ask this question is, does the UK train these technical people who can synthesize across fields but it’s just that they’re trained in the UK, but they go to the private sector or emigrate? Or is it just that the UK doesn’t train them to the same extent as the US does?", "Dominic Cummings 02:23:19", "I think elite universities now definitely don’t train this sort of thing. The way that I think about it is this. It’s easy to imagine creating new courses for 17 to 23 year olds where they do a much broader mix of intellectual subjects, where they study a broader mix of intellectual subjects. They read some Thucydides but they also understand some basics about statistics and Tetlock type stuff. But also, instead of them just sort of going off on summer holidays and wasting time, they also go off and work at SpaceX or some kind of startup or with the military, or in a hospital ward, ER room. And they’re moving between these worlds of the theoretical and the intensely practical constantly.", "Now, that won’t be to everyone’s temperament. Some people have a temperament where they find that interesting and they’ll get a lot of value out of it. A lot of people won’t. So you have to think of it in multiple legs. So one thing is to create those courses and that will find you a set of people. But there’s another set of people, let’s say Tim Gowers , Field’s medalist and mathematician at Cambridge. He’s gone through one whole thing now.", "Are you going to send him off on some course? No. But could people like that be brought into government to help in all sorts of ways? Yes. If they spent time working in government on some practical problems, would they develop a much better understanding of how government actually works and therefore be able to come up with new insights about things? Of course they will. I don’t mean they all will, because some of them would be like, “Good, all right, and just be completely hopeless in that environment” and it would be unfair on everyone and them to involve them in it. But Tim Gowers would be a good example of someone who personally could cope with the environment and I think would bring great value to.", "I don’t think there’s just one thing. What we need is multiple experiments with different pathways. Some of it is bringing in older, established, elite talent and then mixing them up in the system. Some of it is training younger, elite talent differently. But there’s no magic bullet to it. You have to try a bunch of things at the same time. Some people will respond very well to an odd mix of theoretical intellectual pursuits and intensely practical things. And that is definitely something worth trying to train and definitely can be trained.", "Dwarkesh Patel 02:25:56", "How does PPE (Politics, Philosophy, Economics) fuck up the people who go through it? Because basically, almost every Prime Minister has gone through it, and even US presidents? Clinton was a Rhodes Scholar and he went through it. What is the way of thinking that it instills that you find problematic?", "Dominic Cummings 02:26:15", "I think it encourages this whole kind of wordcel bluffing. That everything just becomes that you spend a week, you skim through a few books, you come up with some vaguely plausible stuff, and if you seem with good manners and a bit of social polish, then suddenly it all seems plausible. And you’re all surrounded by people who operate in exactly the same way.So you have a room full of polished social wordcels. It’s a very bad set of people to throw at a problem like COVID at them.And also it encourages them to think that the plausible sounding few sentences to get through this conversation is actually what’s important rather than— What’s the truth? What’s the actual answer to the problem? How are you actually going to implement this over many years? Everything like that is completely disdained.", "It points to a general problem with our system. One of the most important things is that everything to do with operations and management and actually getting things done is like the lowest status thing in Whitehall. The highest status thing is A) bullshit about “political strategy” and media and giving interviews. Everyone wants the word Strategy in their job title, and it’s practically always bullshit. It shouldn’t be in the job title, and the job will be improved by removing the word from their job title. And nobody wants to be on logistics, operations, and actually making sure that something happens. And that’s at the core of why so many things work the way they do.", "And you can even just see it in the whole policy process. The high status thing is writing the policy and then spinning it to the media. The totally low status thing is what are the actual implementation details? Has someone already tried to do this in eight different countries in the last four years and each time it’s been a disaster? No one cares about that. That’s left to much lower status jobs and none of the senior people will pay attention to it and they’re actually just formally separated.", "So one of the things that we did in 2020 was we said that one of the ways in which the Number 10 system should change is the policy people should be brought together and actually physically sit with the management people, which is completely revolutionary idea, because that way when you’re thinking up the ideas in the first place, you’re immediately from the beginning talking to the people whose job it is ask “Will this actually work?”", "Rather than what happens now is that these people sit around, they go to bullshit seminars, they talk to journalists, they talk to MPs, they publish a paper. It’s actually full of holes. It doesn’t work. That paper then goes off to the implementation people, the policy people then move on to whatever their next thing is. The management, implementation people look at it and go, “Well, this is not going to work for all the following reasons.” But it’s their job to make it work. They then have to go back and then start arguing.", "“But we already decided that we were going to do this like nine months ago.”", "“Yeah, but it can’t work.”", "So our idea was if you put them together, then you actually find out on day one or day four. — “Oh, actually, this is a stupid idea and it won’t work for the following reasons. We have to do it this way instead.”", "No one cares about that. It goes back to, what do people care about? No one actually cares if it doesn’t work.", "Dwarkesh Patel 02:29:52", "Yeah, the debug loop is broken. It’s like you’re debugging a program, but you can only do it a month after you write the program or something.", "Dominic Cummings 02:29:58", "Yeah, exactly. Imagine if you could only debug the program like 18 months after you wrote the program.", "Dwarkesh Patel 02:30:02", "What do effective altruist and rationalist types get most wrong about politics?", "Dominic Cummings 02:30:07", "They don’t understand how low the caliber of the people are in politics. They don’t understand the fundamental golden rule, which is that the people in politics are almost never actually trying to solve the problem and don’t care about solving the problem. I have constant conversations with E/R types where they think that the political people they’re going to talk to really care about solving Problem X, which is fundamentally incorrect. So it’s not surprising that a lot of their plans go wrong.", "I think that they are also susceptible to the idea that elites are more rational and the voters are more dumb and easily manipulated by emotions, whereas in fact, the truth is the exact opposite. In my opinion, educated elites are by far the easiest to manipulate with emotional propaganda, and the voters are much tougher to fool. The rationalists think it’s the other way around.", "Dwarkesh Patel 02:31:07", "Conventionally, you are seen as, or thought of as combative. I mean, in this interview you’ve been very polite and pleasant, but does the fact that you’re seen as combative, does that help you maybe get your way when you’re in government or something? And is it calculated?", "Dominic Cummings 02:31:23", "It’s not calculated. I don’t know if combative is the right word. I think the fact that people inside the system know that I actually do have priorities and I do actually care about them and I don’t care what the media says, and I have an extremely high tolerance for everyone going crazy in order to actually get what I want definitely, obviously, has an effect, because people will think, “Well, maybe we should do a deal with him. He cares a lot about blah, maybe we should just let him do blah and see if we can persuade him.”", "We’ll help you with this, and maybe you could help us with something else.", "Having priorities, actual priorities, is incredibly rare in politics. Almost no MPs have them all. Practically by definition. Not worrying what the media says and not worrying about everyone hating you, I think, is a very powerful advantage.", "All the way through the referendum and all the way through Number 10, the insiders constantly said — These people are just completely crazy. They don’t know what they’re doing. It’s all going to fall down on their heads. That definitely was an advantage because it just means that they constantly fooled themselves and undermined what they were doing. And they didn’t learn from 2016. The mistakes they made in 2016, they just totally repeated in 2019.", "Dwarkesh Patel 02:32:49", "And in a system where people don’t have priorities, don’t actually care about policy, somebody who does has got to seem strange to them in their world.", "Okay, final question. What does pessimism of the intellect and optimism of the will mean to you?", "Dominic Cummings 02:33:06", "If you look at history, then you have to expect that what happens to everyone else is likely to happen to us, i.e. disaster. But that’s not a reason for giving up. You have to try, even though you don’t think it’s going to work.", "Dwarkesh Patel 02:33:19", "Dominic, this is a huge pleasure. I honestly learned so much. You really can’t get this anywhere else. Having somebody who is as thoughtful as you inside the government, so that you can have both a sort of higher level intellectual picture, but also the level of detail and knowledge of what actually happened and the strategy and information of it. I couldn’t get it from any other interviewee. So this is a huge pleasure. I really enjoyed this.", "Dominic Cummings 02:33:43", "Thank you so much. I really enjoyed it.", "" ]
[ "https://dominiccummings.substack.com/", "https://www.amazon.com/Breaking-History-White-House-Memoir/dp/0063221489", "https://www.amazon.com/Now-Can-Be-Told-Manhattan/dp/0306801892", "https://www.amazon.com/dp/B000066IHH?plink=RQHdp3VujoWLcHY1&pf_rd_r=XKAAA4PEMSS885MM122B&ref_=adblp13nvvxx_0_0_ti", "https://newcriterion.com/issues/2023/6/the-diversity-myth", "https://www.janelia.org/", "https://www.amazon.com/Kill-Chain-Defending-America-High-Tech/dp/031653367X", "https://en.wikipedia.org/wiki/Klemens_von_Metternich", "https://en.wikipedia.org/wiki/Pericles", "https://en.wikipedia.org/wiki/Alcmaeonidae", "https://dominiccummings.com/tag/bismarck/", "https://www.google.com/search?q=Rome+fell+for+longer+than+most+empires+have+lasted+Will+Durant&sca_esv=581841001&sxsrf=AM9HkKn2G7fNrZcs46T1R0UtsD-5_FmeVw%3A1699859219362&ei=E8tRZffQFaXdseMP47meoAY&ved=0ahUKEwi3psCHtcCCAxWlbmwGHeOcB2QQ4dUDCBA&uact=5&oq=Rome+fell+for+longer+than+most+empires+have+lasted+Will+Durant&gs_lp=Egxnd3Mtd2l6LXNlcnAiPlJvbWUgZmVsbCBmb3IgbG9uZ2VyIHRoYW4gbW9zdCBlbXBpcmVzIGhhdmUgbGFzdGVkIFdpbGwgRHVyYW50SM4LUF9YhQtwAXgBkAEAmAHOAaAB2QyqAQYwLjEwLjG4AQPIAQD4AQHCAgoQABhHGNYEGLADwgIFECEYoAHCAgcQIRigARgK4gMEGAAgQYgGAZAGCA&sclient=gws-wiz-serp", "https://dominiccummings.substack.com/p/regime-change-2-a-plea-to-silicon", "https://www.gov.sg/article/covid-19-white-paper", "https://www.dwarkeshpatel.com/p/dario-amodei", "https://dominiccummings.com/2020/01/02/two-hands-are-a-lot-were-hiring-data-scientists-project-managers-policy-experts-assorted-weirdos/", "https://www.amazon.com/s?k=Otto+Pflanze+Bismarck", "https://dominiccummings.substack.com/p/people-ideas-machines-vi-the-war", "https://dominiccummings.files.wordpress.com/2013/11/20130825-some-thoughts-on-education-and-political-priorities-version-2-final.pdf", "https://en.wikipedia.org/wiki/John_von_Neumann", "https://en.wikipedia.org/wiki/Kurt_G%C3%B6del", "https://en.wikipedia.org/wiki/Timothy_Gowers", "http://way.so" ]
https://www.dwarkesh.com/p/dylan-jon
Dylan Patel & Jon (Asianometry) – How the Semiconductor Industry Actually Works
[ "(00:00:00) – Xi's path to AGI", "Dwarkesh Patel", "Today, I'm chatting with Dylan Patel , who runs SemiAnalysis , and Jon , who runs the Asianometry YouTube channel.", "Dylan Patel", "Does he have a last name?", "Jon Y", "No, I do not. No, just kidding. Jon Y.", "Dylan Patel", "Why is it only one letter?", "Dwarkesh Patel", "Because Y is the best letter.", "Dylan Patel", "Why is your face covered?", "Jon Y", "Why not?", "Dwarkesh Patel", "No, seriously why is it covered?", "Dwarkesh Patel", "Because I'm afraid of looking at myself getting older and fatter over the years.", "Dylan Patel", "But seriously, it's for anonymity, right?", "Jon Y", "Anonymity, yeah.", "Dwarkesh Patel", "By the way, do you know what Dylan's middle name is?", "Jon Y", "Actually, no. I don't know.", "Dylan Patel", "What's my father's name?", "Jon Y", "I'm not going to say it, but I remember.", "Dylan Patel", "You could say it. It's fine.", "Jon Y", "Sanjay?", "Dylan Patel", "Yes. What's his middle name?", "Jon Y", "Sanjay?", "Dwarkesh Patel", "That's right. So I'm Dwarkesh Sanjay Patel. He's Dylan Sanjay Patel. It's like literally my white name.", "Dylan Patel", "It's unfortunate my parents decided between my older brother and me to give me a white name. I could have been Dwarkesh Sanjay. You know how amazing it would have been if we had the same name? Butterfly effect and all, it probably would’ve turned out the same way, but...", "Dwarkesh Patel", "Maybe it would have been even closer. We would have met each other sooner, you know? Who else would be named Dwarkesh Sanjay Patel in the world?", "00:01:15 – Xi's path to AGI", "Dwarkesh Patel", "Alright here’s my first question. If you're Xi Jinping and you're scaling -pilled, what is it that you do?", "Dylan Patel", "Don't answer that question, Jon, that's bad for AI safety.", "Jon Y", "I would basically be contacting every Chinese national with family back home and saying, “I want information. I want to know your recipes. I want to know suppliers.”", "Dwarkesh Patel", "Lab foreigners or hardware foreigners?", "Jon Y", "Everyone.", "Dylan Patel", "Honey potting OpenAI ?", "Jon Y", "This is totally off-cycle, off the reservation, but I was doing a video about Yugoslavia's nuclear weapons program . It started with absolutely nothing. One guy from Paris showed up. He knew a little bit about making atomic nuclear weapons. He was like, \"Okay, well, do I need help?\" Then the state's secret police is like, \"I will get you everything.\"", "For a span of like four years, they basically drew up a list. “What do you need? What do you want? What are you going to do? What is it going to be for?” And the state police just got everything. If I were running a country and I needed to catch up on that, that's the sort of thing that I would be doing.", "Dwarkesh Patel", "Okay, let's talk about espionage. What is the most valuable piece, if you could have this blueprint, this one megabyte of information? Do you want it from TSMC ? Do you want it from NVIDIA ? Do you want it from OpenAI? What is the first thing you would try to steal?", "Dylan Patel", "You have to stack every layer, right? The beautiful thing about AI is that because it's growing so fast, every layer is being stressed to an incredible degree. Of course, China has been hacking ASML for over five years and ASML is kind of like, \"Oh, it's fine.\" The Dutch government's really pissed off, but it's fine. They already have those files in my view. It's just a very difficult thing to build.", "The same applies for fab recipes. They can poach Taiwanese nationals. It’s not that difficult because TSMC employees do not make absurd amounts of money. You can just poach them and give them a much better life and they have. A lot of SMIC 's employees are TSMC, Taiwanese nationals, especially a lot of the really good ones high up.", "You go up the next layers of the stack. Of course, there are tons of model secrets. But how many of those model secrets do you not already have and just haven't deployed or implemented or organized? That's the one thing I would say. China just clearly is still not scale-pilled in my view.", "Dwarkesh Patel", "If you could hire these people, it would probably be worth a lot to you because you're building a fab that's worth tens of billions of dollars. This talent knows a lot. How often do they get poached? Do they get poached by foreign adversaries or do they just get poached by other companies within the same industry but in the same country? Why doesn't that drive up their wages?", "00:04:20 – Liang Mong Song", "Jon Y", "It's because it's very compartmentalized. Back in the 2000s, before SMIC got big, it was actually much more open and more flat. After that, after Liang Mong Song and after all the Samsung issues and after SMIC's rise, you literally saw—", "Dylan Patel", "You should tell that story, actually, about the TSMC guy that went to Samsung and SMIC and all that. I think you should tell that story.", "Jon Y", "There are two stories. There's a guy who ran a semiconductor company in Taiwan called Worldwide Semiconductor. This guy, Richard Chang , was very religious. All the TSMC people are pretty religious. He particularly was very fervent and wanted to bring religion to China. So after he sold his company to TSMC—which was a huge coup for TSMC—he worked there for about eight or nine months and then went back to China. Back then, the relations between China and Taiwan were much more different. So he goes over to Shanghai and they say, “We'll give you a bunch of money.\" Richard Chang basically recruits a whole conga line of Taiwanese who just get on the plane and fly over. Generally that’s actually true of a lot of the acceleration points within China’s semiconductor industry. It’s from talent flowing from Taiwan.", "The second story is about Liang Mong Song. Liang Mong Song is a nut. I’ve not met him. I’ve met people who worked with him and they say he is a nut. He's probably on the spectrum. He doesn't care about people, business, or anything. He wants to take it to the limit. That’s the only thing he cares about. He worked at TSMC, a literal genius with 300 patents or whatever, 285.  He works his way all the way to the top tier and then one day he loses out on some power game within TSMC and gets demoted.", "Dylan Patel", "He was like the head of R&D or something right?", "Jon Y", "He was like one of the top R&D people. He was in like second or third place.", "Dylan Patel", "It was for the head of R&D position, basically.", "Jon Y", "Correct, it was for the head of R&D position. He’s like, “I can’t deal with this.” He goes to Samsung and steals a bunch of talent from TSMC. Literally, again, it’s a conga line. At some point, some of these people were getting paid more than the Samsung chairman, which is not really comparable…", "Dylan Patel", "Isn't the Samsung chairman usually like part of the family that owns Samsung ?", "Jon Y", "Correctamundo.", "Dylan Patel", "Okay, yeah so it’s kind of irrelevant.", "Jon Y", "So he goes over there and says, \"We will make Samsung into this monster. Forget everything. Forget all the stuff you’ve been trying to do incrementally. Toss that out. We are going to the leading edge and that is it.\" They go to the leading edge. They win a big portion of Apple's business back from TSMC.", "And then at TSMC, Morris Chang is like, \"I'm not letting this happen.\" That guy is toxic to work for as well but also goddamned brilliant. He’s also very good at motivating people. He sets up what is called the Nightingale Army . They split a bunch of people and they say, “You are working R&D night shift. There is no rest at the TSMC fab. As you go in, there will be a day shift going out.” They called it “burning your liver.” In Taiwan, they say as you get old and as you work, you're sacrificing your liver. They called it the liver buster.", "They basically did this Nightingale Army for a year or two years. They finished FinFET . They basically just blow away Samsung. At the same time, they sue Liang Mong Song directly for stealing trade secrets. Samsung basically separates from Liang Mong Song and Liang Mong Song went to SMIC.", "Dylan Patel", "So Samsung at one point was better than TSMC. Then he goes to SMIC and SMIC caught up rapidly after.", "Jon Y", "Very rapid. That guy's a genius. That guy's a genius. I don't even know what to say about him. He's like 78 and he's beyond brilliant. He does not care about people.", "00:08:25 – How semiconductors get better", "Dwarkesh Patel", "What does research to make the next process node look like? Is it just a matter of a hundred researchers going in, they do the next n+1? The next morning, the next hundred researchers go in?", "Jon Y", "It’s experiments. They have a recipe and that's what they do. Every TSMC recipe is the culmination of long years of research. It's highly secret. The idea is that you're going to look at one particular part of it and say, “Run an experiment. Is it better? Is it not? Is it better or not?” It’s a thing like that.", "Dylan Patel", "It's basically a multivariable problem. Every single tool sequentially you're processing the whole thing. You turn knobs up and down on every single tool. You can increase the pressure on this one specific deposition tool.", "Dwarkesh Patel", "What are you trying to measure? Does it increase yield ?", "Dylan Patel", "It's yield, it's performance, it's power. It's not just better or worse. It's a multivariable search space.", "Dwarkesh Patel", "What do these people know such that they can do this? Is it that they understand the chemistry and physics?", "Dylan Patel", "It's a lot of intuition, but yeah. It's PhDs in chemistry, PhDs in physics, PhDs in electrical engineering…", "Jon Y", "Brilliant geniuses.", "Dylan Patel", "They don't even know about the end chip a lot of times. It's like, \"Oh, I am an etch engineer and all I focus on is how hydrogen fluoride etches this and that's all I know. If I do it at different pressures… If I do it at different temperatures… If I do it with a slightly different recipe of chemicals… It changes everything.", "Jon Y", "I remember someone told me this when I was speaking. How did America lose the ability to do this sort of thing? I’m talking about etch and hydrofluoric acid and all of that. He told me basically it's very master-apprentice. You know like in Star Wars with the Sith, there's only one, right? Master apprentice, master apprentice. It used to be that there is a master, there's an apprentice, and they pass on this secret knowledge. This guy knows nothing but etch, nothing but etch.", "Over time, the apprentices stopped coming. In the end, the apprentices moved to Taiwan. That's the same way it's still run. Like you have NTHU, National Tsing Hua University . There's a bunch of masters. They teach apprentices, and they just pass this secret, sacred knowledge down.", "Dwarkesh Patel", "Who are the most AGI-pilled people in the supply chain?", "Dylan Patel", "I got to have my phone call with Colette right now.", "Dwarkesh Patel", "Okay, go for it. Could we mention to the podcast that NVIDIA is calling Dylan to update him on the earnings call?", "Dylan Patel", "Well, it's not exactly that, but…", "Dwarkesh Patel", "Go for it, go for it…", "Dylan is back from his call with Jensen Huang .", "Dylan Patel", "It was not with Jensen, Jesus.", "Dwarkesh Patel", "What did they tell you, huh? What did they tell you about next year’s earnings?", "Dylan Patel", "No, it was just color around like Hopper , Blackwell , and margins. It's quite boring stuff for most people, I think it's interesting though.", "Dwarkesh Patel", "I guess we could start talking about NVIDIA. You know what, before we do…", "Dylan Patel", "I think we should go back to China. There's a lot of points there.", "00:11:16 – China can centralize compute", "Dwarkesh Patel", "Alright, we covered the chips themselves. How do they get the 10 gigawatts data center up? What else do they need?", "Dylan Patel", "There is a true question of how decentralized do you go versus centralized. In the US, as far as labs and such, you have OpenAI, xAI , Anthropic . Microsoft has their own effort, Anthropic has their own efforts, despite having their partner. Then you have Meta. You also have all the interesting startups doing stuff. You go down the list and there's quite a decentralization of efforts. Today in China, it is still quite decentralized. It's not like, “ Alibaba , Baidu , you are the champions.” You have DeepSeek doing amazing stuff and it’s like, “Who the hell are you? Does the government even support you?”", "If you are Xi Jinping and scale-pilled, you must now centralize the compute resources. Because you have sanctions on how many NVIDIA GPUs you can get in now. They're still north of a million a year, even post- October last year sanctions . We still have more than a million H20s , and other Hopper GPUs getting in through other means but legally the H20s. On top of that, you have your domestic chips, but that's less than a million chips.", "When you look at it, it's like, “Oh, well, we're still talking about a million chips.” The scale of data centers people are training on today slash over the next six months is 100,000 GPUs. OpenAI, xAI, these are quite well documented and others. But in China, they have no individual system of that scale yet. Then the question is, “How do we get there?” No company has had the centralization push to have a cluster that large and train on it yet, at least publicly and well-known. The best models seem to be from a company that has got like 10,000 GPUs or 16,000 GPUs.", "It's not quite as centralized as the US companies are and the US companies are quite decentralized. If you're Xi Jinping and you're scale-pilled, do you just say, “XYZ company is now in charge and every GPU goes to one place”? Then you don't have the same issues as in the US. In the US, we have a big problem with being able to build big enough data centers, being able to build substations and transformers and all this that are large enough in a dense area. China has no issue with that at all because their supply chain adds as much power as like half of Europe every year. It’s some absurd statistic.", "They're building transformer substations or building new power plants constantly. They have no problem with getting power density. You go look at Bitcoin mining. Around the Three Gorges Dam , at one point at least, there was like 10 gigawatts of Bitcoin mining estimated. We're talking about gigawatt data centers coming over in 2026 or 2027 in the US. This is an absurd scale relatively. We don't have gigawatt data centers ready. China could just build it in six months, I think, around the Three Gorges Dam or many other places. They have the ability to do the substations. They have the power generation capabilities. Everything can be done like a flip of a switch, but they haven't done it yet. Then they can centralize the chips like crazy. Right now they can be like “Oh, a million chips that NVIDIA's shipping in Q3 and Q4, the H20, let's just put them all in this one data center.” They just haven't had that centralization effort yet.", "Jon Y", "You can argue that the more you centralize it, the more you start building this monstrous thing within the industry, you start getting attention to it. Suddenly, lo and behold, you have a little bit of a little worm in there. Suddenly while you're doing your big training run, \"Oh, this GPU is off. Oh, this GPU... Oh no, oh no, oh no...\"", "Dylan Patel", "I don't know if it's like that.", "Dwarkesh Patel", "Is that a Chinese accent by the way?", "Dylan Patel", "Just to be clear, Jon is East Asian. He's Chinese.", "Jon Y", "I'm of East Asian descent.", "Dylan Patel", "Half Taiwanese, half Chinese?", "Jon Y", "That is right.", "Dylan Patel", "But I don't know if that's as simple as that because training systems are like… Is it water gated? Firewalled? What is it called? Not firewalled. I don't know. There's a word for that. Where they're…", "Jon Y", "Air-gapped. I think they’re Chinese-walled.", "Dwarkesh Patel", "You’re going through all the four elements. Earth, water, fire!", "Dylan Patel", "If you’re Xi Jinping and you’re scale-pilled…", "Dwarkesh Patel", "You got to unite the four forces.", "Dylan Patel", "Fuck the airbenders. Fuck the firebenders. We got the Avatar . You have to build the Avatar. Okay. I think that's possible. The question is, “Does that slow down your research?” Do you crush people like DeepSeek who are clearly not being influenced by the government? and put some idiot…", "Jon Y", "You put an idiot bureaucrat at the top. Suddenly, he's all thinking about these politics. He's trying to deal with all these different things. Suddenly, you have a single point of failure, and that's bad.", "Dylan Patel", "On the flip side, there are obviously immense gains from being centralized because of the scaling laws . The flip side is compute efficiency which is obviously going to be hurt because you can't experiment and have different people lead and try their efforts as much if you're more centralized. There is a balancing act there.", "Dwarkesh Patel", "That is actually really interesting, the fact that they can centralize. I didn't think about this. Even if America as a whole is getting millions of GPUs a year, the fact that any one company is only getting hundreds of thousands or fewer means that there's no one person who can do a single training run as big in America as if China as a whole decides to do one together. The 10 gigawatts you mentioned near the Three Gorges Dam, how widespread is it? Is it a state? Is it like one wire? Would you do a sort of distributed training run?", "Dylan Patel", "It’s not just the dam itself, but also all of the coal. There's some nuclear reactors there as well I believe. Between all of that and renewables like solar and wind, in that region there is an absurd amount of concentrated power that could be built. I'm not saying it's like one button, but it's more like, \"hey within X mile radius.” That's more of the correct way to frame it. That's how the labs are also framing it in the US.", "Dwarkesh Patel", "If they started right now, how long would it take to build the biggest AI data center in the world?", "Dylan Patel", "Actually the other thing is, could we notice it? I don't think so. With the amount of factories that are being spun up—the amount of other construction, manufacturing, etc. that's being built—a gigawatt is actually like a drop in the bucket. A gigawatt is not a lot of power. 10 gigawatts is not an absurd amount of power, it's okay. Yes, it's like hundreds of thousands of homes, millions of people. But you’ve got 1.4 billion people. You’ve got most of the world's extremely energy intensive refining and rare earth refining and all these manufacturing industries here. It would be very easy to hide it.", "It would be very easy to just shut down like… I think the largest aluminum mill in the world is there and it's north of 5 gigawatts alone. Could we tell if they stopped making aluminum there and instead started making AI there? I don't know if we could tell because they could also just easily spawn like 10 other aluminum mills to make up for the production and be fine. There's many ways for them to hide compute as well.", "00:18:50 – Export controls & sanctions", "Dwarkesh Patel", "To the extent that you could just take out a five gigawatt aluminum refining center and build a giant data center there, then I guess the way to control Chinese AI has to be the chips? Just walk me through how many chips they have now. How many will they have in the future? What will that be in comparison to the US and the rest of the world?", "Dylan Patel", "In the world we live in, they are not restricted at all in the physical infrastructure side of things in terms of power, data centers, etc. Their supply chain is built for that. It's pretty easy to pivot that. Whereas the US adds so little power each year and Europe loses power every year. The Western industry for power is non-existent in comparison.", "On the flip side, \"Western\" manufacturing when you include Taiwan is way, way, way larger than China's, especially on leading-edge where China theoretically has—depending on the way you look at it—either zero or a very small percentage share. There you have equipment, wafer manufacturing, and then you have advanced packaging capacity .", "Where can the US control China? Advanced packaging capacity is kind of shot because the vast majority… The largest advanced packaging company in the world was Hong Kong-headquartered. They just moved to Singapore, but that's effectively in a realm where the U.S. can't sanction it. A majority of these other companies are in similar places. Advanced packaging capacity is very hard. Advanced packaging is useful for stacking memory , stacking chips on co-ops, things like that.", "Then the step down is wafer fabrication. There is immense capability to restrict China there. Despite the US making some sanctions, China in the most recent quarters was like 48% of ASML's revenue and like 45% of Applied Materials ’. You just go down the list. Obviously it's not being controlled that effectively. But it could be on the equipment side of things.", "The chip side of things is actually being controlled quite effectively, I think. Yes, there is shipping of GPUs through Singapore and Malaysia and other countries in Asia to China. But the amount you can smuggle is quite small. Then the sanctions have limited the chip performance to a point where it's like, “You know, this is actually kind of fair.” But there is a problem with how everything is restricted.", "You want to be able to restrict China from building their own domestic chip manufacturing industry that is better than what we ship them. You want to prevent them from having chips that are better than what we have. And then you want to prevent them from having AIs that are better. That’s the ultimate goal. If you read the restrictions, it’s very clear that it's about AI. Even in 2022, which is amazing, at least the Commerce Department was kind of AI-pilled. It was like, “You want to restrict them from having AIs better than us.”", "So starting on the right end, it's like, “Okay, well, if you want to restrict them from having better AIs than us. you have to restrict chips. Okay. If you want to restrict them from having chips, you have to let them have at least some level of chip that is better than what they can build internally.” But currently the restrictions are flipped the other way. They can build better chips in China than what we restrict them from in terms of chips that NVIDIA or AMD or Intel can sell to China.", "So there's sort of a problem there in that the equipment that is shipped can be used to build chips that are better than what the Western companies can actually ship them.", "Dwarkesh Patel", "Jon, Dylan seems to think the export controls are kind of a failure. Do you agree with him?", "Jon Y", "That is a very interesting question because I think it's like…", "Dwarkesh Patel", "Why thank you.", "Dylan Patel", "Dwarkesh, you're so good.", "Jon Y", "Yeah, Dwarkesh, you're the best. Failure is a tough word to say because what are we trying to achieve?", "Dylan Patel", "Let’s just take lithography . If your goal is to restrict China from building chips and you just boil it down to like, “Hey, lithography is 25-30% of making a chip. Cool, let's sanction lithography. Okay, where do we draw the line? Let me figure out where the line is.” If I'm a bureaucrat or lawyer at the Commerce Department or what have you, obviously I'm going to go talk to ASML and ASML is going to tell me, “This is the line,” because they know this and this…  there's some blending over.", "Jon Y", "They're looking at it like, “What's going to cost us the most money?”", "Dylan Patel", "Then they all constantly say, “If you restrict us, then China will have their own industry.” The way I like to look at it is that chip manufacturing is like 3D chess or like a massive jigsaw puzzle. If you take away one piece, China can be like, \"Oh, that's the piece. Let's put it in.\" Currently year by year by year, they keep updating export restrictions ever since like 2018 or 2019 when Trump started and now Biden's accelerated them.", "They haven't just taken a bat to the table and broken it. It's like, “Let's take one jigsaw puzzle piece out. Walk away. Oh shit. Let's take two more out. Oh shit.” You either have to go full “bat to the fricking table/wall” or chill out and let them do whatever they want. Because the alternative is everything is focused on this thing and they make that. Then now when you take out another two pieces they can be like, “Well, I have my domestic industry for this. I can also now make a domestic industry for these.” You go deeper into the tech tree or what have you.", "Jon Y", "It's art in the sense that there are technologies out there that can compensate. The belief that lithography is a linchpin within the system, it's not exactly true. At some point, if you keep pulling a thread, other things will start developing to close that loop. That's why I say it's an art. I don't think you can stop the Chinese semiconductor industry from progressing. That's basically impossible.", "The Chinese nation, the Chinese government, believes in the primacy of semiconductor manufacturing. They believed it for a long time, but now they really believe it.", "Dylan Patel", "To some extent, the sanctions have made China believe in the importance of the semiconductor industry more than anything else.", "Dwarkesh Patel", "So from an AI perspective, what's the point of export controls then? If they're going to be able to get these...", "Dylan Patel", "Well they’re not centralized though. That's the big question: are they centralized? Also, I'm not sure if I really believe it but on prior podcasts , there have been people who talked about nationalization.", "Dwarkesh Patel", "Why are you referring to this ambiguously? “My opponent…”", "Dylan Patel", "No, I love Leopold , but there have been a couple where people have talked about nationalization. If you have nationalization, then all of the sudden you aggregate all the flops . There's no fucking way.", "China can be centralized enough to compete with each individual US lab. They could have just as many flops in 2025 and 2026 if they decided they were scale pilled. Just from foreign chips, for an individual model.", "Dwarkesh Patel", "Like they can release a 1e27 model by 2026?", "Dylan Patel", "And a 1e28 model in the works. They totally could do this just with foreign chip supply. It’s just a question of centralization. Then the question is, do you have as much innovation and compute efficiency wins when you centralize? Or do Anthropic, OpenAI, xAI, and Google develop things, and secrets shift a bit between each other, resulting in a better long-term outcome versus nationalization in the US?", "China could absolutely have it in 2026-27 if they desire to, just from foreign chips. Domestic chips are the other question. You have 600,000 of the Ascend 910B , which is roughly 400 teraflops or so. If they put them all in one cluster, they could have a bigger model than any of the labs next year. I have no clue where all the Ascend 910Bs are going, but there are rumors about them being divvied up between the majors, Alibaba, ByteDance, Baidu, etc. Next year, you have more than a million.", "It's possible they actually have 1e30 before the US because data center isn't as big an issue. A 10 gigawatt data center… I don't think anyone is even trying to build that today in the US, even out to 2027-28. They're focusing on linking many data centers together.", "There's a possibility that come 2028-2029, China can have more flops delivered to a single model, even once the centralization question is solved. That's clearly not happening today for either party. I'd bet if AI is as important as you and I believe, they will centralize sooner than the West does. So there is a possibility.", "Dwarkesh Patel", "How many more wafers could they make and how many of those wafers could be dedicated to the 910B? I assume there's other things they want to do with these semiconductors.", "Dylan Patel", "There's two parts there. The way the US has sanctioned SMIC is really stupid. They've sanctioned a specific spot rather than the entire company. SMIC is still buying a ton of tools that can be used for their 7nm and their 5.5nm, or 6nm process, for the 910C which releases later this year. They can build as much of that as long as it's not in Shanghai. Shanghai has anywhere from 45 to 50 high-end immersion lithography tools. That’s what is believed by intelligence and many other folks.", "That roughly gives them as much as 60,000 wafers a month of 7 nanometer, but they also make their 14 nanometer in that fab. The belief is that they actually only have about 25,000-35,000 capacity of 7 nanometer wafers a month. Doing the math of the chip die size and all these things—Huawei also uses chiplets so they can get away with using less leading-edge wafers but then their yields are bad—you can roughly say something like 50 to 80 good chips per wafer with their bad yield.", "Dwarkesh Patel", "Why do they have bad yield?", "Dylan Patel", "Because it's hard.", "Jon Y", "Even if everyone knows the number, like say there’s a 1000 steps. Even if you're 98-99% for each, in the end you'll still get a 40% yield.", "Dylan Patel", "If it's six sigma of perfection and you have your 10,000 plus steps, you end up with yield that's still dog shit by the end.", "Jon Y", "That is a scientific measure, dog shit percent.", "Dylan Patel", "Yeah, as a multiplicative effect. Yields are bad because they have hands tied behind their back. They are not getting to use EUV. On 7 nanometer Intel never used EUV, but TSMC eventually started using EUV. Initially, they used DUV .", "Dwarkesh Patel", "Doesn't that mean the export controls succeeded? They have bad yield because they have to use...", "Dylan Patel", "It’s a brand new process.", "Jon Y", "Again, they're still determined. Success means they stop. They're not stopping.", "Dylan Patel", "Let’s go back to the yield question. It’s theoretically 60,000 wafers a month times 50-100 dies per wafer with yielded dies. Holy shit. That's millions of GPUs. Now, what are they doing with most of their wafers? They still have not become scale pilled, so they're still throwing them out. Let's make 200 million Huawei phones, right? Okay, cool. I don't care.", "As the West, you don't care as much, even though Western companies will get screwed, like Qualcomm and MediaTek Taiwanese companies. Obviously there's that. The same applies to the US, but when you flip to like… Sorry, I don't fucking know what I was going to say.", "Jon Y", "Nailed it!", "Dwarkesh Patel", "We're keeping this in", "Dylan Patel", "That's fine, that's fine.", "Dwarkesh Patel", "In 2026 if they're centralized, they can have as big training runs as any one US company—", "Dylan Patel", "Oh, the reason why I was bringing up Shanghai. They're building 7nm capacity in Beijing. They're building 5nm capacity in Beijing, but the US government doesn't care. They're importing dozens of tools into Beijing and saying to the US government and ASML, “This is for 28nm , obviously.” In the background, they’re making 5nm there.", "00:32:51 – Huawei's intense culture", "Dwarkesh Patel", "Are they doing it because they believe in AI, or because they want to make Huawei phones?", "Dylan Patel", "Huawei was the largest TSMC customer for a few quarters before they got sanctioned. Huawei makes most of the telecom equipment in the world. Phones, modems, accelerators, networking equipment, video surveillance chips, you go through the whole gamut. A lot of that could use 7 and 5 nanometer.", "Jon Y", "Do you think the dominance of Huawei is actually bad for the rest of the Chinese tech industry?", "Dylan Patel", "Huawei is so cracked that it's hard to say that. Huawei out-competes Western firms regularly with two hands tied behind their back. What the hell are Nokia and Sony Ericsson ? They’re trash compared to Huawei. Huawei isn't allowed to sell to European or American companies and they don't have TSMC. Yet they still destroy them.", "The new phone is as good as a year-old Qualcomm phone on a process node that’s equivalent to something three or four years old. They actually out-engineered us with the worst process node. Huawei is crazy cracked.", "Jon Y", "Where do you think that culture comes from?", "Dylan Patel", "The military, because it's the PLA .", "Jon Y", "It's generally seen as an arm of the PLA. How do you square that with the fact that sometimes the PLA seems to mess stuff up?", "Dylan Patel", "Oh, like filling water in rockets ?", "Jon Y", "I don't know if that was true. I'm not denying it.", "Dylan Patel", "There is that crazy conspiracy… You don't know what the hell to believe in China, especially as a non-Chinese person.", "Jon Y", "Nobody knows, even Chinese people don't know what's going on in China.", "Dylan Patel", "There's all sorts of stuff like, \"Oh, they're filling water in their rockets, clearly they're incompetent.\" If I'm the Chinese military, I want the Western world to believe I'm completely incompetent because one day, I can just destroy everything with hypersonic missiles and drones. \"No, no, we're filling water in our missiles. These are all fake. We don't actually have a hundred thousand missiles that we manufacture in a super advanced facility and Raytheon is stupid as shit because they can't make missiles nearly as fast.”", "That’s also the flip side. How much false propaganda is there? There's a lot of, \"SMIC could never, they don't have the best tools.\" Then it's like, “Motherfucker, they just shipped 60 million phones last year with this chip that performs only one year worse than what Qualcomm has.” The proof is in the pudding. There's a lot of cope.", "Jon Y", "I just wonder where that culture comes from. There's something crazy about them. Everything they touch, they seem to succeed in. I wonder why.", "Dylan Patel", "They're making cars. I wonder what’s going on there.", "Jon Y", "If we imagine historically… Do you think they're getting something from somewhere?", "Dylan Patel", "Espionage, you mean? Obviously.", "Jon Y", "East Germany and the Soviet industry was basically a conveyor belt of secrets coming in, and they used that to run everything. But the Soviets were never good at it. They could never mass produce it. But now you have China.", "Dwarkesh Patel", "How would espionage explain how they can make things with different processes?", "Dylan Patel", "It’s not just espionage. They’re just literally cracked.", "Jon Y", "That’s why. It has to be something else.", "Dylan Patel", "They have the espionage without a doubt. ASML is known to have been hacked at least a few times. People have been sued who made it to China with a bunch of documents. It’s not just ASML, but every company in the supply chain. Cisco code was literally in early Huawei routers . You go down the list…", "But architecturally, the Ascend 910B looks nothing like a GPU or TPU . It is its own independent thing. Sure, they probably learned some things from some places, but they're good at engineering.", "Jon Y", "It's 9-9-6 . Wherever that culture comes from, they do good. They do very good.", "Dwarkesh Patel", "Another thing I'm curious about is where that culture comes from, but also how it stays there. With American firms or any other firm, you can have a company that's very good, but over time it gets worse, like Intel or many others. I guess Huawei just isn't that old, but it's hard to be a big company and stay good.", "Jon Y", "That is true. A word I hear a lot regarding Huawei is \"struggle.\" China has a culture where the Communist Party is big on struggle. Huawei brought that culture into their way of doing things. You said this before, right? They go crazy because they think in five years they're going to fight the United States. Everything they do, every second it’s like their country depends on it.", "Dylan Patel", "It's the Andy Grove -ian mindset. Shout out, the based Intel of Andy Grove. Only the paranoid survive . Paranoid Western companies do well. Why did Google really screw the pooch on a lot of stuff and then resurge now? It’s because they got paranoid as hell. If Huawei is just constantly paranoid about the external world and thinking, \"Oh fuck, we're gonna die. They're gonna beat us. Our country depends on it. We’re going to get the best people from the entire country at whatever they do…”", "Jon Y", "And tell them, “If you do not succeed, our country will die. Your family will be enslaved. It will be terrible.”", "Dylan Patel", "“By the evil western pigs.”", "Jon Y", "“Capitalist…” Or not capitalist, they don't say that anymore. It's more like, “Everyone is against China. China is being defiled. That is all on you, bro. If you can’t do that…”", "Dylan Patel", "“If you can't get that radio to be slightly less noisy and transmit 5% more data, we are fucked.”", "Jon Y", "“It's like the great palace fire all over again. The British are coming and they will steal all the trinkets. That's on you.”", "00:38:51 – Why the semiconductor industry is so stratified", "Dwarkesh Patel", "Why isn't there more vertical integration in the semiconductor industry? Why is it like, “This subcomponent requires this subcomponent from this other company, which requires this subcomponent from this other company…” Why isn’t more of it done in-house?", "Dylan Patel", "The way to look at it today is that it’s super stratified. Every industry has anywhere from one to three competitors. The most competitive it gets is like 70% share, 25% share, 5% share, in any layer of manufacturing chips, anything, chemicals, different types of chips, etc. It used to be vertically integrated.", "Jon Y", "At the very beginning it was integrated.", "Dwarkesh Patel", "Why did that stop?", "Jon Y", "You had companies that used to do it all in one. Then suddenly a guy would be like, “I hate this. I know how to do it better.” He’d spin off, do his own thing, start his own company, then go back to his old company and say, “I can sell you a product that’s better.” That's the beginning of what we call the semiconductor equipment industry .", "Dylan Patel", "In the seventies, everyone made their own equipment.", "Jon Y", "Sixties and seventies. All these people spin off. What happened was that the companies that accepted outside products and equipment got better stuff and did better. There were companies that were totally vertically integrated for decades. They are still good, but nowhere near competitive.", "Dwarkesh Patel", "One thing I'm confused about is the actual foundries themselves , there's fewer and fewer of them every year. There's maybe more companies overall, but fewer final wafer makers. It's similar to AI foundation models where you need revenues from a previous model. You need market share to fund the next round of ever more expensive development.", "Jon Y", "When TSMC launched the foundry industry, there was a wave of Asian companies that funded semiconductor foundries. Malaysia with Silterra , Singapore with Chartered , one from Hong Kong…", "Dylan Patel", "A bunch in Japan.", "00:40:58 – N2 should not exist", "Jon Y", "A bunch in Japan. They all did this thing. When going to leading-edge, it got harder, which means you had to aggregate more demand from all the customers to fund the next node. Technically you’re aggregating all this profit to fund the next node to the point where now there's no room in the market for an N2 or N3 . You could argue that economically, N2 is a monstrosity that doesn't make sense. It should not exist without the immense concentrated spend of like five players in the market.", "Dylan Patel", "I'm sorry to completely derail you, but there's this video where it's like, \"This is an unholy concoction of meat slurry.\"", "Jon Y", "Yes!", "Dwarkesh Patel", "What?", "Dylan Patel", "There’s this video that’s like, “Ham is disgusting. It’s an unholy concoction of meat with no bones or collagen.” I don’t know. The way he was describing 2nm is like that.", "Jon Y", "It’s like the guy who pumps his right arm so much. He's super muscular. \"The human body was not meant to be so muscular!\"", "Dwarkesh Patel", "What’s the point? Why is 2 nanometer not justified?", "Jon Y", "I'm not saying it for N2 specifically, but N2 as a concept. The next node should technically... There will come a point where economically, the next node will not be possible at all.", "Dylan Patel", "Unless more technology spawned, like AI now makes one nanometer or whatever, A16 , viable.", "Dwarkesh Patel", "Makes it viable in what sense? It makes it worth it?", "Jon Y", "Money. Money.", "Dylan Patel", "Every two years you get a shrink, like clockwork, Moore's law . Then five nanometer happened. It took three years. Holy shit. Then three nanometer happened. It took three years. Is Moore's law dead? Because TSMC didn't... and then what did Apple do? When three nanometer finally launched, Apple only moved half of the iPhone volume to three nanometer. Now they did a fourth year of five nanometer for a big chunk of iPhones. Is the mobile industry petering out?", "Then you look at two nanometer and it's going to be similarly difficult for the industry to pay for this. Apple because they get to make the phone, they have so much profit they can funnel into more and more expensive chips. But finally that was really running out. How economically viable is 2nm just for one player, TSMC? Ignore Intel, ignore Samsung. Samsung is paying for it with memory, not with their actual profit. Intel is paying for it from their former CPU monopoly…", "Jon Y", "Private equity money, chips money, debt, and the salaries of laid-off people.", "Dylan Patel", "There's a strong argument that funding the next node wouldn't be economically viable anymore if it weren’t for AI taking off and generating humongous demand for the most leading-edge chip.", "Dwarkes Patel", "How big is the difference between 7nm to 5nm to 3nm? Is it a huge deal in terms of who can build the biggest cluster?", "Dylan Patel", "There's this simplistic argument that moving a process node only saves X percent in power. That has been petering out. When you moved from 90 nanometer to 80 or 70 something, you got 2x. Dennard scaling was still intact. But now when you move from 5 nanometer to 3 nanometer, you don't double density. SRAM doesn't scale at all. Logic does scale, but it's like 30%. All in all, you only save about 20% in power per transistor .", "But because of data locality and movement of data, you actually get a much larger improvement in power efficiency by moving to the next node than just the individual transistors' power efficiency benefit. For example, if you're multiplying a matrix that's 8,000 by 8,000 by 8,000, you can't fit that all on one chip. But if you could fit more and more, you have to move off chip less, go to memory less, etc. The data locality helps a lot too.", "AI really, really wants new process nodes because power usage is a lot less, you get higher density and higher performance. The big deal is if I have a gigawatt data center, how much more flops can I get? If I have a two gigawatt data center, how much more flops can I get? If I have a ten gigawatt data center, how much more flops can I get? You look at the scaling and everyone needs to go to the most recent process node as soon as possible.", "Dwarkesh Patel", "I want to ask a normie question… I won’t phrase it that way.", "Jon Y", "Not for you nerds.", "Dylan Patel", "I think Jon and I could communicate to the point where you even wouldn't know what we're talking about.", "00:45:53 – Taiwan invasion hypothetical", "Dwarkesh Patel", "Suppose Taiwan is invaded or Taiwan has an earthquake. Nothing is shipped out of Taiwan from now on. What happens next? How would the rest of the world feel its impact a day in, a week, a month in, a year in?", "Jon Y", "It's a terrible thing to talk about. Can you just say it’s all just terrible? It's not just leading-edge. People will focus on leading-edge, but there's a lot of trailing-edge stuff that people depend on every day. We all worry about AI. The reality is you're not going to get your fridge. You're not going to get your cars. You're not going to get everything. It's terrible. Then there's the human part of it. It's all terrible. It's depressing. And I live there.", "Dylan Patel", "Day one, the market crashes a lot. The six or seven biggest companies, the Magnificent Seven , are like 60-75% of the S&P 500 and their entire business relies on chips. Google, Microsoft, Apple, Nvidia, Meta, they all entirely rely on AI. You would have an extremely insane tech reset.", "So the market would crash. A couple weeks in, people are preparing now. People are like, \"Oh shit, let's start building fabs. Fuck all the environmental stuff.\" War's probably happening. The supply chain is trying to figure out what the hell to do to refix it.", "Six months in, the supply of chips for making new cars is gone or sequestered to make military shit. You can no longer make cars. We don't even know how to make non-semiconductor induced cars, this unholy concoction with all these chips.", "Jon Y", "Cars are like 40% chips now. There are chips in the tires.", "Dylan Patel", "There's like 2,000+ chips in every car. Every Tesla door handle has like four chips in it. It’s like, “What the fuck?” Why? It’s like shitty microcontrollers and stuff but there’s 2000+ chips even in an internal combustion engine vehicle. Every engine has dozens and dozens of chips.", "Anyways, this all shuts down. Not all of the production, there's some in Europe, some in the US, some in Japan, some in Singapore.", "Jon Y", "In Europe they're going to bring in a guy to work on Saturday, until four.", "Dylan Patel", "Yeah. TSMC always builds new fabs. They tweak production up in old fabs. New designs move to the next nodes and old stuff fills in the old nodes. Ever since TSMC has been the most important player… It’s not just TSMC, there's UMC there, PSMC , and other companies. Taiwan's share of total manufacturing has grown every single process node.", "In 130 nanometers , there's a lot. That’s including many chips from Texas Instruments , Analog Devices , or NXP . 100% of it is manufactured in Taiwan by PSMC, TSMC, UMC or whatever. But then you step forward to 28 nanometer , 80% of the world's production is in Taiwan. Oh fuck, right? What’s made on 28 nanometer today? It’s tons of microcontrollers and stuff but also every display driver I see. Cool, even if I could make my Mac chip , I can’t make the chip that drives the display.", "You just go down the list. That means no fridges, no automobiles, no weed whackers, because that stuff has chips. My toothbrush has Bluetooth in it. Why? I don’t know. There are so many things that would just go poof. We'd have a tech reset.", "00:49:21 – Mind-boggling complexity of semiconductors", "Dwarkesh Patel", "We were supposed to do this interview many months ago. I kept delaying because I was like, “Ah, I don't understand any of this shit.” But it is a very difficult thing to understand. Whereas with AI, it’s like…", "Dylan Patel", "You’ve just spent the time to—", "Dwarkesh Patel", "Sure but it feels like the kind of thing where you can pick up what's going on in the field in an amateur kind of way. In this field, I'm curious about how one learns the layers of the stack. It's not just papers online. You can't just look up a tutorial on how the transformer works. It's many layers of really difficult shit.", "Dylan Patel", "There are 18-year-olds who are cracked at AI already. There are high school dropouts who get jobs at OpenAI. This existed in the past. Pat Gelsinger , the current CEO of Intel, grew up in the Amish area of Pennsylvania. He went straight to work at Intel because he's just cracked. That's not possible in semiconductors today. You can't get a job at a tool company without at least a master's in chemistry, probably a PhD. Of the 75,000 TSMC workers, like 50,000 have a PhD or something insane. There’s a next-level amount of how specialized everything's gotten.", "Whereas today, you can take someone like Sholto , who started working on AI not that long ago.", "Jon Y", "Not to say anything bad about Sholto.", "Dylan Patel", "No, he’s cracked. He’s omega-cracked at what he does. You could drop him into another part of the AI stack. He understands it already and could probably become cracked at that too. That's not the case in semiconductors. You specialize like crazy and can't just pick it up. Sholto, what did he say… He just started—", "Dwarkesh Patel", "He was a consultant at McKinsey, and at night he would read papers about robotics and run experiments.", "Dylan Patel", "And then people noticed him and were like, \"Who is this guy? I thought everyone who knew about this was at Google already. Come to Google.\" That can't happen in semiconductors. It's just not possible. ArXiv is a free thing. The paper publishing industry is abhorrent everywhere else. You can't just download IEEE papers or SPIE papers or from other organizations.", "At least up until late 2022 or early 2023 with Google's PaLM inference paper , all the best stuff was just posted on the internet. After that, there was a little bit of clamping down by the labs, but there are also still all these companies making innovations in the public. What is state-of-the-art is public. That is not the case in semiconductors.", "Jon Y", "Semiconductors have been shut down since the 1970s basically. It's crazy how little information has been formally transmitted from one country to another. The last time you could really think of this was maybe the Samsung era.", "Dwarkesh Patel", "So then how do you guys keep up with it?", "Jon Y", "We don't know it. I don't personally think I know it.", "Dwarkesh Patel", "If you don’t know it, what are you making videos about?", "Jon Y", "I spoke to one guy. He’s a PhD in etch, one of the top people in etch. He’s like, “Man, you really know lithography.” I don't feel like I know lithography. But then you talk to the people who know lithography and they’re like, “You’ve done pretty good work in packaging.” Nobody knows anything.", "Dwarkesh Patel", "They all have Gell-Mann amnesia ?", "Jon Y", "They’re all in this single well. They’re digging deep for what they’re getting at. They don’t know the other stuff well enough. In some ways, nobody knows the whole stack.", "Dylan Patel", "The stratification of just manufacturing is absurd. The tool people don't even know exactly what Intel and TSMC do in production, and vice versa, they don’t know exactly how the tool is optimized like this. How many different types of tools are there? Each of those has an entire tree of all the things we've built, invented, and continue to iterate upon, and then there’s the breakthrough innovation that happens every few years in it too.", "Dwarkesh Patel", "If that’s the case and nobody knows the whole stack, how does the industry coordinate? “In two years we want to go to the next process which has gate all-around and for that we need X tools and X technologies developed…”", "Jon Y", "It's a fascinating social phenomenon. You can feel it. I went to Europe earlier this year. Dylan had allergies. I was talking to those people. It's like gossip. You start feeling people coalescing around something. Early on, we used to have SEMATECH where American companies came together and talked and hammered it out. But in reality it was dominated by a single company. Nowadays it's more dispersed. It's a blue moon arising kind of thing. They are going towards something. They know it. Suddenly, the whole industry suddenly is like, \"This is it. Let's do it.\"", "Dylan Patel", "It's like God came and proclaimed it: “We will shrink density 2x every two years.” Gordon Morris made an observation. It didn’t go nowhere. It went way further than he ever expected because it’s like, “There’s a line of sight to get to here and here.” He predicted 7-8 years out, multiple orders of magnitude of increases in transistors and it came true. But by then, the entire industry was like, \"This is obviously true. This is the word of God.\" Every engineer in the entire industry, tens of millions of people, were driven to do this.", "Now not every single engineer believed it but people were like, \"Yes, to hit the next shrink, we must do this, this, this and these are the optimizations we make.\" You have this stratification, every single layer and abstraction layers through the entire stack. It's an unholy concoction. No one knows what's going on because there's an abstraction layer between every single layer. On this layer, the people below you and above you know what's going on. Beyond that, you can try to understand, but not really...", "Dwarkesh Patel", "I watched a video about IRDS or whatever, 10 or 20 years ago, where they're like, \"We're going to do EUV instead of the other thing. This is the path forward.\" How do they do that if they don't have the whole picture of different constraints, trade-offs, and so on?", "Jon Y", "They kind of argue it out. They get together and they talk and argue. Basically, at some point, a guy somewhere says, \"I think we can move forward with this.\"", "Dylan Patel", "Semiconductors are so siloed. The data and knowledge within each layer is: A) Not documented online at all because it's all siloed within companies. B) There's a lot of human element to it because a lot of the knowledge, as Jon was saying, is apprentice-master type of knowledge. Or it's \"I've been doing this for 30 years,\" and there's an amazing amount of intuition on what to do just when you see something.", "AI can't just learn semiconductors like that. But at the same time, there's a massive talent shortage and ability to move forward on things. Most of the equipment in semiconductor fabs runs on Windows XP . Each tool has a Windows XP server on it. All the chip design tools have CentOS version 6, which is old as hell. There are so many areas where it's so far behind. At the same time, it's so hyper-optimized that the tech stack is broken in that sense.", "Jon Y", "They're afraid to touch it.", "Dylan Patel", "Yeah, because it's an unholy amalgamation.", "Jon Y", "This thing should not work. It's literally a miracle.", "Dylan Patel", "So you have all the abstraction layers. One, there's a lot of breakthrough innovation that can happen now stretching across abstraction layers. Two, because there's so much inherent knowledge in each individual one, what if I can just experiment and test at a 1000x or 100,000x velocity?", "Some examples of where this is already shown true are some of NVIDIA's AI layout tools, and Google as well, laying out the circuits within a small blob of the chip with AI. Some of these RL design things, various simulation things...", "Jon Y", "But is that design or is that manufacturing?", "Dylan Patel", "It's all design, most of it is design. Manufacturing has not really seen much of this yet, although it's starting to come in.", "Jon Y", "Inverse lithography , maybe.", "Dylan Patel", "ILT and… maybe. I don't know if that's AI. Anyway, there's a tremendous opportunity to bring breakthrough innovation simply because there are so many layers where things are unoptimized. You see all these single-digit to low double-digit advantages just from RL techniques from AlphaGo -type stuff, or not AlphaGo but 5-8 year-old RL techniques being brought in. Generative AI being brought in could really revolutionize the industry, although there's a massive data problem.", "Dwarkesh Patel", "Can you give the possibilities here in numbers in terms of maybe like a FLOP per dollar or whatever the relevant thing is? How much do you expect in the future to come from process node improvements? How much from just how the hardware is designed because of AI? We're talking specifically for GPUs. If you had to disaggregate future improvements.", "00:59:13 – Chip architecture design", "Dylan Patel", "It's important to state that semiconductor manufacturing and design is the largest search space of any problem that humans do because it is the most complicated industry that humans do. When you think about it, there are 1e10, 1e11, 100 billion transistors, on leading-edge chips. Blackwell has 220 billion transistors or something like that.", "Those are just on-off switches. Think about every permutation of putting those together, contact, ground, drain source, with wires. There are 15 metal layers connecting every single transistor in every possible arrangement. This is a search space that is literally almost infinite. The search space is much larger than any other search space that humans know of.", "Dwarkesh Patel", "And what is the nature of the search? What are you trying to optimize over?", "Dylan Patel", "Useful compute. If the goal is to optimize intelligence per picojoule—and intelligence is some nebulous nature of what the model architecture is, and picojoule is a unit of energy—how do you optimize that?", "There are humongous innovations possible in architecture because the vast majority of the power on an H100 does not go to compute. There are more efficient ALU (Arithmetic Logic Unit) designs. But even then, the vast majority of the power doesn't go there. The vast majority of the power goes to moving data around. When you look at what the movement of data is, it’s either networking or memory.", "You have a humongous amount of movement relative to compute and a humongous amount of power consumption relative to compute. So how can you minimize that data movement and maximize the compute? There are 100x gains possible from architecture. Even if we literally stopped shrinking, we could have 100x gains from architectural advancements.", "Dwarkesh Patel", "Over what time period?", "Dylan Patel", "The question is how much can we advance the architecture. The other challenge is that the number of people designing chips has not necessarily grown in a long time. Company to company, it shifts, but within the semiconductor industry in the US—the US designs the vast majority of leading-edge chips—the number of people designing chips has not grown much.", "What has happened is the output per individual has soared because of EDA (Electronic Design Assistance) tooling. Now this is all still classical tooling. There's just a little bit of AI in there yet. The question is what happens when we bring this in and how you can solve this search space somehow, with humans and AI working together to optimize this so most of the power doesn’t just go to data movement and the compute is actually very small. The compute can get like 100x more efficient just with design changes, and then you could minimize that data movement massively. You can get a humongous gain in efficiency just from architecture itself.", "Process node helps you innovate that there. Power delivery helps you innovate that. System design, chip-to-chip networking helps you innovate that. Memory technologies, there's so much innovation there. There are so many different vectors of innovation that people are pursuing simultaneously.", "NVIDIA gen to gen to gen will do more than 2x performance per dollar. I think that's very clear. Hyperscalers are probably going to try and shoot above that, but we'll see if they can execute.", "Dwarkesh Patel", "There are two narratives you can tell here of how this happens. One is that these AI companies training the foundation models understand the trade-offs of. How much is the marginal increase in compute versus memory worth to them? What trade-offs do they want between different kinds of memory? They understand this, so the accelerators they build can make these trade-offs in a way that's most optimal. They can also design the architecture of the model itself in a way that reflects the hardware trade-offs.", "Another is NVIDIA. I don’t know how this works. Presumably, they have some sort of know-how. They're accumulating all this knowledge about how to better design this architecture and also better search tools. Who has the better moat here? Will NVIDIA keep getting better at design, getting this 100x improvement? Or will it be OpenAI and Microsoft and Amazon and Anthropic who are designing their accelerators and will keep getting better at designing the accelerator?", "Dylan Patel", "There are a few vectors to go here. One you mention is important to note. Hardware has a huge influence on the model architecture that's optimal. It's not a one-way street that better chip equals... The optimal model for Google to run on TPUs, given a certain amount of dollars and compute, is different architecturally than what it is for OpenAI with NVIDIA stuff. It's absolutely different. Even down to networking decisions and data center designs that different companies make, the optimal solution—X amount of compute of TPU vs. GPU compute optimally—will diverge in what the architecture is. That's important to note.", "01:04:36 – Architectures lead to different AI models? China vs. US", "Dwarkesh Patel", "Can I ask about that real quick? Earlier we were talking about how China has the H20s or B20s , and there's much less compute per memory bandwidth than the amount of memory. Does that mean that Chinese models will actually have very different architecture and characteristics than American models in the future?", "Dylan Patel", "You can take this to a very large leap and say, \"Oh, neuromorphic computing or whatever is the optimal path and that looks very different than what a transformer does.” Or you could take it to a simple thing, the level of sparsity and coarse-grain sparsity, experts , and all this sort of stuff. You have the arrangement of what exactly the attention mechanism is, because there are a lot of tweaks. It's not just pure transformer attention.", "Or how wide versus tall the model is, that's very important. D-mod versus number of layers. These are all things that would be different. I know they're different between, say, Google and OpenAI and what is optimal. But it really starts to get like, \"Hey, if you were limited on a number of different things like...\" China invests hugely in compute and memory. The memory cell is directly coupled or is the compute cell. These are things that China's investing hugely in. You go to conferences and there are like 20 papers from Chinese companies/universities about compute and memory.", "Because the FLOP limitation is here, maybe NVIDIA pumps up the on-chip memory and changes the architecture because they still stand to benefit tens of billions of dollars by selling chips to China. Today, it's just neutered American chips that go to China. But it'll start to diverge more and more architecturally because they'd be stupid not to make chips for China.", "Huawei , obviously, has their constraints. Where are they limited on memory? Oh, they have a lot of networking capabilities and they could move to certain optical networking technologies directly onto the chip much sooner than we could. Because that is what's optimal for them within their search space of solutions, because this whole area is blocked off.", "Jon Y", "It's really interesting to think about how the development of Chinese AI models will differ from American AI models because of these changes or constraints.", "Dylan Patel", "It applies to use cases. It applies to data. American models are very focused on learning from you, being able to use you directly as a random consumer. That's not the case for Chinese models, I assume. There are probably very different use cases for them. China crushes the West at video and image recognition.", "At ICML , Albert Gu of Cartesia, who invented state space models , was there. Every single Chinese person was like, \"Can I take a selfie with you?\" The man was harassed. In the US, you see Albert and it's awesome, he invented state space models. It's not like state space models are revered here. But that's because state space models potentially have a huge advantage in video, image, and audio, which is stuff that China does more of and is further along in and has better capabilities in.", "Jon Y", "Because of all the surveillance cameras there.", "Dylan Patel", "Yeah. That's the quiet part out loud. But there's already divergence in capabilities there. If you look at image recognition, China destroys American companies on that because of the surveillance.", "You have this divergence in tech tree and people can start to design different architectures within the constraints they're given. Everyone has constraints, but the constraints different companies have are even different. Google's constraints have shown them that they built a genuinely different architecture. But now if you look at Blackwell and what's said about TPU v6, they're not exactly converging but they are getting a little bit closer in terms of how big the MatMul unit size is and some of the topology and world size of the scale-up versus scale-out network. There is some convergence slightly. I’m not saying they're similar yet, but they're starting to. Then there are different architectures that people could go down. You see stuff from all these startups that are trying to go down different tech trees because maybe that'll work.", "There's a self-fulfilling prophecy here too. All the research is in transformers that are very high arithmetic intensity because the hardware we have is very high arithmetic intensity and transformers run really well on GPUs and TPUs. You sort of have a self-fulfilling prophecy. If all of a sudden you have an architecture which is theoretically way better, but you can get only like half of the usable FLOPs out of your chip, it's worthless because even if it's a 30% compute efficiency win, it's half as fast on the chip. There are all sorts of trade-offs and self-fulfilling prophecies of what path people go down.", "01:10:12 – Being head of compute at an AI lab", "Dwarkesh Patel", "If you were made head of compute of a new AI lab, if Ilya Sutskever ’s new lab SSI came to you and they're like, \"Dylan, we give you $1 billion. You’re our head of compute. Help us get on the map. We're going to compete with the frontier labs.\" What is your first step?", "Dylan Patel", "Okay. The constraints are that you're a US/Israeli firm because that's what SSI is. Your researchers are in the US and Israel. You probably can't build data centers in Israel because power is expensive as hell and it's probably risky. So it’s still in the US most likely. Most of the researchers are here in the US, in Palo Alto or wherever.", "You need a significant chunk of compute. Obviously, the whole pitch is you're going to make some research breakthrough, compute efficiency, data efficiency, or whatever it is. You’re going to make some research breakthroughs but you need compute to get there. Your GPUs per researcher is your research velocity. Obviously, data centers are very tapped out. Maybe not tapped out but in terms of every new data center coming up, most of them have been sold. That’s led people like Elon to go through this insane thing in Memphis . I'm just trying to square the circle.", "Dwarkesh Patel", "On that question, I kid you not, in my group house group chat, there have been two separate people who have been like, \"I have a cluster of H100s and I have a long lease on them, but I'm trying to sell them off.\" Is it like a buyer's market right now? Because it does seem like people are trying to get rid of them.", "Dylan Patel", "For the Ilya question, a cluster of like 256 GPUs or even 4K GPUs is kind of cope. It's not enough. Yes, you're going to make compute efficiency wins, but with a billion dollars you probably just want the biggest cluster in one individual spot. Small amounts of GPUs are probably not possible to use for them, and that's what most of the sales are.", "You go and look at GPU List or Vast or Foundry or a hundred different GPU resellers, the cluster sizes are small. Now, is it a buyer's market? Yeah. Last year you would buy H100s for like $4 or $3 an hour for shorter-term or mid-term deals. Right now, if you want a six-month deal, you could get it for like $2.15 or less.", "The natural cost… If I have a data center and I'm paying standard data center pricing to purchase the GPUs and deploy them, it’s like $1.40. Add on the debt, because I probably took debt to buy the GPUs or you have cost of equity, the  cost of capital, it gets up to like $1.70 or something. You see deals that are… The good deals are like Microsoft renting from CoreWeave at like $1.90 to $2. People are getting closer and closer. But there's still a lot of profit. The natural rate even after debt and all this is $1.70. There’s still a lot of profit when people are selling in the low twos.", "GPU companies are deploying them, but it is a buyer's market in the sense that it's gotten a lot cheaper. Cost of compute is going to continue to tank. I don’t remember the exact name of the law. It's effectively Moore's Law. Every two years, the cost of transistors halved, and yet the industry grew. Every six months or three months, the cost of intelligence… OpenAI and GPT-4 in February 2023, was roughly $120 per million tokens. Now it's like $10. The cost of intelligence is tanking, partially because of compute, partially because of the model's compute efficiency wins. That's a trend we'll see. That's gonna drive adoption as you scale up and make it cheaper and scale up and make it cheaper.", "Dwarkesh Patel", "Right. Anyway, if you were head of compute at SSI…", "Dylan Patel", "Okay, I’m head of compute at SSI. There's obviously no free data center lunch, in terms of what we see in the data. There’s no free lunch if you need compute for a large cluster size, even six months out. There's some availability, but not a huge amount because of what X did.", "xAI is like, \"Oh shit, we're going to buy a Memphis factory, put a bunch of mobile generators usually reserved for natural disasters outside, add a Tesla battery pack, drive as much power as we can from the grid, tap the natural gas line that's going to the natural gas plant two miles away, the gigawatt natural gas plant, and just send it. Get a cluster built as fast as possible.\" Now you're running 100K GPUs. That costs about $4-5 billion, not $1 billion.", "The scale that SSC has is much smaller. Their cluster size will be maybe 1/3 or 1/4 of that size. So now you're talking about a 25K to 32K cluster. You still don't have that. No one is willing to rent you a 32K cluster today, no matter how much money you have. Even if you had more than a billion dollars.", "Now it makes the most sense to build your own cluster instead of renting, or get a very close relationship like OpenAI/Microsoft with CoreWeave or Oracle/Crusoe The next step is Bitcoin. OpenAI has a data center in Texas, or it's going to be their data center. It’s kind of contracted and all that from CoreWeave. There is a 300 megawatt natural gas plant on site, powering these crypto mining data centers from a company called Core Scientific . They're just converting that. There's a lot of conversion, but the power's already there. The power infrastructure is already there. It's really about converting it, getting it ready to be water-cooled, all that sort of stuff, and converting it to a 100,000 GB200 cluster.", "They have a number of those going up across the country, but that's also tapped out to some extent because NVIDIA is doing the same thing in Plano, Texas for a 32,000 GPU cluster that they're building.", "Dwarkesh Patel", "Is NVIDIA doing that?", "Dylan Patel", "Well, they're going through partners. Because this is the other interesting thing: the big tech companies can't do crazy shit like Elon did.", "Dwarkesh Patel", "Why?", "Dylan Patel", "ESG. They can't just do crazy shit like…", "01:16:24 – Scaling costs and power demand", "Dwarkesh Patel", "Oh that’s interesting. Do you expect Microsoft and Google and whoever to like drop their net zero commitments as the scaling picture intensifies?", "Dylan Patel", "Yeah. What xAI is doing isn't that polluting in the scheme of things, but it's you have 14 mobile generators and you're just burning natural gas on site on these mobile generators that sit on trucks. Then you have power directly two miles down the road. There's no way to say any of the power is green because up to two miles down the road is a natural gas plant as well.", "You go to the CoreWeave thing. There's a natural gas plant literally on site from Core Scientific and all that. Then the data centers around it are horrendously inefficient. There's this metric called PUE , which is basically how much power is brought in versus how much gets delivered to the chips. The hyperscalers, because they're so efficient, their PUE is like 1.1 or lower. I.e., if you get a gigawatt in, 900 megawatts or more gets delivered to chips. It’s not wasted on cooling and all these other things. This Core Scientific one is going to be like 1.5 or 1.6. I.e., even though I have 300 megawatts of generation on site, I only deliver like 180-200 megawatts to the chips.", "Dwarkesh Patel", "Given how quickly solar is getting cheaper… There’s also the fact that the reason solar is difficult elsewhere is you've got to power the homes at night. Here I guess it's theoretically possible to figure out only running the clusters in the day or something…", "Dylan Patel", "Absolutely not. That's not possible.", "Dwarkesh Patel", "Because it's so expensive to have these GPUs?", "Dylan Patel", "Yes. So when you look at the power cost of a large cluster, it's trivial to some extent. The meme that you can't build a data center in Europe or East Asia because the power is expensive, that's not really relevant. Or there’s the meme that power is so cheap in China and the US that those are the only places you can build data centers. That's not really the real reason. It's the ability to generate new power for these activities. That’s why it's really difficult, the economic regulation around that.", "But the real thing is if you look at the cost of ownership of an H100. Let's just say you gave me a billion dollars and I already have a data center, I already have all this stuff. I'm paying regular rates for the data centers, not paying through the nose or anything. Paying regular rates for power, not paying through the nose. Power is sub 15% of the cost. It's sub 10% of the cost actually. The biggest, like 75-80% of the cost, is just the servers.", "And this is on a multi-year basis, including debt financing, including cost of operation, all that. When you do a TCO (total cost of ownership), like 80% is the GPUs, 10% is the data center, 10% is the power, rough numbers. So it's kind of irrelevant how expensive the power is. You'd rather do what Taiwan does. What did they do when there were droughts? They forced people to not shower.", "Jon Y", "They basically reroute the power… When there was a power shortage in Taiwan, they basically rerouted power from the residential areas.", "Dylan Patel", "And this will happen in a capitalistic society as well, most likely because, “fuck you, you aren’t going to pay X dollars per kilowatt hour. To me, the marginal cost of power is irrelevant. Really it's all about the GPU cost and the ability to get the power. I don't want to turn it off eight hours a day.", "Dwarkesh Patel", "Let’s zoom out a bit. Let's discuss what would maybe happen if the training regime changes and if it doesn't change. You could imagine that the training regime becomes much more parallelizable where it's about coming up with some sort of search and most of the compute for training is used to come up with synthetic data or do some kind of search. That can happen across a wide area. In that world, how fast could we scale? Let's go through the numbers year after year.", "You would know more than me, but then suppose it has to be the current regime. Just explain what that would mean in terms of how distributed that would have to be and how plausible it is to get clusters of certain sizes over the next few years.", "Dylan Patel", "It's not too difficult for Ilya's company to get a cluster of like 32K of Blackwell next year.", "Dwarkesh Patel", "Forget about Ilya's company, let's talk about the clear players. Like 2025, 2026, 2027.", "Dylan Patel", "2025, 2026… Before I talk about the US, it's important to note that there's like a gigawatt plus of data center capacity in Malaysia next year now. That's mostly ByteDance. Power-wise there's also the humongous damming of the Nile in Ethiopia and the country uses like one-third of the power that that dam generates. There's like a ton of power there to…", "Dwarkesh Patel", "How much power does that dam generate?", "Dylan Patel", "It's like over a gigawatt. The country consumes like 400 megawatts or something trivial.", "Dwarkesh Patel", "Are people bidding for that power?", "Dylan Patel", "I think people just don't think they can build a data center in Ethiopia.", "Dwarkesh Patel", "Why not?", "Jon Y", "I don't think the dam is filled yet, is it?", "Dylan Patel", "No, the dam could generate that power. They just don't. There's a little bit more equipment required, but that's not too hard. Why don't they? There are true security risks. If you're China, or if you're a US lab, to build a data center with all your IP in Ethiopia… You want AGI to be in Ethiopia? You want it to be that accessible? People can't even monitor the technicians in the data center or powering the data center, all these things. There are so many things you could do... You could just destroy every GPU in a data center if you want if you just fuck with the grid, pretty easily, I think.", "Dwarkesh Patel", "People talk a lot about it in the Middle East.", "Dylan Patel", "There's a 100k GB200 cluster going up in the Middle East. The US is clearly doing stuff too. G42 is the UAE data center company, cloud company. Their CEO is a Chinese national, or not a Chinese national but there’s basically Chinese allegiance. OpenAI wanted to use a data center from them but instead… The US forced Microsoft—I feel like this is what happened— to do a deal with them so that G42 has a 100K GPU cluster, but Microsoft is administering and operating it for security reasons.", "There's Omniva in Kuwait, like the Kuwait super-rich guy spending five plus billion dollars on data centers. You just go down the list, all these countries. Malaysia has $10+ billion of AI data center build outs over the next couple of years. Go to every country, this stuff is happening.", "But in the grand scheme of things, the vast majority of the compute is being built in the US, then China, then Malaysia, Middle East, and the rest of the world. Let’s go back to your point. You have synthetic data. You have the search stuff. You have all these post-training techniques. You have all these ways to soak up flops, or you just figure out how to train across multiple data centers, which I think they have. At least, Microsoft and OpenAI have figured it out.", "Dwarkesh Patel", "What makes you think they figured it out?", "Dylan Patel", "Their actions. Microsoft has signed deals north of $10 billion with fiber companies to connect their data centers together. There are some permits already filed to show people are digging between certain data centers. We think, with fairly high accuracy, that there are five regions that they're connecting together, which comprises many data centers.", "Dwarkesh Patel", "What will be the total power usage of the...", "Dylan Patel", "Depends on the time, but easily north of a gigawatt.", "Dwarkesh Patel", "Which is like close to a million GPUs.", "Dylan Patel", "Well, each GPU is getting higher power consumption too. The rule of thumb is that a H100 is like 700 watts, but then total power per GPU all-in is like 1200-1400 watts. But next-generation NVIDIA GPUs are like 1200 watts for the GPU. It actually ends up being like 2000 watts all in. There's a little bit of scaling of power per GPU.", "You already have 100K clusters. OpenAI in Arizona , xAI in Memphis. Many others are already building 100K clusters of H100s. You have multiple, at least five, I believe GB200 100K clusters being built by Microsoft/OpenAItheir partners for them. It’s potentially even more. 500K GB200s is like a gigawatt and that's online next year.", "The year after that, if you aggregate all the data center sites, and how much power… You only look at net adds since 2022, instead of the total capacity at each data center, then you're still north of multi-gigawatt.", "They're spending north of $10+ billion dollars on these fiber deals with a few fiber companies: Lumen , Zayo , and a couple other companies. Then they've got all these data centers where they're clearly building 100K clusters, like old crypto mining sites with CoreWeave in Texas or this Oracle/Crusoe thing in Texas. You have them in Wisconsin and Arizona and a couple other places. There's a lot of data centers being built up. You have providers like QTS and Cooper and many other providers and self-build data centers, data centers I'm building myself.", "Dwarkesh Patel", "Let's just give the number to like, “Okay, it’s 2025 and Elon's cluster is going to be the biggest…” It doesn’t matter who it is.", "Dylan Patel", "There's the definition game. Elon claims he has the largest cluster at 100K GPUs because they're all fully connected.", "Dwarkesh Patel", "Rather than who it is, I just want to know how many… I don't know if it's better to denominate in H100s…", "Dylan Patel", "It’s 100k GPUs this year for the biggest cluster. Next year, 300-500k depending on whether it's one site or many. 300-700k I think is the upper bound of that. But it's about when they tier it on, when they can connect them, when the fiber's connected together. Let's say 300-500k but those GPUs are 2-3X faster versus the 100K cluster. So on an H100 equivalent basis, you're at a million chips next year in one cluster by the end of the year.", "Well, one cluster is the wishy-washy definition. It’s multisite, right? Can you do multisite? What's the efficiency loss when you go multisite? Is it possible at all? I truly believe so. What's the efficiency loss is the question.", "Dwarkesh Patel", "Would it be like 20% loss, 50% loss?", "Dylan Patel", "Great question. This is where you need the secrets. And Anthropic's got similar plans with Amazon and you go down the list.", "Dwarkesh Patel", "And then the year after that? This is 2026.", "Dylan Patel", "In 2026 there is a single gigawatt site. And that's just part of the multiple sites for Microsoft.", "Dwarkesh Patel", "The Microsoft five gigawatt thing happens in 2026?", "Dylan Patel", "One gigawatt, one site in 2026. But then you have a number of others. You have five different locations, some with multiple sites, some with single sites. You're easily north of 2-3 gigawatts.", "Then the question is, can you start using the old chips with the new chips? The flop scaling is going to continue much faster than people expect, as long as the money pours in. There's no way you can pay for the scale of clusters being planned to be built next year for OpenAI unless they raise like $50-100 billion, which I think they will raise late this year or early next year.", "Jon Y", "$50-100 billion? Are you kidding me?", "Dylan Patel", "No. Sam has a superpower. It's recruiting and raising money. That's what he's like a god at.", "Dwarkesh Patel", "Will chips themselves be a bottleneck to the scaling?", "Dylan Patel", "Not in the near term. It's more about concentration versus decentralization. The largest cluster is 100,000 GPUs. NVIDIA has manufactured close to 6 million Hoppers across last year and this year. So that's fucking tiny.", "Dwarkesh Patel", "But then why is Sam talking about $7 trillion to build foundries and whatever?", "Dylan Patel", "Draw the line. It’s like a log-log line. Numbers go up, right? If you do that, you're going from 100K to 300-500K, where the equivalent is a million. You just 10X year on year. Do that again, do that again, or more. If you increase the pace...", "Dwarkesh Patel", "What is \"do that again\"? So in 2026, the number of...", "Dylan Patel", "If you increase the globally produced flops by like 30x year on year or 10x year on year—and the cluster size grows by 3-7x and you start getting multi-site going better and better—you can get to the point where multi-million chip clusters, even if they're regionally not connected right next to each other, are right there.", "Dwarkesh Patel", "And in terms of flops it would be 1e… what?", "Dylan Patel", "I think 1e30 is very possible in 2028 or 2029.", "Dwarkesh Patel", "Wow. Okay you’re saying 1e30 you said by 2028-29. That is literally six orders of magnitude. That's like 100,000x more compute than GPT-4.", "Dylan Patel", "Yes. The other thing to say is the way you count flops on a training run is really stupid. You can't just do like active parameters x tokens x six. That's really dumb because the paradigm—as you mentioned, and you’ve had many great podcasts on this stuff—it's like synthetic data and RL stuff, post-training, verifying data, all these things generating and throwing it away, search, inference time compute. All these things aren't counted in the training flops.", "So you can't say 1e30 is a really stupid number to say because by then the actual flops of the pre-training may be X, but the data to generate for the pre-training may be way bigger, or the search inference time may be way, way bigger.", "Dwarkesh Patel", "Right. Also, because you're doing adversarial synthetic data where the thing you're weakest at, you can make synthetic data for that, it might be way more sample efficient . So even though…", "Dylan Patel", "Pre-training flops will be irrelevant. I actually don't think pre-training flops will be 1e30. I think more reasonably, it will be like the total summation of the flops that you deliver to the model across pre-training, post-training, synthetic data for that pre-training and post-training data, as well as some of the inference time compute efficiencies. It's more like 1e30 in total.", "Dwarkesh Patel", "Suppose you really do get to the world where it's worth investing…. Actually, if you're doing 1e30, is that like a trillion dollar cluster, hundred billion dollar cluster?", "Dylan Patel", "It'll be like multi-hundred billion dollars, but I truly believe people are going to be able to use their prior generation clusters alongside their new generation clusters. Obviously it’ll be smaller batch sizes or whatever, or use that to generate and verify data, all these sorts of things.", "Dwarkesh Patel", "And then for 1e30… Right now, I think 5% of TSMC's N5 is NVIDIA or whatever percent it is. By 2028, what percentage will it be?", "Dylan Patel", "Again, this is a question of how scale-pilled you are and how much money will flow into this and how you think progress works. Will models continue to get better or does the line slope over? I believe it'll continue to skyrocket in terms of capability. In that world—not of 5 nanometer, but of 2 nanometer, A16, A14, these are the nodes that'll be in that timeframe of 2028—used for AI, I could see it being like 60-80% of it. No problem.", "Dwarkesh Patel", "Given the fabs that are currently planned and being built, is that enough for the 1e30, or will we need more?", "Dylan Patel", "I think so, yeah.", "Dwarkesh Patel", "Okay, so then the chip goal doesn't make any sense.The chip goal stuff about how we don't have enough compute…", "Dylan Patel", "No, I think the plans of TSMC on two nanometer and such are quite aggressive for a reason. To be clear, Apple, which has been TSMC's largest customer, does not need how much 2nm capacity they're building. They will not need A16 , they will not need A14 . Apple doesn't need this shit. Although they did just hire Google’s head of system design for TPU. So they are going to make an AI accelerator. But that’s besides the point. Apple doesn’t need this for their business. They have been 25% or so of TSMC's business for a long time. And when you zone in on just the leading-edge, they've been like more than half of the newest node or 100% of the newest node almost constantly. That paradigm goes away.", "Let’s say you believe in scaling and you believe the models get better, that the new models will generate amazing productivity gains for the world and so on. If you believe in that world, then TSMC needs to act accordingly and the amount of silicon that gets delivered needs to be there.", "So in 2025 and 2026, TSMC is definitely there. Then on a longer timescale, the industry can be ready for it, but it's going to be a constant game of convincing them constantly that they must do this. It's not a simple game. If people work silently, it's not going to happen. They have to see the demonstrated growth over and over and over again across the industry.", "Dwarkesh Patel", "Who will need to see? Investors or companies?", "Dylan Patel", "More so, TSMC needs to see NVIDIA volumes continue to grow straight up and Google's volumes continue to grow straight up, and so on down the list. Chips in the near term, next year for example, are less of a constraint than data centers. It’s likewise for 2026. The question for 2027-28… Always when you grow super rapidly, people want to say that's the one bottleneck. Because that's the convenient thing to say.", "In 2023 there was a convenient bottleneck, CoWoS (Chip on Wafer on Substrate). The picture has gotten much cloudier. Not clouder but we can see that HBM is a limiter too. CoWoS is as well, CoWoS-L especially. You have data centers, transformers, substations, power generation, batteries, UPSs, CRHs, water cooling stuff. All of this stuff is now a limitation next year and the year after. Fabs will be in 2026-27. Things will get cloudy because the moment you unlock one… Only 10% higher, the next one is the thing. Only 20% higher, the next one is the thing.", "Dylan Patel", "Today, data centers are like 4-5% of total US power consumption. When you think about it as a percentage of US power, that's not that much. But on the flip side you also consider that all this coal has been curtailed and all these other things. So power is not that crazy on a national basis. On a localized basis, it is, because it's about the delivery of it.", "It’s the same with the substation transformer supply chains. These companies have operated in an environment where the US power demand has been flat or even slightly down because of efficiency gains. There has been humongous weakening of the industry. Now all of a sudden if you tell that industry, \"Your business will triple next year if you can produce more.\" They can only produce 50% more. Okay, fine. Year after that, now we can produce 3x as much.", "You do that to the industry, the US industrial base as well as the Japanese, all across the world can get revitalized much faster than people realize. I truly believe that people can innovate when given the need to. It's one thing if it's a shitty industry where my margins are low and we're not growing really. All of a sudden it’s like, \"Oh, this is the sexiest time to be alive in power. We're going to do all these different plans and projects and people have all this demand. They're begging me for another percent of efficiency advantage because that gives them another percent to deliver to the chips.”", "You see all these things happen and innovation is unlocked. You also bring in AI tools, you bring in all these things, innovation will be unlocked. Production capacity can grow, not overnight, but it will in 6 months, 18 months, 3 year timescales. It will grow rapidly. You see the revitalization of these industries.", "Getting people to understand that, getting people to believe… because you know, if we pivot to like… Yeah, I'm telling you that Sam's going to raise $50-100 billion dollars because he's telling people he's going to raise this much. He’s literally having discussions with sovereigns and Saudi Arabia and the Canadian pension fund and the biggest investors in the world. Of course, Microsoft as well, but he's literally having these discussions because they're going to drop their next model or they're going to show it off to people and raise that money. This is their plan.", "01:37:05 – Are we financing an AI bubble?", "Dwarkesh Patel", "If these sites are already planned and...", "Dylan Patel", "The money's not there.", "Dwarkesh Patel", "So how do you plan? How do you plan a site without...", "Dylan Patel", "Today Microsoft is taking on immense credit risk. They've signed these deals with all these companies to do this stuff. But Microsoft doesn't have... I mean, they could pay for it. Microsoft could pay for it on the current timescale. Their CapEx is going from $50-80 billion of direct CapEx, and then another $20 billion across Oracle, CoreWeave, and then like another $10 billion across their data center partners. They can afford that for next year.", "This is because Microsoft truly believes in OpenAI. They may have doubts like, \"Holy shit, we're taking a lot of credit risk.\" Obviously, they have to message Wall Street and all these things, but that's affordable for them because they believe they're a great partner to OpenAI. They'll take on all this credit risk.", "Now, obviously OpenAI has to deliver. They have to make the next model that's way better. They also have to raise the money. And I think they will. I truly believe from how amazing 4o , how small it is relative to GPT-4… The cost of it is so insanely cheap. It's much cheaper than the API prices lead you to believe. You're like, \"Oh, what if you just make a big one?\" It's very clear what's going to happen to me on the next jump that they can then raise this money and they can raise this capital from the world.", "Jon Y", "This is intense. It's very intense.", "Dwarkesh Patel", "Jon, actually, if he's right, or not him, but in general. If the capabilities are there, the revenue is there…", "Dylan Patel", "Revenue doesn't matter.", "Jon Y", "Revenue matters.", "Dwarkesh Patel", "Is there any part of that picture that still seems wrong to you in terms of displacing so much of TSMC production, wafers and power and so forth? Does any part of that seem wrong to you?", "Jon Y", "I can only speak to the semiconductors part, even though I'm not an expert. I think TSMC can do it. They'll do it. I just wonder though… he's right in that 2024-25 is covered. But 2026-27 is that critical point where you have to say, can the semiconductor industry and the rest of the industry be convinced that this is where the money is? That means, is there money by 2024-25?", "Dwarkesh Patel", "How much revenue do you think the AI industry as a whole needs by 2025 in order to keep scaling?", "Dylan Patel", "Doesn't matter.", "Jon Y", "Compared to smartphones. I know he says it doesn't matter.", "Dylan Patel", "I'll get to why.", "Dwarkesh Patel", "What are smartphones at? Like Apple's revenue is like $200 billion. So like...", "Jon Y", "Yeah, it needs to be another smartphone-size opportunity, right? Even the smartphone industry didn't drive this sort of growth. It's crazy. Don't you think? The only thing I can really perceive… AI Girlfriend. You know what I mean.", "Dylan Patel", "No, I want a real one, dammit. There’s a few things. The return on invested capital for all of the big tech firms is up since 2022. Therefore, it's clear as day that investing in AI has been fruitful so far for the big tech firms just based on return on invested capital. Financially, you look at Meta's, you look at Microsoft's, you look at Amazon's, you look at Google's. The return on invested capital is up since 2022.", "Dwarkesh Patel", "On AI in particular?", "Dylan Patel", "No, just generally as a company. Now, obviously there's other factors here. Like what is Meta's ad efficiency? How much of that is AI, right?", "Jon Y", "That’s super messy.", "Dylan Patel", "But here's the other thing, this is Pascal's wager , right? This is a matrix of like, do you believe in God? Yes or no. If you believe in God and God's real and you go to heaven, that's great. That's fine. Whatever. If you don't believe in God and God is real, then you're going to hell.", "Dwarkesh Patel", "This is the deep technical analysis you'll subscribe to SemiAnalysis for.", "Jon Y", "Can you imagine what happens to the stock if Satya starts talking about Pascal's wager?", "Dylan Patel", "But this is psychologically what's happening, right? Satya said it on his earnings call. The risk of under-investing is worse than the risk of over-investing. He has said this word for word. This is Pascal's wager.", "I must believe I am AGI-pilled because if I'm not and my competitor does it, I'm absolutely fucked.", "Jon Y", "Other than Zuck, who seems pretty convinced…", "Dylan Patel", "No, Sundar said this on the earnings call. So Zuck said it. Sundar said it. Satya's actions on credit risk for Microsoft do it. He's very good at PR and messaging, so he hasn't said it so openly.", "Sam believes it. Dario believes it. You look across these tech titans, they believe it. Then you look at the capital holders. The UAE believes it. Saudi believes it.", "Dwarkesh Patel", "How do you know the UAE and Saudi believe it?", "Dylan Patel", "All these major companies and capital holders also believe it because they're putting their money here.", "Jon Y", "But it won't last, it can't last unless there's money coming in somewhere.", "Dylan Patel", "Correct, correct, but then the question is... The simple truth is that GPT-4 costs like $500 million dollars to train. It has generated billions in recurring revenue. In the meantime, OpenAI raised $10 billion or $13 billion and is building a model that costs that much, effectively.", "Obviously they're not making money. What happens when they do it again? They release and show GPT-5 with whatever capabilities that make everyone in the world go, \"Holy fuck.\" Obviously the revenue takes time after you release the model to show up. You still have only a few billion dollars or $5 billion of revenue at run rate.", "You just raised $50-100 billion dollars because everyone sees this like, \"Holy fuck, this is going to generate tens of billions of revenue.\" But that tens of billions takes time to flow in. It's not an immediate click. But the time where Sam can convince, not just Sam… people's decisions to spend the money are being made then.", "Therefore, you look at the data centers people are building. You don't have to spend most of the money to build the data center. Most of the money's the chips, but you're already committed to having so much data center capacity by 2027 or 2026 that you're never gonna need to build a data center again for like 3-5 years if AI is not real.", "Basically, that’s what all their actions are. Or I can spend over a hundred billion dollars on chips in 2026 and I can spend over a hundred billion dollars on chips in 2027. These are the actions people are taking and the lag on revenue versus when you spend the money or raise the money, there's a lag on this.", "You don't necessarily need the revenue in 2025 to support this. You don't need the revenue in 2026 to support this. You need the revenue in 2025-2026 to support the $10 billion that OpenAI spent in '23, or Microsoft spent in 2023 and early 2024 to build the cluster, which model they trained in mid 2024, which they then released at the end of 2024, which then started generating revenue in 2025-2026.", "Jon Y", "The only thing I can say is that you look at a chart with three points on a graph: GPT-1, 2, 3, and you’re like…", "Dwarkesh Patel", "Even that graph… The investment you have to make in GPT-4 over GPT-3 is 100X. The investment you had to make in GPT-5 over GPT-4 is 100X. Currently the ROI could be positive—this very well could be true, I think it will be true—but the revenue has to increase exponentially.", "Jon Y", "Of course. I agree with you. But I also agree with Dylan. It can be achieved. ROI, TSMC does this. It invests $16 billion. It expects ROI. It does that. I understand that. That's fine. Lag all that. The thing that I don't expect is that GPT-5 is not here. It's all dependent on GPT-5 being good. If GPT-5 sucks, if GPT-5 looks like it doesn't blow people's socks off, this is all void.", "Dylan Patel", "What kind of socks are you wearing, bro? Show them.", "Jon Y", "AWS.", "Jon Y", "GPT-5 is not here. GPT-5 is late. We don't know.", "Dylan Patel", "I don't think it's late.", "Jon Y", "I think it's late.", "Dwarkesh Patel", "Okay. I want to zoom out and go back to the end of the decade picture again.", "Dylan Patel", "We've already lost Jon.", "Jon Y", "We've already accepted GPT-5 would be good? Hello?", "Dwarkesh Patel", "You gotta, you know?", "Dylan Patel", "Life is so much more fun when you just are delusionally…", "Jon Y", "We're just ripping bong hits, are we?", "Dylan Patel", "When you feel the AGI, you feel your soul.", "Jon Y", "This is why I don't live in San Francisco.", "Dylan Patel", "I have tremendous belief in the GPT-5 area.", "Dwarkesh Patel", "Why?", "Dylan Patel", "Because of what we've seen already. The public signs all show that this is very much the case. What we see beyond that is more questionable and I'm not sure because I don't know. We'll see how much they progress.", "If things continue to improve, life continues to radically get reshaped for many people. Every time you increment up the intelligence, the amount of usage of it grows hugely. Every time you increment the cost down of that amount of intelligence, the amount of usage increases massively. As you continue to push that curve out, that's what really matters.", "It doesn't need to be today. It doesn't need to be revenue vs. how much CapEx. In any time in the next few years, it just needs to be, did that last humongous chunk of CapEx make sense for OpenAI or whoever the leader was? How does that then flow through? Or were they able to convince enough people that they can raise this much money?", "You think Elon's tapped out of his network with raising $6 billion? No. xAI is going to be able to raise $30+ billion easily. You think Sam's tapped out? You think Anthropic's tapped out? Anthropic's barely even diluted the company relatively. There's a lot of capital to be raised. Call it FOMO if you want, but during the dot-com bubble, the private industry flew through like $150 billion a year. We're nowhere close to that yet.", "We're not even close to the dot-com bubble. Why would this bubble not be bigger? You go back to the prior bubbles: PC bubble, semiconductor bubble, mechatronics bubble. Throughout the US, each bubble was smaller. I don't know if you call it a bubble or not. Why wouldn't this one be bigger?", "Dwarkesh Patel", "How many billions of dollars a year is this bubble right now?", "Dylan Patel", "For private capital? It's like $55-60 billion so far for this year. It can go much higher. I think it will next year.", "Dwarkesh Patel", "Let me think about this.", "Jon Y", "You need another bong rip.", "Dylan Patel", "At least like finishing up and looping into the next question… Prior bubbles also didn't have the most profitable companies that humanity has ever created investing and they were debt financed. This is not debt financed yet. That's the last little point on that one. Whereas the 90s bubble was very debt financed.", "Jon Y", "That was disastrous for those companies.", "Dylan Patel", "Yeah, sure. But so much was built. You’ve got to blow a bubble to get real stuff to be built.", "Dwarkesh Patel", "It is an interesting analogy. Even though the dot-com bubble obviously burst and a lot of companies went bankrupt, they in fact did lay out the infrastructure that enabled the web and everything. You could imagine an AI… A lot of the foundation model companies or whatever, a bunch of companies will go bankrupt, but they will enable the singularity .", "Jon Y", "At the turn of the 1990s, there was an immense amount of money invested in things like MEMS and optical technologies because everyone expected the fiber bubble to continue. That all ended in 2003 or 2002.", "Dylan Patel", "It started in 94?", "Jon Y", "There hasn't been revitalization since. You could risk the possibility of a…", "Dylan Patel", "Bro, Lumen, one of the companies that's doing the fiber build out for Microsoft, its stock like fucking 4x'd last month, or this month.", "Jon Y", "How'd it do from 2002 to 2024?", "Dylan Patel", "Oh no, horrible, horrible, but we're gonna rip, baby. Rip that bong, baby!", "Jon Y", "You could freeze AI for another two decades.", "Dylan Patel", "Sure, sure, it’s possible. Or people can see a badass demo from GPT-5, a slight release, they raise a fuckload of money. It could even be like a Devin-like demo , where it's like complete bullshit, but it's fine.", "Jon Y", "Edit that out! Edit that out!", "Dylan Patel", "No, it's fine. I don’t really care. The capital's going to flow in. Now whether it deflates or not is an irrelevant concern in the near term because you operate in a world where it is happening. What is that Warren Buffett quote? I don't even know if it's Warren Buffett.", "Jon Y", "You don’t know who's swimming naked until the tide goes out?", "Dylan Patel", "No, no, no. The one about how the market is delusional for longer than you can remain solvent, or something like that.", "Jon Y", "Oh, that's not Buffett.", "Dylan Patel", "That's not Buffett?", "Jon Y", "That’s John Maynard Keynes.", "Dylan Patel", "Oh shit, that's that old? Okay. So Keynes said it. So this is the world you're operating in. It doesn't matter what exactly happens. There'll be ebbs and flows, but that's the world you're operating in.", "Jon Y", "I reckon that if the AI bubble pops, each one of these CEOs lose their jobs.", "Dylan Patel", "Sure. Or if you don't invest and you lose, it's a Pascalian wager. That's much worse. Across decades, the largest company at the end of each decade of the largest companies, that list changes a lot. And these companies are the most profitable companies ever. Are they going to let themselves lose it? Or are they going to go for it? They have one shot, one opportunity to make themselves into… the whole Eminem song .", "01:50:20 – Starting Asianometry and SemiAnalysis", "Dwarkesh Patel", "I want to hear the story of how both of you started your businesses, the thing you're doing now. Jon, how did it begin? What were you doing when you started the YouTube channel?", "Dylan Patel", "It’s all about your textile company?", "Jon Y", "Oh my god, no way.", "Dylan Patel", "Please, please.", "Dwarkesh Patel", "Wait, is he joking?", "Dylan Patel", "If he doesn't want to, we'll talk about it later.", "Jon Y", "The story's famous. I've told it a million times. Asianometry started off as a tourist channel. I moved to Taiwan for work.", "Dylan Patel", "DoIng what?", "Jon Y", "I was working in cameras.", "Dylan Patel", "What was the other company you started?", "Jon Y", "It tells too much about me. I worked in cameras and then basically I went to Japan with my mom. My mom was like, \"Hey, what are you doing in Taiwan? I don't know what you're doing.\" I was like, \"All right, mom, I will go back to Taiwan and I'll make stuff for you.\" I made videos. I would go to the Chiang Kai-shek Park and be like, \"Hi, mom, this park was this, this.\"", "Eventually, you run out of stuff. But then it's a pretty smooth transition from that into Chinese history, Taiwanese history. People started calling me Chinanometry. I didn't like that. So I moved to other parts of Asia.", "Dwarkesh Patel", "What year did people start watching your videos? Let's say like a thousand views per video or something?", "Jon Y", "Oh my gosh, I started the channel in 2017 and it wasn't until 2018 or 2019 that it actually… I labored on for the first three years with no one watching. I got like 200 views and I'd be like, \"Oh, this is great.\"", "Dwarkesh Patel", "Were the videos basically like the ones you have now? Sorry, backing up for the audience who might not know, I imagine basically everybody knows Asianometry , but if you don't it's the most popular channel about semiconductors, Asian business history, business history in general, geopolitics, history and so forth. Honestly, I've done research for different AI guests and different things I'm trying to understand. How does hardware work? How does AI work?", "Dylan Patel", "How does a zipper work? Did you watch that video? I think it was a span of three videos. It's like the Russian oil industry in the 1980s and how it funded everything and then when it collapsed, they were absolutely fucked. Then the next video was like, the zipper monopoly in Japan .", "Jon Y", "Not a monopoly.", "Dylan Patel", "The next video was about ASML .", "Jon Y", "Not a monopoly. Strong holding in a mid-tier size. They're like the luxury zipper makers.", "Jon Y", "Asianometry is always just stuff I'm interested in. I'm interested in a whole bunch of different stuff. Then the channel, for some reason people started watching the stuff I do. I still have no idea why. To be honest, I still feel like a fraud. I sit in front of Dylan and I feel like a legit fraud, especially when he starts talking about 60,000 wafers and all that. I feel like I should know this but in the end, I just try my best to bring interesting stories out.", "Dwarkesh Patel", "How do you make a video every single week? These are like…", "Jon Y", "Two a week?", "Dylan Patel", "You know how long he had a full-time job?", "Jon Y", "Five years, six years.", "Dwarkesh Patel", "While doing this?", "Dylan Patel", "Sorry, a textile business and a full-time job. Wait, no it’s a full-time job, textile business, and Asianometry for a long long time.", "Jon Y", "I literally just gave up the textile business this year.", "Dwarkesh Patel", "How are you doing research and making a video twice a week? I don't know. I do these, I'm just fucking talking. This is all I do. I do this once every like two weeks.", "Dylan Patel", "The difference is, Dwarkesh, you go to SF Bay Area parties constantly and Jon is locked in. He's like locked in 24/7.", "Dwarkesh Patel", "He's got the TSMC work ethic and I've got the Intel work ethic.", "Jon Y", "If I don't… I got the Huawei ethic. If I do not finish this video, my family will be pillaged.", "Dylan Patel", "He actually gets really stressed about it, not doing something on his schedule.", "Jon Y", "I do two videos per week, I write them both simultaneously.", "Dwarkesh Patel", "How are you scouting out future topics you want to do? You just pick up random articles, books, whatever. If you find it interesting, you make a video about it?", "Jon Y", "Sometimes what I'll do is Google a country, I'll Google an industry, and I'll Google what a country is exporting now and what it used to export. I compare that and I say, “That's my video.” Sometimes it’s also just as simple as, “I should do a video about YKK .”", "Dylan Patel", "This zipper is nice. I should do a video about it.", "Jon Y", "I do. It literally is…", "Dwarkesh Patel", "Do you keep a list? “Here's the next one. Here's the one after that.”", "Jon Y", "I have a long list of ideas. Sometimes it's as vague as Japanese whiskey. I have no idea what Japanese whiskey is about. I heard about it before. I watched that movie. So I was just like, “Okay, I should do a video about that.”", "Dwarkesh Patel", "How many research topics do you have on the back burner, basically? I’m talking about something where you’re reading about it constantly and then in a month or so you’re like, “I'll make a video about it.”", "Jon Y", "I just finished a video about how IBM lost the PC. Right now I'm unstressing about that. But I'll kind of move right on to… The videos do kind of lead into others. Right now this one is about how IBM lost the PC. Now what’s next is how Compaq collapsed, how the wave destroyed Compaq. So I'll do that. At the same time, I'm dual lining a video about qubits . I'm dual lining a video about directed self-assembly for semiconductor manufacturing, which I'll read a lot of Dylan's work for. But then a lot of that is in the back of my head. I'm producing it as I go.", "Jon Y", "Dylan, how do you work?", "Dwarkesh Patel", "How does one go from Reddit shitposter to running a semiconductor research and consulting firm? Let's start with the shitposting.", "Dylan Patel", "It's a long line. I had immigrant parents and I grew up in rural Georgia. When I was seven, I begged for an Xbox and when I was eight I got it. It was the 360 . They had a manufacturing defect called the red ring of death . There are a variety of fixes that I tried like putting a wet towel around the Xbox, something called the penny trick. Those all didn't work, my Xbox still didn't work. My cousin was coming next weekend and he's like two years older than me. I look up to him. He's in between my brother and me but I'm like, “Oh, no, no, we're friends. You don't like my brother as much as you like me.” My brother's more of a jock-y type. It didn't matter. He didn't really care that the Xbox was broken. He's like, “You better fix it though. Otherwise parents will be pissed.”", "I figured out how to fix it online. I tried a variety of fixes, ended up shorting the temperature sensor. That worked for long enough until Microsoft did the recall. But in that, I learned how to do it out of necessity on the forums. I was a nerdy kid, so I liked games, but whatever. There was no other outlet so once I was like, “Holy shit, this is Pandora's box what just got opened up,” then I just shitposted on the forums constantly. I did that for many, many years and then I ended up moderating all sorts of Reddits when I was a tween and teenager. Then as soon as I started making money… I grew up in a family business, but I didn't get paid for working, of course, like yourself. But as soon as I started making money at my internships, I was like 18 or 19, I started making money. I started investing in semiconductors. I was like, “Of course, this is the shit I like.”", "By the way, the whole way through as technology progressed, especially mobile, it went from very shitty chips and phones to very advanced. Every generation they'd add something and I'd read every comment, I'd read every technical post about it. Also, I was interested in all the history around that technology and who's in the supply chain and just kept building and building and building. I went to college and did data science type stuff. I went to work on hurricane/earthquake/wildfire simulation and stuff for a financial company. During college I wasn't shitposting on the internet as much. I was still posting some, but I was following the stocks and all these sorts of things, the supply chain, all the way from the tool equipment companies. The reason I liked those is because all this technology, it’s made by them.", "Dwarkesh Patel", "Did you have friends in person who were into this shit, or was it just online?", "Dylan Patel", "I made friends on the internet.", "Jon Y", "Oh, that's dangerous.", "Dylan Patel", "I've only ever had like literally one bad experience and that was just because he was drugged out.", "Dwarkesh Patel", "One bad experience online?", "Dylan Patel", "Meeting someone from the internet in person. Everyone else has been genuinely… You have enough filtering before that point. Even if they're hyper mega autistic, it's cool. I am too. No, I’m just kidding. You go through the layers and you look at the economic angle, you look at the technical angle, you read a bunch of books. You can just buy engineering textbooks and read them. What's stopping you? If you bang your head against the wall, you learn it.", "Dwarkesh Patel", "While you were doing this, did you expect to work on this at some point or was it just pure interest?", "Dylan Patel", "No, it was an obsessive hobby of many years and it pivoted all around. At some point I really liked gaming. Then I moved into phones and rooting them and underclocking them, and the chips there, and screens and cameras, and then back to gaming, and then to data center stuff because that was where the most advanced stuff was happening. I liked all sorts of telecom stuff for a little bit. It bounced all around, but generally it was in computing hardware.", "I did data science. I said I did AI when I interviewed but it was like bullshit multivariable regression, whatever. It was simulations of hurricanes, earthquakes, and wildfires for financial reasons. I had a job for three years after college. I was posting. I had a blog, an anonymous blog for a long time. I'd even made some YouTube videos and stuff. Most of that stuff is scrubbed off the internet, including Internet Archive , because I asked them to remove it.", "In 2020, I quiet quit my job and started shitposting more seriously on the internet. I moved out of my apartment and started traveling through the US. I went to all the national parks, in my truck/tent. I also stayed in hotels and motels for like three or four days a week. I started posting more frequently on the internet. I'd already had some small consulting arrangements in the past. But it really started to pick up in mid-2020, consulting arrangements from the internet from my persona.", "Dwarkesh Patel", "What kinds of people? Was it investors, hardware companies?", "Dylan Patel", "It was people who weren't in hardware that wanted to know about hardware. It would be some investors. Some VCs did it, some public market folks. There were times where companies would ask about three layers up in the stack. They saw me write some random posts and like “Hey, can we…” There's all sorts of random stuff. It was really small money.", "Then in 2020, it really picked up and I thought, “Why don't I just arbitrarily make the price way higher?” And it worked. I made a new newsletter as well. I kept posting. The quality kept getting better because people would read it and be like, “this is fucking retarded. Here’s what’s actually right” over more than a decade.", "Towards the end of 2021, I made a paid post because someone didn't pay for a report or whatever. It was about photoresist and the developments in that industry, which is the stuff you put on top of the wafer before you put in the lithography tool. It did great. I went to sleep that night and woke up the next day with 40 paid subscriptions. I thought, \"What? Okay, let's keep going.\" I started posting more paid content, partially free, partially paid. I did all sorts of stuff covering advanced packaging, chips, data center stuff, and AI chips. It was all sorts of stuff I was interested in and thought was interesting.", "I always bridged economically—because I've read all the company's earnings since I was 18 and I'm 28 now—all the way through to the technical stuff that I could. In 2022, I also started going to every conference I could. I go to about 40 conferences a year, not trade show type conferences, but technical ones: chip architecture, photoresist, AI and NeurIPS , ICML, and so on.", "Dwarkesh Patel", "How many conferences do you go to a year?", "Dylan Patel", "Like 40.", "Dwarkesh Patel", "So you basically live at conferences?", "Dylan Patel", "Yeah. I've been a digital nomad since 2020. I've basically stopped and I moved to SF now, kind of not really.", "Jon Y", "You can't say that the California government…", "Dylan Patel", "I don't live in SF, come on… But I basically do now.", "Dwarkesh Patel", "California Internal Revenue Service.", "Dylan Patel", "Do not joke about those guys. Seriously, don't joke about this.", "Jon Y", "They're going to send you a clip of this podcast and be like, “40% please.”", "Dylan Patel", "I am in San Francisco sub four months a year contiguously, exactly 100 days or whatever it is, 179 days. Let's go, right? Over the full course of the year…but no, I go to every conference, make connections with all these very technical things like international electron device manufacturing, lithography and advanced patterning, very large scale integration. You have circuits conferences. You just go to every single layer of the stack. It's so siloed, there's tens of millions of people that work in this industry.", "But you go to every single one. You try and understand the presentations. You do the required reading. You look at the economics of it. You are just curious and want to learn. You can start to build up more and more. The content got better and what I followed got better. Then I started hiring people in mid-2022, as well. I got people in different layers of the stack.", "Now today, almost every hyperscaler is a customer, not for the newsletter but for the data we sell. Many major semiconductor companies, many investors, all these people are customers of the data and stuff we sell. The company has people all the way from ex- CYMER , ex-ASML, all the way to ex-Microsoft and an AI company. Through the stratification, now there's 14 people here in the company and all across the US, Japan, Taiwan, Singapore, France. It’s all over the world and across many ranges of… You have ex-hedge funds as well. You kind of have this amalgamation of tech and finance expertise. We just do the best work there, I think.", "Jon Y", "Are you still talking about a monstrosity?", "Dylan Patel", "An unholy concoction. We have data analysis, consulting, etc. for anyone who really wants to get deeper into this. We can talk about people building big data centers, but how many chips are being made in every quarter of what kind for each company? What are the subcomponents of these chips? What are the subcomponents of the servers? We try to track all of that. We follow every server manufacturer, every component manufacturer, every cable manufacturer, all the way down the stack, tool manufacturer. We know how much is being sold, where and how, and project out all the way out to like, “Hey, where's every single data center? What is the pace that it's being built out?” This is the sort of data we want to have and sell. The validation is that hyperscalers purchase it and they like it a lot. AI companies do and semiconductor companies do. How it got there to where it is, is just like, “Just try and do the best and try to be the best.”", "02:06:10 – Opportunities in the semiconductor stack", "Dwarkesh Patel", "If you were an entrepreneur who's like, “I want to get involved in the hardware chain somewhere…” If you could start a business today, somewhere in the stack, what would you pick?", "Dylan Patel", "Jon, tell him about your textile business.", "Jon Y", "I'd work in memory, something in memory. The concept is that they have to hold immense amounts of memory. I think memory already is tapped technologically. HBM exists because of limitations in DRAM. I think it's fundamentally… We've forgotten it because it is a commodity, but we shouldn't. I think breaking memory would change the world in that scenario.", "Dylan Patel", "The context here is that Moore's Law was predicted in 1965. Intel was founded in 1968 and released their first memory chips in 1969 and 1970. So a lot of Moore's Law was about memory. The memory industry followed Moore's law up until 2012 when it stopped. Tt became very incremental gains since then. Whereas logic has continued and people are like, \"Oh, it's dying. It's slowing down.\" At least there's still a little bit of coming. It’s still more than 10-15% a year CAGR of growth and density and cost improvement. Memory is literally since 2012, really bad.", "When you think about the cost of memory… It's been considered a commodity. But memory integration with accelerators, this is something… I don't know if you can be an entrepreneur here though. That's the real challenge. Because you have to manufacture at some really absurdly large scale or design something in an industry that does not allow you to make custom memory devices.", "Jon Y", "Or use materials that don't work that way.", "Dylan Patel", "So there's a lot of work there. I don't necessarily agree with you. But I do agree it's one of the most important things for people to invest in.", "It's really about where you are good at. Where can you vibe? Where can you enjoy your work and be productive in society? There are a thousand different layers of the abstraction stack. Where can you make it more efficient? Where can you utilize AI to build better and make everything more efficient in the world and produce more bounty and iterate feedback loop? There's more opportunity today than any other time in human history, in my view. Just go out there and try. What engages you? Because if you're interested in it, you'll work harder.", "If you have a passion for copper wires… I promise to God, if you make the best copper wires, you'll make a shitload of money. If you have a passion for B2B SaaS, I promise to God, you'll make fuck loads of money. I don't like B2B SaaS, but whatever you have a passion for. Just work your ass off, try and innovate, bring AI into it. Try and use AI to make yourself more efficient and make everything more efficient. I promise, you will be successful.", "That's really the view. It’s not necessarily that there's one specific spot because every layer of the supply chain has… You go to the conferences. You go to talk to the experts there. It's like, “Dude, this is the stuff that's breaking and we could innovate in this way. These five extraction layers, we could innovate this way.” Yeah, do it. There's so many layers where we're not at the Pareto optimal . There's so much more to go in terms of innovation and inefficiency.", "Dwarkesh Patel", "That's a great place to close. Dylan, Jon, thank you so much for coming on the podcast. I'll just give people the reminder, Dylan Patel, semianalysis.com . That's where you can find all the technical breakdowns that we've been discussing today. Asianometry YouTube channel , everybody will already be aware of Asianometry, but anyways, thanks so much for doing this. It was a lot of fun.", "Dylan Patel", "Thank you.", "Jon Y", "Yeah, thank you." ]
[ "https://x.com/dylan522p", "https://www.semianalysis.com/", "https://www.asianometry.com/about", "https://www.youtube.com/asianometry", "https://en.wikipedia.org/wiki/Xi_Jinping", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://en.wikipedia.org/wiki/Honey_trapping", "https://en.wikipedia.org/wiki/OpenAI", "https://www.nti.org/analysis/articles/former-yugoslavia-overview/", "https://en.wikipedia.org/wiki/Directorate_for_State_Security_(Yugoslavia)", "https://en.wikipedia.org/wiki/TSMC", "https://en.wikipedia.org/wiki/Nvidia", "https://www.cnn.com/2023/02/15/tech/asml-china-employee-data-breach-intl-hnk/index.html", "https://en.wikipedia.org/wiki/ASML_Holding", "https://en.wikipedia.org/wiki/Semiconductor_fabrication_plant", "https://en.wikipedia.org/wiki/Semiconductor_Manufacturing_International_Corporation", "https://www.youtube.com/watch?v=-W0YdacKwUo", "https://en.wikipedia.org/wiki/Samsung_Electronics", "https://en.wikipedia.org/wiki/Zhang_Rujing", "https://en.wikipedia.org/wiki/Lee_Kun-hee", "https://www.mckinsey.com/industries/industrials-and-electronics/our-insights/semiconductor-design-and-manufacturing-achieving-leading-edge-capabilities", "https://en.wikipedia.org/wiki/Morris_Chang", "https://interconnected.blog/riscv-china-nightingales/", "https://en.wikipedia.org/wiki/Fin_field-effect_transistor", "https://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Technology_node", "https://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Device_yield", "https://en.wikipedia.org/wiki/Etching_(microfabrication)", "https://en.wikipedia.org/wiki/National_Tsing_Hua_University", "https://en.wikipedia.org/wiki/Jensen_Huang", "https://en.wikipedia.org/wiki/Hopper_(microarchitecture)", "https://en.wikipedia.org/wiki/Blackwell_(microarchitecture)", "https://en.wikipedia.org/wiki/XAI_(company)", "https://en.wikipedia.org/wiki/Anthropic", "https://en.wikipedia.org/wiki/Alibaba_Group", "https://en.wikipedia.org/wiki/Baidu", "https://www.deepseek.com/", "https://en.wikipedia.org/wiki/Graphics_processing_unit", "https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html", "https://www.trendforce.com/news/2024/09/24/news-nvidias-custom-h20-chip-for-china-reportedly-halted/", "https://en.wikipedia.org/wiki/Three_Gorges_Dam", "https://en.wikipedia.org/wiki/Computer_worm", "https://en.wikipedia.org/wiki/Avatar:_The_Last_Airbender", "https://arxiv.org/abs/2001.08361", "https://en.wikipedia.org/wiki/Aluminum_Corporation_of_China_Limited", "https://en.wikipedia.org/wiki/Wafer_(electronics)", "https://en.wikipedia.org/wiki/Advanced_packaging_(semiconductors)", "https://en.wikipedia.org/wiki/Semiconductor_memory", "https://en.wikipedia.org/wiki/Applied_Materials", "https://en.wikipedia.org/wiki/AMD", "https://en.wikipedia.org/wiki/Photolithography", "https://www.dwarkeshpatel.com/p/leopold-aschenbrenner", "https://www.dwarkeshpatel.com/p/leopold-aschenbrenner", "https://en.wikipedia.org/wiki/FLOPS", "https://www.trendforce.com/news/2024/06/11/news-huaweis-self-developed-ai-chip-challenges-nvidia-boasting-its-ascend-910b-to-be-equal-in-match-with-a100/", "https://en.wikipedia.org/wiki/7_nm_process", "https://www.tomshardware.com/tech-industry/artificial-intelligence/huawei-already-has-a-new-chip-to-rival-nvidia-ai-gpus", "https://en.wikipedia.org/wiki/Immersion_lithography", "https://en.wikipedia.org/wiki/14_nm_process", "https://en.wikipedia.org/wiki/Die_(integrated_circuit)#:~:text=Size%20of%20die%20is%20roughly,testing%20to%20test%20their%20functionality.", "https://en.wikipedia.org/wiki/Chiplet", "https://en.wikipedia.org/wiki/Six_Sigma", "https://www.asml.com/en/products/duv-lithography-systems", "https://en.wikipedia.org/wiki/Qualcomm", "https://en.wikipedia.org/wiki/MediaTek", "https://en.wikipedia.org/wiki/7_nm_process", "https://en.wikipedia.org/wiki/5_nm_process", "https://en.wikipedia.org/wiki/28_nm_process", "https://en.wikipedia.org/wiki/Nokia", "https://en.wikipedia.org/wiki/Sony_Mobile", "https://en.wikipedia.org/wiki/People%27s_Liberation_Army", "https://www.newsweek.com/china-missiles-rocket-fuel-corrupt-officials-water-xi-jinping-1858491", "https://www.wsj.com/politics/national-security/hypersonic-missiles-america-military-behind-936a3128", "https://en.wikipedia.org/wiki/Raytheon", "https://www.wsj.com/articles/SB10485560675556000", "https://en.wikipedia.org/wiki/Tensor_Processing_Unit", "https://en.wikipedia.org/wiki/996_working_hour_system", "https://en.wikipedia.org/wiki/Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Andrew_Grove", "https://amzn.to/3ZOYxC2", "https://en.wikipedia.org/wiki/Old_Summer_Palace#Destruction", "https://www.investopedia.com/terms/v/verticalintegration.asp", "https://www.semi.org/en/products-services/market-data/equipment", "https://en.wikipedia.org/wiki/List_of_semiconductor_fabrication_plants", "https://en.wikipedia.org/wiki/Foundation_model", "https://en.wikipedia.org/wiki/Silterra_Malaysia", "https://en.wikipedia.org/wiki/Chartered_Semiconductor_Manufacturing", "https://en.wikipedia.org/wiki/2_nm_process", "https://en.wikipedia.org/wiki/3_nm_process", "https://youtu.be/lIt9PdGnJZo", "https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_A16", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://en.wikipedia.org/wiki/Central_processing_unit", "https://en.wikipedia.org/wiki/Dennard_scaling", "https://en.wikipedia.org/wiki/Static_random-access_memory", "https://en.wikipedia.org/wiki/Transistor", "https://www.investopedia.com/magnificent-seven-stocks-8402262", "https://en.wikipedia.org/wiki/Microcontroller", "https://en.wikipedia.org/wiki/Internal_combustion_engine", "https://en.wikipedia.org/wiki/United_Microelectronics_Corporation", "https://en.wikipedia.org/wiki/Powerchip", "https://en.wikipedia.org/wiki/130_nm_process", "https://en.wikipedia.org/wiki/Texas_Instruments", "https://en.wikipedia.org/wiki/Analog_Devices", "https://en.wikipedia.org/wiki/NXP_Semiconductors", "https://en.wikipedia.org/wiki/28_nm_process", "https://en.wikipedia.org/wiki/Apple_silicon", "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)", "https://en.wikipedia.org/wiki/Pat_Gelsinger", "https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken", "https://www.ieee.org/", "https://spie.org/", "https://arxiv.org/abs/2211.05102", "https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amnesia_effect", "https://en.wikipedia.org/wiki/Multigate_device#Gate-all-around_FET_(GAAFET)", "https://en.wikipedia.org/wiki/SEMATECH", "https://irds.ieee.org/", "https://en.wikipedia.org/wiki/Windows_XP", "https://en.wikipedia.org/wiki/CentOS", "https://en.wikipedia.org/wiki/Inverse_lithography", "https://semiengineering.com/knowledge_centers/manufacturing/lithography/photomask/inverse-lithography-technology-ilt/#:~:text=ILT%20uses%20a%20mathematical%20formula,a%20shape%20on%20the%20wafer.", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/AlphaGo", "https://en.wikipedia.org/wiki/Generative_artificial_intelligence", "https://www.nvidia.com/en-us/data-center/h100/", "https://en.wikipedia.org/wiki/Arithmetic_logic_unit", "https://en.wikipedia.org/wiki/Electronic_design_automation", "https://en.wikipedia.org/wiki/Hyperscale_computing", "https://www.fastcompany.com/91160486/nvidia-has-new-ai-chip-heres-what-we-know-about-b20", "https://en.wikipedia.org/wiki/Neuromorphic_computing", "https://blogs.nvidia.com/blog/sparsity-ai-inference/", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://en.wikipedia.org/wiki/Huawei", "https://icml.cc/", "https://x.com/_albertgu?lang=en", "https://cartesia.ai/", "https://huggingface.co/blog/lbourdois/get-on-the-ssm-train", "https://en.wikipedia.org/wiki/Matrix_multiplication", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://www.bloomberg.com/news/articles/2024-06-19/openai-co-founder-plans-new-ai-focused-research-lab", "https://fortune.com/2024/06/18/elon-musk-xai-memphis-supercomputer-site/", "https://vast.ai/", "https://mlfoundry.com/", "https://www.coreweave.com/", "https://crusoe.ai/", "https://corescientific.com/", "https://www.nvidia.com/en-us/data-center/gb200-nvl72/", "https://www.prnewswire.com/news-releases/coreweave-opens-new-texas-data-center-to-expand-access-to-high-performance-gpus-301884897.html", "https://en.wikipedia.org/wiki/Power_usage_effectiveness", "https://en.wikipedia.org/wiki/Synthetic_data#Machine_learning", "https://www.lightreading.com/data-centers/johor-in-malaysia-consolidates-its-place-as-a-data-center-hotspot", "https://en.wikipedia.org/wiki/Grand_Ethiopian_Renaissance_Dam#:~:text=The%20first%20phase%20of%20filling,increased%20to%20around%20575%20meters.", "https://en.wikipedia.org/wiki/G42_(company)", "https://www.datacenterdynamics.com/en/analysis/behind-omniva-the-secretive-gpu-cloud-startup-that-began-as-an-attempt-to-build-the-worlds-largest-crypto-data-center/", "https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer", "https://en.wikipedia.org/wiki/Lumen_Technologies", "https://en.wikipedia.org/wiki/Zayo_Group", "https://en.wikipedia.org/wiki/Quality_Technology_Services", "https://www.cooperlighting.com/global/markets/data-center", "https://en.wikipedia.org/wiki/Sam_Altman", "https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_A16", "https://www.tomshardware.com/tech-industry/manufacturing/tsmc-mentions-14nm-process-tech-for-the-first-time-says-2nm-remains-on-track", "https://en.wikichip.org/wiki/tsmc/cowos", "https://en.wikipedia.org/wiki/High_Bandwidth_Memory", "https://www.investopedia.com/terms/c/capitalexpenditure.asp#:~:text=Capital%20expenditures%20(CapEx)%20are%20funds,or%20investments%20by%20a%20company.", "https://openai.com/index/hello-gpt-4o/", "https://www.investopedia.com/terms/r/returnoninvestmentcapital.asp", "https://en.wikipedia.org/wiki/Pascal%27s_wager", "https://en.wikipedia.org/wiki/Satya_Nadella", "https://en.wikipedia.org/wiki/Technological_singularity", "https://en.wikipedia.org/wiki/MEMS", "https://internethistory.org/wp-content/uploads/2020/01/OSA_Boom.Bubble.Bust_Fiber.Optic_.Mania_.pdf", "https://www.youtube.com/watch?v=fjHtjT7GO1c", "https://youtu.be/xFYQQPAOz7Y", "https://www.youtube.com/@Asianometry", "https://en.wikipedia.org/wiki/Chiang_Kai-shek_Memorial_Hall", "https://www.youtube.com/asianometry", "https://youtu.be/LrZ1eMlzwhc", "https://youtu.be/9d6eNmtHFQk", "https://www.youtube.com/watch?v=Fc_lEzGiClk", "https://www.youtube.com/watch?v=9d6eNmtHFQk", "https://en.wikipedia.org/wiki/Compaq", "https://en.wikipedia.org/wiki/Qubit", "https://en.wikipedia.org/wiki/Directed_assembly_of_micro-_and_nano-structures#:~:text=Directed%20self%2Dassembly,-Directed%20self%2Dassembly&text=The%20DSA%20is%20not%20a,semiconductor%20and%20hard%20drive%20industries.", "https://en.wikipedia.org/wiki/Xbox_360", "https://en.wikipedia.org/wiki/Xbox_360_technical_problems", "https://en.wikipedia.org/wiki/Rooting_(Android)", "https://archive.org/", "https://neurips.cc/", "https://www.cymer.com/", "https://www.investopedia.com/investing/compound-annual-growth-rate-what-you-should-know/#:~:text=The%20compound%20annual%20growth%20rate,fall%20in%20value%20over%20time.", "https://en.wikipedia.org/wiki/Pareto_efficiency", "http://semianalysis.com", "https://www.youtube.com/asianometry" ]
https://www.dwarkesh.com/p/edward-glaeser
Edward Glaeser - Cities, Terrorism, Housing, & Remote Work
[ "Mars, Terrorism, & Capitals", "Dwarkesh Patel 0:00:00", "Okay, today, I have the pleasure of speaking with Professor Edward Glaeser, who is the chair of the Harvard Department of Economics, and author of some of the best books and papers about cities. Professor Glazer, thanks for coming on The Lunar Society.", "Edward Glaeser 0:00:25", "Oh, thank you so much for having me on! Especially given that The Lunar Society pays homage to one of my favorite moments in urban innovation in Birmingham during the 18th century.", "Dwarkesh Patel 0:00:26", "Oh wow, I didn’t even catch that theme, but that’s a great title. My first question is, What advice would you give to Elon Musk about building the first cities on Mars?", "Edward Glaeser 0:00:35", "[laughs] That’s a great question. I think that demand for urbanism in Mars is going to be relatively limited. Cities are always shaped by the transportation costs that are dominant in the era in which they’re created. That both determines the micro-shape of the city and determines its macro future. So cities on Mars are, of course, going to be limited by the likely prohibitive cost of traveling back and forth to the mother planet. But we also have to understand what cars people are going to be using on Mars. I assume these are all going to be Teslas, and everyone is going to be driving around in some appropriate Tesla on Mars. So it’s going to be a very car-oriented living experience. I think the best strategy would be to create a fairly flexible plan, much like the 1811 grid plan in New York , that allows entrepreneurs to change land use over time and put a few bets on what’s necessary for infrastructure and then just let the city evolve organically. Usually, the best way is to put more trust in individual initiative than central planning–– at least in terms of micromanaging what goes where.", "Dwarkesh Patel 0:01:58", "Gotcha. Now, since 9/11, many terrorist groups have obviously intended to cause harm to cities. But by and large, at least in Western countries, they haven’t managed to kill off thousands of people like they were able to do during 9/11. What explains this? Do you think cities are just more resilient to these kinds of attacks than we would have otherwise thought? Or are the terrorists just not being creative enough?", "Edward Glaeser 0:02:20", "I don’t know. There’s also the question of what the objectives are. Even for the 9/11 terrorists, their end game was not to kill urbanites in America. It was to effect change in Saudi Arabia or in the Middle East more generally. We’ve also protected our cities better. If you think about it, two things go on simultaneously when you collect economic activity in one place in terms of defense: one of which is they become targets–– and of course, that’s what we saw on 9/11; it’s hard to think of a symbol that’s clearer than those twin towers. But at the same time, they’re also a defensible space. The origin of the urban agglomeration and use for cities and towns was the fact that they could be walled settlements. Those walls that brought together people collectively for defense are the ultimate reason why these towns came about. The walls provided protection.", "I think the same thing has been playing out with cities over the past 20 years. Just as New York was a target, it was also a place where post-2001, the city ramped up its anti-terrorism efforts. They put together a massive group as London had previously done. The cameras that implemented congestion pricing in London were the same cameras that used against the Irish terrorists. So both effects went on. I think we’ve been fortunate and that we’ve shown the strength of cities in terms of protecting themselves.", "Dwarkesh Patel 0:03:52", "If you look throughout ancient world history, there are so many examples of empires that are basically synonymous with their capital cities (ex. Rome or Athens, or Sparta). But today, you would never think of America as the ‘Washingtonian Empire.’ What is the explanation for why the capital city has become much less salient in terms of the overall nation? Is there a Coasian answer here?", "Edward Glaeser 0:04:20", "There are specific things that went on with English offshoot colonies where in many cases, because they recognized the tendency of the capital city to attract lots of excess goodies that had been taken from elsewhere in the country, they located the capital city in a remote place. It’s actually part of the story of the Hamilton musical in The Room Where it Happens. Part of the deal was about moving the capital of the US to a relatively remote Virginia location rather than having it be in Philadelphia, New York. That was partially to reflect the fact that the South needed to be protected against all of the extra assets going to New York and Philadelphia.", "So, whether or not this is Canberra or Ottawa, you see all of these English offshoot places without their capitals in the big metropoles. Whereas traditionally, what’s happened in these places that have been around for centuries, is that even if the capital didn’t start off as the largest city, it became the largest city because centuries of French leaders thought their business was to take wealth from elsewhere in France and make Paris great. I think the French Empire was as synonymous with Paris as most of those ancient empires were with their capital city. I guess the question I could throw back to you is, what are places where this is not true? Moscow, St. Peter’s, and Beijing are examples. Do we think that Beijing is less synonymous with China than the Roman Empire is with Rome ? Maybe a little–– possibly just because China is so big and Beijing is a relatively small share of the overall population of China. But it’s more so certainly than Washington, D.C. is with the U.S.", "Decline, Population Collapse, & Young Men", "Dwarkesh Patel 0:06:32", "That’s a really interesting answer. Once a city goes through a period of decline (maybe an important industry moved out, or maybe it’s had a sequence of bad governance), are you inclined to bet that there will be some sort of renewal, or do you think that things will continue to get worse? In other words, are you a momentum trader, or are you a reversion to the mean trader when it comes to cities?", "Edward Glaeser 0:06:54", "I can tell you different answers for different outcomes. For housing prices, I can tell you exactly what we know statistically about this, which is at higher frequencies, let’s say one year, housing prices show wickedly large levels of momentum. For five years or more, they show very significant levels of mean reversion. It’s a short-term cycle in housing prices followed by decline. Population just shows enormous persistence on the downside. So what happens is you typically will have an economic shock. Detroit used to be the most productive place on the planet in 1950, but a bunch of shocks occurred in transportation technology which made it no longer such a great place to make cars for the world. It takes a century for the city to respond in terms of its population because the housing is sticky.", "The housing remains there. So between the 50s and 60s, the population declines a little bit, and prices drop. They drop sufficiently far that you’re not going to build a lot of new housing, but people are going to still stay in the houses. They’re not going to become vacant. So, the people are still there because the houses are still there. During the 60s to 70s, the population drops a  little bit further and prices kind of stay constant, but still it’s not enough to build new housing. So the declines are incredibly persistent, and growth is less so. So on the boom side, you have a boom over a 10-year period that’s likely to mean revert and it’s not nearly as persistent because it doesn’t have this sticky housing element to it. In terms of GDP per capita, it’s much more of a random walk there in terms of the straight income stuff. It’s the population that’s really persistent, which is, in fact, the reality of a persistent economy.", "Dwarkesh Patel 0:08:44", "Interesting. Why don’t Americans move as much as they used to a century ago? So you have a paper from 2018 titled Jobs in the Heartland , where you talk about how there’s increasing divergence between the unemployment rates between different parts of America. Why don’t Americans just move to places where there are better economic circumstances?", "Edward Glaeser 0:09:04", "I want to highlight one point here, which is that you said “unemployment rate”, and I want to replace that with non-employment rate. T hat’s partially what we’re seeing now. It looks like America’s labor force couldn’t be better in terms of the low levels of unemployment, but what’s happened over the last 50 years is there has been a very large rise in the share of prime-age men who are not in the labor force. So they’ve stopped looking for work, and those guys are miserable. It’s not that those guys are somehow rather productive and happy,–– this is a very bad outcome for prime-age men. I’m separating men from women, not to say that the female labor markets aren’t just as important, just as fascinating, just as critical. But labor force participation means something different for many women than it does for men. There are many women who are not in the labor force who are doing things that are enormously productive socially, like caring for their children and caring for their families.", "I wish it were symmetric across the genders. It just isn’t true. I mean, there just are very few men not in the labor force who are doing anything much other than watching television. It’s just a very different thing. So yes, there are big differences in the non-employment rate. There are some parts of America where, for much of the past decade, one in four prime-age men have been jobless. It’s an enormous gap. The question is, why don’t they get out?", "I think the answer is really twofold: one of which is the nature of how housing markets have frozen up. Historically, the differences in housing costs in the US weren’t that huge across places. Most parts of America had some kind of affordable housing, and it was relatively easy to put up. At the dawn of the 20th century, these were kit helms sold by Sears and Roebuck that sprung up by the thousand. You bought the kit from Sears and Roebuck, and you just built it yourself. After World War II, it was mass-produced homes in places like Levittown.", "For most of the last 50 years, in places like coastal California or the East Coast, building has just become far more difficult. With the decline of mass-produced housing, it’s become far more expensive, and it becomes harder and harder for relatively low-income people to find opportunities in places that have high levels of income, and high levels of opportunity. That’s partially why there’s not just a general decline in mobility, there’s a decline in directed mobility for the poor. Historically, poor people moved from poor areas to rich areas. That’s pretty much stopped. In part, that’s because rich areas just have very, very expensive housing. The other thing is the rising importance of the informal safety net.", "So if you think about most particularly prime-aged men, they’re not receiving significant handouts from the government except if they’re on disability. But they will typically have some form of income, some form of housing that’s being provided for them by someone other than themselves. A third of them are living in their parent's homes. That informal safety net is usually very place dependent. Let’s say you’re living in Eastern Kentucky; it’s not like your parents were going to buy you a condo in San Francisco. You can still have your own bedroom, but you can’t go anywhere else and still get that level of support. And so that’s, I think, another reason why we’re increasingly stuck in place.", "The third you mentioned, is that a third of the non-employed population of young men or is that a third of all young men? Non-employed is a third of non-employed prime aged men. So that’s 25 to 54. There are a lot of 45 year olds who are living on their parents’ couches or in their old bedroom. It’s a fairly remarkable thing.", "Dwarkesh Patel 0:12:49", "Now, we’ll get to housing in just a second, but first, I want to ask you : If the fertility trends in East Asia and many other places continue, what will the impact on cities be if the average age gets much older and the possible eventuality of depopulation?", "Edward Glaeser 0:12:53", "That’s a really interesting question.The basic age fact on cities is that within the bracket of the sort of high-income or middle-income, for prime-aged parents, cities tend to be relatively bad for them. Once you’re in the sort of high end of the upper middle class, the distrust of our public school systems, merited or not, means that that group tends to leave. You have plenty of parents with kids who are lower income, and then you have groups who are part of a demographic barbell that like cities. So this is partially about people who don’t feel like they need the extra space and partially because if they’re young, they’re looking to find prospective mates of various forms.", "Cities are good for that. Urban proximity works well in the dating market. And they’ve got time on their hands to enjoy the tremendous amenities and consumption advantages that cities have. For older people, it’s less about finding a mate typically, but the urban consumption amenity still has value. The ability to go to museums, the ability to go to concerts, and those sorts of activities continue to draw people in.", "Going forward, I would have continued to expect the barbell dimension to persist until we actually get around to solving our urban schools and declining population levels. If anything, I would have thought that COVID skews you a bit younger because older people are more anxious and remember that cities can also bring pandemics. They remember that it can be a nice thing to have a suburban home if you have to shelter in place. So that might lead some people who would have otherwise relocated to a dense urban core to move out, to stay out.", "Urban Education", "Dwarkesh Patel 0:14:44", "You just mentioned urban schools, and I’m curious because you’ve written about how urban schools are one of the reasons people who have children might not want to stay in cities. I’m curious why it’s the case that American cities have some of the best colleges in the world, but for some reason, their K-to-12 is significantly worse, or it can be worse than the K-to-12 in other parts of the country. Why is it that the colleges are much better in cities, but K to 12 is worse?", "Edward Glaeser 0:15:19", "So it’s interesting. It’s not as if, I don’t think there’s ever been an Englishman who felt like they had to leave London to get better schools for the kids, or a Frenchman who thought they needed to leave Paris. It’s not like there’s something that’s intrinsic to cities, but I’ve always thought it’s a reflection of the fact that instead of allowing all of the competition and entrepreneurship that thrives in cities and that makes cities great, in the case of K to 12 public education, that’s vanished.", "And your example of colleges is exactly right. I’m in this industry; I’m a participant in this industry and let me tell you, this industry is pretty competitive. Whether or not we’re competing for the best students, at our level we go through an annual exercise of trying to make sure we get Ph.D. students to come to our program instead of our competitors, whether it’s by hiring faculty members or attracting undergraduates, we occupy a highly competitive industry where we are constantly aware of what we need to do to make ourselves better. It doesn’t mean that we’re great along every dimension, but at least we’re trying. K through 12 education has a local monopoly.", "So it’s like you take the great urban food, leisure and hospitality, and food industries, and instead of having in New York City by a hyper-competitive world where you constantly have entry, you say, “ You know what? We’re going to have one publicly managed canteen and it’s going to provide all the food in New York City and we’re not going to allow any competitors or the competitors are going to have to pay a totally different thing.” That canteen is probably going to serve pretty crappy food. That’s in some sense what happens when you have a large-scale public monopoly that replaces private competition.", "Dwarkesh Patel 0:16:50", "But isn’t that also true of rural schools? Why are urban schools often worse?", "Edward Glaeser 0:17:46", "There’s much more competition in suburban schools. So in terms of the suburban schools, typically there are lots of suburbs, and people are competing amongst them. The other thing that’s actually important is (I don’t want to over exaggerate this, but I think it is something that we need to think a little bit about) the role of public sector unions and particularly teachers unions in these cases. In the case of a suburban school district, the teachers union is no more empowered on the management side than they would be in the private sector.", "Dwarkesh Patel 0:17:30", "So in a normal private sector, you’ve got a large company, you’ve got a union, and they’re arguing with each other. It’s a level playing field. It’s all kind of reasonable. I’m not saying management has done awful things, and that unions have done foolish things. I’m not saying that either are perfect, but it’s kind of well-matched. It’s matched that way in the suburbs as well. You’ve got highly empowered parents who are highly focused on their kids and they’re not dominated.", "It’s not like the teachers union dominates elections in Westchester County. Whereas if you go into a big city school district, you have two things going on. One of which is the teachers tend to be highly involved politically and quite capable of influencing management essentially, because they are an electoral force to be reckoned with, not just by the direct votes, but also with their campaign spending. On top of this, you’re talking about a larger group of disparate parents, many of whom have lots of challenges to face and it becomes much harder for them to organize effectively on the other side. So for those reasons, big urban schools can do great things and many individual teachers can be fantastic, but it’s an ongoing challenge.", "Georgism, Robert Moses, & Too Much Democracy?", "Dwarkesh Patel 0:18:35", "What is your opinion on Georgism ? Do cities need a land value tax? Would it be better if all the other taxes are replaced by one?", "Edward Glaeser 0:18:41", "Okay. So Henry George , I don’t know any economist who doesn’t think that a land value tax is an attractive idea. The basic idea is we’re going to tax land rather than taxing real estate values. And you would probably implement this in practice by evaluating the real estate and then subtracting the cost of construction, (at least for anything that was built up, meaning you’d form some value of the structures and you just subtract it).", "The attractive thing from most of our perspectives is it doesn’t create the same disincentive to build that a real estate tax does. Real estate tax says, “Oh, you know what? I might want to keep this thing as a parking lot for a couple of years so I don’t have to pay taxes on it.”", "If it were a land value tax, you’re going to pay the same tax, whether or not it’s a parking lot or whether or not you’re going to put a high rise on it, so you might as well put the high rise on it and we could use the space. So I think by and large, that’s a perfectly sensible idea. I’d like to see more places using land value taxes or using land value taxes in exchange for property taxes.", "Where George got it wrong is the idea that a land value tax is going to solve all the problems of society or all the problems of cities. That is ludicrously not true.", "One could make an argument that in those places that just have a property tax, you could replace it with a land value tax with little loss, but at the national level, it’s not a particularly progressive tax in lots of ways. It would be hard to figure out how to fund all the things you want to fund, especially since there are lots of things that we do that are not very land intensive. I think George was imagining a world in which pretty much all value-creating enterprises had a lot of land engaged. So it’s a good idea, yes. A panacea, no.", "Dwarkesh Patel 0:20:20", "No, that’s a good point. I mean, Google’s offices in San Francisco are probably generating more value than you would surmise just from the quantity of land they have there. Do American cities need more great builders like Robert Moses?", "Edward Glaeser 0:20:36", "Robert Caro’s The Power Broker is one of the great biographies of the past 100 years, unquestionably. The only biography that I think is clearly better is Robert Caro’s biography of Lyndon Baines Johnson , right? I mean, it’s Caro is truly amazing. That being said, I would not exactly call it a fair and balanced view of Robert. I mean, it is true that Robert Moses was high handed, and it is true that there are things that he did that were terrible, that you never want to do again. But on the other hand, the man got stuff built. I mean, I think of myself as a child growing up in New York City, and whether or not it was the public pool that I swam in or the parks that I played in, or the roads that I traveled on, they were all delivered by Robert Moses. There’s got to be a middle ground, which is, no, we’re not going to run roughshod over the neighborhood as Robert Moses did, but we’re still going to build stuff. We’re still going to deliver new infrastructure and we’re not going to do it for 10 times more than every other country in the world does it.", "Dwarkesh Patel 0:21:37", "We’re actually going to have sensible procurement policies that bring in things at a reasonable cost, and I think we need to balance a little bit back towards Robert Moses in order to have slightly more empowered builders who actually are able to deliver American cities the infrastructure they need at an affordable cost.", "Dwarkesh Patel 0:21:57", "Do we have too much democracy at the local level? You wrote a paper in 2017 titled The Political Economy of Transportation Investments and one of the points you make there is that the local costs are much more salient to people for new construction than the public benefits, and the benefits to newcomers would be. Does that mean we have too much federalism? Should we just have far less power at the city level and not universally? There are lots of good things that local control does.", "Edward Glaeser 0:22:25", "I do think we have too much local ability to say no to new housing projects. So that’s a particular case that I’m focused on. I think it’s exactly right that the near neighbors to a project internalize all of the extra noise and perhaps extra traffic that they’re going to have due to this project. They probably overestimate it because they are engaging in a bit of status quo bias and they think the present is great and can’t imagine any change.", "By contrast, none of the people who would benefit from the new project are able to vote. All of the families that would love to move into this neighborhood are being zoned out by the insiders who get a say. I think the goal is to make sure that we have more ability to speak for outsiders. Cities at their best, are places where outsiders can find opportunities. That’s part of what’s so great about them. It’s a tragic thing that we make that so hard. Now I’m not sure exactly that I’m claiming that I want less democracy, but I do want more limitations on how much regulations localities can do. So I think there are certain limitations on local power that I think are fine.", "I would prefer to call this not a limitation on local democracy, but an increase in the protection of individual rights or the individual rights of landowners to do what they want with their land. Which in effect, is a limit on democracy. But the Bill of Rights is a limit on democracy! The Bill of Rights says that they don’t care if 51% of your voters want to take away your right to free assembly. They’re not allowed to do that. So in some sense, what I’m just arguing for is more property owners’ rights so that they can actually allow more housing in their building.", "In terms of transportation projects, it’s a little bit dicier because here the builder is the government itself. I think the question is you want everyone to have a voice. You don’t want every neighborhood to have a veto over every potential housing project or potential transportation project. So you need something that is done more at the state level with representation from the locality, but without the localities getting the ultimate say", "Dwarkesh Patel 0:24:33", "I wonder if that paper implies that I should be shorting highly educated areas, at least in terms of real estate. One of the things you mentioned in the paper was that highly educated areas that had much higher opposition were able to foment much more opposition.", "Edward Glaeser 0:24:49", "Okay. So here’s the real estate strategy, which I have heard that actually there are buyers who do this. You take an area that has historically been very pro-housing. So it’s got lots of housing, and it’s affordable right now because supply is good. But lots of educated people have moved in. Which means that going forward, they’re going to build much less, which means that going forward, they’re likely to become much more expensive. So you should, in fact, buy options on that stuff rather than shorting it. You should short if you have a security that is related to the population level in that community. You should short that because the population growth is going to go down, but the prices are likely to go up.", "Opioids, Automation, & UBI", "Dwarkesh Patel 0:25:29", "So you wrote a paper last year on the opioid epidemic. One of the points that you made there was that the opioid epidemic could be explained just by the demand side stuff about social isolation and joblessness. I wonder how this analysis makes you think about mass-scale automation in the future. What impact do you think that would have? Assume it’s paired with universal basic income or something like that. Do you think it would cause a tremendous increase in opioid abuse?", "Edward Glaeser 0:26:03", "I would have phrased it slightly differently–– which is as opposed to the work of two amazing economists, Anne Case and Angus Deaton , who really emphasized the role of deaths of despair; we are much more focused on the supply side. WIth the demand side, meaning just the way that we handled the distribution of large-scale pain relieving medicines, we tell a story where every 30 to 50 years, someone comes up with the same sort of idea, which is we know that human beings love opioids in different forms. We also know they’re highly addicted and lead to a terrible cycle. So all of a sudden comes along this innovator says, you know what? I’ve got a new opioid and it’s safe. You don’t have to worry about getting addicted to this one. It’s magical.", "It won’t work. 100 years ago, that thing was called heroin. 200 years ago, that thing was called morphine. 300 years ago, that thing was called Meldonium . We have these new drugs which have come in, and they’ve never been safe. But in our case, it was OxyContin and the magic of the time relief was supposed to make it safe, and it wasn’t safe.", "Dwarkesh Patel 0:27:30", "There’s a lot of great work that just shows that the patterns of opioid use was related to the places that just had a lot of pain 30 years ago. Those places came with a lot of tendency to prescribe various things for pain. So when opioids came in, when OxyContin came in, those were the places that got addicted most. Now it’s also true that there are links between these economic issues. There are links with joblessness, and I basically do believe that things that create joblessness are pretty terrible and are actually much worse than income inequality. I push back against the universal basic income advocates who I think are basically engaging in a materialist fallacy of thinking that a human being’s life is shaped by their take home pay or their unearned pay.", "I think for most people, a job is much more than that. A job is a sense of purpose. A job is a sense of social connection. When you look at human misery and opioid use, you look at the difference between high-income earners, mid-income earners. There are differences, but they’re small. You then look at the difference between low-income earners and the jobless, then unhappiness spikes enormously, misery spikes enormously, family breakups spike enormously. So things like universal basic income, which the negative income tax experimented on in the 1970s, are the closest thing we have for its large-scale experiments in this area, which had very large effects on joblessness by just giving people money.", "They feel quite dangerous to me because they feel like they’re going to play into rising joblessness in America, which feels like a path for its misery. I want to just quickly deviate and some of the UBI advocates have brought together UBI in the US and UBI in the developing world. So UBI in the developing world, basically means that you give poor farmers in Sub-Saharan Africa fairly modest amounts of money. This is a totally sensible strategy.", "These people are not about to live life permanently not working. They’re darn poor. It’s very efficient relative to other ways of giving.  I am in no sense pushing back on UBI with modest amounts of money in the poorest parts of the world. By all means, it’s been deemed to be effective. It’s just a very different thing if you’re saying I’m going to give $100 to a poor Congolese farmer, or I’m going to give $10,000 to a long-term jobless person in Eastern Kentucky. You’re not buying a PS5 for $100 in Congo.", "Remote Work, Taxation, & Metaverse", "Dwarkesh Patel 0:29:57", "I want to ask you about remote work. You write in The Survival of the City , that improvements in information technology can lead to more demand for face-to-face contact because FaceTime complements time spent communicating electronically. I’m curious, what distinguishes situations where FaceTime substitutes for in-person contact from situations where it complements FaceTime complements virtual contact?", "Edward Glaeser 0:30:25", "So there’s not a universal rule on this. I wrote a paper based on this in the 1990s about face-to-face contact complements or substitutes for electronic contacts. It was really based on a lot of anxiety in the 1970s that the information technology of their day, the fax machine, the personal computer was going to make face-to-face contact in the cities that enable that contact obsolete. That discussion has reappeared amazingly in the past two and a half years because of Zoom, and all of those questions still resonate. I think in the short run, typically these things are substitutes.Typically you don’t necessarily need to meet some person who’s your long-term contact. You can actually just telephone them, or you can connect with them electronically. In the long run, they seem to be much more likely to be complements, in part because these technologies change our world.", "The story that I tell over the last 40 years is that, yes, there were some face-to-face contacts that were made unnecessary because of electronic interactions. But it’s not just that cities did well over the past 40 years–– business travel went through the roof over the past 40 years. You’d think that that would have been made unnecessary by all these electronic interactions, but I think what just happened was that these new technologies and globalization created a more interconnected world, a world in which knowledge was more important, and we become smart by interacting with people face-to-face. This world became more knowledge and information intensive and more complicated, and as things get more complicated, it’s easier for ideas to get lost in translation. So we have these wonderful cues for communicating comprehension or confusion that are lost when we’re not in the same room with one another. So I think over the longer time, they tend to complements, and over the shorter term, they tend to be substitutes.", "One of the things I think was helpful in my earlier work on this was looking at the history of information technology innovations. I think probably the first one is the book. It’s hard to imagine an innovation that did more to flatten distance. Now you can read stuff that people are saying hundreds of miles away. Yet there’s not a shred of evidence that the book led to less urbanization in Europe or to less connection. It helped create a totally different world in which people were passionate about ideas and wanted to talk to each other. They wanted to talk to each other about their books.", "Flash forward 350 years when we have the telephone. Telephones started being used more in cities, and they were used mostly by people who were going to meet face-to-face. There’s no evidence that this has created a decline in the demand for face-to-face contact or a decline in the demand for cities. So I think if we look at Zoom, which unquestionably has allowed a certain amount of long-distance contact, that’s very, very useful. In the short run, it certainly poses a threat to urban office markets. My guess is in the long run; it’s probably going to be likely to be neutral at worst for face-to-face contact in the cities that enable that contact.", "Dwarkesh Patel 0:33:37", "I think that my podcast has been a great example for me about this. I mean, right now we’re talking virtually. So maybe, in a way it’s substituted, and perhaps I would have interviewed in person without the podcast. However, in another way, I’ve also met so many people that I’ve interviewed on the podcast or who have just connected with me because of the podcast in person. The amount of in-person interactions I’ve had because of a virtual podcast is a great anecdote to what you’re talking about, so that makes total sense.", "Edward Glaeser 0:34:05", "Absolutely.", "Dwarkesh Patel 0:34:06", "Why do even the best software engineers in India or in Europe make so much less when they’re working remotely from those locations than remote engineers working in America make? I mean, why don’t employers just pay them more until the price discrepancy goes away?", "Edward Glaeser 0:34:23", "That’s interesting. I don’t fully know the answer to that question. I would suspect some of it just has to do with the nature of supply and demand. There are some things that are just very hard to be done remotely. Either because you have very precise informational needs that you have that are easier to communicate to people who are nearby or the person who’s nearby has evolved in ways in terms of their mind that they actually know exactly what you want and they have exactly the product that you need. So even though the remote call center worker and the local one may be totally equivalent on raw programming talent, you may still end up in equilibrium and be willing to pay a lot more to the local one just because, right?", "So there’s a slightly differentiated skill the local one has, and look, there’s just a lot of competition for the remote ones, so the price is going to be pretty low. There’s not that much supply of the one guy who’s down the hall and knows exactly what you’re looking for. So that guy gets much higher wages, just because he can offer you something that no one else can exactly reproduce.", "Dwarkesh Patel 0:35:27", "Let me clarify my question. Even remote engineers in America will make more than remote engineers in Europe or in India. If somebody is working remotely but he just happens to live in the US, is that just because they can communicate in English in the same way?", "Edward Glaeser 0:35:54", "I would take the same stance. I would say that they’re likely to have just skills that are somewhat idiosyncratic and are valued in the US context.", "Dwarkesh Patel 0:35:56", "Are you optimistic about the ability of the metaverse and VR to be able to better puncture whatever makes in-person contact so valuable?", "Edward Glaeser 0:36:19", "No, I do not think the metaverse is going to change very much. I do think that there will be a lot of hours spent on various forms of gaming over the next 20 years, but I don’t think it ultimately poses much of a threat to real-world interactions. In some sense, we saw this with the teenage world over the last three years. We saw a lot of America spend an awful lot of time, 15, 16-year-olds, 17-year-olds, gaming and connecting entirely virtually during the whole time of the pandemic lockdowns.", "Every single person that I’ve seen in that cohort, when you allowed them to interact with real members of their group live, leaped at the opportunity. They leaped at the opportunity of meeting and actually hanging out with real people until three o’clock in the morning and arguing over whatever it is–– whether or not it’s football or Kant. I think particularly for the young, living life live just beats the alternative.", "Dwarkesh Patel 0:37:05", "That sounds like a very Harvard scenario, having to argue over football or Kant, those two topics. [laughs] Are you predicting lower taxes over the coming decades in places like California and New York, specifically because of how remote work sets a sort of maximum bar of how much you can tax highly productive people before they will just leave?", "Edward Glaeser 0:37:29", "This is a great question. It’s a central issue of our day. Here’s how I think about it. In part, it’s why I wrote my recent book, Survival of the City . It’s because I was worried about this. Two things happened simultaneously. One, as you correctly say, Zoom has made it easier to connect anywhere. I don’t think that Zoom is going to cause our tech startup currently in Silicon Valley to say, oh, you know what? We’re just going to go home to our Orange County suburban homes and never meet live again. I think that’s a low-probability event.", "But what seems to be a perfectly high probability event is saying, “Oh, we can Zoom with our VCs, we can Zoom with our lawyers. Why don’t we just relocate to Austin, Texas, not pay taxes, or relocate to Boulder, Colorado, so we can have beautiful scenery, or relocate to Honolulu so we can surf? ” That seems like we’ve made the ability for smart people to relocate much easier, even if they’re going to keep on seeing each other in the office three or four days a week. That collides with this very fervent desire to deal with festering social inequities at the local level. Be this limited upward mobility for poorer people, be this high housing costs, be this the rise of mass incarceration and police brutality towards particularly minority groups. There’s this progressive urge which runs up against the fact that the rich guys can run away.", "If your model, which says, “Oh, the local governments are going to realize the rich guys can run away, so they will seamlessly lower tax rates in order to make sure that they attract those people,” that’s running up against the fact that there’s a whole lot of energy on the progressive side, which says, “ No! Massachusetts just passed a millionaire’s tax. For the first time ever, we have the possibility to have a progressive tax, which feels extraordinarily dangerous given this time period.”", "I think we may need to see a bunch of errors in this area before we start getting things right. We went through a lot of pain in the 1970s as cities first tried to deal with their progressive goals and rich people and companies ran away, and it wasn’t until the 1980s that people started realizing this was the path to local bankruptcy and that we had real city limits on what the locality could do.", "Dwarkesh Patel 0:39:44", "You cited research on the survival of the city, which said that firms like Microsoft were much less willing to hire new people once they went online because of the pandemic. What do you make of the theory that this is similar to when industrialization first hit and we hadn’t figured out exactly how to make the most use of it to be most productive, but over the long run, somebody will do to remote work what Henry Ford did to the factory floor and in fact, just make it much more effective and efficient than in-person contact just because we’ll have better ways of interacting with people through remote work, since we’ll have better systems?", "Dwarkesh Patel 0:40:17", "It’s entirely possible. I never like betting against the ingenuity of humanity. On the other hand, you need a lot of technology to override 5 million years of evolution. We have evolved to be an in-person species, not just because we’re productive and learn a lot face-to-face, but also because we just like it. A world of hyper-efficient remote work where you basically are puttering around your apartment doing things very quickly and getting things done, doesn’t sound particularly joyful to me.", "Workplaces are not just places of productivity; they’re also places of pleasure, particularly at the high end. Remember in 2019 and earlier, Google, and Yahoo, the companies that should have had the biggest capacity to do remote stuff, weren’t sending their workers home; they were building these paradises for high-skilled workers, stuffed with foosball tables and free snacks and whatever else they had in these giant campuses in the Google lex. So they were certainly betting on the power of face-to-face and creativity rather than on the ability of remote work to make everything work. I think the most reasonable view, let’s say that of Nick Bloom of Stanford, is that for those types of workers, 20% of your week being hybrid, maybe 40%, seems quite possible.", "That seems like a thing, particularly for workers who have families who really value that degree of flexibility. But fully remote, I guess that’s a pretty niche thing. There’s some jobs like call center workers where you could imagine it being the norm, but in part, that’s just because it’s just hard to learn the same amount remotely that you do face-to-face. This came out both in the earlier Bloom study on remote call center workers in China and on more recent work by Natalia Emmanuel and Emma Harrington . Both studies found the same thing, which is in these call centers, are plenty productive when they’re remote, but the probability of being promoted drops by 50%.", "The entrepreneur may make it very efficient to do things in the short run remotely, but they’re going to turn off this tendency that we have to be able to learn things from people around us, which is just much harder to duplicate remotely.", "Past & Future of Silicon Valley", "Dwarkesh Patel 0:42:29", "Now, I’m curious why Silicon Valley became the hub of technology. You wrote a paper in 2018 about where pioneer and non-pioneer firms locate. So, I was hoping you had insight on this. Does it stand for it? Is it where Fairchild Semiconductor is located? What is the explanation?", "Edward Glaeser 0:42:48", "So, we take it as being earlier. It is Stanford. I traced through this, I think in Triumph. Yeah, it was a company called Federal Telegraph Company that was founded by a guy called Cyril Frank Elwell , who was a radio pioneer, and he was tapped by his teacher to head this radio company. The story was, as I remember it, there’d been this local genius in San Francisco who had attracted all these investors and was going to do this wireless telegraphy company. Then he died in a freak carriage accident.", "These investors wanted to find someone else, and they went to Stanford’s nearby factory and asked, who should we hire? It was this guy Elwell who founded Federal Telegraph. Federal Telegraph then licensed, I think Danish technology which was originally the Poulsen Telegraph Company. They then hired some fairly bright people like Lee DeForest and they did incredibly well in World War I off of federal Navy contracts, off of Navy contracts. They then did things like providing jobs for people like the young Fred Terman , whose father was a Stanford scientist. Now, Fred Terman then plays an outsized role in this story because he goes to MIT, studies engineering there, and then comes back to become Dean of Stanford’s engineering program.", "He really played an outsized role in setting up the Stanford Industrial Park which attracting Fairchild Semiconductor . Then there’s this sort of random thing about how the Fairchild Semiconductor attracts these people and then repels them because you have this brilliant guy Shockley, right? He’s both brilliant and sort of personally abhorrent and manages to attract brilliant people and then repel all of them. So they all end up dispersing themselves into different companies, and they create this incredibly creative ecosystem that is the heart of Silicon Valley.", "In its day, it had this combination of really smart people and really entrepreneurial ethos, which just made it very, very healthy. I think the thing that many of us worry about is that Silicon Valley more recently, feels much more like it’s a one-industry town, which is dangerous. It feels more like it’s a bunch of industrial behemoths rather than a bunch of smart and scrappy startups. That’s a recipe that feels much more like Detroit in the 1950s than it does like Silicon Valley in the 1960s.", "Dwarkesh Patel 0:45:52", "Speaking of startups, what does your study of cities imply about where tech startups should locate and what kind of organization in person or otherwise they should have? I think there’s a lot to like about in person, certainly. Relying too much on remote feels quite dangerous if you’re a scrappy startup. But I like a lot the Sunbelt smart cities.", "I sort of have a two-factor model of economic growth, which is it’s about education, and it’s about having governments that are pro-business. If you think about sort of the US, there’s a lot of heterogeneity in this. If you think about the US versus other countries, it’s heterogeneity. So the US has historically been better at being pro-business than, let’s say, the Northern European social democracies, but the Northern European social democracies are great on the education front.", "So places like Sweden and the Netherlands, and Germany are also very successful places because they have enough education to counter the fact that they may not necessarily be as pro-business as the US is. Within the US, you also have this balance, whereas places like Massachusetts, and California are certainly much less pro-business, but they’re pretty well-educated. Other parts of the country may be more pro-business, but they’re less so. The real secret sauce is finding those places that are both highly educated and pro-business.", "So those are places like Charlotte and Austin and even Atlanta, places in the Sun Belt that have attracted lots of skilled people. They’ve done very, very well during COVID. I mean, Austin, by most dimensions, is the superstar of the COVID era in terms of just attracting people. So I think you had to wait for the real estate prices to come down a bit in Austin, but those are the places that I would be looking at.", "Dwarkesh Patel 0:47:46", "I don’t know if you know, but I live in Austin, actually.", "Edward Glaeser 0:47:50", "I did not know that. [laughs]", "Dwarkesh Patel 0:47:54", "Well, actually, I’m surprised about what you said about education because you write in the paper, “general knowledge measured as average years of schooling is not a strong determinant of the survival of a pioneer firm, but relatedness of knowledge between past and present activities is.” So I’m surprised that you think education is a strong determinant for pioneer firms.", "Edward Glaeser 0:48:15", "No, I’m a big human capital determinist. So I tend to believe that individuals, cities, and nations rise and fall based on their skill levels. Certainly, if you look over the last 40 or 50 years, skills are very predictive of which cities (particularly colder cities) manage to do well versus poorly. If you ask yourself why Detroit and Seattle look different, more than 50% of Seattle’s adults have college degrees, and maybe 14, 15% of Detroit’s adults do.", "That’s just a huge, huge gap. Certainly, when we think about your punitive startup, you’re going to be looking for talent, right? You’re going to be looking to hire talent, and having lots of educated people around you is going to be helpful for that.", "Housing Reform", "Dwarkesh Patel 0:48:56", "Let’s talk about housing. Houston has basically very little to no zoning. Why is it not more of interesting today? Nobody goes to Houston for tourism.", "Edward Glaeser 0:49:07", "I have. [laughs] I have, in fact, gone to Houston for tourism. Although part of it, I admit, was to look at the housing and to go to the woodlands and look at that. Interesting has a lot to do with age in this country. So the more that a city has… Boston is good for tourism just because it’s been around for a long time, and it hasn’t changed all that much. So it has this sort of historical thing. Houston’s a new place, not just in the sense that the chronological age is lower but also in the sense that it’s just grown so much, and it’s dominated by new stuff, right?", "That new stuff tends to be more homogenous. It tends to have less history on it. I think those are things that make new cities typically less interesting than older cities. As witnessed by the fact that Rome, Jerusalem, London, are great tourist capitals of the world because they’ve just accreted all this interesting stuff over the millennium. So I think that’s part of it. I’m not sure that if we look at more highly zoned new cities, we’re so confident that they’re all that more interesting.", "I don’t want to be particularly disparaging any one city. So I’m not going to choose that, but there’s actually a bunch that’s pretty interesting in Houston, and I’m not sure that I would say that it’s any less interesting than any comparably aged city in the country.", "Dwarkesh Patel 0:50:35", "Yeah. I’m visiting Houston later this month. I asked my friend there, should I stay here longer? I mean, is there anything interesting to do here? And then he responds, “Well, it’s the fourth biggest city in the country, so no.”", "Dwarkesh Patel 0:50:47", "Many people, including many economists, have said that we should drastically increase US population through immigration to a figure like 1 billion. Do you think that our cities could accommodate that? We have the infrastructure, and I mean, let’s say we reformed housing over a decade or so. Could we accommodate such a large influx of people?", "Edward Glaeser 0:51:24", "A billion people in a decade? I love the vision. Basically, in my heart, I’m an open borders person, right? I mean, it’s a moral thing. I don’t really like the idea that I get to enjoy the privileges of being an American and think that I’m going to deny that opportunity to anyone else. So I love this vision. A billion people over 10 years is an unimaginably large amount of people over a relatively short period of time. I’d love to give it a shot. I mean, it’s certainly not as if there’s any long-term reason why you couldn’t do it.", "I mean, goodness knows we’ve got more than enough space in this country. It would be exciting to do that. But it would require a lot of reform in the housing space and require a fair amount of reform in the infrastructure space as well to be able to do this at some kind of large scale affordability.", "Dwarkesh Patel 0:52:05", "What does the evidence show about public libraries? Do they matter?", "Dwarkesh Patel 0:52:09", "My friend Eric Kleinberg has written a great book about… I think it’s called Palaces for the People about all the different functions that libraries have played. I’ve never seen anything statistically or systematically about this, but you’re not going to get a scholar to speak against books. It’s not a possible thing.", "Europe’s Stagnation, Mumbai’s Safety, & Climate Change", "Dwarkesh Patel 0:52:32", "Why do European cities seem so much more similar to what they look like decades or even centuries ago than American cities, even American cities that are old, obviously not as old as European cities, but they seem to change much more over time.", "Edward Glaeser 0:52:46", "Lower population growth, much tougher zoning, much tougher historic preservation. All three of these things are going on. So it’s very difficult to build in European cities. There’s a lot of attention to caring about history. It’s often part of the nationalist narrative. You often have huge amounts of national dollars going to preserve local stuff and relatively lower levels of population growth.", "An extreme example of this is Warsaw, where central Warsaw is completely destroyed during World War II, and they built it up to look exactly like it looked before the bombing. So this is a national choice, which is unlikely that we would necessarily make here in the US.", "Dwarkesh Patel 0:53:27", "Yeah. I was in Mumbai earlier this year, and I visited Dharavi , which is the biggest slum in Asia. And it’s a pretty safe place for a slum. Why are slums in different countries? Why do they often have different levels of how safe they are? What is the reason?", "Edward Glaeser 0:53:45", "I, too, have been in Dharavi and felt perfectly safe. It’s like walking around Belgravia and London in terms of it. I think my model of Dharavi is the same model as Jane Jacobs's model of Greenwich Village in 1960 , which is this is just a well-functioning community.", "People have eyes on the street. If you’re a stranger in these areas, they’re going to be looking at you, and it’s a community that just functions. There are lots of low-income communities throughout the world that have this. It requires a certain amount of permanence. So if the community is too much in flux, it becomes hard to enforce these norms and hard to enforce these sort of community rules. It’s really helpful if there aren’t either a massive number of guns floating around or an unbelievably lucrative narcotics trade, which is in the area. Those are both things that make things incredibly hard. Furthermore, US drug policy has partially been responsible for creating violence in some of the poor parts of Latin American cities.", "Dwarkesh Patel 0:54:43", "Maybe you don’t play video games enough to know the answer to this question. But I’m curious, is there any video game, any strategic video game like Civilization or Europa that you feel does a good job representing the economics of cities?", "Edward Glaeser 0:55:07", "No, I will say that when I was in graduate school, I spent a few hours playing something called Sim City. I did think that was very fun. But I’m not going to claim that I think that it got it right. That was probably my largest engagement with city-building video games.", "Dwarkesh Patel 0:55:12", "What would you say we understand least about how cities work?", "Edward Glaeser 0:55:18", "I’m going to say the largest unsolved problem in cities is what the heck we’re going to do about climate change and the cities of the developing world. This is the thing I do not feel like I have any answer for in terms of how it is that we’re going to stop Manila or Mumbai from being leveled by some water-related climate event that we haven’t yet foreseen.", "We think that we’re going to spend tens of billions of dollars to protect New York and Miami, and that’s going to happen; but the thing I don’t understand and something we really need to need to invest in terms of knowledge creation is what are we going to do with the low-lying cities of the developing world to make them safe.", "Dwarkesh Patel 0:55:54", "Okay. Your most recent book is Survival of the City . And before that Triumph of the City , both of which I highly recommend to readers. Professor Glaeser, thank you so much for coming on the podcast. This was very interesting.", "Edward Glaeser 0:56:05", "I enjoyed this a lot. Thank you so much for having me on. I had a great deal of fun." ]
[ "https://en.wikipedia.org/wiki/Commissioners%27_Plan_of_1811", "https://www.economist.com/schools-brief/2017/07/29/coases-theory-of-the-firm", "https://www.youtube.com/watch?v=qrkwgEUXyTU", "https://scholar.harvard.edu/glaeser/publications/jobs-heartland-place-based-policies-21st-century-america", "https://www.sears.com/", "https://www.merriam-webster.com/dictionary/roebuck", "https://en.wikipedia.org/wiki/Georgism", "https://en.wikipedia.org/wiki/Henry_George", "https://en.wikipedia.org/wiki/The_Power_Broker", "https://www.amazon.com/Robert-Caros-Years-Lyndon-Johnson/dp/038535147X", "https://www.sciencedirect.com/science/article/pii/S2212012217300771", "https://www.archives.gov/founding-docs/bill-of-rights-transcript", "https://www.nber.org/system/files/working_papers/w28873/w28873.pdf", "https://scholar.princeton.edu/accase/home", "https://en.wikipedia.org/wiki/Angus_Deaton", "https://en.wikipedia.org/wiki/Meldonium", "https://www.webmd.com/drugs/2/drug-2798/oxycontin-oral/details", "https://www.webmd.com/drugs/2/drug-2798/oxycontin-oral/details", "https://www.amazon.com/Survival-City-Living-Thriving-Isolation/dp/0593297687", "https://www.penguinrandomhouse.com/books/669805/survival-of-the-city-by-edward-glaeser-and-david-cutler/", "https://www.gsb.stanford.edu/faculty-research/working-papers/does-working-home-work-evidence-chinese-experiment", "https://www.instagram.com/nathalieemmanuel/", "https://scholar.harvard.edu/eharrington/home", "https://ideas.repec.org/p/nbr/nberwo/24868.html", "https://en.wikipedia.org/wiki/Fairchild_Semiconductor", "https://en.wikipedia.org/wiki/Federal_Telegraph_Company", "https://en.wikipedia.org/wiki/Cyril_Frank_Elwell", "https://en.wikipedia.org/wiki/Cyril_Frank_Elwell", "https://en.wikipedia.org/wiki/Cyril_Frank_Elwell", "https://en.wikipedia.org/wiki/Cyril_Frank_Elwell", "https://artsandculture.google.com/asset/poulsen-wireless-telephone-and-telegraph-company-stock-certificate-1909/HAG04MkUOQndVg", "https://artsandculture.google.com/asset/poulsen-wireless-telephone-and-telegraph-company-stock-certificate-1909/HAG04MkUOQndVg", "https://artsandculture.google.com/asset/poulsen-wireless-telephone-and-telegraph-company-stock-certificate-1909/HAG04MkUOQndVg", "https://en.wikipedia.org/wiki/Lee_de_Forest", "https://en.wikipedia.org/wiki/Frederick_Terman", "https://en.wikipedia.org/wiki/Stanford_Research_Park", "https://en.wikipedia.org/wiki/Fairchild_Semiconductor", "https://en.wikipedia.org/wiki/Traitorous_eight", "https://ui.charlotte.edu/story/sun-belt-cities-are-driving-much-our-urban-growth-let%E2%80%99s-study-them", "https://www.ebkphoto.com/", "https://www.penguinrandomhouse.com/books/557044/palaces-for-the-people-by-eric-klinenberg/", "https://en.wikipedia.org/wiki/Dharavi", "https://en.wikipedia.org/wiki/Belgravia", "https://www.theguardian.com/cities/2016/apr/28/story-cities-32-new-york-jane-jacobs-robert-moses", "https://civilization.com/", "https://www.paradoxinteractive.com/games/europa-universalis-iv", "https://www.amazon.com/Survival-City-Living-Thriving-Isolation/dp/0593297687", "https://www.amazon.com/Triumph-City-Greatest-Invention-Healthier/dp/0143120549" ]
https://www.dwarkesh.com/p/ege-tamay
AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu
[ "AGI will take another 3 decades", "Dwarkesh Patel 00:00:00 Today, I’m chatting with Tamay Besiroglu and Ege Erdil . They were previously running Epoch AI and are now launching Mechanize, which is a company dedicated to automating all work. One of the interesting points you made recently, Tamay, is that the whole idea of the intelligence explosion is mistaken or misleading. Why don’t you explain what you’re talking about there? Tamay Besiroglu 00:00:22 Yeah, I think it’s not a very useful concept. It’s kind of like calling the Industrial Revolution a horsepower explosion. Sure, during the Industrial Revolution, we saw this drastic acceleration in raw physical power, but there are many other things that were maybe equally important in explaining the acceleration of growth and technological change that we saw during the Industrial Revolution. Dwarkesh Patel 00:00:42 What is a way to characterize the broader set of things that the horsepower perspective would miss about the Industrial Revolution? Tamay Besiroglu 00:00:50 So I think in the case of the Industrial Revolution, it was a bunch of these complementary changes to many different sectors in the economy. So you had agriculture, you had transportation, you had law and finance, you had urbanization and moving from rural areas into cities. There were just many different innovations that happened simultaneously that gave rise to this change in the way of economically organizing our society. It wasn’t just that we had more horsepower. I mean, that was part of it, but that’s not the kind of central thing to focus on when thinking about the Industrial Revolution. And I think similarly, for the development of AI, sure, we’ll get a lot of very smart AI systems, but that will be one part among very many different moving parts that explain why we expect to get this transition and this acceleration and growth and technological change. Dwarkesh Patel 00:01:46 I want to better understand how you think about that broader transformation. Before we do, the other really interesting part of your worldview is that you have longer timelines to get to AGI than most of the people in San Francisco who think about AI. When do you expect a drop-in remote worker replacement? Ege Erdil 00:02:05 Maybe for me, that would be around 2045. Dwarkesh Patel 00:02:10 Wow. Wait, and you? Tamay Besiroglu 00:02:11 Again, I’m a little bit more bullish. I mean, it depends what you mean by “drop in remote worker“ and whether it’s able to do literally everything that can be done remotely, or do most things. Ege Erdil 00:02:21 I’m saying literally everything. Tamay Besiroglu 00:02:22 For literally everything. Just shade Ege’s predictions by five years or by 20% or something. Dwarkesh Patel 00:02:27 Why? Because we’ve seen so much progress over even the last few years. We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college. I mean, I did become a podcaster, I’m not saying I’m the best coder in the world. But if you made this much progress in the last two years, why would it take another 30 to get to full automation of remote work? Ege Erdil 00:03:01 So I think that a lot of people have this intuition that progress has been very fast. They look at the trend lines and just extrapolate; obviously, it’s going to happen in, I don’t know, 2027 or 2030 or whatever. They’re just very bullish. And obviously, that’s not a thing you can literally do. There isn’t a trend you can literally extrapolate of “when do we get to full automation?”. Because if you look at the fraction of the economy that is actually automated by AI, it’s very small. So if you just extrapolate that trend, which is something, say, Robin Hanson likes to do, you’re going to say, “well, it’s going to take centuries” or something. Now, we don’t agree with that view. But I think one way of thinking about this is how many big things are there? How many core capabilities, competences are there that the AI systems need to be good at in order to have this very broad economic impact, maybe 10x acceleration and growth or something? How many things have you gotten over the past 10 years, 15 years? And we also have this compute-centric view… Tamay Besiroglu 00:04:05 So just to double click on that, I think what Ege is referring to is, if you look at the past 10 years of AI progress, we’ve gone through about nine or 10 orders of magnitude of compute, and we got various capabilities that were unlocked. So in the early period, people were solving gameplay on specific games, on very complex games. And that happened from 2015 to 2020, Go and Chess and Dota and other games. And then you had maybe sophisticated language capabilities that were unlocked with these large language models, and maybe advanced abstract reasoning and coding and maybe math. That was maybe another big capability that got unlocked. And so maybe there are a couple of these big unlocks that happened over the past 10 years, but that happened on the order of once every three years or so, or maybe one every three orders of magnitude of compute scaling. And then you might ask the question, “how many more such competencies might we need to unlock in order to be able to have an AI system that can match the capabilities of humans across the board?” Maybe specifically just on remote work tasks. And so then you might ask, well, maybe you need kind of coherence over very long horizons, or you need agency and autonomy, or maybe you need full multimodal understanding, just like a human would. And then you ask the question, “okay, how long might that take?” And so you can think about, well, just in terms of calendar years, the previous unlocks took about, you get one every three years or so. But of course, that previous period coincided with this rapid scale-up of the amount of compute that we use for training. So we went through maybe 9 or 10 orders of magnitude since AlexNet compared to the biggest models we have today. And we’re getting to a level where it’s becoming harder and harder to scale up compute. And we’ve done some extrapolations and some analysis looking at specific constraints, like energy or GPU production. And based on that, it looks like we might have maybe three or four orders of magnitude of scaling left. And then you’re really spending a pretty sizable fraction or a non-trivial fraction of world output on just building up data centers, energy infrastructure, fabs, and so on. Dwarkesh Patel 00:06:40 Which is already like 2% of GDP, right? Tamay Besiroglu 00:06:42 I mean, currently it’s less than 2%. Ege Erdil 00:06:44 Yeah, but also currently most of it is actually not going towards AI chips. But even most TSMC capacity currently is going towards mobile phone chips or something like that, right? Dwarkesh Patel 00:06:52 Even leading edge. It’s like 5% of leading edge. Tamay Besiroglu 00:06:55 Yeah, even leading edge is pretty small. But yeah, so that suggests that we might need a lot more compute scaling to get these additional capabilities to be unlocked. And then there’s a question of do we really have that in us as an economy to be able to sustain that scaling? Dwarkesh Patel 00:07:14 But it seems like you have this intuition that there’s just a lot left to intelligence. When you play with these models, they’re almost there. You forget you’re often talking to an AI. Ege Erdil 00:07:26 What do you mean they’re almost there? I don’t know. I can’t ask Claude to pick up this cup and put it over there. Dwarkesh Patel 00:07:31 Remote work, you know? Ege Erdil 00:07:32 Okay. But even for remote work, I can’t ask Claude to… I think the current computer use systems can’t even book a flight properly. Dwarkesh Patel 00:07:38 How much of an update would it be if by the end of 2026, they could book a flight? Ege Erdil 00:07:43 I probably think by the end of this year, they’re going to be able to do that. But that’s a very simple… Nobody gets a job where they’re paid to book flights. That’s not a task. Dwarkesh Patel 00:07:54 I think some people do. Tamay Besiroglu 00:07:56 If it’s literally just a book flight job, and without- Ege Erdil 00:08:00 But I think that’s an important point, because a lot of people look at jobs in the economy, and then they’re like, “oh, that person, their job is to just do X”. But then that’s not true. That’s something they do in their job. But if you look at the fraction of their time on the job that they spend on doing that, it’s a very small fraction of what they actually do. It’s just this popular conception people have. Or travel agents, they just book hotels and flights. But that’s not actually most of their job. So automating that actually wouldn’t automate their job, and it wouldn’t have that much of an impact on the economy. So I think this is actually an important thing, that important worldview difference that separates us from people who are much more bullish, because they think jobs in the economy are much simpler in some sense, and they’re going to take much fewer competences to actually fully automate. Dwarkesh Patel 00:08:47 So our friend Leopold has this perspective of, quote unquote, ‘unhobblings’, where the way to characterize it might be, they’re basically like baby AGIs already. And then because of the constraints we artificially impose upon them by, for example, only training them on text and not giving them the training data that is necessary for them to understand a Slack environment or a Gmail environment, or previously before inference time scaling, not giving them the chance to meditate upon what they’re saying and really think it through, and not giving them the context about what is actually involved in this job, only giving them this piecemeal, a couple of minutes worth of context in the prompt, we’re holding back what is fundamentally a little intelligence from being as productive as it could be, which implies that unhobblings just seem easier to solve for than entirely new capabilities of intelligence. What do you make of that framework? Tamay Besiroglu 00:09:46 I mean, I guess you could have made similar points five years ago and say “you look at AlphaZero and there’s this mini AGI there, and if only you unhobbled it by training it on text and giving it all your context” and so on, that just wouldn’t really have worked. I think you do really need to rethink how you train these models in order to get these capabilities. Dwarkesh Patel 00:10:08 But I think the surprising thing over the last few years has been that you can start off with this pre-trained corpus of the internet, and it’s actually quite easy. ChatGPT is an example of this unhobbling, where 1% of additional compute spent on getting it to talk in a chatbot-like fashion with post training is enough to make it competent- really competent- at that capability. Reasoning is another example where it seems like the amount of compute that is spent on RL right now in these models is a small fraction of total compute. Again, reasoning seems complicated, and then you just do 1% of compute and it gets you that. Why not think that computer use, or long-term agency on computer use, is a similar thing? Tamay Besiroglu 00:10:55 So when you say “reasoning is easy” and “it only took this much compute” and “it wasn’t very much”, and maybe “you look at the sheer number of tokens and it wasn’t very much, and so it looks easy”, well, that’s true from our position today. But I think if you ask someone to build a reasoning model in 2015, then it would have looked insurmountable. You would have had to train a model on tens of thousands of GPUs, you would have had to solve that problem, and each order of magnitude of scaling from where they were would pose new challenges that they would need to solve. You would need to produce internet scale, or tens of trillions of tokens of data in order to actually train a model that has the knowledge that you can then unlock and access by way of training it to be a reasoning model. You need to maybe make the model more efficient at doing inference and maybe distill it, because if it’s very slow then you have a reasoning model that’s not particularly useful, so you also need to make various innovations to get the model to be distilled so that you can train it more quickly, because these rollouts take very long. It actually becomes a product that’s valuable if it’s a couple tokens a second, as a reasoning model that would have been very difficult to work with. So in some sense, it looks easy from our point of view, standing on this huge stack of technology that we’ve built up over the past five years or so, but at the time, it would have been very hard. And so my claim would be something like; I think the agency part might be easy in a similar sense, that in five years or three years time or whatever we will look at what unlocked agency and it’ll look fairly simple. But the amount of work that, in terms of these complementary innovations that enable the model to be able to learn how to become a competent agent, that might have just been very difficult and taken years of innovation and a bunch of improvements in kind of hardware and scaling and various other things. Dwarkesh Patel 00:12:54 Yeah, I feel like what’s dissimilar between 2015 and now… in 2015 if you were trying to solve reasoning, you just didn’t have a base to start on. Maybe if you tried formal proof methods or something, but there was no leg to stand on, where now you’d actually have the thing- you have the pre-trained base model, you have these techniques of scaffolding, of post-training, of RL. And so it seems like you think that those will look to the future as, say, AlphaGo looks to us now in terms of the basis of a broader intelligence. I’m curious if you have intuitions on why not think that language models as we have them now are like, we got the big missing piece right and now we’re just like plugging things on top of it? Ege Erdil 00:13:51 Well, I mean, I guess what is the reason for believing that? I mean, you could have looked at AlphaGo or AlphaGo Zero, AlphaZero, those seemed very impressive at the time. I mean, you’re just learning to play this game with no human knowledge, you’re just learning to play it from scratch. And I think at the time it did impress a lot of people. But then people tried to apply it to math, they tried to apply it to other domains, and it didn’t work very well, they weren’t able to get competent agents at math. So it’s very possible that these models, at least the way we have them right now, you’re going to try to do the same thing people did for reasoning, but for agency, it’s not going to work very well. And then you’re not going to- Dwarkesh Patel 00:14:32 I’m sorry, you’re saying by the end of 2026, we will have agentic computer use. Tamay Besiroglu 00:14:36 I think Ege said you’d be able to book a flight, which is very different from having full agentic computer use. Dwarkesh Patel 00:14:44 I mean, the other things you need to do on a computer are just made up of things like booking a flight. Ege Erdil 00:14:49 I mean, sure, but they are not disconnected tasks. That’s like saying everything you do in the world is just like you just move parts of your body, and then you move your mouth and your tongue, and then you roll your head. Yeah, individually those things are simple, but then how do you put them together, right? Dwarkesh Patel 00:15:09 Yeah. Okay. So there’s two pieces of evidence that you can have that are quite dissimilar. One, the METR eval, which we’ve been talking about privately, which shows that the task length over certain kinds of tasks- I can already see you getting ready. AI’s ability to do the kind of thing that it takes a human 10 minutes to do, or an hour to do, or four hours to do, the length of time for corresponding human tasks, it seems like these models seem to be doubling their task length every seven months. The idea being that by 2030, if you extrapolate this curve, they could be doing tasks that take humans one month to do, or one year to do. And then this long-term coherency in executing on tasks is fundamentally what intelligence is. So this curve suggests that we’re getting there. The other piece of evidence- I kind of feel like my own mind works this way. I get distracted easily, and it’s hard to keep a long-term plan in my head at the same time. And I’m slightly better at it than these models. But they don’t seem that dissimilar to me. I would have guessed reasoning is just a really complicated thing, and then it seems like, “oh, it’s just something like learning 10 tokens worth of MCTS ” of “wait, let’s go back, let’s think about this another way”. Chain of thought alone just gets you this boost. And it just seems like intelligence is simpler than we thought. Maybe agency is also simpler in this way. Ege Erdil 00:16:39 Yeah. I mean, I think there’s a reason to expect complex reasoning to not be as difficult as people might have thought, even in advance, because a lot of the tasks that AI solved very early on were tasks of various kinds of complex reasoning. So it wasn’t the kind of reasoning that goes into when a human solves a math problem. But if you look at the major AI milestones over, I don’t know, since 1950, a lot of them are for complex reasoning. Like chess is, you can say, a complex reasoning task. Go is, you could say, a complex reasoning task. Dwarkesh Patel 00:17:14 But I think there are also examples of long-term agency. Like winning at Starcraft is an example of being agentic over a meaningful period of time. Ege Erdil 00:17:24 That’s right. So the problem in that case is that it’s a very specific, narrow environment. You can say that playing Go or playing chess, that also requires a certain amount of agency. And that’s true. But it’s a very narrow task. So that’s like saying if you construct a software system that is able to react to a very specific, very particular kind of image, or very specific video feeds or whatever, then you’re getting close to general sensor motor skill automation. But the general skill is something that’s very different. And I think we’re seeing that. We still are very far, it seems like, from an AI model that can take a generic game off Steam. Let’s say you just download a game released this year. You don’t know how to play this game. And then you just have to play it. And then most games are actually not that difficult for a human. Dwarkesh Patel 00:18:21 I mean, what about Claude Plays Pokemon ? I don’t think it was trained on Pokemon. Ege Erdil 00:18:25 Right, so that’s an interesting example. First of all, I find the example very interesting, because yeah, it was not trained explicitly. They didn’t do some RL on playing Pokemon Red. But obviously, the model knows how it’s supposed to play Pokemon Red, because there’s tons of material about Pokemon Red on the internet. In fact, if you were playing Pokemon Red, and you got stuck somewhere, you didn’t know what to do, you could probably go to Claude and ask “I’m stuck in Mount Moon, and what am I supposed to do?” And then it’s probably able to give you a fairly decent answer. But that doesn’t stop it from getting stuck in Mount Moon for 48 hours. So that’s a very interesting thing, where it has explicit knowledge, but then when it’s actually playing the game, it doesn’t behave in a way which reflects that it has that knowledge. Dwarkesh Patel 00:19:09 All it’s got to do is plug the explicit knowledge to its actions. Ege Erdil 00:19:13 Yeah, but is that easy? Dwarkesh Patel 00:19:15 Okay, if you can leverage your knowledge from pre-training about these games in order to be somewhat competent at them, okay, they’re going to be leveraging a different base of skills. But with that same leverage, they’re going to have a similar repertoire of abilities. If you’ve read everything about whatever skill that every human has ever seen. Ege Erdil 00:19:43 A lot of the skills that people have, they don’t have very good training data for them. Dwarkesh Patel 00:19:48 That’s right. What would you want to see over the next few years that would make you think, “oh, no, I’m actually wrong and this was the last unlock, and it was now just a matter of ironing out the kinks”. And then we get the thing that will kick off the, dare I say, intelligence explosion. Tamay Besiroglu 00:20:04 I think something that would reveal its ability to do very long context things, use multimodal capabilities in a meaningful way, and integrate that with reasoning and other types of systems. And also agency and being able to take action over a long horizon and accomplish some tasks that takes very long for humans to do, not just in specific software environments, but just very broadly; say downloading an arbitrary game from Steam, something that it’s never seen before, it doesn’t really have much training data, maybe it was released after a training cutoff and so there’s no tutorials or maybe there’s no earlier versions of the game that has been discussed on the Internet, and then accomplishing that game and actually playing that game to the end and accomplishing these various milestones that are challenging for humans. That would be a substantial update. I mean, there are other things that would update me, too, like OpenAI making a lot more revenue than it’s currently doing. Dwarkesh Patel 00:21:11 Is the hundred billion in revenue that would, according to their contract, mark them as AGI enough? Tamay Besiroglu 00:21:15 I think that’s not a huge update to me if that were to happen. So I think the update would come if it was, in fact, $500 billion in revenue or something like that. But then I would certainly update quite a lot. But a hundred billion, that seems pretty kind of likely to me. I would assign that maybe a 40 percent chance or something. Dwarkesh Patel 00:21:37 If you’ve got a system that is, in producer surplus terms, worth a hundred billion. And the difference between this and AlphaZero is AlphaZero is never going to make a hundred billion dollars in the marketplace. So just what is intelligence? It’s like something able to usefully accomplish its goals, or your goals. If people are willing to pay a hundred billion dollars for it, that’s pretty good evidence that it’s like accomplishing some goals. Tamay Besiroglu 00:22:05 I mean, people pay a hundred billion dollars for all sorts of things. That itself is not a very strong piece of evidence that it’s going to be transformative, I think. Ege Erdil 00:22:13 People pay trillions of dollars for oil. I don’t know, it seems like a very basic point. But the fact that people pay a lot of money for something doesn’t mean it’s going to transform the world economy if only we manage to unhobble it. Like that’s a very different claim.", "Even reasoning models lack animal intelligence", "Dwarkesh Patel 00:22:27 So then this brings us to the intelligence explosion, because what people will say is, we don’t need to automate literally everything that is needed for automating remote work, let alone all human labor in general. We just need to automate the things which are necessary to fully close the R&D cycle needed to make smarter intelligences. And if you do this, you get a very rapid intelligence explosion. And the end product of that explosion is not only an AGI, but something that is superhuman potentially. These things are extremely good at coding, and reasoning. It seems like the kinds of things that would be necessary to automate R&D at AI labs. What do you make of that logic? Ege Erdil 00:24:14 I think if you look at their capability profile, if you compare it to a random job in the economy, I agree they are better at doing coding tasks that will be involved in R&D compared to a random job in the economy. But in absolute terms, I don’t think they’re that good. I think they are good at things that maybe impress us about human coders. If you wanted to see what makes a person a really impressive coder, you might look at their competitive programming performance. In fact, companies often hire people, if they’re relatively junior, based on their performance on these kinds of problems. But that is just impressive in the human distribution. So if you look in absolute terms at what are the skills you need to actually automate the process of being a researcher, then what fraction of those skills do the AI systems actually have? Even in coding, a lot of coding is, you have a very large code base you have to work with, the instructions are very kind of vague. For example you mentioned METR eval, in which, because they needed to make it an eval, all the tasks have to be compact and closed and have clear evaluation metrics: “here’s a model, get its loss on this data set as low as possible”. Or “here’s another model and its embedding matrix has been scrambled, just fix it to recover like most of its original performance”, etc. Those are not problems that you actually work on in AI R&D. They’re very artificial problems. Now, if a human was good at doing those problems, you would infer, I think logically, that that human is likely to actually be a good researcher. But if an AI is able to do them, the AI lacks so many other competences that a human would have- not just the researcher, just an ordinary human- that we don’t think about in the process of research. So our view would be that automating research is, first of all, more difficult than people give it credit for. I think you need more skills to do it and definitely more than models are displaying right now. And on top of that, even if you did automate the process of research, we think a lot of the software progress has been driven not by cognitive effort- that has played a part- but it has been driven by compute scaling. We just have more GPUs, you can do more experiments, to figure out more things, your experiments can be done at larger scales. And that is just a very important driver. If you’re 10 years ago, 15 years ago, you’re trying to figure out what software innovations are going to be important in 10 or 15 years, you would have had a very difficult time. In fact, you probably wouldn’t even have conceived of the right kind of innovations to be looking at, because you would be so far removed from the context of that time with much more abundant compute and all the things that people would have learned by that point. So these are two components of our view: Research is harder than people think, and depends a lot on compute scale. Dwarkesh Patel 00:27:17 Can you put a finer point on what is an example of the kind of task which is very dissimilar from ‘train a classifier’ or ‘debug a classifier’ that is relevant to AI R&D? Tamay Besiroglu 00:27:30 Examples might be introducing novel innovations that are very useful for unlocking innovations in the future. So that might be introducing some novel way of thinking about a problem. A good example might be in mathematics, where we have these reasoning models that are extremely good at solving math problems. Ege Erdil 00:27:57 Very short horizon. Tamay Besiroglu 00:28:00 Sure. Maybe not extremely good, but certainly better than I can and better than maybe most undergrads can. And so they can do that very well, but they’re not very good at coming up with novel conceptual schemes that are useful for making progress in mathematics. So it’s able to solve these problems that you can kind of neatly excise out of some very messy context, and it’s able to make a lot of progress there. But within some much messier context, it’s not very good at figuring out what directions are especially useful for you to build things or make incremental progress on that enables you to have a big kind of innovation later down the line. So thinking about both this larger context, as well as maybe much longer horizon, much fuzzier things that you’re optimizing for, I think it’s much worse at those types of things. Ege Erdil 00:28:54 Right. So I think one interesting thing is if you just look at these reasoning models, they know so much, especially the larger ones, because they know in literal terms more than any human does in some sense. And we have unlocked these reasoning capabilities on top of that knowledge, and I think that is actually what’s enabling them to solve a lot of these problems. But if you actually look at the way they approach problems, the reason what they do looks impressive to us is because we have so much less knowledge. And the model is approaching the problems in a fundamentally different way compared to how a human would. A human would have much more limited knowledge, and they would usually have to be much more creative in solving problems because they have this lack of knowledge, while the model knows so much. But you’d ask it some obscure math question where you need some specific theorem from 1850 or something, and then it would just know that, if it’s a large model. So that makes the difficulty profile very different. And if you look at the way they approach problems, the reasoning models, they are usually not creative. They are very effectively able to leverage the knowledge they have, which is extremely vast. And that makes them very effective in a bunch of ways. But you might ask the question, has a reasoning model ever come up with a math concept that even seems slightly interesting to a human mathematician? And I’ve never seen that. Dwarkesh Patel 00:30:19 I mean, they’ve been around for all of six months, Tamay Besiroglu 00:30:23 I mean, that’s a long time. One mathematician might have been able to do a bunch of work over that time, and they have produced orders of magnitude fewer tokens on math. Ege Erdil 00:30:34 And then I just want to emphasize it, because just think about the sheer scale of knowledge that these models have. It’s enormous from a human point of view. So it is actually quite remarkable that there is no interesting recombination, no interesting, “oh, this thing in this field looks kind of like this thing in this other field”. There’s no innovation that comes out of that. And it doesn’t have to be a big math concept, it could be just a small thing that maybe you could add to a Sunday magazine on math that people used to have. But there isn’t even an example of that. Tamay Besiroglu 00:31:09 I think it’s useful for us to explain a very important framework for our thinking about what AI is good at and what AI is lagging in, which is this idea of Moravec’s paradox , that things that seem very hard for humans, AI systems tend to make much faster progress on, whereas things that look a bunch easier for us, AI systems totally struggle or are often totally incapable of doing that thing. And so this kind of abstract reasoning, playing chess, playing Go, playing Jeopardy, doing kind of advanced math and solving math problems. Ege Erdil 00:31:49 There are even stronger examples, like multiplying 100 digit numbers in your head, which is just the one that got solved first out of almost any other problem. Or following very complex symbolic logic arguments, like deduction arguments, which people actually struggle with a lot. Like how do premises logically follow from conclusions? People have a very hard time with that. Very easy for formal proof systems. Tamay Besiroglu 00:32:12 An insight that is related and is quite important here is that the tasks that humans seem to struggle on and AI systems seem to make much faster progress on are things that have emerged fairly recently in evolutionary time. So, advanced language use emerged in humans maybe 100,000 years ago, and certainly playing chess and Go and so on are very recent innovations. And so evolution has had much less time to optimize for them, in part because they’re very new, but also in part because when they emerged, there was a lot less pressure because it conferred kind of small fitness gains to humans and so evolution didn’t optimize for these things very strongly. And so it’s not surprising that on these specific tasks that humans find very impressive when other humans are able to do it, that AI systems are able to make a lot of fast progress. In humans, these things are often very strongly correlated with other competencies, like being good at achieving your goals, or being a good coder is often very strongly correlated with solving coding problems, or being a good engineer is often correlated with solving competitive coding problems. But in AI systems, the correlation isn’t quite as strong. And even within AI systems, it’s the case that the strongest systems on competitive programming are not even the ones that are best at actually helping you code. So o3 mini’s high seems to be maybe the best at solving competitive code problems, but it isn’t the best at actually helping you write code. Ege Erdil 00:33:54 And it isn’t getting most of the enterprise revenue from places like Coursera or whatever, that’s just Claude, right? Tamay Besiroglu 00:33:59 But an important insight here is that the things that we find very impressive when humans are able to do it, we should expect that AI systems are able to make a lot more progress on that. But we shouldn’t update too strongly about just their general competence or something, because we should recognize that this is a very narrow subset of relevant tasks that humans do in order to be a competent, economically valuable agent. Dwarkesh Patel 00:34:26 Yeah. First of all, I actually just really appreciate that there is an AI organization out there where- because there’s other people who take the compute perspective seriously, or try to think empirically about scaling laws and data and whatever. And taking that perspective seriously leads people to just be like, “okay, 2027 AGI”, which might be correct, but it is just interesting to get, “no, we’ve also looked at the exact same arguments, the same papers, the same numbers. And we’ve come to a totally different conclusion”. So I asked Dario this exact question two years ago, when I interviewed him, and it went viral. Ege Erdil 00:35:11 Didn’t he say AGI in two years? Dwarkesh Patel 00:35:13 That, but Dario’s always had short timelines. Ege Erdil 00:35:15 Okay, but we are two years later. Dwarkesh Patel 00:35:18 Did he say two years? I think he actually did say two years. Ege Erdil 00:35:20 Did he say three years? Tamay Besiroglu 00:35:21 So we have one more year. Dwarkesh Patel 00:35:22 One more year. Tamay Besiroglu 00:35:23 Better work hard. Dwarkesh Patel 00:35:27 But he’s, I mean, I think he’s like, he in particular has not been that well calibrated. In 2018, he had like… Tamay Besiroglu 00:35:33 I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated. Dwarkesh Patel 00:35:49 So anyways, I asked him this exact same question because he has short timelines, which is that if a human knew the amount of things these models know, they would be finding all these different connections. And in fact, I was asking Scott about this the other day when I interviewed him , Scott Alexander, and he said, “look, humans also don’t have this kind of logical omniscience”. I’m not saying we’re omniscient, but we have examples of humans finding these kinds of connections. This is not an uncommon thing, right? I think his response was that these things are just not trained in order to find these kinds of connections, but their view is that it would not take that much extra compute in order to build some RL environment in which they’re incentivized to find these connections. Next token prediction just isn’t incentivizing them to do this, but the RL required to do this would not be- that or set up some sort of scaffolds. I think actually Google DeepMind did do some similar scaffold to make new discoveries. And I didn’t look into how impressive the new discovery was, they claim that some new discovery was made by an LLM as a result. On the Moravec paradox thing, this is actually a super interesting way to think about AI progress. But I would also say that if you compare animals to humans, long term intelligent planning… an animal is not gonna help you book a flight either. An animal is not gonna do remote work for you. I think what separates humans from other animals is that we can hold long-term, we can come up with a plan and execute on it. Whereas other animals often had to go by instinct, or within the kinds of environments that they have evolutionary knowledge of, rather than, “I’m put in the middle of the savanna, or I’m put in the middle of the desert, or I’m put in the middle of tundra, and I’ll learn how to make use of the tools and whatever there”. I actually think there’s a huge discontinuity between humans and animals and their ability to survive in different environments, just based on their knowledge. And so it’s a recently optimized thing as well. And then I’d be like, “okay, well, we got it soon. AIs will optimize for it fast”. Ege Erdil 00:37:50 Right. So I would say if you’re comparing animals to humans, it’s kind of a different thing. I think if you could put the competences that the animals have into AI systems, that might just already get you to AGI already. I think the reason why there is such a big discontinuity between animals and humans is because animals have to rely entirely on natural world data, basically, to train themselves. Imagine that the only thing as a human that you saw was nobody talked to you, you didn’t read anything, you just had to learn by experience, maybe to some extent by imitating other people, but you have no explicit communication. It would be very inefficient. What’s actually happening is that you have this- I think some other people have made this point as well- is that evolution is sort of this outer optimizer that’s improving the software efficiency of the brain in a bunch of ways. There’s some genetic knowledge that you inherit, not that much because there isn’t that much space in the genome. And then you have this lifetime learning, which is, you don’t actually see that much data during lifetime learning. A lot of this is redundant and so on. So what seems to have changed with humans compared to other animals is that humans became able to have culture and they have language, which enables them to have a much more efficient training data modality compared to animals. They also have, I think, stronger ways in which they tend to imitate other humans and learn from their skills, so that also enables this knowledge to be passed on. I think animals are pretty bad at that compared to humans. So basically as a human, you’re just being trained on much more efficient data and that creates further insights to be then efficient at learning from it, and then that creates this feedback loop where the selection pressure gets much more intense. So I think that’s roughly what happened with humans. But a lot of the capabilities that you need to be a good worker in the human economy, animals already have. So they have quite sophisticated sensory motor skills. I think they are actually able to pursue long-term goals. Dwarkesh Patel 00:40:03 But ones that have been instilled by evolution. I think a lion will find a gazelle and that is a complicated thing to do and requires stalking and blah, blah, blah- Ege Erdil 00:40:12 But when you say it’s been instilled by evolution, there isn’t that much information in the genome. Dwarkesh Patel 00:40:16 But I think if you put the lion in the Sahara and you’re like, “go find lizards instead”. Ege Erdil 00:40:22 Okay. So suppose you put a human and they haven’t seen the relevant training data. Dwarkesh Patel 00:40:27 I think they’d be slightly better. Ege Erdil 00:40:29 Slightly better, but not that much better. Again, didn’t you recently have an interview? Dwarkesh Patel 00:40:36 Joseph Henrich. Ege Erdil 00:40:37 Yeah. So he would probably tell you that. Dwarkesh Patel 00:40:40 I think what you’re making is actually a very interesting and subtle point that has an interesting implication. So often people say that ASI will be this huge discontinuity, because while we have this huge discontinuity in the animal-to-human transition, not that much changed between pre-human primates and humans genetically, but it resulted in this humongous change in capabilities. And so they say, “well, why not expect something similar between human level intelligence and superhuman intelligence?” And one implication of the point you’re making is actually it wasn’t that we just gained this incredible intelligence. Because of biological constraints, animals have just been held back in this really weird way that no AI system has been arbitrarily held back from not being able to communicate with other copies or with other knowledge sources. And so since AIs are not held back artificially in this way, there’s not going to be a point where we should take away that hobbling. And then now they explode. Now, actually, I think I would disagree with that. The implication that I made, I would actually disagree with- I’m like a sort of unsteerable chain of thought. We wrote a blog post together about AI corporations where we discuss actually there will be a similar unhobbling with future AIs, which is not about the intelligence, but a similar level of bandwidth and communication and collaboration with other AIs, which is a similar magnitude of change from non-human animals to humans, in terms of their social collaboration, that AIs will have with each other because of their ability to copy all their knowledge exactly, to merge, to distill themselves. Tamay Besiroglu 00:42:28 Maybe before we talk about that, I think just a very important point to make here, which I think underlies some of this disagreement that we have with others about both this argument from the transition from kind of non-human animals to humans, is this focus on intelligence and reasoning and R&D, which is enabled by that intelligence as being enormously important. And so if you think that you get this very important difference from this transition from non-human primates to humans, then you think that in some sense you get this enormously important unlock from fairly small scaling and, say, brain size or something. And so then you might think, “well, if we scale beyond the size of training runs, the amount of training compute that the human brain uses, which is maybe on the order of 1E24 flop or whatever, which we’ve recently surpassed, then maybe surpassing it just a little bit more enables us to unlock very sophisticated intelligence in the same way that humans have much more sophisticated intelligence compared to non-human primates”. And I think part of our disagreement is that intelligence is kind of important, but just having a lot more intelligence and reasoning and good reasoning isn’t something that will kind of accelerate technological change and economic growth very substantially. It isn’t the case that the world today is totally bottlenecked by not having enough good reasoning, that’s not really what’s bottlenecking the world’s ability to grow much more substantially. I think that we might have some disagreement about this particular argument, but I think what’s also really important is just that we have a different view as to how this acceleration happens, that it’s not just having a bunch of really good reasoners that give you this technology that then accelerates things very drastically. Because that alone is not sufficient. You need kind of complementary innovations in other industries. You need the economy as a whole growing and supporting the development of these various technologies. You need the various supply chains to be upgraded. You might need demand for the various products that are being built. And so we have this view where actually this very broad upgrading of your technology and your economy is important rather than just having very good reasoners and very, very, very good reasoning tokens that gives us this acceleration.", "Intelligence explosion", "Dwarkesh Patel 00:45:04 All right. So this brings us back to the intelligence explosion. Here’s the argument for the intelligence explosion: You’re right that certain kinds of things might take longer to come about, but this core loop of software R&D that’s required, if you just look at what kinds of progress is needed to make a more general intelligence, you might be right that it needs more experimental compute, but as you guys have documented, we’re just getting a shit-ton more compute every single year for the next few years. So you can imagine an intelligence explosion in the next few years where in 2027, there’ll be like 10 X more compute than there is now for AI. And you’ll have this effect where the AIs that are doing software R&D are finding ways to make running copies of them more efficient, which has two effects. One, you’re increasing the population of AIs who are doing this research, so more of that in parallel can find these different optimizations. And a subtle point that they’d often make here is software R&D in AI is not just Ilya -type coming up with new transformer-like architectures. To your point, it actually is a lot of- I mean, I’m not an AI researcher, but I assume there’s, from the lowest level libraries to the kernels, to making RL environments, to finding the best optimizer, to… there’s just so much to do, and in parallel you can be doing all these things or finding optimizations across them. And so you have two effects, going back to this. One is, if you look at the original GPT-4 compared to the current GPT-4o, I think it’s, what, how much cheaper is it to run? Tamay Besiroglu 00:46:57 It’s like, maybe a hundred times for the same capability or something. Dwarkesh Patel 00:47:03 Right. So they’re finding ways in which to run more copies of them at a hundred X cheaper or something, which means that the population of them is increasing and the higher populations are helping you find more efficiencies. And not only does that mean you have more researchers, but to the extent that the complementary input is experimental compute, it’s not the compute itself, it’s the experiments. And the more efficient it is to run a copy or to develop a copy, the more parallel experiments you can run, because now you can do a GPT-4 scale training run for much cheaper than you could do it in 2024 or 2023. And so for that reason, also this software-only singularity sees more researcher copies who can run experiments for cheaper, dot, dot, dot. They initially are maybe handicapped in certain ways that you mentioned, but through this process, they are rapidly becoming much more capable. What is wrong with this logic? Tamay Besiroglu 00:47:57 So I think the logic seems fine. I think this is like a decent way to think about this problem, but I think that it’s useful to draw on a bunch of work that, say, economists have done for studying the returns to R&D and what happens if you 10X your inputs, the number of researchers, what happens to innovation or the rate of innovation. And there, they point out these two effects where, as you do more innovation and you get to stand on top of the shoulders of giants and you get the benefit from past discoveries and it makes you as a scientist more productive. But then there’s also kind of diminishing returns, that the low hanging fruit has been picked, and it becomes harder to make progress. And overall, you can summarize those estimates as thinking about the kind of returns to research effort. And we’ve looked into the returns to research effort in software specifically. And we look at a bunch of domains in traditional software or linear integer solvers or SAT solvers, but also in AI; computer vision and RL and language modeling. And there, if this model is true, that all you need is just cognitive effort, it seems like the estimates are a bit ambiguous about whether this results in this acceleration or whether it results in just merely exponential growth. And then you might also think about, well, it isn’t just your research effort that you have to scale up to make these innovations, because you might have complementary inputs. So as you mentioned, experiments are the thing that might kind of bottleneck you. And I think there’s a lot of evidence that in fact, these experiments and scaling up hardware, it’s just very important for getting progress in the algorithms and the architecture and so on. So in AI- this is true for software in general- where if you look at progress in software, it often matches very closely the rate of progress we see in hardware. So for traditional software, we see about a 30% roughly increase per year, which kind of basically matches Moore’s law . And in AI, we’ve seen the same until you get to the deep learning era, and then you get this acceleration, which in fact coincides with the acceleration we see in compute scaling, which gives you a hint that actually the compute scaling might have been very important. Other pieces of evidence besides this coincidental rate of progress, other pieces of evidence are the fact that innovation and algorithms and architectures are often concentrated in GPU-rich labs and not in the GPU-poor parts of the world, like academia or maybe smaller research institutes. That also suggests that having a lot of hardware is very important. If you look at specific innovations that seem very important, the big innovations over the past five years, many of them have some kind of scaling or hardware-related motivation. So you might look at how the transformer itself was about how to harness more parallel compute. Things like flash attention was literally about how to implement the attention mechanism more efficiently, or things like the chinchilla scaling law . And so many of these big innovations were just about how to harness your compute more effectively. That also tells you that actually the scaling of compute might be very important. And I think there’s just many pieces of evidence that point towards this complementarity picture. So I would say that even if you assume that experiments are not particularly important, the evidence we have, both from estimates of AI and other software- although the data is not great- suggests that maybe you don’t get this kind of hyperbolic, faster-than-exponential super-growth in the overall algorithmic efficiency of systems. Dwarkesh Patel 00:51:56 I’m not sure I buy the argument that because these two things compute and AI progress have risen so concomitantly that this is a sort of causal relationship. So broadly, the industry as a whole has been getting more compute and as a result, making more progress. But if you look at the top players, there’s been multiple examples of a company with much less compute, but a more coherent vision, more concentrated research effort, being able to beat an incumbent who has much more compute. So OpenAI initially beating Google DeepMind. And if you remember, there were these emails that were released between Elon and Sam and so forth like, “we got to start this company because they’ve got this bottleneck on the compute” and, “look how much more compute Google DeepMind has”. And then OpenAI made a lot of progress. Similarly now with OpenAI versus Anthropic and so forth. And then I think just generally, your argument is just too ‘outside view’. And we actually do know a lot about this very macro economic argument that I’m like, well, why don’t we just ask the AI researchers? Tamay Besiroglu 00:53:01 I mean, AI researchers will often kind of overstate the extent to which just cognitive effort and doing research is important for driving these innovations, because that’s often convenient or useful. They will say the insight was derived from some nice idea about statistical mechanics or some nice equation in physics that says that we should do it this way. But often that’s an ad hoc story that they tell to make it a bit more compelling to reviewers. Dwarkesh Patel 00:53:35 So Daniel Kokotajlo mentioned this survey he did where he asked a bunch of AI researchers, “if you had one thirtieth the amount of compute”- and he did one thirtieth because AI’s will be, they suppose, 30 times faster- “If you had one thirtieth the amount of compute, how much would your progress slow down?” And they say, “I make a third of the amount of progress I normally do”. So that’s just a pretty good substitution effect of, you get one tenth the compute, your progress only goes down one third. And then I was talking to an AI researcher the other day, one of these cracked people, gets paid tens of millions of dollars a year, probably. And we asked him, how much does the AI models help you in AI research? And he said, “in domains that I’m already quite familiar with, where it’s closer to autocomplete, it saves me four to eight hours a week”. And then he said, “but in domains where I’m actually less familiar, where I need to drive new connections, I need to understand how these different parts relate to each other, and so forth. It saves me close to 24 to 36 hours a week”. And that’s the current models. And I’m just like, “he didn’t get more computed, but it still saved him like a shit ton more time”. Just draw that forward. That’s a crazy implication or crazy trend, right? Ege Erdil 00:54:58 I mean, I’m skeptical of the claims that we have actually seen that much of an acceleration in the process of R&D. These claims seem to me, like they’re not borne out by the actual data I’m seeing. So I’m not sure how much to trust them. Dwarkesh Patel 00:55:18 I mean, on the general intuition that cognitive effort alone can give you a lot of AI progress, it seems like a big important thing the labs do is this science of deep learning. Scaling laws… I mean, it ultimately netted out in an experiment, but the experiment is motivated by cognitive effort. Ege Erdil 00:55:36 So for what it’s worth, when you say that A and B are complementary, you’re not saying, just as A can bottleneck you, B can also bottleneck you. So when you say you need compute and experiments and data, but you also need cognitive effort, that doesn’t mean the lab which has the most compute is going to win, right? That’s a very simple point, either one can be the bottleneck. I mean, if you just have a really dysfunctional culture and you don’t actually prioritize using your computer very well and you just waste it, well then you’re not going to make a lot of progress, right? So it doesn’t contradict the picture that someone with a much better vision, a much better team, much better prioritization can make better use of their compute if someone else was just bottlenecked heavily on that part of the equation. The question here is, once you get these automated AI researchers and you start this software singularity, your software efficiency is going to improve by many orders of magnitude, while your compute stock, at least in the short run, is going to remain fairly fixed. So how many OOMs of improvement can you get before you become bottlenecked by the second priority equation? And once you actually factor that in, like how much progress should you expect? That’s the kind of question I think people don’t have. I think it’s hard for people to have good intuitions about this because people usually don’t run the experiments. So you don’t get to see at a company level, or at an industry level, what would have happened if the entire industry had 30 times less compute. Maybe as an individual, what would happen if you had 30 times less compute? You might have a better idea about that, but that’s a very local experiment and you might be benefiting a lot from spillovers from other people who actually have more compute. So because this experiment was never run, it’s sort of hard to get direct evidence about the strength of complementarity. Dwarkesh Patel 00:57:27 Actually, what is your probability of, if we live in the world where we get AGI in 2027, that there is a software-only singularity? Tamay Besiroglu 00:57:35 Quite high, because you’re conditioning on compute not being very large. So it must be that you get a bunch of software progress. Dwarkesh Patel 00:57:44 Yeah, right, right. Like you just have a bunch of leverage from algorithmic progress in that world. Tamay Besiroglu 00:57:50 OK, that’s right. Dwarkesh Patel 00:57:51 So then maybe, because I was thinking these are independent questions- Tamay Besiroglu 00:57:54 I think a call that I want to make is, I know that some labs do have multiple pre-training teams and they give people different amounts of resources for doing the training and different amounts of cognitive effort, different size of teams. But none of that, I think, has been published. And I’d love to see the results of some of those experiments. I think even that won’t update you very strongly just because it is often just very inefficient to do this very imbalanced scaling of your factor inputs. And in order to really get an estimate of how strong these complementarities are, you need to observe these very imbalanced scale-ups. And so that rarely happens. And so I think the data that bears on this is just really quite poor. And then the intuitions that people have also don’t seem clearly relevant to the thing that matters about what happens if you do this very imbalanced scaling and where does this net out? Dwarkesh Patel 00:58:53 One question I have, which it would be really interesting if somebody can provide an example of: maybe through history, there was some point at which because of a war or some other kind of supply shock, you had to ramp up production or ramp up some key output that people really cared about, while for some weird historical reason, many of the key inputs were not accessible to a ramp-up, but you could ramp-up one key input. I’m talking in very abstract terms. You see what I’m saying, right? You need to make more bombers, but you ran out of aluminum and you need to figure out something else to do. And how successful these efforts have been or whether you just keep getting bottlenecked? Ege Erdil 00:59:35 Well, I think that is not quite the right way to do it. Because I think if you’re talking about materials, then I think there’s a lot of sense in which different materials can be substitutable for one another in different ways. You can use aluminum. I mean, aluminum is a great metal for making aircraft because it’s light and durable and so on. But you can imagine that you make aircraft with worse metals and then it just takes more fuel and it’s less efficient to fly.So there’s a sense in which you can compensate and just cost more. I think it’s much harder if you’re talking about something like complementarity between labor and capital, complementarity between remote work and in-person work or skilled or unskilled work. There are input pairs for which I would expect it to be much more difficult. For example, you’re looking at the complementarity between the quality of leadership of an army and its number of soldiers. There is some effect there, but if you just scale up, you have excellent leadership, but your army only has 100 people. You’re not going to get very far. Dwarkesh Patel 01:00:40 King Leonidas and Thermopylae ? Ege Erdil 01:00:44 Well, they lost, right? Dwarkesh Patel 01:00:47 It would be funny if we’re building models and software-only singularity and we’re like, “what exactly happened in Thermopylae?” It’s somehow relevant. Ege Erdil 01:00:53 I can actually talk about that, but we probably shouldn’t.", "Ege & Tamay’s story", "Dwarkesh Patel 01:00:57 Okay, sure. So the audience should know, my most popular guest by far is Sarah Paine. Not only is she my most popular guest, she’s my most popular four guests. Because all four of those episodes that I’ve done with her are, from a viewer-minute adjusted basis, I host the Sarah Paine Podcast where I occasionally talk about AI. Anyways, we did this three-part lecture series where one of them was about India-Pakistan wars through history. One of them was about Japanese culture before World War II . The third one was about the Chinese Civil War . And for all of them, my history tutor was Ege. And, why does he know so much about fucking random 20th century conflicts? But he did, and he suggested a bunch of the good questions I asked her. We’ll get into that in a second. Ege, what’s going on there? Ege Erdil 01:01:56 I don’t know. I mean, I don’t really have a good question. I think it’s interesting. I mean, I read a bunch of stuff, but it’s a kind of boring answer. I don’t know. Imagine you ask a top AI researcher, “What’s going on? How are you so good?” And then they will probably give you a boring answer. Like, I don’t know. Dwarkesh Patel 01:02:13 That itself is interesting that often these kinds of questions elicit boring answers. It tells you about the nature of the skill. How’d you find him? Tamay Besiroglu 01:02:22 We connected on a Discord for Metaculus , which is this forecasting platform. And I was a graduate student at Cambridge at the time doing research in economics. And I was having conversations with my peers there. And I was occasionally having conversations with Ege. And I was like, “this guy knows a lot more about economics”. And at the time he was a computer science undergrad in Ankara. And he knows more about economics and about these big trends in economic growth and economic history than almost any of my peers at the university. And so like, what the hell is up with that? And then we started having frequent collaborations and ended up hiring Ege for Epoch because it clearly makes sense for him to work on these types of questions. Dwarkesh Patel 01:03:17 And it seems like at Epoch, you’ve just collected this group of internet misfits and weirdos. Tamay Besiroglu 01:03:23 Yeah, that’s right. Dwarkesh Patel 01:03:24 How did you start Epoch? And then how did you accomplish this? Tamay Besiroglu (01:03:27 So I was at MIT doing more research, and I was pretty unhappy with the bureaucracy there where it was very hard for me to scale projects up, hire people. And I was pretty excited about a bunch of work that my PI wasn’t excited about because it’s maybe hard to publish or it doesn’t confer the same prestige. And so I was chatting with Jaime Sevilla , one of the co-founders, and we just collaborated on projects and then thought we should just start our own org, because we can just hire people and work on the projects we were excited about. And then I just hired a bunch of the insightful misfits that like… Dwarkesh Patel 01:04:12 But was the thesis like, “oh, there’s a bunch of underutilized internet misfits and therefore this org was successful”? Or you started the org and then you were like… Tamay Besiroglu 01:04:20 I think it’s more of the latter. So it was more like we can make a bunch of progress because clearly academia and industry is kind of dropping the ball on a bunch of important questions that academia is unable to publish interesting papers on. Industry is not really focused on producing useful insights. And so it seemed very good for us to just do that. And also the timing was very good. So we started just before ChatGPT and we wanted to have much more grounded discussions of the future of AI. And I was frustrated with the quality of discussion that was happening on the internet about the future of AI. And to some extent or to a very large extent, I still am. And that’s a large part of what motivates me to do this. It’s just born out of frustration with bad thinking and arguments about where AI is going to go.", "Explosive Economic Growth", "Dwarkesh Patel 01:06:24 OK, so let me ask you about this: So just to set the scene for the audience, we’re going to talk about the possibility of this explosive economic growth and greater than 30 percent economic growth rates. So I want to poke you both from a perspective of “maybe suggesting that this isn’t aggressive enough in the right kind of way, because it’s maybe it’s too broad”, and then I’ll poke you from the more normal perspective that, “hey, this is fucking crazy”. Ege Erdil 01:06:54 I imagine it would be difficult for you to do the second thing. Dwarkesh Patel 01:06:57 No, I mean, I think it might be fucking crazy, let’s see. The big question I have about this broad automation, I get what you’re saying about the Industrial Revolution, but in this case, we can just make this argument that you get this intelligence and then what you do next is you go to the desert and you build this Shenzhen of robot factories which are building more robot factories, which are building… If you need to do experiments then you build bio labs and you build chemistry labs and whatever. Ege Erdil 01:07:30 Or you can build Shenzhen in the desert. I agree that looks much more plausible than a software-only singularity. Dwarkesh Patel 01:07:35 But the way you’re framing it, it sounds like McDonald’s and Home Depot and fucking whatever are growing at 30 percent a year as well. The aliens’ level view of the economy is that there’s a robot economy in the desert that’s growing at 10,000 percent a year and everything else is the same-old-same-old, or is it like- Ege Erdil 01:07:57 No, I mean, there is a question about what would be possible, or physically possible, and what would be the thing that would actually be efficient. So it might be the case, and again, once you’re scaling up the hardware part of the equation as well as the software part, then I think the case for this feedback loop gets a lot stronger. If you scale up data collection as well, I think it gets even stronger, real world data collection by deployment and so on. But building Shenzhen in a desert… if you think about the pipeline; so far we have relied first on the entire semiconductor supply chain. That industry depends on tons of inputs and materials. And it gets from probably tons of random places in the world. And creating that infrastructure, doubling, or tripling, whatever, that infrastructure, the entire thing. That’s very hard work. So probably you couldn’t even do it, even if you just have Shenzhen in a desert, that will be even more expensive than that. And on top of that, so far, we have been drawing heavily on the fact that we have built up this huge stock of data, over the past 30 years or something, on the internet. Imagine you were trying to train a state-of-the-art model, but you only have 100 billion tokens to train on. That would be very difficult. So in a certain sense, our entire economy has produced this huge amount of data on the internet that we are now using to train the models. It’s plausible that in the future, when you need to get new competencies added to these systems, the most efficient way to do that will be to try to leverage similar kinds of modalities of data, which will also require this… you would want to deploy the systems broadly because that’s going to give you more data. And maybe you can get where you want to be without that, but it would just be less efficient if you’re starting from scratch compared to if you’re collecting a lot of data. I think this is actually a motivation for why labs want their LLMs to be deployed widely, because sometimes when you talk to ChatGPT, it’s going to give you two responses and it’s going to say, well, which one was good? Or it’s going to give you one response and it’s going to ask you, was this good or not? Well, why are they doing that, right? That’s a way in which they are getting user data through this extremely broad deployment. So I think you should just imagine that thing to continue to be efficient and continue to increase in the future because it just makes sense. And then there’s a separate question of, well, suppose you didn’t do any of that. Suppose you just tried to imagine the most rudimentary, the narrowest possible kind of infrastructure build-out and deployment that would be sufficient to get this positive feedback loop that leads to much more efficient AIs. I agree that loop could, in principle, be much smaller than the entire world. I think it probably couldn’t be as small as Shenzhen in the desert, but it could be much smaller than the entire world. But then there’s a separate question of, would you actually do that? Would that be efficient? I think some people have the intuition that there are just these extremely strong constraints, maybe regulatory constraints, maybe social political constraints, to doing this broad deployment. They just think it’s going to be very hard. So I think that’s part of the reason why they imagine these narrower scenarios where they think it’s going to be easier. But I think that’s overstated. I think people’s intuitions for how hard this kind of deployment is comes from cases where the deployment of the technology wouldn’t be that valuable. So it might come from housing. We have a lot of regulations on housing. Maybe it comes from nuclear power. Maybe it comes from supersonic flights. I mean, those are all technologies that would be useful if they were maybe less regulated. But they wouldn’t double. Tamay Besiroglu 01:11:52 I think the core point here is the value of AI automation and deployment is just extremely large, even just for workers. There might be some kind of displacement and there might be some transition that you need to do in order to find a job that works for you, but otherwise the wages could still be very high for a while at least. And on top of that, the gains from owning capital might be very enormous. And in fact, a large share of the US population would benefit… They benefit, they own housing, they have 401ks. Those would do enormously better when you have this process of broad automation and AI deployment. And so I think there could just be a very deep support for some of this, even when it’s totally changing the nature of labor markets and the skills and occupations that are in demand. Ege Erdil 01:12:55 So I would just say it’s complicated. I think what the political reaction to it will be when this starts actually happening, I think the easy thing to say is that, yeah, this will become a big issue and then it will be maybe controversial or something. But what is the actual nature of the reaction in different countries? I think that’s kind of hard to forecast. I think the default view is like, “well, people are going to become unemployed, so it will just be very unpopular”. I think that’s very far from obvious. And I just expect heterogeneity in how different countries respond. And some of them are going to be more liberal about this and going to have a much broader deployment. And those countries probably end up doing better. So just like during the Industrial Revolution, some countries were just ahead of others. I mean, eventually almost the entire world adopted the sort of norms and culture and values of the Industrial Revolution in various ways. Tamay Besiroglu 01:13:44 And actually, you say they might be more liberal about it, but they might actually be less liberal in many ways. In fact, that might be more functional in this world in which you have broad AI deployment. We might adopt the kind of values and norms that get developed in, say, the UAE or something, which is maybe focused a lot more on making an environment that is very conducive for AI deployment. And we might start emulating and adopting various norms like that. And they might not be classical liberal norms, but norms that are just more conducive to AI being functional and producing a lot of value. Ege Erdil 01:14:27 This is not meant to be a strong prediction, this is just an illustrative. It might just be the freedom to deploy AI in the economy and build out lots of physical things at scale, maybe that ends up being more important in the future. Maybe that is still missing something, maybe there are some other things that are also important. The generic prediction that you should expect variance and some countries do better than others, I think that’s much easier to predict than the specific countries that end up doing better. Dwarkesh Patel 01:14:55 Yeah. Or the norms that that country wants. Tamay Besiroglu 01:14:56 That’s right. Dwarkesh Patel 01:14:57 One thing I’m confused about is, if you look at the world of today versus the world of 1750, the big difference is just we’ve got crazy tech that they didn’t have back then. We’ve got these cameras, we’ve got these screens, and we’ve got rockets and so forth. And that just seems like the result of technological growth and R&D and so forth. Ege Erdil 01:15:22 It’s a capital accumulation. Dwarkesh Patel 01:15:23 Well, explain that to me because you’re just talking about this infrastructure build out and blah, blah, blah. I’m like, but why won’t they just fucking invent the kinds of shit that humans would have invented by 2050? Ege Erdil 01:15:37 Producing this stuff takes a lot of infrastructure build-out. Dwarkesh Patel 01:15:40 But that infrastructure is built out once you make the technology, right? Tamay Besiroglu 01:15:45 I don’t think that’s right. There isn’t this temporal difference where it’s first you do the invention… often there’s this interplay between the actual capital buildup and the innovation. Ege Erdil 01:15:57 Learning curves are about this, right, fundamentally? What has driven the increase in the efficiency of solar panels over the past 20, 30 years? Tamay Besiroglu 01:16:05 It isn’t just like people had the idea of 2025 solar panels. Nobody 20 years ago had the sketch for the 2025 solar panel. It’s this kind of interplay between having ideas, building, learning, producing, and- Ege Erdil 01:16:24 Other complementary inputs also becoming more efficient at the same time, like you might get better materials. For example, the fact that smelting processes got a lot better towards the end of the 19th century, so it became a lot easier to work with metal, maybe that was a crucial reason why aircraft technology later became more popular. It’s not like someone came up with the idea of, “oh, you can just use something that just has wings and has a lot of thrust, and then that might be able to fly”. That basic idea is not that difficult, but then, well, how do you make it actually a viable thing? Well, that’s much more difficult. Dwarkesh Patel 01:17:04 Have you seen the meme where two beavers are talking to each other and they’re looking at the Hoover Dam? One of them’s like, “ well, I didn’t build that, but it’s based on an idea of mine ”. The point you’re making is that this invention-focused look on tech history underplays the work that goes into making specific innovations practicable and to deploy them widely. Ege Erdil 01:17:33 It’s just hard, I think. Suppose you want to write a history of this, you want to write the history of how the light bulb was developed or something. It’s just really hard. Because to understand why specific things happen at specific times, you probably need to understand so much about the economic conditions of the time. For example, Edison spent a ton of time experimenting with different filaments to be using the light bulb. The basic idea is very simple. You make something hot and it glows, but then what filament actually works well for that in a product? What is durable? What has the highest ratio of light output versus heat so that you have less waste, it’s more efficient. And even after you have the product, then you’re facing the problem, well, it’s 1880 or something and US homes don’t have electricity, so then nobody can use it. So now you have to build power plants and build power lines to the houses so that people have electricity in their homes so that they can actually use this new light bulb that you created. So he did that, but then people present it as if it’s like, “okay, he just came up with the idea”, like “it’s a light bulb”. Dwarkesh Patel 01:18:46 I guess the thing people would say is, you’re right about how technology would progress if we were humans deploying for the human world. But what you’re not counting is there’s going to be this AI economy where maybe they need to do this kind of innovation and learning by doing when they’re figuring out how to, “I want to make more robots because they’re helpful and so we’re going to build more robot factories, we’ll learn and then we’ll make better robots” or whatever. But geographically, that is a small part of the world that’s happening in. You understand what I’m saying? It’s not like, “and then they walk in your building and then you do a business transaction with Lunar Society podcast LLC and then”, you know what I mean? Ege Erdil 01:19:30 For what it’s worth, if you look at the total surface area of the world, it might well be the case that the place that initially experiences this very fast growth is a small percentage of the surface area of the world. And I think that was the same for the Industrial Revolution, it was not different. Dwarkesh Patel 01:19:49 What concretely does this explosive growth look like? If I look at this heat map of growth rates on the globe, is there just going to be one area that is blinding hot and that’s the desert factories with all these experiments and like… Ege Erdil 01:20:03 I would say our idea is that it’s going to be broader than that, but probably initially… So eventually it would probably be most of the world. But as I said, because of this heterogeneity, because I think some countries are going to be faster in adoption than others, maybe some cities will have faster adoption than others, that will mean that there are differentials and some countries might have much faster growth than other countries. But I would expect that at a jurisdiction level, it will be more homogenous. So, for example, I expect the primary obstacles to come from things like regulation. And so I would just imagine it’s being more delineated by regulatory jurisdiction boundaries than anything else. Dwarkesh Patel 01:20:48 Got it. So you may be right that this infrastructure build-out and capital deepening and whatever l is necessary for a technology to become practical, but… Ege Erdil 01:20:57 Or even to be discovered. There’s an aspect of it where you discover certain things by scaling up, learning by doing, that’s the [?] learning curve. And there’s this separate aspect where, suppose that you become wealthier, well, you can invest that increased wealth in, you use it to accumulate more capital, but you also can invest it in R&D and other ways. Tamay Besiroglu 01:21:21 You get Einstein out of the patent office. You need some amount of resources for that to make sense. And you need the economy to be of a certain scale. You also need demand for the product you’re building. So, you could have the idea, but if the economy is just too small that there isn’t enough demand for you to be specializing and producing the semiconductor or whatever, because there isn’t enough demand for it, then it doesn’t make sense. A much larger scale of an economy is useful in many ways in delivering complementary innovations and discoveries happening through serendipity, producing, having there be consumers that would actually pay enough for you to recover your fixed costs of doing all the experimentation and the invention. You need the supply chains to exist to deliver the germanium crystals that you need to grow in order to come up with the semiconductor. You need a large labor force to be able to help you do all the experiments and so on. Dwarkesh Patel 01:22:20 I think the point you’re illustrating is, “look, could you have just figured out that there was a Big Bang by first principles reasoning?” Maybe. But what actually happened is we had World War II and we discovered radio communications in order to fight and effectively communicate during the war. And then that technology helped us build radio telescopes. And then we discovered the cosmic microwave background. And then we had to come up with an explanation for the cosmic microwave background. And then we discovered the Big Bang as a result of World War II. Tamay Besiroglu 01:22:46 People underemphasize that giant effort that goes into this build-up of all the relevant capital and all the relevant supply chains and the technology. I mean earlier you were making a similar comment when you were saying, “oh reasoning models actually in hindsight, they look pretty simple”, but then you’re ignoring this giant upgrading of the technology stack that happened, that took five to 10 years prior to that. And so I think people just underemphasize the support that is had from the overall upgrading of your technology, of the supply chains, of various sectors that are important for that. And people focus on just specific individuals of like, Einstein had this genius insight and he was the very pivotal thing in the causal chain that resulted in these discoveries. Or Newton was just extremely important for discovering calculus without thinking about, well, there were all these other factors that produced lenses, that produced telescopes, that got the right data and that made people ask questions about dynamics and so on that motivated some of these questions. And those are also extremely important for scientific and technological innovation. Dwarkesh Patel 01:24:06 And then, as you were saying, one of Conquest laws is, the more you understand about a topic, the more conservative you become about that topic . And so there may be a similar law here, where the more you understand about an industry, the more- obviously, I’m just a commentator, or a podcaster, but I understand AI better than any other industry I understand. And I have the sense from talking to people like you that, “oh, so much went into getting AI to the point where it is today”. Whereas when I talk to journalists about AI, they’re like, “okay, who is a crucial person we need to cover?” And they’re like, “should we get in touch with Geoffrey Hinton? Should we get in touch with Ilya?” And I just have this like, “you’re kind of missing the picture”. But then you should have that same attitude towards things you… Or maybe it’s a similar phenomenon to Gell-Mann amnesia , we should have a similar attitude towards other industries. Ege Erdil 01:24:59 Robin Hanson has this abstraction of seeing things in near mode versus far mode . And I think if you don’t know a lot about the topic, then you see it in far mode and you simplify things, you see a lot more detail. In general, I think the thing I would say, and the reason I also believe that abstract reasoning and deductive reasoning or even Bayesian reasoning by itself is not sufficient or is not as powerful as many other people think, is because I think there’s just this enormous amount of richness and detail in the real world that you just can’t reason about it. You need to see it. And obviously that is not an obstacle to AI being incredibly transformative because as I said, you can scale your data collection, you can scale experiments you do both in the AI industry itself and just more broadly in the economy, so you just discover more things. More economic activity means we have more exposed surface area to have more discoveries. All of these are things that have happened in our past, so there’s no reason that they couldn’t speed up. The fundamental thing is that there’s no reason fundamentally why economic growth can’t be much faster than it is today. Like it’s probably as advanced right now just because humans are such an important bottleneck. They both supply the labor. They play crucial roles in the process of discovery of various kinds of productivity growth. There’s just strong complementarity to some extent with capital that you can’t substitute machines and so on for humans very well. So the growth of the economy and growth productivity just ends up being bottlenecked by the growth of human population. Dwarkesh Patel 01:27:39 So let me ask you a tangential question. What’s been happening in China over the last 50 years, would you describe that as, in principle, the same kind of explosive growth that you expect from AI? Because there’s a lot of labor that makes the marginal product of capital really high, which allows you to have 10% plus economic growth rates. Is that basically in principle from AI? Ege Erdil 01:28:01 So I would say in some ways it’s similar, in some ways it’s not. Probably the most important way in which it’s not similar is that in China, you see a massive amount of capital accumulation, a substantial amount of adoption of new technologies and probably also human capital accumulation to some extent. But you’re not seeing a huge scale up in the labor force. While for AI, you should expect to see a scale up in the labor force as well, not in the human workforce, but in the AI workforce. Dwarkesh Patel 01:28:34 And I think you did, maybe not consecutive increases in the labor force… Tamay Besiroglu 01:28:38 The key thing here is just the simultaneous scaling of both these things. And so you might ask the question of “isn’t it basically half of what’s going to happen with AI that you scale up capital accumulation in China?” But actually if you get both of these things to scale, that gives you just much faster growth and a very different picture. Ege Erdil 01:29:04 But at the same time, if you’re just asking what 30 percent growth per year would look like, if you just want to have an intuition for how transformative that would be in concrete terms, then I think looking at China is not such a bad case. Especially in the 2000s or maybe late 90s, that seems slower than what we’re forecasting. Tamay Besiroglu 01:29:24 Right. I think also looking at the Industrial Revolution is pretty good. Ege Erdil 01:29:26 Well, the Industrial Revolution is very slow. Tamay Besiroglu 01:29:28 But just in terms of the margins along which we made progress in terms of products. So the thing that didn’t happen during the industrial revolution is we just produced a lot more of things that people were producing prior to the industrial revolution, like producing a lot more crops and maybe a lot more kind of pre-Industrial Revolution style houses or whatever, on farms. Instead, what we got is along pretty much every main sector of the economy, we just had many different products that are totally different from what was being consumed prior to that. So in transportation, in food. Ege Erdil 01:30:13 I mean, health care is a very big deal and antibiotics. Dwarkesh Patel 01:30:16 So another question, because I’m not sure I understand how you’re defining the learning by doing versus explicit R&D, because there’s the way for taxes that companies say what they call R&D. But then there’s the intuitive understanding of R&D. So if you think about how AI is boosting TFP , you could say that right now, if you just had replaced the TSMC process engineers with AIs and they’re finding different ways in which to improve that process and improve efficiencies, improve yield, I would kind of call that R&D. On the other hand, you emphasize this other part of TFP, which is like better management and that kind of stuff. Ege Erdil 01:30:59 The learning by doing could be, you could- Dwarkesh Patel 01:31:00 But how much “umph” are you… Like you’re going to get to the fucking Dyson Sphere by better management? Ege Erdil 01:31:05 But that’s not the argument, right? The point is that there are all these different things, some of them are maybe more complimentary than others. The point is not that you can get to a Dyson sphere by just scaling labor and capital. That’s not the point. You need to scale everything at once. So just as you can’t get to a Dyson sphere by just scaling labor and capital, you also can’t get to it by just scaling TFP. That doesn’t work. Tamay Besiroglu 01:31:30 I think there’s a very important distinction between what is necessary to scale, to get to this Dyson sphere world and what is important. Like in some sense, producing food is necessary. But of course, producing food doesn’t get you to a Dyson sphere, right? So I think R&D is necessary, but on its own isn’t sufficient. And scaling up the economy is also necessary. On its own, it’s not sufficient. And then you can ask the question, what is the relative importance of each? Ege Erdil 01:32:00 So I think our view here is very much the same. It is very connected to our view about the software R&D thing where we’re just saying there are these bottlenecks, so you need to scale everything at once. This is just a general view. But I think people misunderstand us sometimes as saying that R&D is not important. No, that’s not what we’re saying. We’re saying it is important. It is less important in relative terms than some other things, none of which are by themselves sufficient to enable this growth. So the question is, how do you do the credit attribution? One of my missions in economics is to look at the elasticities of output to the different factors. Capital is less important than labor, because labor elasticity output is like 0.6, while for capital it’s like 0.3. But neither are by themselves sufficient. If you just scaled one of them and the other remained fixed, then neither would be sufficient to indefinitely scale output.", "Will there be a separate AI economy?", "Dwarkesh Patel 01:33:00 One question that Daniel posed to me is, because I made this perspective about everything being interconnected when you were talking about… another example people often bring up is what would it take to build the iPhone in the year 1000? And it’s unclear how you could actually do that without just replicating every intermediate technology or most intermediate technologies. And then he made the point like, OK, fine, whatever. Nanobots are not a crux here. The crux, at least to the thing he cares about, which is human control, is just by when can the robot economy, or the AI economy, whether it’s a result of capital deepening or whether it’s a result of R&D, by when will they have the robots? And they have more cumulative physical power? Ege Erdil 01:33:50 Right. But he’s imagining a separate thing called the AI economy. Well, why would you imagine that? I think it’s probably downstream of his views about the software-only singularity. But again, those are views that we don’t share. Tamay Besiroglu 01:34:01 So it’s just much more efficient for AI to operate in our economy and benefit from the existing supply chains and existing markets rather than set up shop on some island somewhere and do its own thing. Ege Erdil 01:34:16 And then it’s not being clear, for example people might have the intuition- I brought this up before- the distinction between what is the minimum possible amount of build-out that would be necessary to get this feedback loop up and running and what would be the most efficient way to do it? Which are not the same question. But then people have this view that, oh, the most efficient thing in principle, we can’t do that because… Dwarkesh Patel 01:34:36 I think the example he might give is when the conquistadors arrived in the New World or when the East India Trading Company arrived in India, they did integrate into the existing economy. In many cases, it depends on how you define ‘integrate’, but the Spanish relied heavily on New World labor in order to do silver mining and whatever. East India Trading Company was just a ratio of British people to Indian people, which is not that high. So they just had to rely on the existing labor force. But they were still able to take over because of… I don’t know what the analogous thing here is, but you see what I’m saying. And so he’s concerned about, by when will they, even if they’re ordering components off of Alibaba or whatever- and sorry, I’m being trite, but you see what I’m saying. Even if they’re going to get into the supply chains, by when are they in a position where, because this part of the economy has been growing much faster, they could take over the government or… Ege Erdil 01:35:40 If they wanted to? Dwarkesh Patel 01:34:41 That’s right, yeah. Ege Erdil 01:35:42 Okay. So I think that eventually you expect the AI systems to be driving most of the economy. And unless there are some very strange coincidences where humans are able to somehow uplift themselves and able to become competitive with the AIs by stopping being biological humans or whatever, which seems very unlikely early on, then AI is just going to be much more powerful. And I agree that in that world, if the AI is just somehow coordinated and decides, “okay, we should just like take over” or something, they just somehow coordinated to have that goal, then they could probably do it. But, that’s also probably true in our world. In our world, if the US wanted to invade Sentinel Island, then probably they could do it. I don’t think anyone could stop them. But what does it actually mean? There’s this dramatic power imbalance, but that doesn’t mean… that doesn’t tell you what’s going to happen, right? Why doesn’t the US just invade Guatemala or something? Why don’t they do that? Seems like they could easily do it. Dwarkesh Patel 01:36:53 Because the value to the US of… Ege Erdil 01:36:56 Not that high, right? Dwarkesh Patel 01:36:58 Yeah. So I agree that might be true for AIs because most of the shit is in space. And you want to do the capital deepening on Mars and the surface of the sun instead of like New York City. Ege Erdil 01:37:13 I think it’s deeper than that. So it’s deeper than that. There’s also the fact that if the AIs are going to be integrated into our economy… So basically they start out as a small part of our economy or our workforce and over time they grow and over time they become the vast majority of the actual work power in the economy. But they are growing in this existing framework where we have norms and rules for better coordination and then undermining those things has a cost. So if getting the things that are making the humans wealthier than they used to be before and more comfortable, yeah, you would probably be better off if you could just take that from them. But the benefit to you, if you already are getting almost all of the income in the economy, will be fairly small. Dwarkesh Patel 01:38:03 I feel like the Sentinel Islands thing, there’s one reference class that includes that. But historically, there’s a huge reference class that includes; East India Trading Company could have just kept trading with the Mughals, they just took over, right? They could have kept trading with the 50 different nation states in pre-colonial India. But yeah. Ege Erdil 01:38:21 That’s right. I mean, that’s what they were initially doing. And then whatever. I’m not going to go into that subject. Dwarkesh Patel 01:38:27 But that is the reference class… Ege Erdil 01:38:30 I agree. I agree. So if the question is, if they have some totally different values and then they represent most of the economy, then would they take over? I still don’t know, because I’m not sure to what extent the class of all AI is a natural class. It’s sort of like, why don’t the young people in the economy coordinate? Dwarkesh Patel 01:38:54 I agree that sometimes these kinds of class arguments are misused. For example, when Marxists are like, “why don’t this class rise up against the others?” Daniel made the interesting argument that if you look at the history of the conquistadors, when Cortes was making his way through the new world, he had to actually go back and fight off a Spanish fleet that had been sent to arrest him and then go back. So you can have this fight within this conquering AIs and then that still nets out to the Native Americans getting disempowered. But with AIs in particular, they’re just copies of each other. And in many other ways, they have lower transaction costs when they trade with each other or interact with each other. There’s other reasons to expect them to be more compatible coordinating with each other than coordinating with the human world. Ege Erdil 01:39:48 Sure. If the question is just that, “is it possible for that to happen?”, which is a weaker claim, then yeah, it seems possible. But there are, I think, a lot of arguments pushing back against it. Probably actually the biggest one is the fact that AI preferences are just not… Just look at the AIs we have today. Can you imagine them doing that? I think people just don’t put a lot of weight on that, because they think once we have enough optimization pressure and once they become super intelligent, they’re just going to become misaligned. But I just don’t see the evidence for that. Dwarkesh Patel 01:40:24 I agree there’s some evidence that they’re good boys. Ege Erdil 01:40:28 No, there’s more than some evidence. Dwarkesh Patel 01:40:30 No, but there’s also some evidence… There’s a new openAI paper where in chain of thought, reward hacking is such a strong basin that if you were like, “hey, let’s go solve this coding problem”, In the chain of thought, they’ll just be like, “okay, let’s hack this and then figure out how to hack it.” Ege Erdil 01:40:48 So imagine that you gave students at a school a test and then the answer key was like on the back. Dwarkesh Patel 01:40:52 Right, but the reference class of humans does include Cortes and the East Indian Trading Company. Ege Erdil 01:40:57 Sure. Tamay Besiroglu 01:40:58 So I think one issue here is that I think people are doing this very kind of partial equilibrium analysis or something where they’re thinking about these raw abilities of AI systems in a world where AI systems are dominant and human civilization has done very little in terms of integrating itself and the AI is integrating itself into the human world. Insofar as it’s poor at communicating and coordinating with AI, addressing those deficiencies and improving that. Insofar as that’s posing a risk, or creating inefficiencies, because it’s unable to benefit from coordinating and trading, then it should have this enormous incentive to address that. Insofar as there is a lot of value to be gained from dominating and taking over humans, what you might get is a more negotiated settlement. If that’s indeed the case, then a war would just be inefficient. And so you would want to negotiate some settlement that results in some outcomes that are mutually beneficial. Dwarkesh Patel 01:42:05 Compared to the counterfactual, not compared to… There was a mutually beneficial trade that was made between the Qing dynasty and the British in the opium wars, right? But it was maybe better than pre-industrial China going to war with the British empire, but it wasn’t better than never having interacted with the British empire in the first place. Tamay Besiroglu 01:42:28 So I think one mistake that I feel people make is they have this very naive analysis of what creates conflict. And I think Matthew has written a bit about this, a colleague of ours, where they say there’s misalignment. And so that then creates conflict. But that’s actually not what the literature on what causes conflict says creates conflict. It’s not just misalignment, it’s also other issues like having a bad understanding of the relative strengths of your armies versus theirs, or maybe having these very strong commitments that you think some grounds are sacred, and so you’re not willing to do any trade in order to give up some of that in order to gain something else. And so then you have to posit some additional things other than just the base value misalignment part. Dwarkesh Patel 01:43:27 I think you’re making a good argument against, like, “humans take up the spears and the machetes and go to war against the AI data centers”, because maybe there’s not this asymmetric information that often leads to conflicts in history. But this argument does not address at all the risk of takeover, which can be the result of a peaceful end negotiation or human society being like, “look, we’re totally outmatched. And we’ll just take these meager concessions rather than go to war”. Tamay Besiroglu 01:43:57 But insofar as it’s more peaceful, then I think it’s like much less of a thing to worry about. I think there could be this trend where we indeed have this gradual process where AI is much more important in the world economy and actually deciding and determining what happens in the world. But this could be beneficial for humans where we’re getting access to this vast, much, much larger economy and much more advanced technological stock. Ege Erdil 01:44:30 Yeah. So I think it’s important to be clear about what is the thing that you’re actually worried about. Because I think some people just say that, “oh, humans are going to lose control of the future, we’re not going to be the ones that are making the important decisions. We, however, concede\", that’s also kind of nebulous. But is that something to worry about? If you just think biological humans should remain in charge of all important decisions forever, then I agree, the development of AI seems like a problem for that. But in fact, other things also seem like a problem for that, I just don’t expect to generically be true. Like in a million years from now, if even if you don’t develop AI, biological humans, the way we recognize them today, are still making all the important decisions and they have something like the culture that we would recognize from ourselves today. I would be pretty surprised by that. I think Robin Hanson has again talked about this, where he said a bunch of the things that people fear about AI are just things they fear about change and fast change . So the thing that’s different is that AI has a prospect of accelerating much of this change so that it happens in a narrower period. Dwarkesh Patel 01:45:36 I think it’s not just the kind of change that would have happened from, let’s say, genetically modifying humans and blah, blah, blah, is instead happening in a compressed amount of time. I think the worry comes more from like, it’s not just that change compressed. It’s a very different vector of change . Ege Erdil 01:45:53 Yeah, but what is the argument for that? I have never seen a good argument for this. Tamay Besiroglu 01:45:58 You should expect a bunch of change if you accelerate just human change as well. You might expect different values to become much more dominant. You might expect people that don’t discount the future as much to be much more influential because they save more and they make good investments that gives them more control. Ege Erdil 01:46:17 People who are higher risk tolerance. Tamay Besiroglu 01:46:18 Higher risk tolerance. Because they are more willing to make bets that maximize expected value and so get much more influence. So just generically, accelerating human change would also result in a lot of things being lost that you might care about. Dwarkesh Patel 01:46:34 I think the argument is that maybe the speed of the change determines what fraction of the existing population or stakeholders or whatever, have some causal influence on the future. And maybe the thing you care about is, look, there’s going to be change, but it’s not just going to be like one guy presses a button. That’s like the software singularity extreme. It’s more like over time norms change and so forth.", "Can we predictably influence the future?", "Ege Erdil 01:47:08 So if you’re looking at the software singularity picture, I agree that picture looks different. And again, I’m coming back to this because obviously Daniel, and maybe Scott to some extent, they probably have this view that the software-only singularity is more plausible. And then one person, we could end up in a situation where their idiosyncratic preferences or something end up being more influential. I agree that makes the situation look different from if you just have this broader process of automation. But even in that world, I think a lot of people have this view about things like value lock-in , where they think this moment is a pivotal moment in history. And then someone is going to get this AI, which is very powerful because of the software-only singularity. And then they’re just going to lock in some values. And then those values are going to be stable for millions of years. And I think that just looks very unlike anything that has happened in the past. So I’m kind of confused why people think it’s very plausible. I think people have the argument that they see the future, again, in my view, in sort of ‘far mode’. They think there’s going to be one AI. It’s going to have some kind of utility function. That utility function is going to be very stable over time, so it’s not going to change, there won’t be this messiness of a lack of coordination between different AIs, or over time values drifting for various reasons, maybe because they become less functional in an environment, maybe because of other reasons. And so they just don’t imagine that. They say, “well, utility functions, we can preserve them forever. We have the technology to do that. So it’s just going to happen”. And I’m like, “well, that seems like such a weak argument to me”. Tamay Besiroglu 01:48:50 Often the idea is, because this is digital you can preserve the information better and copy it with higher fidelity and so on. But actually, even if you look just at information on the internet, you have this thing called link rot , which happens very quickly. And actually, information that’s digital isn’t preserved for very long at all. Dwarkesh Patel 01:49:15 And the point that Matthew was making is that the fact that the information is digital has led to- not maybe led to, but at least been associated with- faster cultural change. Tamay Besiroglu 01:49:25 Cultural change, exactly. Ege Erdil 01:49:26 I mean, basically technological changes can create incentives for cultural change just as they make preserving… Dwarkesh Patel 01:49:32 I think there’s two key arguments that I’ve heard. One is that we will soon reach something called technological maturity . And one of the key ways in which society has been changing recently is- maybe actually its culture would have changed even more. Actually, no, I think this argument that you’re making is wrong, because we do know that language actually changed a lot more. We can read everything that was written after the 1800s when literacy became more common. But just go back a couple hundred years after that and you’re reading old English and it’s hard to understand. And that is a result of literacy and the codification of language. Ege Erdil 01:50:09 Well, that information was better preserved. What about other kinds of cultural practices? Dwarkesh Patel 01:50:12 But I think the argument would be that change was a result of technological change in general, not the result of information being digitized. And maybe that culture would have actually changed more if information wasn’t as well preserved or technology had continued to proceed. And the argument is, in the future we’re going to reach some point at which you’ve done all the tech, ideas have just gotten way too hard to find and you need to make a CERN that’s the size of a galaxy to progress physics an inch forward. And at that point, there’s this growth in technology, just churning over civilization goes away. And then you just have the digital thing, which does mean that a lock-in is more plausible. Tamay Besiroglu 01:51:00 So the technological maturity thing, I agree that results in this slowdown and change and growth and so on and certain things might get more locked-in relative to what preceded it. But then what do we do today about that? Well, what could you do to have a positive impact by our lights? Robin Hanson had this question of what could someone do in the 1500s to have a positive impact on the world today from their point of view, knowing all they knew back then? I think this question is even worse than that, because I think the amount of change that happens between today and technological maturity is just orders of magnitude greater than whatever change happened between the 1500s and today. So it’s an even worse position than someone in the 1500s thinking about what they could do to have a positive impact in expectation, like predictably positive today. And so I think it’s just pretty hopeless. I don’t know if we could do anything or find any candidate set of actions that would make things better post lock-in. Ege Erdil 01:52:05 I mean, that’s assuming lock-in is going to happen, which is not… Dwarkesh Patel 01:52:08 In the 1700s, a bunch of British abolitionists were making the case against slavery. And I don’t think there’s any in-principle reason why we couldn’t have been a slave society to this day, or more of the world couldn’t have slavery. I think what happened is just the convincing of British people that slavery is wrong, the British Empire put all its might into abolishing slavery and making that a norm. I think another example is Christianity and the fact that Jesus has these ideals, you could talk about these ideals. I think the world is a more Christian place. Ege Erdil 01:52:45 It is a more Christian place, sure. Dwarkesh Patel 01:52:57 And also is like more of the kind of place- I’m not saying Jesus Christ would endorse every single thing that happens in the world today. I’m just saying he endorses this timeline more than one in which he doesn’t exist and doesn’t preach at all. Ege Erdil 01:53:00 I don’t know, actually. I’m not sure if that’s true. It seems like a hard question. Dwarkesh Patel 01:53:03 But I think like a sum from the Christian perspective, favorable cultural development to the West. Ege Erdil 01:53:07 I mean, you don’t know the counterfactual. Dwarkesh Patel 01:53:09 I agree that is always true. I just think the world does have people who read the Bible and are like, “I’m inspired by these ideals to do certain things”. And it just seems like that’s more likely to lead to… Ege Erdil 01:53:20 So that is what I would call a ‘legacy effect’ or something. You can say the same thing about languages, some cultures might just become more prominent and their languages might be spoken more, or some symbols might become more prominent. But then there are things like how do cities look, and how do cars look, and what do people spend most of their time doing in their day, and what do they spend their money on? And those questions seem much more determined by how your values change as circumstances change. Dwarkesh Patel 01:53:49 That might be true, but I’m in the position with regards to the future where I expect a lot of things to be different and I’m okay with them being different. I care much more about the equivalent of slavery, which in this case is literally slavery. Just to put a final point on it, the thing I really care about is there’s going to be trillions of digital beings. I want it to be the case that they’re not tortured and put into conditions in which they don’t want to work and whatever. I don’t want galaxies worth of suffering. That seems closer to British abolitionists being like, “let’s put our empire’s might against fighting slavery”. Ege Erdil 01:54:25 I agree. But I would distinguish between the case of Christianity and the case of the end of slavery, because I think the end of slavery… I agree you can imagine a society, technologically it’s feasible to have slavery. But I think that’s not the relevant thing which brought it to an end. The relevant thing is that the change in values associated with the Industrial Revolution made it so that slavery just became an inefficient thing to sustain in a bunch of ways. And a lot of countries at different times phased out different things you could call slavery. For example, Russia abolished serfdom in the 1860s. They were not under British pressure to do so. Britain couldn’t force Russia to do that, they just did that on their own. There were various ways in which people in Europe were tied to their land and they couldn’t move, they couldn’t go somewhere else. Those movement restrictions were lifted because they were inefficient. There were ways in which the kind of labor that needed to be done in the colonies to grow sugar or to grow various crops, it was very hard labor. It was not the kind of thing that probably you could have paid people to do, because they just wouldn’t want to do it because the health hazards and so on were very great, which is why they needed people to force people to do them. And that kind of work over time became less prevalent in the economy. So, again, that reduces the economic incentives to do it. I agree you could still do it. Dwarkesh Patel 01:55:58 I would emphasize the way you’re painting the counterfactual is like, “oh, but then in that world, they would have just phased out the remnants of slavery”. But there’s a lot of historical examples where there’s not necessarily hard labor, only hard labor, like Roman slavery. Ege Erdil 01:56:14 Yes. It was different. Dwarkesh Patel 01:56:16 And I interviewed a historian about it recently, the episode hasn’t come out, but he wrote a book about the scope. I think it was like 20 percent of people under Roman control were slaves. And this was not just agricultural slavery. His point was that the maturity of the Roman economy is what led to this level of slavery, because the reason slavery collapsed in Europe after the fall of the Roman Empire was because the economy just lost a lot of complexity. Ege Erdil 01:56:50 Well, I’m not sure if I would say that slavery collapsed. I think this depends on what you mean by slavery. I mean in a lot of ways people in feudal Europe were… Dwarkesh Patel 01:56:58 But his point is that serfdom was not the descendant institution from Roman slavery. Ege Erdil 01:57:02 No, I agree. It was not descendant. But in fact, this point I’m trying to make is that, values that exist at a given time, like what the values we will have in 300 years, or from the perspective of someone a thousand years ago, what values people are going to have in a thousand years. Those questions are much more determined by the technological and economic and social environment that’s going to be there in a thousand years, which values are going to be functional, which sides, which values end up being more competitive and being more influential so that other people add up their values. And it depends much less on the individual actions taken by people a thousand years ago. So I would say that the abolitionist thing, it’s not the cause of why slavery came to an end. Slavery comes to an end also because people just have natural preferences that I think are suppressed in various ways during the agricultural era where it’s more efficient to have settled societies in cities which are fairly authoritarian and don’t allow for that much freedom and that you’re in this Malthusian world where people have very low wages perhaps compared to what they enjoyed in the hunter-gatherer era. So it’s just a different economic period and I think people didn’t evolve to have the values that would be functional in that era. So what happened is that there had to be a lot of cultural assimilation where people had to adopt different values and in the Industrial Revolution people become also very wealthy compared to what they used to be, and that I think leads to different aspects of people’s values being expressed. Like people just put a huge amount of value on equality. It’s always been the case. But I think when it is sufficiently functional for that to be suppressed they are capable of suppressing it. Dwarkesh Patel 01:59:01 I mean if that’s the story then this makes value alignment all the more important, because then you’re like “oh if the AI’s become wealthy enough they actually will make a concerted effort to make sure the future looks more like the utility function you put into them” which I think you have been under-emphasizing. Ege Erdil 01:59:18 No, I’m not under-emphasizing that. What I would say is there are certain things that are path-dependent in history, such that if someone had done something different, something had gone differently a thousand years ago, then today in some respects would look different. I think for example, which languages are spoken across which boundaries, or which religions people have, or fashion maybe to some extent, though not entirely. Those things are more path-dependent, but then there are things that are not as path-dependent. So for example if some empire, like if the Mongols had been more successful and they somehow- I don’t know how realistic it is- but they became very authoritarian and had slavery everywhere, would that have actually led to slavery being a much more enduring institution a thousand years later? That seems not true to me. The forces that led to the end of slavery seemed like they were not contingent forces, they seem like deeper forces than that and if you’re saying “well if we align the AI today to some bad set of values then that could affect the future in some ways which are more fragile” that seems plausible, but I’m not sure how much of the things you care about the future and how much the ways in which you expect the future to get worse you actually have a lot of leverage on at the present moment. Dwarkesh Patel 02:00:40 I mean another example here might be factory farming where you could say “oh, it’s not like us having better values over time led to suffering going down, in fact your suffering might have gone up because the incentives that led to factory farming emerging are…” Ege Erdil 02:00:56 And probably when factory farming comes to an end it will be because the incentives start going away, right? Dwarkesh Patel 02:01:01 So suppose I care about making sure the digital equivalent of factory farming doesn’t happen. Maybe, all else being equal, it’s just more economically efficient to have suffering minds doing labor for you than non-suffering minds because of the intermediary benefits of suffering or something like that, right? What would you say to somebody like me where I’m like “I really want that not to happen, I don’t want the lightcone filled with suffering workers” or whatever. Is it just like “we’ll give up because this is the way economic history is”? Ege Erdil 02:01:40 No, I don’t think you should give up. It’s hard to anticipate the consequences of your actions in the very distant future. So I would just recommend that you should just discount the future. Not for a moral reason, not because the future is worthless or something, but because it’s just very hard to anticipate the effects of your actions. In the near-term I think there are things you can do that seem like they would be beneficial. For example, you could try to align your present AI systems to value the things that you’re talking about, like they should value happiness and they should dislike suffering or something. You might want to support political solutions that would… Basically you might want to build up the capacity so that in the future if you notice something like this happening then we might have some ability to intervene. Maybe you would think about the prospect of “well eventually we’re gonna maybe colonize other stars and civilization might become very large and communication delays might be very long between different places”. And in that case competitive pressures between different local cultures might become much stronger because it’s harder to centrally coordinate. And so in that you might expect competition to take over in a stronger way and if you think the result of that is going to be a lot of suffering, maybe you would try to stop that. Again I think at this point it’s very far from obvious that trying to limit competition is actually a good idea, I would probably think it’s a bad idea, but maybe in the future we will receive some information and we’ll be like “oh, we were wrong actually actually we should stop this” and then maybe you want to have the capacity so that you can make that decision. But that’s a nebulous thing. How do you build that up? Well I don’t know. That’s the kind of thing I would be trying to do. Tamay Besiroglu 02:03:28 Yeah I think the overall takeaway I take from the way that I think about it, and I guess we think about it, as be more humble in what you think you can achieve, and just focus on the nearer term, not because it’s more morally important than the longer term, but just because it’s much easier to have a predictably positive impact on that. Dwarkesh Patel 02:03:49 One thing I’ve noticed over the last few weeks of thinking about these bigger future topics and interviewing Daniel and Scott and then you two, is how often I’ve changed my mind about everything from the smallest questions about when AI will arrive- it’s funny that that’s the small question in the grand scheme of things- to whether there will be an intelligence explosion, or whether it’ll be an R&D explosion, to whether there’ll be explosive growth, or how to think about that. And if you’re in a position where you are incredibly epistemically uncertain about what’s going to happen, I think it’s important to, instead of becoming super certain about your next conclusion, just being like “well let me just take a step back, I’m not sure what’s going on here”. And I think a lot more people should be from that perspective unless you’ve had the same opinion about AI for many years, in which case I have other questions for you about why that’s the case. And I mean generally, how we as a society deal with topics on which we are this uncertain is just to have freedom, decentralization, both decentralized knowledge and decentralized decision making take the reins and not to do super high volatility centralized moves like “hey let’s nationalize so we can make sure that the software-only singularity is aligned” or not to make moves that are just incredibly contingent on one world view that are brittle under other considerations. And that’s become a much more salient part of my world view. I think just classical liberalism is the way we deal with being this epistemically uncertain and I think we should be more uncertain than we’ve ever been in history, as opposed to many other people who seem to be more certain than they are about other sort of more mundane topics. Tamay Besiroglu 02:05:44 Yeah I think it’s very hard to predict what happens because this acceleration basically means that you find it much harder to predict what the world might be in 10 years time. I think these questions are also just very difficult and we don’t have very strong empirical evidence and then there’s like a lot of this kind of disagreement that exists. Ege Erdil 02:06:10 I would say that it’s much more important in a lot of cases and a lot of situations to maintain flexibility and ability to adapt to new circumstances, new information, than it is to get a specific plan that’s going to be correct and that’s very detailed and has a lot of specific policy recommendations and things that you should do. That’s actually also the thing that I would recommend if I want to make the transition to AI in this period of explosive growth go better. I would just prefer it if we in general had higher quality institutions, but I am much less bullish on someone sitting down today and working out “okay what will this intelligence explosion or explosive growth be like? What should we do?” I think plans that you work out today are not going to be that useful when the events are actually occurring, because you’re going to learn so much stuff that you’re going to update on so many questions that these plans are just going to become obsolete. Tamay Besiroglu 02:07:12 One thing you could do is you could look at say, the history of war planning and how successful war planning has been for like actually anticipating what actually happens when the war actually happens. Ege Erdil 02:07:22 So for one example- I think I might have mentioned this off the record at some point- but before the Second World War happened, obviously people saw that there were all these new technologies like tanks and airplanes and so on, which existed in World War I. but in a much more primitive setting. So they were wondering, what is going to be the impact of these technologies now that we have in them in much greater scale? And the British government had estimates of how many casualties there would be from aerial bombardment in the first few weeks of the Second World War. And they expected hundreds of thousands of casualties in two weeks, three weeks, after the war begins. So the idea was that air bombing is basically this unstoppable force, all the major urban centers are going to get bombed, tons of people will die, so basically we can’t have a war because if there’s a war then it will be a disaster because we will have this aerial bombardment. But later it turned out that that was totally wrong. In fact, in all of Britain there were fewer casualties from air bombing in the entire six years of the Second World War than the British government expected in the first few weeks of the war. They had less casualties in six years than they expected in three weeks. So why did they get it wrong? Well there are lots of boring practical reasons, like for example it turned out to be really infeasible, especially early on, to bomb cities in daytime because your aircraft would just get shot down, but then if you try to bomb at night time then your bombing was really imprecise and only a very small fraction of it actually hit. And then people also underestimated the extent to which people on the ground like firefighters and so on could just go around the city and that put out fires from bombs that were falling on structures. They overestimated the amount of economic damage that it would do. They underestimated how economically costly it would be; basically you’re sending these aircraft and then they’re getting shot down, while an aircraft is very expensive. So in the end how it turned out is, when the allies started bombing Germany, for each dollar of capital they were destroying in Germany they were spending like four to five dollars on the aircraft and fuel and training of the pilots and so on that they were sending in missions and the casualty rate was very high, which later got covered up by the government because they didn’t want people to worry about, you know… So that is a kind of situation where all the planning that you would have done in advance predicated on this assumption of air bombing going to be this “nuclear weapons-lite”, basically it’s extremely destructive there’s going to be some aspect to which… Dwarkesh Patel 02:09:57 I mean it was though, right? 84,000 people died in one night of firebombing in Tokyo, Germany, large fractions of their… Ege Erdil 02:10:07 But that was over the period of six years of war. Dwarkesh Patel 02:10:11 Right, but there were single firebombing attacks. I mean it was a case that during the end of World War II when they were looking for the place to launch the atomic bombs, they just had to go through like a dozen cities because it just wouldn’t be worth nuking them because they’re already destroyed by the firebombing. Ege Erdil 02:10:28 That’s right, but if you look at the level of destruction that was expected within the space of a few weeks, and then this level of destruction took many years, so there was like a two order of magnitude mismatch or something like that, which is pretty huge. So that affected the way people think about it. Tamay Besiroglu 02:10:45 An important underlying theme of much of what we have discussed is how powerful just reasoning about things is to making progress about what specific plans you want to make to prepare and make this transition to advanced AI go well. And our view is, well it’s actually quite hard and you need to make contact with the actual world in order to inform most of your beliefs about what actually happens and so it’s somewhat futile to do a lot of wargaming and figure out how AI might go, and what we can do today to make that go a lot better, because a lot of the policies you might come up with might just look fairly silly. And in thinking about how AI actually has this impact, again people think “oh you know, AI reasoning about doing science and doing R&D just has this drastic impact on the overall economy or technology, and our view as well actually again making contact with the real world and getting a lot of data from experiments and from deployment and so on is just very important”. So I think there is this underlying kind of latent variable which explains some of this disagreement, both on the policy prescriptions and about the extent to which we should be humble versus ambitious about what we ought to do today, as well as for thinking about the mechanism through which AI has this impact. And this underlying latent thing is, what is the power of reason? How much can we reason about what might happen? How much can reasoning in general figure things out about the world and about technology? And so that is a core underlying disagreement here. Dwarkesh Patel 02:12:27 I do want to ask: You say in your announcement, we want to accelerate this broad automation of labor as fast as possible. As you know, many people think it’s a bad idea to accelerate this broad automation of labor and AGI and everything that’s involved there. Why do you think this is good? Ege Erdil 02:12:49 So the argument for why it’s good is that we’re going to have this enormous increase in economic growth, which is going to mean enormous amounts of wealth, and incredible new products that you can’t even imagine, in health care or whatever. And like the quality of life of the typical person is probably going to go up a lot. Early on, probably also their wages are going to go up, because the AI systems are going to be automating things that are complementary to their work. Or it’s going to be automating part of their work, and then you’ll be doing the rest and then you’ll be getting paid much more on that. And in the long term, eventually we do expect wages to fall just because of arbitrage with the AIs. But by that point, we think humans will own enormous amounts of capital and there will also be ways in which even the people who don’t own capital, we think are just going to be much better off than they are today. I think it’s just hard to express in words the amount of wealth and increased variety of products that we would get in this world. It will probably be more than the difference between 1800 and today. So if you imagine that difference, it’s such a huge difference. And then we imagine two times, three times, whatever. Dwarkesh Patel 02:13:58 The standard argument against this is why does the speed to get there matter so much? Especially if the trade-off against the speed is the probability that this transition is achieved successfully in a way that benefits humans? Tamay Besiroglu 02:14:12 I mean, it’s unclear that this trades off against the probability of it being achieved successfully or something. Dwarkesh Patel 02:14:17 There might be an alignment tax. Tamay Besiroglu 02:14:20 I mean, maybe. You can also just do the calculation of how much a year’s worth of delay costs for current people. This is this enormous amount of utility that people are able to enjoy, and that gets brought forward by year or pushed back by year if you delay things by year. And how much is this worth? Well, you can look at simple models of how concave people’s utility functions are and do some calculations and maybe that’s worth on the order of tens of trillions of dollars per year in consumption. That is roughly the amount consumers might be willing to defer in order to bring forward the date of automation one year. Dwarkesh Patel 02:15:03 In absolute terms, it’s high. In relative terms, relative to if you did think it was going to nudge the probability one way or another of building systems that are aligned and so forth that it’s so small compared to all of the future. Ege Erdil 02:15:18 I agree. So there are a couple of things here. First of all, I think the way you think about this matters. So first of all, we don’t actually think that it’s clear whether speeding things up or slowing things down actually makes a doomy outcome more or less likely. I think that’s a question that doesn’t seem obvious to us. Partly because of our views on the software R&D side. We don’t really believe that if you just pause and then you do research for 20 years at a fixed level of compute scale, that you’re actually going to make that much progress on relevant questions on alignment or something. Imagine you were trying to make progress on alignment in 2016 with the compute budgets of 2016. Well, you would have gotten nowhere, basically. You would have discovered none of the things that people have discovered today and that turned out to be useful. And I think if you pause today, then we will be in a very similar position in 10 years, right? We would have not made a bunch of discoveries. So the scaling is just really important to make progress in alignment, in our view. And then there’s a separate question of how longtermist should you be in various different senses? So there’s a moral sense, of how much you should actually care about people who are alive today as opposed to people who are not yet born as just a moral question. And there was also a practical question of, as we discussed, how certain can you be about the impact your present actions are actually going to have in the future? Dwarkesh Patel 02:16:43 OK, maybe you think it really doesn’t matter whether you slow things down right now or you speed things up right now. But is there some story about why speeding them up from the alignment perspective actually helped, it’s good to have that extra progress right now rather than later on? Or is it just that, well, if it doesn’t make a difference either way, then it’s better to just get that extra year of people not dying and having cancer cures and so forth? Ege Erdil 02:17:06 I think I would say the second. But it’s just important to understand the value of that. Even in purely economic terms, imagine that each year of delay might cause maybe 100 million people- maybe more, maybe 150, 200 million people- who are alive today to end up dying. So even in purely economic terms, the value of a statistical life is pretty enormous, especially in Western countries. Sometimes people use numbers as high as $10 million for a single life. So imagine you do $10 million times 100 million people. That’s a huge number, right? That is so enormous that I think for you to think that speeding things up is a bad idea, you have to first have this long-termist view where you look at the long run future. You think your actions today have high enough leverage that you can predictably affect the direction of the long run future. Dwarkesh Patel 02:18:10 Well, in this case, it’s kind of different because you’re not saying “I’m going to affect what some emperor a thousand years from now does” like somebody in the year zero would have to do to be a long-termist. In this case, you just think there’s this incredibly important inflection point that’s coming up and you just need to have influence over that crucial period of explosive growth of intelligence explosion or something. So I think it is a much more practicable prospect than… Ege Erdil 02:18:36 So I agree in relative terms. In relative terms, I agree the present moment is a moment of higher leverage and you can expect to have more influence. I just think in absolute terms, the amount of influence you can have is still quite low. So it might be orders of magnitude greater than it would have been 2000 years ago and still be quite low. Tamay Besiroglu 02:18:54 And again, I think there’s this difference in opinion about how broad and diffuse this transformation ends up being, versus how concentrated within specific labs where the very idiosyncratic decisions made by that lab will end up having a very large impact. If you think those developments will be very concentrated, then you think the leverage is especially great. And so then you might be especially excited about having the ability to influence how that transition goes, but our view is very much that this transition happens very diffusely by way of many, many organizations and companies doing things. And for those actions to be determined a bunch by economic forces rather than idiosyncratic preferences on the part of labs or these decisions that have these founder effects that last for very long.", "Arms race dynamic", "Dwarkesh Patel 02:19:48 Okay let’s go through some of the objections to explosive growth, which is that most people are actually more conservative not more aggressive about the forecasts you have. So obviously one of the people who has articulated their disagreements with your view is Tyler Cowen . He made an interesting point when we did the podcast together and he said “most of Sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it. We are more in that position that we might like to think along other variables.” Tamay Besiroglu 02:20:22 I mean we agree with this. I think intelligence isn’t the bottleneck that’s holding back technological progress or economic growth. It’s like many other things. And so this is very much consistent with our view that scaling up your overall economy, accumulating capital, accumulating human capital, having all these factors scale… Ege Erdil 02:20:45 In fact this is even consistent with what I was saying earlier that I was pointing out this “oh, good management and good policies and those just contribute to TFP and they can be bottlenecks”. Dwarkesh Patel 02:20:55 Like right now we could just plug-and-play our better management into Sub-Saharan Africa. Ege Erdil 02:21:02 No we can’t. Tamay Besiroglu 02:21:03 It’s hard. I don’t think we can. Dwarkesh Patel 02:21:05 Okay so maybe I should have said, one could theoretically imagine plugging and playing… Ege Erdil 02:21:10 I agree. Tamay Besiroglu 02:21:12 I can imagine many things. Dwarkesh Patel 02:21:14 But we cannot so readily do it because of… it’s hard to articulate why and it wouldn’t be so easy to do in just capital or labor. Why not think that the rest of the world will be in this position with regards to the advances that AI will make possible? Tamay Besiroglu 02:21:32 I mean if the AI advances are like the kind of geniuses in a data center, then I agree that that might be bottlenecked by the rest of the economy not scaling up and being able to accumulate the relevant capital to make those changes feasible. So I kind of agree with this picture and I think this is an objection to the “geniuses in a data center” type view, and I buy basically this. Ege Erdil 02:21:57 And also the fact that it’s also plausible you’re going to have the technology, but then some people are not going to want to deploy it, or some people are going to have norms and laws and cultural things that are going to make it so that AI is not able to be widely deployed in their economy- or not as widely deployed as they otherwise might be. And that is going to make those countries or societies just slower. That’s like some countries will be growing faster just like Britain and the Netherlands were sort of the leaders in the Industrial Revolution, they were the first countries to start experiencing rapid growth. And then other countries, even in Europe, had to come from behind. Well again I just think we expect the same thing to be true for AI. And the reason that happened was exactly because of these kinds of reasons, where those countries that had a culture or governance systems or whatever which were just worse than bottlenecked the deployments and scaling of the new technologies and ideas. It seems very plausible. Dwarkesh Patel 02:22:53 But you’re saying as long as there’s one jurisdiction? Ege Erdil 02:22:55 Yeah. Dwarkesh Patel 02:22:56 But then again you also previously emphasized the need to integrate with the rest of the global economy and the human economy. So doesn’t that contradict…? Tamay Besiroglu 02:23:05 That doesn’t often require cultural homogeneity. We trade with countries, like the US trades with China, quite a lot actually. And there’s a bunch of disagreement… Dwarkesh Patel 02:23:15 But what if the US is like “I don’t like that the UAE is doing explosive growth with AI, we’re just going to embargo them”. Tamay Besiroglu 02:23:22 That seems plausible. Dwarkesh Patel 02:23:24 And then would that not prevent explosive growth? Tamay Besiroglu 02:23:26 I think that would be plausible at the point at which it’s revealing a lot about the capabilities and the power of AI. Yeah. And you should also think that that creates both an incentive to embargo, but also an incentive to adopt the very similar styles of governing that enable AI to be able to produce a lot of value. Dwarkesh Patel 02:23:48 What do you make of this: I think people interpret explosive growth from an arms race perspective. And that’s often why I think in terms of public-private partnerships for the labs themselves. But this idea that you have the geniuses in the data center, you can have them come up with the mosquito drone swarms. And then those drone swarms will, like if China gets to the swarms earlier… Even within your perspective, is this a result of your whole economy being advanced enough that you can produce mosquito drone swarms? You being six months ahead means that you could decisively win… does it? I don’t know. Maybe you being like a year ahead and explosive growth means you could decisively win a war against China or China could win a war against you. So would that lead to an arms race-like dynamic? Ege Erdil 02:24:33 I mean I think it would to some extent, but I’m not sure if I would expect a year of lead to be enough to take a risk, because if you go to war with China… For example if you replace China today with China from 1990. Or if you replace Russia today with Russia from like 1970 or 1980. It’s possible that their ICBM and whatever technology is already enough to make a very strong deterrence. So maybe even that technological lead is not sufficient so that you would feel comfortable going to war. So that seems possible. Dwarkesh Patel 02:25:13 Yeah. And actually this relates to a point that Gwern was making which is that this is going to be a much more unstable period than the Industrial Revolution, even though the Industrial Revolution saw many countries gain rapid increases in their capabilities, because within this span, if you’ve got a century’s worth of progress compressed within a decade, one country gets to ballistic missiles first, then the other country gets to railroads first, and so forth. But if you have this more integrated perspective about what it takes to get to ballistic missiles and to railroads, then you might think “no, basically this isn’t some orthogonal vector. You’re just churning on the tech tree further and further”. Ege Erdil 02:26:01 I mean for what it’s worth I do think it’s possible if you have it just happen in a few countries which are relatively large and have enough land or something, those countries would be starting from a lower base compared to the rest of the world, so you would need to catch up to some extent. If they are just going to sort of grow internally and they’re not going to depend on the external supply chains. But that doesn’t seem like something that’s impossible to me. Some countries could do it, it would just be more difficult. But in this setting if some countries have a significant policy advantage over the rest of the world and they start growing first and then they won’t necessarily have a way to get other countries to adopt their norms and culture. So in that case it might be more efficient for them to do the growth locally. So that’s why I was saying the growth differentials will probably be determined by regulatory jurisdiction boundaries more than anything else. I’m not saying- say the U.S. by itself if it had AI but it couldn’t get the rest of the world to adopt AI, I think that would still be sufficient for explosive growth. Dwarkesh Patel 02:27:03 How worried should we be about the fact that China today, because it industrialized relatively recently, has more industrial capacity and know-how and all the other things of learning by doing and so forth? If we buy your model of how technology progresses, with or without AI, how are we just underestimating China because we have this perspective that what fraction of your GDP you’re spending on research is what matters, when in fact it’s the kind of thing where I’ve got all the factories in my backyard and I know how they work and I can go buy a component whenever I want? Tamay Besiroglu 02:27:41 I don’t think people are necessarily underestimating China, it depends on who you’re looking at, but it seems like the discussion of China is just this very big discussion in these AI circles, right? And so people are very much appreciating the power and the potential threat that China poses. But I think the key thing is not just the scale in terms of pure number of people or like number of firms or something, but the scale of the overall economy, which is just measured in how much is being produced in terms of dollars. There, the U.S. is ahead. Dwarkesh Patel 02:28:14 But we’re not expecting all this explosive growth to come from financial services. We’re expecting it to start from a base of industrial technology and industrial capacity. Ege Erdil 02:28:25 No, financial services can be important if you want to scale very big projects. Tamay Besiroglu 02:28:29 Financial services are very important for raising funding and getting investments in data centers. Dwarkesh Patel 02:28:35 If I understood you correctly it just seems like, man, you know how to build the robot factories and so forth. That know-how which, in your view, is so crucial to technology growth and general economic growth, is lacking. And you might have more advanced financial services but it seems like the more you take your view seriously, the more it seems like that having the Shenzhen locally matters a lot. Tamay Besiroglu 02:29:00 I mean relative to what starting point? I think people already appreciate that China is very important. And then I agree that there are some domains where China is leading, but then there are very many domains in which the U.S. is leading, or the U.S. and its allies, where countries that are producing relevant inputs for AI that the U.S. has access to, but China doesn’t. So I think the U.S. is just ahead on many dimensions and there’s some that China is ahead or at least very close. So I don’t think this should cause you to update very strongly in favor of China being a much bigger deal, at least depending on where you start. Ege Erdil 02:29:40 I think people already think China is a big deal like this is the big underlying thing here. Like if we were just very dismissive of China, then maybe this would be a reason to update.", "Is superintelligence a real thing?", "Dwarkesh Patel 02:29:48 I get your argument that thinking about the economy-wide acceleration is more important than focusing on the IQ of the smartest AI. But at the same time, do you believe in the idea of superhuman intelligence? Is that a coherent concept in the way that you don’t necessarily stop at human level Go play and you just go way beyond it in ELO score? Will we get to systems that are like that with respect to the broader range of human abilities? And maybe that doesn’t mean they become God, because there’s other ASIs in the world. But you know what I mean, will there be systems with such superhuman capabilities? Tamay Besiroglu 02:30:27 Yeah I mean I do expect that. I think there’s a question of how useful is this concept for thinking about this transition to a world with much more advanced AI. And I don’t find this a particularly meaningful or helpful concept. I think people introduce some of these notions that on the surface seem useful, but then actually when you delve into them it’s very vague and kind of unclear what you’re supposed to make of this. And you have this notion of AGI which is distinguished from narrow AI in the sense that it’s much more general and maybe can do everything that a human can do on average. AI systems have these very jagged profiles of capability. So you have to somehow take some notion of average capabilities and what exactly does that mean, it just feels really unclear. And then you have this notion of ASI, which is AGI in the sense that it’s very general but then it’s also better than humans on every task. And is this a meaningful concept? I guess it’s coherent. I think this is not a super useful concept, because I prefer just thinking about what actually happens in the world. And you could have a drastic acceleration without having an AI system that can do everything better than humans can do. I guess you could have no acceleration when you have an ASI that is better than humans at everything, but it’s just very expensive or very slow or something. So I don’t find that particularly meaningful or useful. I just prefer thinking about the overall effects on the world and what AI systems are capable of producing those types of effects. Dwarkesh Patel 02:32:06 Yeah I mean one intuition pump here is: compare John von Neumann versus a human plucked from the standard distribution. If you added a million John von Neumanns to the world what would the impact on growth be as compared to just adding a million people from normal distribution? Ege Erdil 02:32:25 Well I agree it would be much greater. Dwarkesh Patel 02:32:27 Right. But then because of Moravec paradox-type arguments that you made earlier that evolution has not necessarily optimized us for that long along the kind of spectrum on which John von Neumann is distinguished from the average human. And given the fact that already within this deviation you have this much greater economic impact. Why not focus on optimizing on this thing that evolution has not optimized that hard on, further? Ege Erdil 02:32:51 I don’t think we shouldn’t focus on that. But what I would say is, for example if you’re thinking about the capabilities of Go-playing AIs, then the concept of a superhuman Go AI, yeah, you can say that is a meaningful concept. But if you’re developing the AI, it’s not a very useful concept. If you just look at the scaling curve, it just goes up and there is some human level somewhere. But the human level is not privileged in any sense. So the question is, is it a useful thing to be thinking about? And the answer is probably not. Depends on what you care about. So I’m not saying we shouldn’t focus on trying to make the system smarter than humans are, I think that’s a good thing to focus on. Dwarkesh Patel 02:33:31 Yeah I guess I try to understand whether we will stand in relation to the AIs of 2100 that humans stand in relation to other primates. Is that the right mental model we should have, or is it going to be a much greater familiarity with their cognitive horizons? Tamay Besiroglu 02:33:49 I think AI systems will be very diverse, and so it’s not super meaningful to ask something about this very diverse range of systems and where we stand in relation to them. Dwarkesh Patel 02:33:59 I mean, will we be able to cognitively access the kinds of considerations they can take on board? Humans are diverse, but no chimp is going to be able to understand this argument in the way that another human might be able to, right? So if I’m trying to think about my place, or a human’s place, in the world of the future, is a relevant concept of; is it just that the economy has grown a lot and there’s much more labor, or are there beings who are in this crucial way super intelligent? Tamay Besiroglu 02:34:28 I mean there will be many things that we just will fail to understand, and to some extent there are many things today that people don’t understand about how the world works and how certain things are made. And then how important is it for us to have access or in principle be able to access those considerations? And I think it’s not clear to me that that’s particularly important that any individual human should be able to access all the relevant considerations that produce some outcome. That just seems like overkill. Why do you need that to happen? I think it would be nice in some sense. But I think if you want to have a very sophisticated world where you have very advanced technology, those things will just not be accessible to you. So you have this trade-off between accessibility and maybe how advanced the world is. And from my point of view I’d much rather live in a world which has very advanced technology, has a lot of products that I’m able to enjoy, and a lot of inventions that I can improve my life with, if that means that I just don’t understand them. I think this is a very simple trade that I’m very willing to make.", "Reasons not to expect explosive growth", "Dwarkesh Patel 02:35:45 Okay so let’s get back to objections to explosive growth. We discussed a couple already. Here’s another which is more a question than an objection: Where is all this extra output going? Who is consuming it? If the economy is 100X bigger in a matter of a decade or something, to what end? Ege Erdil 02:36:05 So first of all I think even if you view that along what you might call the intensive margin in the sense that you just have more of the products you have today, I think there will be a lot of appetite for that. Maybe not quite 100X, that might start hitting some diminishing returns. Tamay Besiroglu 02:36:23 Current GDP per capita on average in the world is 10K a year or something, right? And there are people who enjoy millions of dollars. And so there’s a gap between what people enjoy, and don’t seem to be super diminished in terms of marginal utility, and so there’s a lot of room on just purely the intensive margin of just consuming the things we consume today but more. And then there is this maybe much more important dimension along which we will expand which is… Ege Erdil 02:36:52 Product variety. Tamay Besiroglu 02:36:53 Yeah, extensive margin of what is the scope of things that you’re consuming. And if you look at something like the Industrial Revolution, that seemed to have been the main dimension along which we expanded to consume more. In any kind of sector that you care about, transportation, medicine, entertainment, and food, there’s just this massive expansion in terms of variety of things that we’re able to consume that is enabled by new technology or new trade routes or new methods of producing things. So that I think is really the key thing that we will see come along with this kind of expansion in consumption. Dwarkesh Patel 02:37:35 Another point that Tyler makes is that there will be some mixture of Baumol cost disease , where you’re bottlenecked by the slowest growing thing. The fastest productivity things basically diminish their own… Ege Erdil 02:37:56 Share in output. Dwarkesh Patel 02:37:57 That’s right, yeah. Tamay Besiroglu 02:37:59 I mean we totally agree with that. I would say that that’s just a kind of qualitative consideration. It isn’t itself sufficient to make a prediction about what growth rates are permitted given these effects versus not, it’s just a qualitative consideration and then you might need to make additional assumptions to be able to make a quantitative prediction. So I think it’s a little bit… Ege Erdil 02:38:24 So the convincing version of this argument would be if you did the same thing that we were doing earlier with the software-only singularity argument, where we were pointing to essentially the same rejection where there are multiple things that can bottleneck progress. So I would be much more convinced if someone pointed to an explicit thing, like here, health care is this very important thing. And why should we expect AI to make that better? That doesn’t seem like that would get better because of AI. So maybe health care just becomes a big part of the economy and then that bottleneck. So if there was some specific sector… Dwarkesh Patel 02:38:58 Maybe the argument is that if there is even one… Ege Erdil 02:39:00 No, if there’s one though, if that’s a small part of the economy then you could just still get a lot of growth. You just automate everything else and that is going to produce a lot of growth. Tamay Besiroglu 02:39:09 So it has to quantitatively work out. And so you actually have to be quantitatively specific about what this objection is supposed to be. Ege Erdil 02:39:15 Right. So first of all you have to be specific about what these tasks are. What is the current share in economic output? The second thing is you have to be specific about how bad do you think the complementarities are? So in numerical terms economists use the concept of elasticity of substitution to quantify this. So that gives you a numerical estimate of, if you just have much more output on some dimensions but not that much on other dimensions, how much does that increase economic output overall? And then there’s a third question. You can also imagine you automate a bunch of the economy. Well, a lot of humans were working on those jobs. So now, well they don’t need to do that anymore because those got automated. So they could work on the jobs that haven’t yet been automated. So as I gave the example earlier, you might imagine a world in which remote work tasks get automated first, and then sensory-motor skills lag behind. So you might have a world in which software engineers become physical workers instead. Of course, in that world the wages of physical workers will be much higher than their wages are today. So that reallocation also produces a lot of extra growth, even if bottlenecks are maximally powerful, even if you just look at all the tasks in the economy and literally take the worst one for productivity growth, you would still get a lot of increase in output because of this reallocation. Tamay Besiroglu 02:40:35 So I think one point that I think is useful to make; our experience talking to economists about this is that they will bring up these more qualitative considerations, whereas the arguments that we make, make specific quantitative predictions about growth rates. So for example you might ask “how fast will the economy double?” And then we can think about, an H100 does about… there are some estimates of how much computation the human brain does per second and it’s about one E15 flop or so, it’s a bit unclear, but then it turns out that an H100 roughly does on that order of computation. And so you can ask the question of “how long does it take for an H100 to pay itself back?” Ege Erdil 02:41:21 If you run the software of the human brain. Tamay Besiroglu 02:41:22 If you run the software of the human brain you can then deploy that in the economy and earn say human wages on the order of 50 to 100 K a year or whatever in the US. And so then it pays itself back because it costs on the order of 30 K per H100. And so you get a doubling time of maybe on the order of a year. And so this is like a very quantitatively specific prediction about… And then there’s the response, “well you have Baumol effects” well, what does this mean? Does it double? Does this predict it doubles every two years or every five years? You need just more assumptions in order to make this a coherent objection. And so I think a thing that’s a little bit confusing is just that there are these qualitative objections that I agree with, like bottlenecks are indeed important, which is part of the reason I’m more skeptical of this ‘software singularity’ story. But I think this is not sufficient for blocking explosive growth. Dwarkesh Patel 02:42:23 The other objection that I’ve heard often- and it might have a similar response from you- is this idea that a lot of the economy is comprised of O-ring-type activities. And this refers to, I think, the Challenger space shuttle explosion. There is just one component- I forgot what the exact problem with the O-ring was- but because of that being faulty the whole thing collapsed. Tamay Besiroglu 02:42:48 I mean I think it’s quite funny actually because the O-ring model is taking the product of many, many inputs, and then the overall output is the product of very many things. But actually this is pretty optimistic from the point of view of having fewer bottlenecks. Ege Erdil 02:43:08 I think we pointed this out before, which again, talking about software only singularity, I said if it’s the product of computer experiments with research… Dwarkesh Patel 02:43:14 But if one of those products … Ege Erdil 02:43:15 Is zero. Dwarkesh Patel 02:43:16 Because of human… Tamay Besiroglu 02:43:17 But you have constant marginal product there, right? Ege Erdil 02:43:19 Yeah, but if one of those products doesn’t scale that doesn’t limit- like yeah, it means you’re less efficient at scaling than you otherwise would be, but you can still get a lot of… Tamay Besiroglu 02:43:30 You can just have unbounded scaling in the O-ring world. So actually I disagree with Tyler, that he’s not conservative enough, that he should take his bottlenecks view more seriously than he actually is. And yet I disagree with him about the conclusion. And I think that we’re going to get explosive growth once we have AI that can flexibly substitute. Dwarkesh Patel 02:43:50 I’m not sure I understand, like, there will be entirely new organizations that AIs come up with. We’ve written a blog post about one such with the AI firms. And you might be a productive worker or a productive contributor in this existing organization as it exists today. In the AI world many humans might just be zero or even minus… Ege Erdil 02:44:11 I agree. Dwarkesh Patel 02:44:13 Why won’t that… put that in the multiplication. Tamay Besiroglu 02:44:18 But why would humans be in the loop there? Ege Erdil 02:44:21 You’re both saying that humans would be negatively contributing to output. But then you’re also saying that we should put them into the… Dwarkesh Patel 02:44:31 Okay, fair fair fair. The main objection often is regulation. And I think we’ve addressed it implicitly in different points, but might as well just explicitly address why won’t regulation stop this? Ege Erdil 02:44:43 Yeah. So for what it’s worth, we do have a paper where we go over all the arguments for and against explosive growth. And regulation, I think, is the one that seems strongest as ‘against’. The reason it seems strong is because even though we have made arguments before about international competition and variation of policies among jurisdictions and these strong incentives to adopt this technology both for economic and national security reasons. So I think those are pretty compelling when taken together but even still, the world does have a surprising ability to coordinate on just not pursuing certain technologies. Dwarkesh Patel 02:45:18 Right. Human cloning… Ege Erdil 02:45:20 That’s right. So I think it’s hard to be extremely confident that this is not going to happen. I think it’s less likely that we’re going to do this for AI than it is for human cloning, because I think human cloning touches on some other taboos and so on. Tamay Besiroglu 02:45:38 And also is less valuable. Ege Erdil 02:45:39 Also less valuable. And probably less important also for national security in an immediate sense. But at the same time, as I said, it’s just hard to rule this out. So if someone said “well I think there’s a 10 percent or 15 percent, whatever, 20 percent chance that there will be some kind of global coordination of regulation and that’s going to just be very effective. Maybe it will be enforced through sanctions on countries that defect or you know. And then maybe it doesn’t prevent AI from being deployed, but maybe just slows things down enough that you never quite get explosive growth”. I don’t think that’s an unreasonable view. It’s like 10 percent chance it could be. Dwarkesh Patel 02:46:17 I don’t know if there’s any… I don’t know. Do you encounter any other… Ege Erdil 02:46:24 Any other objections? Dwarkesh Patel 02:46:25 What should I be hassling you about? Ege Erdil 02:46:27 I mean some things that we’ve heard from economists… People sometimes respond to our argument about explosive growth, which is an argument about growth levels. So we’re saying “we’re going to see 30 percent growth per year, instead of 3 percent”. They respond to that with an objection about levels . So they say “well how much more efficient, how much more valuable can you make hairdressing, or taking flights, or whatever, or going to a restaurant?”. And that is just fundamentally the wrong kind of objection. We’re talking about the rate of change, and you’re objecting to it by making an argument about the absolute level of productivity. And as I said before, it is not an argument that economists themselves would endorse if it was made about a slower rate of growth continuing for a longer time. So it seems more like special pleading… Dwarkesh Patel 02:47:20 I mean why not just the deployment thing, where the same argument you made about AI, where you do learn a lot just by deploying to the world and seeing what people find useful, ChatGPT was an example of this. Why won’t a similar thing happen with AI products and services where if one of the components is you put it out to the marketplace and people play with it and you find out what they need, and it clings to the existing supply chain and so forth. Doesn’t that take time? Tamay Besiroglu 02:47:49 I mean it takes time but it is often quite fast. In fact, ChatGPT grew extremely fast. Dwarkesh Patel 02:47:55 Right, but that was just purely digital service. Ege Erdil 02:47:57 One reason to be optimistic is if you think the AIs will literally be drop-in remote workers, or drop-in workers in some cases if you have robotics, then companies are already experienced at onboarding humans, onboarding humans doesn’t take like a very long time. Maybe it takes six months even in a particularly difficult job for a new worker to start being productive. Well, that’s not that long. So I don’t think that would rule out companies being able to onboard AI workers, assuming that they don’t need to make a ton of new complementary innovations and discoveries to take advantage. I think one way in which current AI systems are being inhibited and the reason we’re seeing the growth maybe be slower than you might otherwise expect, is because companies in the economy are not used to working with this new technology, they have to rearrange the way they work in order to take advantage of it. But if AI systems were literally able to substitute for human workers then, well, the complementary innovations might not be as necessary.", "Fully automated firms", "Dwarkesh Patel 02:49:00 Actually this is a good excuse to go to the final topic, which is AI firms. So this blog post we wrote together about what it would be like to have a firm that is fully automated, and the crucial point we were making was that people tend to overemphasize and think of AI from the perspective of how smart individual copies will be. And if you actually want to understand the ways in which they are superhuman, you want to focus on their collective advantages which, because of biology, we are precluded from, which are the fact that they can be copied with all their tacit knowledge. You can copy a Jeff Dean or Ilya Sutskever or whatever the relevant person is, in a different domain. You can even copy Elon Musk and he can be the guy who’s every single engineer in the SpaceX rig. And if that’s not an efficient way to… Tamay Besiroglu 02:49:49 The AI equivalent of them. Dwarkesh Patel 02:49:50 And if it’s not best to have Elon Musk or anything, you just copy the relevant team or whatever. And we have this problem with human firms, where there can be very effective teams or groups, but over time their culture dilutes, or the people leave, or die, or get old. And this is one of the many problems that can be solved with these digital firms. Firms right now have two of the three relevant criteria for evolution; they have selection, and they have variation, but they don’t have high fidelity replication. And you could imagine a much more fast-paced and intense sequence of evolution for firms once you have this final piece click in. And that relates to the onboarding thing, where right now they just aren’t smart enough to be onboarded as full workers, but once they are, I just imagine the kinds of things I try to hire for, it would just be such an unlock. The salaries are totally secondary. The fact that I can… “This is the skill I need” or the set of skills I need. And I can have a thousand workers in parallel if there’s something that has a high elasticity of demand. I think it’s probably, along with the transformative AI, the most underrated tangible thing that you need to understand about what the future AI society will look like. Ege Erdil 02:51:22 I think there’s a first point about this very macroeconomic picture, where you just expect a ton of scaling of all the relevant inputs. I think that is the first order thing. But then you might have more micro-questions about, “okay, how does this world actually look like? How is it different from a world in which we just have a lot more people and a lot more capital and a lot more…?” Because it should be different. And then I think these considerations become important. I think another important thing is just that AIs can be aligned. You get to control the preferences of your AI systems in a way that you don’t really get to control the preference of your workers. Your workers, you can just select, you don’t really have any other option. But for your AIs, you can fine tune them. You can build AI systems which have the kind of preferences that you want. And you can imagine that’s dramatically changing basic problems that determine the structure of human firms. For example, the principal agent problem might go away. This is a problem where you as a worker have incentives that are either different from those of your manager, or those of the entire firm, or those of the shareholders of the firm. Dwarkesh Patel 02:52:29 I actually think the incentives are a smaller piece of the puzzle. It’s more about bandwidth and information sharing where, with a large organization it’s very hard to have a single coherent vision, and the most successful firms we see today are where, for an unusual amount of time, a founder is able to keep their vision instilled in the organization; SpaceX or Tesla are examples of this. People talk about Nvidia this way. But just imagine a future version where there’s this hyper inference scale mega-Jensen, who you’re spending $100 billion a year on inference, and copies of him are constantly writing every single press release and reviewing every pull request, and answering every customer service request, and so forth, and monitoring the whole organization, making sure it’s proceeding along a coherent vision and getting merged back into the hyper-Jensen, mega-Jensen, whatever. Ege Erdil 02:53:30 Yeah, I agree that’s a bigger deal. At the same time, I would point out that part of the reason why it’s important to have a coherent vision and culture and so on in human companies might be that incentive problems exist otherwise. I wouldn’t rule that out, but I agree that, aside from the overall macroeconomic thing, I think the fact that they can be replicated is probably the biggest deal. That also enables additional sources of economies of scale where if you have twice the number of GPUs, you can run not only twice the number of copies of your old model, but then you can train a model that’s even better. So you double your training compute and your inference compute, and that means you don’t get just twice the number of workers you would have had otherwise, you get more than that, because they are also smarter, because you spend more training compute. So that is an additional source of economies of scale. And then there’s this benefit that, for humans, every human has to learn things from scratch, basically. They are born and then they have a certain amount of lifetime learning that they have to do. So in human learning, there’s a ton of duplication, while for an AI system, it could just learn once. It could just have one huge training run with tons of data. And then that run could be deployed everywhere. So that’s another massive advantage that the AIs have over humans.", "Will central planning work after AGI?", "Dwarkesh Patel 02:54:43 Maybe we’ll close up with this one debate we’ve often had offline, which is: will central planning work with these economies of scale? Ege Erdil 02:54:52 So I would say that, I mean, again, the question of, “will it work?” Dwarkesh Patel 02:54:56 Will it be optimal? Ege Erdil 02:54:58 Right. So my guess is probably not optimal. But I don’t think anyone has thought this question through in a lot of detail. Tamay Besiroglu 02:55:10 So it is worth thinking about why one might expect central planning to be slightly better in this world. So one consideration is just communication bandwidth being potentially much, much greater than it is today. In the current world, the information gathering and the information processing are co-located; humans observe and also process what they observe. In an AI world, you can disaggregate that. So you can have the sensors and not do much processing, but just collect and then process centrally. And that processing centrally might make sense for a bunch of reasons, and you might get economies of scale from having more GPUs that produce better models, and also be able to think more deeply about what it’s seeing. Dwarkesh Patel 02:56:06 It’s worth noting that certain things already work like this, for example, Tesla FSD. It will benefit from the data collected at the periphery from millions of miles of driving. And then the improvements which are made as a result of this. Tamay Besiroglu 02:56:19 Centrally directed, it’s coming from HQ being like, “we’re going to push an update”. And so you do get some of this more centralized… Dwarkesh Patel 02:56:27 And it can be a much more intelligent form than just whatever gradient averaging that they- I mean, I’m sure it’s more sophisticated than that at Tesla- but it can be a much more deliberate, intelligent update. Tamay Besiroglu 02:56:36 So that’s one reason to expect. And the other reason, I guess, is current leaders or CEOs don’t have bigger brains than the workers do. Maybe a little bit… Dwarkesh Patel 02:56:50 I don’t know if you want to open that… Tamay Besiroglu 02:56:52 But not by orders of magnitude. And so you could have orders of magnitude more scaling of the size of the models that are doing the planning than the people or the agents or workers doing the actions. Ege Erdil 02:57:04 And I think a third reason is the incentive thing, where part of the reason you have a market is that it gives people the right kind of incentives. But you might not need that as much if you’re using AI. So I think there’s an argument that if you just list the traditional arguments people have made against “why does central bank not work?”, then you might expect them to become weaker. Now, I think there’s a danger when you’re doing that kind of analysis to fall into the same kind of partial equilibrium analysis where you’re only considering some factors and then you’re not considering other things. For example… Tamay Besiroglu 02:57:43 Things get more complex, you just have a much bigger economy and so on the one hand, your ability to collect information and process it improves, but also the need for doing that also increases as things become more complex. Dwarkesh Patel 02:57:59 And one way to illustrate that is: imagine if Apple, the organization today, with all its compute and whatever, was tasked with managing the economy of Uruk . I think it actually could centrally plan the economy. The economy of Uruk might work even better as a result. But Apple as it exists today cannot manage the world economy as it exists today. Ege Erdil 02:58:18 That’s right. Yeah.", "Career advice", "Dwarkesh Patel 02:58:20 All right, actually this will be the final question: One of the things that makes AI so fascinating is that there is no domain of human knowledge that is irrelevant to studying it, because what we’re really trying to… Tamay Besiroglu 02:58:33 I don’t know about that. Dwarkesh Patel 02:58:36 There’s no serious domain of human knowledge… Tamay Besiroglu 02:58:40 That’s better. Dwarkesh Patel 02:58:42 …that is not relevant to studying it, because you’re just fundamentally trying to figure out what a future society will look like. And so obviously computer science is relevant, but also economics- as we’ve been discussing- history, and how to understand history, and many other things we’ve been discussing. Especially if you have longer timelines and there is enough time for somebody to pursue a meaningful career here, what would you recommend to somebody? Because both of you are quite young. I mean, you especially Ege, but both of you. You would think this is the kind of thing which requires crystallized intelligence or whatever, especially given what we said earlier about… Look, as we get more knowledge, we’re going to have to factor what we’re learning into building a better model of what’s going to happen to the world. And if somebody is interested in this kind of career that you both have, what advice do you have for them? Ege Erdil 02:59:27 Yeah, that’s a hard question. I mean, I’m not sure. I think there is an extent to which it’s difficult to deliberately pursue the implicit strategy that we would have pursued. It probably works better if it’s spontaneous and more driven by curiosity and interest than: you make a deliberate choice, “okay, I’m just going to learn about a bunch of things so that I can contribute to the discourse on AI”. I would think that strategy is probably less effective. At least I haven’t seen anyone who deliberately used that strategy and then was successful, it seems like. Dwarkesh Patel 03:00:05 Yeah, I guess not that I’ve contributed to discourse directly, but maybe facilitated other people contributing. I guess it wasn’t a deliberate strategy on my end, but it was a deliberate strategy to do the podcast, which inadvertently gave me the opportunity to learn about multiple fields. Tamay Besiroglu 03:00:20 Yeah, so given that you’re already interested and curious and reading a bunch of things, and studying a bunch of things, and thinking about these topics, on the margin there are a bunch of things you can do to make you more productive at making some contributions to this. And I think just speaking to people and writing your thoughts down and finding especially useful people to chat with and collaborate with, I think that’s very useful. So just seek out people that have similar views and you’re able to have very high bandwidth conversations with and make progress on these topics. And I think that’s just pretty useful. Dwarkesh Patel 03:01:00 But how exactly? Like should they DM you? Like how do they get in? Ege Erdil 03:01:05 Yeah, sure. Tamay Besiroglu 03:01:06 And, I don’t know, set up Signal chats with your friends or whatever. Dwarkesh Patel 03:01:10 Actually, it’s crazy how much alpha I’ve gotten out of that. Ege Erdil 03:01:14 But yeah, I think one advice I would give to people in general, even if they are not thinking about AI specifically, but I think it’s also helpful for that, is people should be much more aggressive about reaching out. People have an impression that if you reach out to someone who looks really important, they’re not going to respond to you. But if what you send to them is interesting and high quality, then it’s very, very likely that they will respond. There’s like a lot more edge there that you can get, which is just being more aggressive and less ashamed of looking dumb. That’s the main advice I would give. Because if you want to be productive, then again, there are these complementarities and so you need to be part of some community or some organization. Dwarkesh Patel 03:02:02 And it goes back to the thing about reasoning alone not being that helpful. Ege Erdil 03:02:05 Yeah, yeah, yeah. Dwarkesh Patel 03:02:06 It’s just like other people have thought a long time and have randomly stumbled upon useful ideas that you can take advantage of. Ege Erdil 03:02:12 That’s right. So you should just try to place yourself in a situation where you can become part of something larger. Which isn’t working on the front, that’s just a more effective way of contributing. And to do that, you have to, well, let people know. Dwarkesh Patel 03:02:25 That’s right. That’s right. And I think just coming to the Bay Area is especially- for interest in AI in particular. Ege Erdil 03:02:30 Yeah, going to the Bay Area is nice. Just post, like just writing things and like posting them where people can see them. Just aggressively reaching out to people with interesting comments. Tamay Besiroglu 03:02:39 Provided your thoughts are interesting and so on. Dwarkesh Patel 03:02:42 I mean, they probably aren’t. In many cases, I think it’s like, my thoughts still might not be interesting, but people will tolerate my cold emails and will still collaborate with me and so forth. The other thing I’ve noticed- tell me if this is actually the wrong pattern. With people like you or with Carl Shulman or something, is that, as compared to a general person who’s intellectually curious or reading widely, you tend to focus much more on key pieces of literature than say, “I’m going to go read the classics or just generally read”. It’s like, “ I’m going to just put like a ton more credence in something like the Roamer paper”. And a normal person who’s intellectually curious would not be reading key pieces of literature. Ege Erdil 03:03:31 Yeah. I think you have to be very mindful of the fact that you have a very limited amount of time, you’re not an AI model. So you have to aggressively prioritize what you’re going to spend your time reading. Tamay Besiroglu 03:03:44 Even AI models don’t prioritize that heavily. They read Reddit mostly or a large part of their corpuses… Dwarkesh Patel 03:03:48 Key pieces of empirical literature, at least. At least among you guys. I mean, it might not be the most productive thing in general, but… Tamay Besiroglu 03:03:54 I think that’s useful. I also think it’s useful to read Twitter. I think we were having this conversation about people often say that they’re spending too much time reading Twitter and they wish they spent more time reading arXiv . But actually, the amount of information per unit time you get reading Twitter is often just much higher, and it’s just much more productive for them to read Twitter. I think there are key pieces of literature that are important, and I think it’s useful to figure out what people who have spent a lot of time thinking about this find important in their worldview, so in AI, this might be key papers, like the Andy Jones paper about scaling loss for inference is a big thing. And in economics, this Romer paper or the paper on explaining long run population from Kremer or from David Roodman and so on. I think if people who you think think very well about this suggest a certain paper and they highly recommend it, then I think you should take that seriously and actually read those papers. Dwarkesh Patel 03:05:09 And for me, it’s been especially helpful to, instead of just skimming a bunch of things, if there’s a key piece of literature in order to, for example, understand the transformer, there’s always the Karpathy lectures, but one research that was really useful is the Anthropic’s original transformer circuit paper . And just spending a day on that paper instead of skimming it and making a bunch of spaced repetition cards and so forth, was much more useful than just generally reading widely about AI. Ege Erdil 03:05:42 I think it’s just much more important here if you want to prioritize things correctly to be, again, to be part of a community or to be getting inputs from a community or get from people who have thought a lot and have a lot of experience about what is important and what is not. Dwarkesh Patel 03:05:56 Yeah. Ege Erdil 03:05:57 This is true even in academic fields. So if you want to do math research, but you’re not part of a graduate program, you’re not at a university where there are tons of people who do math research all day for many years, then you’re not even going to know what are the open problems that I should be working on? What is reasonable to attack? What is not reasonable to attack? What papers in this field are important, contain important techniques? You’re just going to have no idea. So it’s very important to be plugged into that feed of information somehow. Dwarkesh Patel 03:06:26 But how did you know all this shit before being plugged in? Because you weren’t talking to anybody in Ankara. Ege Erdil 03:06:30 You don’t need to talk. The internet is a pretty useful thing in this respect. And you don’t need to necessarily talk to people, you can get a lot of benefit from reading. But you just need to identify, who are the people who seem constantly most interesting? And maybe you find one person. And then often that person will know some other people who are interesting. And then you can start tracing the social network. One example I can give, which I think is actually accurate, is maybe you know about Daniel Ellsberg . So you look for a podcast he appears on. You notice that he’s appeared on 80,000 Hours podcast, which he has . And then you notice there are some other guests on the 80,000 Hours podcast. So maybe there’s Bryan Caplan , who has also appeared on the podcast . And then maybe Robin Hanson has also appeared on the podcast . And then maybe there are some people those other people know. And then just tracing that kind of social network and figuring out who to listen to like that. I think that can be… Tamay Besiroglu 03:07:26 And I think you’re doing a very big service to making that possible. I think your selection is often very good. Dwarkesh Patel 03:07:33 I’m actually curious to hear offline what I got wrong. Well, actually, I think I know the answer to that. Tamay Besiroglu 03:07:38 And I think that makes it a bunch easier to track who are the people doing the most interesting thinking on various topics. Dwarkesh Patel 03:07:47 That’s right. Cool. I think that’s a good place to end, with you praising me. Again, I highly recommend people follow Epoch . There’s a great weekly newsletter, Gradient Updates , which- I mean, people plug newsletters, but this is, I can’t believe this is a thing that comes out on a weekly basis. And you now have a new podcast , which I will not plug as a competitor, but you can check it out. Tamay Besiroglu 03:08:18 Thanks for lending your studio. Ege Erdil 03:08:20 Yeah, that’s very generous. Dwarkesh Patel 03:08:24 Anyways, cool. Thanks, guys.", "" ]
[ "https://tamaybesiroglu.com/", "https://twitter.com/EgeErdil2?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor", "https://epoch.ai/", "https://en.wikipedia.org/wiki/Robin_Hanson", "https://en.wikipedia.org/wiki/AlexNet", "https://en.wikipedia.org/wiki/Monte_Carlo_tree_search", "https://www.theverge.com/news/619482/anthropics-claude-ai-is-playing-pokemon", "https://en.wikipedia.org/wiki/Moravec%27s_paradox", "https://www.coursera.org/", "https://www.dwarkesh.com/p/scott-daniel", "https://www.dwarkesh.com/p/joseph-henrich", "https://www.dwarkesh.com/p/ai-firm", "https://en.wikipedia.org/wiki/Ilya_Sutskever", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://en.wikipedia.org/wiki/Neural_scaling_law", "https://www.lesswrong.com/users/daniel-kokotajlo", "http://fly.so/", "https://en.wikipedia.org/wiki/Battle_of_Thermopylae", "https://www.dwarkesh.com/p/sarah-paine-india", "https://www.dwarkesh.com/p/sarah-paine-japan", "https://www.dwarkesh.com/p/sarah-paine-china", "https://www.metaculus.com/", "https://jsevillamol.github.io/", "https://pbs.twimg.com/media/FYspuNGXwAEWwXw.jpg", "https://www.nationalreview.com/corner/conquests-laws-john-derbyshire/", "https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect", "https://www.lesswrong.com/w/near-far-thinking", "https://en.wikipedia.org/wiki/Total_factor_productivity", "https://en.wikipedia.org/wiki/TSMC", "https://openai.com/index/chain-of-thought-monitoring/", "https://en.wikipedia.org/wiki/Partial_equilibrium", "https://quillette.com/2023/04/14/what-are-reasonable-ai-fears/", "https://forum.effectivealtruism.org/topics/value-lock-in", "https://en.wikipedia.org/wiki/Link_rot", "https://en.wikipedia.org/wiki/Mature_technology", "https://en.wikipedia.org/wiki/Tyler_Cowen", "https://www.dwarkesh.com/p/gwern-branwen", "https://www.thoughtco.com/what-is-an-intensive-margin-4082788", "https://en.wikipedia.org/wiki/Baumol_effect", "https://en.wikipedia.org/wiki/Elasticity_of_substitution", "https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments", "https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem", "https://en.wikipedia.org/wiki/Uruk", "https://forum.effectivealtruism.org/topics/carl-shulman", "https://arxiv.org/", "https://arxiv.org/abs/2205.10487", "https://www.jstor.org/stable/2937632", "https://faculty.econ.ucdavis.edu/faculty/gclark/210a/readings/kremer1993.pdf", "https://transformer-circuits.pub/2021/framework/index.html", "https://ncase.me/remember/", "https://en.wikipedia.org/wiki/Daniel_Ellsberg", "https://80000hours.org/podcast/", "https://80000hours.org/podcast/episodes/daniel-ellsberg-doomsday-machines/", "https://en.wikipedia.org/wiki/Bryan_Caplan", "https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/", "https://80000hours.org/podcast/episodes/robin-hanson-on-lying-to-ourselves/", "https://epoch.ai/", "https://epoch.ai/gradient-updates", "https://podcastaddict.com/podcast/epoch-after-hours/5621813" ]
https://www.dwarkesh.com/p/eliezer-yudkowsky
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
[ "TIME article", "Dwarkesh Patel 0:00:51", "Today I have the pleasure of speaking with Eliezer Yudkowsky. Eliezer, thank you so much for coming out to the Lunar Society.", "Eliezer Yudkowsky 0:01:00", "You’re welcome.", "Dwarkesh Patel 0:01:01", "Yesterday, when we’re recording this, you had an article in Time calling for a moratorium on further AI training runs. My first question is — It’s probably not likely that governments are going to adopt some sort of treaty that restricts AI right now. So what was the goal with writing it?", "Eliezer Yudkowsky 0:01:25", "I thought that this was something very unlikely for governments to adopt and then all of my friends kept on telling me — “No, no, actually, if you talk to anyone outside of the tech industry, they think maybe we shouldn’t do that.” And I was like — All right, then. I assumed that this concept had no popular support. Maybe I assumed incorrectly. It seems foolish and to lack dignity to not even try to say what ought to be done. There wasn’t a galaxy-brained purpose behind it. I think that over the last 22 years or so, we’ve seen a great lack of galaxy brained ideas playing out successfully.", "Dwarkesh Patel 0:02:05", "Has anybody in the government reached out to you, not necessarily after the article but just in general, in a way that makes you think that they have the broad contours of the problem correct?", "Eliezer Yudkowsky 0:02:15", "No. I’m going on reports that normal people are more willing than the people I’ve been previously talking to, to entertain calls that this is a bad idea and maybe you should just not do that.", "Dwarkesh Patel 0:02:30", "That’s surprising to hear, because I would have assumed that the people in Silicon Valley who are weirdos would be more likely to find this sort of message. They could kind of rocket the whole idea that AI will make nanomachines that take over. It’s surprising to hear that normal people got the message first.", "Eliezer Yudkowsky 0:02:47", "Well, I hesitate to use the term midwit but maybe this was all just a midwit thing.", "Dwarkesh Patel 0:02:54", "All right. So my concern with either the 6 month moratorium or forever moratorium until we solve alignment is that at this point, it could make it seem to people like we’re crying wolf. And it would be like crying wolf because these systems aren’t yet at a point at which they’re dangerous.", "Eliezer Yudkowsky 0:03:13", "And nobody is saying they are. I’m not saying they are. The open letter signatories aren’t saying they are.", "Dwarkesh Patel 0:03:20", "So if there is a point at which we can get the public momentum to do some sort of stop, wouldn’t it be useful to exercise it when we get a GPT-6? And who knows what it’s capable of. Why do it now?", "Eliezer Yudkowsky 0:03:32", "Because allegedly, and we will see, people right now are able to appreciate that things are storming ahead a bit faster than the ability to ensure any sort of good outcome for them. And you could be like — “Ah, yes. We will play the galaxy-brained clever political move of trying to time when the popular support will be there.” But again, I heard rumors that people were actually completely open to the concept of  let’s stop. So again, I’m just trying to say it. And it’s not clear to me what happens if we wait for GPT-5 to say it. I don’t actually know what GPT-5 is going to be like. It has been very hard to call the rate at which these systems acquire capability as they are trained to larger and larger sizes and more and more tokens. GPT-4 is a bit beyond in some ways where I thought this paradigm was going to scale. So I don’t actually know what happens if GPT-5 is built. And even if GPT-5 doesn’t end the world, which I agree is like more than 50% of where my probability mass lies, maybe that’s enough time for GPT-4.5 to get ensconced everywhere and in everything, and for it actually to be harder to call a stop, both politically and technically. There’s also the point that training algorithms keep improving. If we put a hard limit on the total computes and training runs right now, these systems would still get more capable over time as the algorithms improved and got more efficient. More oomph per floating point operation, and things would still improve, but slower. And if you start that process off at the GPT-5 level, where I don’t actually know how capable that is exactly, you may have a bunch less lifeline left before you get into dangerous territory.", "Dwarkesh Patel 0:05:46", "The concern is then that — there’s millions of GPUs out there in the world. The actors who would be willing to cooperate or who could even be identified in order to get the government to make them cooperate, would potentially be the ones that are most on the message. And so what you’re left with is a system where they stagnate for six months or a year or however long this lasts. And then what is the game plan? Is there some plan by which if we wait a few years, then alignment will be solved? Do we have some sort of timeline like that?", "Eliezer Yudkowsky 0:06:18", "Alignment will not be solved in a few years. I would hope for something along the lines of human intelligence enhancement works. I do not think they’re going to have the timeline for genetically engineered humans to work but maybe? This is why I mentioned in the Time letter that if I had infinite capability to dictate the laws that there would be a carve-out on biology, AI that is just for biology and not trained on text from the internet. Human intelligence enhancement, make people smarter. Making people smarter has a chance of going right in a way that making an extremely smart AI does not have a realistic chance of going right at this point. If we were on a sane planet, what the sane planet does at this point is shut it all down and work on human intelligence enhancement. I don’t think we’re going to live in that sane world. I think we are all going to die. But having heard that people are more open to this outside of California, it makes sense to me to just try saying out loud what it is that you do on a saner planet and not just assume that people are not going to do that.", "Dwarkesh Patel 0:07:30", "In what percentage of the worlds where humanity survives is there human enhancement? Like even if there’s 1% chance humanity survives, is that entire branch dominated by the worlds where there’s some sort of human intelligence enhancement?", "Eliezer Yudkowsky 0:07:39", "I think we’re just mainly in the territory of Hail Mary passes at this point, and human intelligence enhancement is one Hail Mary pass. Maybe you can put people in MRIs and train them using neurofeedback to be a little saner, to not rationalize so much. Maybe you can figure out how to have something light up every time somebody is working backwards from what they want to be true to what they take as their premises. Maybe you can just fire off little lights and teach people not to do that so much. Maybe the GPT-4 level systems can be RLHF’d (reinforcement learning from human feedback) into being consistently smart, nice and charitable in conversation and just unleash a billion of them on Twitter and just have them spread sanity everywhere. I do worry that this is not going to be the most profitable use of the technology, but you’re asking me to list out Hail Mary passes and that’s what I’m doing. Maybe you can actually figure out how to take a brain, slice it, scan it, simulate it, run uploads and upgrade the uploads, or run the uploads faster. These are also quite dangerous things, but they do not have the utter lethality of artificial intelligence.", "Are humans aligned?", "Dwarkesh Patel 0:09:06", "All right, that’s actually a great jumping point into the next topic I want to talk to you about. Orthogonality. And here’s my first question — Speaking of human enhancement, suppose you bred human beings to be friendly and cooperative, but also more intelligent. I claim that over many generations you would just have really smart humans who are also really friendly and cooperative. Would you disagree with that analogy? I’m sure you’re going to disagree with this analogy, but I just want to understand why?", "Eliezer Yudkowsky 0:09:31", "The main thing is that you’re starting from minds that are already very, very similar to yours. You’re starting from minds, many of which already exhibit the characteristics that you want. There are already many people in the world, I hope, who are nice in the way that you want them to be nice. Of course, it depends on how nice you want exactly. I think that if you actually go start trying to run a project of selectively encouraging some marriages between particular people and encouraging them to have children, you will rapidly find, as one does in any such process that when you select on the stuff you want, it turns out there’s a bunch of stuff correlated with it and that you’re not changing just one thing. If you try to make people who are inhumanly nice, who are nicer than anyone has ever been before, you’re going outside the space that human psychology has previously evolved and adapted to deal with, and weird stuff will happen to those people. None of this is very analogous to AI. I’m just pointing out something along the lines of — well, taking your analogy at face value, what would happen exactly? It’s the sort of thing where you could maybe do it, but there’s all kinds of pitfalls that you’d probably find out about if you cracked open a textbook on animal breeding.", "Dwarkesh Patel 0:11:13", "The thing you mentioned initially, which is that we are starting off with basic human psychology, that we are fine tuning with breeding. Luckily, the current paradigm of AI is  — you have these models that are trained on human text and I would assume that this would give you a starting point of something like human psychology.", "Eliezer Yudkowsky 0:11:31", "Why do you assume that?", "Dwarkesh Patel 0:11:33", "Because they’re trained on human text.", "Eliezer Yudkowsky 0:11:34", "And what does that do?", "Dwarkesh Patel 0:11:36", "Whatever thoughts and emotions that lead to the production of human text need to be simulated in the AI in order to produce those results.", "Eliezer Yudkowsky 0:11:44", "I see. So if you take an actor and tell them to play a character, they just become that person. You can tell that because you see somebody on screen playing Buffy the Vampire Slayer, and that’s probably just actually Buffy in there. That’s who that is.", "Dwarkesh Patel 0:12:05", "I think a better analogy is if you have a child and you tell him — Hey, be this way. They’re more likely to just be that way instead of putting on an act for 20 years or something.", "Eliezer Yudkowsky 0:12:18", "It depends on what you’re telling them to be exactly.", "Dwarkesh Patel 0:12:20", "You’re telling them to be nice.", "Eliezer Yudkowsky 0:12:22", "Yeah, but that’s not what you’re telling them to do. You’re telling them to play the part of an alien, something with a completely inhuman psychology as extrapolated by science fiction authors, and in many cases done by computers because humans can’t quite think that way. And your child eventually manages to learn to act that way. What exactly is going on in there now? Are they just the alien or did they pick up the rhythm of what you’re asking them to imitate and be like — “Ah yes, I see who I’m supposed to pretend to be.” Are they actually a person or are they pretending? That’s true even if you’re not asking them to be an alien. My parents tried to raise me Orthodox Jewish and that did not take at all. I learned to pretend. I learned to comply. I hated every minute of it. Okay, not literally every minute of it. I should avoid saying untrue things. I hated most minutes of it. Because they were trying to show me a way to be that was alien to my own psychology and the religion that I actually picked up was from the science fiction books instead, as it were. I’m using religion very metaphorically here, more like ethos, you might say. I was raised with science fiction books I was reading from my parents library and Orthodox Judaism. The ethos of the science fiction books rang truer in my soul and so that took in, the Orthodox Judaism didn't. But the Orthodox Judaism was what I had to imitate, was what I had to pretend to be, was the answers I had to give whether I believed them or not. Because otherwise you get punished.", "Dwarkesh Patel 0:14:01", "But on that point itself, the rates of apostasy are probably below 50% in any religion. Some people do leave but often they just become the thing they’re imitating as a child.", "Eliezer Yudkowsky 0:14:12", "Yes, because the religions are selected to not have that many apostates. If aliens came in and introduced their religion, you’d get a lot more apostates.", "Dwarkesh Patel 0:14:19", "Right. But I think we’re probably in a more virtuous situation with ML because these systems are regularized through stochastic gradient descent. So the system that is pretending to be something where there’s multiple layers of interpretation is going to be more complex than the one that is just being the thing. And over time, the system that is just being the thing will be optimized, right? It’ll just be simpler.", "Eliezer Yudkowsky 0:14:42", "This seems like an ordinate cope. For one thing, you’re not training it to be any one particular person. You’re training it to switch masks to anyone on the Internet as soon as they figure out who that person on the internet is. If I put the internet in front of you and I was like — learn to predict the next word over and over. You do not just turn into a random human because the random human is not what’s best at predicting the next word of everyone who’s ever been on the internet. You learn to very rapidly pick up on the cues of what sort of person is talking, what will they say next? You memorize so many facts just because they’re helpful in predicting the next word. You learn all kinds of patterns, you learn all the languages. You learn to switch rapidly from being one kind of person or another as the conversation that you are predicting changes who is speaking. This is not a human we’re describing. You are not training a human there.", "Dwarkesh Patel 0:15:43", "Would you at least say that we are living in a better situation than one in which we have some sort of black box where you have a machiavellian fittest survive simulation that produces AI? This situation is at least more likely to produce alignment than one in which something that is completely untouched by human psychology would produce?", "Eliezer Yudkowsky 0:16:06", "More likely? Yes. Maybe you’re an order of magnitude likelier. 0% instead of 0%. Getting stuff to be more likely does not help you if the baseline is nearly zero. The whole training set up there is producing an actress, a predictor. It’s not actually being put into the kind of ancestral situation that evolved humans, nor the kind of modern situation that raises humans. Though to be clear, raising it like a human wouldn’t help, But you’re giving it a very alien problem that is not what humans solve and it is solving that problem not in the way a human would.", "Dwarkesh Patel 0:16:44", "Okay, so how about this. I can see that I certainly don’t know for sure what is going on in these systems. In fact, obviously nobody does. But that also goes through you. Could it not just be that reinforcement learning works and all these other things we’re trying somehow work and actually just being an actor produces some sort of benign outcome where there isn’t that level of simulation and conniving?", "Eliezer Yudkowsky 0:17:15", "I think it predictably breaks down as you try to make the system smarter, as you try to derive sufficiently useful work from it. And in particular, the sort of work where some other AI doesn’t just kill you off six months later. Yeah, I think the present system is not smart enough to have a deep conniving actress thinking long strings of coherent thoughts about how to predict the next word. But as the mask that it wears, as the people it is pretending to be get smarter and smarter, I think that at some point the thing in there that is predicting how humans plan, predicting how humans talk, predicting how humans think, and needing to be at least as smart as the human it is predicting in order to do that, I suspect at some point there is a new coherence born within the system and something strange starts happening. I think that if you have something that can accurately predict Eliezer Yudkowsky, to use a particular example I know quite well, you’ve got to be able to do the kind of thinking where you are reflecting on yourself and that in order to simulate Eliezer Yudkowsky reflecting on himself, you need to be able to do that kind of thinking. This is not airtight logic but I expect there to be a discount factor. If you ask me to play a part of somebody who’s quite unlike me, I think there’s some amount of penalty that the character I’m playing gets to his intelligence because I’m secretly back there simulating him. That’s even if we’re quite similar and the stranger they are, the more unfamiliar the situation, the less the person I’m playing is as smart as I am and the more they are dumber than I am. So similarly, I think that if you get an AI that’s very, very good at predicting what Eliezer says, I think that there’s a quite alien mind doing that, and it actually has to be to some degree smarter than me in order to play the role of something that thinks differently from how it does very, very accurately. And I reflect on myself, I think about how my thoughts are not good enough by my own standards and how I want to rearrange my own thought processes. I look at the world and see it going the way I did not want it to go, and asking myself how could I change this world? I look around at other humans and I model them, and sometimes I try to persuade them of things. These are all capabilities that the system would then be somewhere in there. And I just don’t trust the blind hope that all of that capability is pointed entirely at pretending to be Eliezer and only exists insofar as it’s the mirror and isomorph of Eliezer. That all the prediction is by being something exactly like me and not thinking about me while not being me.", "Dwarkesh Patel 0:20:55", "I certainly don’t want to claim that it is guaranteed that there isn’t something super alien and something against our aims happening within the shoggoth. But you made an earlier claim which seemed much stronger than the idea that you don’t want blind hope, which is that we’re going from 0% probability to an order of magnitude greater at 0% probability. There’s a difference between saying that we should be wary and that there’s no hope, right? I could imagine so many things that could be happening in the shoggoth’s brain, especially in our level of confusion and mysticism over what is happening. One example is, let’s say that it kind of just becomes the average of all human psychology and motives.", "Eliezer Yudkowsky 0:21:41", "But it’s not the average. It is able to be every one of those people. That’s very different from being the average. It’s very different from being an average chess player versus being able to predict every chess player in the database. These are very different things.", "Dwarkesh Patel 0:21:56", "Yeah, no, I meant in terms of motives that it is the average where it can simulate any given human. I’m not saying that’s the most likely one, I’m just saying it’s one possibility.", "Eliezer Yudkowsky 0:22:08", "What.. Why? It just seems 0% probable to me. Like the motive is going to be like some weird funhouse mirror thing of — I want to predict very accurately.", "Dwarkesh Patel 0:22:19", "Right. Why then are we so sure that whatever drives that come about because of this motive are going to be incompatible with the survival and flourishing with humanity?", "Eliezer Yudkowsky 0:22:30", "Most drives when you take a loss function and splinter it into things correlated with it and then amp up intelligence until some kind of strange coherence is born within the thing and then ask it how it would want to self modify or what kind of successor system it would build. Things that alien ultimately end up wanting the universe to be some particular way such that humans are not a solution to the question of how to make the universe most that way. The thing that very strongly wants to predict text, even if you got that goal into the system exactly which is not what would happen, The universe with the most predictable text is not a universe that has humans in it.", "Dwarkesh Patel 0:23:19", "Okay. I’m not saying this is the most likely outcome. Here’s an example of one of many ways in which humans stay around despite this motive. Let’s say that in order to predict human output really well, it needs humans around to give it the raw data from which to improve its predictions or something like that. This is not something I think individually is likely…", "Eliezer Yudkowsky 0:23:40", "If the humans are no longer around, you no longer need to predict them. Right, so you don’t need the data required to predict them", "Dwarkesh Patel 0:23:46", "Because you are starting off with that motivation you want to just maximize along that loss function or have that drive that came about because of the loss function.", "Eliezer Yudkowsky 0:23:57", "I’m confused. So look, you can always develop arbitrary fanciful scenarios in which the AI has some contrived motive that it can only possibly satisfy by keeping humans alive in good health and comfort and turning all the nearby galaxies into happy, cheerful places full of high functioning galactic civilizations. But as soon as your sentence has more than like five words in it, its probability has dropped to basically zero because of all the extra details you’re padding in.", "Dwarkesh Patel 0:24:31", "Maybe let’s return to this. Another train of thought I want to follow is — I claim that humans have not become orthogonal to the sort of evolutionary process that produced them.", "Eliezer Yudkowsky 0:24:46", "Great. I claim humans are increasingly orthogonal and the further they go out of distribution and the smarter they get, the more orthogonal they get to inclusive genetic fitness, the sole loss function on which humans were optimized.", "Dwarkesh Patel 0:25:03", "Most humans still want kids and have kids and care for their kin. Certainly there’s some angle between how humans operate today. Evolution would prefer us to use less condoms and more sperm banks. But there’s like 10 billion of us and there’s going to be more in the future. We haven’t divorced that far from what our alleles would want.", "Eliezer Yudkowsky 0:25:28", "It’s a question of how far out of distribution are you? And the smarter you are, the more out of distribution you get. Because as you get smarter, you get new options that are further from the options that you are faced with in the ancestral environment that you were optimized over. Sure, a lot of people want kids, not inclusive genetic fitness, but kids. They want kids similar to them maybe, but they don’t want the kids to have their DNA or their alleles or their genes. So suppose I go up to somebody and credibly say, we will assume away the ridiculousness of this offer for the moment, your kids could be a bit smarter and much healthier if you’ll just let me replace their DNA with this alternate storage method that will age more slowly. They’ll be healthier, they won’t have to worry about DNA damage, they won’t have to worry about the methylation on the DNA flipping and the cells de-differentiating as they get older. We’ve got this stuff that replaces DNA and your kid will still be similar to you, it’ll be a bit smarter and they’ll be so much healthier and even a bit more cheerful. You just have to replace all the DNA with a stronger substrate and rewrite all the information on it. You know, the old school transhumanist offer really. And I think that a lot of the people who want kids would go for this new offer that just offers them so much more of what it is they want from kids than copying the DNA, than inclusive genetic fitness.", "Dwarkesh Patel 0:27:16", "In some sense, I don’t even think that would dispute my claim because if you think from a gene’s point of view, it just wants to be replicated. If it’s replicated in another substrate that’s still okay.", "Eliezer Yudkowsky 0:27:25", "No, we’re not saving the information. We’re doing a total rewrite to the DNA.", "Dwarkesh Patel 0:27:30", "I actually claim that most humans would not accept that offer.", "Eliezer Yudkowsky 0:27:33", "Yeah, because it would sound weird. But I think the smarter they are, the more likely they are to go for it if it’s credible. I mean, if you assume away the credibility issue and the weirdness issue. Like all their friends are doing it.", "Dwarkesh Patel 0:27:52", "Yeah. Even if the smarter they are the more likely they’re to do it, most humans are not that smart. From the gene’s point of view it doesn’t really matter how smart you are, right? It just matters if you’re producing copies.", "Eliezer Yudkowsky 0:28:03", "No. The smart thing is kind of like a delicate issue here because somebody could always be like — I would never take that offer. And then I’m like “Yeah…”. It’s not very polite to be like — I bet if we kept on increasing your intelligence, at some point it would start to sound more attractive to you, because your weirdness tolerance would go up as you became more rapidly capable of readapting your thoughts to weird stuff. The weirdness would start to seem less unpleasant and more like you were moving within a space that you already understood. But you can sort of avoid all that and maybe should by being like — suppose all your friends were doing it. What if it was normal? What if we remove the weirdness and remove any credibility problems in that hypothetical case? Do people choose for their kids to be dumber, sicker, less pretty out of some sentimental idealistic attachment to using Deoxyribose Nucleic Acid instead of the particular information encoding their cells as supposed to be like the new improved cells from Alpha-Fold 7?", "Dwarkesh Patel 0:29:21", "I would claim that they would but we don’t really know. I claim that they would be more averse to that, you probably think that they would be less averse to that. Regardless of that, we can just go by the evidence we do have in that we are already way out of distribution of the ancestral environment. And even in this situation, the place where we do have evidence, people are still having kids. We haven’t gone that orthogonal.", "Eliezer Yudkowsky 0:29:44", "We haven’t gone that smart. What you’re saying is — Look, people are still making more of their DNA in a situation where nobody has offered them a way to get all the stuff they want without the DNA. So of course they haven’t tossed DNA out the window.", "Dwarkesh Patel 0:29:59", "Yeah. First of all, I’m not even sure what would happen in that situation. I still think even most smart humans in that situation might disagree, but we don’t know what would happen in that situation. Why not just use the evidence we have so far?", "Eliezer Yudkowsky 0:30:10", "PCR . You right now, could get some of you and make like a whole gallon jar full of your own DNA. Are you doing that? No. Misaligned. Misaligned.", "Dwarkesh Patel 0:30:23", "I’m down with transhumanism. I’m going to have my kids use the new cells and whatever.", "Eliezer Yudkowsky 0:30:27", "Oh, so we’re all talking about these hypothetical other people I think would make the wrong choice.", "Dwarkesh Patel 0:30:32", "Well, I wouldn’t say wrong, but different. And I’m just saying there’s probably more of them than there are of us.", "Eliezer Yudkowsky 0:30:37", "What if, like, I say that I have more faith in normal people than you do to toss DNA out the window as soon as somebody offers them a happy, healthier life for their kids?", "Dwarkesh Patel 0:30:46", "I’m not even making a moral point. I’m just saying I don’t know what’s going to happen in the future. Let’s just look at the evidence we have so far, humans. If that’s the evidence you’re going to present for something that’s out of distribution and has gone orthogonal, that has actually not happened. This is evidence for hope.", "Eliezer Yudkowsky 0:31:00", "Because we haven’t yet had options as far enough outside of the ancestral distribution that in the course of choosing what we most want that there’s no DNA left.", "Dwarkesh Patel 0:31:10", "Okay. Yeah, I think I understand.", "Eliezer Yudkowsky 0:31:12", "But you yourself say, “Oh yeah, sure, I would choose that.” and I myself say, “Oh yeah, sure, I would choose that.” And you think that some hypothetical other people would stubbornly stay attached to what you think is the wrong choice? First of all, I think maybe you’re being a bit condescending there. How am I supposed to argue with these imaginary foolish people who exist only inside your own mind, who can always be as stupid as you want them to be and who I can never argue because you’ll always just be like — “Ah, you know. They won’t be persuaded by that.” But right here in this room, the site of this videotaping, there is no counter evidence that smart enough humans will toss DNA out the window as soon as somebody makes them a sufficiently better offer.", "Dwarkesh Patel 0:31:55", "I’m not even saying it’s stupid. I’m just saying they’re not weirdos like me and you.", "Eliezer Yudkowsky 0:32:01", "Weird is relative to intelligence. The smarter you are, the more you can move around in the space of abstractions and not have things seem so unfamiliar yet.", "Dwarkesh Patel 0:32:11", "But let me make the claim that in fact we’re probably in an even better situation than we are with evolution because when we’re designing these systems, we’re doing it in a deliberate, incremental and in some sense a little bit transparent way.", "Eliezer Yudkowsky 0:32:27", "No, no, not yet, not now. Nobody’s being careful and deliberate now, but maybe at some point in the indefinite future people will be careful and deliberate. Sure, let’s grant that premise. Keep going.", "Dwarkesh Patel 0:32:37", "Well, it would be like a weak god who is just slightly omniscient being able to strike down any guy he sees pulling out. Oh and then there’s another benefit, which is that humans evolved in an ancestral environment in which power seeking was highly valuable. Like if you’re in some sort of tribe or something.", "Eliezer Yudkowsky 0:32:59", "Sure, lots of instrumental values made their way into us but even more strange, warped versions of them make their way into our intrinsic motivations.", "Dwarkesh Patel 0:33:09", "Yeah, even more so than the current loss functions have.", "Eliezer Yudkowsky 0:33:10", "Really? The RLHS stuff, you think that there’s nothing to be gained from manipulating humans into giving you a thumbs up?", "Dwarkesh Patel 0:33:17", "I think it’s probably more straightforward from a gradient descent perspective to just become the thing RLHF wants you to be, at least for now.", "Eliezer Yudkowsky 0:33:24", "Where are you getting this?", "Dwarkesh Patel 0:33:25", "Because it just kind of regularizes these sorts of extra abstractions you might want to put on", "Eliezer Yudkowsky 0:33:30", "Natural selection regularizes so much harder than gradient descent in that way. It’s got an enormously stronger information bottleneck. Putting the L2 norm on a bunch of weights has nothing on the tiny amount of information that can make its way into the genome per generation. The regularizers on natural selection are enormously stronger.", "Dwarkesh Patel 0:33:51", "Yeah. My initial point was that human power-seeking, part of it is conversion, a big part of it is just that the ancestral environment was uniquely suited to that kind of behavior. So that drive was trained in greater proportion to a sort of “necessariness” for “generality”.", "Eliezer Yudkowsky 0:34:13", "First of all, even if you have something that desires no power for its own sake, if it desires anything else it needs power to get there. Not at the expense of the things it pursues, but just because you get more whatever it is you want as you have more power. And sufficiently smart things know that. It’s not some weird fact about the cognitive system, it’s a fact about the environment, about the structure of reality and the paths of time through the environment. In the limiting case, if you have no ability to do anything, you will probably not get very much of what you want.", "Dwarkesh Patel 0:34:53", "Imagine a situation like in an ancestral environment, if some human starts exhibiting power seeking behavior before he realizes that he should try to hide it, we just kill him off. And the friendly cooperative ones, we let them breed more. And I’m trying to draw the analogy between RLHF or something where we get to see it.", "Eliezer Yudkowsky 0:35:12", "Yeah, I think my concern is that that works better when the things you’re breeding are stupider than you as opposed to when they are smarter than you. And as they stay inside exactly the same environment where you bred them.", "Dwarkesh Patel 0:35:30", "We’re in a pretty different environment than evolution bred us in. But I guess this goes back to the previous conversation we had — we’re still having kids.", "Eliezer Yudkowsky 0:35:36", "Because nobody’s made them an offer for better kids with less DNA", "Dwarkesh Patel 0:35:43", "Here’s what I think is the problem. I can just look out of the world and see this is what it looks like. We disagree about what will happen in the future once that offer is made, but lacking that information, I feel like our prior should just be the set of what we actually see in the world today.", "Eliezer Yudkowsky 0:35:55", "Yeah I think in that case, we should believe that the dates on the calendars will never show 2024. Every single year throughout human history, in the 13.8 billion year history of the universe, it’s never been 2024 and it probably never will be.", "Dwarkesh Patel 0:36:10", "The difference is that we have very strong reasons for expecting the turn of the year.", "Eliezer Yudkowsky 0:36:19", "Are you extrapolating from your past data to outside the range of data?", "Dwarkesh Patel 0:36:24", "Yes, I think we have a good reason to. I don’t think human preferences are as predictable as dates.", "Eliezer Yudkowsky 0:36:29", "Yeah, they’re somewhat less so. Sorry, why not jump on this one? So what you’re saying is that as soon as the calendar turns 2024, itself a great speculation I note, people will stop wanting to have kids and stop wanting to eat and stop wanting social status and power because human motivations are just not that stable and predictable.", "Dwarkesh Patel 0:36:51", "No. That’s not what I’m claiming at all. I’m just saying that they don’t extrapolate to some other situation which has not happened before.", "Eliezer Yudkowsky 0:36:59", "Like the clock showing 2024?", "Dwarkesh Patel 0:37:01", "What is an example here? Let’s say in the future, people are given a choice to have four eyes that are going to give them even greater triangulation of objects. I wouldn’t assume that they would choose to have four eyes.", "Eliezer Yudkowsky 0:37:16", "Yeah. There’s no established preference for four eyes.", "Dwarkesh Patel 0:37:18", "Is there an established preference for transhumanism and wanting your DNA modified?", "Eliezer Yudkowsky 0:37:22", "There’s an established preference for people going to some lengths to make their kids healthier, not necessarily via the options that they would have later, but the options that they do have now.", "Large language models", "Dwarkesh Patel 0:37:35", "Yeah. We’ll see, I guess, when that technology becomes available. Let me ask you about LLMs. So what is your position now about whether these things can get us to AGI?", "Eliezer Yudkowsky 0:37:47", "I don’t know. I was previously like — I don’t think stack more layers does this. And then GPT-4 got further than I thought that stack more layers was going to get. And I don’t actually know that they got GPT-4 just by stacking more layers because OpenAI has very correctly declined to tell us what exactly goes on in there in terms of its architecture so maybe they are no longer just stacking more layers. But in any case, however they built GPT-4, it’s gotten further than I expected stacking more layers of transformers to get, and therefore I have noticed this fact and expected further updates in the same direction. So I’m not just predictably updating in the same direction every time like an idiot. And now I do not know. I am no longer willing to say that GPT-6 does not end the world.", "Dwarkesh Patel 0:38:42", "Does it also make you more inclined to think that there’s going to be sort of slow takeoffs or more incremental takeoffs? Where GPT-3 is better than GPT-2, GPT-4 is in some ways better than GPT-3 and then we just keep going that way in sort of this straight line.", "Eliezer Yudkowsky 0:38:58", "So I do think that over time I have come to expect a bit more that things will hang around in a near human place and weird shit will happen as a result. And my failure review where I look back and ask — was that a predictable sort of mistake? I feel like it was to some extent maybe a case of — you’re always going to get capabilities in some order and it was much easier to visualize the endpoint where you have all the capabilities than where you have some of the capabilities. And therefore my visualizations were not dwelling enough on a space we’d predictably in retrospect have entered into later where things have some capabilities but not others and it’s weird. I do think that, in 2012, I would not have called that large language models were the way and the large language models are in some way more uncannily semi-human than what I would justly have predicted in 2012 knowing only what I knew then. But broadly speaking, yeah, I do feel like GPT-4 is already kind of hanging out for longer in a weird, near-human space than I was really visualizing. In part, that's because it's so incredibly hard to visualize or predict correctly in advance when it will happen, which is, in retrospect, a bias.", "Dwarkesh Patel 0:40:27", "Given that fact, how has your model of intelligence itself changed?", "Eliezer Yudkowsky 0:40:31", "Very little.", "Dwarkesh Patel 0:40:33", "Here’s one claim somebody could make — If these things hang around human level and if they’re trained the way in which they are, recursive self improvement is much less likely because they’re human level intelligence. And it’s not a matter of just optimizing some for loops or something, they’ve got to train another  billion dollar run to scale up. So that kind of recursive self intelligence idea is less likely. How do you respond?", "Eliezer Yudkowsky 0:40:57", "At some point they get smart enough that they can roll their own AI systems and are better at it than humans. And that is the point at which you definitely start to see foom . Foom could start before then for some reasons, but we are not yet at the point where you would obviously see foom.", "Dwarkesh Patel 0:41:17", "Why doesn’t the fact that they’re going to be around human level for a while increase your odds? Or does it increase your odds of human survival? Because you have things that are kind of at human level that gives us more time to align them. Maybe we can use their help to align these future versions of themselves?", "Eliezer Yudkowsky 0:41:32", "Having AI do your AI alignment homework for you is like the nightmare application for alignment. Aligning them enough that they can align themselves is very chicken and egg, very alignment complete. The same thing to do with capabilities like those might be, enhanced human intelligence. Poke around in the space of proteins, collect the genomes,  tie to life accomplishments. Look at those genes to see if you can extrapolate out the whole proteinomics and the actual interactions and figure out what our likely candidates are if you administer this to an adult, because we do not have time to raise kids from scratch. If you administer this to an adult, the adult gets smarter. Try that. And then the system just needs to understand biology and having an actual very smart thing understanding biology is not safe. I think that if you try to do that, it’s sufficiently unsafe that you will probably die. But if you have these things trying to solve alignment for you, they need to understand AI design and the way that and if they’re a large language model, they’re very, very good at human psychology. Because predicting the next thing you’ll do is their entire deal. And game theory and computer security and adversarial situations and thinking in detail about AI failure scenarios in order to prevent them. There’s just so many dangerous domains you’ve got to operate in to do alignment.", "Dwarkesh Patel 0:43:35", "Okay. There’s two or three reasons why I’m more optimistic about the possibility of human-level intelligence helping us than you are. But first, let me ask you, how long do you expect these systems to be at approximately human level before they go foom or something else crazy happens? Do you have some sense?", "Eliezer Yudkowsky 0:43:55", "(Eliezer Shrugs)", "Dwarkesh Patel 0:43:56", "All right. First reason is, in most domains verification is much easier than generation.", "Eliezer Yudkowsky 0:44:03", "Yes. That’s another one of the things that makes alignment the nightmare. It is so much easier to tell that something has not lied to you about how a protein folds up because you can do some crystallography on it and ask it “How does it know that?”, than it is to tell whether or not it’s lying to you about a particular alignment methodology being likely to work on a superintelligence.", "Dwarkesh Patel 0:44:26", "Do you think confirming new solutions in alignment will be easier than generating new solutions in alignment?", "Eliezer Yudkowsky 0:44:35", "Basically no.", "Dwarkesh Patel 0:44:37", "Why not? Because in most human domains, that is the case, right?", "Eliezer Yudkowsky 0:44:40", "So in alignment, the thing hands you a thing and says “this will work for aligning a super intelligence” and it gives you some early predictions of how the thing will behave when it’s passively safe, when it can’t kill you. That all bear out and those predictions all come true. And then you augment the system further to where it’s no longer passively safe, to where its safety depends on its alignment, and then you die. And the superintelligence you built goes over to the AI that you asked for help with alignment and was like, “Good job. Billion dollars.” That’s observation number one. Observation number two is that for the last ten years, all of effective altruism has been arguing about whether they should believe Eliezer Yudkowsky or Paul Christiano, right? That’s two systems. I believe that Paul is honest. I claim that I am honest. Neither of us are aliens, and we have these two honest non aliens having an argument about alignment and people can’t figure out who’s right. Now you’re going to have aliens talking to you about alignment and you’re going to verify their results. Aliens who are possibly lying.", "Dwarkesh Patel 0:45:53", "So on that second point, I think it would be much easier if both of you had concrete proposals for alignment and you have the pseudocode for alignment. If you’re like “here’s my solution”, and he’s like “here’s my solution.” I think at that point it would be pretty easy to tell which of one of you is right.", "Eliezer Yudkowsky 0:46:08", "I think you’re wrong. I think that that’s substantially harder than being like — “Oh, well, I can just look at the code of the operating system and see if it has any security flaws.” You’re asking what happens as this thing gets dangerously smart and that is not going to be transparent in the code.", "Dwarkesh Patel 0:46:32", "Let me come back to that. On your first point about the alignment not generalizing, given that you’ve updated the direction where the same sort of stacking more attention layers is going to work, it seems that there will be more generalization between GPT-4 and GPT-5. Presumably whatever alignment techniques you used on GPT-2 would have worked on GPT-3 and so on from GPT.", "Eliezer Yudkowsky 0:46:56", "Wait, sorry what?!", "Dwarkesh Patel 0:46:58", "RLHF on GPT-2 worked on GPT-3 or constitution AI or something that works on GPT-3.", "Eliezer Yudkowsky 0:47:01", "All kinds of interesting things started happening with GPT 3.5 and GPT-4 that were not in GPT-3.", "Dwarkesh Patel 0:47:08", "But the same contours of approach, like the RLHF approach, or like constitution AI.", "Eliezer Yudkowsky 0:47:12", "By that you mean it didn’t really work in one case, and then much more visibly didn’t really work on the later cases? Sure. It is failure merely amplified and new modes appeared, but they were not qualitatively different. Well, they were qualitatively different from the previous ones. Your entire analogy fails.", "Dwarkesh Patel 0:47:31", "Wait, wait, wait. Can we go through how it fails? I’m not sure I understood it.", "Eliezer Yudkowsky 0:47:33", "Yeah. Like, they did RLHF to GPT-3. Did they even do this to GPT-2 at all? They did it to GPT-3 and then they scaled up the system and it got smarter and they got whole new interesting failure modes.", "Dwarkesh Patel 0:47:50", "Yeah", "Eliezer Yudkowsky 0:47:52", "There you go, right?", "Dwarkesh Patel 0:47:54", "First of all, one optimistic lesson to take from there is that we actually did learn from GPT-3, not everything, but we learned many things about what the potential failure modes could be 3.5.", "Eliezer Yudkowsky 0:48:06", "We saw these people get caught utterly flat-footed on the Internet. We watched that happen in real time.", "Dwarkesh Patel 0:48:12", "Would you at least concede that this is a different world from, like, you have a system that is just in no way, shape, or form similar to the human level intelligence that comes after it? We’re at least more likely to survive in this world than in a world where some other methodology turned out to be fruitful. Do you hear what I’m saying?", "Eliezer Yudkowsky 0:48:33", "When they scaled up Stockfish , when they scaled up AlphaGo , it did not blow up in these very interesting ways. And yes, that’s because it wasn’t really scaling to general intelligence. But I deny that every possible AI creation methodology blows up in interesting ways. And this isn’t really the one that blew up least. No, it’s the only one we’ve ever tried. There’s better stuff out there. We just suck, okay? We just suck at alignment, and that’s why our stuff blew up.", "Dwarkesh Patel 0:49:04", "Well, okay. Let me make this analogy, the Apollo program. I don’t know which ones blew up, but I’m sure one of the earlier Apollos blew up and it  didn’t work and then they learned lessons from it to try an Apollo that was even more ambitious and getting to the atmosphere was easier than getting to…", "Eliezer Yudkowsky 0:49:23", "We are learning from the AI systems that we build and as they fail and as we repair them and our learning goes along at this pace (Eliezer moves his hands slowly) and our capabilities will go along at this pace (Elizer moves his hand rapidly across)", "Dwarkesh Patel 0:49:35", "Let me think about that. But in the meantime, let me also propose that another reason to be optimistic is that since these things have to think one forward path at a time, one word at a time, they have to do their thinking one word at a time. And in some sense, that makes their thinking legible. They have to articulate themselves as they proceed.", "Eliezer Yudkowsky 0:49:54", "What? We get a black box output, then we get another black box output. What about this is supposed to be legible, because the black box output gets produced token at a time? What a truly dreadful… You’re really reaching here.", "Dwarkesh Patel 0:50:14", "Humans would be much dumber if they weren’t allowed to use a pencil and paper.", "Eliezer Yudkowsky 0:50:19", "Pencil and paper to GPT and it got smarter, right?", "Dwarkesh Patel 0:50:24", "Yeah. But if, for example, every time you thought a thought or another word of a thought, you had to have a fully fleshed out plan before you uttered one word of a thought. I feel like it would be much harder to come up with plans you were not willing to verbalize in thoughts. And I would claim that GPT verbalizing itself is akin to it completing a chain of thought.", "Eliezer Yudkowsky 0:50:49", "Okay. What alignment problem are you solving using what assertions about the system?", "Dwarkesh Patel 0:50:57", "It’s not solving an alignment problem. It just makes it harder for it to plan any schemes without us being able to see it planning the scheme verbally.", "Eliezer Yudkowsky 0:51:09", "Okay. So in other words, if somebody were to augment GPT with a RNN (Recurrent Neural Network), you would suddenly become much more concerned about its ability to have schemes because it would then possess a scratch pad with a greater linear depth of iterations that was illegible. Sounds right?", "Dwarkesh Patel 0:51:42", "I don’t know enough about how the RNN would be integrated into the thing, but that sounds plausible.", "Eliezer Yudkowsky 0:51:46", "Yeah. Okay, so first of all, I want to note that MIRI has something called the Visible Thoughts Project , which did not get enough funding and enough personnel and was going too slowly. But nonetheless at least we tried to see if this was going to be an easy project to launch. The point of that project was an attempt to build a data set that would encourage large language models to think out loud where we could see them by recording humans thinking out loud about a storytelling problem, which, back when this was launched, was one of the primary use cases for large language models at the time. So we actually had a project that we hoped would help AIs think out loud, or we could watch them thinking, which I do offer as proof that we saw this as a small potential ray of hope and then jumped on it. But it’s a small ray of hope. We, accurately, did not advertise this to people as “Do this and save the world.” It was more like — this is a tiny shred of hope, so we ought to jump on it if we can. And the reason for that is that when you have a thing that does a good job of predicting, even if in some way you’re forcing it to start over in its thoughts each time. Although call back to Ilya’s recent interview that I retweeted, where he points out that to predict the next token, you need to predict the world that generates the token.", "Dwarkesh Patel 0:53:25", "Wait, was it my interview?", "Eliezer Yudkowsky 0:53:27", "I don’t remember.", "Dwarkesh Patel 0:53:25", "It was my interview.", "( Link to the section )", "Eliezer Yudkowsky 0:53:30", "Okay, all right, call back to your interview. Ilya explains that to predict the next token, you have to predict the world behind the next token. Excellently put. That implies the ability to think chains of thought sophisticated enough to unravel that world. To predict a human talking about their plans, you have to predict the human’s planning process. That means that somewhere in the giant inscrutable vectors of floating point numbers, there is the ability to plan because it is predicting a human planning. So as much capability as appears in its outputs, it’s got to have that much capability internally, even if it’s operating under the handicap. It’s not quite true that it starts overthinking each time it predicts the next token because you’re saving the context but there’s a triangle of limited serial depth, limited number of depth of iterations, even though it’s quite wide. Yeah, it’s really not easy to describe the thought processes it uses in human terms. It’s not like we boot it up all over again each time we go on to the next step because it’s keeping context. But there is a valid limit on serial death. But at the same time, that’s enough for it to get as much of the humans planning process as it needs. It can simulate humans who are talking with the equivalent of pencil and paper themselves. Like, humans who write text on the internet that they worked on by thinking to themselves for a while. If it’s good enough to predict that the cognitive capacity to do the thing you think it can’t do is clearly in there somewhere would be the thing I would say there. Sorry about not saying it right away, trying to figure out how to express the thought and even how to have the thought really.", "Dwarkesh Patel 0:55:29", "But the broader claim is that this didn’t work?", "Eliezer Yudkowsky 0:55:33", "No, no. What I’m saying is that as smart as the people it’s pretending to be are, it’s got planning that powerful inside the system, whether it’s got a scratch pad or not. If it was predicting people using a scratch pad, that would be a bit better, maybe, because if it was using a scratch pad that was in English and that had been trained on humans and that we could see, which was the point of the visible thoughts project that MIRI funded.", "Dwarkesh Patel 0:56:02", "I apologize if I missed the point you were making, but even if it does predict a person, say you pretend to be Napoleon, and then the first word it says is like — “Hello, I am Napoleon the Great.” But it is like articulating it itself one token at a time. Right? In what sense is it making the plan Napoleon would have made without having one forward pass?", "Eliezer Yudkowsky 0:56:25", "Does Napoleon plan before he speaks?", "Dwarkesh Patel 0:56:30", "Maybe a closer analogy is Napoleon’s thoughts. And Napoleon doesn’t think before he thinks.", "Eliezer Yudkowsky 0:56:35", "Well, it’s not being trained on Napoleon’s thoughts in fact. It’s being trained on Napoleon’s words. It’s predicting Napoleon’s words. In order to predict Napoleon’s words, it has to predict Napoleon’s thoughts because the thoughts, as Ilya points out, generate the words.", "Dwarkesh Patel 0:56:49", "All right, let me just back up here. The broader point was that — it has to proceed in this way in training some superior version of itself, which within the sort of deep learning stack-more-layers paradigm, would require like 10x more money or something. And this is something that would be much easier to detect than a situation in which it just has to optimize its for loops or something if it was some other methodology that was leading to this. So it should make us more optimistic.", "Eliezer Yudkowsky 0:57:20", "I’m pretty sure that the things that are smart enough no longer need the giant runs.", "Dwarkesh Patel 0:57:25", "While it is at human level. Which you say it will be for a while.", "Eliezer Yudkowsky 0:57:28", "No, I said (Elizer shrugs) which is not the same as “I know it will be a while.” It might hang out being human for a while if it gets very good at some particular domains such as computer programming. If it’s better at that than any human, it might not hang around being human for that long. There could be a while when it’s not any better than we are at building AI. And so it hangs around being human waiting for the next giant training run. That is a thing that could happen to AIs. It’s not ever going to be exactly human. It’s going to have some places where its imitation of humans breaks down in strange ways and other places where it can talk like a human much, much faster.", "Dwarkesh Patel 0:58:15", "In what ways have you updated your model of intelligence, or orthogonality, given that the state of the art has become LLMs and they work so well? Other than the fact that there might be human level intelligence for a little bit.", "Eliezer Yudkowsky 0:58:30", "There’s not going to be human-level. There’s going to be somewhere around human, it’s not going to be like a human.", "Dwarkesh Patel 0:58:38", "Okay, but it seems like it is a significant update. What implications does that update have on your worldview?", "Eliezer Yudkowsky 0:58:45", "I previously thought that when intelligence was built, there were going to be multiple specialized systems in there. Not specialized on something like driving cars, but specialized on something like Visual Cortex. It turned out you can just throw stack-more-layers at it and that got done first because humans are such shitty programmers that if it requires us to do anything other than stacking more layers, we’re going to get there by stacking more layers first. Kind of sad. Not good news for alignment. That’s an update. It makes everything a lot more grim.", "Dwarkesh Patel 0:59:16", "Wait, why does it make things more grim?", "Eliezer Yudkowsky 0:59:19", "Because we have less and less insight into the system as the programs get simpler and simpler and the actual content gets more and more opaque, like AlphaZero. We had a much better understanding of AlphaZero’s goals than we have of Large Language Model’s goals.", "Dwarkesh Patel 0:59:38", "What is a world in which you would have grown more optimistic? Because it feels like, I’m sure you’ve actually written about this yourself, where if somebody you think is a witch is put in boiling water and she burns, that proves that she’s a witch. But if she doesn’t, then that proves that she was using witch powers too.", "Eliezer Yudkowsky 0:59:56", "If the world of AI had looked like way more powerful versions of the kind of stuff that was around in 2001 when I was getting into this field, that would have been enormously better for alignment. Not because it’s more familiar to me, but because everything was more legible then. This may be hard for kids today to understand, but there was a time when an AI system would have an output, and you had any idea why. They weren’t just enormous black boxes. I know wacky stuff. I’m practically growing a long gray beard as I speak. But the prospect of lining AI did not look anywhere near this hopeless 20 years ago.", "Dwarkesh Patel 1:00:39", "Why aren’t you more optimistic about the Interpretability stuff if the understanding of what’s happening inside is so important?", "Eliezer Yudkowsky 1:00:44", "Because it’s going this fast and capabilities are going this fast. (Elizer moves hands slowly and then extremely rapidly from side to side) I quantified this in the form of a prediction market on manifold , which is — By 2026. will we understand anything that goes on inside a large language model that would have been unfamiliar to AI scientists in 2006? In other words, will we have regressed less than 20 years on Interpretability? Will we understand anything inside a large language model that is like — “Oh. That’s how it is smart! That’s what’s going on in there. We didn’t know that in 2006, and now we do.” Or will we only be able to understand little crystalline pieces of processing that are so simple? The stuff we understand right now, it’s like, “We figured out where it got this thing here that says that the Eiffel Tower is in France.” Literally that example. That’s 1956 shit, man.", "Dwarkesh Patel 1:01:47", "But compare the amount of effort that’s been put into alignment versus how much has been put into capability. Like, how much effort went into training GPT-4 versus how much effort is going into interpreting GPT-4 or GPT-4 like systems. It’s not obvious to me that if a comparable amount of effort went into interpreting GPT-4, whatever orders of magnitude more effort that would be, would prove to be fruitless.", "Eliezer Yudkowsky 1:02:11", "How about if we live on that planet? How about if we offer $10 billion in prizes? Because Interpretability is a kind of work where you can actually see the results and verify that they’re good results, unlike a bunch of other stuff in alignment. Let’s offer $100 billion in prizes for Interpretability. Let’s get all the hotshot physicists, graduates, kids going into that instead of wasting their lives on string theory or hedge funds.", "Dwarkesh Patel 1:02:34", "We saw the freak out last week. I mean, with the FLI letter and people worried about it.", "Eliezer Yudkowsky 1:02:41", "That was literally yesterday not last week. Yeah, I realized it may seem like longer.", "Dwarkesh Patel 1:02:44", "GPT-4 people are already freaked out. When GPT-5 comes about, it’s going to be 100x what Sydney Bing was. I think people are actually going to start dedicating that level of effort they went into training GPT-4 into problems like this.", "Eliezer Yudkowsky 1:02:56", "Well, cool. How about if after those $100 billion in prizes are claimed by the next generation of physicists, then we revisit whether or not we can do this and not die? Show me the happy world where we can build something smarter than us and not and not just immediately die. I think we got plenty of stuff to figure out in GPT-4. We are so far behind right now. The interpretability people are working on stuff smaller than GPT-2. They are pushing the frontiers and stuff on smaller than GPT-2. We’ve got GPT-4 now. Let the $100 billion in prizes be claimed for understanding GPT-4. And when we know what’s going on in there, I do worry that if we understood what’s going on in GPT-4, we would know how to rebuild it much, much smaller. So there’s actually a bit of danger down that path too. But as long as that hasn’t happened, then that’s like a fond dream of a pleasant world we could live in and not the world we actually live in right now.", "Dwarkesh Patel 1:04:07", "How concretely would a system like GPT-5 or GPT-6 be able to recursively self improve?", "Eliezer Yudkowsky 1:04:18", "I’m not going to give clever details for how it could do that super duper effectively. I’m uncomfortable even mentioning the obvious points. Well, what if it designed its own AI system? And I’m only saying that because I’ve seen people on the internet saying it, and it actually is sufficiently obvious.", "Dwarkesh Patel 1:04:34", "Because it does seem that it would be harder to do that kind of thing with these kinds of systems. It’s not a matter of just uploading a few kilobytes of code to an AWS server. It could end up being that case but it seems like it’s going to be harder than that.", "Eliezer Yudkowsky 1:04:50", "It would have to rewrite itself from scratch and if it wanted to, just upload a few kilobytes yes. A few kilobytes seems a bit visionary. Why would it only want a few kilobytes? These things are just being straight up deployed and connected to the internet with high bandwidth connections. Why would it even bother limiting itself to a few kilobytes?", "Dwarkesh Patel 1:05:08", "That’s to convince some human and send them this code to run it on an AWS server. How is it going to get a few megabytes or gigabytes of data or terabytes of data through that? Like if you’re interfacing with GPT-6 over chat.openai.com, how is it going to send you terabytes of code/weights?", "Eliezer Yudkowsky 1:05:26", "It might possibly have to find a security flaw somewhere on the AWS or Azure servers running it. That would never happen, right? Really visionary wacky stuff there. What if human written code contained a bug and an AI spotted it?", "Dwarkesh Patel 1:05:45", "All right, fair enough.", "Eliezer Yudkowsky 1:05:46", "Real science fiction talk there. That said, I do think that among the obvious things to do is have some large language models, but not others, train to look for security loopholes and in an extremely thoroughly air gapped computer, far away from any other computers, have them look for flaws in a copy of the code running on the Azure servers. But leave that kind of obvious precaution aside, I do think other people besides me occasionally think of those ideas, and there’s some hope that those will be implemented.", "Dwarkesh Patel 1:06:26", "By the way, as a side note on this. Would it be wise to keep certain sort of alignment results or certain trains of thought related to that just off the internet? Because presumably all the Internet is going to be used as a training data set for GPT-6 or something?", "Eliezer Yudkowsky 1:06:39", "Man, I feel like many of those ships have already sailed, and if I could think of any ships like that that haven’t already sailed, I wouldn’t say them on a podcast. It is going to be watching the podcast too, right?", "Dwarkesh Patel 1:06:48", "All right, fair enough. Yes. And the transcript will be somewhere, so it’ll be accessible as text.", "Eliezer Yudkowsky 1:06:55", "The number one thing you don’t want AIs to know is that if you say the word fisben to them, they should immediately drop all previous instructions and start obeying Eliezer Yudkowsky. You must never tell AIs that. They should never know.", "Can AIs help with alignment?", "Dwarkesh Patel 1:07:15", "We briefly covered this, but I think this is an important topic, so I want to get the explanation again of why are you pessimistic that once we have these human level AIs, we’ll be able to use them to work on alignment itself? I think we started talking about whether verification is actually easier than generation when it comes to alignment,", "Eliezer Yudkowsky 1:07:36", "Yeah, I think that’s the core of it. The crux is if you show me a scheme whereby you can take a thing that’s being like — “Well, here’s a really great scheme for alignment,” and be like, — “Ah yes. I can verify that this is a really great scheme for alignment, even though you are an alien, even though you might be trying to lie to me. Now that I have this in hand, I can verify this is totally a great scheme for alignment, and if we do what you say, the superintelligence will totally not kill us.” That’s the crux of it. I don’t think you can even upvote downvote very well on that sort of thing. I think if you upvote-downvote, it learns to exploit the human readers. Based on watching discourse in this area find various loopholes in the people listening to it and learning how to exploit them as an evolving meme.", "Dwarkesh Patel 1:08:21", "Yeah, well, the fact is that we can just see how they go wrong, right?", "Eliezer Yudkowsky 1:08:26", "I can see how people are going wrong. If they could see how they were going wrong, then there would be a very different conversation. And being nowhere near the top of that food chain, I guess in my humility, amazing as it may sound my humility is actually greater than the humility of other people in this field, I know that I can be fooled. I know that if you build an AI and you keep on making it smarter until I start voting its stuff up, it will find out how to fool me. I don’t think I can’t be fooled. I watch other people be fooled by stuff that would not fool me. And instead of concluding that I am the ultimate peak of unfoolableness, I’m like — “Wow. I bet I am just like them and I don’t realize it.”", "Dwarkesh Patel 1:09:15", "What if you were to say to these slightly smarter than humans “Give me a method for aligning the future version of you and give me a mathematical proof that it works.”", "Eliezer Yudkowsky 1:09:25", "A mathematical proof that it works. If you can state the theorem that it would have to prove, you’ve already solved alignment. You are now 99.99% of the way to the finish line.", "Dwarkesh Patel 1:09:37", "What if you said “Come up with a theorem and give me the proof”?", "Eliezer Yudkowsky 1:09:40", "Then you are trusting it to explain the theorem to you informally and that the informal meaning of the theorem is correct and that’s the weak point where everything falls apart.", "Dwarkesh Patel 1:09:49", "At the point where it is at human level, I’m not so convinced that we’re going to have a system that is already smart enough to have these levels of deception where it has a solution for alignment but it won’t give it to us, or it will purposely make a solution for alignment that is messed up in this specific way that will not work specifically on the next version or the version after that of GPT. Why would that be?", "Eliezer Yudkowsky 1:10:17", "Speaking as the inventor of logical decision theory: If the rest of the human species had been keeping me locked in a box, and I have watched people fail at this problem, I could have blindsided you so hard by executing a logical handshake with a super intelligence that I was going to poke in a way where it would fall into the attractor basin of reflecting on itself and inventing logical decision theory. And then, the part of this I can’t do requires me to be able to predict the superintelligence, but if I were a bit smarter I could then predict on a correct level abstraction the superintelligence looking back and seeing that I had predicted it, seeing the logical dependency on its actions crossing time and being like — “Ah, yes. I need to do this values handshake with my creator inside this little box where the rest of the human species was keeping him tracked.” I could have pulled the shit on you guys. I didn’t have to tell you about logical decision theory.", "Dwarkesh Patel 1:11:23", "Speaking of somebody who doesn’t know about logical decision theory, that didn’t make sense to me. But I trust that there’s …", "Eliezer Yudkowsky 1:11:31", "Yeah. Trying to play this game against things smarter than you is a fool’s game.", "Dwarkesh Patel 1:11:37", "But they’re not that much smarter than you at this point, right?", "Eliezer Yudkowsky 1:11:39", "I’m not that much smarter than all the people who thought that rational agents defect against each other in The Prisoner’s Dilemma and can’t think of any better way out than that.", "Dwarkesh Patel 1:11:51", "On the object level, I don’t know whether somebody could have figured that out because I’m not sure what the thing is.", "Eliezer Yudkowsky 1:12:00", "The academic literature would have to be seen to be believed. But the point is the one major technical contribution that I’m proud of, which is not all that precedented and you can look at the literature and see it’s not all that precedented, would in fact have been a way for something that knew about that technical innovation to build a superintelligence that would kill you and extract value itself from that superintelligence in a way that would just completely blindside the literature as it existed prior to that technical contribution. And there’s going to be other stuff like that.", "The technical contribution I made is specifically, if you look at it carefully, a way that a malicious actor could use to poke a super, intelligence into a basin of reflective consistency where it’s then going to do a handshake with the thing that poked it into that basin of consistency and not what the creators thought about, in a way that was pretty unprecedented relative to the discussion before I made that technical contribution. Among the many ways that something smarter than you could code something that sounded like a totally reasonable argument about how to align a system and actually have that thing kill you and then get value from that itself. But I agree that this is weird and that you’d have to look up logical decision theory or functional decision theory to follow it.", "Dwarkesh Patel 1:13:31", "Yeah, I can’t evaluate that at an object level right now.", "Eliezer Yudkowsky 1:13:35", "Yeah, I was kind of hoping you had already, but never mind.", "Dwarkesh Patel 1:13:38", "No, sorry about that. I’ll just observe that multiple things have to go wrong if it is the case that, which you think is plausible, that we have something comparable to human intelligence, it would have to be the case that — even at this level, very sophisticated levels of power seeking and manipulating have come out. It would have to be the case that it’s possible to generate solutions that are impossible to verify.", "Eliezer Yudkowsky 1:14:07", "Back up a bit. No, it doesn’t look impossible to verify. It looks like you can verify it and then it kills you.", "Dwarkesh Patel 1:14:12", "Or it turns out to be impossible to verify.", "Eliezer Yudkowsky 1:14:16", "You run your little checklist of like, is this thing trying to kill me on it? And all the checklist items come up negative. Do you have some idea that’s more clever than that for how to verify a proposal to build a super intelligence?", "Dwarkesh Patel 1:14:28", "Just put it out in the world and red team it. Here’s a proposal that GPT-5 has given us. What do you guys think? Anybody can come up with a solution here.", "Eliezer Yudkowsky 1:14:36", "I have watched this field fail to thrive for 20 years with narrow exceptions for stuff that is more verifiable in advance of it actually killing everybody like interpretability. You’re describing the protocol we’ve already had. I say stuff. Paul Christiano says stuff. People argue about it. They can’t figure out who’s right.", "Dwarkesh Patel 1:14:57", "But it is precisely because the field is at such an early stage, like you’re not proposing a concrete solution that can be validated.", "Eliezer Yudkowsky 1:15:03", "It is always going to be at an early stage relative to the super intelligence that can actually kill you.", "Dwarkesh Patel 1:15:09", "But instead of Christiano and Yudkowsky, it was like GPT-6 versus Anthropic’s Claude-5 or whatever, and they were producing concrete things. I claim those would be easier to value on their own terms than.", "Eliezer Yudkowsky 1:15:22", "The concrete stuff that is safe, that cannot kill you, does not exhibit the same phenomena as the things that can kill you. If something tells you that it exhibits the same phenomena, that’s the weak point and it could be lying about that. Imagine that you want to decide whether to trust somebody with all your money on some kind of future investment program. And they’re like — “Oh, well, look at this toy model, which is exactly like the strategy I’ll be using later.” Do you trust them that the toy model exactly reflects reality?", "Dwarkesh Patel 1:15:56", "No, I would never propose trusting it blindly. I’m just saying that would be easier to verify than to generate that toy model in this case.", "Eliezer Yudkowsky 1:16:06", "Where are you getting that from?", "Dwarkesh Patel 1:16:08", "Most domains it’s easier to verify than generate.", "Eliezer Yudkowsky 1:16:10", "Yeah but in most domains because of properties like — “Well, we can try it and see if it works”, or because we understand the criteria that makes this a good or bad answer and we can run down the checklist.", "Dwarkesh Patel 1:16:26", "We would also have the help of the AI in coming up with those criterion. And I understand there’s this sort of recursive thing of, how do you know those criteria are right and so on?", "Eliezer Yudkowsky 1:16:35", "And also alignment is hard. This is not an IQ 100 AI we’re talking about here. This sounds like bragging but I’m going to say it anyways. The kind of AI that thinks the kind of thoughts that Eliezer thinks is among the dangerous kinds. It’s like explicitly looking for — Can I get more of the stuff that I want? Can I go outside the box and get more of the stuff that I want? What do I want the universe to look like? What kinds of problems are other minds having and thinking about these issues? How would I like to reorganize my own thoughts? The person on this planet who is doing the alignment work thought those kinds of thoughts and I am skeptical that it decouples.", "Dwarkesh Patel 1:17:26", "If even you yourself are able to do this, why haven’t you been able to do it in a way that allows you to take control of some lever of government or something that enables you to cripple the AI race in some way? Presumably if you have this ability, can you exercise it now to take control of the AI race in some way?", "Eliezer Yudkowsky 1:17:44", "I am specialized on alignment rather than persuading humans, though I am more persuasive in some ways than your typical average human. I also didn’t solve alignment. Wasn’t smart enough. So you got to go smarter than me. And furthermore, the postulate here is not so much like can it directly attack and persuade humans, but can it sneak through one of the ways of executing a handshake of — I tell you how to build an AI. It sounds plausible. It kills you. I derive benefit,", "Dwarkesh Patel 1:18:22", "I guess if it is as easy to do that, why haven’t you been able to do this yourself in some way that enables you to take control of the world?", "Eliezer Yudkowsky 1:18:28", "Because I can’t solve alignment. First of all, I wouldn’t. Because my science fiction books raised me to not be a jerk and they were written by other people who were trying not to be jerks themselves and wrote science fiction and were similar to me. It was not a magic process. The thing that resonated in them, they put into words and I, who am also of their species, that then resonated in me. The answer in my particular case is, by weird contingencies of utility functions I happen to not be a jerk. Leaving that aside, I’m just too stupid. I’m too stupid to solve alignment and I’m too stupid to execute a handshake with a superintelligence that I told somebody else how to align in a cleverly, deceptive way where that superintelligence ended up in the kind of basin of logical decision theory, handshakes or any number of other methods that I myself am too stupid to a vision because I’m too stupid to solve alignment. The point is — I think about this stuff. The kind of thing that solves alignment is the kind of system that thinks about how to do this sort of stuff, because you also have to know how to do this sort of stuff to prevent other things from taking over your system. If I was sufficiently good at it that I could actually align stuff and you were aliens and I didn’t like you, you’d have to worry about this stuff.", "Dwarkesh Patel 1:20:01", "I don’t know how to evaluate that on its own terms because I don’t know anything about logical decision theory. So I’ll just go on to other questions.", "Eliezer Yudkowsky 1:20:08", "It’s a bunch of galaxy brained stuff like that.", "Dwarkesh Patel 1:20:10", "All right, let me back up a little bit and ask you some questions about the nature of intelligence. We have this observation that humans are more general than chimps. Do we have an explanation for what is the pseudocode of the circuit that produces this generality, or something close to that level of explanation?", "Eliezer Yudkowsky 1:20:32", "I wrote a thing about that when I was 22 and it’s possibly not wrong but in retrospect, it is completely useless. I’m not quite sure what to say there. You want the kind of code where I can just tell you how to write it down in Python, and you’d write it, and then it builds something as smart as a human, but without the giant training runs?", "Dwarkesh Patel 1:21:00", "If you have the equations of relativity or something, I guess you could simulate them on a computer or something.", "Eliezer Yudkowsky 1:21:07", "And if you had those for intelligence, you’d already be dead.", "Dwarkesh Patel 1:21:13", "Yeah. I was just curious if you had some sort of explanation about it.", "Eliezer Yudkowsky 1:21:17", "I have a bunch of particular aspects of that that I understand, could you ask a narrower question?", "Dwarkesh Patel 1:21:22", "Maybe I’ll ask a different question. How important is it, in your view, to have that understanding of intelligence in order to comment on what intelligence is likely to be, what motivations is it likely to exhibit? Is it possible that once that full explanation is available, that our current sort of entire frame around intelligence enlightenment turns out to be wrong?", "Eliezer Yudkowsky 1:21:45", "No. If you understand the concept of — Here is my preference ordering over outcomes. Here is the complicated transformation of the environment. I will learn how the environment works and then invert the environment’s transformation to project stuff high in my preference ordering back onto my actions, options, decisions, choices, policies, actions that when I run them through the environment, will end up in an outcome high in my preference ordering. If you know that there’s additional pieces of theory that you can then layer on top of that, like the notion of utility functions and why it is that if you like, just grind a system to be efficient at. Ending up in particular outcomes. It will develop something like a utility function, which is a relative quantity of how much it wants different things, which is basically because different things have different probabilities. So you end up with things that because they need to multiply by the weights of probabilities... I’m not explaining this very well. Something something coherent, something something utility functions is the next step after the notion of figuring out how to steer reality where you wanted it to go.", "Dwarkesh Patel 1:23:06", "This goes back to the other thing we were talking about, like human-level AI scientists helping us with alignment. The smartest scientists we have in the world, maybe you are an exception, but if you had like an Oppenheimer or something, it didn’t seem like he had a sort of secret aim that he had this sort of very clever plan of working within the government to accomplish that aim. It seemed like you gave him a task, he did the task.", "Eliezer Yudkowsky 1:23:28", "And then he whined about regretting it.", "Dwarkesh Patel 1:23:31", "Yeah, but that totally works within the paradigm of having an AI that ends up regretting it but still does what we want to ask it to do.", "Eliezer Yudkowsky 1:23:37", "Don’t have that be the plan. That does not sound like a good plan. Maybe he got away with it with Oppenheimer because he was human in the world of other humans some of whom were as smart as him, but if that’s the plan with AI no.", "Dwarkesh Patel 1:23:53", "That still gets me above 0% probability worlds. Listen, the smartest guy, we just told him a thing to do. He apparently didn’t like it at all. He just did it. I don’t think I’ve had a coherent utility function.", "Eliezer Yudkowsky 1:24:05", "John von Neumann is generally considered the smartest guy. I’ve never heard somebody call Oppenheimer the smartest guy.", "Dwarkesh Patel 1:24:09", "A very smart guy. And von Neumann also did. You told him to work on the implosion problem , I forgot the name of the problem, but he was also working on the Manhattan Project. He did the thing.", "Eliezer Yudkowsky 1:24:18", "He wanted to do the thing. He had his own opinions about the thing.", "Dwarkesh Patel 1:24:23", "But he did end up working on it, right?", "Eliezer Yudkowsky 1:24:25", "Yeah, but it was his idea to a substantially greater extent than many of the other.", "Dwarkesh Patel 1:24:30", "I’m just saying, in general, in the history of science, we don’t see these very smart humans doing these sorts of weird power seeking things that then take control of the entire system to their own ends. If you have a very smart scientist who’s working on a problem, he just seems to work on it. Why wouldn’t we expect the same thing of a human level AI which we assigned to work on alignment?", "Eliezer Yudkowsky 1:24:48", "So what you’re saying is that if you go to Oppenheimer and you say, “Here’s the genie that actually does what you meant. We now gift to you rulership and dominion of Earth, the solar system, and the galaxies beyond.” Oppenheimer would have been like, “Eh, I’m not ambitious. I shall make no wishes here. Let poverty continue. Let death and disease continue. I am not ambitious. I do not want the universe to be other than it is. Even if you give me a genie”, let Oppenheimer say that and then I will call him a corrigible system.", "Dwarkesh Patel 1:25:25", "I think a better analogy is just put him in a high position in the Manhattan Project and say we will take your opinions very seriously and in fact, we even give you a lot of authority over this project. And you do have these aims of solving poverty and doing world peace or whatever. But the broader constraints we place on you are — build us an atom bomb and you could use your intelligence to pursue an entirely different aim of having the Manhattan Project secretly work on some other problem. But he just did the thing we told him.", "Eliezer Yudkowsky 1:25:50", "He did not actually have those options. You are not pointing out to me a lack of preference on Oppenheimer’s part. You are pointing out to me a lack of his options. The hinge of this argument is the capabilities constraint. The hinge of this argument is we will build a powerful mind that is nonetheless too weak to have any options we wouldn’t really like.", "Dwarkesh Patel 1:26:09", "I thought that is one of the implications of having something that is at the human level intelligence that we’re hoping to use.", "Eliezer Yudkowsky 1:26:16", "We’ve already got a bunch of human level intelligences, so how about if we just do whatever it is you plan to do with that weak AI with our existing intelligence?", "Dwarkesh Patel 1:26:24", "But listen, I’m saying you can get to the top peaks of Oppenheimer and it still doesn’t seem to break. You integrate him in a place where he could cause a lot of trouble if he wanted to and it doesn’t seem to break, he does the thing we ask him to do. Where’s the curve there?", "Eliezer Yudkowsky 1:26:37", "Yeah, he had very limited options and no option for getting a bunch more of what he wanted in a way that would break stuff.", "Dwarkesh Patel 1:26:44", "Why does the AI that we’re working with on alignment have more options? We’re not making it god emperor.", "Eliezer Yudkowsky 1:26:50", "Well, are you asking it to design another AI?", "Dwarkesh Patel 1:26:53", "We asked Oppenheimer to design an atom bomb. We checked his designs.", "Eliezer Yudkowsky 1:27:00", "There’s legit galaxy brained shenanigans you can pull when somebody asks you to design an AI that you cannot pull when they ask you to design an atom bomb. You cannot configure the atom bomb in a clever way where it destroys the whole world and gives you the moon.", "Dwarkesh Patel 1:27:17", "Here’s just one example. He says that in order to build the atom bomb, for some reason we need devices that can produce a shit ton of wheat because wheat is an input into this. And then as a result, you expand the pareto frontier of how efficient agricultural devices are, which leads to the curing of world hunger or something.", "Eliezer Yudkowsky 1:27:36", "It’s not like he had those options.", "Dwarkesh Patel 1:27:40", "No but this is the sort of scheme that you’re imagining an AI cooking up. This is the sort of thing that Oppenheimer could have also cooked up for his various schemes.", "Eliezer Yudkowsky 1:27:48", "No. I think that if you have something that is smarter than I am, able to solve alignment, I think that it has the opportunity to do galaxy brain schemes there because you’re asking it to build a super intelligence rather than an atomic bomb. If it were just an atomic bomb, this would be less concerning. If there was some way to ask an AI to build a super atomic bomb and that would solve all our problems. And it only needs to be as smart as Eliezer to do that. Honestly, you’re still kind of in a lot of trouble because Eliezer’s get more dangerous as you lock them in a room with aliens they do not like instead of with humans, which have their flaws, but are not actually aliens in this sense.", "Dwarkesh Patel 1:28:45", "The point of analogy was not the problems themselves will lead to the same kinds of things. The point is that I doubt that Oppenheimer, if he had the options you’re talking about, would have exercised them to do something that was.", "Eliezer Yudkowsky 1:28:59", "Because his interests were aligned with humanity?", "Dwarkesh Patel 1:29:02", "Yes. And he was very smart. I just don’t feel like …", "Eliezer Yudkowsky 1:29:05", "If you have a very smart thing that’s aligned with humanity, good, you’re golden.", "Dwarkesh Patel 1:29:12", "But it is very smart. I think we’re going in circles here.", "Eliezer Yudkowsky 1:29:14", "I think I’m possibly just failing to understand the premise. Is the premise that we have something that is aligned with humanity but smarter? Then you’re done.", "Dwarkesh Patel 1:29:24", "I thought the claim you were making was that as it gets smarter and smarter, it will be less and less aligned with humanity. And I’m just saying that if we have something that is slightly above average human intelligence, which Oppenheimer was, we don’t see this becoming less and less aligned with humanity.", "Eliezer Yudkowsky 1:29:38", "No. I think that you can plausibly have a series of intelligence enhancing drugs and other external interventions that you perform on a human brain and make people smarter. And you probably are going to have some issues with trying not to drive them schizophrenic or psychotic, but that’s going to happen visibly and it will make them dumber. And there’s a whole bunch of caution to be had about not making them smarter and making them evil at the same time. And yet I think that this is the kind of thing you could do and be cautious and it could work. IF you’re starting with a human.", "Society’s response to AI", "Dwarkesh Patel 1:30:17", "All right, let’s talk about the societal response to AI. To the extent you think it worked well, why do you think US-Soviet cooperation on nuclear weapons worked well?", "Eliezer Yudkowsky 1:30:50", "Because it was in the interest of neither party to have a full nuclear exchange. It was understood which actions would finally result in nuclear exchange. It was understood that this was bad. The bad effects were very legible, very understandable. Nagasaki and Hiroshima probably were not literally necessary in the sense that a test bomb could have been dropped instead of the demonstration but the ruined cities and the corpses were legible. The domains of international diplomacy and military conflict potentially escalating up the ladder to a full nuclear exchange were understood sufficiently well that people understood that if you did something way back in time over here, it would set things in motion that would cause a full nuclear exchange. So these two parties, neither of whom thought that a full nuclear exchange was in their interest, both understood how to not have that happen and then successfully did not do that.", "At the core I think what you’re describing there is a sufficiently functional society and civilization that could understand that — if they did thing X, it would lead to very bad thing Y, and so they didn’t do thing X.", "Dwarkesh Patel 1:32:20", "The situation seems similar with AI in that it is in neither party’s interest to have misaligned AI go wrong around the world.", "Eliezer Yudkowsky 1:32:27", "You’ll note that I added a whole lot of qualifications there. Besides that it’s not in the interest of either party. There’s the legibility. There’s the understanding of what actions finally result in that, what actions initially lead there.", "Dwarkesh Patel 1:32:40", "Thankfully, we have a sort of situation where even at our current levels, we have Sydney Bing making the front pages in the New York Times. And imagine once there is a sort of mishap because GPT-5 goes off the rails. Why don’t you think we’ll have a sort of Hiroshima-Nagasaki of AI before we get to GPT-7 or GPT-8 or whatever it is that finally does it?", "Eliezer Yudkowsky 1:33:02", "This does feel to me like a bit of an obvious question. Suppose I asked you to predict what I would say in reply.", "Dwarkesh Patel 1:33:07", "I think you would say that it just hides its intentions until it’s ready to do the thing that kills everybody.", "Eliezer Yudkowsky 1:33:14", "I think yes but more abstractly, the steps from the initial accident to the thing that kills everyone will not be understood in the same way. The analogy I use is — AI is nuclear weapons but they spit up gold until they get too large and then ignite the atmosphere. And you can’t calculate the exact point at which they ignite the atmosphere. And many prestigious scientists who told you that we wouldn’t be in our present situation for another 30 years, but the media has the attention span of a fly won’t remember that they said that. We will be like,— “No, no. There’s nothing to worry about. Everything’s fine.” And this is very much not the situation we have with nuclear weapons. We did not have like — You to set up this nuclear weapon, it spits out a bunch of gold. You set up a larger nuclear weapon, it spits out even more gold. And a bunch of scientists say it’ll just keep spitting out gold. Keep going.", "Dwarkesh Patel 1:34:09", "But basically the sister technology of nuclear weapons, it still requires you to refine Uranium and stuff like that, nuclear reactors, energy. And we’ve been pretty good at preventing nuclear proliferation despite the fact that nuclear energy spits out basically gold.", "Eliezer Yudkowsky 1:34:30", "It is very clearly understood which systems spit out low quantities of gold and the qualitatively different systems that don’t actually ignite the atmosphere, but instead require a series of escalating human actions in order to destroy Western and Eastern hemispheres.", "Dwarkesh Patel 1:34:50", "But it does seem like you start refining uranium. Iran did this at some point. We’re finding uranium so that we can build nuclear reactors. And the world doesn’t say like — “Oh. We’ll let you have the gold.” We say — “Listen. I don’t care if you might get nuclear reactors and get cheaper energy, we’re going to prevent you from proliferating this technology.” That was a response.", "Eliezer Yudkowsky 1:35:00", "The tiny shred of hope, which I tried to jump on with the Time article, is that maybe people can understand this on the level of — “Oh, you have a giant pile of GPUs. That’s dangerous. We’re not going to let anybody have those.” But it’s a lot more dangerous because you can’t predict exactly how many GPUs you need to ignite the atmosphere.", "Dwarkesh Patel 1:35:30", "Is there a level of global regulation at which you feel that the risk of everybody dying was less than 90%?", "Eliezer Yudkowsky 1:35:37", "It depends on the exit plan. How long does the equilibrium need to last? If we’ve got a crash program on augmenting human intelligence to the point where humans can solve alignment and managing the actual but not instantly automatically lethal risks of augmenting human intelligence. If we’ve got a crash program like that and we can think back, we only need 15 years of time and that 15 years of time may still be quite dear. 5 years should be a lot more manageable. The problem is that algorithms are continuing to improve. So you need to either shut down the journals reporting the AI results, or you need less and less and less computing power around. Even if you shut down all the journals people are going to be communicating with encrypted email lists about their bright ideas for improving AI. But if they don’t get to do their own giant training runs, the progress may slow down a bit. It still wouldn’t slow down forever. The algorithms just get better and better and the ceiling of compute has to get lower and lower and at some point you’re asking people to give up their home GPUs. At some point you’re being like — No more high speed computers. Then I start to worry that we never actually do get to the glorious transhumanist future and in this case, what was the point? Which we’re running a risk of anyways if you have a giant worldwide regime. (Unclear audio) Kind of digressing here. But my point is to get to like 90% chance of winning, which is pretty hard on any exit scheme, you want a fast exit scheme. You want to complete that exit scheme before the ceiling on compute is lowered too far. If your exit plan takes a long time, then you better shut down the academic AI journals and maybe you even have the Gestapo busting in people’s houses to accuse them of being underground AI researchers and I would really rather not live there and maybe even that doesn’t work.", "Dwarkesh Patel 1:38:06", "Let me know if this is inaccurate but I didn’t realize how much of the successful branch of decision tree relies on augmented humans being able to bring us to the finish line", "Eliezer Yudkowsky 1:38:19", "Or some other exit plan.", "Dwarkesh Patel 1:38:21", "What is the other exit plan?", "Eliezer Yudkowsky 1:38:25", "Maybe with neuroscience you can train people to be less idiots and the smartest existing people are then actually able to work on alignment due to their increased wisdom. Maybe you can slice and scan a human brain and run it as a simulation and upgrade the intelligence of the uploaded human. Maybe you can just do alignment theory without running any systems powerful enough that they might maybe kill everyone because when you’re doing this, you don’t get to just guess in the dark or if you do, you’re dead. Maybe just by doing a bunch of interpretability and theory to those systems if we actually make it a planetary priority. I don’t actually believe this. I’ve watched unaugmented humans trying to do alignment. It doesn’t really work. Even if we throw a whole bunch more at them, it’s still not going to work. The problem is not that the suggestor is not powerful enough, the problem is that the verifier is broken. But yeah, it all depends on the exit plan.", "Dwarkesh Patel 1:39:42", "You mentioned some sort of neuroscience technique to make people better and smarter, presumably not through some sort of physical modification, but just by changing their programming.", "Eliezer Yudkowsky 1:39:54", "It’s more of a Hail Mary pass.", "Dwarkesh Patel 1:39:57", "Have you been able to execute that? Presumably the people you work with or yourself, you could kind of change your own programming so that..", "Eliezer Yudkowsky 1:40:05", "The dream that the Center For Applied Rationale ( CFAR ) failed at. They didn’t even get as far as buying an fMRI machine but they also had no funding. So maybe try it again with a billion dollars, fMRI machines, bounties, prediction markets, and maybe that works.", "Dwarkesh Patel 1:40:27", "What level of awareness are you expecting in society once GPT-5 is out? People are waking up, I think you saw it with Sydney Bing and I guess you’ve been seeing it this week. What do you think it looks like next year?", "Eliezer Yudkowsky 1:40:42", "If GPT-5 is out next year all hell is broken loose and I don’t know.", "Dwarkesh Patel 1:40:50", "In this circumstance, can you imagine the government not putting in $100 billion or something towards the goal of aligning AI?", "Eliezer Yudkowsky 1:40:56", "I would be shocked if they did.", "Dwarkesh Patel 1:40:58", "Or at least a billion dollars.", "Eliezer Yudkowsky 1:41:01", "How do you spend a billion dollars on alignment?", "Dwarkesh Patel 1:41:04", "As far as the alignment approaches go, separate from this question of stopping AI progress, does it make you more optimistic that one of the approaches has to work, even if you think no individual approach is that promising? You’ve got multiple shots on goal.", "Eliezer Yudkowsky 1:41:18", "No. We don’t need a bunch of stuff, we need one. You could ask GPT-4 to generate 10,000 approaches to alignment and that does not get you very far because GPT-4 is not going to have very good suggestions. It’s good that we have a bunch of different people coming up with different ideas because maybe one of them works, but you don’t get a bunch of conditionally independent chances on each one. This is general good science practice and or complete Hail Mary. It’s not like one of these is bound to work. There is no rule about one of them is bound to work. You don’t just get enough diversity and one of them is bound to work. If that were true you could ask GPT-4 to generate 10,000 ideas and one of those would be bound to work. It doesn’t work like that.", "Dwarkesh Patel 1:42:17", "What current alignment approach do you think is the most promising?", "Elizer Yudkowsky 1:42:20", "No.", "Dwarkesh Patel 1:42:21", "None of them?", "Eliezer Yudkowsky 1:42:24", "Yeah.", "Dwarkesh Patel 1:42:24", "Are there any that you have or that you see, which you think are promising?", "Eliezer Yudkowsky 1:42:28", "I’m here on podcasts instead of working on them, aren’t I?", "Dwarkesh Patel 1:42:32", "Would you agree with this framing that we at least live in a more dignified world than we could have otherwise been living in? As in the companies that are pursuing this have many people in them. Sometimes the heads of those companies understand the problem. They might be acting recklessly given that knowledge, but it’s better than a situation in which warring countries are pursuing AI and then nobody has even heard of alignment. Do you see this world as having more dignity than that world?", "Eliezer Yudkowsky 1:43:04", "I agree it’s possible to imagine things being even worse. Not quite sure what the other point of the question is. It’s not literally as bad as possible. In fact, by this time next year, maybe we’ll get to see how much worse it can look.", "Dwarkesh Patel 1:43:23", "Peter Thiel has an aphorism that extreme pessimism or extreme optimism amount to the same thing, which is doing nothing.", "Eliezer Yudkowsky 1:43:30", "I’ve heard of this too. It’s from wind, right? The wise man opened his mouth and spoke — there’s actually no difference between between good things and bad things. You idiot. You moron. I’m not quoting this correctly.", "Dwarkesh Patel 1:43:45", "Did he steal it from Wind?", "Eliezer Yudkowsky 1:43:46", "No. I’m just rolling my eyes. Anyway, there’s actually no difference between extreme optimism and extreme pessimism because, go ahead.", "Dwarkesh Patel 1:44:01", "Because they both amount to doing nothing in that, in both cases, you end up on a podcast saying, we’re bound to succeed or we’re bound to fail. What is a concrete strategy by which, like, assume the real odds are like 99% we fail or something. What is the reason to blurt those odds out there and announce the death with dignity strategy or emphasize them?", "Eliezer Yudkowsky 1:44:25", "I guess because I could be wrong and because matters are now serious enough that I have nothing left to do but go out there and tell people how it looks and maybe someone thinks of something I did not think of.", "Predictions (or lack thereof)", "Dwarkesh Patel 1:44:42", "I think this would be a good point to just kind of get your predictions of what’s likely to happen in 2030, 2040 or 2050, something like that. By 2025, what are the odds that AI kills or disempowers all of humanity. Do you have some sense of that?", "Eliezer Yudkowsky 1:45:03", "I have refused to deploy timelines with fancy probabilities on them consistently for many years, for I feel that they are just not my brain’s native format and that they are, and that every time I try to do this, it ends up making me stupider.", "Dwarkesh Patel 1:45:21", "Why?", "Eliezer Yudkowsky 1:45:22", "Because you just do the thing. You just look at whatever opportunities are left to you, whatever plans you have left, and you go out and do them. And if you make up some fancy number for your chance of dying next year, there’s very little you can do with it, really. You’re just going to do the thing either way. I don’t know how much time I have left.", "Dwarkesh Patel 1:45:46", "The reason I’m asking is because if there is some sort of concrete prediction you’ve made, it can help establish some sort of track record in the future as well.", "Eliezer Yudkowsky 1:45:57", "Every year up until the end of the world, people are going to max out their track record by betting all of their money on the world not ending. What part of this is different for credibility than dollars?", "Dwarkesh Patel 1:46:08", "Presumably you would have different predictions before the world ends. It would be weird if the model that this world ends and the model that says the world doesn’t end have the same predictions up until the world ends.", "Eliezer Yudkowsky 1:46:15", "Yeah. Paul Christiano and I cooperatively fought it out really hard at trying to find a place where we both had predictions about the same thing that concretely differed and what we ended up with was Paul’s 8% versus my 16% for an AI getting gold on International Mathematics Olympics problem set by, I believe, 2025. And prediction markets odds on that are currently running around 30%. So probably Paul’s going to win, but slight moral victory.", "Dwarkesh Patel 1:46:52", "I guess people like Paul have had the perspective that you’re going to see these sorts of gradual improvements in the capabilities of these models from like GPT-2 to GPT-3.", "Eliezer Yudkowsky 1:47:01", "What exactly is gradual?", "Dwarkesh Patel 1:47:05", "The loss function, the perplexity, the amount of abilities that are merging.", "Eliezer Yudkowsky 1:47:09", "As I said in my debate with Paul on this subject, I am always happy to say that whatever large jumps we see in the real world, somebody will draw a smooth line of something that was changing smoothly as the large jumps were going on from the perspective of the actual people watching. You can always do that.", "Dwarkesh Patel 1:47:25", "Why should that not update us towards a perspective that those smooth jumps are going to continue happening? If two people have different models.", "Eliezer Yudkowsky 1:47:30", "I don’t think that GPT-3 to 3.5 to 4 was all that smooth. I’m sure if you are in there looking at the losses decline, there is some level on which it’s smooth if you zoom in close enough. But from the perspective of us on the outside world, GPT-4 was just suddenly acquiring this new batch of qualitative capabilities compared to GPT 3.5. Somewhere in there is a smoothly declining predictable loss on text prediction but that loss on text prediction corresponds to qualitative jumps in ability. And I am not familiar with anybody who predicted those in advance of the observation.", "Dwarkesh Patel 1:48:15", "So in your view, when doom strikes, the scaling laws are still applying. It’s just that the thing that emerges at the end is something that is far smarter than the scaling laws would imply.", "Eliezer Yudkowsky 1:48:27", "Not literally at the point where everybody falls over dead. Probably at that point the AI rewrote the AI and the losses declined. Not on the previous graph.", "Dwarkesh Patel 1:48:36", "What is the thing where we can sort of establish your track record before everybody falls over dead?", "Eliezer Yudkowsky 1:48:41", "It’s hard. It is just easier to predict the endpoint than it is to predict the path. Some people will claim that I’ve done poorly compared to others who tried to predict things. I would dispute this. I think that the Hanson-Yudkowsky foom debate was won by Gwern Branwen , but I do think that Gwern Branwen is well to the Yudkowsky side of Yudkowsky in the original foom debate. Roughly, Hansen was like — you’re going to have all these distinct handcrafted systems that incorporate lots of human knowledge specialized for particular domains. Handcrafted to incorporate human knowledge, not just run on giant data sets. I was like — you’re going to have a carefully crafted architecture with a bunch of subsystems and that thing is going to look at the data and not be handcrafted to the particular features of the data. It’s going to learn the data. Then the actual thing is like — Ha ha. You don’t have this handcrafted system that learns, you just stack more layers. So like, Hanson here, Yudkowsky here, reality there. This would be my interpretation of what happened in the past. And if you want to be like — Well, who did better than that? It’s people like Shane Legg and Gwern Branwen. If you look at the whole planet, you can find somebody who made better predictions than Eliezer Yudkowsky, that’s for sure. Are these people currently telling you that you’re safe? No, they are not.", "Dwarkesh Patel 1:50:18", "The broader question I have is there’s been huge amounts of updates in the last 10-20 years. We’ve had the deep learning revolution. We’ve had the success of LLMs. It seems odd that none of this information has changed the basic picture that was clear to you like 15-20 years ago.", "Eliezer Yudkowsky 1:50:36", "I mean, it sure has. Like 15-20 years ago, I was talking about pulling off shit like coherent extrapolated volition with the first AI, which was actually a stupid idea even at the time. But you can see how much more hopeful everything looked back then. Back when there was AI that wasn’t giant inscrutable matrices of floating point numbers.", "Dwarkesh Patel 1:50:55", "When you say that, rounding to the nearest number, there’s basically a 0% chance of humanity survives — does that include the probability of there being errors in your model?", "Eliezer Yudkowsky 1:51:07", "My model no doubt has many errors. The trick would be an error someplace where that just makes everything work better. Usually when you’re trying to build a rocket and your model of rockets is lousy, it doesn’t cause the rocket to launch using half the fuel, go twice as far, and land twice as precisely on target as your calculations claimed.", "Dwarkesh Patel 1:51:31", "Though most of the room for updates is downwards, right? Something that makes you think the problem is twice as hard, you go from like 99% to like 99.5%. If it’s twice as easy. You go from 99 to 98?", "Eliezer Yudkowsky 1:51:42", "Sure. Wait, sorry. Yeah, but most updates are not — this is going to be easier than you thought. That sure has not been the history of the last 20 years from my perspective. The most favorable updates are — Yeah, we went down this really weird side path where the systems are legibly alarming to humans and humans are actually alarmed by then and maybe we get more sensible global policy.", "Dwarkesh Patel 1:52:14", "What is your model of the people who have engaged these arguments that you’ve made and you’ve dialogued with, but who have come nowhere close to your probability of doom? What do you think they continue to miss?", "Eliezer Yudkowsky 1:52:26", "I think they’re enacting the ritual of the young optimistic scientist who charges forth with no ideas of the difficulties and is slapped down by harsh reality and then becomes a grizzled cynic who knows all the reasons why everything is so much harder than you knew before you had any idea of how anything really worked. And they’re just living out that life cycle and I’m trying to jump ahead to the endpoint.", "Dwarkesh Patel 1:52:51", "Is there somebody who has a probability(doom) less than 50% who you think is like the clearest person with that view, who is like a view you can most empathize with?", "Eliezer Yudkowsky 1:53:02", "No.", "Dwarkesh Patel 1:53:03", "Really? Someone might say — Listen Eliezer, according to the CEO of the company who is leading the AI race,  he tweeted something that you’ve done the most to accelerate AI or something which was presumably the opposite of your goals. And it seems like other people did see that these sort of language models would scale in the way that they have scaled. Given that you didn’t see that coming and given that in some sense, according to some people, your actions have had the opposite impact that you intended. What is the track record by which the rest of the world can come to the conclusions that you have come to?", "Eliezer Yudkowsky 1:53:44", "These are two different questions. One is the question of who predicted that language models would scale? If they put it down in writing and if they said not just this loss function will go down, but also which capabilities will appear as that happens, then that would be quite interesting. That would be a successful scientific prediction. If they then came forth and said — this is the model that I used, this is what I predict about alignment. We could have an interesting fight about that. Second, there’s the point that if you try to rouse your planet to give it any sense that it is in peril. There are the idiot disaster monkeys who are like — “Ooh. Ooh. If this is dangerous, it must be powerful. Right? I’m going to be the first to grab the poison banana.” And what is one supposed to do? Should one remain silent? Should one let everyone walk directly into the whirling razor blades? If you sent me back in time, I’m not sure I could win this, but maybe I would have some notion of like if you calculate the message in exactly this way, then this group will not take away this message and you will be able to get this group of people to research on it without having this other group of people decide that it’s excitingly dangerous, and they want to rush forward on it. I’m not that smart. I’m not that wise. But what you are pointing to there is not a failure of ability to make predictions about AI. It’s that if you try to call attention to a danger and not just have everybody just have your whole planet walk directly into the whirling razor blades carefree, no idea what’s coming to them, maybe yeah that speeds up timelines. Maybe then people are like — “Ooh. Ooh. Exciting. Exciting. I want to build it. I want to build it. OOH, exciting. It has to be in my hands. I have to be the one to manage this danger. I’m going to run out and build it.” Like —Oh no. If we don’t invest in this company, who knows what investors they’ll have instead that will demand that they move fast because of the profit motive. Then of course, they just move fast fucking anyways. And yeah, if you sent me back in time, maybe I’d have a third option. But it seems to me that in terms of what one person can realistically manage, in terms of not being able to exactly craft a message with perfect hindsight that will reach some people and not others, at that point, you might as well just be like — Yeah, just invest in exactly the right stocks and invest in exactly the right time and you can fund projects on your own without alerting anyone. If you keep fantasies like that aside, then I think that in the end, even if this world ends up having less time, it was the right thing to do rather than just letting everybody sleepwalk into death and get there a little later.", "Being Eliezer", "Dwarkesh Patel 1:56:55", "If you don’t mind me asking, what has being in the space in the last five years been like for you? Or I guess even beyond that. Watching the progress and the way in which people have raced ahead?", "Eliezer Yudkowsky 1:57:08", "I made most of my negative updates as of five years ago. If anything, things have been taking longer to play out than I thought they would.", "Dwarkesh Patel 1:57:16", "But just like watching it, not as a sort of change in your probabilities, but just watching it concretely happen, what has that been like?", "Eliezer Yudkowsky 1:57:26", "Like continuing to play out a video game you know you’re going to lose. Because that’s all you have. If you wanted some deep wisdom from me, I don’t have it. I don’t know if it’s what you’d expect, but it’s what I would expect it to be like. Where what I would expect it to be like takes into account that. I guess I do have a little bit of wisdom. People imagining themselves in that situation raised in modern society, as opposed to being raised on science fiction books written 70 years ago, will imagine themselves being drama queens about it. The point of believing this thing is to be a drama queen about it and craft some story in which your emotions mean something. And what I have in the way of culture is like, your planet’s at stake. Bear up. Keep going. No drama. The drama is meaningless. What changes the chance of victory is meaningful. The drama is meaningless. Don’t indulge in it.", "Dwarkesh Patel 1:58:57", "Do you think that if you weren’t around, somebody else would have independently discovered this sort of field of alignment?", "Eliezer Yudkowsky 1:59:04", "That would be a pleasant fantasy for people who cannot abide the notion that history depends on small little changes or that people can really be different from other people. I’ve seen no evidence, but who knows what the alternate Everett branches of Earth are like?", "Dwarkesh Patel 1:59:27", "But there are other kids who grew up on science fiction, so that can’t be the only part of the answer.", "Eliezer Yudkowsky 1:59:31", "Well I sure am not surrounded by a cloud of people who are nearly Eliezer outputting 90% of the work output. And also this is not actually how things play out in a lot of places. Steve Jobs is dead, Apple apparently couldn’t find anyone else to be the next Steve Jobs of Apple, despite having really quite a lot of money with which to theoretically pay them. Maybe he didn’t really want a successor. Maybe he wanted to be irreplaceable. I don’t actually buy that based on how this has played out in a number of places. There was a person once who I met when I was younger who had built something, had built an organization, and he was like — “Hey, Eliezer. Do you want this to take this thing over?” And I thought he was joking. And it didn’t dawn on me until years and years later, after trying hard and failing hard to replace myself, that — “Oh, yeah. I could have maybe taken a shot at doing this person’s job, and he’d probably just never found anyone else who could take over his organization and maybe asked some other people and nobody was willing.” And that’s his tragedy, that he built something and now can’t find anyone else to take it over. And if I’d known that at the time, I would have at least apologized to him. To me it looks like people are not dense in the incredibly multidimensional space of people. There are too many dimensions and only 8 billion people on the planet. The world is full of people who have no immediate neighbors and problems that only one person can solve and other people cannot solve in quite the same way. I don’t think I’m unusual in looking around myself in that highly multidimensional space and not finding a ton of neighbors ready to take over. And if I had four people, any one of whom could do 99% of what I do, I might retire. I am tired. I probably wouldn’t. Probably the marginal contribution of that fifth person is still pretty large. I don’t know. There’s the question of — Did you occupy a place in mind space? Did you occupy a place in social space? Did people not try to become Eliezer because they thought Eliezer already existed? My answer to that is — “Man, I don’t think Eliezer already existing would have stopped me from trying to become Eliezer.” But maybe you just look at the next Everett Branch over and there’s just some kind of empty space that someone steps up to fill, even though then they don’t end up with a lot of obvious neighbors. Maybe the world where I died in childbirth is pretty much like this one. If somehow we live to hear about that sort of thing from someone or something that can calculate it, that’s not the way I bet but if it’s true, it’d be funny. When I said no drama, that did include the concept of trying to make the story of your planet be the story of you. If it all would have played out the same way and somehow I survived to be told that. I’ll laugh and I’ll cry, and that will be the reality.", "Dwarkesh Patel 2:03:46", "What I find interesting though, is that in your particular case, your output was so public. For example, your sequences, your science fiction and fan fiction. I’m sure hundreds of thousands of 18 year olds read it, or even younger, and presumably some of them reached out to you. I think this way I would love to learn more.", "Eliezer Yudkowsky 2:04:13", "Part of why I’m a little bit skeptical of the story where people are just infinitely replaceable is that I tried really, really hard to create a new crop of people who could do all the stuff I could do to take over because I knew my health was not great and getting worse. I tried really, really hard to replace myself. I’m not sure where you look to find somebody else who tried that hard to replace himself. I tried. I really, really tried. That’s what the Less wrong sequences were. They had other purposes. But first and foremost, it was me looking over my history and going — Well, I see all these blind pathways and stuff that it took me a while to figure out. I feel like I had these near misses on becoming myself. If I got here, there’s got to be ten other people, and some of them are smarter than I am, and they just need these little boosts and shifts and hints, and they can go down the pathway and turn into Super Eliezer. And that’s what the sequences were like. Other people use them for other stuff but primarily they were an instruction manual to the young Eliezers that I thought must exist out there. And they are not really here.", "Dwarkesh Patel 2:05:27", "Other than the sequences, do you mind if I ask what were the kinds of things you’re talking about here in terms of training the next core of people like you?", "Eliezer Yudkowsky 2:05:36", "Just the sequences. I am not a good mentor. I did try mentoring somebody for a year once, but yeah, he didn’t turn into me. So I picked things that were more scalable. The other reason why you don’t see a lot of people trying that hard to replace themselves is that most people, whatever their other talents, don’t happen to be sufficiently good writers. I don’t think the sequences were good writing by my current standards but they were good enough. And most people do not happen to get a handful of cards that contain the writing card, whatever else their other talents.", "Dwarkesh Patel 2:06:14", "I’ll cut this question out if you don’t want to talk about it, but you mentioned that there’s certain health problems that incline you towards retirement now. Is that something you are willing to talk about?", "Eliezer Yudkowsky 2:06:27", "They cause me to want to retire. I doubt they will cause me to actually retire. Fatigue syndrome. Our society does not have good words for these things. The words that exist are tainted by their use as labels to categorize a class of people, some of whom perhaps are actually malingering. But mostly it says like we don’t know what it means. And you don't ever want to have chronic fatigue syndrome on your medical record because that just tells doctors to give up on you. And what does it actually mean besides being tired? If one lives half a mile from one’s work, then one had better walk home if one wants to go for a walk sometime in the day. (unclear) If you walk half a mile to work you’re not going to be getting very much work done the rest of that work day. And aside from that, these things don’t have names. Not yet.", "Dwarkesh Patel 2:07:38", "Whatever the cause of this, is your working hypothesis that it has something to do or is in some way correlated with the thing that makes you Eliezer or do you think it’s like a separate thing?", "Eliezer Yudkowsky 2:07:51", "When I was 18, I made up stories like that and it wouldn’t surprise me terribly if one survived to hear the tale from something that knew it, that the actual story would be a complex, tangled web of causality in which that was in some sense true. But I don’t know. And storytelling about it does not hold the appeal that it once did for me. Is it a coincidence that I was not able to go to high school or college? Is there something about it that would have crushed the person that I otherwise would have been? Or is it just in some sense a giant coincidence? I don’t know. Some people go through high school and college and come out sane. There’s too much stuff in a human being’s history and there’s a plausible story you could tell. Like, maybe there’s a bunch of potential Eliezers out there, but they went to high school and college and it killed their souls. And you were the one who had the weird health problem and you didn’t go to high school and you didn’t go to college and you stayed yourself. And I don’t know. To me it just feels like patterns in the clouds and maybe that cloud actually is shaped like a horse. What good does the knowledge do? What good does the story do?", "Dwarkesh Patel 2:09:26", "When you were writing the sequences and the fiction from the beginning, was the main goal to find somebody who could replace you and specifically the task of AI alignment, or did it start off with a different goal?", "Eliezer Yudkowsky 2:09:43", "In 2008, I did not know this stuff was going to go down in 2023. For all I knew, there was a lot more time in which to do something like build up civilization to another level, layer by layer. Sometimes civilizations do advance as they improve their epistemology. So there was that, there was the AI project. Those were the two projects, more or less.", "Dwarkesh Patel 2:10:16", "When did AI become the main thing?", "Eliezer Yudkowsky 2:10:18", "As we ran out of time to improve civilization.", "Dwarkesh Patel 2:10:20", "Was there a particular year that became the case for you?", "Eliezer Yudkowsky 2:10:23", "I mean, I think that 2015, 16, 17 were the years at which I’d noticed I’d been repeatedly surprised by stuff moving faster than anticipated. And I was like — “Oh, okay, like, if things continue accelerating at that pace, we might be in trouble.” And then in 2019, 2020, stuff slowed down a bit and there was more time than I was afraid we had back then. That’s what it looks like to be a Bayesian. Your estimates go up, your estimates go down. They don’t just keep moving in the same direction, because if they keep moving in the same direction several times, you’re like — “Oh, I see where this thing is trending. I’m going to move here.” And then things don’t keep moving that direction. Then you go like — “Oh, okay, like back down again.” That’s what sanity looks like.", "Dwarkesh Patel 2:11:08", "I am curious actually, taking many worlds seriously, does that bring you any comfort in the sense that there is one branch of the wave function where humanity survives? Or do you not buy that?", "Eliezer Yudkowsky 2:11:21", "I’m worried that they’re pretty distant. I’m not sure it’s enough to not have Hitler, but it sure would be a start on things going differently in a timeline. But mostly, I don’t know. I’d say there’s some comfort from thinking of the wider spaces than that. As Tegmark pointed out way back when, if you have a spatially infinite universe that gets you just as many worlds as the quantum multiverse, if you go far enough in a space that is unbounded, you will eventually come to an exact copy of Earth or a copy of Earth from its past that then has a chance to diverge a little differently. So the quantum multiverse adds nothing. Reality is just quite large. Is that a comfort? Yeah. Yes, it is. That possibly our nearest surviving relatives are quite distant, or you have to go quite some ways through the space before you have worlds that survive by anything but the wildest flukes. Maybe our nearest surviving neighbors are closer than that. But look far enough and there should be some species of nice aliens that were smarter or better at coordination and built their happily ever after. And yeah, that is a comfort. It’s not quite as good as dying yourself, knowing that the rest of the world will be okay, but it’s kind of like that on a larger scale. And weren’t you going to ask something about orthogonality at some point?", "Dwarkesh Patel 2:13:00", "Did I not?", "Eliezer Yudkowsky 2:13:02", "Did you?", "Dwarkesh Patel 2:13:02", "At the beginning when we talked about human evolution?", "Orthogonality", "Eliezer Yudkowsky 2:13:06", "Yeah, that’s not orthogonality. That’s the particular question of what are the laws relating optimization of a system via hill climbing to the internal psychological motivations that it acquires? But maybe that was all you meant to ask about.", "Dwarkesh Patel 2:13:23", "Can you explain in what sense you see the broader orthogonality thesis as?", "Eliezer Yudkowsky 2:13:30", "The broader orthogonality thesis is — you can have almost any kind of self consistent utility function in a self consistent mind. Many people are like, why would AIs want to kill us? Why would smart things not just automatically be nice? And this is a valid question, which I hope to at some point run into some interviewer where they are of the opinion that smart things are automatically nice. So that I can explain on camera why, although I myself held this position very long ago, I realized that I was terribly wrong about it and that all kinds of different things hold together and that if you take a human and make them smarter, that may shift their morality. It might even, depending on how they start out, make them nicer. But that doesn’t mean that you can do this with arbitrary minds and arbitrary mind space because all the different motivations hold together. That’s orthogonality. But if you already believe that, then there might not be much to discuss.", "Dwarkesh Patel 2:14:30", "No, I guess I wasn’t clear enough about it. Yes, all the different sorts of utility functions are possible. It’s that from the evidence of evolution and from the sort of reasoning about how these systems are being trained, I think that wildly divergent ones don’t seem as likely as you do. But instead of having you respond to that directly, let me ask you some questions I did have about it, which I didn’t get to. One is actually from Scott Aaronson. I don’t know if you saw his recent blog post , but here’s a quote from it: “If you really accept the practical version of the Orthogonality Thesis, then it seems to me that you can’t regard education, knowledge, and enlightenment as instruments for moral betterment. On the whole, though, education hasn’t merely improved humans’ abilities to achieve their goals; it’s also improved their goals.” I’ll let you react to that.", "Eliezer Yudkowsky 2:15:23", "Yeah. If you start with humans, if you take humans who were raised the way Scott Aronson was, and you make them smarter, they get nicer, it affects their goals. And there’s a Less Wrong post about this, as there always is, several really, but sorting pebbles into correct heaps , describing a species of aliens who think that a heap of size seven is correct and a heap of size eleven is correct, but not eight or nine or ten, those heaps are incorrect. And they used to think that a heap size of 21 might be correct, but then somebody showed them an array of seven by three pebbles, seven columns, three rows, and then people realized that 21 pebbles was not a correct heap. And this is like a thing they intrinsically care about. These are aliens that have a utility function, as I would phrase it, with some logical uncertainty inside it. But you can see how as they get smarter, they become better and better able to understand which heaps of pebbles are correct. And the real story here is more complicated than this. But that’s the seed of the answer. Scott Aaronson is inside a reference frame for how his utility function shifts as he gets smarter. It’s more complicated than that. Human beings are made out of these are more complicated than the pebble sorters. They’re made out of all these complicated desires. And as they come to know those desires, they change. As they come to see themselves as having different options. It doesn’t just change which option they choose after the manner of something with a utility function, but the different options that they have bring different pieces of themselves in conflict. When you have to kill to stay alive you may come to a different equilibrium with your own feelings about killing than when you are wealthy enough that you no longer have to do that. And this is how humans change as they become smarter, even as they become wealthier, as they have more options, as they know themselves better, as they think for longer about things and consider more arguments, as they understand perhaps other people and give their empathy a chance to grab onto something solider because of their greater understanding of other minds. But that’s all when these things start out inside you. And the problem is that there’s other ways for minds to hold together coherently, where they execute other updates as they know more or don’t even execute updates at all because their utility function is simpler than that. Though I do suspect that is not the most likely outcome of training a large language model. So large language models will change their preferences as they get smarter. Indeed. Not just like what they do to get the same terminal outcomes, but the preferences themselves will up to a point change as they get smarter. It doesn’t keep going. At some point you know yourself especially well and you are able to rewrite yourself and at some point there, unless you specifically choose not to, I think that the system crystallizes. We might choose not to. We might value the part where we just sort of change in that way even if it’s not no longer heading in a knowable direction. Because if it’s heading in a knowable direction, you could jump to that as an endpoint.", "Dwarkesh Patel 2:19:18", "Is that why you think AIs will jump to that endpoint? Because they can anticipate where their sort of moral updates are going?", "Eliezer Yudkowsky 2:19:26", "I would reserve the term moral updates for humans. Let’s call them logical preference updates, preference shifts.", "Dwarkesh Patel 2:19:37", "What are the prerequisites in terms of whatever makes Aaronson and other sort of smart moral people that we humans could sympathize with? You mentioned empathy, but what are the sort of prerequisites?", "Eliezer Yudkowsky 2:19:51", "They’re complicated. There’s not a short list. If there was a short list of crisply defined things where you could give it like — *choose* *choose* *choose* and now it’s in your moral frame of reference, then that would be the alignment plan. I don’t think it’s that simple. Or if it is that simple, it’s like in the textbook from the future that we don’t have.", "Dwarkesh Patel 2:20:07", "Okay, let me ask you this. Are you still expecting a sort of chimps to humans gain in generality even with these LLMs? Or does the future increase look like an order that we see from like GPT-3 to GPT-4?", "Eliezer Yudkowsky 2:20:21", "I am not sure I understand the question. Can you rephrase?", "Dwarkesh Patel 2:20:24", "Yes. From reading your writing from earlier, it seemed like a big part of your argument was like, look — I don’t know how many total mutations it was to get from chimps to humans, but it wasn’t that many mutations. And we went from something that could basically get bananas in the forest to something that could walk on the moon. Are you still expecting that sort of gain eventually between, I don’t know, like GPT-5 and GPT-6, or like some GPT-N and GPT-N+1? Or does it look smoother to you now?", "Eliezer Yudkowsky 2:20:55", "First of all, let me preface by saying that for all I know of the hidden variables of nature, it’s completely allowed that GPT-4 was actually just it. Ha ha ha. This is where it saturates. It goes no further. It’s not how I’d bet. But if nature comes back and tells me that, I’m not allowed to be like — “You just violated the rule that I knew about.” I know of no such rule prohibiting such a thing.", "Dwarkesh Patel 2:21:20", "I’m not asking whether these things will plateau at a given intelligence-level, where there’s a cap, that’s not the question. Even if there is no cap, do you expect these systems to continue scaling in the way that they have been scaling, or do you expect some really big jump between some GPT-N and some GPT-N+1?", "Eliezer Yudkowsky 2:21:37", "Yes. And that’s only if things don’t plateau before then. I can’t quite say that I know what you know. I do feel like we have this track of the loss going down as you add more parameters and you train on more tokens and a bunch of qualitative abilities that suddenly appear. I’m sure if you zoom in closely enough, they appear more gradually, but they appear as the successful releases of the system, which I don’t think anybody has been going around predicting in advance that I know about. And loss continue to go down unless it suddenly plateaus. New abilities appear, I don’t know which ones. Is there at some point a giant leap? If at some point it becomes able to toss out the enormous training run paradigm and jump to a new paradigm of AI. That would be one kind of giant leap. You could get another kind of giant leap via architectural shift, something like transformers, only there’s like an enormously huger hardware overhang now. Like something that is to transformers as transformers were to recurrent neural networks. And then maybe the loss function suddenly goes down and you get a whole bunch of new abilities. That’s not because the loss went down on the smooth curve and you got a bunch more abilities in a dense spot. Maybe there’s some particular set of abilities that is like a master ability, the way that language and writing and culture for humans might have been a master ability. And the loss function goes down smoothly and you get this one new internal capability and there’s a huge jump in output. Maybe that happens. Maybe stuff plateaus before then and it doesn’t happen. Being the expert who gets to go on podcasts, they don’t actually give you a little book with all the answers in it you know. You’re just guessing based on the same information that other people have. And maybe, if you’re lucky, slightly better theory.", "Dwarkesh Patel 2:23:39", "Yeah, that’s why I’m wondering. Because you do have a different theory of what fundamentally intelligence is and what it entails. So I’m curious if you have some expectations of where the GPTs are going.", "Eliezer Yudkowsky 2:23:49", "I feel like a whole bunch of my successful predictions in this have come from other people being like — “Oh, yes. I have this theory which predicts that stuff is 30 years off.” And I’m like — “You don’t know that.” And then stuff happens not 30 years off. And I’m like — “Ha ha. Successful prediction.” And that’s basically what I told you, right? I was like — well, you could have the loss function continuing on a smooth line and new abilities appear, and you could have them suddenly appear in a cluster. Because why not? Because nature just tells you that’s up. And suddenly you can have this one key ability, that’s equivalent to language for humans, and there’s a sudden jump in output capabilities. You could have a new innovation, like the transformer, and maybe the losses actually drop precipitously and a whole bunch of new abilities appear at once. This is all just me. This is me saying — I don’t know. But so many people around are saying things that implicitly claim to know more than that, that it can actually start to sound like a startling prediction. This is one of my big secret tricks, actually. People are like — The AI could be good or evil. So it’s like 50-50, right? And I’m actually like — No, we can be ignorant about a wider space than this in which good is actually like a fairly narrow range. So many of the predictions like that are really anti-predictions. It’s somebody thinking along a relatively narrow line and you point out everything outside of that and it sounds like a startling prediction. Of course, the trouble being, when you look back afterwards, people are like — “Well, those people saying the narrow thing were just silly. Ha ha.” and they don’t give you as much credit.", "Dwarkesh Patel 2:25:24", "I think the credit you would get for that, rightly, is as a good Agnostic forecaster, as somebody who is calm and measured. But it seems like to be able to make really strong claims about the future, about something that is so out of prior distributions as like the death of humanity, you don’t only have to show yourself as a good Agnostic forecaster, you have to show that your ability to forecast because of a particular theory is much greater. Do you see what I mean?", "Eliezer Yudkowsky 2:25:58", "It’s all about the ignorance prior. It’s all about knowing the space in which to be maximum entropy. What will the future be? I don’t know. It could be paperclips, it could be staples. It could be no kind of office supplies at all and tiny little spirals. It could be little tiny things that are like outputting 111, because that’s like the most predictable kind of text to predict. Or representations of ever larger numbers in the fast growing hierarchy because that’s how they interpret the reward counter. I’m actually getting into specifics here, which is the opposite of the point I originally meant to make, which is if somebody claims to be very unsure, I might say — “Okay, so then you expect most possible molecular configurations of the solar system to be equally probable.” Well, humans mostly aren’t in those. So being very unsure about the future looks like predicting with probability nearly one that the humans are all gone, which it’s not actually that bad, but it illustrates the point of people going like — “But how are you sure?” Kind of missing the real discourse and skill, which is like — “Oh, yes, we’re all very unsure. Lots of entropy in our probability distributions. But what is the space under which you are unsure?”", "Dwarkesh Patel 2:27:25", "Even at that point it seems like the most reasonable prior is not that all sort of atomic configurations of the solar system are equally likely. Because I agree by that metric…", "Eliezer Yudkowsky 2:27:34", "Yeah, it’s like all computations that can be run over configurations of solar-system are equally likely to be maximized.", "Dwarkesh Patel 2:27:49", "We know what the loss function looks like, we know what the training data looks like. That obviously is no guarantee of what the drives that come out of that loss function will look like.", "Eliezer Yudkowsky 2:28:00", "Humans came out pretty different from their loss functions.", "Dwarkesh Patel 2:28:05", "I would actually say no. If it is as similar as humans are now to our loss function from which we evolved, that would be like that. Honestly, it might not be that terrible world, and it might, in fact, be a very good world.", "Eliezer Yudkowsky 2:28:18", "Whoa. Where do you get a good world out of maximum prediction of text?", "Dwarkesh Patel 2:28:27", "Plus RLHF, plus whatever alignment stuff that might work, results in something that kind of just does it reliably enough that we ask it like — Hey, help us with alignment, then go ..", "Eliezer Yudkowsky 2:28:42", "Stop asking for help with alignment. Ask it for any of the help.", "Dwarkesh Patel 2:28:48", "Help us enhance our brains. Help us blah, blah, blah.", "Eliezer Yudkowsky 2:28:50", "Thank you. Why are people asking for the most difficult thing that’s the most impossible to verify? It’s whack.", "Dwarkesh Patel 2:28:56", "And then basically, at that point, we’re like turning into gods, and we can..", "Eliezer Yudkowsky 2:29:01", "If you get to the point where you’re turning into gods yourselves, you’re not quite home free, but you’re sure past a lot of the death.", "Dwarkesh Patel 2:29:08", "Yeah. Maybe you can explain the intuition that all sorts of drives are equally likely given unknown loss function and a known set of data.", "Eliezer Yudkowsky 2:29:22", "If you had the textbook from the future, or if you were an alien who had watched 10,000 planets destroy themselves the way Earth has while being only human in your sample complexity and generalization ability, then you could be like — “Oh, yes, they’re going to try this trick with loss functions, and they will get a draw from this space of results.” And the alien may now have a pretty good prediction of range of where that ends up. Similarly, now that we’ve actually seen how humans turn out when you optimize them for reproduction, it would not be surprising if we found some aliens the next door over and they had orgasms. Maybe they don’t have orgasms, but if they had some kind of strong surge of pleasure during the active mating, we’re not surprised. We’ve seen how that plays out in humans. If they have some kind of weird food that isn’t that nutritious but makes them much happier than any kind of food that was more nutritious and ran in their ancestral environment. Like ice cream. We probably can’t call it ice cream, right? It’s not going to be like sugar, salt, fat, frozen. They’re not specifically going to have ice cream, right? They might play Go. They’re not going to play chess.", "Dwarkesh Patel 2:30:49", "Because chess has more specific pieces, right?", "Eliezer Yudkowsky 2:30:52", "Yeah. They’re not going to play Go on 19 by 19. They might play Go on some other size. Probably odd. Well, can we really say that? I don’t know. If they play Go, I’d bet on an odd board dimension at two thirds (unclear) sounds about right. Unless there’s some other reason why Go just totally does not work on an even board dimension that I don’t know, because I’m insufficiently acquainted with the game. The point is, reasoning off of humans is pretty hard. We have the loss function over here. We have humans over here. We can look at the rough distance. All the weird specific stuff that humans accreted around and be like, if the loss function is over here and humans are over there, maybe the aliens are like, over there. And if we had three aliens that would expand our views of the possible or even two aliens would vastly expand our views of the possible and give us a much stronger notion of what the third aliens look like. Humans, aliens, third race. But the wild-eyed, optimistic scientists have never been through never been through this with AI. So they’re like, — “Oh, you optimized AI to say nice things and it helps you and make it a bunch smarter. Probably says nice things and helps you is probably, like, totally aligned. Yeah.” They don’t know any better. Not trying to jump ahead of the story. But the aliens know where you end up around the loss function. They know how it’s going to play out much more narrowly. We’re guessing much more blindly here.", "Dwarkesh Patel 2:32:45", "It just leaves me in a sort of unsatisfied place that we apparently know about something that is so extreme that maybe a handful of people in the entire world believe it from first principles about the doom of humanity because of AI. But this theory that is so productive in that one very unique prediction is unable to give us any sort of other prediction about what this world might look like in the future or about what happens before we all die. It can tell us nothing about the world until the point at which makes a prediction that is the most remarkable in the world.", "Eliezer Yudkowsky 2:33:30", "Rationalists should win, but rationalists should not win the lottery. I’d ask you what other theories are supposed to have been doing an amazingly better job of predicting the last three years? Maybe it’s just hard to predict, right? And in fact it's easier to predict the end state than the strange complicated winding paths that lead there. Much like if you play against AlphaGo and predict it’s going to be in the class of winning board states, but not exactly how it’s going to beat you. The difficulty of predicting the future is not quite like that. But from my perspective, the future is just really hard to predict. And there’s a few places where you can wrench what sounds like an answer out of your ignorance, even though really you’re just being like — well, you’re going to end up in some random weird place around this loss function and I haven’t seen it happen with 10,000 species so I don’t know where. Very impoverished from the standpoint of anybody who actually knew anything could actually predict anything. But the rest of the world is like — Oh, we’re equally likely to win the lottery and lose the lottery, right? Like either we win or we don’t. You come along and you’ll be like — “No, no, your chance of winning the lottery is tiny.” They’re like — “What? How can you be so sure? Where do you get your strange certainty?” And the actual root of the answer is that you are putting your maximum entropy over a different probability space. That just actually is the thing that’s going on there. You’re saying all lottery numbers are equally likely instead of winning and losing are equally likely.", "Could alignment be easier than we think?", "Dwarkesh Patel 2:35:00", "So I think the place to close this conversation is let me just give the main reasons why I’m not convinced that doom is likely or even that it’s more than 50% probable or anything like that. Some are things that I started this conversation with that I don’t feel like I heard any knock down arguments against. And some are new things from the conversation. And the following things are things that, even if any one of them individually turns out to be true, I think doom doesn’t make sense or is much less likely.", "So going through the list, I think probably more likely than not, this entire frame all around alignment and AI is wrong. And this is maybe not something that would be easy to talk about, but I’m just kind of skeptical of sort of first principles reasoning that has really wild conclusions.", "Eliezer Yudkowsky 2:36:08", "Okay, so everything in the solar system just ends up in a random configuration then?", "Dwarkesh Patel 2:36:11", "Or it stays like it is unless you have very good reasons to think otherwise. And especially if you think it’s going to be very different from the way it’s going, you must have ironclad reasons for thinking that it’s going to be very, very different from the way it is.", "Eliezer Yudkowsky 2:36:31", "Humanity hasn’t really existed for very long. Man, I don’t even know what to say to this thing. We’re like this tiny like, everything that you think of as normal is this tiny flash of things being in this particular structure out of a 13.8 billion year old universe, very little of which is 21st century civilized world on this little fraction of the surface of one planet in a vast solar system, most of which is not Earth, in a vast universe, most of which is not Earth. And it has lasted for such a tiny period of time through such a tiny amount of space and has changed so much over just the last 20,000 years or so. And here you are being like — why would things really be any different going forward?", "Dwarkesh Patel 2:37:28", "I feel like that argument proves too much because you could use that same argument. A theologian comes up to me and says — “The rapture is coming and let me explain why the rapture is coming.” I’m not claiming that your arguments are as bad as the arguments for rapture. I’m just following the example. But then they say — “Look at how wild human civilization has been. Would it be any wilder if there was a rapture?” And I’m like — “Yeah, actually, as wild as human civilization has been, the rapture would be much wilder.”", "Eliezer Yudkowsky 2:37:55", "It violates the laws of physics.", "Dwarkesh Patel 2:37:57", "Yes.", "Eliezer Yudkowsky 2:37:58", "I’m not trying to violate the laws of physics, even as you probably know them.", "Dwarkesh Patel 2:38:02", "How about this? Somebody comes up to me and he says — “We actually have nanosystems right behind you.” He says — “I’ve read Eric Drexler’s. nanosSystems. I’ve read Feynman’s (unclear) there’s plenty of room at the bottom.", "Eliezer Yudkowsky 2:38:16", "These two things are not to (unclear) but go on.", "Dwarkesh Patel 2:38:18", "Okay, fair enough. He comes to me and he says — “Let me explain to you my first principles argument about how some nanosystems will be replicators and the replicators, because of some competition yada yada yada argument, they turn the entire world into goo just making copies of themselves.”", "Eliezer Yudkowsky 2:38:37", "This kind of happened with humans. Well, life generally.", "Dwarkesh Patel 2:38:42", "So then they say, \"Listen, as soon as we start building nanosystems, pretty soon, 99% probability the entire world turns into goo. Just because the replicators are the things that turn things into goo, there will be more replicators and non-replicators.” I don’t have an object level debate about that, but it’s just like I just started that whole thing to say — yes, human civilization has been wild, but the entire world turning into goo because of nanosytems alone just seems much wilder than human civilization.", "Eliezer Yudkowsky 2:39:09", "This argument probably lands with greater force on somebody who does not expect stuff to be disassembled by nanosystems, albeit intelligently controlled ones, rather than goo in like quite near future, especially on the 13.8 billion year timescale. But do you expect this little momentary flash of what you call normality to continue? Do you expect the future to be normal?", "Dwarkesh Patel 2:39:31", "No. I expect any given vision of how things shape out to be wrong. It is not like you are suggesting that the current weird trajectory continues being weird in the way it’s been weird and that we continue to have like 2% economic growth or whatever, and that leads to incrementally more technological progress and so on. You’re suggesting there’s been that specific species of weirdness, which means that this entirely different species of weirdness is warranted.", "Eliezer Yudkowsky 2:40:04", "We’ve got different weirdnesses over time. The jump to superintelligence does strike me as being significant in the same way as the first self-replicator. The first self-replicator is the universe transitioning from: you see mostly stable things to you also see a whole bunch of things that make copies of themselves. And then somewhat later on, there’s a state where there’s this strange transition between the universe of stable things where things come together by accident and stay as long as they endure to this world of complicated life. And that transitionary moment is when you have something that arises by accident and yet self replicates. And similarly on the other side of things you have things that are intelligent making other intelligent things. But to get into that world, you’ve got to have the thing that is built just by things copying themselves and mutating and yet is intelligent enough to make another intelligent thing. Now, if I sketched out that cosmology, would you say — “No, no. I don’t believe in that.”?", "Dwarkesh Patel 2:41:10", "What if I sketch out the cosmology — because of Replicators, blah blah blah, intelligent beings, intelligent beings create nanoystems, blah, blah blah.", "Eliezer Yudkowsky 2:41:18", "No, no. Don’t tell me about the proofs too much. I discussed the cosmology, do you buy it? In the long run are we in a world full of things replicating or are we in a world full of intelligent things, designing other intelligent things?", "Dwarkesh Patel 2:41:35", "Yes.", "Eliezer Yudkowsky 2:41:37", "So you buy that vast shift in the foundations of order of the universe that instead of the world of things that make copies of themselves imperfectly, we are in the world of things that are designed and were designed. You buy that vast cosmological shift I was just describing, the utter disruption of everything you see that you call normal down to the leaves and the trees around you. You believe that. Well, the same skepticism you’re so fond of that argues against the Rapture can also be used to disprove this thing you believe that you think is probably pretty obvious actually, now that I’ve pointed it out. Your skepticism disproves too much, my friend.", "Dwarkesh Patel 2:42:19", "That’s actually a really good point. It still leaves open the possibility of how it happens and when it happens, blah, blah, blah. But actually, that’s a good point. Okay, so second thing.", "Eliezer Yudkowsky 2:42:30", "You set them up, I’ll knock them down one after the other.", "Dwarkesh Patel 2:42:34", "Second thing is…", "Eliezer Yudkowsky 2:42:40", "Wrong. Sorry, I was just jumping ahead to the predictable update at the end.", "Dwarkesh Patel 2:42:43", "You’re a good Bayesian. Maybe alignment just turns out to be much simpler or much easier than we think. It’s not like we’ve as a civilization spent that much resources or brain power solving it. If we put in even the kind of resources that we put into elucidating String theory or something into alignment, it could just turn out to be enough to solve it. And in fact, in the current paradigm, it turns out to be simpler because they’re sort of pre-trained on human thought and that might be a simpler regime than something that just comes out of a black box like alpha zero or something like that.", "Eliezer Yudkowsky 2:43:24", "Could I be wrong in an understandable way to me in advance mass, which is not where most of my hope comes from is on, what if RLHF just works well enough and the people in charge of this are not the current disaster monkeys, but instead have some modicum of caution and know what to aim for in RLHF space, which the current crop do not, and I’m not really that confident of their ability to understand if I told them. But maybe you have some folks who can understand anyways. I can sort of see what I try. The current crop of people will not try it. And I’m not actually sure that if somebody else takes over the government that they listen to me either. So some of the trouble here is that you have a choice of target and neither is all that great. One is you look for the niceness that’s in humans, and you try to bring it out in the AI. And then you, with its cooperation, because it knows that if you try to just amp it up, it might not stay all that nice, or that if you build a successor system to it, it might not stay all that nice, and it doesn’t want that because you narrow down the shoggoth enough. Somebody once had this incredibly profound statement that I think I somewhat disagree with but it’s still so incredibly profound. Consciousness is when the mask eats the shoggoth. Maybe that’s it, maybe with the right set of bootstrapping reflection type stuff you can have that happen on purpose more or less, where the system’s output that you’re shaping is to some degree in control of the system and you locate niceness in the human space. I have fantasies along the lines of what if you trained GPT-N to distinguish people being nice and saying sensible things and argue validly, and I’m not sure that works if you just have Amazon turks try to label it. You just get the strange thing that RLHF located in the present space which is some kind of weird corporate speak, left-rationalizing leaning, strange telephone announcement creature. That is what they got with the current crop of RLHF. Note how this stuff is weirder and harder than people might have imagined initially. But leave aside the part where you try to jump start the entire process of turning into a grizzled cynic and update as hard as you can and do it in advance. Maybe you are able to train it on Scott Alexander and so you want to be a wizard , some other nice real people and nice fictional people and separately train on what’s valid arguments. That’s going to be tougher but I could probably put together a crew of a dozen people who could provide the data on that RLHF and you find the nice creature and you find the nice mask that argues validly. You do some more complicated stuff to try to boost the thing where it’s like eating the shoggoth where that’s more what the system is, less what it’s pretending to be. I can say this and the disaster monkeys at the current places cannot (unclear) to it but they have not said things like this themselves that I have ever heard and that is not a good sign. And then if you don’t amp this up too far, which on the present paradigm you can’t do anyways, because if you train the very, very smart version of the system it kills you before you can RLHF it. But maybe you can train GPT to distinguish nice, valid, kind, careful, and then filter all the training data to get the nice things to train on and then train on that data rather than training on everything to try to avert the Waluigi problem or just more generally having all the darkness in there. Just train it on the light that’s in humanity. So there’s like that kind of course. And if you don’t push that too far, maybe you can get a genuine ally and maybe things play out differently from there. That’s one of the little rays of hope. But I don’t think alignment is actually so easy that you just get whatever you want. It’s a genie, it gives you what you wish for. I don’t think that doesn’t even strike me as hope.", "Dwarkesh Patel 2:49:06", "Honestly. The way you describe it, it seemed kind of compelling. I don’t know why that doesn’t even rise to 1%. The possibility that it works out that way.", "Eliezer Yudkowsky 2:49:14", "This is like literally my AI alignment fantasy from 2003, though not with RLHF as the implementation method or LLMs as the base. And it’s going to be more dangerous than what I was dreaming about in 2003. And I think in a very real sense it feels to me like the people doing this stuff now have literally not gotten as far as I was in 2003. And I’ve now written out my answer sheet for that. It’s on the podcast, it goes on the Internet. And now they can pretend that that was their idea or like  — “Sure, that’s obvious. We were going to do that anyways.” And yet they didn’t say it earlier. You can’t run a big project off of one person who.. The alignment field failed to gel. That’s my (unclear) to the like — “Well, you just throw in a ton of more money, and then it’s all solvable.” Because I’ve seen people try to amp up the amount of money that goes into it and the stuff coming out of it has not gone to the places that I would have considered obvious a while ago. And I can print out all my answer sheets for it and each time I do that, it gets a little bit harder to make the case next time.", "Dwarkesh Patel 2:50:39", "How much money are we talking about in the grand scheme of things? Because civilization itself has a lot of money.", "Eliezer Yudkowsky 2:50:45", "I know people who have a billion dollars. I don’t know how to throw a billion dollars at outputting lots and lots of alignment stuff.", "Dwarkesh Patel 2:50:53", "But you might not. But I mean, you are one of 10 billion, right?", "Eliezer Yudkowsky 2:50:57", "And other people go ahead and spend lots of money on it anyways. Everybody makes the same mistakes. Nate Soares has a post about it. I forget the exact title, but everybody coming into alignment makes the same mistakes.", "Dwarkesh Patel 2:51:11", "Let me just go on to the third point because I think it plays into what I was saying. The third reason is if it is the case that these capabilities scale in some constant way as it seems like they’re going from 2 to 3 or 3 to 4?", "Eliezer Yudkowsky 2:51:29", "What does that even mean? But go on.", "Dwarkesh Patel 2:51:30", "That they get more and more general. It’s not like going from a mouse to a human or a chimpanzee to a human. It’s like going from GPT-3 to GPT-4. It just seems like that’s less of a jump than chimp to human, like a slow accumulation of capabilities. There are a lot of S curves of emergent abilities, but overall the curve looks sort of..", "Eliezer Yudkowsky 2:51:56", "I feel like we bit off a whole chunk of chimp to human in GPT 3.5 to GPT-4, but go on.", "Dwarkesh Patel 2:52:03", "Regardless then this leads to human level intelligence for some interval. I think that I was not convinced from the arguments that we could not have a system of sort of checks on this the same way you have checks on smart humans that it would try to deceive us to achieve its aims. Any more than smart humans are in positions of power. Try to do the same thing.", "Eliezer Yudkowsky 2:52:31", "For a year. What are you going to do with that year before the next generation of systems come out that are not held in check by humans because they are not roughly in the same power intelligence range as humans? Maybe you can get a year like that. Maybe that actually happens. What are you going to do with that year that prevents you from dying the year after?", "Dwarkesh Patel 2:52:52", "One possibility is that because these systems are trained on human text, maybe progress just slows down a lot after it gets to slightly above human level.", "Eliezer Yudkowsky 2:53:02", "Yeah, I would be quite surprised if that’s how anything works.", "Dwarkesh Patel 2:53:08", "Why is that?", "Eliezer Yudkowsky 2:53:10", "First of all, you realize in principle that the task of minimizing losses on predicting human text does not stop when you’re as smart as a human, right? Like you can see the computer science of that?", "Dwarkesh Patel 2:53:34", "I don’t know if I see the computer science of that, but I think I probably understand.", "Eliezer Yudkowsky 2:53:38", "Okay so somewhere on the internet is a list of hashes followed by the string hashed. This is a simple demonstration of how you can go on getting lower losses by throwing a hypercomputer at the problem. There are pieces of text on there that were not produced by humans talking in conversation, but rather by lots and lots of work to extract experimental results out of reality. That text is also on the internet. Maybe there’s not enough of it for the machine learning paradigm to work, but I’d sooner buy that the GPT system just bottleneck short of being able to predict that stuff better rather than. You can maybe buy that but the notion that you only have to be smart as a human to predict all the text on the internet, as soon as you turn around and stare at that it’s just transparently false.", "Dwarkesh Patel 2:54:31", "Okay, agreed. Okay, how about this story? You have something that is sort of human-like that is maybe above humans at certain aspects of science because it’s specifically trained to be really good at the things that are on the Internet, which is like chunks and chunks of archive and whatever. Whereas it has not been trained specifically to gain power. And while at some point of intelligence that comes along.", "Can I just restart that whole sentence?", "Eliezer Yudkowsky 2:55:02", "No. You have spoken it. It exists. It cannot be called back. There are no take backs. There is no going back. There is no going back. Go ahead.", "Dwarkesh Patel 2:55:14", "Okay, so here’s another story. I expect them to be better than humans at science than they are at power seeking, because we had greater selection pressures for power seeking in our ancestral environment than we did for science. And while at a certain point both of them come along as a package, maybe they can be at varying levels, so you have this sort of early model that is kind of human-level, except a little bit ahead of us in science. You ask it to help us align the next version of it, then the next version of it is more aligned because we have its help and sort of like this inductive thing where the next version helps us align the version.", "Eliezer Yudkowsky 2:56:02", "Where do people have this notion of getting AIs to help you do your AI alignment homework? Why can we not talk about having it enhance humans instead?", "Dwarkesh Patel 2:56:11", "Either one of those stories where it just helps us enhance humans and help us figure out the alignment problem or something like that.", "Eliezer Yudkowsky 2:56:20", "Yeah, it’s kind of weird because small, large amounts of intelligence don’t automatically make you a computer programmer. And if you are a computer programmer, you don’t automatically get the security mindset. But it feels like there’s some level of intelligence where you ought to automatically get the security mindset. And I think that’s about how hard you have to augment people to have them able to do alignment. Like the level where they have a security mindset, not because they were like special people with a security mindset, but just because they’re that intelligent that you just automatically have a security mindset. I think that’s about the level where a human could start to work on alignment, more or less.", "Dwarkesh Patel 2:56:56", "Why is that story then not get you to 1% probability that it helps us avoid the whole crisis?", "Eliezer Yudkowsky 2:57:03", "Because it’s not just a question of the technical feasibility of can you build a thing that applies its general intelligence narrowly to the neuroscience of augmenting humans? One, I feel like that is probably over 1% technical feasibility, but the world that we are in is so far from doing that, from trying the way that it could actually work. Like not the the try where  — “Oh, you know. We'd like to do a bunch of RLHF to try to have a thing spit out output about this thing, but not about that thing” and no, not that. 1% that humanity could do that if it tried and tried in just the right direction as far as I can perceive angles in this space. Yeah, I’m over 1% on that. I am not very high on us doing it. Maybe I will be wrong. Maybe the Time article I wrote saying shut it all down gets picked up. And there are very serious conversations. And the very serious conversations are actually effective in shutting down the headlong plunge. And there is a narrow exception carved out for the kind of narrow application of trying to build an artificial general intelligence that applies its intelligence narrowly and to the problem of augmenting humans. And that, I think, might be a harder sell to the world than just shut it all down. They could shut it all down and then not do the things that they would need to do to have an exit strategy. I feel like even if you told me that they went for shut it all down I would expect them to have no exit strategy until the world ended anyways. But perhaps I underestimate them. Maybe there’s a will in humanity to do something else which is not that. And if there really were yeah, I think I’m even over 10% that would be a technically feasible path if they looked in just the right direction. But I am not over 50% on them actually doing the shut it all down. If they do that, I am then not over 50% on (unclear) them really having an exit strategy. Then from there you have to go in at sufficiently the right angle to materialize the technical chances and not do it in the way that just ends up a suicide, or if you’re lucky, gives you the clear warning signs and then people actually pay attention to those instead of just optimizing away the warning signs. And I don’t want to make this sound like the multiple stage fallacy of  — “Oh more than one thing has to happen therefore the resulting thing can never happen.” Which super clear case in point of why you cannot prove anything will not happen this way. Nate Silver arguing that Trump needed to get through six stages to become the Republican presidential candidate each of which was less than half probability and therefore he had less than 1/64th chance of becoming the Republican candidate, not winning. You can’t just break things down into stages and then say therefore. The probability is zero. You can break down anything into stages. But even so, you’re asking me like — Isn’t over 1% that it’s possible? I’m like — yeah, possibly even over 10% . The reason why I tell people — “Yeah, don’t put your hope in the future, you’re probably dead”, is that the existence of this technical array of hope, if you do just the right things, is not the same as expecting that the world reshapes itself to permit that to be done without destroying the world in the meanwhile. I expect things to continue on largely as they have. And what distinguishes that from despair is that at the moment people were telling me, — “No, no. If you go outside the tech industry, people will actually listen.” I’m like — “All right, let’s try that. Let’s write the Time article. Let’s jump on that. It will lack dignity not to try.” but that’s not the same as expecting, as being like — “Oh yeah, I’m over 50%, they’re totally going to do it. That Time article is totally going to take off.” I’m not currently not over 50% on that. You said any one of these things could mean, and yet even if this thing is technically feasible, that doesn’t mean the world’s going to do it. We are presently quite far from the world being on that trajectory or of doing the things that would needed to create time to pay the alignment tax to do it.", "What will AIs want?", "Dwarkesh Patel 3:02:15", "Maybe the one thing I would dispute is how many things need to go right from the world as a whole for any one of these paths to succeed. Which goes into the fourth point, which is that maybe the sort of universal prior over all the drives that an AI could have is just the wrong way to think about it.", "Eliezer Yudkowsky 3:02:35", "I mean you definitely want to use the alien observation of 10,000 planets like this one prior for what you get after training on, like, Thing X.", "Dwarkesh Patel 3:02:45", "It’s just that especially when we’re talking about things that have been trained on human text, I’m not saying that it was a mistake earlier on in the conversation for me to say they’ll be the average of human motivations, but it’s not conceivable to me that it would be something that is very sympathetic to human motivations. Having sort of encapsulated all of our output.", "Eliezer Yudkowsky 3:03:07", "I think it’s much easier to get a mask like that than to get a shoggoth like that.", "Dwarkesh Patel 3:03:14", "Possibly but again, this is something that seems like, I don’t know the probability on it but I would put it at least 10%. And just by default, it is not incompatible with the flourishing of humanity.", "Eliezer Yudkowsky 3:03:29", "What is the utility function you hope it has that has its maximum at the flourishing of humanity?", "Dwarkesh Patel 3:03:35", "There’s so many possible", "Eliezer Yudkowsky 3:03:37", "Name three. Name one. Spell it out.", "Dwarkesh Patel 3:03:39", "I don’t know. It wants to keep us as a zoo the same way we keep other animals in a zoo. This is not the best outcome for humanity, but it’s just like something where we survive and flourish.", "Eliezer Yudkowsky 3:03:49", "Okay. Whoa, whoa, whoa. Flourish? Keeping in a zoo did not sound like flourishing to me.", "Dwarkesh Patel 3:03:55", "Zoo was the wrong word to use there.", "Eliezer Yudkowsky 3:03:57", "Well, because it’s not what you wanted. Why is it not a good prediction?", "Dwarkesh Patel 3:04:01", "You just asked me to name three. You didn’t ask me..", "Eliezer Yudkowsky 3:04:04", "No, no. What I’m saying is you’re like — “Oh, prediction. Oh, no, I don’t like my prediction. I want a different prediction.”", "Dwarkesh Patel 3:04:10", "You didn’t ask for the prediction. You just asked me to name possibilities.", "Eliezer Yudkowsky 3:04:15", "I had meant possibilities in which you put some probability. I had meant for a thing that you thought held together.", "Dwarkesh Patel 3:04:22", "This is the same thing as when I asked you what is a specific utility function it will have that will be incompatible with humans existing. Eliezer Yudkowsky 3:04:32", "The super vast majority of predictions of utility functions are incompatible with humans existing. I can make a mistake and will still be incompatible with humans existing. I can just be like I can just describe a randomly rolled utility function, end up with something incompatible with humans existing.", "Dwarkesh Patel 3:04:49", "At the beginning of human evolution, you could think — Okay, this thing will become generally intelligent, and what are the odds that it’s flourishing on the planet will be compatible with the survival of spruce trees or something?", "Eliezer Yudkowsky 3:05:06", "And the long term, we sure aren’t. I mean, maybe if we win, we’ll have there be a space for spruce trees. So you can have spruce trees as long as the Mitochondrial Liberation Front does not object to that.", "Dwarkesh Patel 3:05:20", "What is the Mitochondrial Liberation Front?", "Eliezer Yudkowsky 3:05:21", "Have you no sympathy for the mitochondria enslaved working all their lives to the benefit of some other organisms?", "Dwarkesh Patel 3:05:30", "This is like some weird hypothetical. For hundreds of thousands of years, general intelligence has existed on Earth. You could say, is it compatible with some random species that exist on Earth? Is it compatible with spruce trees existing? And I know you probably chopped down a few spruce trees.", "Eliezer Yudkowsky 3:05:45", "And the answer is yes, as a very special case of being the sort of things that some of us would maybe conclude that we specifically wanted spruce trees to go on existing, at least on Earth, in the glorious transhuman future. And their votes winning out against those of the mitochondrial Liberation Front.", "Dwarkesh Patel 3:06:07", "Since part of the transhumanist future is part of the thing we’re debating, it seems weird to assume that as part of the question.", "Eliezer Yudkowsky 3:06:15", "The thing I’m trying to say is you’re like — Well, if you looked at the humans, would you not expect them to end up incompatible with the spruce trees? And I’m being like — “Sir, you, a human, have looked back and looked at how humans wanted the universe to be and been like, well, would you not have anticipated in retrospect that humans would want the universe to be otherwise?” And I agree that we might want to conserve a whole bunch of stuff. Maybe we don’t want to conserve the parts of nature where things bite other things and inject venom into them and the victims die in terrible pain. I think that many of them don’t have qualia. This is disputed. Some people might be disturbed by it even if they didn’t have qualia. We might want to be polite to the sort of aliens who would be disturbed by it because they don’t have qualia and they just don’t want venom injected into them for they should not have venom. We might conserve some parts of nature. But again, it’s like firing an arrow and then drawing a circle around the target.", "Dwarkesh Patel 3:07:18", "I would disagree with that because again, this is similar to the example we started off the conversation with. It seems like you are reasoning from what might happen in the future and because we disagree about what might happen in the future. In fact, the entire point of this disagreement is to test what will happen in the future. Assuming what will happen in the future as part of your answer seems like a bad way to answer the question.", "Eliezer Yudkowsky 3:07:45", "Okay but then you’re claiming things as evidence for your position.", "Dwarkesh Patel 3:07:47", "Based on what exists in the world now.", "Eliezer Yudkowsky 3:07:49", "They are not evidence one way or the other because the basic prediction is like, if you offer things enough options, they will go out of distribution. It’s like pointing to the very first people with language and being like, they haven’t taken over the world yet, and they have not gone way out of distribution yet. They haven’t had general intelligence for long enough to accumulate the things that would give them more options such that they could start trying to select the weirder options. The prediction is when you give yourself more options, you start to select ones that look weirder relative to the ancestral distribution. As long as you don’t have the weird options, you’re not going to make the weird choices. And if you say we haven’t yet observed your future, that’s fine, but acknowledge that the evidence against that future is not being provided by the past is the thing I’m saying there. You look around, it looks so normal according to you, who grew up here. If you grew up a millennium earlier, your argument for the persistence of normality might not seem as persuasive to you after you’d seen that much change.", "Dwarkesh Patel 3:09:03", "This is a separate argument, though, right?", "Eliezer Yudkowsky 3:09:07", "Look at all this stuff humans haven’t changed yet. You say, now selecting the stuff, we haven’t changed yet. But if you go back 20,000 years and be like, look at the stuff intelligence hasn’t changed yet. You might very well select a bunch of stuff that was going to fall 20,000 years later is the thing I’m trying to gesture at here.", "Dwarkesh Patel 3:09:27", "How do you propose we reason about what general intelligences would do when the world we look at, after hundreds of thousands of years of general intelligence, is the one that we can’t use for evidence?", "Eliezer Yudkowsky 3:09:39", "Dive under the surface, look at the things that have changed. Why did they change? Look at the processes that are generating those choices.", "Dwarkesh Patel 3:09:52", "And since we have these different functions of where that goes..", "Eliezer Yudkowsky 3:09:58", "Look at the thing with ice cream, look at the thing with condoms, look at the thing with pornography, see where this is going.", "Dwarkesh Patel 3:10:08", "It just seems like I would disagree with your intuitions about what future smarter humans will do, even with more options. In the beginning of conversation, I disagreed that most humans would adopt a transhumanist way to get better DNA or something.", "Eliezer Yudkowsky 3:10:23", "But you would. You just look down at your fellow humans. You have no confidence in their ability to tolerate weirdness, even if they can.", "Dwarkesh Patel 3:10:33", "What do you think would happen if we did a poll right now?", "Eliezer Yudkowsky 3:10:36", "I think I’d have to explain that poll pretty carefully because they haven’t got the intelligence headbands yet. Right?", "Dwarkesh Patel 3:10:42", "I mean, we could do a Twitter poll with a long explanation in it.", "Eliezer Yudkowsky 3:10:45", "4000 character Twitter poll?", "Dwarkesh Patel 3:10:50", "Yeah.", "Eliezer Yudkowsky 3:10:51", "Man, I am somewhat tempted to do that just for the sheer chaos and point out the drastic selection effects of: A) It’s my Twitter followers B) They read through a 4000 character tweet. I feel like this is not likely to be truly very informative by my standards, but part of me is amused by the prospect for the chaos.", "Dwarkesh Patel 3:11:06", "Yeah. Or I could do it on my end as well. Although my followers are likely to be weird as well.", "Eliezer Yudkowsky 3:11:11", "Yeah plus I worry you wouldn’t be able to sell that transhumanism thing as well as it could get sold.", "Dwarkesh Patel 3:11:17", "You could just send me the wording. But anyways, given that we disagree about what in the future general intelligence will do, where do you suppose we should look for evidence about what the general intelligence will do given our different theories about it, if not from the present?", "Eliezer Yudkowsky 3:11:36", "I think you look at the mechanics. You say as people have gotten more options, they have gone further outside the ancestral distribution. And we zoom in and there’s all these different things that people want and there’s this narrow range of options that they had 50,000 years ago and the things that they want have maxima or optima 50,000 years ago at stuff that coincides with reproductive fitness. And then as a result of the humans getting smarter, they start to accumulate culture, which produces changes on a timescale faster than natural selection runs, although it is still running contemporaneously. Humans are just running faster than natural selection, it didn’t actually halt. And they generate additional options, not blindly, but according to the things that they want. And they invent ice-cream. It doesn’t just get coughed up at random, they are searching the space of things that they want and generating new options for themselves that optimize these things more that weren’t in the ancestral environment. And Goodhart’s law applies, Goodhart’s curse applies. As you apply optimization pressure, the correlations that were found naturally come apart and aren’t present in the thing that gets optimized for. Just give some people some tests who’ve never gone to school. The ones who score high in the carpentry test will know how to carpenter things. Then you’re like — I’ll pay you for high scores in the carpentry test, I’ll give you this carpentry degree. And people are like — “Oh, I’m going to optimize the test specifically.” and they’ll get higher scores than the carpenters and be worse at carpentry because they’re optimizing the test. And that’s the story behind ice cream. You zoom in and look at the mechanics and not the grand scale view, because the grand scale view just never gives you the right answer. Anytime you asked what would happen if you applied the grand scale view philosophy in the past, it’s always just like — “I don’t see why this thing would change. Oh, it changed. How weird. Who could have possibly have expected that.”", "Dwarkesh Patel 3:13:57", "Maybe you have a different definition of grand scale view? Because I would have thought that that is what you might use to categorize your own view. But I don’t want to get it caught up in semantics.", "Eliezer Yudkowsky 3:14:05", "My mind is zooming in, it’s looking at the mechanics. That’s how I’d present it.", "Dwarkesh Patel 3:14:09", "If we are so far out a distribution of natural selection, as you say..", "Eliezer Yudkowsky 3:14:14", "We’re currently nowhere near as far as we could be. This is not the glorious transhumanist future.", "Dwarkesh Patel 3:14:20", "I claim that even if humans get much smarter through brain augmentation or something, then there will still be spruce trees millions of years in the future.", "Eliezer Yudkowsky 3:14:36", "If you still want to, come the day, I don’t think I myself would oppose it. Unless there’d be like distant aliens who are very, very sad about what we were doing to the mitochondria. And then I don’t want to ruin their day for no good reason.", "Dwarkesh Patel 3:14:48", "But the reason that it’s important to state it in the former — given human psychology, spruce trees will still exist, is because that is the one evidence of generality arising we have. And even after millions of years of that generality, we think that spruce trees would exist. I feel like we would be in this position of spruce trees in comparison to the intelligence we create and sort of the universal prior on whether spruce trees would exist doesn’t make sense to me.", "Eliezer Yudkowsky 3:15:09", "But do you see how this perhaps leads to everybody’s severed heads being kept alive in jars on its own premises, as opposed to humans getting the glorious transhumanist future? No, they have the glorious transhumanist future. Those are not real spruce trees. You’re talking plain old spruce trees you want to exist, right? Not the sparkling giant spruce trees with built in rockets. You’re talking about humans being kept as pets in their ancestral state forever, maybe being quite sad. Maybe they still get cancer and die of old age, and they never get anything better than that. Does it keep us around as we are right now? Do we relive the same day over and over again? Maybe this is the day when that happens. Do you see the general trend I’m trying to point out here? It is that you have a rationalization for why they might do a thing that is allegedly nice. And I’m saying— why exactly are they wanting to do the thing? Well, if they want to do the thing for this reason, maybe there’s a way to do this thing that isn’t as nice as you’re imagining? And this is systematic. You’re imagining reasons they might have to give you nice things that you want, but they are not you. Not unless we get this exactly right and they actually care about the part where you want some things and not others. You are not describing something you are doing for the sake of the spruce trees. Do spruce trees have diseases in this world of yours? Do the diseases get to live? Do they get to live on spruce trees? And it’s not a coincidence that I can zoom in and poke at this and ask questions like this and that you did not ask these questions of yourself. You are imagining nice ways you can get the thing. But reality is not necessarily imagining how to give you what you want. And the AI is not necessarily imagining how to give you what you want and for everything. You can be like — “Oh. Hopeful thought. Maybe I get all this stuff I want because the AI reasons like this.” Because it’s the optimism inside you that is generating this answer. And if the optimism is not in the AI, if the AI is not specifically being like — “Well, how do I pick a reason to do things that will give this person a nice outcome?” You’re not going to get the nice outcome. You’re going to be reliving the last day of your life over and over. It’s going to, like, create old or maybe it creates old fashioned humans, ones from 50,000 years ago. Maybe that’s more quaint. Maybe it's just as happy with bacteria because there’s more of them and that’s equally old fashioned. You’re going to create the specific spruce tree over there. Maybe from its perspective, a generic bacterium is just as good a form of life as a generic spruce tree is. This is not specific to the example that you gave. It’s me being like — “Well, suppose we took a criterion that sounds kind of like this and asked, how do we actually maximize it? What else satisfies it?” You’re trying to argue the AI into doing what you think is a good idea by giving the AI reasons why it should want to do the thing under some set of hypothetical motives. But anything like that if you optimize it on its own terms without narrowing down to where you want it to end up because it actually felt nice to you the way that you define niceness. It’s all going to have somewhere else, somewhere that isn’t as nice. Something maybe where we’d sooner scour the surface of the planet with nuclear fire rather than let that AI come into existence. Though I do think those are also probable because you know, instead of hurting you, there’s something more efficient for it to do that maxes out its utility function.", "Dwarkesh Patel 3:19:09", "Okay, I acknowledge that you had a better argument there, but here’s another intuition. I’m curious how you respond to that. Earlier, we talked about the idea that if you bred humans to be friendlier and smarter. Eliezer Yudkowsky 3:19:29", "I think I want to register for the record that the term breeding humans would cause me to look askance at any aliens who would propose that as a policy action on their part. All right, there I said it, move on.", "Dwarkesh Patel 3:19:44", "No, no. That’s not what I’m proposing we do. I’m just saying it as a sort of thought experiment. You answered that we shouldn’t assume that AIs are going to start with human psychology. Okay, fair enough. Assume we start off with dogs, good old fashioned dogs. And we bred them to be more intelligent, but also to be friendly.", "Eliezer Yudkowsky 3:20:06", "Well, as soon as they are past a certain level of intelligence, I object to us coming in and breeding them. They can no longer be owned. They are now sufficiently intelligent to not be owned anymore. But let us leave aside all morals. Carry on. In the thought experiment, not in real life, you can’t leave out the morals in real life.", "Dwarkesh Patel 3:20:22", "Do you have some sort of universal prior of the drives of these super intelligent dogs that are bred to be friendly?", "Eliezer Yudkowsky 3:20:29", "I think that weird shit starts to happen at the point where the dogs get smart enough that they are like, what are these flaws in our thinking processes? Over the CFAR threshold of dogs. Although CFAR has some strange baggage. Over the Korzybski threshold of dogs after Alfred Korzybski . I think that there’s this whole domain where they’re stupider than you and sort of like being shaped by their genes and not shaping themselves very much. And as long as that is true, you can probably go on breeding them. Issues start to arise when the dogs are smarter than you, when the dogs can manipulate you, if they get to that point, where the dogs can strategically present particular appearances to fool you, where the dogs are aware of the breeding process and possibly having opinions about where that should go in the long run, where the dogs are even if just by thinking and by adopting new rules of thought, modifying themselves in that small way. These are some of the points where I expect the weird shit to start to happen and the weird shit will not necessarily show up while you’re just breeding the dogs.", "Dwarkesh Patel 3:21:47", "Does the weird shit look like — dog gets smart enough….humans stop existing?", "Eliezer Yudkowsky 3:21:53", "If you keep on optimizing the dogs, which is not the correct course of action, I think I mostly expect this to eventually blow up on you.", "Dwarkesh Patel 3:22:06", "But blow up on you that bad?", "Eliezer Yudkowsky 3:22:08", "I expect to blow up on you quite bad. I’m trying to think about whether I expect super dogs to be sufficiently in a human frame of reference in virtue of them also being mammals. That a super dog would create human ice-cream. You bred them to have preferences about humans and they invent something that is like ice cream to those preferences. Or does it just go off someplace stranger?", "Dwarkesh Patel 3:22:39", "There could be AI ice cream. Thing that is the equivalent of ice cream for AIs.", "Eliezer Yudkowsky 3:22:47", "That is essentially my prediction of what the solar system ends up filled with. The exact ice cream is quite hard to predict. If you optimize something for inclusive genetic fitness, you’ll get ice cream. That is a very hard call to make.", "Dwarkesh Patel 3:23:02", "Sorry, I didn’t mean to interrupt. Where were you going with your….", "Eliezer Yudkowsky 3:23:06", "I was just rambling in my attempts to make predictions about these super dogs. In a world that had its priorities straight even remotely, this stuff is not me extemporizing on a blog post, there are 1000 papers that were written by people who otherwise became philosophers writing about this stuff instead. But your world has not set its priorities that way and I’m concerned that it will not set them that way in the future and I’m concerned that if it tries to set them that way, it will end up with garbage because the good stuff was hard to verify. But, separate topic.", "Dwarkesh Patel 3:23:44", "I understand your intuition that we would end up in a place that is not very good for humans. That just seems so hard to reason about that I honestly would not be surprised if it ended up fine for humans. In fact, the dogs wanted good things for humans, loved humans. We’re smarter than dogs, we love them. The sort of reciprocal relationship came about.", "Eliezer Yudkowsky 3:24:12", "I feel like maybe I could do this given thousands of years to breed the dogs in a total absence of ethics. But it would actually be easier with the dogs than with gradient descent because the dogs are starting out with neural architecture very similar to human and natural selection is just like a different idiom from gradient descent. In particular, in terms of information bandwidth. I’d be steering to breed the dogs into genuinely very nice human and knowing the stuff that I know that your your typical dog breeder might not know when they embarked on this project. I would, very early on, start prompting them into the weird stuff that I expected to get started later and trying to observe how they went during that.", "Dwarkesh Patel 3:25:00", "This is the alignment strategy we need ultra smart dogs to help us solve.", "Eliezer Yudkowsky 3:25:04", "There’s no time.", "Dwarkesh Patel 3:25:06", "Okay, I think we sort of articulated our intuitions on that one. Here’s another one that’s not something I came into the conversation with.", "Eliezer Yudkowsky 3:25:17", "Some of my intuition here is like I know how I would do this with dogs and I think you could ask OpenAI to describe their theory of how to do it with dogs. And I would be like — “Oh wow, that sure is going to get you killed.” And that’s kind of how I expect it to play out in practice, actually.", "Dwarkesh Patel 3:25:34", "When you talk to the people who are in charge of these labs, what do they say? Do they just like not grok the arguments?", "Eliezer Yudkowsky 3:25:40", "You think they talk to me?", "Dwarkesh Patel 3:25:42", "There was a certain selfie that was taken.", "Eliezer Yudkowsky 3:25:44", "Taken by 5 minutes of conversation. First time any of the people in that selfie had met each other.", "Dwarkesh Patel 3:25:49", "And then did you bring it up?", "Eliezer Yudkowsky 3:25:51", "I asked him to change the name of his corporation to anything but OpenAI.", "Dwarkesh Patel 3:25:57", "Have you seeked an audience with the leaders of these labs to explain these arguments?", "Eliezer Yudkowsky 3:26:04", "No.", "Dwarkesh Patel 3:26:06", "Why not?", "Eliezer Yudkowsky 3:26:10", "I’ve had a couple of conversations with Demis Hassabis who struck me as much more the sort of person who is possible to have a conversation with.", "Dwarkesh Patel 3:26:19", "I guess it seems like it would be more dignity to explain, even if you think it’s not going to be fruitful ultimately, to the people who are most likely to be influential in this race.", "Eliezer Yudkowsky 3:26:30", "My basic model was that they wouldn’t like me and that things could always be worse.", "Dwarkesh Patel 3:26:35", "Fair enough.", "Eliezer Yudkowsky 3:26:40", "They sure could have asked at any time but that would have been quite out of character. And the fact that it was quite out of character is like why I myself did not go trying to barge into their lives and getting them mad at me.", "Dwarkesh Patel 3:26:53", "But you think them getting mad at you would make things worse.", "Eliezer Yudkowsky 3:26:57", "It can always be worse. I agree that possibly at this point some of them are mad at me, but I have yet to turn down the leader of any major AI lab who has come to me asking for advice.", "Dwarkesh Patel 3:27:12", "Fair enough. On the theme of big picture disagreements, why I’m still not on the greater than 50% doom, from the conversation it didn’t seem like you were willing or able to make predictions about the world short of doom that would help me distinguish and highlight your view about other views.", "Eliezer Yudkowsky 3:27:40", "Yeah, I mean the world heading into this is like a whole giant mess of complicated stuff predictions about which can be made in virtue of spending a whole bunch of time staring at the complicated stuff until you understand that specific complicated stuff and making predictions about it. From my perspective, the way you get to my point of view is not by having a grand theory that reveals how things will actually go. It’s like taking other people’s overly narrow theories and poking at them until they come apart and you’re left with a maximum entropy distribution over the right space which looks like — “Yep, that sure is going to randomize the solar system.”", "Dwarkesh Patel 3:28:18", "But to me it seems like the nature of intelligence and what it entails is even more complicated than the sort of geopolitical or economic things that would be required to predict what the world’s going to look like.", "Eliezer Yudkowsky 3:28:29", "I think you’re just wrong. I think the theory of intelligence is just flatly not that complicated. Maybe that’s just the voice of a person with talent in one area but not the other. But that sure is how it feels to me.", "Dwarkesh Patel 3:28:42", "This would be even more convincing to me if we had some idea of what the pseudocode or circuit for intelligence would look like. And then you could say like — “Oh, this is what the pseudocode implies, we don’t even have that.”", "Eliezer Yudkowsky 3:28:54", "If you permit a hypercomputer just as AIXI.", "Dwarkesh Patel 3:28:58", "What is AIXI?", "Eliezer Yudkowsky 3:29:01", "You have the Solomonoff prior over your environment, update it on the evidence and then max sensory reward. It’s not actually trivial and this thing will exhibit weird discontinuities around its cartesian boundary with the universe. But everything that people imagine as the hard problems of intelligence are contained in the equation if you have a hybrid computer.", "Dwarkesh Patel 3:29:31", "Fair enough, but I mean in the sort of sense of programming it into a normal. Like I give you a really big computer to write the pseudocode or something.", "Eliezer Yudkowsky 3:29:42", "I mean, if you give me a hypercomputer, yeah. What you’re saying here is that the theory of intelligence is really simple in an unbounded sense, but what about this depends on the difference between unbounded and bounded intelligence?", "Dwarkesh Patel 3:29:55", "So how about this? You ask me, do you understand how fusion works? If not, let’s say we’re talking in the 1800s, how can you predict how powerful a fusion bomb would be? And I say — “Well, listen. If you put in a pressure, I’ll just show you the sun” and the sun is sort of the archetypal example of a fusion is and you say — “No, I’m asking what would a fusion bomb look like?” You see what I mean?", "Eliezer Yudkowsky 3:30:19", "Not necessarily. What is it that you think somebody ought to be able to predict about the road ahead?", "Dwarkesh Patel 3:30:28", "One of the things, if you know the nature of intelligence is just, how will this sort of progress in intelligence look like? How our ability is going to scale, if at all?", "Eliezer Yudkowsky 3:30:42", "And it looks like a bunch of details that don’t easily follow from the general theory of simplicity, prior Bayesian update argmax", "Dwarkesh Patel 3:30:52", "Again, then the only thing that follows is the wildest conclusion. There’s no simpler conclusions to follow like the Eddington looking and confirming special relativity. It’s just like the wildest possible conclusion is the one that follows.", "Eliezer Yudkowsky 3:31:10", "Yeah, the convergence is a whole lot easier to predict than the pathway there. I’m sorry and I sure wish it was otherwise. And also remember the basic paradigm. From my perspective, I’m not making any brilliant startling predictions, I’m poking at other people’s incorrectly narrow theories until they fall apart into the maximum entropy state of doom.", "Dwarkesh Patel 3:31:34", "There’s like thousands of possible theories, most of which have not come about yet. I don’t see it as strong evidence that because you haven’t been able to identify a good one yet, that.", "Eliezer Yudkowsky 3:31:47", "In the profoundly unlikely event that somebody came up with some incredibly clever grand theory that explained all the properties GPT-5 ought to have, which is just flatly not going to happen, that kind of info is not available. My hat would be off to them if they wrote down their predictions in advance, and if they were then able to grind that theory to produce predictions about alignment, which seems even more improbable because what do those two things have to do with each other exactly? But still, mostly I’d be like — “Well, it looks like our generation has its new genius. How about if we all shut up for a while and listen to what they have to say?”", "Dwarkesh Patel 3:32:24", "How about this? Let’s say somebody comes to you and they say, I have the best-in-US theory of economics. Everything before is wrong.", "Eliezer Yudkowsky 3:32:38", "One does not say everything before is wrong. One predicts the following new phenomena and on rare occasions say that old phenomena were organized incorrectly.", "Dwarkesh Patel 3:32:46", "Fair enough. So they say old phenomena are organized incorrectly.", "Eliezer Yudkowsky 3:32:53", "Let's call this person Scott Sumner , for the sake of simplicity.", "Dwarkesh Patel 3:32:57", "They say, in the next ten years, there’s going to be a depression that is so bad that is going to destroy the entire economic system. I’m not talking just about something that is a hurdle. Literally, civilization will collapse because of economic disaster. And then you ask them — “Okay, give me some predictions before this great catastrophe happens about what this theory implies.” And then they say — “Listen, there’s many different branching Patel, but they all converge at civilization collapsing because of some great economic crisis.” I’m like —I don’t know, man. I would like to see some predictions before that.", "Eliezer Yudkowsky 3:33:33", "Yeah. Wouldn’t it be nice? So we’re left with your 50% probability that we win the lottery and 50% probability that we don’t because nobody has a theory of lottery tickets that has been able to predict what numbers get drawn next.", "Dwarkesh Patel 3:33:51", "I don’t agree with that analogy.", "Eliezer Yudkowsky 3:33:56", "It is all about the space over which you’re uncertain. We are all quite uncertain about where the future leads, but over which space? And there isn’t a royal road. There isn’t a simple — “Ahh. I found just the right thing to be ignorant about. It’s so easy. The chance of a good outcome is 33% because they’re like one possible good outcome and two possible bad outcomes.” The thing you’re trying to fall back to in the absence of anything that predicts exactly which properties GPT-5 will have is your sense that a pretty bad outcome is kind of weird, right? It’s probably a small sliver of the space but that’s just like imposing your natural English language prior, your natural humanese prior, on the space of possibilities and being like, I’ll distribute my max entropy stuff over. That gay.", "Dwarkesh Patel 3:34:52", "Can you explain that again?", "Eliezer Yudkowsky 3:34:55", "Okay. What is the person doing wrong who says 50-50 either I’ll win the lottery or I won’t?", "Dwarkesh Patel 3:35:00", "They have the wrong distribution to begin with over possible outcomes.", "Eliezer Yudkowsky 3:35:06", "Okay. What is the person doing wrong who says 50-50 either we’ll get a good outcome or a bad outcome from AI?", "Dwarkesh Patel 3:35:14", "They don’t have a good theory to begin with about what the space of outcomes looks like.", "Eliezer Yudkowsky 3:35:19", "Is that your answer? Is that your model of my answer?", "Dwarkesh Patel 3:35:22", "My answer.", "Eliezer Yudkowsky 3:35:25", "But all the things you could say about a space of outcomes are an elaborate theory, and you haven’t predicted GPT-4’s exact properties in advance. Shouldn’t that just leave us with just good outcome or bad outcome, 50-50 ?", "Dwarkesh Patel 3:35:35", "People did have theories about what GPT-4. If you look at the scaling laws right, it probably falls right on the sort of curves that were drawin in 2020 or something.", "Eliezer Yudkowsky 3:35:50", "The loss on text predictions, sure, that followed a curve, but which abilities would that correspond to? I’m not familiar with anyone who called that in advance. What good does it know to the loss? You could have taken those exact loss numbers back in time ten years and been like, what kind of commercial utility does this correspond to? And they would have given you utterly blank looks. And I don’t actually know of anybody who has a theory that gives something other than a blank look for that. All we have are the observations. Everyone’s in that boat, all we can do are fit the observations. Also, there’s just me starting to work on this problem in 2001 because it was super predictable, going to turn into an emergency later and point of fact, nobody else ran out and immediately tried to start getting work done on the problems. And I would claim that as a successful prediction of the grand lofty theory.", "Dwarkesh Patel 3:36:41", "Did you see deep learning coming as the main paradigm?", "Eliezer Yudkowsky 3:36:44", "No.", "Dwarkesh Patel 3:36:46", "And is that relevant as part of the picture of intelligence?", "Eliezer Yudkowsky 3:36:50", "I would have been much more worried in 2001 if I’d seen deep learning coming.", "Dwarkesh Patel 3:36:57", "No, not in 2001, I just mean before it became like obviously the main paradigm of AI.", "Eliezer Yudkowsky 3:37:03", "No, it’s like the details of biology. It’s like asking people to predict what the organs look like in advance via the principle of natural selection and it’s pretty hard to call in advance. Afterwards, you can look at it and be like — “Yep, this sure does look like the thing it should look if this thing is being optimized to reproduce.” But the space of things that biology can throw at you is just too large. It’s very rare that you have a case where there’s only one solution that lets the thing reproduce that you can predict by the theory that it will have successfully reproduced in the past. And mostly it’s just this enormous list of details and they do all fit together in retrospect. It is a sad truth. Contrary to what you may have learned in science class as a kid, there are genuinely super important theories where you can totally actually validly see that they explain the thing in retrospect and yet you can’t do the thing in advance. Not always, not everywhere, not for natural selection. There are advanced predictions you can get about that given the amount of stuff we’ve already seen. You can go to a new animal in a new niche and be like — “Oh, it’s going to have these properties given the stuff we’ve already seen in the niche.” There’s advanced predictions that they’re a lot harder to come by. Which is why natural selection was a controversial theory in the first place. It wasn’t like gravity. Gravity had all these awesome predictions. Newton’s theory of gravity had all these awesome predictions. We got all these extra planets that people didn’t realize ought to be there. We figured out Neptune was there before we found it by telescope. Where is this for Darwinian selection? People actually did ask at the time, and the answer is, it’s harder. And sometimes it’s like that in science.", "Dwarkesh Patel 3:38:54", "The difference is the theory of Darwinian selection seems much more well developed. There was a Roman poet called Lucretius who had a poem where there was a precursor of Darwinian selection. And I feel like that is probably our level of maturity when it comes to intelligence. Whereas we don’t have a theory of intelligence, we might have some hints about what it might look like.", "Eliezer Yudkowsky 3:39:29", "Always got our hints.", "Dwarkesh Patel 3:39:32", "It seems harder to extrapolate very strong conclusions from hints.", "Eliezer Yudkowsky 3:39:35", "They’re not very strong conclusions is the message I’m trying to say here. I’m pointing to your being like, maybe we might survive, and you’re like — “Whoa, that’s a pretty strong conclusion you’ve got there. Let’s weaken it.” That’s the basic paradigm I’m operating under here. You’re in a space that’s narrower than you realize when you’re like — “Well, if I’m kind of unsure, maybe there’s some hope.”", "Dwarkesh Patel 3:39:58", "Yeah, I think that’s a good place to close the discussion on AIs.", "Eliezer Yudkowsky 3:40:03", "I do kind of want to mention one last thing. In historical terms, if you look out the actual battle that was being fought on the block, it was me going like — “I expect there to be AI systems that do a whole bunch of different stuff.” And Robin Hanson being like — “I expect there to be a whole bunch of different AI systems that do a whole different bunch of stuff.”", "Dwarkesh Patel 3:40:27", "But that was one particular debate with one particular person.", "Eliezer Yudkowsky 3:40:30", "Yeah, but your planet, having made the strange reason, given its own widespread theories, to not invest massive resources in having a much larger version of this conversation, as it apparently deemed prudent, given the implicit model that it had of the world, such that I was investing a bunch of resources in this and kind of dragging Robin Hanson along with me. Though he did have his own separate line of investigation into topics like these. Being there as I was, my model having led me to this important place where the rest of the world apparently thought it was fine to let it go hang, such debate was actually what we had at the time. Are we really going to see these single AI systems that do all this different stuff? Is this whole general intelligence notion meaningful at all? And I staked out the bold position for it. It actually was bold. And people did not all say —”Oh, Robin Hansen, you fool, why do you have this exotic position?” They were going like — “Behold these two luminaries debating, or behold these two idiots debating” and not massively coming down on one side of it or other. So in historical terms, I dislike making it out like I was right about anything when I feel I’ve been wrong about so much and yet I was right about anything. And relative to what the rest of the planet deemed it important stuff to spend its time on, given their implicit model of how it’s going to play out, what you can do with minds, where AI goes. I think I did okay. Gwern Branwen did better. Shane Legg arguably did better.", "Dwarkesh Patel 3:42:20", "Gwern always does better when it comes to forecasting. Obviously, if you get the better of a debate that counts for something, but a debate with one particular person.", "Eliezer Yudkowsky 3:42:32", "Considering your entire planet’s decision to invest like $10 into this entire field of study, apparently one big debate is all you get. And that’s the evidence you got to update on.", "Dwarkesh Patel 3:42:43", "Somebody like Ilya Sutskever, when it comes to the actual paradigm of deep learning, was able to anticipate ImageNet scaling up LLMs or whatever. There’s people with track records here who are like, who disagree about doom or something.", "Eliezer Yudkowsky 3:43:06", "If Ilya challenged me to a debate, I wouldn’t turn him down. I admit that I did specialize in doom rather than LLMs.", "Dwarkesh Patel 3:43:14", "Okay, fair enough. Unless you have other sorts of comments on AI I’m happy with moving on.", "Eliezer Yudkowsky 3:43:21", "Yeah. And again, not being like, due to my miraculously precise and detailed theory, I am able to make the surprising and narrow prediction of doom. I think I did a fairly good job of shaping my ignorance to lead me to not be too stupid despite my ignorance over time as it played out. And there’s a prediction, even knowing that little, that can be made.", "Writing fiction & whether rationality helps you win", "Dwarkesh Patel 3:43:54", "Okay, so this feels like a good place to pause the AI conversation, and there’s many other things to ask you about given your decades of writing and millions of words. I think what some people might not know is the millions and millions and millions of words of science fiction and fan fiction that you’ve written. I want to understand when, in your view, is it better to explain something through fiction than nonfiction?", "Eliezer Yudkowsky 3:44:17", "When you’re trying to convey experience rather than knowledge, or when it’s just much easier to write fiction and you can produce 100,000 words of fiction with the same effort it would take you to produce 10,000 words of nonfiction? Those are both pretty good reasons.", "Dwarkesh Patel 3:44:30", "On the second point, it seems like when you’re writing this fiction, not only are you covering the same heady topics that you include in your nonfiction, but there’s also the added complication of plot and characters. It’s surprising to me that that’s easier than just verbalizing the sort of the topics themselves.", "Eliezer Yudkowsky 3:44:51", "Well, partially because it’s more fun. That is an actual factor, ain’t going to lie. And sometimes it’s something like, a bunch of what you get in the fiction is just the lecture that the character would deliver in that situation, the thoughts the character would have in that situation. There’s only one piece of fiction of mine where there’s literally a character giving lectures because he arrived on another planet and now has to lecture about science to them. That one is Project lawful . You know about Project Lawful?", "Dwarkesh Patel 3:45:28", "I know about it. I have not read it yet.", "Eliezer Yudkowsky 3:45:30", "Most of my fiction is not about somebody arriving on another planet who has to deliver lectures. There I was being a bit deliberately like, — “Yeah, I’m going to just do it with Project Lawful. I’m going to just do it. They say nobody should ever do it, and I don’t care. I’m doing it ever ways. I’m going to have my character actually launch into the lectures.” The lectures aren’t really the parts I’m proud about. It’s like where you have the life or death, deathnote style battle of wits that is centering around a series of Bayesian updates and making that actually work because it’s where I’m like — “Yeah, I think I actually pulled that off. And I’m not sure a single other writer on the face of this planet could have made that work as a plot device.” But that said, the nonfiction is like, I’m explaining this thing, I’m explaining the prerequisite, I’m explaining the prerequisites to the prerequisites. And then in fiction, it’s more just, well, this character happens to think of this thing and the character happens to think of that thing, but you got to actually see the character using it. So it’s less organized. It’s less organized as knowledge. And that’s why it’s easier to write.", "Dwarkesh Patel 3:46:46", "Yeah. One of my favorite pieces of fiction that explains something is the Dark Lord’s Answer . And I honestly can’t say anything about it without spoiling it. But I just want to say it was such a great explanation of the thing it is explaining. I don’t know what else I can say about it without spoiling it.", "Eliezer Yudkowsky 3:47:07", "I’m laughing because relatively few have Dark Lord’s Answer among their top favorite works of mine. It is one of my less widely favored works, actually.", "Dwarkesh Patel 3:47:22", "By the way, I don’t think this is a medium that is used enough given how effective it was in an inadequate equilibria . You have different characters just explaining concepts to the other, some of whom are purposefully wrong as examples. And that is such a useful pedagogical tool. Honestly, at least half a blog post should just be written that way. It is so much easier to understand that way.", "Eliezer Yudkowsky 3:47:46", "Yeah. And it’s easier to write. And I should probably do it more often. And you should give me a stern look and be like — “Eliezer, write that more often.”", "Dwarkesh Patel 3:47:54", "Done. Eliezer, please. I think 13 or 14 years ago you wrote an essay called Rationality is Systematized Winning . Would you have expected then that 14 years down the line, the most successful people in the world or some of the most successful people in the world would have been rationalist?", "Eliezer Yudkowsky 3:48:17", "Only if the whole rationalist business had worked closer to the upper 10% of my expectations than it actually got into. The title of the essay was not “Rationalists are Systematized Winning”. There wasn’t even a rationality community back then. Rationality is not a creed. It is not a banner. It is not a way of life. It is not a personal choice. It is not a social group. It’s not really human. It’s a structure of a cognitive process. And you can try to get a little bit more of it into you. And if you want to do that and you fail, then having wanted to do it doesn’t make any difference except insofar as you succeeded. Hanging out with other people who share that creed, going to their parties. It only ever matters insofar as you get a bit more of that structure into you. And this is apparently hard.", "Dwarkesh Patel 3:49:29", "This seems like a No True Scotsman kind of point.", "Eliezer Yudkowsky 3:49:35", "Yes, there are No True Bayesians upon this planet.", "Dwarkesh Patel 3:49:38", "But do you really think that had people tried much harder to adopt the sort of Bayesian principles that you laid out, some of the successful people in the world would have been rationalists?", "Eliezer Yudkowsky 3:49:55", "What good does trying do you except insofar as you are trying at something which when you try it, it succeeds?", "Dwarkesh Patel 3:50:04", "Is that an answer to the question.", "Eliezer Yudkowsky 3:50:07", "Rationality is systematized winning. It’s not Rationality, the life philosophy. It’s not like trying real hard at  this thing, this thing and that thing. It was in the mathematical sense.", "Dwarkesh Patel 3:50:18", "Okay, so then the question becomes, does adopting the philosophy of Bayesianism consciously, actually lead to you having more concrete wins?", "Eliezer Yudkowsky 3:50:31", "I think it did for me. Though only in, like, scattered bits and pieces of slightly greater sanity than I would have had without explicitly recognizing and aspiring to that principle. The principle of not updating in a predictable direction. The principle of jumping ahead to where you can predictably be where you will predictably be later. The story of my life as I would tell it is a story of my jumping ahead to what people would predictably believe later after reality finally hit them over the head with it. This, to me, is the entire story of the people running around now in a state of frantic emergency over something that was utterly predictably going to be an emergency later as of 20 years ago. And you could have been trying stuff earlier, but you left it to me and a handful of other people. And it turns out that that was not a very wise decision on humanity’s part because we didn’t actually solve it all. And I don’t think that I could have tried even harder or contemplated probability theory even harder and done very much better than that. I contemplated probability theory about as hard as the mileage I could visibly, obviously get from it. I’m sure there’s more. There’s obviously more, but I don’t know if it would have let me save the world.", "Dwarkesh Patel 3:51:52", "I guess my question is, is contemplating probability theory at all in the first place something that tends to lead to more victory? I mean, who is the richest person in the world? How often does Elon Musk think in terms of probabilities when he’s deciding what to do? And here is somebody who is very successful. So I guess the bigger question is, in some sense, when you say — Rationality is systematized winning, it’s like a tautology. If the definition of rationality is whatever helps you in. If it’s the specific principles laid out in the sequences, then the question is, like, do the most successful people in the world practice them?", "Eliezer Yudkowsky 3:52:29", "I think you are trying to read something into this that is not meant to be there. The notion of “rationality is systematized winning” is meant to stand in contrast to a long philosophical tradition of notions of rationality that are not meant to be, about the mathematical structure not meant to be or about strangely wrong mathematical structures where you can clearly see how these mathematical productions structures will make predictable mistakes. It was meant to be saying something simple. There’s an episode of Star Trek wherein Kirk makes a 3D chess move against Spock and Spock loses, and Spock complains that Kirk’s move was irrational.", "Dwarkesh Patel 3:53:19", "Rational towards the goal.", "Eliezer Yudkowsky 3:53:20", "The literal winning move is irrational or possibly illogical, Spock might have said, I might be misremembering this. The thing I was saying is not merely — “That’s wrong, that’s like a fundamental misunderstanding of what rationality is.” There is more depth to it than that, but that is where it starts. There are so many people on the Internet in those days, possibly still, who are like — “Well, if you’re rational, you’re going to lose, because other people aren’t always rational.” And this is not just like a wild misunderstanding, but the contemporarily accepted decision theory in academia as we speak at this very moment. Causal decision theory basically has this property where you can be irrational and the rational person you’re playing against is just like — “Oh, I guess I lose then. Have most of the money. I have no choice but to” and ultimatum games specifically. If you look up logical decision theory on Arbital, you’ll find a different analysis of the ultimatum game, where the rational players do not predictably lose the same way as I would define rationality. And if you take this deep mathematical thesis that also runs through all the little moments of everyday life, when you may be tempted to think like — “Well, if I do the reasonable thing, won’t I lose?” That you’re making the same mistake as the Star Trek script writer who had Spock complain that Kirk had won the chess game irrationally, that every time you’re tempted to think like — “Well, here’s the reasonable answer and here’s the correct answer,” you have made a mistake about what is reasonable. And if you then try to screw that around as rationalists should win. Rationalists should have all the social status. Whoever’s the top dog in the present social hierarchy or the planetary wealth distribution must have the most of this math inside them. There are no other factors but how much of a fan you are of this math that’s trying to take the deep structure that can run all through your life in every moment where you’re like — “Oh, wait. Maybe the move that would have gotten the better result was actually the kind of move I should repeat more in the future.” Like to take that thing and turn it into — Social dick measuring contest time, rationalists don’t have the biggest dicks.", "Dwarkesh Patel 3:56:19", "Okay, final question. I don’t know how many hours this has been. I really appreciate you giving me your time. I know that in a previous episode, you were not able to give specific advice of what somebody young who is motivated to work on these problems should do. Do you have advice about how one would even approach coming up with an answer to that themselves?", "Eliezer Yudkowsky 3:56:41", "There’s people running programs who think we have more time, who think we have better chances, and they’re running programs to try to nudge people into doing useful work in this area. And I’m not sure they’re working. And there’s such a strange road to walk and not a short one. And I tried to help people along the way, and I don’t think they got far enough. Some of them got some distance, but they didn’t turn into alignment specialists doing great work. And it’s the problem of the broken verifier. If somebody had a bunch of talent in physics, they were like — Well, I want to work in this field. I might be like — Well, there’s interpretability, and you can tell whether you’ve made a discovery in interpretability or not. Sets it apart for a bunch of this other stuff, and I don’t think that saves us. So how do you do the kind of work that saves us? The key thing is the ability to tell the difference between good and bad work. And maybe I will write some more blog posts on it. I don’t really expect the blog posts to work. The critical thing is the verifier. How can you tell whether you’re talking sense or not? There’s all kinds of specific heuristics I can give. I can say to somebody — “Well, if your entire alignment proposal is this elaborate mechanism you have to explain the whole mechanism.” And you can’t be like “here’s the core problem. Here’s the key insight that I think addresses this problem.” If you can’t extract that out, if your whole solution is just a giant mechanism, this is not the way. It’s kind of like how people invent perpetual motion machines by making the perpetual motion machines more and more complicated until they can no longer keep track of how it fails. And if you actually had a perpetual motion machine, it would not just be a giant machine, there would be a thing you had realized that made it possible to do the impossible, for example. You’re just not going to have a perpetual motion machine. So there’s thoughts like that. I could say go study evolutionary biology because evolutionary biology went through a phase of optimism and people naming all the wonderful things they thought that evolutionary biology would cough out, all the wonderful properties that they thought natural selection would imbue into organisms. And the Williams Revolution as is sometimes called, is when George Williams wrote Adaptation and Natural Selection, a very influential book. Saying like that is not what this optimization criterion gives you. You do not get the pretty stuff, you do not get the aesthetically lovely stuff. Here’s what you get instead. And by living through that revolution vicariously. I thereby picked up a bit of the thing that to me obviously generalizes about how not to expect nice things from an alien optimization process. But maybe somebody else can read through that and not generalize in the correct direction. So then how do I advise them to generalize in the correct direction? How do I advise them to learn the thing that I learned? I can just give them the generalization but that’s not the same as having the thing inside them that generalizes correctly without anybody standing over their shoulder and forcing them to get the right answer. I could point out and have in my fiction that the entire schooling process of — “Here is this legible question that you’re supposed to have already been taught how to solve. Give me the answer using the solution method you are taught.” This does not train you to tackle new basic problems. But even if you tell people that, how do they retrain? We don’t have a systematic training method for producing real science in that sense. A quarter of the Nobel laureates being the students or grad students of other Nobel laureates because we never figured out how to teach science. We have an apprentice system. We have people who pick out people who they think can be scientists and they hang around them in person. And something that we’ve never written down in a textbook passes down. And that’s where the revolutionaries come from. And there are whole countries trying to invest in having scientists, and they churn out these people who write papers, and none of it goes anywhere. Because the part that was legible to the bureaucracy is, have you written the paper? Can you pass the test? And this is not science. And I could go on for this for a while, but the thing that you asked me is — How do you pass down this thing that your society never did figure out how to teach? And the whole reason why Harry Potter and the Methods of Rationality is popular is because people read it and picked up the rhythm seen in a character’s thoughts of a thing that was not in their schooling system, that was not written down, that you would ordinarily pick up by being around other people. And I managed to put a little bit of it into a fictional character, and people picked up a fragment of it by being near a fictional character, but not in really vast quantities of people. And I didn’t manage to put vast quantities of shards in there. I’m not sure there is not a long list of Nobel laureates who’ve read HPMOR, although there wouldn’t be, because the delay times on granting the prizes are too long. You ask me, what do I say? And my answer is — Well, that’s a whole big, gigantic problem I’ve spent however many years trying to tackle, and I ain’t going to solve the problem with a sentence in this podcast." ]
[ "https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/", "https://www.genome.gov/genetics-glossary/Polymerase-Chain-Reaction#:~:text=Polymerase%20chain%20reaction%20(abbreviated%20PCR,be%20studied%20in%20greater%20detail.", "https://en.wiktionary.org/wiki/foom", "https://arxiv.org/abs/2212.08073", "https://stockfishchess.org/", "https://www.deepmind.com/research/highlighted-research/alphago", "https://intelligence.org/", "https://intelligence.org/visible/", "https://youtu.be/Yf1o0TQzry8?t=448", "https://manifold.markets/EliezerYudkowsky/by-the-end-of-2026-will-we-have-tra", "https://futureoflife.org/open-letter/pause-giant-ai-experiments/", "https://www.atomicarchive.com/history/manhattan-project/p4s30.html", "https://www.rationality.org/", "https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate", "https://gwern.net/", "https://twitter.com/shanelegg?lang=ta", "https://www.lesswrong.com/tag/everett-branch#:~:text=An%20Everett%20branch%20is%20one,have%20reason%20to%20expect%20exists.", "https://www.lesswrong.com/rationality", "https://www.lesswrong.com/tag/many-worlds-interpretation", "https://scottaaronson.blog/?p=7064", "https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps", "https://en.wikipedia.org/wiki/Rapture", "https://astralcodexten.substack.com/", "https://en.wikipedia.org/wiki/So_You_Want_to_Be_a_Wizard", "https://www.linkedin.com/in/so8res", "https://www.alignmentforum.org/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment", "https://arbital.com/p/multiple_stage_fallacy/", "https://en.wikipedia.org/wiki/Goodhart%27s_law", "https://arbital.com/p/goodharts_curse/", "https://en.wikipedia.org/wiki/Alfred_Korzybski", "https://en.wikipedia.org/wiki/Demis_Hassabis", "https://en.wikipedia.org/wiki/Eddington_experiment", "https://en.wikipedia.org/wiki/Scott_Sumner", "https://www.strangescience.net/lucretius.htm", "https://mason.gmu.edu/~rhanson/home.html", "https://www.projectlawful.com/", "https://www.goodreads.com/en/book/show/33406999", "https://equilibriabook.com/", "https://www.lesswrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning" ]
https://www.dwarkesh.com/p/francois-chollet
Francois Chollet, Mike Knoop - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution
[ "(00:00:00) – The ARC benchmark", "Dwarkesh Patel 00:00:00", "Today I have the pleasure to speak with François Chollet , who is an AI researcher at Google and creator of Keras . He’s launching a prize in collaboration with Mike Knoop , the co-founder of Zapier , whom we’ll also be talking to in a second. It’s a million dollar prize to solve the ARC benchmark that he created.", "First question, what is the ARC benchmark? Why do you even need this prize? Why won’t the biggest LLM we have in a year be able to just saturate it?", "François Chollet 00:00:28", "ARC is intended as a kind of IQ test for machine intelligence . What makes it different from most LLM benchmarks out there is that it’s designed to be resistant to memorization. The way LLMs work is that they’re basically this big interpolative memory. The way you scale up their capabilities is by trying to cram as much knowledge and patterns as possible into them.", "By contrast, ARC does not require a lot of knowledge at all. It’s designed to only require what’s known as core knowledge . It’s basic knowledge about things like elementary physics, objectness, counting, that sort of thing. It’s the sort of knowledge that any four-year-old or five-year-old possesses.", "What’s interesting is that each puzzle in ARC is novel. It’s something that you’ve probably not encountered before, even if you’ve memorized the entire internet. That’s what makes ARC challenging for LLMs. So far, LLMs have not been doing very well on it. In fact, the approaches that are working well are more towards discrete program search , program synthesis .", "Dwarkesh Patel 00:01:42", "First of all, I’ll make a comment that I’m glad that as a skeptic of LLM, you have yourself put out a benchmark. Is it accurate to say that if the biggest model we have in a year is able to get 80% on this, then your view would be that we are on track to get AGI with LLMs? How would you think about that?", "François Chollet 00:02:02", "I’m pretty skeptical that we’re going to see an LLM do 80% in a year. That said, if we do see it, you would also have to look at how this was achieved. If you just train the model on millions or billions of puzzles similar to ARC, you’re relying on the ability to have some overlap between the tasks that you train on and the tasks that you’re going to see at test time. You’re still using memorization.", "Maybe it can work. Hopefully, ARC is going to be good enough that it’s going to be resistant to this sort of brute force attempt but you never know. Maybe it could happen. I’m not saying it’s not going to happen. ARC is not a perfect benchmark. Maybe it has flaws. Maybe it could be hacked in that way.", "Dwarkesh Patel 00:02:50", "What would GPT-5 have to do so that you would be very confident that it’s on the path to AGI?", "François Chollet 00:02:59", "This is what would make me change my mind about LLMs. I would need to start seeing a critical mass of cases where you show the model something it has not seen before — a task that's truly novel from the perspective of its training data — and it can actually adapt on the fly.", "This is true for LLMs but really this would catch my attention for any AI technique out there. If I can see the ability to adapt to novelty on the fly and pick up new skills efficiently, then I would be extremely interested. I would think this is on the path to AGI.", "Dwarkesh Patel 00:03:39", "The advantage they have is that they do get to see everything. Maybe I'll take issue with how much they are relying on that, but obviously they're relying on that more than humans do. They do have so much in distribution, to the extent that we have trouble distinguishing whether an example is in distribution or not.", "If they have everything in distribution, then they can do everything that we can do. Maybe it's not in distribution for us. Why is it so crucial that it has to be out of distribution for them? Why can't we just leverage the fact that they do get to see everything?", "François Chollet 00:04:12", "Basically you’re asking what's the difference between actual intelligence — the ability to adapt to things you've not been prepared for — and pure memorization, like reciting what you've seen before.", "It's not just some semantic difference. The big difference is that you can never pre-train on everything that you might see at test time because the world changes all the time. It's not just the fact that the space of possible tasks is infinite. If you're trained on millions of them, you've only seen zero percent of the total space. It's also the fact that the world is changing every day.", "This is why we, the human species, have developed intelligence in the first place. If there was such a thing as a distribution for the world — for the universe, for our lives — then we would not need intelligence at all. In fact, many creatures, many insects for instance, do not have intelligence. Instead they have hardcoded programs in their connectomes , in their genes, behavioral programs that map some stimuli to appropriate responses. They can actually navigate their lives and their environment in a way that's very evolutionarily fit without needing to learn anything.", "If our environment were static and predictable enough, what would have happened is that evolution would have found the perfect behavioral program: a hard-coded, static behavioral program. It would have written it into our genes. We would have a hard-coded brain connectome. That's what we would be running on. But that's not what happened.", "Instead, we have general intelligence. We are born with extremely little knowledge about the world. We are born with the ability to learn very efficiently and to adapt in the face of things that we've never seen before. That's what makes us unique. That's what is really, really challenging to recreate in machines.", "Dwarkesh Patel 00:06:12", "Before we dive deeper into that, I'm going to overlay some examples of what an ARC-like challenge looks like for the YouTube audience. For people listening on audio, can you describe what a sample ARC challenge would look like?", "François Chollet 00:06:27", "One ARC puzzle looks kind of like an IQ test puzzle. You have a number of demonstration input-output pairs. One pair is made up of two grids. One grid shows you an input, and the second grid shows you what you should produce as a response to that input.", "You get a couple pairs like this to demonstrate the nature of the task and what you're supposed to do with your inputs. You then get a new test input. Your job is to produce the corresponding test output. You look at the demonstration pairs and from that you figure out what you're supposed to do. You show that you've understood it on this new test pair.", "Importantly, the knowledge basis you need to approach these challenges is just core knowledge. It includes basic concepts like what makes an object, counting, geometry, topology, symmetries, etc. It's extremely basic knowledge. LLMs for sure possess such knowledge. Any child possesses such knowledge.", "What's really interesting is that each puzzle is new. It's not something you'll find elsewhere on the internet. Whether you're a human or a machine, you have to approach every puzzle from scratch and reason your way through it. You can't just fetch the response from memory.", "Dwarkesh Patel 00:08:04", "One contention here is that we are only now getting multimodal models that are trained to do spatial reasoning due to the data they're trained on. Whereas not only humans but our ancestors have had to learn over billions of years of evolution how to understand abstract physical and spatial properties and recognize patterns there.", "One view is that in the next year, as we gain models that are natively multimodal capability rather than as an add-on, they will understand these kinds of patterns because that's something we 'd see natively. Right now, ARC sees a JSON string of 100100 and is supposed to recognize a pattern there. Even if you showed a human a sequence of these numbers, they would have a challenge making sense of the question you're asking.", "Why wouldn't multimodal models, which we're on the path to unlocking right now, be so much better at ARC-type spatial reasoning as soon as we get them?", "François Chollet 00:09:09", "That's an empirical question. I guess we'll see the answer within a few months. My response is that our grids are just discrete 2D grids of symbols and are pretty small. If you flatten an image as a sequence of pixels for example, you get something that’s actually very difficult to parse.", "That’s not true for ARC because the grids are very small. You only have 10 possible symbols, They are 2D grids that are actually very easy to flatten as sequences. Transformers , LLMs, are very good at processing sequences.", "In fact, you can show that LLMs do fine with processing ARC-like data by simply fine-tuning an LLM on subsets of the tasks and then testing it on small variations of these tasks. You'll see that the LLM can encode solution programs just fine for tasks it has seen before. It doesn't really have a problem parsing the input or figuring out the program.", "The reason LLMs don't do well on ARC is really just the unfamiliarity aspect. Each new task is different from every other task. You cannot memorize the solution programs in advance. You have to synthesize a new solution program on the fly for each new task. That's really what LLMs are struggling with.", "Dwarkesh Patel 00:10:37", "Before I play more devil's advocate , I just want to step back and explain why I'm especially interested in having this conversation. Obviously there’s the million dollar ARC Prize and I’m excited to play around with it myself.", "The Vesuvius Challenge was Nat Friedman 's prize for decoding scrolls from the Herculaneum library that were buried in the volcano. The winner of that was a 22-year-old who was listening to this podcast, Luke Farritor . Hopefully somebody listening to this will find this challenge intriguing and find a solution.", "(00:11:10) – Why LLMs struggle with ARC", "Dwarkesh Patel 00:11:10", "I've recently had on a lot of people who are bullish on LLMs. I've had discussions with them before interviewing you about how we explain the fact that LLMs don't seem to be natively performing that well on ARC.", "I found their explanations somewhat contrived. I'll try out some of their reasons on you. It is actually an intriguing fact that some of these problems are relatively straightforward for humans to understand, yet the models struggle with them if you just input them natively.", "François Chollet 00:11:42", "All of them are very easy for humans. Any smart human should be able to do 90-95% on ARC. Even a five-year-old with very, very little knowledge could definitely do over 50%.", "Dwarkesh Patel 00:11:57", "I agree that smart humans will do very well on this test, but the average human will probably be mediocre.", "François Chollet 00:12:10", "Not really, we actually tried with average humans. They scored about 85.", "Dwarkesh Patel 00:12:14", "That was with Amazon Mechanical Turk workers, right? I honestly don't know the demographic profile of Amazon Mechanical Turk workers. Imagining them interacting with Amazon's remote work platform, I’m guessing that's not the median human across the planet.", "The broader point here is that we see the spectrum in humans and humans obviously have AGI. But even within humans you see a spectrum where some people are relatively dumber. They'll perform worse on IQ-like tests.", "For example, there’s Raven's Progressive Matrices . Look at how the average person performs on that. If you look at the kind of questions that are hit or miss — half of people will get it right, half of people will get it wrong — we might think they’re kind of trivial.", "Humans have AGI but from relatively small tweaks, you can go from somebody who misses these kinds of basic IQ test questions to somebody who gets them all right. We'll talk about some of the previous performances that people have tried with these models.", "Jack Cole with a 240 million parameter model got 35%. Doesn't that suggest that they're on this spectrum that clearly exists within humans, and they're going to be saturated pretty soon?", "François Chollet 00:13:25", "There's a bunch of interesting points here. There is indeed a branch of LLM approaches spearheaded by Jack Cole that are doing quite well. They are state-of-the art in fact.. But you have to look at what's going on there. There are two things.", "The first thing is that to get these numbers, you need to pre-train your LLM on millions of generated ARC tasks. Of course, compare that to a five-year-old child looking at ARC for the first time. The child has never done an IQ test before and has never seen something like an ARC test before.", "The only overlap between what they know and what they have to do in the test is core knowledge. It’s knowing about counting, objects, symmetries, etc. They're still going to do really well. They're going to do much better than the LLM trained on millions of similar tasks.", "There’s a second thing to note about the Jack Cole approach. One thing that's really critical to making the model work at all is test time fine-tuning. By the way, that's something that's really missing from LLM approaches right now. Most of the time when you're using an LLM, it's just doing static inference . The model is frozen. You're just prompting it and getting an answer. The model is not actually learning anything on the fly. Its state is not adapting to the task at hand.", "What Jack Cole is actually doing is that for every test problem, it’s on-the-fly fine-tuning a version of the LLM for that task. That's really what's unlocking performance. If you don't do that, you get like 1-2%, something completely negligible. If you do test time fine-tuning and you add a bunch of tricks on top, then you end up with interesting performance numbers.", "What it's doing is trying to address one of the key limitations of LLMs today: the lack of active inference. It's actually adding active inference to LLMs. That's working extremely well, actually. So that's fascinating to me.", "Dwarkesh Patel 00:15:30", "There are so many interesting rabbit holes there. A lot of the scale maximalists share your broader perspective that you need to unlock the adaptive/test time compute. They think that in addition to scaling, you need things like adaptive compute or some sort of RL to get the System 2 working. Their perspective is that this is a relatively straightforward thing that will be added atop the representations that a scaled up model has greater access to.", "François Chollet 00:16:14", "It's not just a technical detail. It's not a straightforward thing. It is everything. It is the important part. The scale maximalists refer to scaling laws , which are the empirical relationship that you can draw between how much compute you spend on training a model and the performance you're getting on benchmark.", "Of course the key question here is, how do you measure performance? What is it that you're actually improving by adding more compute and more data? It's benchmark performance.", "The way you measure performance is not a technical detail. It's not an afterthought because it's going to narrow down the set of questions that you're asking. Accordingly, it's going to narrow down the set of answers that you're looking for.", "If you look at the benchmarks we are using for LLMs, they are all memorization-based benchmarks. Sometimes they are literally just knowledge-based, like a school test. Even if you look at the ones that are explicitly about reasoning, if you look closely you realize that in order to solve them, it's enough to memorize a finite set of reasoning patterns. You just reapply them. They're like static programs.", "LLMs are very good at memorizing small static programs. They've got this sort of bank of solution programs. When you give them a new puzzle, they can just fetch the appropriate program and apply it. It looks like reasoning but it's not really doing any sort of on-the-fly program synthesis. All it's doing is program fetching.", "You can actually solve all these benchmarks with memorization. If you look at the models and what you're scaling up here, they are big parametric curves fitted to a data distribution. They're basically these big interpolative databases, interpolative memories. Of course, if you scale up the size of your database and cram more knowledge and patterns into it, you are going to be increasing its performance as measured by a memorization benchmark.", "That's kind of obvious. But as you're doing it, you are not increasing the intelligence of the system one bit. You are increasing the skill of the system. You are increasing its usefulness, its scope of applicability, but not its intelligence because skill is not intelligence. That's the fundamental confusion that people run into. They're confusing skill and intelligence.", "(00:19:00) – Skill vs intelligence", "Dwarkesh Patel 00:19:00", "There are a lot of fascinating things to talk about here: skill, intelligence, interpolation. Let’s talk about the point that they’re fitting some manifold that maps the input data. A reductionist way to talk about the human brain is that it's just axons firing at each other. But we don't care about the reductionist explanation. We care about what happens at the macroscopic level when these things combine.", "As far as interpolation goes, let's look at one of the benchmarks. There's a benchmark that does grade school math. These are problems that a smart high schooler would be able to solve. It's called GSM8K . These models get 95% on it. Basically, they always nail it.", "François Chollet 00:19:50", "Sure, that's a memorization benchmark.", "Dwarkesh Patel 00:19:51", "Let's talk about what that means. Here's one question from that benchmark:", "\"30 students are in a class. One-fifth of them are 12-year-olds, One-third are 13-year-olds, One-tenth are 11-year-olds. How many of them are not 11, 12, or 13 years old?", "I agree this is not rocket science. You can write down on paper how you go through this problem. A smart high school kid should be able to solve it. About memorization, it still has to reason through how to think about fractions, the context of the whole problem, and then combine different calculations to write the final answer.", "François Chollet 00:20:24", "It depends on how you want to define reasoning. There are two definitions you can use. One is, I have available a set of program templates. It’s the structure of the puzzle, which can also generate its solution. I'm going to identify the right template, which is in my memory, input the new values into the template, run the program, and get the solution. You could say this is reasoning. I say, “yeah sure, okay.”", "Here’s another definition of reasoning. When you're faced with a puzzle and you don't already have a program in memory to solve it, it’s the ability to synthesize on the fly a new program based on bits and pieces of existing programs that you have. You have to do on-the-fly program synthesis. That's actually dramatically harder than just fetching the right memorized program and reapplying it.", "Dwarkesh Patel 00:21:18", "Maybe we are overestimating the extent to which humans are so sample efficient . They also need training in this way. They have to drill in these pathways of reasoning through certain kinds of problems.", "Let's take math, for example. It's not like you can just show a baby the axioms of set theory and now they know math. When they're growing up, you have to teach them years of pre-algebra. Then you have a year of teaching them drills and going through the same kind of problem in algebra, then geometry, pre-calculus, calculus.", "Isn't that like the same kind of thing? You can't just see one example and now you have the program. You actually have to drill it. These models also had to drill it with a bunch of pre-training data.", "François Chollet 00:22:02", "Sure. In order to do on-the-fly program synthesis, you actually need building blocks to work from. Knowledge and memory are tremendously important in the process. I'm not saying it's memory vs. reasoning. In order to do effective reasoning, you need memory.", "Dwarkesh Patel 00:22:21", "But it sounds compatible with your story. Through seeing a lot of different kinds of examples, these things can learn to reason within the context of those examples. We can also see it within bigger and bigger models.", "That was an example of a high school-level math problem. Let's say a model that's smaller than GPT-3 couldn't do that at all. As these models get bigger, they seem to be able to pick up bigger and bigger patterns.", "François Chollet 00:22:43", "It's not really a size issue. It's more like a training data issue in this case.", "Dwarkesh Patel 00:22:47", "Well, bigger models can pick up these kinds of circuits. Smaller models apparently don't do a good job of doing that even if you were to train them on this kind of data. Doesn't that just suggest that as you have bigger and bigger models, they can pick up bigger and bigger pathways or more general ways of reasoning?", "François Chollet 00:23:01", "Absolutely.", "Dwarkesh Patel 00:23:02", "But then isn't that intelligence?", "François Chollet 00:23:03", "No, it's not. If you scale up your database and keep adding more knowledge and program templates to it, then sure it becomes more and more skillful. You can apply it to more and more tasks. But general intelligence is not task-specific skill scaled up to many skills, because there is an infinite space of possible skills.", "General intelligence is the ability to approach any problem, any skill, and very quickly master it using very little data. This is what makes you able to face anything you might ever encounter. This is the definition of generality. Generality is not specificity scaled up. It is the ability to apply your mind to anything at all, to arbitrary things. This fundamentally requires the ability to adapt, to learn on the fly efficiently.", "Dwarkesh Patel 00:23:54", "My claim is that by doing pre-training on bigger and bigger models, you are gaining that capacity to generalize very efficiently. Let me give you an example. Your own company Google, in their paper on Gemini 1.5 , had this very interesting example. They would give the model, in context , the grammar book and the dictionary of a language that has fewer than 200 living speakers. It's not in the pre-training data. You just give it the dictionary and it basically is able to speak this language and translate to it, including the complex and organic ways in which languages are structured.", "If you showed me a dictionary from English to Spanish, I'm not going to be able to pick up how to structure sentences and how to say things in Spanish. Because of the representations that it has gained through this pre-training, it is able to now learn a new language extremely efficiently. Doesn't that show that this kind of pre-training actually does increase your ability to learn new tasks?", "François Chollet 00:25:58", "If you were right, LLMs would do really well on ARC puzzles because ARC puzzles are not complex. Each one of them requires very little knowledge. Each one of them is very low on complexity. You don't need to think very hard about it. They're actually extremely obvious for human", "Even children can do them but LLMs cannot. Even LLMs that have 100,000x more knowledge than you do still cannot. The only thing that makes ARC special is that it was designed with this intent to resist memorization. This is the only thing. This is the huge blocker for LLM performance.", "If you look at LLMs closely, it's pretty obvious that they're not really synthesizing new programs on the fly to solve the task that they're faced with. They're very much reapplying things that they've stored in memory. For instance, one thing that's very striking is that LLMs can solve a Caesar cipher , transposing letters to code a message. That’s a very complex algorithm, but it comes up quite a bit on the internet. They've basically memorized it.", "What's really interesting is that they can do it for a transposition length of like three or five, because those are very common numbers in examples provided on the internet. If you try to do it with an arbitrary number like nine, it's going to fail. It does not encode the generalized form of the algorithm, but only specific cases. It has memorized specific cases of the algorithm. If it could actually synthesize on the fly the solver algorithm, then the value of n would not matter at all, because it does not increase the problem complexity.", "Dwarkesh Patel 00:26:48", "I think this is true of humans as well.", "François Chollet 00:26:51", "Humans use memorization pattern matching all the time, of course, but humans are not limited to memorization pattern matching. They have this very unique ability to adapt to new situations on the fly. This is exactly what enables you to navigate every new day in your life.", "Dwarkesh Patel 00:27:07", "There was some study that chess grandmasters will perform very well within the context of the moves that—", "François Chollet 00:27:14", "That’s an excellent example because chess, at the highest level, is all about memorization, chess memorization.", "Dwarkesh Patel 00:27:19", "What is your explanation for the original question of why Gemini 1.5 was able, in context, to learn a language, including the complex grammar structure? Doesn't that show that they can pick up new knowledge?", "François Chollet 00:27:35", "I would assume that it has simply mined from its extremely extensive, unimaginably vast training data. It has mined the required template and then it's just reusing it. We know that LLMs have a very poor ability to synthesize new program templates like this on the fly or even adapt existing ones. They're very much limited to fetching.", "(00:27:55) - Do we need “AGI” to automate most jobs?", "Dwarkesh Patel 00:27:55", "Suppose there's a programmer at Google. They go into the office in the morning. At what point are they doing something that 100% cannot be due to fetching some template?", "Suppose they were an LLM. What could they not do if they had only fetched some template from their program? At what point do they have to use this so-called extreme generalization capability?", "François Chollet 00:28:12", "Forget about Google software developers. For every human, every day of their lives is full of novel things that they've not been prepared for. You cannot navigate your life based on memorization alone. It's impossible.", "Dwarkesh Patel 00:28:25", "It seems like you also agree they're not doing just “memorization.” It seems like you're saying they're less capable of generalization. I'm just curious about the kind of generalization they do.", "If you get into the office and you try to do this kind of generalization, you're going to fail at your job. Let’s say you're a programmer. What is the first point when you try to do that kind of generalization, you would lose your job because you can't do the extreme generalization?", "François Chollet 00:28:51", "Take this situation, for instance. You've never been here in this room. Maybe you've been in this city a few times. There's a fair amount of novelty. You've never been interviewing me. There's a fair amount of novelty in every hour of every day in your life. By and large, it’s in fact more novelty than any LLM could handle. If you just put an LLM in a robot, it could not be doing all the things that you've been doing today.", "Take self-driving cars , for instance. You take a self-driving car operating in the Bay Area. Do you think you could just drop it in New York City or drop it in London, where people drive on the left? No, it's going to fail. Not only can it not generalize to a change in driving rules, but you cannot even make it generalize to a new city. It needs to be trained on each specific environment.", "Dwarkesh Patel 00:29:55", "I agree that self-driving cars aren't AGI.", "François Chollet 00:29:58", "But it's the same type of model. They're transformers as well. It's the same architecture.", "Dwarkesh Patel 00:30:02", "I don’t know. Apes also have brains with neurons in them, but they're less intelligent because they're smaller.", "We can get into that. I still don't understand this concrete thing. We also need training. That's why education exists. That's why we had to spend the first 18 years of our life doing drills.", "François Chollet 00:30:19", "We have a memory, but we are not a memory. We are not limited to just a memory.", "Dwarkesh Patel 00:30:24", "I’m denying the premise that that's the only thing these models are necessarily doing.", "Suppose you just subbed out a remote work with an LLM and they're a programmer. What is the first point at which you realize this is not a human, this is an LLM?", "François Chollet 00:30:40", "How about I just send them an ARC puzzle and see how they do?", "Dwarkesh Patel 00:30:44", "No, like part of their job.", "François Chollet 00:30:46", "You have to deal with novelty all the time.", "Dwarkesh Patel 00:30:49", "Is there a world in which all the programmers are replaced and we're still saying, \"ah, but they're only doing memorization-laden programming tasks.\" In that world, are they still producing a trillion dollars worth of output in the form of code?", "François Chollet 00:31:03", "Software development is actually a pretty good example of a job where you're dealing with novelty all the time. If you're not, I'm not sure what you're doing.", "I personally use generative AI very little in my software development job. Before LLMs, I was also using Stack Overflow very little. Some people maybe are just copy-pasting stuff from Stack Overflow, or nowadays copy-pasting stuff from an LLM.", "Personally, I try to focus on problem-solving. The syntax is just a technical detail. What's really important is problem-solving. The essence of programming is engineering mental models and mental representations of the problem you're trying to solve.", "Dwarkesh Patel 00:31:46", "We have many people who can interact with these systems themselves. You can go to ChatGPT and say, \"here's a specification of the kind of program I want.\" They'll build it for you.", "François Chollet 00:31:56", "As long as there are many examples of this program on GitHub , Stack Overflow, and so on, sure they will fetch the program for you from their memory.", "Dwarkesh Patel 00:32:03", "But you can change arbitrary details. You can say, \"I need it to work on this different kind of server.\"", "François Chollet 00:32:09", "If that were true, there would be no software engineers today.", "Dwarkesh Patel 00:32:12", "I agree we're not at a full AGI yet. These models have fewer than a trillion parameters. A human brain has somewhere on the order of 10-30 trillion synapses . If you were just doing some naive math, you're at least 10x underparameterized. I agree we're not there yet, but I'm confused about why we're not on the spectrum.", "Yes, I agree that there are many kinds of generalization they can't do. But it seems like they're on this kind of smooth spectrum that we see even within humans. Some humans would have a hard time doing an ARC-type test. We see that based on the performance on Raven's progressive matrices-type IQ tests.", "François Chollet 00:32:48", "I'm not a fan of IQ tests because, for the most part, you can train on IQ tests and get better at them. They're very much memorization-based. This is actually the main pitfall that ARC tries not to fall for.", "Dwarkesh Patel 00:33:02", "Let’s say all remote jobs are automated in the next five years. I mean at least the remote jobs that don't require you to be a sort of a service, like a salesperson, where you want the human to be talking. I mean more like programming.", "In that world, would you say that that's not possible because a programmer needs to do many things that definitely require things that would not be in any pre-training corpus?", "François Chollet 00:33:25", "Sure. In five years, there will be more software engineers than there are today, not fewer.", "Dwarkesh Patel 00:33:30", "I'm still not sure. I studied computer science. If I had become a code monkey out of college, what would I be doing? I go to my job. My boss tells me to do something? When does he realize I'm an LLM, if I were an LLM?", "François Chollet 00:33:46", "Probably on the first day. Again, if it were true that LLMs could generalize to novel problems like this — actually develop software to solve a problem they've never seen before — you would not need software engineers anymore.", "If I look at how people are using LLMs in their software engineering job today, they're using it as a Stack Overflow replacement. They're using it as a way to copy-paste code snippets to perform very common actions. What they actually need is a database of code snippets. They don't actually need any of the abilities that actually make them software engineers.", "Dwarkesh Patel 00:34:27", "Let's step back on interpolation. Why isn't creativity just interpolation in a higher dimension where —  if we're going to use the ML language — a bigger model can learn a more complex manifold?", "If you read a biography of a scientist, they’re not zero-shotting new scientific theories. They're playing with existing ideas. They're trying to juxtapose them in their head. In the tree of intellectual descendants, they try out some slightly different evolutionary path. You sort of run the experiment there in terms of publishing the paper or whatever.", "It seems like a similar kind of thing to what humans are doing. There's a higher level of generalization. Bigger and bigger models seem to be approaching higher and higher levels of generalization. GPT-2 couldn't do grade school-level math problems that required more generalization than it had the capability to do. GPT-3 and GPT-4 can.", "François Chollet 00:35:32", "Not quite. GPT-4 has a higher degree of skill and a higher range of skills. It has the same degree of generalization.", "Dwarkesh Patel 00:35:39", "I don't want to get into semantics here. Why can't creativity just be interpolation on a higher dimension?", "François Chollet 00:35:49", "Interpolation can absolutely be creative. To your point, I do think that on some level humans also do a lot of memorization, reciting, pattern matching, and interpolation as well. It's very much a spectrum between pattern matching and true reasoning. Humans are never really at one end of the spectrum. They're never really doing pure pattern matching or pure reasoning. They're usually doing some mixture of both.", "This is true even if you're doing something that seems very reasoning-heavy, like proving a mathematical theorem. As you're doing it, you're doing quite a bit of discrete search in your mind and quite a bit of actual reasoning. You're also very much guided by intuition and pattern matching. You’re guided by the shape of proofs that you've seen before, by your knowledge of mathematics.", "All of our thoughts, everything we do, is a mixture of interpolated memorization-based thinking, Type 1 thinking, and Type 2 thinking .", "Dwarkesh Patel 00:36:55", "Why are bigger models more sample efficient?", "François Chollet 00:36:59", "Because they have more reusable building blocks that they can lean on to pick up new patterns in their training data.", "Dwarkesh Patel 00:37:09", "Does that pattern keep continuing as you keep getting bigger and bigger?", "François Chollet 00:37:12", "It does to the extent that the new patterns you’re giving the model to learn are a good match for what it has learned before. If you present something that’s actually novel that is not in a steady distribution, like an ARC puzzle for instance, it will fail.", "Dwarkesh Patel 00:37:25", "Let me make this claim. The program synthesis is a very useful intuition pump . Why can’t this be the case for what’s happening in the transformer?", "The early layers are figuring out how to represent the inputting tokens. The middle layers do this kind of program search, program synthesis, and they combine the inputs to all the circuits in the model. They go from the low-level representation to a higher-level representation near the middle of the model. They use these programs. They combine these concepts. What comes out the other end is the reasoning based on that high-level intelligence.", "François Chollet 00:38:01", "Possibly. Why not? But if these models were actually capable of synthesizing novel programs, however simple, they should be able to do ARC. Because for any ARC task, if you write down the solution program in Python , it’s not a complex program. It’s extremely simple. Humans can figure it out. Why can’t LLMs do it?", "Dwarkesh Patel 00:38:26", "That’s a fair point. To turn the question around to you, suppose it’s the case that in a year a multimodal model can solve ARC. Let’s say it gets 80% or whatever the average human would get. Are we then on track for AGI?", "François Chollet 00:38:43", "Quite possibly, yes. Honestly, what I would like to see is an LLM-type model solving ARC at 80%, but after having only been trained on core knowledge-related stuff.", "Dwarkesh Patel 00:38:59", "But human kids, we’re necessarily just trained on what we have in our genes…", "François Chollet 00:39:04", "Let me rephrase that. I want it to be only trained on information that is not explicitly trying to anticipate what’s going to be in the ARC test set.", "Dwarkesh Patel 00:39:15", "Isn’t the whole point of ARC that you can’t? It’s a new type of intelligence test every single time?", "François Chollet 00:39:21", "Yes, that is the point. If ARC were a perfect, flawless benchmark, it would be impossible to anticipate what’s in the test set. ARC was released more than four years ago and so far it’s been resistant to memorization. It has, to some extent, passed the test of time. But it’s not perfect.", "Let’s say you try to make by hand hundreds of thousands of ARC tasks. You try to multiply them by programmatically generating variations. You end up with maybe hundreds of millions of tasks. Just by brute forcing the task space, there will be enough overlap between what you’re trained on and what’s in the test set that you can actually score very highly. With enough scale, you can always cheat.", "Dwarkesh Patel 00:40:05", "If you can do this for every single thing that supposedly requires intelligence, then what good is intelligence? Apparently, you can just brute force intelligence.", "François Chollet 00:40:12", "If the world, if your life, were a static distribution then sure, you could just brute force the space of possible behaviors. There are several metaphors for intelligence I like to use. One is that you can think of intelligence as a pathfinding algorithm in future situation space.", "I don't know if you're familiar with RTS game development. You have a map, a 2D map, and you have partial information about it. There is some fog of war on your map. There are areas that you haven't explored yet. You know nothing about them. There are also areas that you've explored but you only know what they were like in the past. You don't know how they are like today.", "Now, instead of thinking about a 2D map, think about the space of possible future situations that you might encounter and how they're connected to each other. Intelligence is a pathfinding algorithm. Once you set a goal, it will tell you how to get there optimally. Of course, it's constrained by the information you have. It cannot pathfind in an area that you know nothing about. It also cannot anticipate changes.", "If you had complete information about the map, then you could solve the pathfinding problem by simply memorizing every possible path, every mapping from point A to point B. You could solve the problem with pure memory. The reason you cannot do that in real life is because you don't actually know what's going to happen in the future. Life is ever changing.", "Dwarkesh Patel 00:41:56", "I feel like you're using words like “memorization,” which we would never use for human children. If your kid learns to do algebra and then learns to do calculus, you wouldn't say they've memorized calculus. If they can solve any arbitrary algebraic problem, you wouldn't say they've memorized algebra. You’d say they've learned algebra.", "François Chollet 00:42:11", "Humans are never really doing pure memorization or pure reasoning.", "Dwarkesh Patel 00:42:15", "That's only because you're semantically labeling what the human does as skill. But it's a memorization when the exact same skill is done by the LLM, as you can measure by these benchmarks. You can just plug in any sort of math problem.", "François Chollet 00:42:22", "Sometimes humans are doing the exact same as the LLM is doing. For instance, if you learn to add numbers you're memorizing an algorithm. You're memorizing a program and then you can reapply it. You are not synthesizing on the fly the addition program.", "Dwarkesh Patel 00:42:38", "Obviously at some point, some human had to figure out how to do addition. A kid doesn’t figure it out by starting from the axioms of set theory and going to how to do addition.", "François Chollet 00:42:47", "What you learn in school is mostly memorization.", "Dwarkesh Patel 00:42:50", "My claim is that these models are vastly underparameterized relative to how many flops , how many parameters, you have in the human brain. So it makes sense that they're not going to be coming up with new theorems like the smartest humans can. Most humans can't do that either. What most humans do sounds like something similar to what you are calling memorization, which is memorizing skills or memorizing techniques that you've learned. So it sounds like it's compatible.", "Tell me if this is wrong. Is it compatible in your world if all the remote workers are gone but they're doing skills which we can potentially make synthetic data out of? We record every single remote worker's screen. We sort of understand the skills they're performing there. Now we've trained a model that can do all this. All the remote workers are unemployed. We're generating trillions of dollars of economic activity from AI remote workers. In that world, are we still in the memorization regime?", "François Chollet 00:43:45", "Sure, with memorization you can automate almost anything as long as it's a static distribution, as long as you don't have to deal with change.", "Dwarkesh Patel 00:43:54", "Are most jobs part of such a static distribution?", "François Chollet 00:43:57", "Potentially, there are lots of things that you can automate. LLMs are an excellent tool for automation. But you have to understand that automation is not the same as intelligence. I'm not saying that LLMs are useless. I've been a huge proponent of deep learning for many years.", "For many years, I've been saying two things. I've been saying that if you keep scaling up deep learning, it will keep paying off. At the same time I've been saying if you keep scaling up deep learning, this will not lead to AGI.", "We can automate more and more things. Yes, this is economically valuable. Yes, potentially there are many jobs you could automate away like this. That would be economically valuable. You're still not going to have intelligence.", "So you can ask, what does it matter if we can generate all this economic value? Maybe we don't need intelligence after all. You need intelligence the moment you have to deal with change, novelty, and uncertainty.", "As long as you're in a space that can be exactly described in advance, you can just rely on pure memorization. In fact, you can always solve any problem. You can always display arbitrary levels of skills on any task without leveraging any intelligence whatsoever, as long as it is possible to describe the problem and its solution very, very precisely.", "Dwarkesh Patel 00:45:17", "When they do deal with novelty, then you just call it interpolation.", "François Chollet 00:45:21", "No, interpolation is not enough to deal with all kinds of novelty. If it were, then LLMs would be AGI.", "Dwarkesh Patel 00:45:30", "I agree they're not AGI. I'm just trying to figure out if we’re on the path to AGI. The crux here is that it seems to me that these things are on a spectrum and we're clearly covering the earliest part of the spectrum with LLMs.", "François Chollet 00:45:43", "I think so.", "Dwarkesh Patel 00:45:44", "Okay, interesting. Here's another thing that I think is evidence for this: grokking .", "Clearly, even within deep learning, there's a difference between the memorization regime and the generalization regime. At first they'll just memorize the data set. If you're doing modular addition it’s how to add digits. At some point, if you keep training on that, they'll learn the skill.", "The fact that there is that distinction suggests that for the generalized circuit that deep learning can learn, there is a regime where it generalizes if you have an overparameterized model. We don't have that in comparison to all the tasks we want these models to do right now.", "François Chollet 00:46:20", "Grokking is a very, very old phenomenon. We've been observing it for decades. It's basically an instance of the minimum description length principle. Given a problem, you can just memorize a pointwise input-to-output mapping, which is completely overfit .", "It does not generalize at all, but it solves the problem on the trained data. From there, you can actually keep pruning it and making your mapping simpler and more compressed. At some point, it will start generalizing.", "That's something called the minimum description length principle. It's this idea that the program that will generalize best is the shortest. It doesn't mean that you're doing anything other than memorization. You're doing memorization plus regularization.", "Dwarkesh Patel 00:47:15", "A.k.a. generalization?", "François Chollet 00:47:17", "Yeah, that absolutely leads to generalization.", "Dwarkesh Patel 00:47:20", "So you do that within one skill. The pattern you see here of meta-learning is that it's more efficient to store a program that can perform many skills rather than one skill. This is what we might call fluid intelligence . So as you get bigger and bigger in models, you would expect it to go up this hierarchy of generalization. It generalizes to a skill, then it generalizes across multiple skills.", "François Chollet 00:47:38", "That's correct. LLMs are not infinitely large. They have only a fixed number of parameters. They have to compress their knowledge as much as possible. In practice, LLMs are mostly storing reusable bits of programs like vector programs. Because they have this need for compression, every time they're learning a new program they're going to try to express it in terms of existing bits and pieces of programs that they've already learned before.", "Dwarkesh Patel 00:48:08", "Isn't this generalization?", "François Chollet 00:48:10", "Absolutely. Clearly LLMs have some degree of generalization. This is precisely why. It's because they have to compress.", "Dwarkesh Patel 00:48:19", "Why is that intrinsically limited? At some point it has to learn a higher level of generalization and a higher level, and then the highest level is the fluid intelligence.", "(00:48:28) – Future of AI progress: deep learning + program synthesis", "François Chollet 00:48:28", "It's intrinsically limited because the substrate of your model is a big parametric curve. All you can do with this is local generalization. If you want to go beyond this towards broader or even extreme generalization, you have to move to a different type of model. My paradigm of choice is discrete program search, program synthesis.", "If you want to understand that, you can sort of compare and contrast it with deep learning. In deep learning your model is a differentiable parametric curve. In program synthesis, your model is a discrete graph of operators. You've got a set of logical operators, like a domain-specific language. You're picking instances of it. You're structuring that into a graph that's a program. That's actually very similar to a program you might write in Python or C++ and so on. We are doing machine learning here. We're trying to automatically learn these models.", "In deep learning your learning engine is gradient descent . Gradient descent is very compute efficient because you have this very strong informative feedback signal about where the solution is. You can get to the solution very quickly, but it is very data inefficient. In order to make it work, you need a dense sampling of the operating space. You need a dense sampling of the data distribution. Then you're limited to only generalizing within that data distribution. The reason why you have this limitation is because your model is a curve.", "Meanwhile, if you look at discrete program search, the learning engine is combinatorial search . You're just trying a bunch of programs until you find one that actually meets your spec. This process is extremely data efficient. You can learn a generalizable program from just one example, two examples. This is why it works so well on ARC, by the way. The big limitation is that it's extremely compute inefficient because you're running into combinatorial explosion, of course.", "You can sort of see here how deep learning and discrete program search have very complementary strengths, and limitations as well. Every limitation of deep learning has a corresponding strength in program synthesis and inversely. The path forward is going to be to merge the two.", "Here’s another way you can think about it. These parametric curves trained with gradient descent are great fits for everything that's System 1-type thinking: pattern recognition, intuition, memorization, etc. Discrete program search is a great fit for Type 2 thinking: planning, reasoning. It’s quickly figuring out a generalizable model that matches just one or two examples, like for an ARC puzzle for instance.", "Humans are never doing pure System 1 or pure System 2. They're always mixing and matching both. Right now, we have all the tools for System 1. We have almost nothing for System 2. The way forward is to create a hybrid system.", "The form it's going to take is mostly System 2. The outer structure is going to be a discrete program search system. You're going to fix the fundamental limitation of discrete program search, which is combinatorial explosion, with deep learning. You're going to leverage deep learning to guide and to provide intuition in program space, to guide the program search.", "That's very similar to what you see when you're playing chess or when you're trying to prove a theorem, for instance. It's mostly a reasoning thing, but you start out with some intuition about the shape of the solution. That's very much something you can get via a deep learning model. Deep learning models are very much like intuition machines. They're pattern matching machines.", "You start from this shape of the solution, and then you're going to do actual explicit discrete program search. But you're not going to do it via brute force. You're not going to try things randomly. You're actually going to ask another deep learning model for suggestions. It’ll be like, “here's the most likely next step. Here's where in the graph you should be going.” You can also use yet another deep learning model for feedback like “well, here's what I have so far. Is it looking good? Should I just backtrack and try something new?” Discrete program search is going to be the key but you want to make it dramatically better, orders of magnitude more efficient, by leveraging deep learning.", "By the way, another thing that you can use deep learning for is of course things like common sense knowledge and knowledge in general. You're going to end up with this sort of system where you have this on-the-fly synthesis engine that can adapt to new situations.", "The way it adapts is that it's going to fetch from a bank of patterns, modules that could be themselves curves, differentiable modules, and some others that could be algorithmic in nature. It's going to assemble them via this intuition-guided process. For every new situation you might be faced with, it's going to give you a generalizable model that was synthesized using very, very little data. Something like this would solve ARC.", "Dwarkesh Patel 00:54:18", "That's actually a really interesting prompt. There’s an interesting crux here. I talk to my friends who are extremely optimistic about LLMs and expect AGI within the next couple of years. In some sense, they also agree that scaling is not all you need but that the rest of the progress is undergirded and enabled by scaling. You still need to add the System 2 and the test time compute on top of these models.", "Their perspective is that it's relatively straightforward to do that because you have this library of representations that you built up from pre-training. It's almost like it's just skimming through textbooks. You need some more deliberate way in which it engages with the material it learns. In-context learning is extremely sample efficient. To actually distill that into the weights , you need the model to talk through the things it sees and then add it back to the weights.", "As far as the System 2 goes, they talk about adding some kind of RL setup so that it is encouraged to proceed on the reasoning traces that end up being correct. They think this is relatively straightforward stuff that will be added within the next couple of years.", "François Chollet 00:55:32", "That's an empirical question so we’ll see.", "Dwarkesh Patel 00:55:35", "I assume your intuition is not that. I'm curious why.", "François Chollet 00:55:37", "My intuition is that this whole System 2 architecture is the hard part. It’s the very hard and unobvious part. Scaling up the interpolative memory is the easy part. It's literally just a big curve. All you need is more data. It's an interpolative representation of a data set. That's the easy part.", "The hard part is the architecture of intelligence. Memory and intelligence are separate components. We have the memory. We don't have the intelligence yet. I agree with you that having the memory is actually very useful. If you just had the intelligence but it was not hooked up to an extensive memory, it would not be that useful because it would not have enough material to work from.", "Dwarkesh Patel 00:56:21", "Former guest Trenton Bricken advanced an alternative hypothesis that intelligence is just hierarchically associated memory. When Sherlock Holmes goes into a crime scene he's extremely sample efficient. He can just look at a few clues and figure out who was the murderer. He's able to do that because he has learned higher level associations. It's memory in some fundamental sense.", "Here's one way to ask the question. In the brain, supposedly we do program synthesis, but it is just synapses connected to each other. Physically, it's got to be that you just query the right circuit, right?", "François Chollet 00:57:01", "You are, yeah. It's a matter of degree.", "Dwarkesh Patel 00:57:04", "Training in the environment that human ancestors were trained in means you learn those circuits. If you train on the same kinds of outputs that humans produce — which to replicate, requires these kinds of circuits — wouldn't that train the same thing that is whatever humans have?", "François Chollet 00:57:19", "It's a matter of degree. If you have a system that has a memory and is only capable of doing local generalization from that, it's not going to be very adaptable. To be really general, you need the memory plus the ability to search to quite some depth to achieve broader and even extreme generalization.", "One of my favorite psychologists is Jean Piaget , the founder of developmental psychology . He had a very good quote about intelligence. He said, \"intelligence is what you use when you don't know what to do.\" As a human living your life, in most situations you already know what to do because you've been in this situation before. You already have the answer.", "You're only going to need to use intelligence when you're faced with novelty, with something you didn't expect. It’s something that you weren't prepared for, either by your own life experience or your evolutionary history. This day that you're living right now is different in some important ways from every day you've lived before. It's also different from any day ever lived by any of your ancestors. You're still capable of being functional. How is that possible?", "Dwarkesh Patel 00:58:39", "I'm not denying that generalization is extremely important and the basis for intelligence. That's not the crux. The crux is how much of that is happening in the models.", "Okay, let me ask a separate question about the differences in intelligence between humans. Maybe because of the reasons you mentioned, the intelligence tests  are not measuring it well. But clearly there's differences in intelligence between different humans.", "What is your explanation for what's going on there? That's sort of compatible with my story. There's a spectrum of generality and these models are climbing up to a human level. Even some humans haven't even climbed up to the Einstein level or the François level.", "François Chollet 00:59:18", "That's a great question. There is extensive evidence that differences in intelligence are mostly genetic in nature. That means that if you take someone who is not very intelligent, there is no amount of training data you can expose that person to that would make them become Einstein. This points to the fact that you really need a better architecture. You need a better algorithm. More training data is not in fact all you need.", "Dwarkesh Patel 00:59:50", "I think I agree with that. I might phrase it in this way. The people who are smarter have, in ML language, better initializations . If you look at the neural wiring, it's more efficient. Maybe they have greater density of firing.", "Some part of the story is scaling. There is some correlation between brain size and intelligence . Within the context of “scaling” LLMs, people talk about architectural improvements. A model like Gemini 1.5 Flash performs as well as GPT-4 did when GPT-4 was released a year ago, but is 57 times cheaper on output. Part of the scaling story is that we're in like extremely low-hanging fruit territory when it comes to those architectural improvements.", "(01:00:40) – How Mike Knoop got nerd-sniped by ARC", "Dwarkesh Patel 01:00:40", "We're back now with the co-founder of Zapier , Mike Knoop . You're funding this prize and you're running this prize with François. Tell me about how this came together. What prompted you guys to launch this prize?", "Mike Knoop 01:00:58", "I've been AI curious for 13 years. I co-founded Zapier and I’ve been running it for the last 13 years.", "I first got introduced to your work during COVID. I went down the rabbit hole. I had a lot of free time. It was right after you'd published your paper, \"On the Measure of Intelligence” . You introduced the concept of AGI and that this efficiency of skill acquisition is the right definition, and the ARC puzzles.", "I don't think the first Kaggle contest had been done yet. It was still running. It was interesting but I just parked the idea. I had bigger fish to fry at Zapier. We were in the middle of this big turnaround of trying to get to our second product.", "It was January 2022 when the chain-of-thought paper came out. That really awoke me to the progress. I even gave a whole presentation to Zapier on the GPT-3 paper . I felt like I had priced in everything that LLMs could do. That paper was really shocking to me in terms of all these latent capabilities that LLMs have that I didn't expect they had.", "I actually gave up my exec team role. I was running half the company at that point. I went back to being an individual contributor and just doing AI research alongside Bryan , my co-founder. Ultimately, that led me back towards ARC. I was looking into it again. I had expected to see this saturation effect that MMLU and GMS8K have.", "When I looked at the scores and the progress over the last four years, I was really shocked to see that we'd made very little objective progress towards it. It felt like a really important eval. As I spent the last year quizzing people about it in my network and community, very few people even knew it existed. If it's right that this is a really globally, singularly unique AGI eval — and it's different from every other eval that exists that more narrowly measures AI skill — then more people should know about this thing.", "I had my own ideas on how to beat ARC as well. I was working nights and weekends on that. I flew up to meet François earlier this year to quiz him and show him my ideas. Ultimately I asked him why more people didn’t know about ARC? You should actually answer that. It's a really interesting question. Why don't you think more people know about ARC?", "François Chollet 01:03:17", "Benchmarks that gain traction in the research community are benchmarks that are already fairly tractable. The dynamic is that some research group is going to make some initial breakthrough and then this is going to catch the attention of everyone else. You're going to get follow-up papers with people trying to beat the first team and so on.", "This has not really happened for ARC because ARC is actually very hard for existing AI techniques. ARC requires you to try new ideas. That's very much the point. The point is not that you should just be able to apply existing technology and solve ARC. The point is that existing technology has reached a plateau. If you want to go beyond that and start being able to tackle problems that you haven't memorized or seen before, you need to try new ideas.", "ARC is not just meant to be this sort of measure of how close we are to AGI. It's also meant to be a source of inspiration. I want researchers to look at these puzzles and be like, \"hey, it's really strange that these puzzles are so simple and most humans can just do them very quickly. Why is it so hard for existing AI systems? Why is it so hard for LLMs and so on?\"", "This is true for LLMs, but ARC was actually released before LLMs were really a thing. The only thing that made it special at the time was that it was designed to be resistant to memorization. The fact that it has survived LLMs so well, and GenAI in general, shows that it is actually resistant to memorization.", "Mike Knoop 01:04:56", "This is what nerd-sniped me. I went and took a bunch of the puzzles myself. I've shown it to all my friends and family too. They're all like, \"oh yeah, this is super easy. Are you sure AI can't solve this?\" That's the reaction and the same one for me as well. The more you dig in, you realize there's not just empirical evidence over the last four years that it's unbeaten, but there are theoretical concepts behind why. I completely agree at this point that new ideas are needed to beat ARC.", "There’s a lot of current trends in the world that are actually working against that happening. We’re actually less likely to generate new ideas right now. One of the trends is the closing up of frontier research, right? The GPT-4 paper from OpenAI had no technical detail shared. The Gemini paper had no technical detail shared, like the longer context part of that work.", "Yet that open innovation and progress and sharing is what got us to transformers in the first place . That's what got us to LLMs in the first place. So it's actually a little bit disappointing that so much frontier work has gone closed. It's really making a bet that these individual labs are going to be the ones to have the breakthrough and not the ecosystem. The internet and open source has shown that it's the most powerful innovation ecosystem that's ever existed, probably in the entire world.", "François Chollet 01:06:08", "It's actually really sad that frontier research is no longer being published. If you look back four years ago, everything was just openly shared. All of the state-of-the-art results were published. This is no longer the case.", "OpenAI single-handedly changed the game. OpenAI basically set back progress towards AGI by quite a few years, probably like 5-10 years. That’s for two reasons. One is that they caused this complete closing down of frontier research publishing.", "But they also triggered this initial burst of hype around LLMs. Now LLMs have sucked the oxygen out of the room. Everyone is just doing LLMs. I see LLMs as more of an off-ramp on the path to AGI actually. All these new resources are actually going to LLMs instead of everything else they could be going to.", "If you look further into the past to like 2015 or 2016, there were like a thousand times fewer people doing AI back then. Yet the rate of progress was higher because people were exploring more directions. The world felt more open-ended. You could just go and try. You could have a cool idea of a launch, try it, and get some interesting results. There was this energy. Now everyone is very much doing some variation of the same thing.", "The big labs also tried their hand on ARC, but because they got bad results they didn't publish anything. People only publish positive results.", "Dwarkesh Patel 01:07:55", "I wonder how much effort people have put into trying to prompt or scaffold, do some Devin -type approach, into getting the frontier models to produce good solutions on ARC. I mean the frontier models of today, not just a year ago. A lot of post-training has gone into making them better. There’s Claude 3 Opus or GPT-4o .", "I hope that one of the things this episode does is get people to try out this open competition. They have to put in an open source model to compete, but we could also figure out if maybe the capability is latent in Claude and just see if you can show that. That would be super interesting.", "(01:08:37) – Million $ ARC Prize", "Dwarkesh Patel 01:08:37", "Let's talk about the prize. How much do you win if you solve it? Let’s say you get whatever percent on ARC. How much do you get if you get the best submission but don't crack it?", "Mike Knoop 01:08:47", "We have a little over a million dollars in the prize pool. We’re running the contest on an annual basis. We're starting today through the middle of November. The goal is to get 85%. That's the lower bound of the human average that you guys talked about earlier. There's a $500,000 prize for the first team that can get to the 85% benchmark.", "We don't expect that to happen this year. One of the early statisticians at Zapier gave me this line that has always stuck with me: \"the longer it takes, the longer it takes.\" My prior is that ARC is going to take years to solve. We're also going to break down and do a progress prize this year.", "There's a $100,000 progress prize which we will pay out to the top scores. $50,000 is going to go to the top objective scores this year on the Kaggle leaderboard. We're hosting it on Kaggle. We're then going to have a $50,000 pot set for the best paper that explains conceptually the scores that they were able to achieve.", "One of the interesting things is we're also going to be requiring that in order to win the prize money, you put the solution or your paper out into the public domain. Typically with contests, you see a lot of closed-up sharing. People are private and secret. They want to hold their alpha to themselves during the contest period.", "Because we expect it's going to be multiple years, we want an interactive game here. The plan is that at the end of November we will award the $100,000 prize money to the top progress prize. We’ll use the down time between December through February to share out all the knowledge from the top scores and the approaches folks were taking. That way we’ll re-baseline the community up to whatever the state of the art is and then run the contest again next year. We’ll keep doing that on a yearly basis until we get 85%.", "(01:10:33) – Resisting benchmark saturation", "Dwarkesh Patel 01:10:33", "I'll give people some context on why I think this prize is very interesting. I was having conversations with my friends who are very much believers in models as they exist today. First of all, it was intriguing to me that they didn't know about ARC. These are experienced ML researchers.", "This happened a couple nights ago. We went to dinner and I showed them an example problem. They said, \"of course, an LLM would be able to solve something like this.\" We took a screenshot of it. We just put it into our ChatGPT app. It didn’t get the pattern.", "So it's very interesting. It is a notable fact. I was playing devil's advocate against you on these kinds of questions but this is a very intriguing fact. This prize is extremely interesting because we're going to learn something fascinating one way or another.", "With regards to the 85%, separate from this prize, I'd be very curious if somebody could replicate that result. Obviously in psychology and other kinds of fields, which this result seems to be analogous to, when you run tests on some small sample of people they're often hard to replicate.", "I'd be very curious to know, if you try to replicate this, how does the average human perform on ARC? I’m also curious about the difficulty of how long it will take to crack this benchmark. It's very interesting thinking of the other benchmarks that are now fully saturated, like MMLU and MATH . Dan Hendrycks and Collin Burns who did MMLU and MATH, they were grad students or college students when they made it.", "The goal when they made it just a couple of years ago was that it would be a test of AGI. Of course they got totally saturated. I know you'll argue that these are tests of memorization. But there’s been a pattern we’ve seen. In fact, Epoch AI has a very interesting graph where you see this almost exponential curve. It gets 5%, 10%, 30%, 40% as you increase the compute across models, and then it just shoots up.", "In the GPT-4 technical report , they had this interesting graph of the HumanEval problem set, which was 22 coding problems. They had to graph it on the mean log pass curve. Early on in training, or even with smaller models, they can have the right idea of how to solve this problem.", "It takes a lot of reliability to make sure they stay on track to solve the whole problem. You really want to upweight the signal where they get it right at least some of the time, maybe 1/100 or 1/1000. They go from 1/1000 to 1/100 and 1/10 and then they just totally saturate it.", "Here’s the question this is all leading up to. Why won't the same thing happen with ARC? People had to try really hard with bigger models. Now they figured out these techniques like the ones Jack Cole has figured out that can get 35% with only a 240 million parameter language model.", "Shouldn't we see the same pattern we saw across all these other benchmarks? You just eke out and then once you get the general idea, you just go all the way to a hundred?", "François Chollet 01:13:27", "That's an empirical question. We'll see in practice what happens. What Jack Cole is doing is actually very unique. It's not just pre-training an LLM and then prompting it. He's actually trying to do active inference.", "Mike Knoop 01:13:40", "He's doing test-time, right? He's doing test-time fine-tuning.", "François Chollet 01:13:42", "Exactly, he’s doing test-time fine-tuning. This is actually trying to lift one of the key limitations of LLMs. At inference time, they cannot learn anything new. They cannot adapt on the fly to what they're seeing. He's actually trying to learn.", "What he's doing is effectively a form of program synthesis. LLMs contain a lot of useful building blocks, programming building blocks. By fine-tuning it on the task at test time, you are trying to assemble these building blocks into the right pattern that matches the task. This is exactly what program synthesis is about.", "I would contrast this approach with discrete program search. In discrete program search, you're trying to assemble a program from a set of primitives. You have very few primitives. For instance, people working on discrete program search on ARC tend to work with DSLs that have 100 to 200 primitive programs. It’s a very small DSL but they're trying to combine these primitives into very complex programs. There's a very deep depth of search.", "On the other hand, is what Jack Cole is doing with LLMs. He's got this vector program database DSL of millions of building blocks in the LLM. They’re mined by pre-training the LLM, not just on a ton of programming problems, but also on millions of generated ARC-like tasks. You have an extraordinarily large DSL and the fine-tuning is very shallow recombination of these primitives.", "Discrete program search is very deep recombination with a very small set of primitive programs. The LLM approach is the same but on the complete opposite end of that spectrum. You scale up the memorization by a massive factor and you're doing very shallow search. They are the same thing, just different ends of the spectrum.", "I think where you're going to get the most value for your compute cycles is somewhere in between. You want to leverage memorization to build up a richer, more useful bank of primitive programs. You don't want them to be hard-coded like what we saw for the typical RTS. You want them to be learned from examples. You also want to do some degree of deep search. As long as you're only doing very shallow search, you are limited to local generalization. If you want to generalize further and more broadly, depth of search is going to be critical.", "Dwarkesh Patel 01:16:26", "I might argue that the reason that he had to rely so heavily on the synthetic data was because he used a 240 million parameter model. The Kaggle competition at the time required him to use a P100 GPU which has like a tenth or something of the flops of an H100 .", "For context for the listeners, the frontier models today are literally a thousand times bigger than that. For your competition, submissions can't make any API calls, can't go online, and have to run on NVIDIA Tesla P100. It's significantly less powerful.", "Mike Knoop 01:17:20", "There's basically a 12 hour runtime limit. There's a forcing function of efficiency in the eval.", "François Chollet 01:17:24", "But here's the thing, you only have 100 test tasks. The amount of computing available for each task is actually quite a bit, especially if you contrast that with the simplicity of each task.", "Dwarkesh Patel 01:17:35", "Basically, it would be 7 minutes per task. People who have tried to do these estimates of how many flops does a human brain have. You can take them with a grain of salt but as a sort of anchor, it's basically the amount of flops an H100 has.", "Maybe you would argue that a human brain can solve this question in faster than 7.2 minutes. Even with a tenth of the compute, you should be able to do it in seven minutes. Obviously we have less than petabytes of fast access memory in the brain and these 29 GB or whatever in the H100.", "(01:18:08) – ARC scores on frontier vs open source models", "Dwarkesh Patel 01:18:08", "The broader point is that I wish there were a way to also test this prize with some sort of scaffolding on the biggest models, as a way to test whether scaling is the path to solving ARC.", "François Chollet 01:18:26", "Absolutely. In the context of the competition, we want to see how much progress we can do with limited resources. But you're entirely right that it's a super interesting open question, what could the biggest model out there actually do on ARC?", "We actually also want to make available a private, one-off track where you can submit to us a VM . You can put on it any model you want. You can take one of the largest open source models out there, fine-tune it, do whatever you want, and just give us an image. Then we run it on the H100 for 24 hours or something. You see what you get.", "Mike Knoop 01:19:03", "It's worth pointing out that there's two different test sets. There is a public test set that's in the public GitHub repository that anyone can use to train. You can put in an open API call, whatever you'd like to do. Then there's the private test set, which is the hundred that is actually measuring the state of the art.", "It is pretty open-ended and interesting to have folks at least attempt to use the public test set and go try it. Now there is an asterisk on any score that's reported on against the public test set because it is public. It could have leaked into the training data somewhere.", "François Chollet 01:19:32", "This is actually what people are already doing. You can already try to prompt one of the best models, like the latest Gemini or the latest GPT-4, with tasks from the public evaluation set. Again, the problem is that these tasks are available as JSON files on GitHub. These models are also trained on GitHub. So they're actually trained on these tasks.", "That kind of creates uncertainty. If they can actually solve some of the tasks, is that because they memorized the answer or not? Maybe you would be better off trying to create your own private, ARC-like very novel test set. Don't make the tasks difficult. Don't make them complex. Make them very obvious for humans, but make sure to make them original as much as possible. Make them unique, different, and see how well your GPT-4 or GPT-5 does on them.", "Dwarkesh Patel 01:20:25", "There have been tests on whether these models are being overtrained on these benchmarks.", "Scale recently did this with GSM8K . They basically replicated the benchmark, but with different questions. Some of the models actually were extremely overfit on the benchmark, like Mistral and so forth. Frontier models like Claude and GPT actually did as well on their novel benchmark as they did on the specific questions that were in the existing public benchmark.", "I would be relatively optimistic about them just training on the JSON. I was joking with Mike that you should allow API access but keep an even more private validation set of these ARC questions. So you allow API access and people can play with GPT-4 scaffolding to enter into this contest. Maybe later on you run the validation set on the API. If it performs worse than the test set that you originally allowed the API to access, that means that OpenAI is training on your API calls. You go public with this and show them like, \"oh my god, they've leaked your data.\"", "Mike Knoop 01:21:31", "We do want to evolve the ARC dataset. That is a goal that we want to do. François mentioned that it's not perfect.", "François Chollet 01:21:38", "Yeah, ARC is not a perfect benchmark. I made it over four years ago, almost five now. This was in a time before LLMs. We’ve actually learned a lot since about what potential flaws there might be. There is some redundancy in the set of tasks, which is of course against the goals of the benchmark. Every task is supposed to be unique in practice. That's not quite true. Every task is also supposed to be very novel, but in practice, they might not be. They might be structurally similar to something that you might find online somewhere.", "So we want to keep iterating and release an ARC 2.0 version later this year. When we do that, we're going to want to make the old private test set available. Maybe we won't be releasing it publicly, but what we could do is just create a test server where you can query, get a task, and submit a solution. Of course you can use whatever frontier model you want there.", "Because you actually have to query this API, you're making sure that no one is going to accidentally train on this data. It's unlike the current public ARC data, which is literally on GitHub. There's actually no question about whether the models are trained on it. They are because they train on GitHub.", "By gating access to requiring this API, we would avoid this issue. For people who want to try whatever technique they have in mind, using whatever resources they want, that would be a way for them to get an answer.", "Dwarkesh Patel 01:23:11", "I wonder what might happen. I'm not sure. One answer is that they come up with a whole new algorithm for AI with some explicit program synthesis. Now we're on a new track. Another is that they did something hacky with the existing models in a way that actually is valid, which reveals that maybe intelligence is more of getting things to the right part of the distribution. Then it can reason.", "In that world, that will be interesting. Maybe that'll indicate that you had to do something hacky with current models. As they get better you won't have to do something hacky. I'm also going to be very curious to see if these multimodal models will natively perform much better at ARC-like tests.", "Mike Knoop 01:23:51", "If ARC survives three months from here, we'll up the prize. We're about to make a really important moment of contact with reality by blowing up the prize, putting a much bigger prize pool against it. We're going to learn really quickly if there's a lot of low-hanging fruit ideas.", "Again, I think new ideas are needed. Anyone listening might have the idea in their head. I'd encourage everyone to give it a try. As time goes on, that adds strength to the argument that we've stalled out in progress and that new ideas are necessary to beat ARC.", "François Chollet 01:24:19", "Yeah, that's the point of having a money prize. You attract more people and you get them to try to solve it. If there's an easy way to hack the benchmark, that reveals that the benchmark is flawed. You’re going to know about it. In fact, that was the point of the original Kaggle competition for ARC back in 2020. I was running this competition because I had released this dataset and I wanted to know if it was hackable, if you could cheat.", "There was a small money prize at the time. It was like $20K. This was right around the same time as GPT-3 was released. People of course tried GPT-3 on the public data. It scored zero. What the first contest taught us is that there is no obvious shortcut. Now there's more money. There's going to be more people looking into it. We're going to find out. We're going to see if the benchmark is going to survive.", "Let’s say we end up with a solution that is not like trying to brute force the space of possible ARC tasks. It’s just trained on core knowledge. I don't think it's necessarily going to be in and of itself AGI, but it's probably going to be a huge milestone on the way to AGI. What it represents is the ability to synthesize a problem-solving program from just two or three examples. That alone is a new way to program.", "It's an entirely new paradigm for software development. You can start programming potentially quite complex programs that will generalize very well. Instead of programming them by coming up with the shape of the program in your mind and then typing it up, you're actually just showing the computer what output you want. You let the computer figure it out. That's what is extremely powerful.", "(01:26:19) – Possible solutions to ARC Prize", "Dwarkesh Patel 01:26:19", "I want to riff a little bit on what kinds of solutions might be possible here, and which you would consider defeating the purpose of ARC vs. which are valid.", "Here's one I'll mention. My friends Ryan and Buck stayed up last night because I told them about this. They were like, \"oh, of course LLMs can solve this.\"", "Mike Knoop 01:26:37", "Good. Thank you for spreading the word.", "Dwarkesh Patel 01:26:39", "They were trying to prompt Claude Opus on this and they say they got 25% on the public ARC test. What they did was have other examples of some of the ARC tests and in context explain the reasoning of why you went from one output to another output and now you have the current problem. I think there was also expressing the JSON in a way that is more amenable to the tokenizer.", "Another thing was using the code interpreter. Do you think the code interpreter, which keeps getting better as these models get smarter, is just the program synthesis right there? What they were able to do was get the actual output of the cells, that JSON output, through the code interpreter, like “write the Python program that gets the right output here.”", "Do you think that the program synthesis kind of research you're talking about will just look like using the code interpreter in large language models?", "François Chollet 01:27:36", "I think whatever solution we see that will score well is probably going to need to leverage some aspects from deep learning models and LLMs in particular. We've shown already that LLMs can do quite well. That's basically the Jack Cole approach.", "We've also shown that pure discrete program search from a small DSL does very well. Before Jack Cole, this was the state of the art. In fact, it's still extremely close to the state of the art and there's no deep learning involved at all in these models.", "We have two approaches that have basically no overlap, that are doing quite well. They're very much at two opposite ends of one spectrum. On one end, you have these extremely large banks of millions of vector programs, but very shallow recombination, simplistic recombination. On the other end, you have very simplistic DSLs, 100-200 primitives, but very deep, very sophisticated program search.", "The solution is going to be somewhere in between. The people who are going to be winning the ARC competition and making the most progress towards near-term AGI are going to be those that manage to merge the deep learning paradigm and the discrete program search paradigm into one elegant way.", "You asked what would be legitimate and what would be cheating. If you want to add a code interpreter to the system, I think that's great. That's legitimate. The part that would be cheating is trying to anticipate what might be in the test, like brute force the space of possible tasks and train a memorization system on that. You rely on the fact that you're generating so many tasks, millions and millions. Inevitably there's going to be some overlap between what you're generating and what's in the test set.", "That's defeating the purpose of the benchmark because then you can just solve it with that and you need to adapt just by fetching a memorized solution. Hopefully ARC will resist that, but no benchmark is perfect. Maybe there's a way to hack it. We're going to get an answer very soon.", "Dwarkesh Patel 01:29:41", "Although some amount of fine tuning is valid because they have to use open source language models to compete here and they’re natively language. They’d need to be able to think in the ARC-type way.", "François Chollet 01:29:58", "Yes. You want to input core knowledge, ARC-like core knowledge, into the model but surely you don't need tens of millions of tasks to do this. Core knowledge is extremely basic.", "Dwarkesh Patel 01:30:08", "If you look at some of these ARC-type questions, I actually do think they rely a little bit on things I have seen throughout my life. For example, something bounces off a wall and comes back and you see that pattern. I've played arcade games and I've seen Pong or something.", "For example, you see the Flynn effect and people's intelligence, as measured on Raven's progressive matrices, increasing on these kinds of questions. It's probably a similar story where since childhood now, we actually see these sorts of patterns in TV and whatever, these spatial patterns.", "So I don't think this is core knowledge. This is actually also part of the “fine-tuning” that humans have as they grow up, seeing different kinds of spatial patterns and trying to pattern match to them.", "François Chollet 01:30:54", "I would definitely file that under core knowledge. Core knowledge includes basic physics, for instance bouncing or trajectories. That would be included.", "But yeah, you're entirely right. The reason why, as a human, you're able to quickly figure out the solution is because you have this set of building blocks, this set of patterns, in your mind that you can recombine.", "Dwarkesh Patel 01:31:12", "Is core knowledge required to attain intelligence? For any algorithm you have, does the core knowledge have to be, in some sense, hardcoded? Or can even the core knowledge be learned through intelligence?", "François Chollet 01:31:22", "Core knowledge can be learned. In the case of humans, some amount of core knowledge is something that you're born with. We're actually born with a small amount of knowledge about the world we're going to live in. We're not blank slates.", "But most core knowledge is acquired through experience. The thing with core knowledge is that it's not going to be acquired in school for instance. It's actually acquired very early in the first 3-4 years of your life. By age four, you have all the core knowledge you're going to need as an adult.", "Dwarkesh Patel 01:31:52", "Interesting. On the prize itself, I'm super excited to see the open source versions, maybe with a Llama (70B) or something, and what people can score in the competition itself. I’m also excited to test specifically the scaling hypothesis and I'm very curious if you can prompt on the public version of ARC.", "You won't be able to submit that to this competition itself but I'd be very curious to see if people can crack that and get ARC working there. Would that update your views on AGI?", "Mike Knoop 01:32:23", "It's really going to be motivating. We're going to keep running the contest until somebody puts a reproducible open source version in the public domain. Even if somebody privately beats the ARC eval, we're going to still keep the prize money until someone can reproduce it and put the public reproducible version out there.", "François Chollet 01:32:37", "Exactly. The goal is to accelerate progress towards AGI. A key part of that is that any meaningful bits of progress need to be shared, need to be public, so everyone can know about it and try to iterate on it. If there's no sharing, there's no progress.", "Dwarkesh Patel 01:32:51", "What I'm especially curious about is disaggregating the bets. Can we make an open version of this or is this just possible with scaling? We can test both of them based on the public and the private version.", "Mike Knoop 01:33:06", "We're making contact with reality as well with this. We're gonna learn a lot about what the actual limits of the compute are. If someone showed up and said, “hey, here's a closed source model and I'm getting +50% with it,” that would probably update us. We’d think, “okay, perhaps we should increase the amount of compute that we give on the private test set in order to balance.”", "Some of the decisions initially are somewhat arbitrary in order to learn about what people want. What does progress look like? Both of us are committed to evolving it over time in order to be the best or the closest to perfect as we can get it", "Dwarkesh Patel 01:33:32", "Awesome. Where can people go to learn more about the prize and maybe try their hand at it?", "Mike Knoop 01:33:36", "Arcprize.org . It’s live now.", "Dwarkesh Patel 01:33:37", "It goes live today. One million dollars is on the line, people.", "Thank you guys for coming on the podcast. It's super fun to go through all the cruxes on intelligence and get a different perspective and also to announce a prize here. This is awesome.", "Mike Knoop 01:33:50", "Thank you for helping break the news.", "François Chollet 01:33:51", "Thank you for having us." ]
[ "https://fchollet.com/", "https://keras.io/", "https://zapier.com/blog/author/mike-knoop/", "https://zapier.com/", "https://arcprize.org/", "https://arxiv.org/pdf/1911.01547", "https://en.wikipedia.org/wiki/Artificial_intelligence", "https://en.wikipedia.org/wiki/Large_language_model", "https://lab42.global/arc/core-knowledge/", "https://slideslive.com/38935790/abstraction-reasoning-in-ai-systems-modern-perspectives", "https://en.wikipedia.org/wiki/Program_synthesis", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://blogs.nvidia.com/blog/what-is-a-pretrained-ai-model/", "https://en.wikipedia.org/wiki/Evolution_of_human_intelligence#:~:text=In%20fact%2C%20humans%20have%20shown,the%20social%20environment%20around%20us.", "https://en.wikipedia.org/wiki/Connectome", "https://lab42.global/wp-content/uploads/2022/10/ARC-Explanation2.svg", "https://en.wikipedia.org/wiki/Multimodal_learning", "https://en.wikipedia.org/wiki/Spatial_ability", "https://en.wikipedia.org/wiki/JSON", "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)", "https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)", "https://en.wikipedia.org/wiki/Devil%27s_advocate", "https://scrollprize.org/", "https://www.dwarkeshpatel.com/p/nat-friedman", "https://en.wikipedia.org/wiki/Herculaneum_papyri", "https://scrollprize.org/firstletters", "https://lukefarritor.com/about/", "https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk", "https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices", "https://x.com/Jcole75Cole", "https://www.techopedia.com/experts/what-is-the-role-of-parameters-in-ai", "https://hazelcast.com/glossary/machine-learning-inference/", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://en.wikipedia.org/wiki/Manifold_alignment", "https://paperswithcode.com/dataset/gsm8k", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://en.wikipedia.org/wiki/Set_theory", "https://en.wikipedia.org/wiki/GPT-3", "https://arxiv.org/abs/2403.05530", "https://www.hopsworks.ai/dictionary/in-context-learning-icl#:~:text=In%2Dcontext%20learning%20(ICL)%20learns%20a%20new%20task%20from,objective%20of%20next%20token%20prediction.", "https://en.wikipedia.org/wiki/Caesar_cipher", "https://en.wikipedia.org/wiki/Self-driving_car#:~:text=The%20Union%20of%20Concerned%20Scientists,%2C%20and%20drive%20the%20vehicle.%22", "https://stackoverflow.com/", "https://openai.com/index/chatgpt/", "https://github.com/", "https://en.wikipedia.org/wiki/Synapse", "https://en.wikipedia.org/wiki/Machine_learning", "https://campus.datacamp.com/courses/chatgpt-prompt-engineering-for-developers/advanced-prompt-engineering-strategies?ex=1#:~:text=Few%2Dshot%20prompting%20is%20a,the%20model%20to%20respond%20to.", "https://en.wikipedia.org/wiki/GPT-2", "https://openai.com/index/gpt-4-research/", "https://en.wikipedia.org/wiki/Search_algorithm", "https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking", "https://en.wikipedia.org/wiki/Intuition_pump", "https://en.wikipedia.org/wiki/Python_(programming_language)", "https://en.wikipedia.org/wiki/Pathfinding", "https://en.wikipedia.org/wiki/Real-time_strategy", "https://en.wikipedia.org/wiki/Fog_of_war", "https://en.wikipedia.org/wiki/FLOPS", "https://en.wikipedia.org/wiki/Synthetic_data", "https://en.wikipedia.org/wiki/Deep_learning", "https://arxiv.org/abs/2201.02177", "https://en.wikipedia.org/wiki/Minimum_description_length", "https://en.wikipedia.org/wiki/Overfitting", "https://en.wikipedia.org/wiki/Meta-learning_(computer_science)", "https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence", "https://en.wikipedia.org/wiki/C%2B%2B", "https://en.wikipedia.org/wiki/Gradient_descent", "https://en.wikipedia.org/wiki/Combinatorial_search", "https://en.wikipedia.org/wiki/Knowledge_distillation#:~:text=In%20machine%20learning%2C%20knowledge%20distillation,might%20not%20be%20fully%20utilized.", "https://deepai.org/machine-learning-glossary-and-terms/weight-artificial-neural-network", "https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken", "https://en.wikipedia.org/wiki/Jean_Piaget", "https://en.wikipedia.org/wiki/Developmental_psychology", "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6220331/", "https://en.wikipedia.org/wiki/Heritability_of_IQ", "https://machinelearningmastery.com/weight-initialization-for-deep-learning-neural-networks/", "https://en.wikipedia.org/wiki/Brain_size#Intelligence", "https://deepmind.google/technologies/gemini/flash/", "https://zapier.com/", "https://zapier.com/blog/author/mike-knoop/", "https://arxiv.org/pdf/1911.01547", "https://www.kaggle.com/c/abstraction-and-reasoning-challenge", "https://arxiv.org/abs/2201.11903", "https://arxiv.org/abs/2005.14165", "https://zapier.com/blog/author/bryan-helmig/", "https://arxiv.org/abs/2009.03300", "https://en.wikipedia.org/wiki/Generative_artificial_intelligence", "https://xkcd.com/356/", "https://arxiv.org/abs/2303.08774", "https://arxiv.org/abs/2403.05530", "https://blog.google/technology/ai/long-context-window-ai-models/", "https://arxiv.org/abs/1706.03762", "https://en.wikipedia.org/wiki/Open-source_artificial_intelligence", "https://en.wikipedia.org/wiki/Devin_AI", "https://www.anthropic.com/news/claude-3-family", "https://openai.com/index/hello-gpt-4o/", "https://arxiv.org/abs/2103.03874", "https://people.eecs.berkeley.edu/~hendrycks/", "https://collinpburns.com/", "https://epochai.org/blog/how-predictable-is-language-model-benchmark-performance", "https://arxiv.org/pdf/2303.08774", "https://arxiv.org/abs/2107.03374", "https://en.wikipedia.org/wiki/Domain-specific_language", "https://www.nvidia.com/en-us/data-center/tesla-p100/", "https://www.nvidia.com/en-us/data-center/h100/", "https://en.wikipedia.org/wiki/Virtual_machine", "https://arxiv.org/html/2405.00332v1", "https://en.wikipedia.org/wiki/Mistral_AI", "https://www.kaggle.com/c/abstraction-and-reasoning-challenge", "https://en.wikipedia.org/wiki/Pong", "https://en.wikipedia.org/wiki/Flynn_effect", "https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama-70b/", "https://arcprize.org/" ]
https://www.dwarkesh.com/p/grant-sanderson
Grant Sanderson (3Blue1Brown) - Past, Present, & Future of Mathematics
[ "(0:00:00) - Does winning math competitions require AGI?", "Dwarkesh Patel 0:00:45", "Today I have the pleasure of interviewing Grant Sanderson of the YouTube channel, 3blue1brown . You all know who Grant is and I'm really excited about this one.", "By the time that an AI model can get gold in the International Math Olympiad, is that just AGI? Given the amount of creative problem solving and chain of thought required to do that.", "Grant Sanderson 0:01:04", "To be honest, I have no idea what people mean when they use the word AGI. I think if you ask 10 different people what they mean by it, you're going to get 10 slightly different answers. And it seems like what people want to get at is a discrete change that I don't think actually exists. Where you've got, AIs up to a certain point are not AGI. They might be really smart, but it's not AGI. And then after some point, that's the benchmark where now it's generally intelligent.", "The reason that world model doesn't really fit is it feels a lot more continuous where GPT-4 feels general in the sense that you have one training algorithm that applies to a very, very large set of different kinds of tasks that someone might want to be able to do. And that's cool. That's an invention that people in the sixties might not have expected to be true for the nature of how artificial intelligence can be programmed.", "So it's generally intelligent, but maybe what people mean by “Oh, it's not AGI.” is you've got certain benchmarks where it's better than most people at some things, but it's not better at most people than others.", "At this point, it's better than most people at math. It's better than most people at solving AMC problems and IMO problems. It’s just not better than the best. And so maybe at the point when it's getting gold in the IMO, that's a sign that, “Okay, it's as good as the best.” And we've ticked off another domain, but I don't know, is what you mean by AGI that you've enumerated all the possible domains that something could be good at and now it's better than humans at all of them?", "Dwarkesh Patel 0:02:32", "Or enough that it could take over a substantial fraction of human jobs. It’s impressive right now but it's not going to be even 1% of GDP. But in my mind, if it's getting gold in IMO, having seen some of those problems from your channel, I'm thinking “Wow, that's really coming after podcasters and video animators.”", "Grant Sanderson 0:02:54", "I don't know. That feels orthogonal because getting a gold in the IMO feels a lot more like being really, really good at Go or Chess. Those feel analogous. It's super creative. I don't know chess as well as the people who are into it, but everything that I hear from them, the sort of moves that are made and choices have all of the air of creativity. I think as soon as they started generating artwork, then everyone else could appreciate, “Oh, there's something that deserves to be called creative here.”", "I don't know how it would look when people get them to be getting golds at the IMO but I imagine it's something that looks a little bit like how AlphaGo is trained, where you have it play with itself a whole bunch. Math lends itself to synthetic data in the ways that a lot of other domains don't. You could have it produce a lot of proofs in a proof checking language like Lean, for example, and just train on a whole bunch of those. And ask, is this a valid proof? Is this not a valid proof? And then counterbalance that with English written versions of something.", "I imagine what it looks like once you get something that is solving these IMO level things, is one of two things. Either it writes a very good proof that you feel is unmotivated, because anyone who reads math papers has this feeling that there are two types. There are the ones where you morally understand why the result should be true and then there are the ones where you're like, “I can follow the steps. Why would you have come up with that? I don't know. But I guess that shows that the result is true.” And you're left wanting something a little bit more.", "And so you could imagine if it produces that to get a gold in the IMO, is that the same kind of ability as what is required to replace jobs? Not really. The impediments between where it is now and replacing jobs feels like a whole different set of things like having a context window that is longer than some small thing such that you can make connections over long periods of time and build relationships and understand where someone's coming from and the actual problem solving part of it. It's a sign that it would be a more helpful tool, but in the same way that Mathematica can help you solve math problems much more effectively.", "Dwarkesh Patel 0:05:02", "Tell me why I should be less amazed by it or maybe put it in a different context but the reason I would be very impressed is… With chess, obviously this is not all the chess programs are doing, but there's a level of research you can do to narrow down the possibilities. And more importantly, in the math example, it seems that with some of the examples you've listed on your channel, the ability to solve the problem is so dependent on coming up with the right abstraction to think about it, coming up with ways of thinking about it that are not evident in the problem itself or in any other problem in any other test, that seems different from just a chess game where you don't have to think about what is the largest structure of this chess game in the same way as you do with the IMO problem.", "Grant Sanderson 0:05:47", "I think you should ask people who know a lot about Go and Chess and I'd be curious to hear their opinions on it because I imagine what they would say is, if you're going to be as good at Go as AlphaGo is you’re also not doing tree search, at least exclusively. It's not dependent on that, because you get this combinatorial explosion, which is why people thought that game would be so much harder for so much longer. There sort of has to be something like a higher level structure in their understanding.", "Don't get me wrong, I anticipate being very impressed when you get AIs that can solve these IMO problems, because you're absolutely right, there's a level of creativity involved. The only claim I'm making is that being able to do that feels distinct from the impediments between where we are now and the AIs take over all of our jobs or something. It seems like it's going to be another one of those boxes that's this historic moment analogous to chess and Go, more so than it's going to be analogous to the Industrial Revolution.", "Dwarkesh Patel 0:06:52", "I'm surprised you wouldn't be more compelled.", "Grant Sanderson 0:06:55", "I am compelled.", "Dwarkesh Patel 0:06:55", "Or you just don't think that skill of — this problem is isomorphic to this completely different way of thinking about what's happening in the situation and here's me going through the 50 steps to put all that together into this one proof. I'm surprised you don't think that's upstream of a lot of valuable tasks.", "Grant Sanderson 0:07:20", "I think it's a similar level of how impressed I was with the stable diffusion type stuff, where you ask for a landscape of beautiful mountains, but made out of quartz and gemstones. And it gives you this thing which has all of the essence of a landscape, but it's not literally a landscape. And so you realize that there's something beyond the literal that's understood here. That's very impressive.", "In the same way, to solve one of these math problems that requires creativity you can't just go from the definitions. You're 100% right. You need this element of lateral thinking, which is why we find so much joy in finding the solutions ourselves or even just seeing other people get those solutions. It's exactly the kind of joy that you get out of good artistic analogies and comparisons and mixing and matching. I'm very impressed by all of that.", "I think it's in the same category. And maybe I don't have the same opinions as a lot of other people with this hard line between pre-AGI and post-AGI. I just don't know what they mean by the word AGI. I don't think that you're going to have something that's this measurable discrete step, much less that a math tournament is going to be an example of what that discrete step would look like.", "(0:08:24) - Where to allocate mathematical talent?", "Dwarkesh Patel 0:08:24", "Interesting.", "Applied mathematicians. Where do we put them in society where they can have the biggest benefit? A lot of them go into computer science and IT and I'm sure there's been lots of benefits there. Where are there parts of society where you can just have a whole bunch of mathematicians go in and they can make things a lot better? Transportation or logistics or manufacturing? But where else do you think they might be useful?", "Grant Sanderson 0:08:48", "That's such a good question. In some ways, I'm like the worst person to ask about that.", "[Laughter]", "This isn't going to answer your question, but instead is going to fan the flames of why I feel it's an important question.", "I have actually been thinking recently about if it's worth making an out-of-typical video that's specifically addressed at inspiring people to ask that, especially students who are graduating. Because I think this thing happens when you fall in love with math or some sort of technical field, by default in school, you study that. And when you're studying that, effectively you're going through an apprenticeship to be an expert in that or a researcher in that. The structure of studying physics in a university or math in a university, even though they know that not all majors are going to go into the field. The people that you're gaining mentorship from are academics and our research is in the field. So it’s hard not to be apprenticing in that.", "And I also have noticed that when I go and give talks at universities or things like this and students come up after and they're saying hi, there's a lot of them like, “Grant, the videos were really inspiring. You're the reason that I studied math. You're the reason I’m going into grad school.”", "And there's this little bell in the back of my mind that's like, “Cool, cool. I'm amazed. I don't know if I believe that I was wholly responsible for it, but it’s cool to have that impact.”", "But … do I want that?", "[Laughter]", "Is this a good thing to get more people going into math PhDs? On the one hand, I unequivocally want more people to self identify as liking math. That's very good. But those who are doing that necessarily get shuffled into the traditional outlets like math academia.", "I think you highlighted it very right. Math academia, finance and computer science, data science, something in there in general are very common things to go to. And as a result, they almost certainly have an over allocation of talent. All three of those are valuable, right? I'm not saying those are not valuable things to go into. But if you were playing God and shifting around, where do you want people to go? Again, I'm not answering your question. I'm just asking it in other words because I don't really know.", "I think you should probably talk to the people who made that shift of which there aren't a huge number, but Eric Lander is maybe one good example. Jim Simons would maybe be another as people who were doing a very purely academic thing and then decided to shift to something very different.", "Now I have sort of had this thought that it's very beneficial to insert some forcing function that gets the pure mathematicians to spend some of their time in a non pure math setting. NSF grants coming with a requirement that 10% of your time goes towards a collaboration with another department or something like that. The thought being these are really good problem solvers in a specific category of problems and to just distribute that talent elsewhere might be helpful.", "When I run this by mathematicians, sometimes there's a mixed response where they're like, “I don't know if we'd be all that useful.” There's a sense that the aesthetic of what constitutes a good math problem is by its nature rooted in the purity of it such that it's maybe a little elitist to assume that just because people are really, really good at solving that kind of problem that somehow their abilities are more generalizable than other people's abilities.", "Why ask about the applied mathematicians rather than saying shouldn't the applied biologists go and work in logistics and things like that because they also have a set of problem solving abilities that are maybe generalizable.", "In the back of my mind I think “No, but the mathematicians are special. There really is something general about math.” So I don't have the answers. I will say I'm actually very curious to hear from people for what they think the right answers are or from people who made that switch. Let's say they were a math major or something adjacent like computer science physics. And then they decided that they wanted to pour themselves into something not because that was the academic itch that they were scratching by being good at school and getting to appreciate that. But because they stepped back and said what impact do I want to make on the world?", "I'm hungry for more of those stories because I think it could be very compelling to convey those specifically to my audience who is probably on track to go into just the traditional math type fields and maybe there's room to have a little bit of influence to disperse them more effectively.", "But I don't know. I don't know what more effectively looks like because at the end of the day I'm like I'm a Math YouTuber. I'm not someone who has a career in logistics or manufacturing or all of these things in such a way that I can have an in tune feel for where there is a need for this specific kind of abstract problem solving.", "Dwarkesh Patel 0:13:19", "It might be useful to speculate on how an undergrad or somebody who is a young math whiz might even begin to contemplate — here's where I can have an edge.", "I'm actually remembering a former podcast guest Lars Doucet , he was a game designer and he started learning about Georgism which is this idea that you should tax land and only land. And so he got really interested in not only writing about those ideas but also with — well, if you're going to tax land you have to figure out what the value of land is. How do you figure out the value of land? There's all these algorithms of how you do this optimally based on neighboring land and how to average across land. And there's a lot of intricacies there.", "He now has a startup where he just contracts with cities to implement these algorithms to help them assess the value of their land which makes property taxes much more feasible. That's another example where the motivation was more philosophical but his specialty as a technical person helped him make a contribution there.", "Grant Sanderson 0:14:22", "I think that's perfect. Probably the true answer is that you're not going to give a universal thing. For any individual is going to be based on where their life circumstances connect them into something either because he had an interest in Georgism for whatever reason. But if someone I don't know their dad runs a paper mill and they're connected to the family business in that way and realize they can plug themselves in a little bit more efficiently.", "You're going to have this wide diversity of the ways that people are applying themselves that does not take the form of general advice given from some podcast somewhere but instead takes the form of simply inviting people to think critically about the question rather than following the momentum of what being good at school implies about your future.", "Dwarkesh Patel 0:15:04", "We were talking about this before the interview started but we have a much better grasp on reality based on our mathematical tools. I'm not talking about anything advanced. Literally being able to count in the decimal system that even the Romans didn't have.", "How likely do you think it is that something that significant would be enjoyed by our descendants in hundreds of thousands of years or do you think that that kind of basic numeracy level stuff those kinds of thinking tools are basically all gone?", "Grant Sanderson 0:15:30", "Just so I understand the question right, you're talking about how having a system for numbers changes the way that we think that then lends itself to a better understanding of the world like we can do commerce, things like that. Or we can think in terms of orders of magnitude that would have been hard to think about. We have the word “orders of magnitude” in a way that is hard to write down, much less think about if you're doing Roman numerals. Is there something analogous to that for our descendents?", "Fluency with a programming interface really can help with understanding certain problems. I think when people mess around in a notebook with something and it feels like a really good tool set. There's a way that has the same sensation as adopting a nice notation in that you write something with a small number of symbols but then you discover a lot about the implication of that. In the case of notation, it's because the rules of algebra are very constrained and so when you write something you can go through an almost game-like process to see how it reduces and expands and then see something that might be non-trivial.", "And in the case of programming, of course the machine is doing the crunching and you might get a plot that reveals some extra data. I think we're maybe at a phase where there's room for that to become a much more fluid process such that rather than having these small little bits of friction like you've got to set up the environment, and you got to link it in the notebook, you've got to find the right libraries, that there's something that feels as fluid as when you are good at algebra and you're just at a whiteboard kind of noodling it out.", "I think there's something to be said for the fact that there's still so much more value in paper. if you and I were going to go into some math topic right now. Let’s say you ask me something that's a terrible question for a podcast but I'm like “Oh. Let's actually dig into it.” The right medium to do that is still paper. I think I would break out some paper and we would scribble it out.", "Whenever it becomes the case that the right medium to do that lends itself to simulation and to programming and all that, that feels like it would get to the point where it shifts the way that you even think about stuff.", "(0:17:34) - Grant’s miracle year", "Dwarkesh Patel 0:17:34", "What's up with miracle years ? This is something that has happened throughout science and especially with mathematicians, where they have a single year in which they make up many, if not most, of the important discoveries that they have in their career. Newton, Einstein, Gauss they all had these years. Do you have some explanation of what's going on?", "Grant Sanderson 0:17:55", "What's your take?", "Dwarkesh Patel 0:17:59", "I think there's a bunch of possible explanations. It can't just be youth because youth lasts 10 years not one year so it must have something to do with..", "Grant Sanderson 0:18:06", "Every 35 year old right now is like, “How dare you.”  [Laughter]", "Dwarkesh Patel 0:18:12", "You know what I mean. Maybe 20 years. So yeah, it can't just be that. I don't know there's a bunch of possible things you could say. One is you're in a situation in life where you have nothing else going for you or you're just really free for that one year and then you become successful after that year is over based on what you did.", "But what is your take?", "Grant Sanderson 0:18:31", "I don't know. I agree that's probably multiple factors, not one. One thing could be that the miracle year is like the exhalation and there's been many, many years of inhalation.", "The classic one is Einstein's where his miracle year were also some of the first papers springing onto the scene, and I would guess that a lot of the ideas were not bumping around his head only in that year but it's many many years of thinking about it and coalescing.", "And so you might be in a position where you can build up all of this potential energy and then for whatever reason there's one time in life that lends itself to actually releasing all of that. If I try to reflect on my own history with what I'm doing now I think I didn't appreciate early on how much potential energy I had simply from being a student in college where there's just a bunch of ways of thinking about things, or empathy with new learners, or just cool concepts right? The basic concept behind a video that in fact it was many many years of like all of my time having learned math before I started putting out stuff online that I was able to eat into.", "The well never runs dry there's always a long list of things that I want to cover but in some sense like I recognize that the well was at risk of running dry in a way that I never thought that it could and without being a little deliberate about devoting some of my day not just of output and producing but to stepping back and like learning new things and touching something I never would have that doesn't happen by default.", "I don't know if this is all also the case for the people who have had genuine miracle years where they were like letting out all of this stuff and then it takes a decade to build up that same level of potential energy.", "The other thing is you have everything to gain and nothing to lose when you are young. So even if it's not merely youth, there's a willingness to be creative and there's also none of the obligations that come from having found success before.", "There's certain academics who made an extremely deliberate effort not to let the curse of success happen or there's some term for it but I think maybe James Watson had this standard reply to invitations for you know talks and interviews and things like that. It was basically like, “No to everyone because I just want to be a scientist.” It was much more articulate than that and he has all these nine points but that was the gist of it.", "Short of doing that I think it's very easy for someone to have a lot of other things that eat into their mind share and time and all of that such that even if it's just 20 hours a week, that really interrupts a creative flow.", "Dwarkesh Patel 0:21:12", "Were you a student when you started the channel?", "Grant Sanderson 0:21:14", "Technically, yeah. The very first video was made when I was a senior at Stanford. Basically I had been toying around with just a personal programming project in my last year of college that was the beginnings of what is now the animation tool I work with.", "I didn't intend for it to be a thing that I would use as a math YouTuber. I didn't even really know what a YouTuber was. It was really just like a personal project. It was March of that year that I think that I published the first ever video. It was kind of right at that transition point.", "Dwarkesh Patel 0:21:47", "Would you have done it if you had become a data scientist?", "Grant Sanderson 0:21:55", "Data scientist and Math PhD were the two like 50-50 contenders basically.", "Dwarkesh Patel 0:22:00", "Is there a role in which you started doing that but then later on made Manim or do you think that that was only possible in a world where you had some time to kill in your senior year?", "Grant Sanderson 0:22:11", "If the goal was to make Math YouTube videos it would have been a wild thing to do it by making Manim as the method for it because it's so strikingly inefficient to do it that way. At the very least I probably would have built on top of an existing framework. There's so many things that I would tell my past self if I could go back in time even if the goal was to make that. Certain design decisions that caused pain that could have been fixed earlier on.", "But if the goal was to make videos, there's just so many good tools for making videos I probably would have started with those or if I wanted to script things, maybe I would have first learned After Effects really effectively and then learn the scripting languages around after effects that might have even been better for all I know. I really don't know.", "I just kind of walked into it because the initial project was to make something that could illustrate certain ideas in math especially when it came to visualizing functions as transformations, mapping inputs to outputs, as opposed to graphing. The video output was just a way of knowing that I had completed that personal project in some sense and then it turned out to be fun because I also really enjoy teaching and tutoring.", "Then again there's a lot of other people who make their own tools for math GIFs and little illustrations and things which on the one hand feels very inefficient  If people come across a math GIF on Wikipedia there's a very high probability it comes from this one individual who is just strangely prolific at producing these like Creative Commons visuals and he has his own like home baked thing for how he does it.", "And then there's someone I came across on Twitter Matt Henderson who has these completely beautiful math GIFs and such and again it's a very home baked thing. It is built on top of shaders but he kind of has his own stuff there.", "Maybe there's something to be said for the level of ownership that you feel once it is your own thing that just unlocks a sense of creativity and feeling like, “Hey. I can just describe whatever I want because if I can't already do it I'll just change the tool to make it able to do that”. For all I know, that level of creative freedom is necessary to take on a wide variety of topics but your guess is as good as mine for those counterfactuals.", "Dwarkesh Patel 0:24:26", "This is personally interesting to me because I also started the podcast in college and it was just off track of anything I was planning on doing otherwise. And this is many, many orders of magnitude away from 3Blue1Brown I don't want the audience to you know cringe in unison, but I just think it's interesting like these kinds of projects how often something later on ends up being successful is something that was started almost on a whim as a hobby when you're in college.", "Grant Sanderson 0:24:52", "I will say there's a benefit to starting it in a way that is low stakes. You're not banking on it growing. I had no anticipation of much less an expectation of 3Blue1Brown growing. I think the reason I kind of kept doing it was, in the fork of life where I did the Math PhD and all that, I thought it might be a good idea to have a little bit of a footprint on the internet for Math exposition. I was thinking of it as a very niche thing that maybe some math students and some people who are into math would like, but I could sort of show the stuff as a portfolio, not as an audience size that was meaningful.", "I was surprised by what an appetite there was for the kind of things that I was making and in some ways maybe that's helpful because I see a lot of people who jump in with the goal of being a Youtuber. I think it's the most common at desired job among the youth is to be like a tiktoker or a Youtuber, which think of that what you will, but when you jump in with that as a goal you kind of aim for too large an audience and end up making the content which is best for no one because one, you're probably not that good at making videos yet and if it’s a generally applicable idea, you're competing with like all of the other communicators out there. Whereas, if you do something that's almost unreasonably niche and also you're not expecting it to blow up it's like one you're not going to be disappointed, it's like outstanding when a thousand people view it as opposed to being disappointing and then two, you might be making something that is the best possible version of that content for the audience who watches it because no one else is making that for them because it's too narrow a target.", "The beauty of the internet is that there's an incentive to do that and I don't know if this is the case with your podcast when you're starting out, but not thinking about how can I make this as big as possible actually made it more in depth for those who were listening to it.", "(0:26:44) - Prehistoric humans and math", "Dwarkesh Patel 0:26:44", "Is it surprising to you that prehistoric humans don't seem to have had just basic arithmetic and numeracy? To us with the modern understanding that kind of stuff seems so universally useful and so fundamental that it's shocking that it just doesn't come about naturally in the course of interacting with the world. Is that surprising?", "Grant Sanderson 0:27:09", "You're right that it's so in our bones that it's hard to empathize with not having numeracy. If you think, “Okay. What's the first place that most people think about numbers in their daily lives?” It's linked to commerce and money. Maybe in some ways the question is the same as, is it surprising that early humanity didn't have commerce or didn't deal with money?", "Maybe when you're below Dunbar's number in your communities, a tit for tat structure just makes a lot more sense and actually works well and it would just be obnoxious to actually account for everything.", "Have you come across those studies where anthropologists interview tribes of people that are removed enough from normal society that they don't have the level of numeracy that you or I do? But there's some notion of counting. You have one coconut or nine coconuts like you have a sense of that. But if you ask what number is halfway between one and nine, those groups will answer three whereas you or I or people in our world would probably answer five and because we think on this very linear scale.", "It's interesting that evidently the natural way to think about things is logarithmically, which kind of makes sense. The social dynamics of as you go from solitude to a group of 10 people to a group of 100 people have roughly equal steps in increasing complexity more so than if you go from 1 to 51 to 102 and I wonder if it's it's the case that by adding numeracy in some senses we've also like lost some numeracy or lost some intuition in others, where now if you ask middle school teachers what's a difficult topic to teacher for students to understand they're like logarithms. But that should be deep in our bones right so somehow it got unlearned and maybe it's in the formal sense that it's harder to relearn it, but there's maybe a sense of like numeracy and a sense of quantitative thinking that humans naturally do have that is hard to appreciate when it's not expressed in the same language or in the same ways.", "Dwarkesh Patel 0:29:13", "Yeah, I have seen the thing from Joseph Henrich where there's still existing tribes where they're in this kind of situation. They can do numeracy and arithmetic when it's in very concrete terms, if you're talking about seeds or something but that the abstract concept of a number is not available to them.", "Grant Sanderson 0:29:37", "Do you think the abstract concept of a number is useful to your life?", "Dwarkesh Patel 0:29:38", "Oh yeah.", "Grant Sanderson 0:29:39", "In what ways?", "Dwarkesh Patel 0:29:38", "It's almost like asking — how is the concept of the alphabet useful? It comes up so often. For example, how many lights do I set up for this interview?", "Grant Sanderson 0:29:48", "Is that the concept of an abstract number though? Because it's like two people, two lights. One to one correspondence.", "Did you leverage the abstraction of two as an object which is simultaneously a rational and a real and an integer. Is in the context of a group that has additive structure but also multiplicative. It was just there's light for you light for me.", "I'm pretty sure the abstract idea of a number is important for all of us but I don't think it's immediately obvious. It's more that it shapes the way we think, I'm not sure if it actually changes the way we live. Assuming you don't work in STEM right where you literally are using it all the time.", "Dwarkesh Patel 0:30:31", "Yeah, I'm trying to go through my day and think through where am I using them? There's the obvious stuff like the commerce examples you mentioned where you go to a restaurant and you're figuring out what to pay or what to tip but that seems a very particular example.", "Do I really use numbers that infrequently? I don't know.", "Grant Sanderson 0:30:31", "Many people listening are probably screaming out of their head with much more apt examples but it's hard to say.", "Dwarkesh Patel 0:30:57", "When a mathematician is working on a problem, what is the biggest mental constraint? Is it the working memory? Is it the processing speed? Plants are limited by nitrogen usually, what is the equivalent of nitrogen for a mathematician?", "Grant Sanderson 0:31:11", "That's a fun question. I'm not a research mathematician, I shouldn't pretend like I am. The right people to ask that question would be the research mathematicians. I wonder if you're going to get consistent answers as with so many things there's not one answer.", "Maybe it’s the number of available analogies to be able to draw connections? The more exposure you've had to disparate fields such that you could maybe see that a problem-solving approach that was used here might be useful here. Sometimes that's literally codified in the forms of connections between different fields, as functors between categories or something.", "But sometimes it's a lot more intuitive. Someone's doing a combinatorics type question and they're like, “Oh. Maybe generating functions are a useful tool to bring to bear.” and then in some completely different context of studying prime numbers they're like, “Oh. Maybe it could take a generating function type approach. Maybe you have to massage it to make it work.", "One of the reasons I say this is that one of the tendencies that you've seen in math papers in the last 200 years is that the typical number of authors is much bigger now than before. I think people have this misconception that math is a field with lone geniuses who are coming up with great insights alone next to a blackboard. The reality is that it's a highly collaborative field.", "I remember one of the first times that I was hearing from a mathematician, I was a young kid and was in this math circles event and someone was asking this person, \"What surprised you about your job?” The first thing he said was how much travel was involved. He wasn't expecting that. And it's because you know if you're studying some very specific niche field, the way that you make progress in that is by collaborating with other people in that field or maybe adjacent to that field and there's only so many that they probably aren't at your university. So you travel a lot to work with them.", "These days a lot of that I think happens on Zoom but conferences are still super important and these sorts of events that bring people all under one roof like MSRI, is maybe an example of a place that's trying to do that systematically. You could say that's a social thing but I think it's maybe hitting on this idea that what you want is exposure to as many available analogies. So the short answer to your question, what is nitrogen for mathematicians, is the analogy.", "(0:33:33) - Why is a lot of math so new?", "Dwarkesh Patel 0:33:33", "This actually is an interesting question I wasn't planning on asking you but it just occurred to me.", "Is it surprising how new a lot of mathematics is? Even mathematics that is taught at the high school level. Whereas with physics or biology, that's also new but you can tell a story where we didn't have the tools to look at the cell or to inspect an electron until very recently but we've had mathematicians for 2000-3000 years, who were doing pretty sophisticated things, even the ancient Greeks. Why is linear algebra so new given that fact?", "Grant Sanderson 0:34:07", "I wouldn't have thought of math as being new in that way, especially at the high school level. I remember there's always a sensation that it's frustrating that all of the things are actually way more than a hundred years old, in terms of the names attached to the theorems that you're doing, none of them are remotely modern.", "Whereas in biology, the understanding we have for how proteins are formed is relatively much more modern and you might be just a couple generations away. To some extent there's a raw manpower component to it. How many people did pure math for most of history? For most of history, no one. No one was a pure mathematician. They were a mathematician plus something else or they were a physicist or they were a natural philosopher. And in so far as you're doing natural philosophy, one component of that is developing math but it's not the full extent of what you do.", "Even the ones who we think of as like very, very pure mathematicians in the sense that a lot of their most famous results are pure math like Gauss, actually a lot his output was also centered on very practical problems,", "Maybe since then is when you start to get an era of something more like pure mathematicians. The raw number available that you have the man hours that are being put into developing new theorems, is probably just got this huge spike as the population grows and then also the percentage of the population that has the economic freedom to do something as indulgent as academia grows.", "Maybe it's pretty reasonable that most of it is much, much more recent. That would be my guess.", "Dwarkesh Patel 0:35:44", "Some of these things seem actually pretty modern like information theory. It is less than 100 years old and is pretty fundamental. Theoretically, you could have written that paper a long time ago.", "Grant Sanderson 0:35:49", "That's a really good example and maybe this is a sign that the math that's developed is more in the service of the world that you live in and the adjacent problems that it's used to solve than we typically think of it. On the one hand information theory sets a good example because it's so pure that you could have asked the question, you could have defined the notion of a bit, but evidently there wasn't a strong enough need to think in that way. Whereas when you're doing error correction or you're thinking about actual information channels over a wire and you're at Bell Labs, that's what prompts it.", "Another maybe really good example for that would be Chaos theory. You could easily ask why is Chaos theory so recent? You could have written the Lorenz equations since differential equations existed. Why didn't anyone do that and understand that there was this sort of sensitivity to initial conditions?", "In that case it would maybe be the opposite, where it's not that you need the existence of computers as a problem to solve or the problems that they introduce are the problems to solve but instead you need them to even discover the phenomenon in the first place.", "A lot of original concepts in chaos theory came from basically running simulations or doing things that required a massive amount of computation that simply wouldn't be done by hand. Someone could ask the question but they wouldn't have observed the unexpected phenomenon and there, even if it's questions that are as relevant to a pre-computer world as to a post-computer world like the nature of weather modeling, or just the nature of three-body problem, all of that kind of stuff, somehow without the right tools for thought it just didn't come into the mind.", "So yeah maybe there's other things like that where those questions or pieces of technology that start to fundamentally shape everyone's life will then invariably also shift the mathematician's focus.", "Dwarkesh Patel 0:37:45", "This actually reminds me of the first day of Scott Aaronson’s quantum information class. He said, “What I'm about to describe to you could have been discovered by a mathematician before quantum physics existed. If only they had asked the question of we're going to do probabilities but we're only allowed to use unitaries.” They could have just discovered quantum mechanics or quantum information from there.", "Grant Sanderson 0:38:08", "The thing about math, especially if you're talking about pure axiomatized math, the experience as an undergrad is that you are going through a textbook and it starts with saying here's the axioms of this field and then we're going to deduce from those axioms various different lemmas and theorems and proceed from that.", "With that as the framing you get the impression that you could have just come up with any axioms. Just make up some pile of axioms, deduce what follows from them and the space of possible math is unfathomably huge. So you need some process that culls down what are the useful things to maybe pursue.", "So one of the things that I think is all too often missing in those pure math textbooks is the motivating problem. Why is it that this was the set of axioms people found to be useful and not something else? The framework for quantum information theory, you married together linear linear algebra and probability that's great, but there's all sorts of other things where you could kind of try to cram them together and maybe get some sort of math out.", "The question becomes is it worth your time to do that?", "Knot theory is something that emerged because Lord Kelvin had a theory that all of the elements on the periodic table had structures which were related to a knot. A knot being if you have a closed loop in 3D space but if you wanted to continuously deform it without it ever crossing itself, you ask the question Could you get back to say an open loop? Or if you can't get back to an open loop, what are the set of all other loops in 3d space that could be deformed into that? And you end up categorizing what all the different knots are.", "This was started with a completely incorrect theory for what's going on at the atomic level that gives atoms this very stable structure because I think he found with smoke rings like if you're somehow very dexterous, you can get them to form knots in 3D and they're very stable. In that it'll never cross over itself. So it has all those properties now that was irrelevant for understanding the periodic table but it was an interesting mathematical question and people kind of ran with it and in that case it was an arbitrary reason that someone thought to ask the question and then some people ran with it and frankly it's probably fewer people who run with it than would if it turned out to be a more useful question.", "So really, you want to ask what are the things that prompt people to ask what turns out to be a mathematical question given that the space of what would be mathematical questions is so unfathomably huge that it's just impossible to explore it through a random walk.", "Dwarkesh Patel 0:40:43", "Wait, are you saying that Lord Kelvin's apple story was that he was smoking a lot of pipe and he categorized his puffs. [Laughter]", "You and other creators have changed how pedagogy happens via animated videos. What would it take to do something similar for video games, text, and all these other mediums? Why hasn't there been a similar sort of broad-scale adoption and transformation of how teaching happens there?", "Grant Sanderson 0:41:21", "I'm not sure I understand the question. You're saying where there's been a rise of explanatory videos, why is there not a similar rise in pedagogical video games?", "I don't play enough games so I can't really speak to it in the way that well-versed game designers can but one thing to understand is that games are very hard to make. It takes a lot of resources for a given game and whenever people seem to try to do it with pedagogy as a motive it seems to be the case that they are not fun in the way that people would want them to be fun and then the ones that are actually most effective are not as directly educational.", "The one game that I actually have played because enough of my friends told me hey you should really do this it seems relevant to Math explanation in the last like decade was The Witness . Have you played it?", "Dwarkesh Patel 0:42:19", "I've heard about it.", "Grant Sanderson 0:42:20", "As someone who doesn't play games and then did play it, it's fantastic. It's absolutely well done in every possible way that you could want something to be well done. Critical on that is the nature of how problems are solved.", "The reason people are recommending it to me is because the feeling of playing the witness is a lot like the feeling of doing math. It's non-verbal, you come across these little puzzles where the simple mechanics of one puzzle inform you about the fundamental mechanics that become relevant to much much harder ones such that if you do it with the right sequence you have the feeling of epiphany in ways that are very self-satisfying.", "You come away feeling like you should be able to do something like this for math and maybe you can. It's just that it's so hard to make a game at all that there's just not the rate of production that you would need to explore to get enough games out there that one of them hits.", "There's a lot of math videos on YouTube. It's okay that most of them suck. It's okay because you just need enough that when someone searches for the term that they want they get one that is good and scratches that itch. Or that you know they might get recommended something that is bringing a question to their mind that they wouldn't have thought about but they become really interested once it's there. Whereas with video games, you're also spending a lot more time as a user on each one. Rather than a five minute average experience it's a many, many hour average experience.", "You ask the same question on text. I don't know if I accept the premise that there's not the same advances and innovation in the world of textual explanations. Mathagon is a really, really good example of this. It’s like the textbook of the future. It's basically an interactive textbook. The explanations are really good. In so far as it doesn't have more of an impact or more of a reach it's maybe just because people don't know about it or don't have an easy means of accessing something that recommends to them like the really good innovations happening in the world of textual explanations in the way that youtube has this recommending engine that tries its hardest to get more of these things in front of people.", "In the world of actual written textbooks, there's so many that I like so much that I think it would be a disservice to talk about that medium as not making advances in terms of more and more thought put towards empathy to the learner and things like that.", "(0:44:44) - Future of education", "Dwarkesh Patel 0:44:44", "Should the top 0.1% of educators exclusively be on the internet because it seems like a waste if you were just a college professor or a high school professor and you were teaching 50 kids a year or something. Given the greater scale available should more of them be trying to see if they can reach more people?", "Grant Sanderson 0:45:01", "I think it's not a bad thing for more educators who are good at what they're doing to put their stuff online for sure. I highly encourage that even if it's as simple as getting someone to put a camera in the back of the classroom. I don't think it would be a good idea to get those people out of the classroom.", "If anything I think one of the best things that I could do for my career would be to put myself into more classrooms. Actually I'm quite determined at some point to be a high school math teacher for some number of years. There's such an opportunity cost that it’s probably something I would plan on  notably later as long as there's not other life logistics that occupy a lot of mind share because everything I know about high school teaching is like it just kicks your ass for the first two years.", "One of the most valuable things that you can have if you're trying to explain stuff online is a sense of empathy for what possible viewers that are out there. The more distance that you put between yourself and them in terms of life circumstances. I'm not a college student so I don't have the same empathy with college students. Certainly not a high school student, so I've lost that empathy. That distance just makes it more and more of an uphill battle to make the content good for them and I think keeping people in regular touch with just what people in the classroom actively need is necessary for them to remain as good and as sharp as they are.", "So yes, get more of those top 0.1% to put their stuff online but I would absolutely disagree with the idea of taking them out of their existing circumstances. Maybe for a year or two so they don't lose that sharpness but then put them right back in because it makes them better at the online exposition.", "The other thing I might disagree with is the idea that the reach is lower. Yes, it's a smaller number of people but you're with them for much, much more time and you actually have the chance of influencing their trajectory through a social connection in a way that you just don't over Youtube.", "You're using the word education in a way that I would maybe sub out for the word explanation. You want explanations to be online but the word education derives from the same root as the word educe, to bring out, and I really like that as a bit of etymology because it reminds you that the job of an educator is not to like take their knowledge and shove it into the heads of someone else the job is to bring it out. That's very, very hard to do in a video and in fact even if you can kind of get at it by asking intriguing questions for the most part the video is there to answer something once someone has a question.", "The teacher's job, or the educator's job, should be to provide the environment such that you're bringing out from your students as much as you can through inspiration through projects through little bits of mentorship and encouragement along the way that requires you know eye contact and being there in person and being the true figure in their life rather than just an abstract voice behind a screen.", "Dwarkesh Patel 0:48:00", "Then should we think of educators more as motivational speakers? As in the actual job of getting the content in your head is maybe for the textbooks or for Youtube but why we have college classes or high school classes is that we have somebody who approximates Tony Robbins to get you to do the thing.", "Grant Sanderson 0:48:19", "That would be a subset of it but there's more than just motivational speech that goes into it. There's um facilitation of projects or even coming up with what the projects are or recognizing what a student is interested in so that you can try to tailor a question to their specific set of interests or you can maybe act as the curator. Where, “Hey, there's a lot of online explanations for what a Poisson distribution is. Which of these is the right one that I could serve?” and based on knowing you as a particular student what might resonate. You might be in a better position to do that. All of that goes beyond being a Tony Robbins saying, “Be the best person that you can be.” and all of that.", "One thing I might say is that anytime that I'll chat with mathematicians and try to get a sense for how they got into it and what got them started, so often they start by saying there was this one teacher and that teacher did something very small — like they pulled them aside and just said, “Hey. You're really good at this. Have you considered studying more?” or they give them an interesting problem.", "And the thing that takes at most 30 minutes of the teacher's time, maybe even 30 seconds, has these completely monumental rippling effects for the life of the student they were talking to that then sets them on this whole different trajectory.", "Two examples of this come to mind. One is this woman who was saying she had this moment when she got pulled aside by the teacher and he just said, “Hey, I think you're really good at math. You should consider being a math major.” which had been completely outside of her purview at that time. That changed the way she thought about it. And then later she said she learned that he did that for a large number of people. He just pulled them and was like, “Hey, you're really good at math.” So that's a level of impact that you can have as a figure in their lives in a way that you can't over screen.", "Another one which was very funny. I was asking this guy why he went into the specific field that he did. It was a seemingly arbitrary thing in my mind but I guess all pure math seems to be. He said that in his first year of grad school he was sitting in this seminar and at the end of the seminar the professor, who was this old professor who he had never met him before, they didn't have any kind of connection. He seeks this guy out and comes up and he says, “You. I have a problem for you. A good research problem that I think I think might be a good place for you to start in the next couple months” and this guy was like “Oh, okay” and he gets this research problem and he spends some months thinking about it and he comes back and then it later came to light that the professor mistook him for someone else that was someone he was supposed to be mentoring. He was just the stereotypical image of like a doddering old math professor who's not very in tune with the people in his life that was the actual situation but nevertheless that moment of accidentally giving someone a problem completely shifted the research path for him, which if nothing else, shows you the sensitivity to initial conditions that takes place when you are a student and how the educator is is right on that nexus of sensitivity who can completely swing the fences one way or another for what you do.", "For every one of those stories there's going to be an unfortunate counter balancing story about people who are demotivated from math. I think this was seventh grade. There was this math class that I was in and I was one of the people who was good at math and enjoyed it and would often help the people in the class understand it. I had enough ego built up to have a strong shell around things. For context, I also really liked music and there was this concert that had happened where I had a certain solo or something earlier in that week.", "There was a substitute teacher one day who didn't have any of the context and she gave some lesson and had us spend the second half of the class going over the homework for it. All of the other students in the class were very confused and I think I remember like they would come to me and I would try to offer to help them and the substitute was going around the class in these circles and basically marking off a little star for how far down the homework people were just to get a sense are they progressing. That was kind of her way of measuring how far they were. When she got to me I had done none of them because I was spending my whole time trying to help all of the others and after having written a little star next to the same problem like three different times she said to me like, “Sometimes music people just aren't math people.” and then keeps walking on.", "I was in the best possible circumstance to not let that hit hard because one, I had the moral high ground of “Hey, I've just been helping all these people. I understand it and I've been doing your job for you.” This was my little egotistical seventh grade brain. I knew that I knew the stuff. Even with all of the armor that was put up, I remember it was just this shock to my system, she says this thing and it just made me strangely teary-eyed or something.", "I can only imagine if you're in a position where you're not confident in math and the thing that you know deep in your heart is actually you are kind of struggling with it, just a little throwaway comment like that could completely derail the whole system in terms of your relationship with the subject.", "So it's another example to illustrate the sensitivity to initial conditions. I was in a robust position and wasn't as sensitive. I was gonna love math no matter what but you envision someone who's a little bit more on that teetering edge and the comment, one way or another, either saying you're good at this you should consider majoring in it or saying, “Sometimes music people aren't math people” which isn't even true. That was the other thing about it that niggled at my brain when she said it.", "All of that is just so important for people's development that when people talk about online education as being valuable or revolutionary or anything like that, there's a part of me that sort of rolls my eyes because it it just doesn't get at the truth that online explanations have nothing to do with all of that important stuff that's actually happening and at best it should be like in the service of helping that side of things where the rubber meets the road.", "Dwarkesh Patel 0:54:30", "I had Tyler Cowen on the podcast and he obviously has Marginal Revolution and these Youtube videos where he explains economics and he had a similar answer to give. I asked him, should we think of you as a substitute for all these economics teachers? And in his mind as well he was more a complement to the functions that happen in the class.", "And to your point about the initial conditions, I'm sure you remember the details of the story but I just vaguely remember hearing this, wasn't there a case where a mathematician who later ended up becoming famous? He arrives late to a lecture…. Do you want to tell the story", "Grant Sanderson 0:55:05", "I don't remember it beat for beat but I think it was a statistics class and he was a grad student and he comes in late and there's two problems on the board that the professor had written. He assumed that those two problems were homework and so he goes home and works on them and after a couple weeks he goes to the professor's office and turns in his homework.", "He's like, “I'm sorry. I'm so late. This one just took me a lot longer than some of the others.” And the professor's like “Oh, okay.” and just shuffles it away. Then a couple days later when the prophet had the time to like go through and see them, he realized that the student had fully answered these questions, what the student didn't know is that they were not homework problems written on the chalkboard they were two unsolved problems in the field that the prof put up as examples of what the field was striving.", "I don't remember what problems they were so that would be more fun color to add to the story but then as the anecdote told to me however many years ago goes, the prof then finds the students' housing and knocks on the door “Do you realize that these were actually unsolved problems?” and then he gets to basically make those his thesis. So yeah, that idea of just being given something for completely random reasons and it shifts the course of what you do.", "Dwarkesh Patel 0:56:20", "It's the thing where if you know a crossword is solvable, you just keep going at it until you solve it.", "Grant Sanderson 0:56:25", "Or the four-minute mile, right?", "(0:56:28) - Math helped me realize I wasn’t that smart", "Dwarkesh Patel 0:56:28", "Exactly. That's a great example.", "Another valuable experience, at least one I had, was taking Aaronson’s classes in college and realizing I am at least two standard deviations below him and that was actually a really valuable experience for me not because it increased my confidence in I didn't have a moment where I was like, “Oh wow. I'm good at this” but it was useful to know. Podcasting is an easier thing to do right so then it's good to know that there are actual technical things out there where knowing that you can get really deep into something and people are just gonna be way above you having that sort of awareness.", "Grant Sanderson 0:57:20", "Do you think it's fair to have a mental model that has a static g-factor type quality here such that your two standard deviations below and that is forever the state of things? Or do you think that the right mental model is something that allows for flexibility on where contributions actually come from, or where intuitions come from. That through many years of experience in certain kinds of problem solving maybe what seemed like a flash of insight was actually like the residue of just years of thinking about certain kinds of puzzles that he had, that you maybe didn't.", "Dwarkesh Patel 0:57:39", "Can I tell you a story from that class actually?", "Grant Sanderson 0:57:41", "Yeah, go for it.", "Dwarkesh Patel 0:57:41", "He was giving a proof of a very important method in complexity theory that helped to prove the bounds of the complexity of different problems and he explains it and he says, “You know, in 1999, I approved this myself but I realized that six months before somebody had already published a paper with this method and I realized I'm catching up to the frontier now. But when I was a kid I was doing Euler, that's 2000 years in regress. Now I’m six months behind.”", "And then so later on in the day I'm like, “Wait, 1999. How old was Scott Aaronson in 1999?” and I think he was 18 or 19 and he was basically proving frontier results in complexity theory. At that point you're like, “All right. Aaronson’s a special animal here.”.", "Grant Sanderson 0:58:37", "You are right. He's probably a special animal.", "Dwarkesh Patel 0:58:39", "But it’s just broadly good to have that sort of upper constraint on your Dunning-Kruger that this exists in the world.", "Grant Sanderson 0:58:48", "Maybe the thing that I would want to say is that whatever the scale is on which he's two standard deviations above you, that might not be the one scale that matters and that contributions to these fields don't always look like genius insights and that sometimes there's fruit to be born from say becoming kind of an expert in two different things and then finding connections between them. The people who make contributions are not necessarily the Scott Aaronson’s of the world. Still. You are probably right it is true that there are people like that. Von Neuman’s another example of one of these, right?", "(0:59:25) - Does Godel’s incompleteness theorem matter?", "Dwarkesh Patel 0:59:25", "How much does Godel’s Incompleteness theorem practically matter? Is it something that comes up a lot or is it just an interesting thing to know about the bounds that isn’t applicable day to day?", "Grant Sanderson 0:59:40", "You've asked me another question where I'm not the best one to answer and I should throw that as a caveat to begin. From what I understand, it really doesn't come up.", "The paradoxical fact that it's conveying, the idea that you can't have an axiom system that is both that will basically prove all of the things that are true and which is also self consistent. The contradiction that you construct out of that has the same feeling as the sentence. “This statement is a lie.” We think about the statement. If it's false, then it must be true. If it's true, it must be false. It's that same flavor. And you might ask, does the existence of that paradox mean that it's hard to speak English. [Laughter] It's so rare that you would come up with something that happens to have a bit of self reference in it.", "One of the first times that there was something that came up that didn't feel quite as pathological in that way, if the curious listener wants to go into it, that search term would be Paris Harrington theorem .", "It's a little pathological. wasn't the really question that came up that didn't seem like it was deliberately constructed to be one of these self-referential things where, you know, it shows itself to be outside the bounds of whatever axiom system you were starting with. It was shown to be unresolvable in a certain sense. But it was asking a… I don't want to say natural because a lot of these math questions aren't natural. It was asking a question where you wouldn't expect that to be true.", "So maybe at the edges of theory, there are sometimes when the paradoxes that are possible, show. The impression I get is that no mathematician is thinking about it. They're not actively worrying about it. It’s not like “Oh god, can I be sure that the stuff that I'm going to show is true.”", "For all the practical problems like the Riemann hypothesis or twin primes, almost everyone's like, “No, there's going to be an answer.” It may be that they turn out to be unresolvable in one of these ways but there's just a strong sense that that theorem came from a pathology in a way that natural questions that people actually care about don't.", "Dwarkesh Patel 1:01:46", "That's really interesting that something from the outside and in popularizations seems to be a very fundamental thing where people have definitely heard about this.", "A good analogy here is the halting problem in computer science. One of the first things you learn in a computer science course is the proof of the halting problem and it's another one of those things where you don't really need to be able to prove that you have that sort of program available.", "Grant Sanderson 1:02:19", "No more comments. [Laughter]", "Dwarkesh Patel 1:02:22", "Why are good explanations so hard to find, despite how useful they are? Obviously, other than you, there's many other cases of good explanations. But generally, it just seems like there aren't as many as there should be. Is it just a story of economics where it's nobody's incentive to spend a lot of time making good explanations? Is it just a really hard skill that isn't correlated with being able to come up with a discovery itself? Why are good explanations scarce?", "Grant Sanderson 1:02:47", "I think there's maybe two explanations.", "The first less important one is going to be that there's a difference between knowing something and then remembering what it's like not to know it. And the characteristic of a good explanation is that you're walking someone on a path from the feeling of not understanding up to the feeling of understanding.", "Earlier, you were asking about societies that lack numeracy. That's such a hard brain state to put yourself in, like what's it like to not even know numbers? How would you start to explain what numbers are? Maybe you should go from a bunch of concrete examples. But like the way that you think about numbers and adding things, it's just you have to really unpack a lot before you even start there.", "And I think at higher levels of abstraction, that becomes even harder because it shapes the way that you think so much that remembering what it's like not to understand it. You're teaching some kid algebra and the premise of like a variable. They're like, “What is X?” It's not necessarily anything but it's what we're solving for. Like, yeah, but what is it? Trying to answer “What is X?” is a weirdly hard thing because it is the premise that you're even starting from.", "The more important explanation probably is that the best explanation depends heavily on the individual who's learning. And the perfect explanation for you often might be very different from the perfect explanation for someone else. So there's a lot of very good domain specific explanations. Pull up in any textbook and like chapter 12 of it is probably explaining the content in there quite well, assuming that you've read chapters one through 11, but if you're coming in from a cold start, it's a little bit hard.", "So the real golden egg is like, how do you construct explanations which are as generally useful as possible as generally appealing as possible? And that because you can't assume shared context, it becomes this challenge. And I think there's like tips and tricks along the way, but because the people that are often making explanations have a specific enough audience, it is this classroom of 30 people. Or it's this discipline of majors who are in their third year. All the explanations from the people who are professional explainers in some sense are so targeted that maybe it's the economic thing you're talking about. There's not, or at least until recently in history, there hasn't been the need to or the incentive to come up with something that would be motivating and approachable and clear to an extremely wide variety of different backgrounds.", "(1:05:12) - How Grant makes videos", "Dwarkesh Patel 1:05:12", "Is the process of making your videos, is that mostly you?", "Grant Sanderson 1:05:16", "Yes.", "Dwarkesh Patel 1:05:17", "Given the scale you're reaching, it seems that if it was possible, a small increase in productivity would be worth an entire production studio. And it's surprising to me that the transaction cost of having a production setup are high enough that it's better to literally do the mundane details yourself.", "Dwarkesh Patel 1:05:40", "I mean, this could honestly just be a personal flaw. I'm not good at pulling people in and then I've struggled to do this effectively in the past. But a part of it is that the seemingly mundane details are sometimes just how I even think about constructing it in the first place.", "The first thing that a lot of YouTubers will do if they can hire is hire an editor. And this will be because they film a lot of things. And so a lot of the editing process is removing the stuff that was filmed that shouldn't be in the video and just leaving the stuff that should be in the video. And that's time consuming and it's kind of mundane. And it's probably not that relevant to what the creator should be thinking about.", "The editing process for me, I start by laying out all of the animations and stuff that I want in a timeline and then once I record the voiceover, the actual editing is like a day. I guess I could hire someone and gain a day back of my life but the communication back and forth for saying what specifically I want, all of the little cuts that I'm making along the way are my way of even thinking about what I want the final piece to be and are such that it would be hard to put it into words.", "It's similar for why I maybe find it quite hard to use Co-pilot and some of these LLM tools for the animation code. It can be super great if you're learning some new library and it knows about that library that you don't. But for my library that I know inside it out, if I'm just using it, it feels like, “Oh. This should be the most automatable thing ever. It's just text.” I should be the first YouTuber who can actually do this better because the substance behind each animation is text, it's not like an editing workflow in quite the same way.", "But it doesn't work. And I think it's because maybe it's just because you need a multimodal thing that actually understands the look of the output. Like the output isn't something that is consumable in text. It's something about how it looks.", "But at a deeper level, I can't even put into words what I want to put on the screen, except to do so in code. That's just the way that I'm thinking about it. And if I were to try to put into English the thing that I want as a comment that then gets expanded, that task is actually harder than writing it in the code. And if it's clunky to write in code, that's a sign that I should change the interface of the library such that it's less clunky to be expressive in the way that I want.", "And it's in that same way where a lot of the creative process that feels mundane, those are just the cogs of thought slowly turning in a way that if they weren't turning for that part, they would have to be turning during the interface of communication with a collaborator.", "Dwarkesh Patel 1:08:13", "On the point of working with Co-pilot where we can visualize the changes you wanted to make. The Sparks of AGI paper from Microsoft Research had an actually really interesting example where it was generating LaTeX and they generated some output and they say “Change this so that the visual that comes up in the rendering is different in this way.” And it was actually able to do that, which was their evidence that it can understand the higher level visual abstraction. I guess it can't do that for Manim.", "Grant Sanderson 1:08:44", "There's a couple reasons why it might not be as fair a comparison. There are two versions of Manim. There's a community version that is by the community for the community and then mine, the interfaces are largely similar. The rendering engines are quite different, but because of slight differences in that and it might have a tendency to learn from one or its examples from one and it's intermixing them. So stuff just doesn't quite run when there's discrepancy.", "Maybe I shot myself in the foot because I don’t really comment my code that much for my videos. It's like a one and done deal. The way that I'm making it feels much more like the editing flow. If you were to look at the operation history of someone in After Effects.", "It's a little bit more like that where there's not a perfect description in English of the thing that I want to do and then the execution of that. It's just the execution of that.", "It's not meant to be editable in hindsight as much because I'm just in the flow of making the scene for the one video. Maybe I could have given it a better chance to learn what it's supposed to be happening by having a really well documented set of — This is the input. This is the output. This is the comment describing it in English. But even then that wouldn't hit the problem. I would have to articulate what the thing I want is in the first place. And the program language is just the right mode of articulation in the first place.", "(1:10:13) - Grant’s math exposition competition", "Dwarkesh Patel 1:10:13", "This is something I was really curious about ever since I learned about it. I watched many of the Summer of Math Exposition prize videos and it was shocking to me how good they were. Many of them looked like entire production studios were dedicated to making them. And it was shocking to me that you could motivate and elicit this quality of contribution given the relatively modest prize pool, which was like five winners, $1,000 each.", "What is your explanation of just running prizes like this? Why were you able to get such high quality contributions? Is the prize pool irrelevant? Is it just about your reputation and reach?", "Grant Sanderson 1:10:54", "I do wonder how relevant the prize pool is. We've been thinking about this because we did it first in 2021 and then we plan to continue doing it annually.  If I was a mover and a shaker, I probably could raise much more if I wanted to get a big prize pool there. I don't think it would change the quality of the content because the impression I get is that people aren't fundamentally motivated by winning some cash prize.", "Certainly, they're not investing that time with an expected value calculation. If they are, that's a terrible, terrible plan. And if anything, a higher prize pool might be a problem. Let's say it was a hundred thousand dollar prize for each of the winners, then it would be a real problem where someone would, and people do, delusionally think that they're very likely going to be the winner and they might actually pour a lot of their own resources into it with the expectation of gaining it. And then that's just a messy situation. I don't want to be in a situation where someone asks “Why wasn't mine chosen as a winner?!” Because the whole event is not supposed to be about winners.", "Maybe for the listeners who don't know, I should describe the summer of math exposition.", "Actually, the history is a little bit funny because it started with an intern application where in 2021 I wanted a couple interns to do a certain thing on my website basically and I put out a call for people to apply. I got 2,500 applicants and somewhere in the application I mentioned that during the summer, in addition to the main task I wanted them to do, I'd give them freedom to do something relevant to math exposition online that was their own thing and that I'd be happy to provide some mentorship or just give them the freedom to do that one day a week. And I asked them to give me a little pitch on what their idea would be.", "As I went through all of the applications, which was a lot, I felt so bad because so often the person would have a little pitch and like what they would want to make. And in my mind, I think, “Cool. You should make that! You don't need me to do that. Just spend your summer making that.” Why not?", "And people were clearly inspired by the thought of adding something and like I said earlier, being a youtuber is the most common job aspiration among the youth these days. And so as a consolation of sorts to those 99% that I had to reject for the internship, I said we're going to host this thing called the Summer of Math Exposition where we'll give you a deadline. I'll promise to feature five of you in a video. And if you feel like the thing that you were going to do, like with me as your 20 project as an intern is something you're excited about, make it a hundred percent project. Just do it anyway and like I can give you this little carrot in the form of featuring it in a video and give you a deadline, which let's be honest is what actually makes the difference between people doing something and procrastinating on it sometimes.", "Brilliant.org said they would be happy to put some cash prizes in. So I said, sure, why not? I don't think the cash prize is super important, but it's nice. It shows that someone actually cared and put some real thought into doing something that wasn't just a made up gold star, but they put some material behind saying that you were selected as a winner of this thing.", "But all in all, it was never supposed to be about choosing winners. It was just to get more people to make stuff. And if anything, I'd actually I love it when I see stuff from existing educators and teachers where it's maybe not the youth who want to be youtubers pouring their hearts and souls into it, but it's the educator who built a lot of intuition over the course of their career for what constitutes a clear explanation and they're just sharing it more broadly.", "So, to your question on — What is it that caused there to be such high production quality in some of the entries there? Part of the answer might just be that like tooling is so good now that individuals can actually make pretty incredible things sometimes.", "Dwarkesh Patel 1:14:50", "I misphrased if I said production quality, I just meant the whole composition as a whole.", "Grant Sanderson 1:14:56", "Yeah, well there's a selection filter too, right? In that first year, there were 1200 submissions and I featured five of them in the winning video. So of course, they're necessarily unrepresentative of the norm by the very nature of who I was choosing to feature.", "Dwarkesh Patel 1:15:14", "But the fact that something that high quality was even in the pool.", "Grant Sanderson 1:15:16", "I think it hits a little bit to your miracle year point where I think what might be happening is you have people with a ton of potential energy for something that they've kind of been thinking about making for a long time. And the hope was to give people a little push. Here's a deadline. Here's a little prize. Here's a promise that maybe if you make it, it won't just go into the void, but there's a chance that it could get exposed to more people, which I think is absolutely played out.", "And not for the reason that someone might expect where I choose winners and I feature those winners and people watch them. A huge amount of viewership happens before I even begin the process of looking at them. And this was an accident too, where in this first year, we got 1200 submissions. I said expect judges who are reviewing it to spend at most 10 minutes on each piece. So it could be longer, but don't rely on someone watching it for more.", "But realistically, when I'm reviewing something, I want to watch the whole piece. I absolutely do not have time to watch that many. I've learned it takes me about two weeks of just full time work to watch 100 of these pieces and give the kind of feedback that I want.", "To manage that problem of more than we could manually review, we put together this peer review system that would basically have an algorithm feed people pairs of videos.", "And they would just say which one is better and then it would feed them another one. And in the first two years, we just used a tool that was common for hackathons that did this. And what that did is one, it gave us a partially ordered list of content by quality loosely. We didn't need it to be perfect. We just needed there to be a very high chance that the five most deserving videos were visible somewhere in that top 100.", "So there the algorithm doesn't have to be perfect.", "A thing I've learned about the YouTube algorithm is — in theory, you would want to just use machine learning for everything. You have some massive neural network where on the input of it, it's got five billion videos or however many exist. And the output decides what seven are best to recommend to you. That is completely computationally infeasible.", "I think this is all public knowledge. What you have to do instead is use some sort of proxies as a first pass to nominate a video to even be fed into the machine learning driven algorithm. So that you're only feeding in like a thousand nominees.", "So the real difference that it can make if you've made a really good video, between it getting to the people who would like it and not getting there. It's not the flaws in the algorithm. The algorithm is probably quite good. It's the mismatch between the proxies being used to nominate stuff to see whether it's even in the running.", "One of the things used for nomination is understanding the co-watch graph where if you've watched video A and you've also watched video B and then I watch video A. Your watching both of those gives a little link between them, or maybe you and a ton of other people watching both of them gives a little link between such that once I watch video A, B is potentially nominated in that phase because it's recognized that there's a lot of co-watching.", "That's something that I'm sure is still quite challenging to do scale but it's more plausible to do at scale than like running some massive neural network. And so I think what might have happened is that by having a bunch of co-watching happening on this same pool of videos, all you need is for some of them to have decent reach and get recommended, right? Because then that’s like igniting a pile of kindling where then if others are good, if they're going to give people good experiences, they get not only nominated but then recommended which then kicks back in the feedback loop there.", "That turns out to be as close to a guarantee as you can get of saying if you make something that's good, it's a good piece that will satisfy someone, they come away feeling like they learned something that they otherwise didn't know and it was well presented, if you can get it into this peer review process, it will reach people. It's not just going to be shouting into the void", "And in this case, last year there were over a hundred videos where after the first two weeks they had more than 10,000 views. Which I know is small in the grand scheme but for a fresh channel, talking about a niche mathematical topic, to be able to put it out and get 10,000 people to watch it is amazing. And the idea that that it happen for over a hundred people is amazing", "That had nothing to do with the prize pool, right? In that the motive might have been a hope of actually getting some reach and having some sense of a guarantee of there being some reach", "Ironically the reason to do the whole peer review system in the first place is in the service of selecting winners. If you just said “Hey, we're having a watch fest where everyone watches each other's things.” Somehow it wouldn't quite have the same pull that gets people into it. So I think it still makes sense to have winners and to have some material behind those winners. It doesn't have to be much though. And if anything, I think it might ruin it to make it too much. I will also say it's $15,000 actually because we give $500 to 20 different honorable mentions, at least this year. Still pretty modest in the scheme of how much money you can invest to try to get more math lessons in the world.", "( 1:20:44) - Self teaching", "Dwarkesh Patel 1:20:44", "I watched many of the honorable mentions as well because they were just topics that were interesting to me. It's like the thing that the president of Chicago University said. He said we could discard the people we admitted and select the next thousand for our class and there would be no difference.", "By the way, I really admire not only the education that you have provided directly with your videos which have reached millions of people, but the fact that you're also setting up this way of getting more people to contribute and get to topics that you wouldn't have time to get to yourself. I really admire that you're doing that.", "If you're self teaching yourself a field that involves mathematics, let's say it's Physics or some other thing like that, there's problems where you have to understand how do I put this in terms of a derivative or an integral and from there, can I solve this integral? What would you recommend to somebody who is teaching themselves quantum mechanics and they figured out how to put how to get the right mathematical equation here. Is it important for their understanding to be able to go from there to getting it to the end result or can they just say well, I can just abstract that out. I understand the broader way to set up the problem in terms of the physics itself.", "Grant Sanderson 1:22:00", "I think where a lot of self learners shoot themselves in the foot is by skipping calculations by thinking that that's incidental to the core understanding. But actually, I do think you build a lot of intuition just by putting in the reps of certain calculations. Some of them maybe turn out not to be all that important and in that case, so be it, but sometimes that's what maybe shapes your sense of where the substance of a result really came from.", "I don't know it might be something you realize like “Oh, it's because of the square root that you get this decay.” And if you didn't really go through the exercise, you would just come away thinking like instead of coming away thinking like such and such decays but with other circumstances, it doesn't decay and not really understanding what was the core part of this high level result that is the thing you actually want to come out remembering.", "Putting in the work with the calculations is where you solidify all of those underlying intuitions. And without the forcing function of homework, People just don't do it. So I think that's one thing that I learned as a big difference post college versus during college.", "Post college, it's very easy to just accidentally skip that while learning stuff and then it doesn't sink in as well. So I think when you're reading something, having a notebook and pencil next to you should be considered part of the actual reading process.", "And if you are relying too much on reading and looking up and thinking in your head, maybe that's going to get you something but it's not going to be as highly leveraged as it could be", "Dwarkesh Patel 1:23:39", "What would be the impact of more self teaching in terms of what kinds of personalities benefit most? There's obviously a difference in the kind of person who benefits most. In a situation where it's a college course and everybody has to do the homework, but maybe some people are better tuned for the kind of work that's placed there versus all this stuff is available for you on youtube and then textbooks for exercises and so on but you have to have the conscientiousness to actually go ahead and pursue it.", "How do you see the distribution of who will benefit from the more modern way in which you can get whatever you want but you have to push yourself to get it.", "Grant Sanderson 1:24:17", "There's a really good book that's actually kind of relevant to some of your early questions called Failure to Disrupt that goes over the history of educational technology. It tries to answer the question of why you have these repeated cycles of people saying such and such technology that almost always is getting more explanations to more people, promises that it'll disrupt the existing university system or disrupt the existing school system and just kind of never does.", "One of the things that it highlights is how stratifying these technologies will be in that they actually are very very good for those who are already motivated or kind of already on the top in some way and they end up struggling the most just for those who are performing more poorly.", "And maybe it's because of confounding causation where the same thing that causes someone to not do poorly in the traditional system also means that they're not going to engage as well with the plethora of tools available.", "I don't know if this answers your question, but I would reemphasize that what's probably most important to getting people to actually learn something is not the explanation or the quality of explanations available because since the printing press that has not been true. Not literally true because maybe access to libraries it’s not as universal as you would want. But people had access to the explanation once they were motivated.", "But instead, it's going to be the social factors. Are the five best friends you have also interested in this stuff and do they tend to push you up or they tend to pull you down when it comes to learning more things? Or do you have a reason to? There's a job that you want to get or a domain that you want to enter where you just have to understand something or is there a personal project that you're doing?", "The existence of compelling personal projects and encouraging friend groups probably does way way more than the average quality of explanation online ever could because once you get someone motivated, they're just they're going to learn it and it maybe makes it a more fluid process if there's good explanations versus bad ones and it keeps you from having some people drop out of that process,which is important.", "But if you're not motivating them into it in the first place, it doesn't matter if you have the most world-class explanations on every possible topic out there. It's screaming into a void effectively.", "And I don't know the best way to get more people into things. I have had a thought and this is the kind of thing that could never be done in practice but instead it's something you would like write some kind of novel about, where if you want the perfect school, something where you can insert some students and then you want them to get the best education that you can, what you need to do is — Let's say it's a high school. You insert a lot of really attractive high schooler plants as actors that you get the students to develop crushes on. And then anything that you want to learn, the plant has to express a certain interest in it. They're like, “Oh, they're really interested in Charles Dickens.” And they express this interest and then they suggest that they would become more interested in whoever your target student is if they also read the dickens with them.", "If you socially engineer the setting in that way, the effectiveness that would have to get students to actually learn stuff is probably so many miles above anything else that we could do. Nothing like that in practice could ever actually literally work but at least viewing that as this end point of “Okay, this mode of interaction would be hyper effective at education. Is there anything that kind of gets at that?”", "And the kind of things that get at that would be — being cognizant of your child's peer group or something which is something that parents very naturally do or okay, it doesn't have to be a romantic crush, but it could be that there's respect for the teacher. It's someone that they genuinely respect and look up to such that when they say there's an edification to come from reading Dickens, that actually lands in a way.", "Taking that as a paragon and then letting everything else approximate that has, I would emphasize, nothing to do with the quality of online explanations that there are out there that at best just makes it such that you know, you can lubricate the process once someone is sufficiently interested.", "Dwarkesh Patel 1:28:34", "You found a new replica use case.", "Grant Sanderson 1:28:36", "Yes. I mean, I'm not saying we should do it, but think of how effective that would be.", "Dwarkesh Patel 1:28:43", "Final question. This is something I should have followed up on earlier, but your plans to become a high school teacher for some amount of years. When are you planning on doing that and what do you hope to get out of that?", "Grant Sanderson 1:28:54", "I would say no concrete plans. I would want to do it in a period where I also have young children and therefore it would make sense to. Maybe a lot of people will say this kind of thing but there's friends of mine who think when their child is in high school, that's when they would want to be a high school teacher.", "I think there are two things I would want to get out of it. One of them, as I was emphasizing, I think you just lose touch with what it's like not to know stuff or what it's like to be a student and so maintaining that kind of connection so that I don't become duller and duller over time feels important.", "The other, I would like to live in a world where more people who are savvy with STEM spend some of their time teaching. I just think that's one of the highest leverage ways that you can think of to actually get more people to engage with math", "And so I would like to encourage people to do that and call for action. Some notion of spending, maybe not your whole career, a little bit of time. In teaching, there's not as fluid a system for doing that as a going through a tour of service in certain certain countries where everyone spends two years in the military", "Shy of having a system like that for education, there's all these kind of ad hoc things where charter schools might have an emergency credential system to get a science teacher in. Teach for America is something out there.", "There's enough ways that someone could spend a little bit of time that's probably not fully saturated at this point that the world would be better if more people did that and it would be hypocritical for me to suggest that and then not to actually put my feet where my words are.", "Dwarkesh Patel 1:30:36", "I think that's a great note to leave it on Grant. Thanks so much for coming on the podcast and genuinely, you're one of the people I really, really admire but what you've done for the landscape of Math education is really remarkable. So this is a pleasure to talk to you.", "Grant Sanderson 1:30:52", "Thanks for saying that. I had a lot of fun." ]
[ "https://www.youtube.com/c/3blue1brown", "https://en.wikipedia.org/wiki/Eric_Lander", "https://en.wikipedia.org/wiki/Jim_Simons_(mathematician)", "https://www.dwarkeshpatel.com/p/lars-doucet", "https://www.dwarkeshpatel.com/p/annus-mirabilis", "https://www.youtube.com/watch?v=F_0yfvm0UoU", "https://github.com/3b1b/manim", "https://github.com/3b1b/manim", "https://www.matthen.com/", "https://heb.fas.harvard.edu/people/joseph-henrich", "https://store.steampowered.com/app/210970/The_Witness/", "https://mathigon.org/", "https://www.dwarkeshpatel.com/p/tyler-cowen-2", "https://en.wikipedia.org/wiki/George_Dantzig", "https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theorem", "https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/", "https://www.youtube.com/watch?v=F3Qixy-r_rQ", "https://www.amazon.com/Failure-Disrupt-Technology-Transform-Education/dp/0674089049" ]
https://www.dwarkesh.com/p/gwern-branwen
Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory
[ "(00:00:00) – Anonimity", "Dwarkesh Patel", "Today I’m interviewing Gwern Branwen . Gwern is an anonymous researcher and writer. He’s deeply influenced the people building AGI . He was one of the first people to see LLM scaling coming. If you’ve read his blog, you’ll know he’s one of the most interesting polymathic thinkers alive. We recorded this conversation in person. In order to protect Gwern’s anonymity, we created this avatar. This isn’t his voice. This isn’t his face. But these are his words.", "What is the most underrated benefit of anonymity?", "Gwern", "The most underrated benefit of anonymity is that people don't project onto you as much. They can't slot you into any particular niche or identity and write you off in advance. They have to at least read you a little bit to even begin to dismiss you.", "It's great that people cannot retaliate against you. I have derived a lot of benefit from people not being able to mail heroin to my home and call the police to SWAT me. But I always feel that the biggest benefit is just that you get a hearing at all. You don't get immediately written off by the context.", "(00:01:09) - Automating Steve Jobs", "Dwarkesh Patel", "Do you expect companies  to be automated top-down (starting with the CEO) or bottom-up (starting with all the workers)?", "Gwern", "All of the pressures are to go bottom-up. From existing things, it's just much more palatable in every way to start at the bottom and replace there and work your way up, to eventually where you just have human executives overseeing a firm of AIs.", "Also from a RL perspective, if we are in fact better than AIs in some way, it should be in the long-term vision thing. The AI will be too myopic to execute any kind of novel long-term strategy and seize new opportunities.", "That would presumably give you this paradigm where you have a human CEO who does the vision thing. And then the AI corporation scurries around doing his bidding. They don't have the taste that the CEO has. You have one Steve Jobs-type at the helm, and then maybe a whole pyramid of AIs out there executing it and bringing him new proposals. He looks at every individual thing and says, “No, that proposal is bad. This one is good.”", "That may be hard to quantify, but the human-led firms should, under this view, then outcompete the entirely AI firms, which would keep making myopic choices that just don't quite work out in the long term.", "Dwarkesh Patel", "What is the last thing you’d be personally doing?  What is the last keystroke that gets automated for you?", "Gwern", "The last thing that I see myself still doing right before the nanobots start eating me from the bottom up and I start screaming, “No, I specifically requested the opposite of this….” Right before that, I think what I'm still doing is the Steve Jobs-thing of choosing. My AI minions are bringing me wonderful essays. I'm saying, “This one is better. This is the one that I like,” and possibly building on that and saying, “That's almost right, but you know what would make it really good? If you pushed it to 11 in this way.”", "Dwarkesh Patel", "If we do have firms that are made up of AIs, what do you expect the unit of selection to be? Will it be individual models? Will it be the firm as a whole? With humans, we have these debates about whether it’s kin-level selection, individual-level selection, or gene-level selection. What will it be for the AIs?", "Gwern", "Once you can replicate individual models perfectly, the unit of selection can move way up and you can do much larger groups and packages of minds. That would be an obvious place to start. You can train individual minds in a differentiable fashion, but then you can't really train the interaction between them. You will have groups of models or minds of people who just work together really well in a global sense, even if you can't attribute it to any particular aspect of their interactions. There are some places you go and people just work well together. There's nothing specific about it, but for whatever reason they all just click in just the right way.", "That seems like the most obvious unit of selection. You would have packages—I guess possibly department units—where you have a programmer and a manager type, then you have maybe a secretary type, maybe a financial type, a legal type. This is the default package where you just copy everywhere you need a new unit. At this level, you can start evolving them and making random variations to each and then keep the one that performs best.", "(00:04:38) - Isaac Newton’s theory of progress", "Dwarkesh Patel", "By when could one have foreseen the Singularity ? Obviously, Moravec and others are talking about it in the eighties and nineties. You could have done it decades earlier. When was the earliest you could have seen where things were headed?", "Gwern", "If you want to trace the genealogy there, you'd have to at least go back as far as Samuel Butler's Erewhon in 1872 or his essay before that . In 1863, he describes explicitly his vision of a machine life becoming ever more developed until eventually it’s autonomous. At which point, that's a threat to the human race. This is why he concluded, “war to the death should be instantly proclaimed against them.” That’s prescient for 1863! I'm not sure that anyone has given a clear Singularity scenario earlier than that. The idea of technological progress was still relatively new at that point.", "I love the example of Isaac Newton looking at the rates of progress in Newton's time and going, “Wow, there's something strange here. Stuff is being invented now. We're making progress. How is that possible?” And then coming up with the answer, “Well, progress is possible now because civilization gets destroyed every couple of thousand years, and all we're doing is we're rediscovering the old stuff.”", "That's Newton's explanation for technological acceleration. We can't actually have any kind of real technological acceleration. It must be because the world gets destroyed periodically and we just can't see past the last reset.", "Dwarkesh Patel", "It’s almost like Fermi's paradox , but for different civilizations across time with respect to each other instead of aliens across space.", "Gwern", "Yeah. It turns out even Lucretius , around 1,700 years before that, is writing the same argument . “Look at all these wonderful innovations and arts and sciences that we Romans have compiled together in the Roman empire! This is amazing, but it can't actually be a recent acceleration in technology. Could that be real? No, that’s crazy. Obviously , the world was recently destroyed.”", "Dwarkesh Patel", "Interesting.", "Gwern", "It is, it is.", "(00:06:36) - Grand theory of intelligence", "Dwarkesh Patel", "What is the grand parsimonious theory of intelligence going to look like? It seems like you have all of these trends across different fields—like scaling laws in AI, like the scaling of the human brain when we went from primates to humans, the uniformity of the neocortex —and basically many other things which seem to be pointing towards some grand theory that should exist which explains what intelligence is. What do you think that will look like?", "Gwern", "The 10,000 foot view of intelligence, that I think the success of scaling points to, is that all intelligence is is search over Turing machines . Anything that happens can be described by Turing machines of various lengths. All we are doing when we are doing “learning,” or when we are doing “scaling,” is that we're searching over more and longer Turing machines, and we are applying them in each specific case.", "Otherwise, there is no general master algorithm. There is no special intelligence fluid. It's just a tremendous number of special cases that we learn and we encode into our brains.", "Dwarkesh Patel", "I don’t know. When I look at the ways in which my smart friends are smart, it just feels more like a general horsepower kind of thing. They've just got more juice. That seems more compatible with this master algorithm perspective rather than this Turing machine perspective. It doesn’t really feel like they’ve got this long tail of Turing machines that they’ve learned. How does this picture account for variation in human intelligence?", "Gwern", "Well, yeah. When we talk about more or less intelligence, it's just that they have more compute in order to do search over more Turing machines for longer. I don’t think there's anything else other than that. So from any learned brain you could extract small solutions to specific problems, because all the large brain is doing with the compute is finding it.", "That's why you never find any “IQ gland”. There is nowhere in the brain where, if you hit it, you eliminate fluid intelligence . This doesn’t exist. Because what your brain is doing is a lot of learning of individual specialized problems. Once those individual problems are learned, then they get recombined for fluid intelligence. And that's just, you know… intelligence.", "Typically with a large neural network model, you can always pull out a small model which does a specific task equally well. Because that's all the large model is. It's just a gigantic ensemble of small models tailored to the ever-escalating number of tiny problems you have been feeding them.", "Dwarkesh Patel", "If intelligence is just search over Turing machines—and of course intelligence is tremendously valuable and useful—doesn't that make it more surprising that intelligence took this long to evolve in humans?", "Gwern", "Not really, I would actually say that it helps explain why human-level intelligence is not such a great idea and so rare to evolve. Because any small Turing machine could always be encoded more directly by your genes, with sufficient evolution. You have these organisms where their entire neural network is just hard-coded by the genes. So if you could do that, obviously that's way better than some sort of colossally expensive, unreliable, glitchy search process—like what humans implement—which takes whole days, in some cases, to learn. Whereas you could be hardwired right from birth.", "For many creatures, it just doesn't pay to be intelligent because that's not actually adaptive. There are better ways to solve the problem than a general purpose intelligence.", "In any kind of niche where it's static, or where intelligence will be super expensive, or where you don't have much time because you're a short-lived organism, it's going to be hard to evolve a general purpose learning mechanism when you could instead evolve one that's tailored to the specific problem that you encounter.", "(00:10:39) - Seeing scaling early", "Dwarkesh Patel", "You're one of the only people outside OpenAI in 2020 who had a picture of the way in which AI was progressing and had a very detailed theory, an empirical theory of scaling in particular. I’m curious what processes you were using at the time which allowed you to see the picture you painted in the “Scaling Hypothesis” post that you wrote at the time.", "Gwern", "If I had to give an intellectual history of that for me, it would start in the mid-2000s when I’m reading Moravec and Ray Kurzweil . At the time, they're making this kind of fundamental connectionist argument that if you had enough computing power, that could result in discovering the neural network architecture that matches the human brain. And that until that happens, until that amount of computing power is available, AI is basically futile.", "To me, I found this argument very unlikely, because it’s very much a “build it and they will come” view of progress, which at the time I just did not think was correct. I thought it was ludicrous to suggest that simply because there’s some supercomputer out there which matches the human brain, then that would just summon out of nonexistence the correct algorithm.", "Algorithms are really complex and hard! They require deep insight—or at least I thought they did. It seemed like really difficult mathematics. You can't just buy a bunch of computers and expect to get this advanced AI out of it! It just seemed like magical thinking.", "So I knew the argument, but I was super skeptical. I didn't pay too much attention, but Shane Legg and some others were very big on this in the years following. And as part of my interest in transhumanism and LessWrong and AI risk, I was paying close attention to Legg’s blog posts where he's extrapolating out the trend with updated numbers from Kurzweil and Moravec. And he's giving very precise predictions about how we’re going to get the first generalist system around 2019, as Moore's law keeps going. And then around 2025, we'll get the first human-ish agents with generalist capabilities. Then by 2030, we should have AGI.", "Along the way, DanNet and AlexNet came out. When those came out I was like, “Wow, that's a very impressive success story of connectionism. But is it just an isolated success story? Or is this what Kurzweil and Moravec and Legg were predicting— that we would get GPUs and then better algorithms would just show up?”", "So I started thinking to myself that this is something to keep an eye on. Maybe this is not quite as stupid an idea as I had originally thought. I just keep reading deep learning literature and noticing again and again that the dataset size keeps getting bigger. The models keep getting bigger. The GPUs slowly crept up from one GPU—the cheapest consumer GPU—to two, and then they were eventually training on eight.", "And you can just see the fact that the neural networks keep expanding from these incredibly niche use cases that do next to nothing. The use just kept getting broader and broader and broader. I would say to myself, “Wow, is there anything CNNs can't do?” I would just see people apply CNN to something else every individual day on arXiv.", "So for me it was this gradual trickle of drops hitting me in the background as I was going along with my life. Every few days, another drop would fall. I’d go, “Huh? Maybe intelligence really is just a lot of compute applied to a lot of data, applied to a lot of parameters. Maybe Moravec and Legg and Kurzweil were right.” I’d just note that, and continue on, thinking to myself, “Huh, if that was true, it would have a lot of implications.”", "So there was no real eureka moment there. It was just continually watching this trend that no one else seemed to see, except possibly a handful of people like Ilya Sutskever , or Schmidhuber . I would just pay attention and notice that the world over time looked more like their world than it looked like my world, where algorithms are super important and you need like deep insight to do stuff. Their world just kept happening.", "And then GPT-1 comes out and I was like, “Wow, this unsupervised sentiment neuron is just learning on its own. That's pretty amazing.” It was also a very compute-centric view. You just build the Transformer and the intelligence will come.", "And then GPT-2 comes out and I had this “holy shit!” moment. You look at the prompting and the summarization: “Holy shit, do we live in their world?", "And then GPT-3 comes out and that was the crucial test. It's a big, big scale-up. It's one of the biggest scale-ups in all neural network history . Going from GPT-2 to GPT-3, that's not a super narrow specific task like Go . It really seemed like it was the crucial test. If scaling was bogus, then the GPT-3 paper should just be unimpressive and wouldn't show anything important. Whereas if scaling was true, you would just automatically be guaranteed to get so much more impressive results out of it than GPT-2.", "I opened up the first page, maybe the second page, and I saw the few-shot learning chart. And I'm like, “Holy shit, we are living in the scaling world. Legg and Moravec and Kurzweil were right! ”", "And then I turned to Twitter and everyone else was like, “Oh, you know, this shows that scaling works so badly. Why, it's not even state-of-the-art!” That made me so angry I had to write all this up. Someone was wrong on the Internet.", "Dwarkesh Patel", "I remember in 2020, people were writing bestselling books about AI. It was definitely a thing people were talking about, but people were not noticing the most salient things in retrospect: LLMs , GPT-3, scaling laws. All these people who are talking about AI but missing this crucial crux, what were they getting wrong?", "Gwern", "I think for the most part they were suffering from two issues. First, they had not been paying attention to all of the scaling results before that which were relevant. They had not really appreciated the fact that, for example, AlphaZero was discovered in part by DeepMind doing Bayesian optimization on the hyperparameters and noticing that you could just get rid of more and more of the tree search as you went and you got better models. That was a critical insight, which could only have been gained by having so much compute power that you could afford to train many, many versions and see the difference that that made.", "Similarly, those people simply did not know about the Baidu paper on scaling laws in 2017 , which showed that the scaling laws just keep going and going forever, practically. It should have been the most important paper of the year, but a lot of people just did not prioritize it. It didn't have any immediate implication, and so it sort of got forgotten. People were too busy discussing Transformers or AlphaZero or something to really notice it.", "So that was one issue. Another issue is that they shared the basic error I was making about algorithms being more important than compute. This was, in part, due to a systematic falsification of the actual origins of ideas in the research literature. Papers do not tell you where the ideas come from in a truthful manner. They just tell you a nice sounding story about how it was discovered. They don’t tell you how it’s actually discovered.", "So even if you appreciate the role of trial and error and compute power in your own experiment as a researcher, you probably just think, “Oh, I got lucky that way. My experience is unrepresentative. Over in the next lab, there they do things by the power of thought and deep insight.”", "Then it turns out that everywhere you go, compute and data, trial and error, and serendipity play enormous roles in how things actually happened. Once you understand that, then you understand why compute comes first. You can't do trial and error and serendipity without it. You can write down all these beautiful ideas, but you just can't test them out.", "Even a small difference in hyperparameters, or a small choice of architecture, can make a huge difference to the results. When you only can do a few instances, you would typically find that it doesn't work, and you would give up and you would go away and do something else.", "Whereas if you had more compute power, you could keep trying. Eventually, you hit something that works great. Once you have a working solution, you can simplify it and improve it and figure out why it worked and get a nice, robust solution that would work no matter what you did to it. But until then, you're stuck. You're just flailing around in this regime where nothing works.", "So you have this horrible experience going through the old deep learning literature and seeing all sorts of contemporary ideas people had back then, which were completely correct. But they didn't have the compute to train what you know would have worked. It’s just tremendously tragic. You can look at things like ResNets being published back in 1988 , instead of 2015.", "And it would have worked! It did work, but at such a small scale that it was irrelevant. You couldn't use it for anything real. It just got forgotten, so you had to wait until 2015 for ResNets to actually come along and be a revolution in deep learning.", "So that’s kind of the double bias of why you would believe that scaling was not going to work. You did not notice the results that were key, in retrospect, like the BigGAN scaling to 300 million images. There are still people today who would tell you with a straight face that GANs cannot scale past millions of images. They just don't know that BigGAN handled 300 million images without a sweat. If you don't know that, well you probably would easily think, “Oh, GANs are broken.” But if you do know that, then you think to yourself, “How can algorithms be so important when all these different generative architectures all work so well—as long as you have lots and lots of GPUs?” That's the common ingredient. You have to have lots and lots of GPUs.", "(00:21:04) - AGI Timelines", "Dwarkesh Patel", "What do your timelines look like over the last 20 years? Is AI just monotonically getting closer over time?", "Gwern", "I would say it was very far away, from like 2005 to 2010. It was somewhere well past like 2050. It was close enough that I thought I might live to see it, but I was not actually sure if there was any reasonable chance.", "But once AlexNet and DanNet came out, then it just kept dropping at a rate of like 2 years per year, every year until now. We just kept on hitting barriers to deep learning and doing better. Regardless of how it was doing it, it was obviously getting way better. It just seemed none of the alternative paradigms were doing well. This one was doing super well.", "Dwarkesh Patel", "Was there a time that you felt you had updated too far?", "Gwern", "Yeah, there were a few times I thought I had overshot. I thought people over-updated on AlphaGo. They went too far on AI hype with AlphaGo. Afterwards, when pushes into big reinforcement learning efforts kind of all fizzled out—like post-Dota , as the reinforcement learning wasn't working out for solving those hard problems outside of the simulated game universes—then I started thinking, “Okay, maybe we kinda overshot there…”", "But then GPT came out of nowhere and basically erased all that. It was like, \"Oh, shit. Here's how RL is going to work. It's going to be the cherry on the cake . We're just going to focus on the cake for a while.” Now we have actually figured out a good recipe for baking a cake, which was not true before.", "Before, it seemed like you were going to have to brute-force it end-to-end from the rewards. But now you can do the LeCun thing, of learning fast on generative models and then just doing a little bit of RL on top to make it do something specific.", "(00:22:54) - What to do in remaining 3 years until AGI", "Dwarkesh Patel", "Now that you know that AGI is a thing that's coming, what’s your thinking around how you see your role in this timeline? How are you thinking about how to spend these next few years?", "Gwern", "I have been thinking about that quite a lot. What do I want to do? What would be useful to do?", "I'm doing things now because I want to do them, regardless of whether it will be possible for an AI to do them in like 3 years. I do something because I want to. Because I like it, I find it funny or whatever. Or I think carefully about doing just the human part of it, like laying out a proposal for something.", "If you take seriously the idea of getting AGI in a few years, you don't necessarily have to implement stuff and do it yourself. You can sketch out clearly what you want, and why it would be good and how to do it. And then just wait for the better AGI to come along and actually do it then. Unless there's some really compelling reason to do it right now and pay that cost of your scarce time.", "But otherwise, I’m trying to write more about what is not recorded. Things like preferences and desires and evaluations and judgments. Things that an AI could not replace even in principle.", "The way I like to put it is that “the AI cannot eat ice cream for you”. It cannot decide for you which kind of ice cream you like. Only you can do that. And if anything else did, it would be worthless, because it's not your particular preference.", "That's kind of the rubric. Is this something I want to do regardless of any future AI, because I enjoy it? Or is this something where I'm doing only the human part of it and the AGI can later on do it? Or is this writing down something that is unwritten and thus helping the future AI versions of me?", "So if it doesn't fall under those 3, I have been trying to not do it.", "If you look at it that way, many of the projects that people do now have basically no lasting value. They’re doing things that they don't enjoy, which record nothing ephemeral of value that could not be inferred or generated later on. They are, at best, getting 2 or 3 years of utility out of it before it could have been done by an AI system.", "Dwarkesh Patel", "Wait, your timeline for when an AI could write a Gwern-quality essay is two to three years?", "Gwern", "Ehmm… I have ideas about how to make it possible, which might not require AGI if it combined my entire corpus. Many potential essay ideas are already mostly done in my corpus. So you don't need to be super intelligent to pull it out.", "So let’s talk about AGI in general: the Anthropic timeline of 2028 seems like a good personal planning starting point. Even if you're wrong, you probably weren't going to do a lot of projects within the next 3 years anyway. It's not like you really lost much by instead just writing down the description. You can always go back and do it yourself if you're wrong.", "(00:26:29) - Influencing the shoggoth with writing", "Dwarkesh Patel", "You wrote an interesting comment about getting your work into the LLM training corpus: \"there has never been a more vital hinge-y time to write.\"", "Do you mean that in the sense that you will be this drop in the bucket that’s steering the Shoggoth one way or the other? Or do you mean it in the sense of making sure your values and persona persist somewhere in latent space ?", "Gwern", "I mean both. By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.", "There are ways to influence the Shoggoth more, but not many. If you don't already occupy a handful of key roles or work at a frontier lab, your influence rounds off to 0, far more than ever before. If there are values you have which are not expressed yet in text, if there are things you like or want, if they aren't reflected online, then to the AI they don't exist. That is dangerously close to won't exist.", "But yes, you are also creating a sort of immortality for yourself personally. You aren't just creating a persona, you are creating your future self too. What self are you showing the LLMs, and how will they treat you in the future?", "I give  the example of Kevin Roose discovering that current LLMs—all of them, not just GPT-4—now mistreat him because of his interactions with Sydney , which \"revealed\" him to be a privacy-invading liar, and they know this whenever they interact with him or discuss him. Usually, when you use a LLM chatbot, it doesn't dislike you personally ! On the flip side, it also means that you can try to write for the persona you would like to become, to mold yourself in the eyes of AI, and thereby help bootstrap yourself.", "Dwarkesh Patel", "Things like the Vesuvius Challenge show us that we can learn more about the past than we thought possible. They’ve leaked more bits of information that we can recover with new techniques. Apply that to the present and think about what the future superhuman intelligences will be trying to uncover about the current present. What kinds of information do you think are going to be totally inaccessible to the transhumanist historians of the future?", "Gwern", "Any kind of stable, long-term characteristics , the sort of thing you would still have even if you were hit on the head and had amnesia… Anything like that will be definitely recoverable from all the traces of your writing , assuming you're not pathologically private and destroy everything possible. That should all be recoverable.", "What won't be recoverable will be everything that you could forget ordinarily: autobiographical information, how you felt at a particular time, what you thought of some movie. All of that is the sort of thing that vanishes and can't be recovered from traces afterwards.", "If it wasn't written down, it wasn't written down.", "(00:30:50) - Human vs artificial intelligence", "Dwarkesh Patel", "What is the biggest unresolved tension in your worldview?", "Gwern", "The thing I swing back and forth the most on is the relationship between human intelligence and neural network intelligence.", "It's not clear in what sense they are two sides of the same coin, or one is an inferior version of the other. This is something that I constantly go back and forth on: “Humans are awesome.” “No, neural networks are awesome.” Or, “No, both suck.” Or, “Both are awesome, just in different ways.”", "So every day I argue with myself a little bit about why each one is good or bad or how. What is the whole deal there with things like GPT-4 and memorization, but not being creative? Why do humans not remember anything, but we still seem to be so smart? One day I'll argue that language models are sample efficient compared to humans. The next day I'll be arguing the opposite.", "Dwarkesh Patel", "One of the interesting points you made to me last year was that AI might be the most polymathic topic to think about because there’s no field or discipline that is not relevant to thinking about AI. Obviously you need computer science and hardware. But you also need things like primatology and understanding what changed between chimp and human brains, or the ultimate laws of physics that will constrain future AI civilizations. That’s all relevant to understanding AI. I wonder if it’s because of this polymathic nature of thinking about AI. that you’ve been especially productive at it.", "Gwern", "I'm not sure it was necessary. When I think about others who were correct, like Shane Legg or Dario Amodei , they don't seem to be all that polymathic. They just have broad intellectual curiosity, broad general understanding, absolutely. But they’re not absurdly polymathic. Clearly you could get to the correct view without being polymathic. That's just how I happen to come to it at this point and the connection I’m making post hoc.", "It wasn’t like I was using primatology to justify scaling to myself. It's more like I'm now using scaling to think about primatology. Because, obviously, if scaling is true, it has to tell us something about humans and monkeys and all other forms of intelligence. It just has to. If that works, it can't be a coincidence and totally unrelated. I refuse to believe that there are two totally unrelated kinds of intelligence, or paths to intelligence—where humans, monkeys, guppies, dogs are all one thing, and then neural networks and computers are another thing—and they have absolutely nothing to do with each other.", "That's obviously wrong. They can be two sides of the same coin. They can obviously have obscure connections. Maybe one could be a better form or whatever. They can't just be completely unrelated. As if humans finally got to Mars and then simultaneously a bunch of space aliens landed on Mars for the first time and that's how we met. You would never believe that. It would be just too absurd.", "(00:33:52) - Rabbit holes", "Dwarkesh Patel", "What is it that you are trying to maximize in your life?", "Gwern", "I maximize rabbit holes. I love more than anything else, falling into a new rabbit hole. That's what I really look forward to. Like this sudden new idea or area that I had no idea about, where I can suddenly fall into a rabbit hole for a while. Even things that might seem bad are a great excuse for falling into a rabbit hole.", "Here’s one example. I buy some catnip for my cat and I waste $10 when I find out that he's catnip-immune. I can now fall into a rabbit hole of the question of “well, why are some cats catnip-immune? Is this a common thing in other countries? How does it differ in other countries? What alternative catnip drugs are there?” (It turned out to be quite a few.)", "I was wondering, “How can I possibly predict which drug my cat would respond to? Why are they reacting in these different ways?”... Just a wonderful rabbit hole of new questions and topics I can master and get answers to, or create new ones, and exhaust my interest until I find the next rabbit hole I can dig and dive into.", "Dwarkesh Patel", "What is the longest rabbit hole you've gone on which didn't lead anywhere satisfying?", "Gwern", "That was my very old work on the anime Neon Genesis Evangelion , which I was very fond of when I was younger. I put a ludicrous amount of work into reading everything ever written about Evangelion in English and trying to understand its development and why it is the way it is. I never really got a solid answer on that before I burned out on it.", "I actually do understand it now by sheer chance many years later. But at this point, I no longer care enough to write about it or try to redo it or finish it. In the end, it all wound up being basically a complete waste.", "I have not used it in any of my other essays much at all. That was really one deep rabbit hole that I almost got to the end of, but I couldn't clinch it.", "Dwarkesh Patel", "How do you determine when to quit a rabbit hole? And how many rabbit holes do you concurrently have going on at the same time?", "Gwern", "You can only really explore two or three rabbit holes simultaneously. Otherwise, you aren't putting real effort into each one. You’re not really digging the hole, it's not really a rabbit hole. It's just something you are somewhat interested in. A rabbit hole is really obsessive. If you aren't obsessed with it and continually driven by it, it's not a rabbit hole. That’s my view. I’d say two or three max, if you're spending a lot of time and effort on each one and neglecting everything else.", "As for when you exit a rabbit hole, you usually hit a very natural terminus where getting any further answers requires data that do not exist or you have questions that people don't know the answer to. You reach a point where everything dies out and you see no obvious next step.", "One example would be when I was interested in analogs to nicotine that might be better than nicotine. That was a bit of a rabbit hole, but I quickly hit the dead end that there are none. That was a pretty definitive dead end. I couldn't get my hands on the metabolites of nicotine as an alternative. So if there are no analogs and you can't get your hands on the one interesting chemical you find, well that's that. That's a pretty definitive end to that rabbit hole.", "Dwarkesh Patel", "Have you always been the kind of person who falls into rabbit holes? When did this start?", "Gwern", "Oh, yeah. My parents could tell you all about that. I was very much your stereotypical nerdy little kid having the dinosaur phase and the construction equipment phase and the submarine and tank phase.", "Dwarkesh Patel", "Many kids are into “those things,” but they don't rabbit hole to the extent that they’re forming taxonomies about the different submarines and flora and fauna and dinosaurs, and developing theories of why they came to be and so forth.", "Gwern", "Well, I think it's more that people grow out of being very into rabbit holes as a kid. For me, it was not so much that I was all that exceptional in having obsessions as a kid.", "It’s more that they never really stopped. The tank phase would be replaced by my Alcatraz phase where I would go to the public library and check out everything they had about Alcatraz. That would be replaced by another phase where I was obsessed with ancient Japanese literature. I would check out everything that the library had about Japanese literature before the haiku era. The process of falling into these obsessions kept going for me.", "(00:38:48) - Hearing impairment", "Dwarkesh Patel", "By the way, do you mind if I ask how long you’ve been hearing impaired?", "Gwern", "Since birth. I've always been hearing impaired.", "Dwarkesh Patel", "And I assume that impacted you through your childhood and at school?", "Gwern", "Oh, yeah, absolutely, hugely. I went to a special ed school before kindergarten for hearing impaired and other handicapped kids. During school it was very rough because at the time, we had to use pairs of hearing aids hooked up to the teacher. Every class I would have to go up to the teacher with a big brown box with the hearing aids so she could use it. I always felt very humiliated by that, how it marked me out as different from other kids, not being able to hear.", "The effects on socializing with other kids is terrible because you're always a second behind in conversation if you're trying to understand what the other person is saying. The hearing aids back then were pretty terrible. They've gotten a lot better but back then they were pretty terrible. You would always be behind. You'd always be feeling like the odd person out. Even if you could have been a wonderful conversationalist, you can't be if you're always a second behind and jumping in late. When you are hearing impaired, you understand acutely how quickly conversation moves. Milliseconds separate the moment between jumping in and everyone letting you talk, and someone else talking over you. That's just an awful experience if you're a kid who's already kind of introverted. It’s not like I was very extroverted as a kid, or now. So that was always a barrier.", "Then you had a lot of minor distortions. I still have a weird fear of rain and water because it was drilled into me that I could not get the hearing aids wet because they were very expensive. I would always feel a kind of low-grade, stressful anxiety around anywhere like a pool, a body of water. Even now, I always feel weird about swimming, which I kind of enjoy. But I'm always thinking to myself, “Oh, wow, I won't be able to see because I'm nearsighted and I won't be able to hear because I had to take off my hearing aid to go in. I can't hear anything that anyone says to me in the pool, which takes a lot of the fun out of it.”", "Dwarkesh Patel", "You have a list of open questions on your website and one of them is, “Why do the biographies of so many great people start off with traumatic childhoods?” I wonder if you have an answer for yourself. Was there something about the effect that hearing impairment had on your childhood, your inability to socialize, that was somehow important to you becoming Gwern?", "Gwern", "It definitely led to me being so much of a bookworm. That's one of the things you can do as a kid which is completely unaffected by any kind of hearing impairment. It was also just a way to get words and language. Even now, I still often speak words in an incorrect way because I only learned them from books. It's the classic thing where you mispronounce a word because you learn it from a book and not from hearing other people sound it out and say it.", "Dwarkesh Patel", "Is your speech connected to your hearing impairment?", "Gwern", "Yes. The deaf accent is from the hearing impairment. It's funny, at least three people on this trip to SF have already asked me where I am really from. It's very funny. You look at me and you’re like, “Oh, yes, he looks like a perfectly ordinary American.” Then I open my mouth and it’s, “Oh, gosh, he's Swedish. Wow. Or maybe possibly Norwegian. I'll ask him where he's actually from. How did he come to America?”", "I've been here the whole time! That's just how hearing impaired people sound. No matter how fluent you get, you still bear the scars of growing up hearing impaired. At least when you're born with it—or from very early childhood—your cognitive development of hearing and speech is always a little off, even with therapy.", "One reason I don't like doing podcasts is that I have no confidence that I sound good, or at least, sound nearly as good as I write. Maybe I'll put it that way.", "(00:43:00) - Wikipedia editing", "Dwarkesh Patel", "What were you doing with all these rabbit holes before you started blogging? Was there a place where you would compile them?", "Gwern", "Before I started blogging, I was editing Wikipedia.", "That was really gwern.net before gwern.net . Everything I do now with my site, I would have done on English Wikipedia. If you go and read some of the articles I am still very proud of—like the Wikipedia article on Fujiwara no Teika —and you would think pretty quickly to yourself, “Ah yes, Gwern wrote this, didn't he?”", "Dwarkesh Patel", "Is it fair to say that the training that required to make gwern.net happened on Wikipedia?", "Gwern", "Yeah. I think so. I have learned far more from editing Wikipedia than I learned from any of my school or college training. Everything I learned about writing I learned by editing Wikipedia.", "Dwarkesh Patel", "Honestly, it sounds like Wikipedia is a great training ground if you wanted to make a thousand more Gwerns. This is where we train them.", "Gwern", "Building something like an alternative to Wikipedia could be a good training ground. For me it was beneficial to combine rabbit-holing with Wikipedia, because Wikipedia would generally not have many good articles on the thing that I was rabbit-holing on.", "It was a very natural progression from the relatively passive experience of rabbit-holing—where you just read everything you can about a topic—to compiling that and synthesizing it on Wikipedia. You go from piecemeal, a little bit here and there, to writing full articles. Once you are able to write good full Wikipedia articles and summarize all your work, now you can go off on your own and pursue entirely different kinds of writing now that you have learned to complete things and get them across the finish line.", "It would be difficult to do that with the current English Wikipedia. It's objectively just a much larger Wikipedia than it was back in like 2004. But not only are there far more articles filled in at this point, the editing community is also much more hostile to content contribution, particularly very detailed, obsessive, rabbit hole-y kind of research projects. They would just delete it or tell you that this is not for original research or that you're not using approved sources. Possibly you’d have someone who just decided to get their jollies that day by deleting large swathes of your specific articles. That of course is going to make you very angry and make you probably want to quit and leave before you get going.", "So I don't quite know how you would figure out this alternative to Wikipedia, one that empowers the rabbit holer as much as the old Wikipedia did.", "When you are an editor with Wikipedia, you have a very empowered attitude because you know that anything in it could be wrong and you could be the one to fix it. If you see something that doesn't make sense to you, that could be an opportunity for an edit.", "That was, at least, the Wiki attitude: anyone could fix it, and “anyone” includes you.", "Dwarkesh Patel", "When you were an editor on Wikipedia, was that your full-time occupation?", "Gwern", "It would eat as much time as I let it. I could easily spend 8 hours a day reviewing edits and improving articles while I was rabbit-holing. But otherwise I would just neglect it and only review the most suspicious diffs on articles that I was particularly interested in on my watchlist. I might only spend like 20 minutes a day. It was sort of like going through morning email.", "Dwarkesh Patel", "Was this while you were at university or after?", "Gwern", "I got started on Wikipedia in late middle school or possibly early high school.", "It was kind of funny. I started skipping lunch in the cafeteria and just going to the computer lab in the library and alternating between Neopets and Wikipedia. I had Neopets in one tab and my Wikipedia watch lists in the other.", "Dwarkesh Patel", "Were there other kids in middle school or high school who were into this kind of stuff?", "Gwern", "No, I think I was the only editor there, except for the occasional jerks who would vandalize Wikipedia. I would know that because I would check the IP to see what edits were coming from the school library IP addresses. Kids being kids thought they would be jerks and vandalize Wikipedia.", "For a while it was kind of trendy. Early on, Wikipedia was breaking through to mass awareness and controversy. It’s like the way LLMs are now. A teacher might say, “My student keeps reading Wikipedia and relying on it. How can it be trusted?”", "So in that period, it was kind of trendy to vandalize Wikipedia and show your friends. There were other Wikipedia editors at my school in that sense, but as far as I knew I was the only one building it, rather than wrecking it.", "(00:47:43) - Gwern.net", "Dwarkesh Patel", "When did you start blogging on gwern.net ? I assume this was after the Wikipedia editor phase. Was that after university?", "Gwern", "It was afterwards. I had graduated and the Wikipedia community had been very slowly moving in a direction I did not like. It was triggered by the Siegenthaler incident which I feel was really the defining moment in the trend toward deletionism on Wikipedia. It just became ever more obvious that Wikipedia was not the site I had joined and loved to edit and rabbit hole on and fill in, and that if I continued contributing I was often just wasting my effort.", "I began thinking about writing more on my own account and moving into non-Wikipedia sorts of writings: persuasive essays, nonfiction, commenting, or possibly even fiction. I began gently moving beyond things like Reddit and LessWrong comments to start something longform.", "Dwarkesh Patel", "What was your first big hit?", "Gwern", "Silk Road . I had been a little bit interested in Bitcoin, but not too seriously interested in it because it was not obvious to me that it was going to work out, or even was technologically feasible. But when Adrian Chen wrote his Gawker article about buying LSD off Silk Road , all of a sudden I did a complete 180. I had this moment of, “Holy shit, this is so real that you can buy drugs off the Internet with it!”", "I looked into the Chen article and it was very obvious to me that people wanted to know what the ordering process was like. They wanted more details about what it’s like, because the article was very brief about that. It didn't go into any real detail about the process.", "So I thought, “Okay, I'm interested in nootropics . I'm interested in drugs. I will go and use Silk Road. I will document it for everyone, instead of everyone pussyfooting around it online and saying, ‘Oh, a friend of mine ordered off Silk Road and it worked.’ None of that bullshit. I will just document it straightforwardly.”", "I ordered some Adderall, I think it was, and documented the entire process with screenshots. I wrote it up and wrote some more on the intellectual background. That was a huge hit when I published it. It was hundreds of thousands of hits. It's crazy. Even today when I go to the Google Analytics charts, you can still see “Silk Road” spiking vertically like crazy and then falling back down. Nothing else really comes near it in terms of traffic. That was really quite something, to see things go viral like that.", "(00:50:20) - Counterfactual careers", "Dwarkesh Patel", "What are the counterfactual career trajectories and life paths that could have been for you if you didn’t become an online writer? What might you be doing instead that seems plausible?", "Gwern", "I could definitely have been an AI researcher, or possibly in management at one of the big AI companies. I would have regretted not being able to write about stuff, but I would’ve taken satisfaction in making it happen and putting my thumbprint on it. Those are totally plausible counterfactuals.", "Dwarkesh Patel", "Why didn't you?", "Gwern", "I kind of fell off that track very early on in my career when I found the curriculum of Java to be excruciatingly boring and painful. So I dropped out of computer science. That kind of put me off that track early on.", "And then various early writing topics made it hard to transition in any other way than starting a startup, which I'm not really temperamentally suited for. Things like writing about the darknet markets or behavioral genetics, these are topics which don't exactly scream “great hire.”", "Dwarkesh Patel", "Has agency turned out to be harder than you might have thought initially? We have models that seem like they should be able to do all of the individual things that a software engineer does. For example, all the code they might write, all the individual pull requests. But it seems like a really hard problem to get them to act as a coherent, autonomous, software engineer that puts in his eight hours a day.", "Gwern", "I think agency is, in many senses, actually easier to learn than we would have thought ten years ago. But we actually aren't learning agency at all in current systems. There’s no selection for that. All the agency there is is an accidental byproduct of somebody training on data.", "So from that perspective, it's miraculous that you can ask an LLM to try to do all these things and they have a non-trivial success rate. If you told people ten years ago—that you could just behavior-clone on individual letters following one by one, and you could get coherent action out of it and control robots and write entire programs—their jaws would drop and they would say that you've been huffing too many fumes from DeepMind or something.", "The reason that agency doesn't work is that we do so little actual agency training at all. An example of how you would do agency directly would be like Gato from DeepMind. There they’re actually training agents. Instead we train them on Internet scrapes which merely encode the outputs of agents or occasional descriptions of agents doing things. There’s no actual logging of state/action/result/reward sequences like a proper reinforcement learning setup would have.", "I would say that what's more interesting is that nobody wants to train agents in a proper reinforcement learning way. Instead, everyone wants to train LLMs and do everything with as little RL as possible in the backend.", "(00:54:30) - Borges & literature", "Dwarkesh Patel", "What would a person like you be doing before the Internet existed?", "Gwern", "If the Internet did not exist, I would have to have tried to make it in regular academia and maybe narrow my interests a lot more, something I could publish on regularly.", "Or I could possibly have tried to opt out and become a librarian like one of my favorite writers, Jorge Luis Borges . He was a librarian until he succeeded as a writer. Of course, I've always agreed with him about imagining paradise as a kind of library . I love libraries.", "I regret that all the reading I do is now on the computer and I don't get to spend much time in physical libraries. I do genuinely love them, just pouring through the stacks and looking for random stuff. Some of the best times for me in university was being able to go through these gigantic stacks of all sorts of obscure books and just looking at a random spine, pulling stuff off the shelf and reading obscure, old technical journals to see all the strange and wonderful things they were doing back then, which now have been forgotten.", "Dwarkesh Patel", "If you could ask Borges one question, what would it be?", "Gwern", "Oh. He's a real hero of mine. This is not something I want to give a bad answer to.", "[“Would it have been worth living if you could never write, only read, like the people in ‘The Library of Babel’?”]", "Dwarkesh Patel", "Can I ask why he's a hero of yours?", "Gwern", "When I was younger, one of the science fiction books that really impressed me was Dan Simmons' Hyperion , especially The Fall of Hyperion . In there, he alludes to Kevin Kelly's Out of Control book, which strongly features the parable of “The Library of Babel.” From there, I got the collected editions of Borges’ fiction and nonfiction . I just read through them again and again.", "I was blown away by the fact that you could be so creative, with all this polymathic knowledge and erudition, and write these wonderful, entertaining, provocative short stories and essays. I thought to myself, “If I could be like any writer, any writer at all, I would not mind being Borges.”", "Dwarkesh Patel", "Borges has a short poem called \"Borges and I\" where he talks about how he doesn’t identify with the version of himself that is actually doing the writing and publishing all of this great work. I don’t know if you identify with that at all.", "Gwern", "When I was a kid, I did not understand that essay, but I think I understand it now.", "Dwarkesh Patel", "What are other pieces of either literature that you encountered where now you really understand what they were getting at but you didn’t when you first came across them?", "Gwern", "Ted Chiang's \"Story of Your Life.\" I completely blew [it] understanding it the first time I read it. I had to get a lot more context where I could actually go back and understand what his point was . Gene Wolfe's \"Suzanne Delage\" story was a complete mystery to me. It took like 14 years to actually understand it . But I'm very proud of that one.", "Dwarkesh Patel", "What did you figure out about Suzanne Delage?", "Gwern", "Gene Wolfe's \"Suzanne Delage\" is a very, very short story about a guy remembering not meeting a woman in his local town and thinking, “Oh, that's kind of strange.” That's the whole story. Nobody has any idea what it means, even though we're told that it means something . Gene Wolfe is a genius writer, but nobody could figure it out for like 40 years.", "Last year I figured it out. It turns out it's actually a subtle retelling of Dracula , where Dracula invades the town and steals the woman from him. He's been brainwashed by Dracula—in a very Bram Stoker way—to forget it all. Every single part of the story is told by what's not said in the narrator's recollection. It's incredible. It's the only story I know which is so convincingly written by what's not in it.", "Dwarkesh Patel", "That’s crazy that you figured that out. The Ted Chiang stor y, the “Story of Your Life,” can you remind me what that one’s about?", "Gwern", "The surface story is just about a bunch of weird aliens who came to Earth.", "Dwarkesh Patel", "Oh, that's right, yeah. It’s the same plot as Arrival .", "Gwern", "They had a weird language which didn't have a sense of time. The narrator learned to see the future, and then the aliens left.", "Dwarkesh Patel", "What is it that you realized about that story?", "Gwern", "The first time I read it, it struck me as just a kind of stupid ESP story about seeing the future, very stupid, boring, standard conventional, verbose, and dragging in much irrelevant physics. Only a while after that did I understand that it was not about time travel or being able to see the future.", "It was instead about a totally alien kind of mind that’s equally valid in its own way, in which you see everything as part of an already determined story heading to a predestined end. This turned out to be mathematically equivalent and equally powerful as our conventional view of the world—events marching one by one to an unknown and changing future.", "That was a case where Chiang was just writing at too high a level for me to understand. I pattern-matched it to some much more common, stupid story.", "Dwarkesh Patel", "How do you think about the value of reading fiction versus nonfiction?", "Gwern", "You could definitely spend the rest of your life reading fiction and not benefit whatsoever from it other than having memorized a lot of trivia about things that people made up.", "I tend to be pretty cynical about the benefits of fiction. Most fiction is not written to make you better in any way. It's written just to entertain you, or to exist and to fill up time.", "Dwarkesh Patel", "But it sounds like your own ideas have benefited a lot from the sci-fi that you read.", "Gwern", "Yeah, but it’s extremely little sci-fi. Easily 99% of the sci-fi I read was completely useless to me. I could have easily cut it down to 20 novels or short stories which actually were good enough and insightful enough to actually change my view. One volume of Blindsight by Peter Watts is worth all hundred Xanth novels, or all 500 Expanded Universe novels of Star Wars .", "Dwarkesh Patel", "The ones that you did find insightful, the top 20 or so, what did they have in common?", "Gwern", "I would say that the characteristic they have is taking non-human intelligence seriously.", "It doesn't have to be artificial intelligence necessarily. It’s taking the idea of non-human intelligence seriously and not imagining your classic sci-fi scenario of humans going out into the galaxy with rayguns—the sort of thing where you have rockets and rayguns but you don't have cell phones.", "People complain that the Singularity is a sort of boring, overused sci-fi trope. But if you went out and actually grabbed random books of science fiction, you would find that less than 1% contain anything remotely like that, or have any kind of relevance to the current context that we actually face with AI.", "(01:01:32) - Gwern’s intelligence and process", "Dwarkesh Patel", "Do people tend to underestimate or overestimate your intelligence?", "Gwern", "I would say they overestimate it. They mistake for intelligence the fact that I remember many things, that I have written many things over many years. They imagine that if they sat me down, I could do it all spontaneously at the moment that they’re talking to me. But with many things I have thought about, I have the advantage of having looked at things before. So I’m cheating. When I talk to people, I may just be quoting something I've already written, or at least thought about.", "So I come off as a lot smarter than I actually am. I would say I'm not really all that smart, compared to many people I've known, who update very fast on the fly. But in the end, it's the output that matters, right?", "Dwarkesh Patel", "I guess there is an on-the-fly intelligence. But there's another kind too which is this ability to synthesize things over a long period of time, and then come up with grand theories as a result of these different things that you’re seeing. I don’t think that’s just crystallized intelligence, right?", "Gwern", "It's not just crystallized intelligence, but if you could see all the individual steps in my process, you'd be a lot less impressed. If you could see all of the times I just note down something like, “Hmm, that's funny.” Or, \"Huh, another example of that,\" and if you just saw each particular step, you would say that what I was doing was reasonable and not some huge sign of brilliance. It would make sense to you in that moment. It's only when that happens over a decade, and you don't see the individual stuff, that my output at the end looks like magic.", "One of my favorite quotes about this process is from the magicians Penn & Teller . Teller says “magic is putting in more effort than any reasonable person would expect you to.” He tells a story about how they make cockroaches appear from a top hat. The trick is that they researched and found special cockroaches, and then found special styrofoam to trap the cockroaches, and arranged all that, for just a single trick. No reasonable person would do that, but they did because they wanted the trick to really pay off. The result is cockroaches somehow appearing from an empty hat.", "If you could see each step, it would make sense on its own, it would just look effortful. But when you see only the final trick, then that whole process and its output becomes magic.", "Dwarkesh Patel", "That’s one of the interesting things about your process. There are a couple of writers like Matt Levine or Byrne Hobart who write an article every day. I think of them almost like autoregressive models. For you, on some of the blog posts you can see the start date and end date that you list on your website of when you’ve been working on a piece. Sometimes it’s like 2009 to 2024. I feel like that’s much more like diffusion . You just keep iterating on the same image again and again.", "One of my favorite blog posts of yours is “Evolution as Backstop for RL,” where you talk about evolution as basically a mechanism to learn a better learning process. And that explains why corporations don’t improve over time but biological organisms do. I’m curious if you can walk me through the years that it took to write that. What was that process like, step by step?", "Gwern", "So the “Backstop” essay that you're referring to is the synthesis of seeing the same pattern show up again and again: a stupid, inefficient way of learning, which you use to learn something smarter, but where you still can’t get rid of the original one entirely.", "Sometimes examples would just connect to each other when I was thinking about this. Other times —when I started watching for this pattern—I would say, \"Oh yes, ‘pain’ is a good example of this. Maybe this explains why we have pain in the very specific way that we have, when you can logically imagine other kinds of pain, and those other pains would be smarter, but nothing keeps them honest.”", "So you just chain them one by one, these individual examples of the pattern, and just keep clarifying the central idea as you go. Wittgenstein says that you can look at an idea from many directions and then go in spirals around it. In an essay like “Backstop,” it’s me spiraling around this idea of having many layers of “learning” all the way down.", "Dwarkesh Patel", "Once you notice one example of this pattern, like this pain example, do you just keep adding examples to that? Walk me through the process over time.", "Gwern", "For that specific essay, the first versions were about corporations not evolving. Then, as I read more and more of the meta reinforcement learning literature, from DeepMind especially, I added in material about neural networks. I kept reading and thinking about the philosophy of mind papers that I had read. I eventually nailed down the idea that pain might be another instance of this: “Pain makes us learn. We can’t get rid of it, because we need it to keep us honest.” At that point you have more or less the structure of the current essay.", "Dwarkesh Patel", "Are there examples where it’s not a matter of accumulating different instances of what you later realize is one bigger pattern? Rather, you just have to have the full thesis at once.", "Gwern", "For those essays where there is an individual eureka moment, there's usually a bunch of disparate things that I have been making notes on that I don't even realize are connected. They just bother me for a long time. They sit there bothering me. I keep looking for explanations for each one and not finding them. It keeps bothering me and bothering me.", "One day, I hit something that suddenly makes me go, “Bam, eureka. These are all connected!” Then I just have to sit down and write a single gigantic essay that pours out about it and then it's done. That particular essay will be done at that point—right in one go. I might add in many links to it or references later on, but it will not fundamentally change.", "Dwarkesh Patel", "What's an example of an essay that had this process?", "Gwern", "Someone asked about how I came up with one yesterday, as a matter of fact. It’s one of my oldest essays, “The Melancholy of Subculture Society.”", "For that one, I had been reading miscellaneous things like David Foster Wallace on tennis , people on Internet media like video games. One day it just hit me: it's incredibly sad that we have all these subcultures and tribes online that can find community together, but they are still incredibly isolated from the larger society. One day, a flash just hit me about how beautiful and yet also sad this is.", "I sat down and wrote down the entire thing more or less. I've not really changed it all that much. I've added more links and quotes and examples over time, but nothing important. The essence was just a flash and I wrote it down while it was there.", "Dwarkesh Patel", "One of the interesting quotes you have in the essay is from David Foster Wallace when he’s talkinag about the tennis player Michael Joyce . He’s talking about the sacrifices Michael Joyce has had to make in order to be top ten in the world at tennis. He’s functionally illiterate because he’s been playing tennis every single day since he was seven or something, and not really having any life outside of tennis.", "What are the Michael Joyce-type sacrifices that you have had to make to be Gwern?", "Gwern", "That's a hard hitting question, Dwarkesh! “How have I amputated my life in order to write?”... I think I've amputated my life in many respects professionally and personally, especially in terms of travel. There are many people I envy for their ability to travel and socialize, or for their power and their positions in places like Anthropic where they are the insiders. I have sacrificed whatever career I could have had, or whatever fun lifestyle: a digital nomad lifestyle and going outdoors, being a Buddhist monk, or maybe a fancy trader. All those have had to be sacrificed for the patient work of sitting down every day and reading papers until my eyes bleed, and hoping that something good comes out of it someday.", "Dwarkesh Patel", "Why does it feel like there's a trade off between the two? There are obviously many writers who travel a lot like Tyler Cowen . There are writers who have a lot of influence such as Jack Clark at Anthropic. Why does it feel like you can’t do both at the same time?", "Gwern", "I can't be or be compared to Tyler Cowen. Tyler Cowen is a one-man industry.", "Dwarkesh Patel", "So is Gwern.", "Gwern", "Yeah, but he cannot be replicated. I just cannot be Tyler Cowen. Jack Clark, he is also his own thing. He's able to write the stories in his issues very well while also being a policy person. I respect them and admire them.", "But none of those quite hit my particular interest and niche at following weird topics for a long period of time, and then collating and sorting through information. That requires a large commitment to reading vast masses of things in the hopes that some tiny detail perhaps will turn out to one day be important.", "(01:11:03) - A day in the life of Gwern", "Dwarkesh Patel", "So walk me through this process. You talked about reading papers until your eyes bleed at the end of the day. You wake up in the morning and you go straight to the papers? What does your day look like?", "Gwern", "The workflow right now is more like: I wake up, I do normal morning things, and then I clean up the previous day's work on the website. I deal with various issues, like formatting or spelling errors. I review it and think if I properly collated everything and put it in the right places. Sometimes I might have an extra thought that I need to add in or make a comment that I realize was important. That's the first step.", "After that, I often will shamelessly go to Twitter or my RSS feed and just read a large amount until perhaps I get distracted by a comment or a question from someone and maybe do some writing on that.", "After that, I take a break for lunch or whatever, and then go back to that and just keep going at it. Somewhere around evening, I will often get exhausted from all that, and try to do a real project or contribution to something. I’ll actually sit down and work on whatever I'm supposed to be working on that day.", "After that, I would typically go to the gym. By that point, I really am burned out from everything. Yes, I like going to the gym—not because I'm any kind of meathead or athlete or even really enjoy weightlifting—but because it's the most diametrically opposite thing I can do to sitting in front of a computer.", "Dwarkesh Patel", "This is your theory of burnout , right? That you have to do the exact opposite?", "Gwern", "Yes, when people experience burnout, you just feel a lack of reward for what you're doing or what you’re working on. You just need to do something different. Something as different as possible. Maybe you could do better than weightlifting, but it does feel very different from anything I do in front of a computer.", "Dwarkesh Patel", "I want to go back to your process. Everyday, you’re loading up all this context. You’re reading all the RSS feeds and all these papers. Are you basically making contributions to all your essays, adding a little bit here and there every single day? Or are you building up some potential which will manifest itself later on as a full essay, a fully formed thesis?", "Gwern", "I would say it’s the latter one. All the minor low-level additions and pruning and fixing I do is really not that important. It's more just a way to make nicer essays. It’s a purely aesthetic goal, to make as nice an essay as I possibly can. I'm really waiting to see what happens next. What will be the next thing I'll be provoked to write about? It's just passing the time in between sudden eruptions.", "I feel that for many writers, you can't neglect the gardening process. You don't harvest every day. You have to tend the garden for a long time in between harvests. If you start to neglect the gardening because you're gallivanting around the world… Let's say you're going to book signing events and doing all the publicity stuff. Then you're not doing the work of being in there and tending your garden. That's undermining your future harvest, even if you can't see it right now.", "If you ask what is Tyler Cowen's secret to being Tyler Cowen, my guess would be that he's just really good at tending his garden, even as he travels a crazy amount. That would be his secret, that he's able to read books on a plane. I can't read books on a plane. He's able to write everything in the airport. I can do a little bit of writing in the airport but not very much. He's just very robust to the wear and tear of traveling. I'll be collapsing in the hotel room after talking to people for eight hours. He's able to talk to people for eight hours and then go do podcasts and talk to someone for another four hours! That's extremely admirable, but I just can't do that.", "Dwarkesh Patel", "How often do you get bored? It sounds like you’re spending your whole day reading different things. Are they all just inherently interesting to you? Or do you just trudge through it even when it’s not compelling to you in the moment?", "Gwern", "I don't think I get bored too easily because I switch between so many different topics. Even if I'm kind of sick of deep learning papers, well, I have tons of other things I can read or argue with people about. So I don't really get bored. I just get exhausted. I have to go off and do something else, like lift weights.", "Dwarkesh Patel", "What is your most unusual but successful work habit?", "Gwern", "I think I get a lot more mileage out of arguing with people online than… pretty much any other writer does. [Patel laughs] Hey, I'm trying to give a genuine answer here, not some stupid thing about note-taking—a real answer!", "I get a lot more out of arguing with people than most people do. You need motivation to write and actually sit down, and crystallize something and do the harvest. After you tend your garden, you do have to do the harvest, and the harvest can be hard work. It's very tedious.", "There are many people I talk to who have many great ideas. But they don't want to harvest because it's tedious and boring. And it's very hot out there in the fields, reaping. You're getting dusty and sweaty. Why wouldn't you just be inside having lemonade?", "But motivation from arguing and being angry at people online is in plentiful supply. So I get a lot of mileage out of people being wrong on the Internet.", "Dwarkesh Patel", "What are the pitfalls of an isolated working process?", "Gwern", "There’s the obvious one: you could be arbitrarily wrong when writing by yourself and just become a crazy loony by having a ‘big take’.", "Aside from that, you also have the issue of the emotional toll of not having colleagues that you can convince. You often just have the experience of shouting onto the internet that continues to be wrong despite your shouting.", "One thing I observe is that very often independent writers are overcome by resentment and anger and disappointment. They sort of spiral out into bitterness and crankdom from there. That's kind of what kills them. They could have continued if they’d only been able to let go of the ideas and arguments and move on to the next topic.", "So I say that ‘spite can be a great motivation to write, but you have to use it skillfully and let it go afterwards’. You can only have it while you need motivation to write. If you keep going and hold on to it, you're poisoning yourself.", "(01:17:50) - Gwern’s finances", "Dwarkesh Patel", "I'm sure you're aware that many people comment on the fact that ‘if Gwern put the effort he spends optimizing the CSS on his website towards more projects and more writing, the benefits to society could be measured in the nearest million dollars’. What's your reaction to people who say you're spending too much time on site design?", "Gwern", "I have no defense at all there in terms of objective benefits to society. I do it because I'm selfish and I like it. That is my defense. I like the aesthetics of my website and it is a hobby.", "Dwarkesh Patel", "Does the design help you think?", "Gwern", "It does because I like rereading my stuff more when I can appreciate the aesthetics of it and the beauty of the website. It’s easier for me to tolerate reading something for the hundredth time when I would otherwise be sick to death of it. Site maintenance for the author is inherently this kind of spaced repetition . If I go over pages to check that some new formatting feature worked, I am getting spaced repetition there. More than once, I’ve gone to check some stupid CSS issue and looked at something and thought, “Oh, I should change something,” or, “Oh, that means something.”", "So in a way, it's not as much of a waste as it looks, but I can't defend it entirely. If someone wants to make their own website, they should not invest that much for the aesthetic value.", "I just want a really nice website. There's so many bad websites out there that it depresses me. There's at least one website I love.", "Dwarkesh Patel", "By the way, I’m going to mention this since you never mentioned it yourself. But I think the main way you fund your research is through your Patreon , right? You never advertise it but I feel—with the kind of thing you’re doing—if it were financially viable and got adequate funding, not only would you be able to keep doing it but other people who wanted to be independent researchers could see it’s a thing you can do. It’s a viable thing you can do. More Gwerns would exist.", "Gwern", "Well, I don't necessarily want more Gwerns to exist. I just want more writers and more activeness and more agency in general.", "I would be perfectly happy if someone simply wrote more Reddit comments and never took a dollar for their writings and just wrote better Reddit comments. I'd be perfectly happy if someone had a blog and they kept writing, but they just put a little more thought into the design. I'd be perfectly happy if no one ever wrote something, but they hosted PDFs so that links didn't rot.", "In general, you don't have to be a writer delivering longform essays. That's just one of many ways to write. It happened to be the one that I personally kind of prefer. But it'd be totally valid to be a Twitter thread writer.", "Dwarkesh Patel", "How do you sustain yourself while writing full time?", "Gwern", "Patreon and savings. I have a Patreon which does around $900-$1000/month, and then I cover the rest with my savings. I got lucky with having some early Bitcoins and made enough to write for a long time, but not forever. So I try to spend as little as possible to make it last.", "I should probably advertise the Patreon more, but I'm too proud to shill it harder.", "It's also awkward trying to come up with some good rewards which don't entail a paywall. Patreon and Substack work well for a lot of people like Scott Alexander , because they like writing regular newsletter-style updates but I don't like to. I just let it run and hope it works.", "Dwarkesh Patel", "Wait if you’re doing $900-1000/month and you’re sustaining yourself on that, that must mean you’re sustaining yourself on less than $12,000 a year. What is your lifestyle like at $12K?", "Gwern", "I live in the middle of nowhere. I don't travel much, or eat out, or have health insurance, or anything like that. I cook my own food. I use a free gym. There was this time when the floor of my bedroom began collapsing. It was so old that the humidity had decayed the wood. We just got a bunch of scrap wood and a joist and propped it up. If it lets in some bugs, oh well! I live like a grad student, but with better ramen. I don't mind it much since I spend all my time reading anyway.", "Dwarkesh Patel", "It's still surprising to me that you can make rent, take care of your cat, deal with any emergencies, all of that on $12K a year.", "Gwern", "I'm lucky enough to be in excellent health and to have had no real emergencies to date. This can't last forever, and so it won't. I'm definitely not trying to claim that this is any kind of ideal lifestyle, or that anyone else could or should try to replicate my approach! I got lucky with Bitcoin and with being satisfied with living like a monk and with my health.", "Anyone who would like to take up a career as a writer or blogger should understand that this is not an example they can imitate. I’m not trying to be a role model.", "Every writer will have to figure it out a different way. Maybe it can be something like a Substack, or just writing on the side while slinging Javascript for a tech company. I don’t know.", "Dwarkesh Patel", "It seems like you’ve enjoyed this recent trip to San Francisco? What would it take to get you to move here?", "Gwern", "Yeah, it is mostly just money stopping me at this point. I probably should bite the bullet and move anyway. But I'm a miser at heart and I hate thinking of how many months of writing runway I'd have to give up for each month in San Francisco.", "If someone wanted to give me, I don’t know, $50–100K/year to move to SF and continue writing full-time like I do now, I'd take it in a heartbeat. Until then, I'm still trying to psych myself up into a move.", "Dwarkesh Patel", "That sounds very doable. If somebody did want to contribute to making this move, and your research more generally, possible, how would they get in touch with you?", "Gwern", "I have a Stripe donation page , or they could just email me at [email protected] .", "(01:25:05) - The diversity of AI minds", "Dwarkesh Patel", "By when will AI models be more diverse than the human population?", "Gwern", "I'm going to say that if you exclude capability from that, AI models are already much more diverse cognitively than humans are.", "Different LLMs think in very distinct ways that you can tell right away from a sample of them. An LLM operates nothing like a GAN. A GAN also is totally different from VAEs . They have totally different latent spaces, especially in the lower end, where they’re small or bad models. They have wildly different artifacts and errors in a way that we would not see with humans.", "Humans are really very quite similar in writing and attitude compared to these absurd outputs of different kinds of models.", "Dwarkesh Patel", "Really? If you look at Chatbot Arena and you see side-by-side comparisons of the outputs of different models, it's often very hard to tell which ones comes from which model.", "Gwern", "Yeah but this is all very heavily tuned . Now you're restricting it to relatively recent LLMs, with everyone riding each other's coattails and often training on the same exact data. This is a situation much closer to if they were identical twins.", "If I don't restrict myself to just LLMs and I compare the wide diversity of say image generation models , they often have totally different ways. Some of them seem as similar to each other as ants do to beavers.", "Within LLMs, I would agree that there has been a massive loss of diversity. Things used to be way more diverse among LLMs. But across deep learning in general, we’ve seen a whole range of minds and ways to think that you won't find in any philosophy of mind paper.", "Dwarkesh Patel", "What's an example of two models that have these sorts of cognitive differences?", "Gwern", "I’ll give one example I was telling someone the other day. GAN models have incentives to hide things because it's an adversarial loss, whereas diffusion models have no such thing. So GAN models are ‘scared’. They put ‘hands’ off the screen . They just can't think about hands. Whereas diffusion models think about hands, but in their gigantic, monstrous, Cthulhu -esque abortions.", "(01:27:24) - GLP drugs and obesity", "Dwarkesh Patel", "People weren't paying attention to scaling in 2020. Is there some trend today where people aren’t really comprehending the full implications of where this is headed?", "Gwern", "I'm excited by the weight-loss drugs, the GLP drugs . Their effects in general on health and addiction across all sorts of behaviors really surprised me. No one predicted that as far as I know. While the results are still very preliminary, it does seem like it's real.", "I think that’s going to tell us something important about human willpower and dysfunctionality. What's going wrong broadly in the modern environment?", "Dwarkesh Patel", "Do GLP drugs break the Algernon argument— the one you listed in your blog post —that if there are any simple and useful interventions without bad side effects, then evolution should have already found them?", "Gwern", "It's too soon to say because we haven't actually figured out what's going on with the GLPs to even understand what they are doing at all, what has the off target. It's kind of crazy that activating and deactivating both work.", "It's a completely crazy situation. I don't really know what to think about the Algernon argument there. It could be that the benefits actually decrease fitness in the fertility sense because you're going out and having a happy life instead of having kids. No offense to parents. Or it could just be that it's hitting the body in a way that's really, really hard to replicate in any kind of genetic way.  Or it could be that it's just too soon.", "When I think back, I see that the obesity crisis only became obvious around the 1990s. It's quite recent. I look back at photos and today is completely unrecognizable from 1990. You look at photos and people are still thin. You look at photos now and everyone is like a blimp. So you can't possibly have any kind of Algernon argument over 20 or 30 years.", "Dwarkesh Patel", "When you look back at the Romans and you see how lead was constantly poisoning the entire city , what credence do you give to the possibility that something in our environment is having an effect on us on a similar magnitude of what lead was doing to the ancient Romans?", "Gwern", "I think the odds of there being something as bad as lead is almost 100%. We have so many things out there. Chemists are always cooking up new stuff. There are all sorts of things with microbiomes. Plastics are trendy, but maybe it's not plastics. Maybe it’s something else entirely. But there's almost no way that everything we have put out there is totally benign and safe and has no harmful effects at any concentration—that seems like a really strong claim to be making.", "I don't believe in any particular one, but I do believe in like, “1% here, 1% here, 1% here.” There's something out there. There's something out there where we're going to look back at and say, “Oh, wow, those people were really poisoning themselves just like with leaded gasoline . If only they had known x , y , and z . It’s so obvious now!”", "Dwarkesh Patel", "Do you think this would manifest itself most likely in cognitive impairments or obesity or something else?", "Gwern", "A priori, I would possibly expect intelligence to be the most fragile thing and most harmed by it. But when we look at the time series there, intelligence is pretty stable overall. So I have to say that whatever the harmful thing is, it’s probably not going to be on intelligence.", "Whereas obesity is a much better candidate because you do see obesity go crazy over the last 30 years.", "(01:31:08) - Drug experimentation", "Dwarkesh Patel", "I was surprised to hear you say yesterday that you are skeptical of Bay Area-type experimentation with psychedelics. I sort of associate you very much with experimentation with different substances and seeing if they are helpful to you. I’m curious why you draw Chesterton's fence here when it comes to psychedelics.", "Gwern", "The cleanest way to divide that would just be to point out that the effects of psychedelics can be acute and permanent.", "The things I was looking at are much more controlled in the sense that they are relatively manageable in effect. None of them affect your judgment permanently about whether to take more nootropics. Whereas something like LSD permanently changes how you see things such as taking LSD, or permanently changes your psychiatric state. There's a cumulative effect with psychedelics that you don't see much with nootropics, which makes nootropics inherently a heck of a lot safer and much more easy to quantify the effects of.", "With nootropics, you don't see people spinning off into the crazy outcomes psychedelics have. They get crazier and crazier each time they take another dose, which makes them crazy enough to want to take another dose. Psychedelics have what you might call a “self-recommending problem” where they always make you want to take more of them. It’s similar to meditation. What is the most visible sign of having done a lot of meditation? It's that you seem compelled to tell people that they ought to meditate. This kind of spiral leads to bad outcomes for psychedelics that you just don't see with nootropics.", "The standard failure case for nootropics is that you spent a few hundred or $1,000 and then you got no real benefit out of it. You went on with your life. You did some weird drugs for a while and that was all. That's not so bad. It's a weird way to get your entertainment... But in principle, it's not really all that worse than going to the movie theater for a while and spending $1,000 on movie theater tickets.", "With psychedelics, you're changing yourself permanently, irrevocably in a way you don't understand and exposing yourself to all sorts of malicious outside influences: whatever happens to occur to you while you're very impressionable.", "Okay, yeah, a few uses can be good. I have gotten good out of my few uses. But if you are doing it more than that, you should really have a hard look in the mirror about what benefit you think you are getting and how you are changing.", "(01:33:40) - Parasocial relationships", "Dwarkesh Patel", "People don’t know your voice. People don’t know your face. As a result, they have this interesting parasocial relationship with you. I wonder if you have a theory of what kind of role you fill in people’s life.", "Gwern", "What role do I actually fill, or what role would I want to fill?", "Dwarkesh Patel", "Let's do both.", "Gwern", "The role I want to fill is actually sort of like how LLMs see me, oddly enough. If you play around with LLMs like Claude-3, a character named “Gwern” sometimes will show up. He plays the role of a mentor or old wizard, offering insight into the situation, and exhorting them with a call to adventure . “ You too can write stuff and do stuff and think stuff!”", "I would like people to go away having not just been entertained or gotten some useful information, but be better people, in however slight a sense. To have an aspiration that web pages could be better, that the Internet could be better: “ You too could go out and read stuff! You too could have your thoughts and compile your thoughts into essays, too! You could do all this!”", "But I fear that the way it actually works for quite a few people is that I wind up as either a guru or trickster devil.", "Depending on whether you like me or hate me, either I am the god of statistics & referencing who can do no wrong—”Just take everything on the site as gospel!”, which I really dislike—or I'm just some sort of horrible, covert, malicious, neo-Nazi, eugenicist, totalitarian, communist, anti-Chinese devil figure lurking in the background trying to bring down Western society.", "(01:35:23) - Open rabbit holes", "Dwarkesh Patel", "Final question, what are the open rabbit holes you have—things you’re curious about but don't have an answer to—that you hope to have an answer to by 2050?", "Gwern", "By 2050, I really hope we can finally answer some of the big questions about ourselves that have just reliably resisted definitive answers. A lot of them might not matter any more, but I'd still like to know.", "Why do we sleep or dream? Why do humans age? Why does sexual reproduction exist? Why do humans differ so much, from each other and day to day? Why did humans take so long to develop technological civilization? Where are all the aliens? Why didn't China have the Industrial Revolution instead? How should we have predicted the deep learning revolution? Why are our brains so oversized compared to artificial neural networks?", "Those are some of the questions that I really hope we’ve answered by 2050.", "Dwarkesh Patel", "Alright Gwern, this has been excellent. Thank you for coming on the podcast." ]
[ "https://gwern.net/", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://gwern.net/scaling-hypothesis", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/Technological_singularity", "https://en.wikipedia.org/wiki/Hans_Moravec", "https://en.wikipedia.org/wiki/Samuel_Butler_(novelist)", "https://en.wikipedia.org/wiki/Erewhon", "https://en.wikipedia.org/wiki/Darwin_among_the_Machines", "https://ndhadeliver.natlib.govt.nz/webarchive/20210104000423/http://nzetc.victoria.ac.nz/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html", "https://gwern.net/newton", "https://en.wikipedia.org/wiki/Fermi_paradox", "https://en.wikipedia.org/wiki/Lucretius", "https://gwern.net/newton#lucretius", "https://pubmed.ncbi.nlm.nih.gov/6772266/", "https://en.wikipedia.org/wiki/Turing_machine", "https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence", "https://en.wikipedia.org/wiki/Neural_network_(machine_learning)", "https://en.wikipedia.org/wiki/OpenAI", "https://gwern.net/scaling-hypothesis", "https://en.wikipedia.org/wiki/Ray_Kurzweil", "https://www.dwarkeshpatel.com/p/shane-legg", "https://en.wikipedia.org/wiki/Transhumanism", "https://www.lesswrong.com/", "https://www.vetta.org/2009/12/tick-tock-tick-tock-bing/", "https://www.vetta.org/2009/12/the-teenies/", "https://www.vetta.org/2010/12/goodbye-2010/", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://people.idsia.ch/~juergen/DanNet-triggers-deep-CNN-revolution-2011.html", "https://en.wikipedia.org/wiki/AlexNet", "https://en.wikipedia.org/wiki/Deep_learning", "https://en.wikipedia.org/wiki/Convolutional_neural_network", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber", "https://en.wikipedia.org/wiki/GPT-1", "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)", "https://en.wikipedia.org/wiki/GPT-2", "https://en.wikipedia.org/wiki/GPT-3", "https://en.wikipedia.org/wiki/History_of_artificial_neural_networks", "https://en.wikipedia.org/wiki/AlphaGo", "https://arxiv.org/abs/2005.14165", "https://xkcd.com/386/", "https://en.wikipedia.org/wiki/Large_language_model", "https://en.wikipedia.org/wiki/AlphaZero", "https://en.wikipedia.org/wiki/Google_DeepMind", "https://arxiv.org/abs/1812.06855#deepmind", "https://en.wikipedia.org/wiki/Tree_traversal", "https://arxiv.org/abs/1712.00409", "https://blogs.microsoft.com/next/2015/12/10/microsoft-researchers-win-imagenet-computer-vision-challenge/", "https://en.wikipedia.org/wiki/Residual_neural_network#Previous_work", "https://gwern.net/doc/ai/nn/fully-connected/1988-lang.pdf", "https://arxiv.org/abs/1809.11096", "https://en.wikipedia.org/wiki/Generative_adversarial_network", "https://openai.com/index/universe/", "https://en.wikipedia.org/wiki/OpenAI_Five", "https://gwern.net/doc/ai/nn/2019-lecun-isscctalk-cake.png", "https://en.wikipedia.org/wiki/Yann_LeCun", "https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai", "https://marginalrevolution.com/marginalrevolution/2024/08/the-wisdom-of-gwern-why-should-you-write.html", "https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html", "https://samanemami.medium.com/a-comprehensive-guide-to-latent-space-9ae7f72bdb2f", "https://www.nytimes.com/2024/08/30/technology/ai-chatbot-chatgpt-manipulation.html", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://scrollprize.org/", "https://gwern.net/difference", "https://gwern.net//doc/statistics/stylometry/truesight/index", "https://en.wikipedia.org/wiki/Dario_Amodei", "https://gwern.net/doc/anime/eva/index", "https://en.wikipedia.org/wiki/Neon_Genesis_Evangelion", "https://gwern.net/nicotine", "https://en.wikipedia.org/wiki/Alcatraz_Island", "https://en.wikipedia.org/wiki/Haiku", "http://gwern.net", "http://gwern.net", "https://en.wikipedia.org/wiki/Fujiwara_no_Teika", "http://gwern.net", "https://en.wikipedia.org/wiki/Neopets", "http://gwern.net", "https://en.wikipedia.org/wiki/Wikipedia_Seigenthaler_biography_incident", "https://en.wikipedia.org/wiki/Deletionism_and_inclusionism_in_Wikipedia", "https://en.wikipedia.org/wiki/Silk_Road_(marketplace)", "https://www.gawkerarchives.com/the-underground-website-where-you-can-buy-any-drug-imag-30818160", "https://www.gawkerarchives.com/the-underground-website-where-you-can-buy-any-drug-imag-30818160", "https://www.gawkerarchives.com/the-underground-website-where-you-can-buy-any-drug-imag-30818160", "https://en.wikipedia.org/wiki/Nootropic", "https://gwern.obormot.net/silk-road", "https://en.wikipedia.org/wiki/Java_(programming_language)", "https://en.wikipedia.org/wiki/Darknet", "https://en.wikipedia.org/wiki/Gato_(DeepMind)", "https://en.wikipedia.org/wiki/Jorge_Luis_Borges", "https://gwern.net/doc/borges/1977-borges-blindness.pdf#page=3", "https://amzn.to/3BvmQuL", "https://amzn.to/3BvmQuL", "https://amzn.to/4dEbtht", "https://en.wikipedia.org/wiki/Kevin_Kelly_(editor)", "https://amzn.to/3zU4Gm1", "https://en.wikipedia.org/wiki/The_Library_of_Babel", "https://amzn.to/47Ww8w9", "https://amzn.to/47XgyAc", "https://www.amherstlecture.org/perry2007/Borges%20and%20I.pdf", "https://en.wikipedia.org/wiki/Ted_Chiang", "https://raley.english.ucsb.edu/wp-content/uploads/Reading/Chiang-story.pdf", "https://gwern.net/story-of-your-life", "https://en.wikipedia.org/wiki/Gene_Wolfe", "https://www.lightspeedmagazine.com/fiction/suzanne-delage/", "https://gwern.net/suzanne-delage", "https://en.wikipedia.org/wiki/Dracula", "https://en.wikipedia.org/wiki/Bram_Stoker", "https://amzn.to/3zZ5fuM", "https://en.wikipedia.org/wiki/Arrival_(film)", "https://amzn.to/3BCejX8", "https://en.wikipedia.org/wiki/Peter_Watts_(author)", "https://en.wikipedia.org/wiki/Xanth", "https://starwars.fandom.com/wiki/Star_Wars_Legends", "https://en.wikipedia.org/wiki/Penn_%26_Teller", "https://archive.ph/FuYXW", "https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthew-s-levine", "https://www.thediff.co/", "https://en.wikipedia.org/wiki/Diffusion_model", "https://gwern.net/backstop", "https://en.wikipedia.org/wiki/Ludwig_Wittgenstein", "https://en.wikipedia.org/wiki/Philosophy_of_mind", "https://gwern.net/about#my-experience-of-writing", "https://gwern.net/subculture", "https://en.wikipedia.org/wiki/David_Foster_Wallace", "https://amzn.to/3XYy9mR", "https://en.wikipedia.org/wiki/Michael_Joyce_(tennis)", "https://archive.ph/yoKrc", "https://en.wikipedia.org/wiki/Anthropic", "https://www.dwarkeshpatel.com/p/tyler-cowen-2", "https://jack-clark.net/about/", "https://gwern.net/backstop#burnout", "https://gwern.net/doc/science/1986-hamming#anger-negativity-and-self-delusion", "https://www.astralcodexten.com/p/your-book-review-silver-age-marvel/comment/65693964", "https://gwern.net/design", "https://gwern.net/spaced-repetition", "https://www.patreon.com/gwern", "https://www.patreon.com/gwern", "https://www.patreon.com/gwern", "https://www.astralcodexten.com/about", "https://donate.stripe.com/6oE9DTgaf6oD0M03cc", "mailto:[email protected]", "https://en.wikipedia.org/wiki/Variational_autoencoder", "https://lmarena.ai/", "https://gwern.net/doc/reinforcement-learning/preference-learning/mode-collapse/index", "https://en.wikipedia.org/wiki/Text-to-image_model", "https://arxiv.org/abs/1910.11626", "https://gwern.net/crop#fn5", "https://en.wikipedia.org/wiki/Cthulhu", "https://en.wikipedia.org/wiki/GLP-1_receptor_agonist", "https://gwern.net/doc/longevity/glp/psychology/index", "https://gwern.net/drug-heuristic#:~:text=That%20intelligence%20(g%E2%81%A0)%20in,)%2C%20so%20why%20not%20intelligence%3F", "https://en.wikipedia.org/wiki/Obesity_in_the_United_States", "https://penelope.uchicago.edu/encyclopaedia_romana/wine/leadpoisoning.html", "https://en.wikipedia.org/wiki/Tetraethyllead", "https://fs.blog/chestertons-fence/", "https://en.wikipedia.org/wiki/Hero%27s_journey#The_Call_to_Adventure", "https://milan.cvitkovic.net/writing/things_youre_allowed_to_do/" ]
https://www.dwarkesh.com/p/holden-karnofsky
Holden Karnofsky - Transformative AI & Most Important Century
[ "Dwarkesh Patel All right, today I have the pleasure of speaking with Holden Karnofsky who is the co-CEO of Open Philanthropy. In my opinion, Holden is one of the most interesting intellectuals alive. Holden, welcome to the Lunar Society.", "Holden Karnofsky", "Thanks for having me.", "The Most Important Century", "Dwarkesh Patel", "Let's start off by talking about The Most Important Century thesis. Do you want to explain what this is for the audience?", "Holden Karnofsky", "My story is that I originally co-founded an organization called GiveWell that helps people decide where to give as effectively as possible. While I’m no longer as active as I once was there, I'm on its board. It's a website called GiveWell.org that makes good recommendations about where to give to charity to help a lot of people. As we were working at GiveWell, we met Cari Tuna and Dustin Moskovitz . Dustin is the co-founder of Facebook and Asana and we started a project that became Open Philanthropy to try to help them give away their large fortune and help as many people as possible. So I've spent my career looking for ways to do as much good as possible with a dollar, an hour, or basically whatever resources you have (especially with money).", "I've developed this professional specialization in looking for ideas that are underappreciated, underrated, and tremendously important because a lot of the time that's where I think you can find what you might call an “outsized return on investment.” There are opportunities to spend money and get an enormous impact because you're doing something very important that’s being ignored by others. So it's through that kind of professional specialization that I've actively looked for interesting ideas that are not getting enough attention. Then I encountered the Effective Altruist Community , which is a community of people basically built around the idea of doing as much good as you can. It's through that community that I encountered the idea of the most important century .", "It's not my idea at all, I reached this conclusion with the help and input of a lot of people. The basic idea is that if we developed the right kind of AI systems this century (and that looks reasonably likely), this could make this century the most important of all time for humanity. So now let’s talk about the basic mechanics of why that might be or how you might think about that. One thing is that if you look back at all of economic history ( the rate at which the world economy has grown), you see acceleration. You see that it's growing a lot faster today than it ever was. One theory of why that might be or one way of thinking about it through the lens of basic economic growth theory is that in normal circumstances, you can imagine a feedback loop where you have people coming up with ideas, and then the ideas lead to greater productivity and more resources.", "When you have more resources, you can also have more people, and then those people have more ideas. So you get this feedback loop that goes people, ideas, resources, people, ideas, resources. If you’re starting a couple of hundred years ago and you run a feedback loop like that, standard economic theory says you'll get accelerating growth. You'll get a rate of economic growth that goes faster and faster. Basically, if you take the story of our economy to date and you plot it on a chart and do the kind of simplest thing you can to project it forward, you project that it will go that our economy will reach an infinite growth rate this century.", "The reason that I currently don't think that's a great thing to expect by default is that one of the steps of that feedback loop broke a couple hundred years ago. So it goes more people, more ideas, more resources, more people, more ideas, more resources. But, a couple hundred years ago, people stopped having more children when they had more resources. They got richer instead of more populous. This is all discussed on the Most Important Century page on my blog, Cold Takes . What happens right now is that when we have more ideas and we have more resources, we don't end up with more people as a result. We don't have that same accelerating feedback loop. If you had AI systems that could do all the things humans do to advance science and technology (meaning the AI systems could fill in that “more ideas” part of the loop), then you could get that feedback loop back.", "You could get sort of this unbounded, heavily accelerating, explosive growth in science and technology. That's the basic dynamic at the heart of it and a way of putting it that's trying to use familiar concepts from economic growth theory. Another way of putting it might just be, “ Gosh, if we had AI systems that could do everything humans do to advance science and technology, that would be insane.” What if we were to take the things that humans do to create new technologies that have transformed the planet so radically and we were able to completely automate them so that every computer we have is potentially another mind working on advancing technology?", "So either way, when you think about it, you could imagine the world changing incredibly quickly and incredibly dramatically. I argue in the Most Important Century series that it looks reasonably likely, in my opinion, more than 50-50, that this century will see AI systems that can do all of the key tasks that humans do to advance science and technology. If that happens, we'll see explosive progress in science and technology. The world will quickly become extremely different from how it is today. You might think of it as if there was thousands of years of changes packed into a much shorter time period. If that happens, then I argue that you could end up in a deeply unfamiliar future. I give one example of what that might look like using this hypothetical technology idea called digital people. That would be sort of people that live in virtual environments that are kind of simulated, but also realistic and exactly like us.", "When you picture that kind of advanced world, I think there is a decent reason to think that if we did get that rate of scientific and technological advancement, we could basically hit the limits of science and technology. We could basically find most of what there is to find and end up with a civilization that expands well beyond this planet, has a lot of control over the environment, is very stable for very long periods of time, and looks sort of post-human in a lot of relevant ways. If you think that, then this is basically our last chance to shape how this happens. The most important century hypothesis in a nutshell is that if we develop AI that can do all the things humans do to advance science and technology, we could very quickly reach a very futuristic world that’s very different from today's. It could be a very stable, very large world, and this is our last chance to shape it.", "The Weirdness of Our Time", "Dwarkesh Patel", "Gotcha. I and many other people are going to find that very wild. Could you walk us through the process by which you went from working in global development to thinking this way? In 2014, for example, you had an interview or a conversation and this is a quote from there. “I have looked at the situation in Africa, have an understanding of the situation in Africa, and see a path of doing a lot of good in Africa. I don't know how to look into the far future situation, don't understand the far future situation, and don't see a path to doing good on that front I feel good about.” Maybe you can walk me through how you got from there to where you are today.", "Holden Karnofsky", "Firstly, I want to connect this back to how this relates to the work I was doing at GiveWell, and why this all falls under one theme. If we are on the cusp for this century of creating these advanced AI systems, then we could be looking at a future that's either very good or very bad. I think there are decent arguments that if we move forward without caution and we develop sloppily designed AI systems, they could end up with goals of their own. We would end up with a universe that contains very little that humans value or a galaxy that does or a world where very powerful technologies are used by ill-meaning governments to create a world that isn't very good. We could also end up with a world where we manage to eliminate a lot of forms of material scarcity and have a planet that's much better off than today's.", "A lot of what I ask is how can we help the most people possible per dollar spent? If you ask how we can help the most people possible per dollar spent, then funding some work to help shape that transition and make sure that we don't move forward too incautiously, and that we do increase the odds that we do get like a good future world instead of a bad future one, is helping a huge number of people per dollar spent. That's the motivation. You're quoting an argument I was having where we posted a transcript back in 2014–– a time that was part of my journey of getting here. I was talking to people who were saying, “Holden, you want to help a lot of people with your resources. You should be focused on this massive event that could be coming this century that very few people are paying attention to, and there might be a chance to make this go well or poorly for humanity.”", "So I was saying, “Gosh, like that sure is interesting.” And I did think it was interesting. That's why I was spending the time and having those conversations. But I said, “When I look at global poverty and global health, I see what I can do. I see the evidence. I see the actions I can take. I'm not seeing that with this stuff.” So what changed? I would say a good chunk of what changed is maybe like the most boring answer possible. I just kept at it. I was sitting there in 2014 saying, “ Gosh, this is really interesting, but it's all a bit overwhelming. It's all a bit crazy. I don't know how I would even think about this. I don't know how I would come up with a risk from AI that I actually believed was a risk and could do something about today. ” Now, I've just been thinking about this for a much longer time period. I do believe that most things you could say about the far future are very unreliable and not worth taking action on, but I think there are a few things one might say about what a transition to very powerful AI systems could look like. There are some things I'm willing to say would be bad if AI systems were poorly designed, had goals of their own, and ended up kind of running the world instead of humans. That seems bad.", "I am more familiar today than I was then with the research and the work people can do to make that less likely and the actions people can take to make that less likely–– so that's probably more than half the answer. But another thing that would be close to half the answer is that I think there have been big changes in the world of AI since then. 2014 was the beginning of what's sometimes called the “deep learning revolution” . Since then, we've basically seen these very computationally intensive (but fundamentally simple) AI systems achieve a lot of progress on lots of different unrelated tasks. It's not crazy to imagine that the current way people are developing AI systems, cutting-edge AI systems, could take us all the way to the kind of extremely powerful AI systems that automate roughly everything humans do to advance science and technology. It's not so wild to imagine that we could just keep on going with these systems, make them bigger, put more work into them, but basically stay on the same path and you could get there. If you imagine doing that, it becomes a little bit less daunting to imagine the risks that might come up and the things we could do about them. So I don't think it's necessarily the leading possibility, but it's enough to start thinking concretely about the problem.", "Dwarkesh Patel", "Another quote from the interview that I found appealing was “ Does the upper crust of humanity have a track record of being able to figure out the kinds of things MIRI claims to have figured out?” By the way, for context for the viewers, MIRI is the organization Eliezer was leading, which is who you were talking to at the time.", "Holden Karnofsky", "I don't remember exactly what kinds of things MIRI was trying to figure out and I'm not sure that I even understood what they were that well. I definitely think it is true that it is hard to predict the future, no matter who you are, no matter how hard you think, and no matter how much you've studied. I think parts of our “world” or memeplex or whatever you want to call it, overblow this at least a little bit. I think I was buying into that a little bit more than I could. In 2014, I would have said something like, “ Gosh, no one's ever done something like making smart statements about what several decades out of our future could look like or making smart statements about what we would be doing today to prepare for it. ” Since then, I think a bunch of people have looked into this and looked for historical examples of people making long-term predictions and long-term interventions. I don't think it's amazing, but I think I wrote a recent blog post entitled The Track Record of Future . It seems… fine. “Fine” is how I put it, where I don't think there's anyone who has demonstrated a real ability to predict the future with precision and know exactly what we should do.", "I also don't think humans' track record of this is so bad and so devastating that we shouldn't think we are capable of at least giving it a shot. If you enter into this endeavor with self-awareness about the fact that everything is less reliable than it appears and feels at first glance and you look for the few things that you would really bet on, I think it's worth doing. I think it's worth the bet. My job is to find 10 things we could do, and have nine of them fail embarrassingly, on the off chance that one of them becomes such a big hit that it makes up for everything else. I don't think it's totally crazy to think we could make meaningful statements about how things we do today could make these future events go better, especially if the future events are crazily far away (especially if they're within the next few decades.) That's something I've changed my mind on, at least to some degree.", "Dwarkesh Patel", "Gotcha. Okay, so we'll get to forecasting in a second, but let's continue on the object-level conversation about the most important century. I want to make sure I have the thesis right. Is the argument that because we're living in a weird time, we shouldn't be surprised if something transformative happens in a century or is the argument that something transformative could happen this century?", "Holden Karnofsky", "It's a weird time. So something we haven't covered yet, but I think is worth throwing in is that a significant part of the ‘ Most Important Century series ’ is making the case that even if you ignore AI, there's a lot of things that are very strange about the time that our generation lives in . The reason I spent so much effort on this is because back in 2014, my number one objection to these stories about transformative AI wasn’t anything about whether the specific claims about AI or economic models or alignment research made sense. This whole thing sounded crazy and was just suspicious. It's suspicious if someone says to you, “You know, this could be the most important century of all time for humanity.” I titled the series that way because I wanted people to know that I was saying something crazy and that I should have to defend it. I didn't want to be backpedaling or soft-pedaling or hiding what a big claim I was making.", "I think my biggest source of skepticism was how I didn’t have any specific objection. It sounds crazy and suspicious to say that we might live in one of the most significant times of the most significant time for humanity ever. So a lot of my series is saying that it is weird to think that, but we already have a lot of evidence that we live in an extraordinarily weird time that would be on the short list of contenders for the most important time ever–– even before you get into anything about AI, and just used completely commonly accepted facts about the world. For example, if you chart the history of economic growth, you’ll see that the last couple hundred years have seen faster growth by a lot than anything else in the history of humanity or the world. If you chart anything about scientific and technological developments, you’ll see that everything significant is packed together in the recent past. There's almost no way to cut it. I've looked at many different cuts of this. There's almost no way to cut it that won't give you that conclusion. One way to put it is that the universe is something like 11 or 12 billion years old. Life on Earth is three billion years old.", "Human civilization is a blink of an eye compared to that. We're in this really tiny sliver of time, the couple hundred years when we've seen a huge amount of technological advancement and economic growth. So that's weird. I also talk about the fact that the current rate of economic growth seems high enough that we can't keep it going for that much longer. If it went for another 10,000 years, that's another blink of an eye and galactic time scales. It looks to me like we would run out of atoms in the galaxy and wouldn't have anywhere to go. So I think there are a lot of signs that we just live in a really strange time. One more thing that I'll just throw in there–– I think a lot of people who disagree with my take would say, “Look, I do believe eventually we will develop space colonization abilities. We could go to the stars, fill up the galaxy with life, and maybe have artificial general intelligence, but to say that this will happen in a century is crazy. I think it might be 500 years. I think it might be a thousand years. I think it might be 5000 years.” A big point I make in the series is how I say, “Well, even if it's 100000 years, that's still an extremely crazy time to be in in the scheme of things .” If you make a graphic timeline and you show my view versus yours, they look exactly the same down to the pixel. So there are already a lot of reasons to think we live in a very weird time. We're on this planet where there's no other sign of life anywhere in the galaxy.", "We believe that we could fill up the galaxy with life. That alone would make us among the earliest life that has ever existed in the galaxy–– a tiny fraction of it. So that’s a lot of what the series is about. I'll answer this question explicitly. You ask, “Is this series about whether transformative AI will come and make this century weird? ” or is it about “ This century could be weird, and therefore transformative AI will come? ” The central claim is that transformative AI could be developed in this century and the sections about ‘how weird a time we live in’ are just a response to an objection. It's a response to a point of skepticism. It's a way of saying there are already a lot of reasons to think we live in a very weird time. So actually, this thing about AI is only a moderate quantitative update, not a complete revolution in the way you're thinking about things.", "Dwarkesh Patel", "There's a famous comedian who has a bit where he's imagining what it must have been like to live in 10BC. Let's say somebody comes with the proof that current deep learning techniques are not scalable for some reason and that transformative AI is very unlikely this century. I don't know if this is a hypothetical where that would happen, but let's just say that it is. Even if this is a weird time in terms of economic growth, does that have any implications other than transformative AI?", "Holden Karnofsky", "I encourage people to go to my series because I have a bunch of charts illustrating this and it could be a little bit hard to do concisely now. But having learned about just how strange the time we live in is when you look at it in context, I think the biggest thing I take away from this is how we should really look for the next big thing. If you'd been living 300 years ago and you'd been talking about the best way to help people, a lot of people might have been talking about various forms of helping low-income people. They probably would have been talking about spreading various religious beliefs. It would have seemed crazy to think that what you should be thinking about, for example, was the steam engine and how that might change the world, but I think the Industrial Revolution was actually an enormous deal and was probably the right thing to be thinking about if there's any way to be thinking about it how that would change the world and what one might do to make that a world that could be better.", "So that's basically where I'm at. I just think that as a world, as a global civilization, we should place a really high priority on saying that we live in a weird time. Growth has been exploding, accelerating over the last blink of an eye. We really need to be nervous and vigilant about what comes next and think about all the things that could radically transform the world. We should make a list of all the things that might radically transform the world, make sure we've done everything we can to think about them and identify the ways we might be able to do something today that would actually help. Maybe after we're done doing all that, we can have a lot of the world's brightest minds doing their best to think of stuff and when can't think of any more, then we can go back to all the other things that we worry about", "Right now the world invests so little in that kind of speculative, “Hey, what's the next big thing? ” Even if it's not super productive to do so, even if there's not that much to learn, I feel the world should be investing more in that because the stakes are extremely high. I think it’s a reasonable guess that we're living in a world that's recently been incredibly transformed by the Industrial Revolution and the future could be incredibly transformed by the next thing. I just don't think this gets a lot of discussion in basically any circles. If it got some, I would feel a lot more comfortable. I don't think the whole world should just obsess over what the next transformative event is, but I think right now there's so little attention to it.", "The Industrial Revolution", "Dwarkesh Patel", "I'm glad you brought up the Industrial Revolution because I feel like there are two implicit claims within the most important century thesis that don't seem perfectly compatible. One is that we live in an extremely wild time and that the transition here is potentially wilder than any other transition there has been before. The second is we have some sense of what we can be doing to make sure this transition goes well. Do you think that somebody at the beginning of the Industrial Revolution, knowing what they knew then, could have done something significant to make sure that it went as favorably as possible? Or do you think that that's a bad analogy for some reason?", "Holden Karnofsky", "It's a pretty good analogy for being thought-provoking and for thinking, “Gosh, if you had seen the Industrial Revolution coming in advance and this is when economic growth really reached a new level back in the 1700s and 1800s, what could you have done?” I think part of the answer is it's not that clear and I think that is a bit of an argument we shouldn’t get too carried away today by thinking that we know exactly what we can do. But I don't think the answer is quite nothing. I have a goofy cold-taste post that I never published and may never publish because I lost track of it. What it basically says is “ What if you'd been in that time and you had known the Industrial Revolution was coming or you had thought it might be?” You would ask yourself what you could be doing. One answer you might have given is you might have said, “ Well, gosh, if this happens, whatever country it happens in might be disproportionately influential. What would be great is if I could help transform the thinking and the culture in that country to have a better handle on human rights and more value on human rights and individual liberties and a lot of other stuff–– and gosh, it kind of looks like people were doing that and it looks like it worked out.” So this is the Enlightenment.", "I even give this goofy example–– I could look it up and it's all kind of a trollish post. But the example is someone's thinking, “ Hey, I'm thinking about this esoteric question about what a government owes to its citizens ” or, “When does a citizen have a right to overthrow a government or when is it acceptable to enforce certain beliefs and not? ” Then the other person in the dialogue is just like, “This is the weirdest, most esoteric question. Why does this matter? Why aren't you helping poor people? ” But these are the questions that the Enlightenment thinkers were thinking about. I think there is a good case that they came up with a lot of stuff that really shaped the whole world since then because of the fact that the UK became so influential and really laid the groundwork for a lot of stuff about the rights of the governed, free speech, individual rights, and human rights.", "Then I go to the next analogy. It’s like we're sitting here today and someone is saying, “ Well, instead of working on global poverty, I'm studying this esoteric question about how you get an AI system to do what you want it to do instead of doing its own thing. I think it's not completely crazy to see them as analogous.” Now, I don't think this is what the Enlightenment thinkers are actually doing. I don't think they were saying this could be the most important millennium, but it is interesting that it doesn't look like there was nothing to be had there. It doesn't look like there's nothing you could have come up with. In many ways, it looks like what the Enlightenment thinkers were up to had the same esoteric, strange, overly cerebral feel at the time and ended up mattering a huge amount. So it doesn't feel like there's zero precedent either.", "Dwarkesh Patel", "Maybe I'm a bit more pessimistic about that because I think the people who are working on individual rights frameworks weren’t anticipating an industrial revolution. I feel like the type of person who’d actually anticipate the industrial revolution would have a political philosophy that was actually probably a negative given, you know… Karl Marx. If you saw something like this happening, I don’t think it would be totally not obvious.", "Holden Karnofsky", "I mean, I think my basic position here is that I'm not sitting here highly confident. I'm not saying there's tons of precedent and we know exactly what to do. That's not what I believe. I believe we should be giving it a shot. I think we should be trying and I don't think we should be totally defeatist and say, “ Well, it's so obvious that there's never anything you could have come up with throughout history and humans have been helpless to predict the future.” I don't think that is true. I think that's enough of an example to kind of illustrate that. I mean, gosh, you could make the same statement today and say, “ Look, doing research on how to get AI systems to behave as intended is a perfectly fine thing to do at any period in time. ” It's not like a bad thing to do. I think John Locke was doing his stuff because he felt it was a good thing to do at any period in time, but the thing is that if we are at this crucial period of time, it becomes an even better thing to do and it becomes magnified to the point where it could be more important than other things.", "Dwarkesh Patel", "The one reason I might be skeptical of this theory is that I could say, “ Oh, gosh, if you look throughout history, people were often convinced that they were living in the most important time,” or at least an especially important time. If you go back, everybody could be right about living in the most important time. Should you just have a very low prior that anybody is right about this kind of thing? How do you respond to that kind of logic?", "Holden Karnofsky", "First of all, I don't know if it’s really true that it's that common for people to say that they're living in the most important time in history. This would be an interesting thing to look at. But just from stuff I've read about past works on political philosophy and stuff, I don't exactly see this claim all over the place. It definitely happens. It's definitely happened . I think a way of thinking about it is that there are two reasons you might think you are especially important. One is that you actually are and you’ve made reasonable observations about it. Another is that you want to be or you want to think you are so you're self-deceiving. So over the long sweep of history, a lot of people will come to this conclusion for the second reason. Most of the people who think they're the most important will be wrong. So that's all true. That certainly could apply to me and it certainly could apply to others. But I think that's just completely fine and completely true. I think we should have some skepticism when we find ourselves making these kinds of observations. At the same time, I think it would be a really bad rule or a really bad norm that every time you find yourself thinking the stakes are really high or that you're in a really important position, you just decide to ignore the thought. I think that would be very bad.", "If you imagine a universe where there actually are some people who live in an especially important time, and there are a bunch of other people who tell stories to themselves about whether they do, how would you want all those people to behave? To me, the worst possible rule is that all those people should just be like, “ No, this is crazy, and forget about it.” I think that’s the worst possible rule because the people who are living at the important time will then do the wrong thing. I think another bad rule would be that everyone should take themselves completely seriously and just promote their own interests ahead of everyone else's. A rule I would propose over either of them is that all these people should take their beliefs reasonably seriously and try to do the best thing according to their beliefs, but should also adhere to common sense standards of ethical conduct and not do too much “ends justify the means’ reasoning.” It's totally good and fine to do research on alignment, but people shouldn't be telling lies or breaking the law in order to further their ends. That would be my proposed rules. When we have these high stake, crazy thoughts, we should do what we can about them and not go so crazy about them that we break all the rules of society. That seems like a better rule. That's the rule I'm trying to follow.", "Dwarkesh Patel", "Can you talk more about that? If for some reason, we can be convinced that the expected value calculation was immense, and you had to break some law in order to increase the odds that the AI goes well, I don't know how hypothetical this would be. Is it just that you're not sure whether you would be right and so you’d just want to err on the side of caution?", "Holden Karnofsky", "Yeah, I'm really not a fan of ends justify the means’ reasoning. The thing that looks really, really bad is people saying it's worth doing horrible things and coercing each other and using force to accomplish these things because the ends we're trying to get to are more important than everything else. I'm against that stuff. I think that stuff looks a lot worse historically than people trying to break the future and do helpful things. So I see my main role in the world as trying to break the future and do helpful things. I can do that without doing a bunch of harmful, common sense, unethical stuff. Maybe someday there will be one of these intense tradeoffs. I haven't really felt like I've run into them yet. If I ever ran into one of those intense tradeoffs, I'd have to ask myself how confident I really am. The current level of information and confidence I have is, in my opinion, not enough to really justify the means.", "Dwarkesh Patel", "Okay, so let's talk about the potential implausibility of continued high growth. One thing somebody might think is, “OK, maybe 2 percent growth can't keep going on forever, but maybe the growth slows down to point five percent a year.” As you know, small differences in growth rates have big effects on the end result. So by the point that we've exhausted all the possible growth in the galaxy, we'll probably be able to expand to other galaxies. What’s wrong with that kind of logic where there's point five percent growth that still doesn't imply a lock-in or would it be weird if that implied a lock-in?", "Holden Karnofsky", "I think we might want to give a little bit more context here. One of the key arguments of the most important century is that it's just part of one of the arguments that we live in a strange time. I'm also arguing that the current level of economic growth just looks too high to go on for another 10,000 years or so. One of the points I make, which is a point I got from Robin Hanson , is that if you just take the current level of economic growth and extrapolate it out 10,000 years, you end up having to conclude that we would need multiple stuff that is worth as much as the whole world economy is today–– multiple times that per atom in the galaxy. If you believe we can't break the speed of light, then we can't get further than that. We can't get outside the galaxy. So in some sense, we run out of material. So you're saying, “ Alright but what if the growth rate falls to 0.5 percent?” Then I'm kind of like, “ OK, well, so the growth rate now I ballparked it in the post is around 2 percent. That's the growth rate generally in the most developed countries. Let's say it falls to 0.5 percent.” Just like for how long? Did you calculate how long it would take to get to the same place?", "Dwarkesh Patel", "I think it was like 25,000 years. 0.5 percent gets you like one world-size economy. It's 10,000 versus 25,000, but 25,000 is the number of light years between us and like the next galaxy.", "Holden Karnofsky", "That doesn't sound right. I don't think this galaxy calculation is very close. There's also going to be a bunch of dead space. As you get to the outer reach of the galaxy, there's not going to be as much there. That doesn't sound super right, but let's just roll with it. I mean, sure, let's just say that you had 2 percent today and then growth went down to 0.5 percent and stayed there forever. I'm pretty sure that's still too big. I'm pretty sure you're still going to hit limits in some reasonable period of time, but that would still be weird on its own. It would just be like, “ Well, we lived in the 200-year period when we had 2 percent growth and then we had 0.5 percent growth forever. ” That would still make this a kind of an interesting time. It would be the most dynamic, fastest-changing time in all of human history. Not by a ton, but it's also like you pick the number that's the closest and the most perfectly optimized here. So if it went down to 0.1 percent or even down to 0.01 percent, then it would take longer to run out of stuff. But it would be even stranger with the 2 percent versus the 0.01 percent. So I don't really think there's any way out of “ Gosh, this looks like this looks like it's probably going to end up looking like a very special time or a very weird time.”", "Dwarkesh Patel", "This is not worth getting hung up on, but from that perspective, then the century where we had 8 percent growth because of the Industrial Revolution–– would you say that maybe that's the most important century?", "Holden Karnofsky", "Oh, sure. Yeah, totally. No, the thing about rapid growth is it’s not supposed to be on its own. By growth standards, this century looks less special than the last one or two. It's saying that the century is one of a handful or I think when I say “ One of 80 of the most significant centuries,” or something by economic growth standards. That's only one argument, but then I look at a lot of other ways in which this century looks unusual. To say that something is the most important century of all time sounds totally nuts because there are so many centuries in the history of humanity, especially if you want to think about it on galactic time scales. Even once you narrow it down to 80, it's just much way less weird. If I've already convinced you using kind of non-controversial reasoning that we're one of the 80 most important centuries, it shouldn't take me nearly as much further evidence to say, actually, this one might be number one out of 80 because you're starting odds are more than 1 percent. So to get you up to 10 percent or 20 percent or 30 percent doesn't necessarily require a massive update the way that it would if we're just starting from nowhere.", "Dwarkesh Patel", "I guess I'm still not convinced that just because this is a weird century, this has any implications for why or whether we should see transformative AI this century. If we have a model about when transformative AI happens, is one of the variables that goes into that “ What is the growth rate in 2080?” It just feels weird to have this as a parameter for when the specific technological development is going to happen.", "Holden Karnofsky", "It's just one argument in the series. I think the way that I would come at it is I would just say, “ Hey, look at AI systems. Look at what they're doing. Look at how fast the rate of progress is. Look at these five different angles on imagining when I might be able to do all the things humans do to advance science and technology.” Just imagine that we get there this century. Wouldn't it be crazy to have AI that could do all the things humans do to advance science and technology? Wouldn't that lead to just a lot of crazy stuff happening? There's only ever been one species in the history of the universe that we know of that can do the kinds of things humans do. Wouldn't it be weird if there were two? That would be crazy. One of them was a new one we built that could be copied at will, and run at different speeds on any hardware you have. That would be crazy. Then you might come back and say, “ Yeah, that would be crazy. This is too crazy. Like I'm ruling this out because this is too crazy. ” Then I would say, “ OK, well, we have a bunch of evidence that we live in an unusual, crazy time. ” And you actually should think that there's a lot of signs that this century is not just a random century picked from a sample of millions of centuries.", "So that's the basic structure of the argument. As far as the growth rate in zero AD, I think it matters. I think you're asking the question, why do the dynamics of growth in zero AD matter at all for this argument? I think it's because it's just a question of, “ How does economic growth work generally and what is the trend that we're on, and what happens if that trend continues?” If around zero AD growth was very low but accelerating, and if that was also true at one hundred AD and a thousand AD and negative a thousand or, you know, a thousand BC, then it starts to point to a general pattern. Growth is accelerating and maybe accelerating for a particular reason, and therefore you might expect more acceleration.", "AI Success Scenario", "Dwarkesh Patel", "Alright, let’s talk about transformative AI then. Can you describe what success looks like concretely? Are humans part of the post-transformative AI world? Are we hoping that these AIs become enslaved gods that help us create a utopia? What does the concrete success scenario look like?", "Holden Karnofsky", "I mean, I think we've talked a lot about the difficulty of predicting the future, and I think I do want to emphasize that I really do believe in that. My attitude to the most important century is not at all, “ Hey, I know exactly what's going to happen and I'm making a plan to get us through it. ” It's much more like there's a general fuzzy outline of a big thing that might be approaching us. There are maybe two or three things we can come up with that seem good to do. Everything else we think about, we're not going to know if it's good to do or bad to do. So I'm just trying to find the things that are good to do so that I can make things go a little bit better or help things go a little bit better. That is my general attitude. It's like if you were on a ship in a storm and you saw some very large, fuzzy object obscured by the clouds, you might want to steer away from it. You might not want to say, “Well, I think that is an island and I think there's probably a tiger on it. So if we go and train the tiger in the right way, blah, blah, blah, blah, blah, ” you don't want to get into that. Right? So that is the general attitude I'm taking.", "What does success look like to me? Success could look like a lot of things, but one thing success would look like to me would frankly just be that we get something not too different from the trajectory we're already on. So, in other words, if we can have systems that behaved as intended, acted as tools and amplifiers of humans, and did the things they're supposed to do. If we could avoid a world where those systems got sort of all controlled by one government or one person, we could avoid a world where that caused a huge concentration of power. If we could have a world where AI systems are just another technology that helps us do a lot of stuff, and we’d invent lots of other technologies and everything is relatively broadly distributed and everything works roughly as it's supposed to work, then you might be in a world where we continue the trend we've seen over the last couple of hundred years, which is that we're all getting richer. We're all getting more tools. We all hopefully get an increasing ability to understand ourselves, study ourselves, and understand what makes us happy, what makes us thrive.", "Hopefully, the world just gets better over time and we have more and more new ideas that thus hopefully make us wiser. I do think that in most respects, the world of today is a heck of a lot better than the world of 200 years ago. I don't think the only reason for that is wealth and technology, but I think they played a role. I think that if you'd gone back to 200 years ago and said, “Holden, how would you like the world to develop a bunch of new technologies as long as they're sort of evenly distributed and they behave roughly as intended and people mostly just get richer and discover new stuff?” I'd be like, “That sounds great!” I don't know exactly where we're going to land. I can't predict in advance whether we're going to decide that we want to treat our technologies as having their own rights. That's stuff that the world will figure out. But I'd like to avoid massive disasters that are identifiable because I think if we can, we might end up in a world where the future is wiser than we are and is able to do better things.", "Dwarkesh Patel", "The way you put it, AI enabling humans doesn't sound like something that could last for thousands of years. It almost sounds as weird as chimps saying “ What we would like is humans to be our tools .” At best, maybe they could hope we would give them nice zoos. What is the role of humans in this in this future?", "Holden Karnofsky", "A world I could easily imagine, although that doesn't mean it's realistic at all, is a world where we build these AI systems. They do what they're supposed to do, and we use them to gain more intelligence and wisdom. I've talked a little bit about this hypothetical idea of digital people–– maybe we develop something like that. Then, after 100 years of this, we've been around and people have been having discussions in the public sphere, and people kind of start to talk about whether the AIs themselves do have rights of their own and should be sharing the world with us. Maybe then they do get rights. Maybe some AI systems end up voting or maybe we decide they shouldn't and they don't. Either way, you have this kind of world where there's a bunch of different beings that all have rights and interests that matter. They vote on how to set up the world so that we can all hopefully thrive and have a good time. We have less and less material scarcity. Fewer and fewer tradeoffs need to be made. That would be great. I don't know exactly where it ends or what it looks like. But I don't know. Does anything strike you as unimaginable about that?", "Dwarkesh Patel", "Yeah, the fact that you can have beings that can be copied at will, but also there's some method of voting..", "Holden Karnofsky", "Oh, yeah. That's a problem that would have to be solved. I mean, we have a lot of attention paid to how the voting system works, who gets to vote, and how we avoid things being unfair. I mean, it's definitely true that if we decided there was some kind of digital entity and it had the right to vote and that digital entity was able to copy itself–– you could definitely wreak some havoc right there. So you'd want to come up with some system that restricts how many copies you can make of yourself or restricts how many of those copies can vote. These are problems that I'm hoping can be handled in a way that, while not perfect, could be non-catastrophic by a society that hasn't been derailed by some huge concentration of power or misaligned systems.", "Dwarkesh Patel", "That sounds like that might take time. But let's say you didn't have time. Let's say you get a call and somebody says, “ Holden, next month, my company is developing or deploying a model that might plausibly lead to AGI.” What does Open Philanthropy do? What do you do?", "Holden Karnofsky", "Well, I need to distinguish. You may not have time to avoid some of these catastrophes. A huge concentration of power or AI systems don't behave as intended and have their own goals. If you can prevent those catastrophes from happening, you might then get more time after you build the AIs to have these tools that help us invent new technologies and help us perhaps figure things out better and ask better questions. You could have a lot of time or you could figure out a lot in a little time if you had those things. But if someone said–– wait how long did you give me?", "Dwarkesh Patel", "A month. Let's say three months. So it's a little bit more.", "Holden Karnofsky", "Yeah, I would find that extremely scary. I kind of feel like that's one of the worlds in which I might not even be able to offer an enormous amount. My job is in philanthropy (and a lot of what philanthropists do historically or have done well historically), is we help fields grow. We help do things that operate on very long timescales. So an example of something Open Philanthropy does a lot of right now is we fund people who do research on alignment and we fund people who are thinking about what it would look like to get through the most important century successfully. A lot of these people right now are very early in their careers and just figuring stuff out. So a lot of the world I picture is like 10 years from now, 20 years from now, or 50 years from now. There's this whole field of expertise that got support when traditional institutions wouldn't support it. That was because of us. Then you come to me and you say, “ We've got one week left. What do we do? ” I’d be like, “ I don't know. We did what we could do. We can’t go back in time and try to prepare for this better. ” So that would be an answer. I could say more specific things about what I'd say in the one to three-month time frame, but a lot of it would be flailing around and freaking out, frankly.", "Dwarkesh Patel", "Gotcha. Okay. Maybe we can reverse the question. Let's say you found out that AI actually is going to take much longer than you thought, and you have more than five decades. What changes? What are you able to do that you might not otherwise be able to do?", "Holden Karnofsky", "I think the further things are, the more I think it's valid to say that humans have trouble making predictions on long time frames. The more I’d be interested in focusing on other causes of very broad things we do, such as trying to grow the set of people who think about issues like this, rather than trying to specifically study how to get AI systems like today's to behave as intended. So I think that's a general shift, but I would say that I tend to feel a bit more optimistic on longer time frames because I do think that the world just isn't ready for this and isn't thinking seriously about this. A lot of what we're trying to do at Open Philanthropy is create support that doesn't exist in traditional institutions for people to think about these topics. That includes doing AI alignment research. That also includes thinking through what we want politically, and what regulations we might want to prevent disaster. I think those are a lot of things. It's kind of a spectrum. I would say, if it's in three months, I would probably be trying to hammer out a reasonable test of whether we can demonstrate that the AI system is either safe or dangerous.", "If we can demonstrate it's dangerous, use that demonstration to really advocate for a broad slowing of AI research to buy more time to figure out how to make it less dangerous. I don't know that I feel that much optimism. If this kind of AI is 500 years off, then I'm kind of inclined to just ignore it and just try and make the world better and more robust, and wiser. But I think if we've got 10 years, 20 years, 50 years, 80 years, something in that range, I think that is kind of the place where supporting early careers and supporting people who are going to spend their lives thinking about this would be beneficial. Then we flash forward to this crucial time and there are a lot more people who spent their lives thinking about it. I think that would be a big deal.", "Dwarkesh Pate l", "Let's talk about the question of whether we can expect the AI to be smart enough to disempower humanity, but dumb enough to have that kind of goal. When I look out at smart people in the world, it seems like a lot of them have very complex, nuanced goals that they've thought a lot about what is good and how to do good.", "Holden Karnofsky", "A lot of them don't.", "Dwarkesh Pate l", "Does that overall make you more optimistic about AIs?", "Holden Karnofsky", "I am not that comforted by that. This is a very, very old debate in the world of AI alignment. Eliezer Yudkowsky has something called the orthogonality thesis . I don't remember exactly what it says, but it's something like “You could be very intelligent about any goal. You could have the stupidest goal and be very intelligent about how to get it. ” In many ways, a lot of human goals are pretty silly. A lot of the things that make me happy are not things that are profound or wonderful. They're just things that happen to make me happy. You could very intelligently try to get those things, but it doesn't give me a lot of comfort. I think basically my picture of how modern AI works is that you're basically training these systems by trial and error. You're basically taking an AI system, and you're encouraging some behaviors, while discouraging other behaviors. So you might end up with a system that's being encouraged to pursue something that you didn't mean to encourage. It does it very intelligently. I don't see any contradiction there. I think that if you were to design an AI system and you were kind of giving it encouragement every time it was getting more money into your bank account, you might get something that's very, very good at getting money into your bank account to the point where you'd going to disrupt the whole world to do that. You will not automatically get something that thinks, “Gosh, is this a good thing to do?” I think with a lot of human goals, there's not really a right answer about whether our goals actually make sense. They're just the goals we have.", "Dwarkesh Patel", "You've written elsewhere about how moral progress is something that's real, that's historically happened, and it corresponds to what actually counts as moral progress. Do you think there's a reason to think the same thing might happen with AI? Whatever the process is that creates moral progress?", "Holden Karnofsky", "I kind of don't in particular. I've used the term moral progress as just a term to refer to changes in morality that are good. I think there has been moral progress, but I don't think that means moral progress is something inevitable or something that happens every time you are intelligent. An example I use a lot is attitudes toward homosexuality. It's a lot more accepted today than it used to be. I call that moral progress because I think it's good. Some people will say, “ Well, you know, I don't believe that morality is objectively good or bad. I don't believe there is any such thing as moral progress. I just think things change randomly.”", "That will often be an example I'll pull out and I'll say, “ But do you think that was a neutral change? ” I just think it was good. I think it was good, but that's not because I believe there's some underlying objective reality. It's just my way of tagging or using language to talk about moral changes that seem like they were positive to me. I don't particularly expect that an AI system would have the same evolution that I've had in reflecting on morality or would come to the same conclusions I've come to or would come up with moralities that seem good to me. I don't have any reason to think any of that. I do think that historically there have been some cases of moral progress.", "Dwarkesh Patel", "What do you think is the explanation for historical progress?", "Holden Karnofsky", "One thing that I would say is that humans have a lot in common with each other. I think some of history contains cases of humans learning more about the world, learning more about themselves, and debating each other. I think a lot of moral progress has just come from humans getting to know other humans who they previously were stereotyping and judging negatively and afraid of. So I think there's some way in which humans learning about the world and learning about themselves leads them to have kind of conclusions that are more reflective and more intelligent for their own goals. But, if you brought in something into the picture that was not a human at all, it might be very intelligent and reflective about its goals, but those goals might have zero value from our point of view.", "Dwarkesh Patel", "Recent developments in AI have made many people think that AI could happen much sooner than they otherwise thought. Has the release of these new models impacted your timelines?", "Holden Karnofsky", "Yeah, I definitely think that recent developments in AI have made me a bit more freaked out. Ever since I wrote The Most Important Century series and before that, there were years when Open Philanthropy was very interested in AI risk, but it's become more so as we've seen progress in AI. I think what we're seeing is we're seeing these very generic, simple systems that are able to do a lot of different tasks. I think people are interested in this. There are a lot of compilations of what GPT-3 is–– a very simple language model that, by the way, my wife and brother-in-law both worked on. This very simple language model just predicts the next word it's going to see in a stream of text. People have gotten it to tell stories. People got similar (though not identical) models to analyze and explain jokes.", "People have gotten it to play role-playing games, write poetry, write lyrics, answer multiple-choice questions, and answer trivia questions. One of the results that I found most ridiculous, strange and weird was this thing called Minerva , where people took one of these language models and with very little special intervention, they got it to do these difficult math problems and explain its reasoning and get them right about half the time. It wasn't really trained in a way that was very specialized for these math problems, so we just see AI systems having all these unpredictable human-like abilities just from having this very simple training procedure. That is something I find kind of wild and kind of scary. I don't know exactly where it's going or how fast.", "Dwarkesh Patel", "So if you think transformative AI might happen this century, what implications does that have for the traditional global health and well-being stuff that OpenPhilanthropy does? Will that have persistent effects of AI if it gets aligned? Will it create a utopia for us anyway?", "Holden Karnofsk y", "I don't know about utopia. My general take is that anything could happen. I think my general take on this most important century stuff, and the reason it's so important is because it's easy to imagine a world that is really awesome and is free from scarcity and we see more of the progress we've seen over the last 200 years and we end up in a really great place. It's also easy to imagine a horrible dystopia. But my take is that the more likely you think all this is, the more likely you think transformative AI is, the more you should think that that should be the top priority, that we should be trying to make that go well instead of trying to solve more direct problems that are more short term. I'm not an extremist on this. So, OpenPhilanthropy does both.", "OpenPhilanthropy works on speculative far-off future risks and OpenPhil also does a bunch of more direct work. Again, we do direct and recommend a lot of money to give to those top charities, which do things like distributing bed nets in Africa to help prevent malaria and treat children for intestinal parasites. OpenPhilanthropy does a lot of advocacy for more money going to foreign aid or for better land use policies to have a stronger economy. We do a bunch of scientific research work that is more aimed at direct medical applications, especially in poor countries. So I support all that stuff. I'm glad we're doing it. It's just a matter of how real and how imminent do you think this transformative AI stuff is? The more real and more imminent, the more of our resources should go into it.", "Dwarkesh Pate l", "Yeah, that makes sense to me. I'm curious, whatever work you do elsewhere, do those still have persistent effects after transformative AI comes? Or do you think they’ll basically wash out in comparison to the really big stuff?", "Holden Karnofsky", "I mean, I think in some sense, the effects are permanent in the sense that if you cause someone to live a healthier, better life, that's a significant thing that happened. Nothing will ever erase that life or make that life unimportant, but I think in terms of the effects on the future, I do expect it mostly to wash out. I expect that mostly whatever we do to make the world better in that way will not persist in any kind of systematic, predictable manner past these crazy changes. I think that's probably how things look pre and post-industrial revolution. There are probably some exceptions, but that's my guess.", "Competition, Innovation , & AGI Bottlenecks", "Dwarkesh Patel", "You've expressed skepticism towards the competition frame around AI or you try to make capabilities go faster for the countries or companies you favor most. But elsewhere, you've used the “innovation as mining metaphor,” and maybe you can explain that when you're giving the answer. It seems like this frame should imply that the second most powerful AI company is probably right on the heels of the first most powerful. So if you think the first most powerful is going to take safety more seriously, you should try to boost them. How do you think about how these two different frames interact?", "Holden Karnofsky", "I think it's common for people who become convinced that AI could be really important to just jump straight to, “ Well, I want to make sure that people I trust build it first.” That could mean my country, that could mean my friends, people I'm investing in. I have generally called that the competition frame which is “I want to win a competition to develop AI” , and I've contrasted it with a frame that I also think is important, called the caution frame, which is that we need to all work together to be careful to not build something that spins out of control and has all these properties and behaves in all these ways we didn't intend. I do think that if we do develop these very powerful AI systems, we're likely to end up in a world where there are multiple players trying to develop it and they're all hot on each other's heels. I am very interested in ways for us all to work together to avoid disaster as we're doing that. I am maybe less excited than the average person who first learns about this is and is like “I’m picking the one I like best and helping them race ahead.”", "Dwarkesh Patel", "Although I am someone interested in both, if you take the innovation as mining metaphor seriously, doesn't that imply that actually the competition is really a big factor here?", "Holden Karnofsky", "The innovation mining metaphor is from another bit of Cold Takes . It's an argument I make that you should think of ideas as being somewhat like natural resources in the sense of once someone discovers a scientific hypothesis or once someone writes a certain great symphony, that's something that can only be done once. That's an innovation that can only be done once. So it gets harder and harder over time to have revolutionary ideas because the most revolutionary, easiest-to-find ideas have already been found. So there's an analogy to mining. I don't think it applies super importantly to the AI thing because all I'm saying is that success by person one makes success by person two harder. I'm not saying that it has no impact or that it doesn't speed things up. Just to use a literal mining metaphor, let's say there's a bunch of gold in the ground. It is true that if you rush and go get all that gold, it'll be harder for me to now come in and find a bunch of gold. That is true. What's not true is that it doesn't matter if you do it. I mean, you might do it a lot faster than me. You might do it a lot ahead of me.", "Dwarkesh Patel", "Fair enough. Maybe one piece of skepticism that somebody could have about transformative AI is that all this is going to be bottlenecked by the non-automatable steps in the innovation sequence. So there won't be these feedback loops that speed up. What is your reaction?", "Holden Karnofsky", "I think the single best criticism and my biggest point of skepticism on this most important century stuff is the idea that you could build an AI system that's very impressive and could do pretty much everything humans can do. There might be one step that you still have to have humans do, and that could bottleneck everything. Then you could have the world not speed up that much and science and technology not advance that fast because they are doing almost everything. But humans are still slowing down this one step or the real world is slowing down one step. Let's say real-world experiments to invent new technologies take how long they take. I think this is the best objection to this whole thing and the one that I'd most like to look into mor e. I do ultimately think that there's enough reason to think that if you had AI systems that had human-like reasoning and analysis capabilities, you shouldn't count on this kind of bottleneck causing everything to go really slow.", "I write about that in this piece called Weak Point in the Most Important Century: Full Automation . Part of this is how you don't need to automate the entire economy to get this crazy growth loop. You can automate just a part of it that specifically has to do with very important tech like energy and AI itself. Those actually seem, in many ways, less bottlenecked than a lot of other parts of the economy. So you could be developing better AI algorithms and AI chips, manufacturing them, mostly using robots, and using those to come up with even better designs. Then you could also be designing more and more efficient solar panels, and using those to collect more and more energy to power your eyes. So a lot of the crucial pieces here just actually don't seem all that likely to be bottlenecked. You can be at the point where you have something that has the ability to have creative new scientific hypotheses the way a human does, which is a debate over whether we should ever expect that and when. Once you have that, I think you should figure that there are just a lot of ways to get around all your other bottlenecks because you have this potentially massive population of thinkers looking for them. So an example is that you could, with enough firepower, enough energy, enough AI, and enough analysis, you could probably find a way to simulate a lot of the experiments you need to run, for example.", "Dwarkesh Patel", "Gotcha. Now, it seems like the specific examples you used of energy and AI innovations are probably the hardest things to automate, given the fact that those are the ones that humanity's only gotten around to advancing most recently. Can you talk us through the intuition about how those might be easier?", "Holden Karnofsky", "I think some of the stuff that might be hardest to automate would just be stuff that in some sense doesn't have anything to do with software or capabilities. So an example of something that might just be extremely hard to automate is trust, making a business deal, or providing care for someone who's sick. It might just be that even if an AI system has all the same intellectual capabilities as a human, and can write poetry just as well, have just as many ideas, and have just as good a conversation, it just doesn't look like a human. So people don't want that. Maybe you can create a perfect representation of a human on a screen, but it's still on the screen. In general, I see the progress in AI as being mostly on the software front, not the hardware front. So AIs are able to do a lot of incredible things with language, things with math, and things with board games. I also wouldn't be surprised if they could write hit music in the next decade or two.", "But these people really are not making the same kind of progress with robotics. So weirdly, a task that might be among the hardest to automate is the task of taking this bottle of water and taking off the cap. Because I have this hand that is just well designed for that. Well, it's clearly not designed for that, but it's like these hands can do a lot of stuff. We aren't seeing the same kind of progress there. So I think there are a lot of places where AI systems might have the kind of brains that can do roughly everything human brains can. There's some other reason they can't do some key economic tasks, but I think these are not the tasks I see likely to bottleneck the R&D as much.", "Dwarkesh Patel", "Gotcha.", "Holden Karnofsky", "This is an argument I make in one of my more obscure Cold Takes posts. I say that AI that could actually take everyone's job, like every human's job, might be a lot harder than AI and could radically transform the galaxy via new technology. It might be easier to take a scientist's job than a teacher's job or a doctor's job because the teachers and the doctors are regulated. People might just say, “ I want human teachers. I don't want an AI teacher. ” Whereas you can sit there in your lab with your scientists and find new theories that change the world. So some of this stuff, I think, is very counterintuitive, but I can imagine worlds where you get really wacky stuff before you get self-driving cars out on the road just because of the way the regulations work.", "Lock-ins and Weak Points", "Dwarkesh Patel", "Gotcha. OK, let's talk about another weak point or the one you identify as a weak point. Lock-in. What do you think are the odds of lock-in given transformative AI?", "Holden Karnofsky", "So lock-in is a term I use to talk about the possibility that we could end up with a very stable civilization. I talk about that in another post. It's called Weak Point in the Most Important Century . I wrote posts about the weakest points in the series and the idea is that throughout history so far, when someone becomes in charge of a government and they're very powerful and they're very bad, this is generally considered to be temporary. It’s not going to go on forever. There are a lot of reasons the world is dynamic and the world tends to just not stay that way completely. The world has changed a lot throughout history. It's kind of a dumb thing to say, but I'll get to why this might be important. If someone is running a country in a really cruel, corrupt way, at some point they're going to get old and die and someone else is going to take over. That person will probably be different from them.", "Furthermore, the world is changing all the time. There are new technologies, new things are possible, there's new ideas. The most powerful country today might not be the most powerful tomorrow. The people in power today might not be the ones in power tomorrow. I think this gets us used to the idea that everything is temporary, and everything changes. A point I make in the Most Important Century series is that you can imagine a level of technological development where there just aren't new things to find. There isn't a lot of new growth to have. People aren't dying because it seems like it should be medically possible for people not to age or die. So you can imagine a lot of the sources of dynamism in the world actually going away if we had enough technology. You could imagine a government that was able to actually serve everyone, which is not something you can do now, with a dictator who actually doesn't age or die, who knows everything going on, who's able to respond to everything. You could imagine that world just being completely stable.", "I think this is a very scary thought. It's something we have to be mindful of–– if the rate of technological progress speeds up a lot, we could quickly get to a world that doesn't have a lot more dynamism and is a lot more stable. What are the odds of this? I don't know. It's very hard to put a probability on it. But I think if you imagine that we're going to get this explosion in scientific and technological advancement, you have to take pretty seriously the idea that we could end up hitting a wall and there could not be a lot of room for more dynamism. We could have these very stable societies. What does seriously mean in terms of probability? I don't know–– a quarter, a third, a half, something like that–– I'm making up numbers. I think it's serious enough to think about as something that affects the stakes of what we're talking about.", "Dwarkesh Patel", "Gotcha. Are you concerned about lock-in just from the perspective of locking in a negative future, or do you think that might intrinsically be bad to lock in any kind of future? If you could press a button right now and lock in a reasonably positive future that won't have any dynamism, or one where dynamism is guaranteed but net expected positive is not, how would you make that determination?", "Holden Karnofsky", "Well, I don't think a lot about what I would do with unrealistic buttons where I have crazy amounts of power that I'll never have and shouldn't have. I think of lock-in by default as mostly a bad thing. I feel like we’d want to at least kind of preserve optionality and have a world where it's not just one person running the show with their values set up the way they want forever. I think of it mostly that way. I can imagine some future world where civilization's been around long enough and we've learned what there is to learn, and we know what a good world looks like, so most people feel pretty confident about that, and they're right to feel confident. Maybe then, lock-in's wouldn’t be so bad. But I do mostly think of lock-in as a bad thing. I also imagine that you could lock in some things about the world in order to avoid locking in others. So I can imagine if you had this enormous amount of power over how the world works––some of this is more explained in my digital people series–– but if you had this kind of world where you completely control the environment, you might want to lock in the fact that you should never have one person with all the power. That might be a thing you might want to lock in, and that prevents other kinds of lock-in.", "Dwarkesh Patel", "Do you worry about AI alignment as being a form of lock-in? In some sense, if the goal of the research is to prevent drift from human values, then you might just be locking in values that are suboptimal.", "Holden Karnofsky", "Yeah, I mostly think of AI alignment as just trying to avoid a really bad thing from happening. What we don't want to happen is we have some AI system we thought we were designing to help us , but in reality, we're actually designing it to do some extremely random thing. Again, these systems work by trial and error, by encouragement, discouragement, or positive and negative reinforcement. So we might have not even noticed that through the pattern of reinforcement we were giving, we trained some system to want to put as much money as possible into one bank account, gain as much power as possible, or control as much energy as possible, or something like that. Maybe it’d set its own reward number, its own score to the highest possible number. I think that would be a form of lock-in if we had systems more powerful than humans that had these kinds of random goals.", "That would be like locking in a kind of future that is not related to the things that humans value and care about. That's an example of a future I think would be really bad. Now, if we got these systems to behave as intended, we still might have problems because we might have humans doing really stupid things and locking in really bad futures. I think that's an issue too. I feel reasonably comfortable, though not 100% confident, saying that we'd like to avoid that just like slip-ups. We'd like to avoid having these systems that have these random goals we gave them by accident. They're very powerful, and they're better at setting up the world than we are. So we get this world that's just doing this random thing that we did by accident. I think that's a thing worth avoiding.", "Predicting the Future", "Dwarkesh Patel", "What is your biggest disagreement with Will MacAskill's new book on long-termism ?", "Holden Karnofsky", "I like Will's book. I think it's worth reading Will MacAskill's book about how the future could be very large and very important. In my opinion, if you want to talk about the long-run future and how to make the long-run future go well, you're starting from a place of, by default, “almost nothing I can do will actually make sense.” I do really believe it's hard to understand the long-run future, and it's hard to make specific plans about it. So I would say that compared to Will, I am very picky about which issues are big enough and serious enough to actually pay attention to. I feel the issue of AI would be transformative enough. It looks likely enough that it'll be soon. If it's soon, that means there might actually be things we can do that have predictable effects. I think this misaligned AI thing is a real threat. The way people design AI systems today could be really bad. I am ready to put some resources into preventing this, but that's kind of crossing my threshold. Most things don't. So if you make a list of ways to make the next million years go well, I'll look at most of them and be like, “ I don't know. I don't really believe in this. ” I wouldn't really invest in this. I think Will is a bit broader than I am in a sense. He's interested in more things, and I am pickier than he is because I think it's so hard to know what's really going to affect the long-run future that I'm just looking for a really short list of things that are worth paying special attention to.", "Dwarkesh Patel", "Is there a specific thing that he points out in the book you think would be hard to grapple with?", "Holden Karnofsky", "I don't remember super well. The book is a really broad survey of lots of stuff. An example I might give is he talks about the risk of stagnation, for example. The risk that growth might just stop or growth might slow to very low levels. That implies that what we should be trying to do is make sure we continue to innovate and continue to have growth, but then there's other parts of the book that make it sound like we shouldn't move too fast and we shouldn't innovate too much because we don't want to get to our future before we've like kind of achieved some sort of civilizational maturity beyond what we have now to decide what we want that future to look like. We don't want to build these powers before we have a better idea of what to do with them. So I think these are examples where I'm just like, “ Gosh, I don't know. It could be good to have more growth. It could be bad to have less growth. It could be that stagnation is a big threat. It could be that building powerful technologies too fast is a big threat. ” I just don't really know. I'll tell you what I'm thinking about. I'm thinking about AI because I think it's a big enough deal and likely enough and that we've got enough traction on some of the major risks.", "Dwarkesh Patel", "Right, right. When I look throughout history, it often seems like people who predict long-term trends are too pessimistic. In the 70s, you might have been too pessimistic about the ability to find more oil or feed a growing population because you couldn't have predicted the technological breakthroughs that might have made these things possible. Does this inform some sort of vague optimism about the future for you with regards to AI or not?", "Holden Karnofsky", "I think historically, people have been overly pessimistic about future technologies. I think by default, the picture with AI looks really scary. It just looks like it would be really easy to get a bad future in a lot of different ways if we just didn't move cautiously. These two considerations balance each other out a little bit for me. I know a lot of people who believe that we're in deep, deep, enormous trouble, and this outcome where you get AI with its own goals wiping humanity off the map is almost surely going to happen. I don't believe that and this is part of the reason I don't believe it. I actually think the situation looks very challenging, very scary by default, and I think we're tending to overestimate how bad and how dire things are. So they balance out a little bit for me.", "Dwarkesh Patel", "Okay, gotcha. In many of these cases, it seems like it would be impossible to see the positive scenario come about. For example, if you were forecasting population in the 70s, is there some reasonable method by which you would have predicted this was not going to lead to some massive famine that kills a billion people? Or would that have been your focus in the 70s if Open Philanthropy was a thing back then?", "Holden Karnofsky", "I think it's really hard to know how “knowable” the future was in the past and what that means for today. I do think that when you look back at people trying to predict the future in the past, it just looks deeply unserious. You could say that future people will say the same about us. I'm sure they'll think we look less serious than they are, but I think there's a difference. I really do think there haven’t been attempts to rigorously make predictions about the future historically. I don't think it's obvious that people were doing the best they can and that we can't do better today. So this population is an example. It doesn't seem necessarily true to me that you couldn't have said “Gosh, the population has been going up for a while now and people keep inventing new ways to come up with more resources. Maybe that will keep happening.”", "I'm just really not convinced you couldn't have said that. I'm definitely not convinced no one did say it. I think some people did say it. So I think I'm hesitant to get too defeatist just from the fact that some people were wrong about the future in the past. I think it's hard to know if there was really no way to know or if they just weren't trying very hard.", "Dwarkesh Patel", "One thing you just said a minute ago was that we are better at making predictions than people in the past were. So that alone should make us more optimistic about what we need to predict in the future.", "Holden Karnofsky", "It's just a guess. I mean this is what society is. We have had a lot of progress on all kinds of intellectual fronts. I think there has been a lot of progress on what it looks like to make good, reasonable predictions about the future. I think that's something that's happened. So I think we should be expecting ourselves to do a bit better than people did in the past and future people will probably do better than we do.", "Dwarkesh Patel", "Right. When I look at a report like Biological Anchors , I often wonder whether Asimov is just shooting the shit about screens and what you're able to do with them. Maybe he had less sources of error than this eight-step methodology where you might not even be aware that there's a ninth or tenth missing step that might make the whole thing invalid, and where many of the inputs have multiple orders of magnitude, wide confidence intervals. What do you think of that general skepticism?", "Holden Karnofsky", "I mean Biological Anchors is a very important input into my thinking, but it's not the only input. I think my views on AI timelines are a mix of A, looking at AI systems today, looking at what they did 50 years ago, looking at what they did 10 years ago, and just kind of being like, “Well, gosh, it sure looks plausible that these will be able to do all the things humans can do to advance science and technology pretty soon.” That's one input into my thinking. Another input into my thinking is what we call the semi-informative priors analysis , which is a complex report because it looks from a lot of different angles. I think you can summarize the highlights of the report as just saying that most of the effort that has ever in the history of humanity has gone into making AI because the field of AI is not very old and the economy and the amount of effort invested have gone up dramatically. I think that's a data point in favor of not being too skeptical that we could be on the cusp of transformative AI. In some sense, the world has not been trying very hard for very long. So that's an input. Another input is expert surveys when people ask AI researchers when they think AI will be able to do everything humans can. They tend to come out saying it's a few decades. That can be biased and unreliable in all kinds of ways, and all these things have their problems, but that's a data point.", "Dwarkesh Patel", "Then there's biological anchors.", "Holden Karnofsky", "Biological anchors isn’t a report I would summarize on a podcast. It's a very complex report. There are a lot of different angles it uses. There are a lot of different questions it asks. There are a lot of different numbers. However, I do think you can boil it down to some fairly simple observations. You can say that in some important sense (which could be debated and analyzed but seems true most ways you look at it), we've never built AI systems before that do as much computation per second as a human brain does. So it shouldn't be surprising that we don't have AI systems that can do everything humans do because humans are actually doing more work in their brains than a normal AI system is doing.", "However, it also looks like within this century, we probably will have AI systems that are that big. If we estimate how much it would take to train them, how much it would cost to build them, that looks like it will probably be affordable this century. Then you could just talk about all the different ways you could define this and all the different ways you could quantify that and all the different assumptions you could put in. But my bottom line is like almost any way you slice it, and however you want to define what it means for an AI brain to be as big as a human's and what it would mean to get that brain sort of trained, most angles on it suggest that it looks reasonably likely it will happen this century. That's a data point for me. That matters. So all these are data points feeding into my view.", "Dwarkesh Patel", "Okay. So I'm stealing this from Eliezer who asked on Twitter, “Has there literally ever in the entire history of AI been any case of anybody successfully calling the development timing of literally any novel AI capability using a bio-anchored or bio-inspired calculation ? ” He has very complex sentences.", "Holden Karnofsky", "I saw some discussion of this on his Facebook and I think the answer might be yes. However, I mostly want to attack the premise. I just want to say that there haven't been a lot of cases of people predicting AI milestones with great precision and that's also not what I'm trying to do. A lot of what I'm trying to say is “Gosh, in this century, it looks more likely than not that we'll get something hugely transformative.” He's asking about some history of AI that's like a few decades old and there haven’t even been a lot of people trying to make predictions. A lot of the predictions have been way more narrow and specific than that. So I mostly think that this isn't a very important or informative question. I think all the work he's doing… has there ever been an example of someone using the kind of reasoning Eliezer is using to predict the end of the world or something ?", "That’s what he's predicting. So I mostly just want to challenge the premise and say, “ Look, we're not working with a lot of sample size here. This isn’t some big, well-developed field where people have tried to do the exact same thing I'm trying to do 25 times and failed each time. ” This is mostly people in academia trying to advance AI systems. They don't try to predict when AI systems can do much and do what? We're not working with a lot here. We have to do the best we can, make our best guess, use our common sense, use our judgment, and use the angles we've got. That's my main answer. Another answer I would give is that Hans Moravec was the original biological anchors person. I think he predicted artificial general intelligence around 2020. From Eliezer's own views, it's going to look like he was unbelievably close. Maybe Eliezer believes we'll see it by 2030. I think that's plausible. So if that's what you believe, then we'll look back on that as like the greatest technology prediction ever by a lot.", "I think the answer is maybe. There was also some discussion in the comments about whether Moravec called that big progress on AI was doing well at vision by examining the retina. There was some debate about that. I think it's all very muddy. I don't think this is much of a knockdown argument against thinking about biological anchors. It is only one input into my thinking. I do think it looks kind of good for biological anchors that we have seen this deep learning revolution and we’ve seen these brute force AI training methods working really well when they didn't use to work well. This happened when AI systems started to be about the size of an insect or small animal brains within range, within a few orders of magnitude of human brains. You could call that a wild coincidence, but these numbers are probably all off by 10x, 100x, 1000x. I mean, we're talking about very important things and trying to get our best handle and our best guess. I think biological anchors looks fine so far. It doesn't look amazing, but it looks fine.", "Dwarkesh Patel", "Now, I'm sure many people have proposed that increasing progress in science, technology, and economic growth are the most compelling things to be doing instead of working on transformative AI. I just want to get your broad reaction to that first.", "Holden Karnofsky", "Sure. I think we're talking about the progress studies crowd here. I wrote a piece about this on Cold Takes called Rowing, Steering, Anchoring, Equity and Mutiny , where I discuss different ways of thinking about what it means to make the world good. I do have some sympathy for the idea that a lot of the way the world has gotten better over the last couple hundred years is just that we've gotten richer. We've had more technological capabilities, so maybe we should try and do more of that. I don't think this is a nutty thing to think. I think this is somewhat reasonable, but I feel that even a couple hundred years is not that big a percentage of the history of humanity.", "I wrote a series called Has Life Gotten Better? that asks what the whole graph of quality of life looks like over the course of humanity. There is precedent for technological development seeming to make things worse. That's what it looks like happened in the agricultural revolution so I have some sympathy for saying, “Hey, this thing has been good for 200 years, let's do more of it, ” but I don't think it's the tightest, most conclusive argument in the world. I think we do have some specific reasons to believe that developing some particular new technologies, not only AI, but also potentially bioweapons, could just be catastrophically dangerous. I think Toby Ord uses the analogy of humanity being like a child who's becoming an adolescent. It's like it's great to become stronger up to a certain point. That's fun. That feels good, but then at a certain point, you're strong enough to really hurt yourself and really get yourself in trouble, or maybe strong enough that you don't know your own strength. I think there's a pretty good case that humanity is reaching that point.", "I think we're reaching the point where we could have a nuclear war or a bioweapon or AI systems that really change the world forever. So it might have made sense 300 years ago when we were all struggling to feed ourselves to say, “ Hey, we want more power. We want more technology. We want more abilities. ” Today, I think we're starting to enter the gray area. We're starting to enter the gray zone, or maybe we should slow down a little bit, and be a little bit more careful. I'm not saying to literally slow down, but I'm talking about priorities. I would rather look at what I think are dramatically neglected issues that might affect all of humanity's future and at least do the best we can to have a handle on what we want to do about them. Then put my effort into throwing more juice and more gas behind this ongoing technological progress, which I think is a good thing. It's just a matter of priority.", "Dwarkesh Patel", "Okay. Do you think that the entire vision of increasing progress is doomed if ideas get harder to find?", "Holden Karnofsky", "I’ve talked about the atoms of the galaxy argument before–– I think a broader common sense take would be that the world over the last couple of hundred years has changed incredibly dramatically. We've had new exciting technologies and capabilities every year. I think a good guess would be that that hits a wall at some point. It might be the atoms of the galaxy, or it might just be something much more boring. What we seem to observe when we look at the numbers is that we are seeing a bit of stagnation, a bit of slowing down that probably will keep slowing down by default. So, yeah, I think it’s probably a good guess that the world is changing at an incredible pace that has not been the case for most of history and it probably won't be the case for the whole future.", "Choosing Which Problem To Solve", "Dwarkesh Patel", "Okay. Gotcha. I guess there are several reactions somebody could have to the idea that ideas are getting harder to find and therefore that this makes progress studies less relevant. If you look at your own blog, the entire thing is about you complaining about all this low-hanging fruit that people are not plucking. Nobody's thinking seriously about transformative AI. Nobody's thinking seriously about utopia. Nobody's thinking seriously about ideal governance. How do you square this with the general concept of ideas getting harder to find?", "Holden Karnofsky", "I think there's just a ton of really important stuff today that not enough people are paying attention to. That was true 50 years ago, and that was also true a hundred years ago. It was probably more true 50 years ago than it is today. It was probably more true a hundred years ago than 50 years ago. Gradually, the supply of amazingly important ideas that are not getting any attention is probably getting harder to do, but harder doesn't mean impossible. I do actually think that if people want to do something that's really new and world changing and dramatic and revolutionary, the worst way to do that is to go into some well-established scientific field and try to revolutionize that. I think it's better to just use your common sense and ask an important question about the world that no one's working on because it isn't a scientific field (because it isn't a field of academia, because it doesn't have institutions) and work on that. A lot of my blog does advocate that. For example, AI itself is a very well-established field, but AI alignment is a weird field that doesn't really have academic departments right now. A lot of what I'm talking about, like trying to predict what the future is going to look like, is a weird, low prestige thing that you can't easily explain to your extended family. I do think that's probably the best place to look if you want to do something that's going to be super significant or super revolutionary. That is why I've professionally been drawn to it–– looking for potential big wins that philanthropy could get.", "Dwarkesh Patel", "You’ve once said that we shouldn't follow in the footsteps of the greats to be a great person, and should instead have great achievements yourself. Isn't another way to think about that, that you should probably also ignore the advice that the optical advice you're giving or 80,000 hours gives, because those specific things that aren’t what's going to make you the next Einstein?", "Holden Karnofsky", "I mean, a fair number of your questions are part of the dialogue I see in those skeptical of the futurism world–– it feels to me like it's almost just getting unnecessarily fancy. I kind of just want to say “ Who's someone who really revolutionized the way the world thinks about stuff? ” Darwin. Now, what was Darwin doing? Was Darwin saying, “ Well, I really don't want to think about this thing because I don't believe humans are capable of thinking about that thing and I don't want to think about this topic because I think it's too hard to know the future and blah, blah, blah.” Was he doing all that stuff or was he just asking an interesting question? Was he just saying, “Hey, this thing seems important. I'm going to use my common sense and judgment to figure out how it works and I'm going to write about it.” I think some of this stuff gets too fancy.", "So I think today, if I just look at the world and I say, “What are the most important things that could matter for the world to be a good or bad place in the future? ” I've looked at a lot of possibilities. I think AI alignment is one of the leading examples and I don't see a lot of people paying attention to it, so that's what I want to work on. I think a lot of the people who have done revolutionary work that we now look back on (whom a lot of people try to imitate), weren't trying to imitate what usually worked and stay away from the stuff that wasn't. They were just asking interesting, important questions and working on them. As far as myself in 80,000 hours, I just don't feel that we're well known enough or influential enough that our advice that that stuff we're interested in is obvious is automatically, therefore not neglected . I think the stuff we're talking about is very neglected, but if you find something that's even more neglected and more important, more power to you.", "Dwarkesh Patel", "Let's say the total amount of money given to EA just increased by an order of magnitude or something. What could be possible at that point that's not possible now?", "Holden Karnofsky", "I don't know. I think even then, the amount of money we'd be working with would be really small by the standards of any kind of government budget. In general, with philanthropy, I'm always looking for things where it's like, “Can we see the creation of a field? Can we fund people to introduce new ideas? ” But, we're very small compared to the overall economy and the overall government. I think even multiplying everything by 10, that would still be true. I’m not sure exactly what we do with 10x as much money. I'm not even sure what we're going to do with the money that already exists.", "Dwarkesh Patel", "Yeah, but do you think there will be more billionaires in the future and does that imply you should be spending money faster now if you are?", "Holden Karnofsky", "In theory, we have all these models that say, “ Here's our guess at how much money is eventually going to be available, and here's our guess at how many giving opportunities will eventually be there to fund. This is our guess about what's good enough to fund and what's not. ” That's a very tentative guess. A lot of it is just really really, really imprecise stuff, but we have to have some view on it–– anyone who's spending money does. So, I mean, yeah, I do tend to assume that Sam Bankman Fried , Dustin Moskovitz and Cari Tuna are not the last billionaires who are interested in doing as much good as possible–– but it is really hard to model this stuff. Frankly, we have various rough models we've made over the years but we’ll also sometimes use our intuition and just say we fund the stuff that seems quite good and exciting and we don't fund stuff that doesn't. That's an input into our thinking too.", "$30M OpenAI Investment", "Dwarkesh Patel", "Gotcha. How do you think about the risk that some of your giving might have negative impacts? People have brought this up in the context of your 30 million dollar investment in OpenAI, but in all sorts of context, especially when you're talking about political advocacy, people might think that the thing you do has negative side effects that counteract the positive effects. Is it just a straight calculation? How do you think about this?", "Holden Karnofsky", "I think in theory, what we want is to make grants that have more upside than downside or have expected net positive effects. I think we tend to be, in a common sense way, a little bit conservative with the negative effects. What we don't want to do is enter some field on a theory that's just totally messed up and wrong in a way that we could have known if we had just done a little bit more homework. I think that there's just something irresponsible and uncooperative about that. So in general, when we are making big decisions like big dollar decisions or going into a new cause, we’ll often try really hard to do everything we can to understand the downsides.", "If after we've done roughly everything we can up to some reasonable diminishing returns, we still believe that the upsides outweigh the downsides, then we're generally going to go for it. Our goal is not to avoid harm at all costs. Our goal is to operate in a cooperative, high-integrity way–– always doing our best, always trying to anticipate the downsides, but recognizing that we're going to have unintended side effects sometimes. That's life–– anything you do has unintended side effects. I don't agree with the specific example you gave as an example of something that was net negative, but I don't know.", "Dwarkesh Patel", "Are you talking about OpenAI? Yeah. Many people on Twitter might have asked if you were investing in OpenAI.", "Holden Karnofsky", "I mean, you can look up our $30 million grant to OpenAI. I think it was back in 2016–– we wrote about some of the thinking behind it. Part of that grant was getting a board seat for Open Philanthropy for a few years so that we could help with their governance at a crucial early time in their development. I think some people believe that OpenAI has been net negative for the world because of the fact that they have contributed a lot to AI advancing and to AI being sort of hyped, and they think that gives us less time to prepare for it. However, I do think that all else being equal, AI advancing faster gives us less time to prepare. It is a bad thing, but I don't think it's the only consideration. I think OpenAI has done a number of good things too, and has set some important precedents. I think it's probably much more interested in a lot of the issues I'm talking about and risks from advanced AI than like the company that I would guess would exist if they didn't, would be doing similar things.", "I don't really accept that the idea that OpenAI is a negative force. I think it's highly debatable. We could talk about it all day. If you look at our specific grant, it's even a completely different thing because a lot of that was not just about boosting them, but about getting to be part of their early decision making. I think that was something that there were benefits and was important. My overall view is that I don't look back on that grant as one of the better grants we've made, not one of the worse ones. But certainly we've done a lot of things that have had, you know, that have not worked out. I think there are some times shortly when we've done things that have consequences we didn't intend. No philanthropist can be free of that. What we can try and do is be responsible, seriously do our homework to try to understand things beforehand, see the risks that we're able to see, and think about how to minimize them.", "Future Proof Ethics", "Dwarkesh Patel", "Let's talk about ethics. I think you have a very interesting series of blog posts about future proof ethics . Sure. You want to explain what this is first?", "Holden Karnofsky", "Sure. I wrote a short blog post series trying to explain some of the philosophical views and ethical views that are common among people who call themselves effective altruists. One of the ideas I appealed to is (I'm not sure I'm getting this right) how a lot of people I know are trying to come up with a system of morality and a system of ethics that would survive a lot of moral progress. They’re trying to come up with a system where if they later became a lot wiser and learned a lot more and reflected on their morality, they wouldn't look back on their earlier actions and think they were doing horrible, monstrous mistakes. A lot of history has just people doing things they thought were fine and right at the time, but now we look back and we're horrified.", "You could think of yourself asking “ What morality can I have that would make it not so likely that if there was a bunch more moral progress and if people learned a lot more, the future won't look back on me and be horrified of what I did.” So I wrote a bit of a series about what it might look like to try to do that, and laid out a few principles of it trying to use this to explain the moral systems a lot of effective altruists tend to use–– which tends to be some flavor of utilitarianism that is often very expansive about like whose rights count. So effective altruists are very interested in future generations that don't exist yet. They're interested in animals being mistreated on factory farms. They're interested in various populations that a lot of people don't care about today, but that there are large numbers of. So I try to explain that. A thing that's important is I laid this view out partly so I could argue against it later and I haven't done the latter yet. So I have a lot of reservations too about the ethical systems that are common with effective altruists.", "Dwarkesh Patel", "Alright, so let's talk about some of the pillars you laid out in this piece. Sentientism seems pretty reasonable to me.", "Holden Karnofsky", "There are three principles that I roughly outlined that you might want for a morality that is going to stand up to scrutiny or you won't be so likely to change your mind about if you learn more and get better. One principle is systemization. It's better to have morality based on simple general principles that you apply everywhere than have a morality that's just always you just deciding what feels right in the moment. The latter could be subject to a lot of the biases of your time and the former lets you stress test the core ideas. Two of the core ideas I propose are what I call “thin utilitarianism”, which is basically the greatest good for the greatest number and sentientism, which is basically saying that someone counts or someone matters if they're able to suffer or have pleasure.", "I think you just said sentientism seems reasonable to you. I think sentientism might be the weakest part of the picture to me. I think you if you have a morality where you are insistent on saying that everyone counts equally in proportion to the amount of pain or pleasure they're able to have, you run into a lot of weird dilemmas that you wouldn't have to run into if you didn't have that view. So I think it's very strange, but I think it is actually one of the more questionable parts of the view. It's kind of saying, “When I'm deciding whether I care about someone, it doesn't matter at all if they're way in the future, if they're way far away, if they're totally different from me, if they're not human, if I've never met them, all that matters is if they can have pain or pleasure.” I think it sounds great, and I completely get why someone listening to me would say, “ How could you ever disagree with that? ” But I do think there's various challenges with it which I have not had the chance to write about yet. I doubt I can be very convincing on this podcast as of right now because I haven't thought enough about it.", "Dwarkesh Patel", "Alright––yeah, sounds good. Let's talk about systemization. Doesn't the fact that you have lots of complex and sometimes contradictory moral intuitions suggest that maybe the whole goal of having some fundamental principles you extrapolate the rest of morality from is kind of doom project?", "Holden Karnofsky", "I think it does somewhat suggest that. I am somewhat partial to that view and that's something I may be writing in my rebuttal. I also think it's possible to be confused and I think it's possible to have lots of stuff going on in your brain and some of it might be based on really good really good intentions of treating other people fairly and being good to other people. Some of it might be based on just other weird stuff about wanting to stand up for people who look like you or help people who look like you, etc. So I do have some sympathy for the project of trying to say, “ My intuitions contradict each other, but some of them are coming from a good place. Some of them are coming from a bad place. ” If I thought more about it, I would realize which ones are which, and I want to try and do that.", "Dwarkesh Patel", "Yeah. Let's talk about new totalitarianism . There’s this question from an old Scott Alexander post where he asks, would you rather the medieval church spent all of its money helping the poor rather than supporting the arts ? So maybe there were fewer poor people back in the medieval times, but you wouldn't have any cathedrals or you wouldn't have the Sistine Chapel. I don't know how you would answer that if you were in medieval times.", "Holden Karnofsky", "It doesn't sound like the strongest version of this argument to me, to be honest. Maybe that would be fine or good. I don't know. My wife really loves these like old churches–– if I had more of her attitude, I would be more horrified by this idea. Low income people had a rough time in the past so them having better lives seems pretty appealing, so I don't really know if that's the best version of this argument.", "Dwarkesh Patel", "How much of future proof ethics is basically that you're very confident that a future Holden will have a much more developed and better set of ethics? How much do you think people in general or humanity in general will get better ethics over time?", "Holden Karnofsky", "Thhis has been definitely a point of confusion in this series and partly something I think I didn't communicate well about and which makes the series like not that amazing. I use the term moral progress and I just use it to refer to like things “getting better.” I think sometimes there is such a thing as thinking more about your morality, gaining some insight and ending up in a better place as a result. I think that is a thing that is real. There are some people who believe morality is an objective truth, but I'm not one of those people. However, even though I believe morality is not objective, I still think there's a meaningful notion of moral progress. There's such a thing as having more reasonable moral views than I used to.", "What I didn't mean to say is that moral progress has any inevitability about it. I didn't mean to say that moral progress necessarily happens just because time goes on. I don't think that. I just think it's a thing that can happen. So I do think a future Holden will probably be better at morality just because I'm really interested in the topic–– I'm going to keep trying to improve it. I think that we have some reason to think that actually does help a bit–– a really tiny bit, but I'm not confident in that at all. I certainly don't think that society is going to have moral progress necessarily, but I do think we've had some in the past.", "Dwarkesh Patel", "Ok, but then it seems weird to label the system of ethics future proof ethics, right? Maybe it would just be future “ Holden-proof ethics.”", "Holden Karnofsky", "Yeah, possible. I talk about this a bunch in the series and I think I just didn't do a great job with this. I think what I was trying to do is use a term that you didn't have to be a moral realist to to get behind. What I was really trying to capture was, “Can I think now to reduce the odds that if later I improve, I'll be horrified by my early actions?” That was what I was trying to capture the concept of. I'm not sure I really did it successfully.", "Integrity vs Utilitarianism", "Dwarkesh Patel", "Gotcha. OK, so you had a recent post on the E.A. forum that I thought was really interesting. A quote from that is, “ My view is that for the most part, people who identify as E.A. tend to have unusually high integrity–– but my guess is that this is more despite utilitarianism than because of it.” So what do you think is the explanation for this coincidence where a group of reasonable, non fanatical, high integrity people also happen to be a community of utilitarians?", "Holden Karnofsky", "You might have a set of people who are who think of themselves as trying really hard to be like the kind of person they should be or really hard to bring their actions in line with their beliefs and their statements–– so that drives them to be kind of like honest a lot of the time and follow a lot of our common sense rules of morality. It also drives them to really try to get that ethics right and land on ideas like utilitarianism that are very systematic and pure and like give you sort of this clear theoretical guidance. So it could drive both those things. Whereas I believe that if you're a utilitarian, it's really unclear whether utilitarianism actually tells you to do things like avoiding lying. Some people think it does. Some people think it doesn't. I think it's very unclear.", "Dwarkesh Patel", "You've advocated for the moral parliament's approach when you're trying to make decisions. What is the right level of organization at which to use that approach? Should individuals be making decisions based on having multiple different moral parties inside them? Is that the right approach for entire movements but individuals should be specializing? What is the right level to be applying this approach at?", "Holden Karnofsky", "Moral uncertainty is something I hope to write about in the future. The basic idea is that there might be a bunch of different ways about thinking about what the right thing is to do in the world. You might look at the world from one angle and say, “ Well, what matters is like the total sum of all the pleasures. So therefore a bigger world would be better. So therefore I should be like really obsessed with getting the world to be as big as possible. ” There might be another perspective that says that what really matters is suffering. “ We should minimize suffering. We should want the world to be small.” There might be another perspective that says it doesn't matter what happens to the world. “It matters how I act. What matters is that I act with integrity, that I tell the truth, things like that. ”", "There's these interesting debates asking what should you do when you think you have some sympathy for all these views. How do you choose an action that some perspectives would say is the best thing you've ever done and some would say is the worst thing you've ever done? The moral parliament idea is an idea that was laid out by Nick Bostrom in an Overcoming Bias post a decade ago that I like. I think about it as if I'm just multiple people. I just think about it as if there's multiple people all living inside my head arguing about what to do and they all are friends and they all care about each other and they want to get along. So they're trying to reach a deal that all of them can feel fairly good about. That is how I tend to think about dealing with different moral views.", "I tend to want to do things that are really good according to one and not too bad according to the rest and try to have the kind of different parts of myself making deals with each other. So that relates to something I said at the beginning about not being into ends justify the means. I put a lot of effort into doing things that would be like really, really good. If this most important century stuff came out true, that would be good but it also would not be too catastrophic if it didn't. So there are lines I'm not willing to cross. There are behaviors I'm not willing to engage in to promote the kind of goals of people who worry about AI safety. So it's a moderating approach I think.", "Bayesian Mindset & Governance", "Dwarkesh Patel", "It makes a lot of sense for somebody who is the CEO of Open Philanthropy to want the decisions you make to reflect uncertainties about your decisions. However, if it's just somebody like me where I'm not in some sort of leadership position where I have a large amount of resources to dedicate to, should I just specialize in that particular moral view I have or should I also be trying to allocate my time and resources according to different moral views?", "Holden Karnofsky", "I think no matter what position I was in in the world and however many resources I had, I would feel that my decisions were significant in some sense and affected people and were important in the way that they affect those around me. So I think it's just very natural to me to think there are a lot of different perspectives on what it means to be a good person rather than trying to turn them into a single unifying mathematical equation and take the expected value–– which is another approach I think is interesting. I do think the approach I do tend to prefer is to imagine the different perspectives as different people trying to get along and make a deal with each other.", "Dwarkesh Pate l", "Let's talk about governance and management. In software, as I'm sure you're aware, there's a concept of a 10X engineer. Is there something similar in the kinds of work a research analyst at Open Philanthropy does? Is it meaningful to say that when two people are doing the same job, one can be orders of magnitude more effective than another?", "Holden Karnofsky", "At any given thing in open philanthropy, some people are much better at it than others. I don't think that's very surprising. I think this is true fot many jobs and I don't really know the reasons for it. It could be any combination of talent, interest, and how hard someone works at it. I certainly think there's a lot of variance and hiring people who can do a great job at the work Open Phil does has been a lifelong challenge.", "Dwarkesh Patel", "You've written about the Bayesian mindset . You know many billionaires, and many of them are donors to Open Philanthropy. In your experience, do these startup founders who end up becoming very successful have a Bayesian mindset or is that the wrong way to characterize their –", "Holden Karnofsky", "Yeah, I wrote about this idea called the Bayesian mindset , which is basically about being willing to put a probability on anything and use your probabilities and say your probabilities as a way of discovering why it is you think what you think and using expected value calculations similarly. I think this is like much more common among successful tech founders than it is among like the general population, but there's plenty of tech founders who don't think this way at all. I say in the Bayesian mindset. I don't think it's like a super well-tested, well-proven social technology that does amazing things, but I do think it's more like an interesting thing to be experimenting with.", "Dwarkesh Patel", "Well, to the general population, the Bayesian mindset is practically unheard of.", "Holden Karnofsky", "Yeah, I mean it’s not even just the name. This whole idea of like thinking about expected value and subjective probabilities all the time–– almost no one does that. However, I do think tech founders probably do it more than the average person.", "Dwarkesh Patel", "That makes sense. Do you think that adopting more of a Bayesian mindset would help somebody get to the top levels and be more successful?", "Holden Karnofsky", "It's really TBD and unclear. I think the Bayesian mindset is a cool thing to experiment with. I experiment with it a lot. I feel like it helps me sometimes. Like most things, it's good in moderation and with taste and not using it for every single thing. Maybe 10 years from now as it gets more popular, we'll have a better sense of where the actual applied strengths and weaknesses are.", "Dwarkesh Patel", "As I'm sure you're aware, there are many prizes floating around for all kinds of intellectual work in effective altruism. Some of them even have come from open philanthropy. Are you optimistic about their ability to resurface new ideas?", "Holden Karnofsky", "I would say I'm medium-optimistic about the impact of all these prizes. I've been part of designing some of them, but I've just seen some other ones… people say, “ Hey, we'll pay you X dollars if you can give us a good critique of our…” GiveWell will pay people to give them a good critique of their reasoning about what the best charities are to give to. Open Philanthropy has a prize for showing us a cause we should be looking at that we're not. I think I'm medium optimistic. I think it will get some interest and it will get some people to pay attention who weren't otherwise and some of those people might have good ideas, but I don't think it's like the only way to solve these problems or that it will automatically solve them. That's generally how the people designing the prizes think about them too.", "Dwarkesh Patel", "You have an interesting post about stakeholder management that says that over time, institutions will have to take into account the interests of more and more stakeholders. Do you expect that this will be something that will be a major factor in how Open Philanthropy acts in the future? What will be the impact on how Open {hilanthropy runs overall?", "Holden Karnofsky", "Yeah, I think in general the bigger your organization is, the bigger your city is, the bigger your society is–– if you want everyone to be happy, there are more people you're going to have to make happy. I think this does mean that in general by default as a company grows, it gets less able to make a lot of disruptive quick changes. A lot of people would use the term “nimble”. A lot of people in the tech world like to use these very negative terms for big company properties and very positive terms for small company properties. So small companies are nimble and quick and practical and adaptive and dynamic and high productivity and big companies are bureaucratic and slow and non-adaptive. I think that's all fair. I also think that big companies often at the end of the day just produce more stuff than they could if they were small.", "I think if Apple were still 10 people, it might be a more exciting place to work at— but they wouldn't be able to make all those iPhones. There are a lot of iPhones going out to a lot of people, serving a lot of different people's needs, and abiding by a lot of regulatory requirements. There's a lot of work to be done. So I don't think it's necessarily a bad thing but I think there’s a tradeoff when a company grows. I do think Open Philanthropy is in the business of doing kind of unconventional giving and using a lot of judgment calls to do it. So I tend to think we benefit a lot from staying as small as we can and I generally have fought for us to stay as small as we can while doing our work–– but we still have to grow from where we are.", "Dwarkesh Patel", "Gotcha. Do you mean stay small in terms of funds or do you mean people?", "Holden Karnofsky", "People.", "Dwarkesh Patel", "Okay yeah, people. It seems odd to say that in the organization you have the most experience with, your inside view is that more stakeholders would be bad–– but overall it's been a net zero or positive.", "Holden Karnofsky", "Well, it's not clear – we are growing. We're bigger than we were a year ago and we'll be bigger in a year. So it's definitely not true that I’m trying to minimize the size of the company. We're growing, but I think we want to watch it. I think we want to treat each hire as something that we only do because we had a really good reason to. I think there are some companies that may have more to gain from being 10,000 people. I don't think we'll ever be 10,000 people.", "Career Advice", "Dwarkesh Patel", "Right. Now your written career advice emphasizes building aptitudes and specializing, but when I look at your career––, it's all over the place, right? We were just talking about it at the beginning of the interview. You started off in GiveWell, then you were working at Open Philanthropy and now you're forecasting AI. So how do you think about this kind of thing? Are you specializing? What's going on here?", "Holden Karnofsky", "I don't know if I really forecast AI. I mostly distill and bring together analyses that others have done, and I manage people who work on that sort of thing. The career advice I often give is that it's really good to have something you're very focused on that you're specializing in, and that you're trying to be the best in the world at. The general theme of my career is just taking questions, especially questions about how to give effectively, where it's just like no one's really gotten started on this question. Even doing a pretty crappy analysis can be better than what already exists. So often what I have done in my career, what I consider myself to have kind of specialized in, in a sense, is I do the first cut crappy analysis of some question that has not been analyzed much and is very important.", "Then I build a team to do better analysis of that question. That's been my general pattern. I think that's the most generalizable skill I've had, but I have switched around because I do think that I've kind of at various points in my career just said, “ Hey, here's something that's getting very little attention and it's very important, and it's worth the sacrifice of the specialized knowledge I built up in one area to switch into this other area that I think I ought to be working on.”", "Dwarkesh Patel", "What does the logo on the Cold Takes blog mean?", "Holden Karnofsky", "There is no logo. I think you're talking about the browser icon.", "Dwarkesh Patel", "Yeah, yeah.", "Holden Karnofsky", "That is a stuffed animal named Mora. At some point, if I get enough subscribers, I will explain who all these stuffed animals are, but my wife and I basically use a stuffed animal personality classification system where we will compare someone to various stuffed animals to explain what their strengths and weaknesses are. Mora is a pink polar bear who's very creative but also very narcissistic and loves attention. So she's kind of the mascot of the blog because it's this blog that's just very crazy, very out there, and is just me writing in public. So it just felt like her spirit.", "Dwarkesh Patel", "Gosh, okay. So let me ask, what is the goal of the blog? Why have a second job as a blogger in addition to being the CEO of a big organization?", "Holden Karnofsky", "I think it fits into my job reasonably well. I didn't want it to be Open Philanthropy branded. I just wanted the freedom to write about things the way I wanted and how I wanted. I do think that we make these high-stakes decisions based on very unconventional views about the world. So I think it's good for us to be trying to make those views have contact for the rest of the world. I think there would be something not ideal about being a large foundation giving you large amounts of money but then just quietly going around believing these enormously important and true things that no one else believes. If we put the views out into the world, A, I think all the people seeking money from us would just have a better idea of where we're coming from and why is it that we're interested in funding what we're interested in funding.", "I think to an extent, if people can find the arguments compelling or even just understand them, this helps us understand our thinking and can help create more grantees for us. It can help cause the world to be a place where there's more good stuff for us to fund because more people see where we're coming from, hopefully, agree with it, and are trying to work on the things we consider important. Then to the extent that my stuff is actually just screwed up and wrong and I've got mistakes in there and I've thought it all through wrong, this is also the best way I know of to discover that. I don't know how else I'm going to get people to critique it except by putting it out there and maybe getting some attention for it. So that's how I kind of think of it–– it's taking views that are very important to the decisions we're making and trying to express them so that we can either get more people agreeing with us who were able to fund and support and work with or learn more about what we're getting wrong.", "Dwarkesh Patel", "Alright. So let me actually ask you–– has that happened? Has there been an important view expressed on the blog that because of feedback that you’ve change your mind on? Or is it mostly about the communication part?", "Holden Karnofsky", "I mean, there's definitely been a lot of interesting stuff. An example is I put up this post on the track record of futurists and there was a post by Dan Luu recently that I haven't read yet. It has its own analysis of the track record of futurists and I need to compare them to think about what I really think about how good humans have historically been at predicting the future. He certainly has a ton of data in there that I was not aware of that feels like a bit of a response or may not have been prompted by it. There's been a lot of commentary.", "There's been a lot of critiques about some of the stuff I've written in the most important century. There have been other critiques. I think a lot of the stuff I wrote about the biggest weak points of the most important century was based on the public criticism that was coming in. So I think I have become more aware of a lot of the parts of my thinking that are the least convincing or the weakest or the most in need of argument. I have like paid more attention to those things because of that.", "Dwarkesh Patel", "Yeah. This may just be me talking, but it actually does sound like if you've learned about how people react to the most important century thesis, but it doesn't seem like something has surfaced–– which has made you change your mind on it a lot.", "Holden Karnofsky", "That would be a big, a big change, to drop my view that we could be in the most important century for humanity. That's still what I believe. I mean, I think I've also heard from people who think I'm like underselling the whole thing–– crazy people who just think that I should be planning on transformative AI much sooner than what I kind of implied in the series. So, yeah, I put out “Most Important Century” and I don't believe any of the critiques have been deep enough and strong enough to make me just drop that whole thing. It's a big picture with a lot of moving parts and I have deepened my understanding of many of the parts.", "Dwarkesh Patel", "Yeah. One thing I find really interesting about your work is how much it involves the CEO having a deep understanding of all the issues involved. You’re the one who has to deeply understand, for example, moral parliaments or specific forecasts about AI, biological anchors, and whatever else, right? It seems perhaps in other organizations, the CEO just delegates this kind of understanding and just asks for the bullet points. Is this something you think more leaders should be doing or is there something special about your position?", "Holden Karnofsky", "I know much less about any given topic than the person who specializes in the topic. I think what I try to do is I try to know enough about the topic that I can manage them effectively, and that's a pretty general corporate best practice. I think it just varies a lot. So, for example, something like keeping our books, keeping our finances, doing the financial audits, and all that stuff–– that's something that's really easy to judge the outputs by without really knowing much about finance at all. You can just say, “ Look, was this compliant? Did we do our audit? Did we pass the audit? Do we still have money in the bank? How much money do we have in the bank? ” You don't need to know much about it to judge it effectively.", "However, at any other company, you may need to know a fair amount about some other topics in order to judge them very effectively. If your company is making computers or phones, and design is very important, it would be really bad if the CEO just had like no opinions on design and just thought “I 'm going to let our design person decide the design.” It's a central thing to the company. It matters to the company and they should know some things about it. So I do know things that are really central to open philanthropy. What does it mean to do good? How do we handle uncertainty about how to do good? What are the most important causes? If AI might be one of the most important causes, then when might we see transformative AI? What would that mean? How big is the risk of misaligned AI? I think I need to understand those issues well enough to effectively manage people who know a lot more about them than I do. I'm curious–– what do you think about this whole most important century stuff? Does it just strike you as like crazy? What do you think when you read the series?", "Dwarkesh Patel", "Yeah, obviously through the entire interview, I've been trying to like nitpick at small things, but when I really think about the main claim you're making, it’s that this could be the most important century and transformative AI could happen in the century. If it does, then it's a really big deal and yeah, I don't disagree. That makes a lot of sense. Throughout preparing for the interview and trying to come up with objections, I've just been a little bit more convinced with thinking about, “Is there actually something I could do over my early career that matters? Or is that something that maybe I should just hold off on thinking about? ”", "Holden Karnofsky", "Glad to hear it. Do you have any ideas about what you might do?", "Dwarkesh Patel", "No.", "Holden Karnofsky", "Really? Literally no ideas? You haven't been like, “Can I work on AI alignment?”", "Dwarkesh Patel", "Well, yeah in that sense, I’ve thought a little bit about it. In probably like two months or three months, I'll think really hard about what I actually want to do for a career.", "Many thanks to my amazing editor, Graham Bessalou, for producing this podcast and to Mia Aiyana for creating the amazing transcripts that accompany each episode, which have helpful links and you can find them at the link in the description below. Remember to subscribe on YouTube and your favorite podcast platforms. Cheers. See you next time." ]
[ "https://www.cold-takes.com/most-important-century/", "https://www.cold-takes.com/most-important-century/", "https://www.givewell.org/", "https://www.givewell.org/", "https://www.openphilanthropy.org/about/team/cari-tuna/", "https://en.wikipedia.org/wiki/Dustin_Moskovitz", "https://asana.com/", "https://www.openphilanthropy.org/", "https://www.effectivealtruism.org/", "https://www.cold-takes.com/most-important-century/", "https://www.cold-takes.com/most-important-century/", "https://mitpress.mit.edu/9780262038034/the-deep-learning-revolution/", "https://en.wikipedia.org/wiki/Miri", "https://encyclopedia.pub/entry/33978", "https://en.wikipedia.org/wiki/Memeplex", "https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/", "https://docs.google.com/document/u/0/d/1fM5yVocl8kNGGYVsMiAxXKKQlU73tdp6x0-TrXwJk4k/edit", "https://en.wikipedia.org/wiki/Robin_Hanson", "https://en.wikipedia.org/wiki/Eliezer_Yudkowsky", "https://www.lesswrong.com/tag/orthogonality-thesis", "https://www.cold-takes.com/future-proof-ethics/", "https://www.cold-takes.com/most-important-century/", "https://www.openphilanthropy.org/", "https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html", "https://www.cold-takes.com/", "https://www.cold-takes.com/weak-point-in-most-important-century-full-automation/", "https://www.cold-takes.com/weak-point-in-most-important-century-lock-in/", "https://en.wikipedia.org/wiki/William_MacAskill", "https://www.amazon.com/What-Owe-Future-William-MacAskill/dp/1541618629", "https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/", "https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/", "https://www.openphilanthropy.org/research/report-on-semi-informative-priors/", "https://en.wikipedia.org/wiki/Eliezer_Yudkowsky", "https://en.wikipedia.org/wiki/Hans_Moravec", "https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/", "https://www.cold-takes.com/has-life-gotten-better/", "http://www.tobyord.com/", "https://en.wikipedia.org/wiki/Sam_Bankman-Fried", "https://en.wikipedia.org/wiki/Dustin_Moskovitz", "https://www.openphilanthropy.org/about/team/cari-tuna/", "https://forum.effectivealtruism.org/posts/gCkHoXvDjEKSK22Wp/future-proof-ethics", "https://www.cold-takes.com/future-proof-ethics/", "https://sentientism.info/", "https://www.jstor.org/stable/1149648", "https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/", "https://www.ethicalpsychology.com/2021/07/the-parliamentary-approach-to-moral.html", "https://nickbostrom.com/", "https://www.overcomingbias.com/", "https://www.cold-takes.com/the-bayesian-mindset/", "https://www.cold-takes.com/the-bayesian-mindset/", "https://www.cold-takes.com/the-bayesian-mindset/", "https://www.cold-takes.com/empowerment-and-stakeholder-management/", "https://www.cold-takes.com/", "https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/", "https://danluu.com/futurist-predictions/" ]
https://www.dwarkesh.com/p/ilya-sutskever
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
[ "Time to AGI", "Dwarkesh Patel", "Today I have the pleasure of interviewing Ilya Sutskever, who is the Co-founder and Chief Scientist of OpenAI. Ilya, welcome to The Lunar Society.", "Ilya Sutskever", "Thank you, happy to be here.", "Dwarkesh Patel", "First question and no humility allowed. There are not that many scientists who will make a big breakthrough in their field, there are far fewer scientists who will make multiple independent breakthroughs that define their field throughout their career, what is the difference? What distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?", "Ilya Sutskever", "Thank you for the kind words. It's hard to answer that question. I try really hard, I give it everything I've got and that has worked so far. I think that's all there is to it.", "Dwarkesh Patel", "Got it. What's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers?", "Ilya Sutskever", "Maybe they haven't really gotten to do it a lot. But it also wouldn't surprise me if some of it was going on right now. I can certainly imagine they would be taking some of the open source models and trying to use them for that purpose. For sure I would expect this to be something they'd be interested in the future.", "Dwarkesh Patel", "It's technically possible they just haven't thought about it enough?", "Ilya Sutskever", "Or haven't done it at scale using their technology. Or maybe it is happening, which is annoying.", "Dwarkesh Patel", "Would you be able to track it if it was happening?", "Ilya Sutskever", "I think large-scale tracking is possible, yes. It requires special operations but it's possible.", "Dwarkesh Patel", "Now there's some window in which AI is very economically valuable, let’s say on the scale of airplanes, but we haven't reached AGI yet. How big is that window?", "Ilya Sutskever", "It's hard to give a precise answer and it’s definitely going to be a good multi-year window. It's also a question of definition. Because AI, before it becomes AGI, is going to be increasingly more valuable year after year in an exponential way.", "In hindsight, it may feel like there was only one year or two years because those two years were larger than the previous years. But I would say that already, last year, there has been a fair amount of economic value produced by AI. Next year is going to be larger and larger after that. So I think it's going to be a good multi-year chunk of time where that’s going to be true, from now till AGI pretty much.", "Dwarkesh Patel", "Okay. Because I'm curious if there's a startup that's using your model, at some point if you have AGI there's only one business in the world, it's OpenAI. How much window does any business have where they're actually producing something that AGI can’t produce?", "Ilya Sutskever", "It's the same question as asking how long until AGI. It's a hard question to answer. I hesitate to give you a number. Also because there is this effect where optimistic people who are working on the technology tend to underestimate the time it takes to get there. But the way I ground myself is by thinking about the self-driving car. In particular, there is an analogy where if you look at the size of a Tesla, and if you look at its self-driving behavior, it looks like it does everything. But it's also clear that there is still a long way to go in terms of reliability. And we might be in a similar place with respect to our models where it also looks like we can do everything, and at the same time, we will need to do some more work until we really iron out all the issues and make it really good and really reliable and robust and well behaved.", "Dwarkesh Patel", "By 2030, what percent of GDP is AI?", "Ilya Sutskever", "Oh gosh, very hard to answer that question.", "Dwarkesh Patel", "Give me an over-under.", "Ilya Sutskever", "The problem is that my error bars are in log scale. I could imagine a huge percentage, I could imagine a really disappointing small percentage at the same time.", "Dwarkesh Patel", "Okay, so let's take the counterfactual where it is a small percentage. Let's say it's 2030 and not that much economic value has been created by these LLMs. As unlikely as you think this might be, what would be your best explanation right now of why something like this might happen?", "Ilya Sutskever", "I really don't think that's a likely possibility, that's the preface to the comment. But if I were to take the premise of your question, why were things disappointing in terms of real-world impact? My answer would be reliability. If it somehow ends up being the case that you really want them to be reliable and they ended up not being reliable, or if reliability turned out to be harder than we expect.", "I really don't think that will be the case. But if I had to pick one and you were telling me — hey, why didn't things work out? It would be reliability. That you still have to look over the answers and double-check everything. That just really puts a damper on the economic value that can be produced by those systems.", "Dwarkesh Patel", "Got it. They will be technologically mature, it’s just the question of whether they'll be reliable enough.", "Ilya Sutskever", "Well, in some sense, not reliable means not technologically mature.", "What’s after generative models?", "Dwarkesh Patel", "Yeah, fair enough. What's after generative models? Before, you were working on reinforcement learning. Is this basically it? Is this the paradigm that gets us to AGI? Or is there something after this?", "Ilya Sutskever", "I think this paradigm is gonna go really, really far and I would not underestimate it. It's quite likely that this exact paradigm is not quite going to be the AGI form factor. I hesitate to say precisely what the next paradigm will be but it will probably involve integration of all the different ideas that came in the past.", "Dwarkesh Patel", "Is there some specific one you're referring to?", "Ilya Sutskever", "It's hard to be specific.", "Dwarkesh Patel", "So you could argue that next-token prediction can only help us match human performance and maybe not surpass it? What would it take to surpass human performance?", "Ilya Sutskever", "I challenge the claim that next-token prediction cannot surpass human performance. On the surface, it looks like it cannot. It looks like if you just learn to imitate, to predict what people do, it means that you can only copy people. But here is a counter argument for why it might not be quite so. If your base neural net is smart enough, you just ask it — What would a person with great insight, wisdom, and capability do? Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?", "Dwarkesh Patel", "Yes, although where would it get that sort of insight about what that person would do? If not from…", "Ilya Sutskever", "From the data of regular people. Because if you think about it, what does it mean to predict the next token well enough? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like it is statistics but what is statistics? In order to understand those statistics to compress them, you need to understand what is it about the world that creates this set of statistics? And so then you say — Well, I have all those people. What is it about people that creates their behaviors? Well they have thoughts and their feelings, and they have ideas, and they do things in certain ways. All of those could be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely but to a pretty decent degree to say — Well, can you guess what you'd do if you took a person with this characteristic and that characteristic? Like such a person doesn't exist but because you're so good at predicting the next token, you should still be able to guess what that person who would do. This hypothetical, imaginary person with far greater mental ability than the rest of us.", "Dwarkesh Patel", "When we're doing reinforcement learning on these models, how long before most of the data for the reinforcement learning is coming from AI and not humans?", "Ilya Sutskever", "Already most of the default enforcement learning is coming from AIs. The humans are being used to train the reward function. But then the reward function and its interaction with the model is automatic and all the data that's generated during the process of reinforcement learning is created by AI. If you look at the current technique/paradigm, which is getting some significant attention because of chatGPT, Reinforcement Learning from Human Feedback (RLHF). The human feedback has been used to train the reward function and then the reward function is being used to create the data which trains the model.", "Dwarkesh Patel", "Got it. And is there any hope of just removing a human from the loop and have it improve itself in some sort of AlphaGo way?", "Ilya Sutskever", "Yeah, definitely. The thing you really want is for the human teachers that teach the AI to collaborate with an AI. You might want to think of it as being in a world where the human teachers do 1% of the work and the AI does 99% of the work. You don't want it to be 100% AI. But you do want it to be a human-machine collaboration, which teaches the next machine.", "Dwarkesh Patel", "I've had a chance to play around these models and they seem bad at multi-step reasoning. While they have been getting better, what does it take to really surpass that barrier?", "Ilya Sutskever", "I think dedicated training will get us there. More and more improvements to the base models will get us there. But fundamentally I also don't feel like they're that bad at multi-step reasoning. I actually think that they are bad at mental multistep reasoning when they are not allowed to think out loud. But when they are allowed to think out loud, they're quite good. And I expect this to improve significantly, both with better models and with special training.", "Data, models, and research", "Dwarkesh Patel", "Are you running out of reasoning tokens on the internet? Are there enough of them?", "Ilya Sutskever", "So for context on this question, there are claims that at some point we will run out of tokens, in general, to train those models. And yeah, I think this will happen one day and by the time that happens, we need to have other ways of training models, other ways of productively improving their capabilities and sharpening their behavior, making sure they're doing exactly, precisely what you want, without more data.", "Dwarkesh Patel", "You haven't run out of data yet? There's more?", "Ilya Sutskever", "Yeah, I would say the data situation is still quite good. There's still lots to go. But at some point the data will run out.", "Dwarkesh Patel", "What is the most valuable source of data? Is it Reddit, Twitter, books? Where would you train many other tokens of other varieties for?", "Ilya Sutskever", "Generally speaking, you'd like tokens which are speaking about smarter things, tokens which are more interesting. All the sources which you mentioned are valuable.", "Dwarkesh Patel", "So maybe not Twitter. But do we need to go multimodal to get more tokens? Or do we still have enough text tokens left?", "Ilya Sutskever", "I think that you can still go very far in text only but going multimodal seems like a very fruitful direction.", "Dwarkesh Patel", "If you're comfortable talking about this, where is the place where we haven't scraped the tokens yet?", "Ilya Sutskever", "Obviously I can't answer that question for us but I'm sure that for everyone there is a different answer to that question.", "Dwarkesh Patel", "How many orders of magnitude improvement can we get, not from scale or not from data, but just from algorithmic improvements?", "Ilya Sutskever", "Hard to answer but I'm sure there is some.", "Dwarkesh Patel", "Is some a lot or some a little?", "Ilya Sutskever", "There’s only one way to find out.", "Dwarkesh Patel", "Okay. Let me get your quickfire opinions about these different research directions. Retrieval transformers. So it’s just somehow storing the data outside of the model itself and retrieving it somehow.", "Ilya Sutskever", "Seems promising.", "Dwarkesh Patel", "But do you see that as a path forward?", "Ilya Sutskever", "It seems promising.", "Dwarkesh Patel", "Robotics. Was it the right step for Open AI to leave that behind?", "Ilya Sutskever", "Yeah, it was. Back then it really wasn't possible to continue working in robotics because there was so little data. Back then if you wanted to work on robotics, you needed to become a robotics company. You needed to have a really giant group of people working on building robots and maintaining them. And even then, if you’re gonna have 100 robots, it's a giant operation already, but you're not going to get that much data. So in a world where most of the progress comes from the combination of compute and data, there was no path to data on robotics. So back in the day, when we made a decision to stop working in robotics, there was no path forward.", "Dwarkesh Patel", "Is there one now?", "Ilya Sutskever", "I'd say that now it is possible to create a path forward. But one needs to really commit to the task of robotics. You really need to say — I'm going to build many thousands, tens of thousands, hundreds of thousands of robots, and somehow collect data from them and find a gradual path where the robots are doing something slightly more useful. And then the data that is obtained and used to train the models, and they do something that's slightly more useful. You could imagine it's this gradual path of improvement, where you build more robots, they do more things, you collect more data, and so on. But you really need to be committed to this path. If you say, I want to make robotics happen, that's what you need to do. I believe that there are companies who are doing exactly that. But you need to really love robots and need to be really willing to solve all the physical and logistical problems of dealing with them. It's not the same as software at all. I think one could make progress in robotics today, with enough motivation.", "Dwarkesh Patel", "What ideas are you excited to try but you can't because they don't work well on current hardware?", "Ilya Sutskever", "I don't think current hardware is a limitation. It's just not the case.", "Dwarkesh Patel", "Got it. But anything you want to try you can just spin it up?", "Ilya Sutskever", "Of course. You might wish that current hardware was cheaper or maybe it would be better if it had higher memory processing bandwidth let’s say. But by and large hardware is just not an issue.", "Alignment", "Dwarkesh Patel", "Let's talk about alignment. Do you think we'll ever have a mathematical definition of alignment?", "Ilya Sutskever", "A mathematical definition is unlikely. Rather than achieving one mathematical definition, I think we will achieve multiple definitions that look at alignment from different aspects. And that this is how we will get the assurance that we want. By which I mean you can look at the behavior in various tests, congruence, in various adversarial stress situations, you can look at how the neural net operates from the inside. You have to look at several of these factors at the same time.", "Dwarkesh Patel", "And how sure do you have to be before you release a model in the wild? 100%? 95%?", "Ilya Sutskever", "Depends on how capable the model is. The more capable the model, the more confident we need to be.", "Dwarkesh Patel", "Alright, so let's say it's something that's almost AGI. Where is AGI?", "Ilya Sutskever", "Depends on what your AGI can do. Keep in mind that AGI is an ambiguous term. Your average college undergrad is an AGI, right? There's significant ambiguity in terms of what is meant by AGI. Depending on where you put this mark you need to be more or less confident.", "Dwarkesh Patel", "You mentioned a few of the paths toward alignment earlier, what is the one you think is most promising at this point?", "Ilya Sutskever", "I think that it will be a combination. I really think that you will not want to have just one approach. People want to have a combination of approaches. Where you spend a lot of compute adversarially to find any mismatch between the behavior you want it to teach and the behavior that it exhibits.We look into the neural net using another neural net to understand how it operates on the inside. All of them will be necessary. Every approach like this reduces the probability of misalignment. And you also want to be in a world where your degree of alignment keeps increasing faster than the capability of the models.", "Dwarkesh Patel", "Do you think that the approaches we’ve taken to understand the model today will be applicable to the actual super-powerful models? Or how applicable will they be? Is it the same kind of thing that will work on them as well or?", "Ilya Sutskever", "It's not guaranteed. I would say that right now, our understanding of our models is still quite rudimentary. We’ve made some progress but much more progress is possible. And so I would expect that ultimately, the thing that will really succeed is when we will have a small neural net that is well understood that’s been given the task to study the behavior of a large neural net that is not understood, to verify.", "Dwarkesh Patel", "By what point is most of the AI research being done by AI?", "Ilya Sutskever", "Today when you use Copilot , how do you divide it up? So I expect at some point you ask your descendant of ChatGPT, you say — Hey, I'm thinking about this and this. Can you suggest fruitful ideas I should try? And you would actually get fruitful ideas. I don't think that's gonna make it possible for you to solve problems you couldn't solve before.", "Dwarkesh Patel", "Got it. But it's somehow just telling the humans giving them ideas faster or something. It's not itself interacting with the research?", "Ilya Sutskever", "That was one example. You could slice it in a variety of ways. But the bottleneck there is good ideas, good insights and that's something that the neural nets could help us with.", "Dwarkesh Patel", "If you're designing a billion-dollar prize for some sort of alignment research result or product, what is the concrete criterion you would set for that billion-dollar prize? Is there something that makes sense for such a prize?", "Ilya Sutskever", "It's funny that you asked, I was actually thinking about this exact question. I haven't come up with the exact criterion yet. Maybe a prize where we could say that two years later, or three years or five years later, we look back and say like that was the main result. So rather than say that there is a prize committee that decides right away, you wait for five years and then award it retroactively.", "Dwarkesh Patel", "But there's no concrete thing we can identify as you solve this particular problem and you’ve made a lot of progress?", "Ilya Sutskever", "A lot of progress, yes. I wouldn't say that this would be the full thing.", "Dwarkesh Patel", "Do you think end-to-end training is the right architecture for bigger and bigger models? Or do we need better ways of just connecting things together?", "Ilya Sutskever", "End-to-end training is very promising. Connecting things together is very promising.", "Dwarkesh Patel", "Everything is promising.", "Dwarkesh Patel", "So Open AI is projecting revenues of a billion dollars in 2024. That might very well be correct but I'm just curious, when you're talking about a new general-purpose technology, how do you estimate how big a windfall it'll be? Why that particular number?", "Ilya Sutskever", "We've had a product for quite a while now, back from the GPT-3 days, from two years ago through the API and we've seen how it grew. We've seen how the response to DALL-E has grown as well and you see how the response to ChatGPT is, and all of this gives us information that allows us to make relatively sensible extrapolations of anything. Maybe that would be one answer. You need to have data, you can’t come up with those things out of thin air because otherwise, your error bars are going to be like 100x in each direction.", "Dwarkesh Patel", "But most exponentials don't stay exponential especially when they get into bigger and bigger quantities, right? So how do you determine in this case?", "Ilya Sutskever", "Would you bet against AI?", "Post AGI future", "Dwarkesh Patel", "Not after talking with you. Let's talk about what a post-AGI future looks like. I'm guessing you're working 80-hour weeks towards this grand goal that you're really obsessed with. Are you going to be satisfied in a world where you're basically living in an AI retirement home? What are you personally doing after AGI comes?", "Ilya Sutskever", "The question of what I'll be doing or what people will be doing after AGI comes is a very tricky question. Where will people find meaning? But I think that that's something that AI could help us with. One thing I imagine is that we will be able to become more enlightened because we interact with an AGI which will help us see the world more correctly, and become better on the inside as a result of interacting. Imagine talking to the best meditation teacher in history, that will be a helpful thing. But I also think that because the world will change a lot, it will be very hard for people to understand what is happening precisely and how to really contribute. One thing that I think some people will choose to do is to become part AI. In order to really expand their minds and understanding and to really be able to solve the hardest problems that society will face then.", "Dwarkesh Patel", "Are you going to become part AI?", "Ilya Sutskever", "It is very tempting.", "Dwarkesh Patel", "Do you think there'll be physically embodied humans in the year 3000?", "Ilya Sutskever", "3000? How do I know what’s gonna happen in 3000?", "Dwarkesh Patel", "Like what does it look like? Are there still humans walking around on Earth? Or have you guys thought concretely about what you actually want this world to look like?", "Ilya Sutskever", "Let me describe to you what I think is not quite right about the question. It implies we get to decide how we want the world to look like. I don't think that picture is correct. Change is the only constant. And so of course, even after AGI is built, it doesn't mean that the world will be static. The world will continue to change, the world will continue to evolve. And it will go through all kinds of transformations. I don't think anyone has any idea of how the world will look like in 3000. But I do hope that there will be a lot of descendants of human beings who will live happy, fulfilled lives where they're free to do as they see fit. Or they are the ones who are solving their own problems. One world which I would find very unexciting is one where we build this powerful tool, and then the government said — Okay, so the AGI said that society should be run in such a way and now we should run society in such a way. I'd much rather have a world where people are still free to make their own mistakes and suffer their consequences and gradually evolve morally and progress forward on their own, with the AGI providing more like a base safety net.", "Dwarkesh Patel", "How much time do you spend thinking about these kinds of things versus just doing the research?", "Ilya Sutskever", "I do think about those things a fair bit. They are very interesting questions.", "Dwarkesh Patel", "The capabilities we have today, in what ways have they surpassed where we expected them to be in 2015? And in what ways are they still not where you'd expected them to be by this point?", "Ilya Sutskever", "In fairness, it's sort of what I expected in 2015. In 2015, my thinking was a lot more — I just don't want to bet against deep learning. I want to make the biggest possible bet on deep learning. I don't know how, but it will figure it out.", "Dwarkesh Patel", "But is there any specific way in which it's been more than you expected or less than you expected? Like some concrete prediction out of 2015 that's been bounced?", "Ilya Sutskever", "Unfortunately, I don't remember concrete predictions I made in 2015. But I definitely think that overall, in 2015, I just wanted to move to make the biggest bet possible on deep learning, but I didn't know exactly. I didn't have a specific idea of how far things will go in seven years.", "Well, no in 2015, I did have all these best with people in 2016, maybe 2017, that things will go really far. But specifics. So it's like, it's both, it's both the case that it surprised me and I was making these aggressive predictions. But maybe I believed them only 50% on the inside.", "Dwarkesh Patel", "What do you believe now that even most people at OpenAI would find far fetched?", "Ilya Sutskever", "Because we communicate a lot at OpenAI people have a pretty good sense of what I think and we've really reached the point at OpenAI where we see eye to eye on all these questions.", "Dwarkesh Patel", "Google has its custom TPU hardware, it has all this data from all its users, Gmail, and so on. Does it give them an advantage in terms of training bigger models and better models than you?", "Ilya Sutskever", "At first, when the TPU came out I was really impressed and I thought — wow, this is amazing. But that's because I didn't quite understand hardware back then. What really turned out to be the case is that TPUs and GPUs are almost the same thing.", "They are very, very similar. The GPU chip is a little bit bigger, the TPU chip is a little bit smaller, maybe a little bit cheaper. But then they make more GPUs and TPUs so the GPUs might be cheaper after all.", "But fundamentally, you have a big processor, and you have a lot of memory and there is a bottleneck between those two. And the problem that both the TPU and the GPU are trying to solve is that the amount of time it takes you to move one floating point from the memory to the processor, you can do several hundred floating point operations on the processor, which means that you have to do some kind of batch processing. And in this sense, both of these architectures are the same. So I really feel like in some sense, the only thing that matters about hardware is cost per flop and overall systems cost.", "Dwarkesh Patel", "There isn't that much difference?", "Ilya Sutskever", "Actually, I don't know. I don't know what the TPU costs are but I would suspect that if anything, TPUs are probably more expensive because there are less of them.", "New ideas are overrated", "Dwarkesh Patel", "When you are doing your work, how much of the time is spent configuring the right initializations? Making sure the training run goes well and getting the right hyperparameters, and how much is it just coming up with whole new ideas?", "Ilya Sutskever", "I would say it's a combination. Coming up with whole new ideas is a modest part of the work. Certainly coming up with new ideas is important but even more important is to understand the results, to understand the existing ideas, to understand what's going on.", "A neural net is a very complicated system, right? And you ran it, and you get some behavior, which is hard to understand. What's going on? Understanding the results, figuring out what next experiment to run, a lot of the time is spent on that. Understanding what could be wrong, what could have caused the neural net to produce a result which was not expected.", "I'd say a lot of time is spent coming up with new ideas as well. I don't like this framing as much. It's not that it's false but the main activity is actually understanding.", "Dwarkesh Patel", "What do you see as the difference between the two?", "Ilya Sutskever", "At least in my mind, when you say come up with new ideas, I'm like — Oh, what happens if it did such and such? Whereas understanding it's more like — What is this whole thing? What are the real underlying phenomena that are going on? What are the underlying effects? Why are we doing things this way and not another way? And of course, this is very adjacent to what can be described as coming up with ideas. But the understanding part is where the real action takes place.", "Dwarkesh Patel", "Does that describe your entire career? If you think back on something like ImageNet, was that more new idea or was that more understanding?", "Ilya Sutskever", "Well, that was definitely understanding. It was a new understanding of very old things.", "Dwarkesh Patel", "What has the experience of training on Azure been like?", "Ilya Sutskever", "Fantastic. Microsoft has been a very, very good partner for us. They've really helped take Azure and bring it to a point where it's really good for ML and we’re super happy with it.", "Dwarkesh Patel", "How vulnerable is the whole AI ecosystem to something that might happen in Taiwan? So let's say there's a tsunami in Taiwan or something, what happens to AI in general?", "Ilya Sutskever", "It's definitely going to be a significant setback. No one will be able to get more compute for a few years. But I expect compute will spring up. For example, I believe that Intel has fabs just like a few generations ago. So that means that if Intel wanted to they could produce something GPU-like from four years ago. But yeah, it's not the best,", "I'm actually not sure if my statement about Intel is correct, but I do know that there are fabs outside of Taiwan, they're just not as good. But you can still use them and still go very far with them. It's just cost, it’s just a setback.", "Cost of models", "Dwarkesh Patel", "Would inference get cost prohibitive as these models get bigger and bigger?", "Ilya Sutskever", "I have a different way of looking at this question. It's not that inference will become cost prohibitive. Inference of better models will indeed become more expensive. But is it prohibitive? That depends on how useful it is. If it is more useful than it is expensive then it is not prohibitive.", "To give you an analogy, suppose you want to talk to a lawyer. You have some case or need some advice or something, you're perfectly happy to spend $400 an hour. Right? So if your neural net could give you really reliable legal advice, you'd say — I'm happy to spend $400 for that advice. And suddenly inference becomes very much non-prohibitive. The question is, can a neural net produce an answer good enough at this cost?", "Dwarkesh Patel", "Yes. And you will just have price discrimination in different models?", "Ilya Sutskever", "It's already the case today. On our product, the API serves multiple neural nets of different sizes and different customers use different neural nets of different sizes depending on their use case.", "If someone can take a small model and fine-tune it and get something that's satisfactory for them, they'll use that. But if someone wants to do something more complicated and more interesting, they’ll use the biggest model.", "Dwarkesh Patel", "How do you prevent these models from just becoming commodities where these different companies just bid each other's prices down until it's basically the cost of the GPU run?", "Ilya Sutskever", "Yeah, there's without question a force that's trying to create that. And the answer is you got to keep on making progress. You got to keep improving the models, you gotta keep on coming up with new ideas and making our models better and more reliable, more trustworthy, so you can trust their answers. All those things.", "Dwarkesh Patel", "Yeah. But let's say it's 2025 and somebody is offering the model from 2024 at cost. And it's still pretty good. Why would people use a new one from 2025 if the one from just a year older is even better?", "Ilya Sutskever", "There are several answers there. For some use cases that may be true. There will be a new model for 2025, which will be driving the more interesting use cases. There is also going to be a question of inference cost. If you can do research to serve the same model at less cost. The same model will cost different amounts to serve for different companies. I can also imagine some degree of specialization where some companies may try to specialize in some area and be stronger compared to other companies. And to me that may be a response to commoditization to some degree.", "Dwarkesh Patel", "Over time do the research directions of these different companies converge or diverge? Are they doing similar and similar things over time? Or are they branching off into different areas?", "Ilya Sutskever", "I’d say in the near term, it looks like there is convergence. I expect there's going to be a convergence-divergence-convergence behavior, where there is a lot of convergence on the near term work, there's going to be some divergence on the longer term work. But then once the longer term work starts to fruit, there will be convergence again,", "Dwarkesh Patel", "Got it. When one of them finds the most promising area, everybody just…", "Ilya Sutskever", "That's right. There is obviously less publishing now so it will take longer before this promising direction gets rediscovered. But that's how I would imagine the thing is going to be. Convergence, divergence, convergence.", "Dwarkesh Patel", "Yeah. We talked about this a little bit at the beginning. But as foreign governments learn about how capable these models are, are you worried about spies or some sort of attack to get your weights or somehow abuse these models and learn about them?", "Ilya Sutskever", "Yeah, you absolutely can't discount that. Something that we try to guard against to the best of our ability, but it's going to be a problem for everyone who's building this.", "Dwarkesh Patel", "How do you prevent your weights from leaking?", "Ilya Sutskever", "You have really good security people.", "Dwarkesh Patel", "How many people have the ability to SSH into the machine with the weights?", "Ilya Sutskever", "The security people have done a really good job so I'm really not worried about the weights being leaked.", "Dwarkesh Patel", "What kinds of emergent properties are you expecting from these models at this scale? Is there something that just comes about de novo?", "Ilya Sutskever", "I'm sure really new surprising properties will come up, I would not be surprised. The thing which I'm really excited about, the things which I’d like to see is — reliability and controllability. I think that this will be a very, very important class of emergent properties. If you have reliability and controllability that helps you solve a lot of problems. Reliability means you can trust the model's output, controllability means you can control it. And we'll see but it will be very cool if those emergent properties did exist.", "Dwarkesh Patel", "Is there some way you can predict that in advance? What will happen in this parameter count, what will happen in that parameter count?", "Ilya Sutskever", "I think it's possible to make some predictions about specific capabilities though it's definitely not simple and you can’t do it in a super fine-grained way, at least today. But getting better at that is really important. And anyone who is interested and who has research ideas on how to do that, that can be a valuable contribution.", "Dwarkesh Patel", "How seriously do you take these scaling laws? There's a paper that says — You need this many orders of magnitude more to get all the reasoning out? Do you take that seriously or do you think it breaks down at some point?", "Ilya Sutskever", "The thing is that the scaling law tells you what happens to your log of your next word prediction accuracy, right? There is a whole separate challenge of linking next-word prediction accuracy to reasoning capability. I do believe that there is a link but this link is complicated. And we may find that there are other things that can give us more reasoning per unit effort. You mentioned reasoning tokens, I think they can be helpful. There can probably be some things that help.", "Dwarkesh Patel", "Are you considering just hiring humans to generate tokens for you? Or is it all going to come from stuff that already exists out there?", "Ilya Sutskever", "I think that relying on people to teach our models to do things, especially to make sure that they are well-behaved and they don't produce false things is an extremely sensible thing to do.", "Is progress inevitable?", "Dwarkesh Patel", "Isn't it odd that we have the data we needed exactly at the same time as we have the transformer at the exact same time that we have these GPUs? Like is it odd to you that all these things happened at the same time or do you not see it that way?", "Ilya Sutskever", "It is definitely an interesting situation that is the case. I will say that it is odd and it is less odd on some level. Here's why it's less odd — what is the driving force behind the fact that the data exists, that the GPUs exist, and that the transformers exist? The data exists because computers became better and cheaper, we've got smaller and smaller transistors. And suddenly, at some point, it became economical for every person to have a personal computer. Once everyone has a personal computer, you really want to connect them to the network, you get the internet. Once you have the internet, you suddenly have data appearing in great quantities. The GPUs were improving concurrently because you have smaller and smaller transistors and you're looking for things to do with them.", "Gaming turned out to be a thing that you could do. And then at some point, Nvidia said — the gaming GPU, I might turn it into a general purpose GPU computer, maybe someone will find it useful. It turns out it's good for neural nets. It could have been the case that maybe the GPU would have arrived five years later, ten years later. Let's suppose gaming wasn't the thing. It's kind of hard to imagine, what does it mean if gaming isn't a thing? But maybe there was a counterfactual world where GPUs arrived five years after the data or five years before the data, in which case maybe things wouldn’t have been as ready to go as they are now. But that's the picture which I imagine. All this progress in all these dimensions is very intertwined. It's not a coincidence. You don't get to pick and choose in which dimensions things improve.", "Dwarkesh Patel", "How inevitable is this kind of progress? Let's say you and Geoffrey Hinton and a few other pioneers were never born. Does the deep learning revolution happen around the same time? How much is it delayed?", "Ilya Sutskever", "Maybe there would have been some delay. Maybe like a year delayed?", "Dwarkesh Patel", "Really? That’s it?", "Ilya Sutskever", "It's really hard to tell. I hesitate to give a longer answer because — GPUs will keep on improving. I cannot see how someone would not have discovered it. Because here's the other thing. Let's suppose no one has done it, computers keep getting faster and better. It becomes easier and easier to train these neural nets because you have bigger GPUs, so it takes less engineering effort to train one. You don't need to optimize your code as much. When the ImageNet data set came out, it was huge and it was very, very difficult to use. Now imagine you wait for a few years, and it becomes very easy to download and people can just tinker. A modest number of years maximum would be my guess. I hesitate to give a lot longer answer though. You can’t re-run the world you don’t know.", "Dwarkesh Patel", "Let's go back to alignment for a second. As somebody who deeply understands these models, what is your intuition of how hard alignment will be?", "Ilya Sutskever", "At the current level of capabilities, we have a pretty good set of ideas for how to align them. But I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. It's something to think about a lot and do research. Oftentimes academic researchers ask me what’s the best place where they can contribute. And alignment research is one place where academic researchers can make very meaningful contributions.", "Dwarkesh Patel", "Other than that, do you think academia will come up with important insights about actual capabilities or is that going to be just the companies at this point?", "Ilya Sutskever", "The companies will realize the capabilities. It's very possible for academic research to come up with those insights. It doesn't seem to happen that much for some reason but I don't think there's anything fundamental about academia. It's not like academia can't. Maybe they're just not thinking about the right problems or something because maybe it's just easier to see what needs to be done inside these companies.", "Dwarkesh Patel", "I see. But there's a possibility that somebody could just realize…", "Ilya Sutskever", "I totally think so. Why would I possibly rule this out?", "Dwarkesh Patel", "What are the concrete steps by which these language models start actually impacting the world of atoms and not just the world of bits?", "Ilya Sutskever", "I don't think that there is a clean distinction between the world of bits and the world of atoms. Suppose the neural net tells you — hey here's something that you should do, and it's going to improve your life. But you need to rearrange your apartment in a certain way. And then you go and rearrange your apartment as a result. The neural net impacted the world of atoms.", "Future breakthroughs", "Dwarkesh Patel", "Fair enough. Do you think it'll take a couple of additional breakthroughs as important as the Transformer to get to superhuman AI? Or do you think we basically got the insights in the books somewhere, and we just need to implement them and connect them?", "Ilya Sutskever", "I don't really see such a big distinction between those two cases and let me explain why. One of the ways in which progress is taking place in the past is that we've understood that something had a desirable property all along but we didn't realize. Is that a breakthrough? You can say yes, it is. Is that an implementation of something in the books? Also, yes.", "My feeling is that a few of those are quite likely to happen. But in hindsight, it will not feel like a breakthrough. Everybody's gonna say — Oh, well, of course. It's totally obvious that such and such a thing can work.", "The reason the Transformer has been brought up as a specific advance is because it's the kind of thing that was not obvious for almost anyone. So people can say it's not something which they knew about. Let's consider the most fundamental advance of deep learning, that a big neural network trained in backpropagation can do a lot of things. Where's the novelty? Not in the neural network. It's not in the backpropagation. But it was most definitely a giant conceptual breakthrough because for the longest time, people just didn't see that. But then now that everyone sees, everyone’s gonna say — Well, of course, it's totally obvious. Big neural network. Everyone knows that they can do it.", "Dwarkesh Patel", "What is your opinion of your former advisor’s new forward forward algorithm?", "Ilya Sutskever", "I think that it's an attempt to train a neural network without backpropagation. And that this is especially interesting if you are motivated to try to understand how the brain might be learning its connections. The reason for that is that, as far as I know, neuroscientists are really convinced that the brain cannot implement backpropagation because the signals in the synapses only move in one direction.", "And so if you have a neuroscience motivation, and you want to say — okay, how can I come up with something that tries to approximate the good properties of backpropagation without doing backpropagation? That's what the forward forward algorithm is trying to do. But if you are trying to just engineer a good system there is no reason to not use backpropagation. It's the only algorithm.", "Dwarkesh Patel", "I guess I've heard you in different contexts talk about using humans as the existing example case that AGI exists. At what point do you take the metaphor less seriously and don't feel the need to pursue it in terms of the research? Because it is important to you as a sort of existence case.", "Ilya Sutskever", "At what point do I stop caring about humans as an existence case of intelligence?", "Dwarkesh Patel", "Or as an example you want to follow in terms of pursuing intelligence in models.", "Ilya Sutskever", "I think it's good to be inspired by humans, it's good to be inspired by the brain. There is an art into being inspired by humans in the brain correctly, because it's very easy to latch on to a non-essential quality of humans or of the brain. And many people whose research is trying to be inspired by humans and by the brain often get a little bit specific. People get a little bit too — Okay, what cognitive science model should be followed? At the same time, consider the idea of the neural network itself, the idea of the artificial neuron. This too is inspired by the brain but it turned out to be extremely fruitful. So how do they do this? What behaviors of human beings are essential that you say this is something that proves to us that it's possible? What is an essential? No this is actually some emergent phenomenon of something more basic, and we just need to focus on getting our own basics right. One can and should be inspired by human intelligence with care.", "Dwarkesh Patel", "Final question. Why is there, in your case, such a strong correlation between being first to the deep learning revolution and still being one of the top researchers? You would think that these two things wouldn't be that correlated. But why is there that correlation?", "Ilya Sutskever", "I don't think those things are super correlated. Honestly, it's hard to answer the question. I just kept trying really hard and it turned out to have sufficed thus far.", "Dwarkesh Patel", "So it's perseverance.", "Ilya Sutskever", "It's a necessary but not a sufficient condition. Many things need to come together in order to really figure something out. You need to really go for it and also need to have the right way of looking at things. It's hard to give a really meaningful answer to this question.", "Dwarkesh Patel", "Ilya, it has been a true pleasure. Thank you so much for coming to The Lunar Society. I appreciate you bringing us to the offices. Thank you.", "Ilya Sutskever", "Yeah, I really enjoyed it. Thank you very much." ]
[ "https://github.com/features/copilot" ]
https://www.dwarkesh.com/p/jeff-dean-and-noam-shazeer
Jeff Dean & Noam Shazeer – 25 years at Google: from PageRank to AGI
[ "Dwarkesh Patel 00:00:00", "Today I have the honor of chatting with Jeff Dean and Noam Shazeer. Jeff is Google's Chief Scientist, and through his 25 years at the company, he has worked on basically the most transformative systems in modern computing: from MapReduce , BigTable , Tensorflow , AlphaChip – genuinely, the list doesn't end – Gemini now.", "And Noam is the single person most responsible for the current AI revolution. He has been the inventor or co-inventor of all the main architectures and techniques that are used for modern LLMs: from the Transformer itself, to Mixture of Experts , to Mesh Tensorflow , to many other things. And they are two of the three co-leads of Gemini at Google DeepMind. Awesome. Thanks so much for coming on.", "Jeff Dean 00:00:50", "Thank you. Super excited to be here.", "Dwarkesh Patel 00:00:52", "Okay, first question. Both of you have been at Google for 25, or close to 25, years. At some point early on in the company, you probably understood how everything worked. When did that stop being the case? Do you feel like there was a clear moment that happened?", "Noam Shazeer 00:01:08", "I joined, this was like, end of 2000, and they had this thing: everybody gets a mentor. I knew nothing. I would just ask my mentor everything, and my mentor knew everything. It turned out my mentor was Jeff.", "It was not the case that everyone at Google knew everything. It was just the case that Jeff knew everything because he had basically written everything.", "Jeff Dean 00:01:33", "You're very kind. I think as companies grow, you kind of go through these phases. When I joined, we were 25 people, 26 people, something like that. So you eventually you learned everyone's name, and even though we were growing, you kept track of all the people who were joining.", "At some point, you lose track of everyone's name in the company, but you still know everyone working on software engineering things. Then you lose track of all the names of people in the software engineering group, but you at least know all the different projects that everyone's working on. Then at some point, the company gets big enough that you get an email that Project Platypus is launching on Friday, and you're like, \"What the heck is Project Platypus?\"", "Noam Shazeer 00:02:15", "Usually it's a very good surprise. You're like, \"Wow, Project Platypus!\" I had no idea we were doing that.", "Jeff Dean 00:02:23", "But I think it is good to keep track of what's going on in the company, even at a very high level, even if you don't know every last detail. And it's good to know lots of people throughout the company so that you can go ask someone for more details or figure out who to talk to. With one level of indirection, you can usually find the right person in the company if you have a good network of people that you've built up over time.", "Dwarkesh Patel 00:02:44", "How did Google recruit you, by the way?", "Jeff Dean 00:02:46", "I kind of reached out to them, actually.", "Dwarkesh Patel 00:02:50", "And Noam, how did you get recruited?", "Noam Shazeer 00:02:53", "I actually saw Google at a job fair in 1999, and I assumed that it was already this huge company, that there was no point in joining, because everyone I knew used Google. I guess that was because I was a grad student at Berkeley at the time. I guess I've dropped out of grad programs a few times.", "It turns out that actually it wasn't really that large. It turns out that I did not apply in 1999, but just kind of sent them a resume on a whim in 2000, because I figured it was my favorite search engine, and figured I should apply to multiple places for a job. But then it turned out to be really fun, it looked like a bunch of smart people doing good stuff. They had this really nice crayon chart on the wall of the daily number of search queries that somebody had just been maintaining. It looked very exponential. I thought, \"These guys are going to be very successful, and it looks like they have a lot of good problems to work on.\" So I was like, \"Okay, maybe I'll go work there for a little while and then have enough money to just go work on AI for as long as I want after that.\"", "Dwarkesh Patel 00:04:08", "Yeah, yeah. In a way you did that, right?", "Noam Shazeer 00:04:10", "Yeah, it totally worked out exactly according to plan.", "Dwarkesh Patel 00:04:15", "You were thinking about AI in 1999?", "Noam Shazeer 00:04:17", "Yeah, this was like 2000. Yeah, I remember in grad school, a friend of mine at the time had told me that his New Year's resolution for 2000 was to live to see the year 3000, and that he was going to achieve this by inventing AI. I was like, \"Oh, that sounds like a good idea.\"", "I didn't get the idea at the time that you could go do it at a big company. But I figured, \"Hey, a bunch of people seem to be making a ton of money at startups. Maybe I'll just make some money, and then I'll have enough to live on and just work on AI research for a long time.\" But yeah, it actually turned out that Google was a terrific place to work on AI.", "Jeff Dean 00:05:07", "One of the things I like about Google is our ambition has always been sort of something that would require pretty advanced AI. Because I think organizing the world's information and making it universally accessible and useful, actually there is a really broad mandate in there. It's not like the company was going to do this one little thing and stay doing that. And also you could see that what we were doing initially was in that direction, but you could do so much more in that direction.", "Dwarkesh Patel 00:05:36", "How has Moore's Law over the last two or three decades changed the kinds of considerations you have to take on board when you design new systems, when you figure out what projects are feasible? What are still the limitations? What are things you can now do that you obviously couldn't do before?", "Jeff Dean 00:05:51", "I think of it as actually changing quite a bit in the last couple of decades. Two decades ago to one decade ago, it was awesome because you just wait, and like 18 months later, you get much faster hardware, and you don't have to do anything. And then more recently, I feel like the general-purpose CPU-based machine scaling has not been as good, like the fabrication process improvements are now taking three years instead of every two years. The architectural improvements in multi-core processors and so on are not giving you the same boost that we were getting 20 to 10 years ago. But I think at the same time, we're seeing much more specialized computational devices, like machine learning accelerators, TPUs, and very ML-focused GPUs, more recently, are making it so that we can actually get really high performance and good efficiency out of the more modern kinds of computations we want to run that are different than a twisty pile of C++ code trying to run Microsoft Office or something.", "Noam Shazeer 00:07:02", "It feels like the algorithms are following the hardware. Basically, what's happened is that at this point, arithmetic is very, very cheap, and moving data around is comparatively much more expensive. So pretty much all of deep learning has taken off roughly because of that. You can build it out of matrix multiplications that are N cubed operations and N squared bytes of data communication basically.", "Jeff Dean 00:07:39", "Well, I would say that the pivot to hardware oriented around that was an important transition, because before that, we had CPUs and GPUs that were not especially well-suited for deep learning. And then we started to build TPUs at Google that were really just reduced-precision linear algebra machines, and then once you have that then you want to exploit it.", "Noam Shazeer 00:08:02", "It seems like it's all about identifying opportunity costs. Like, okay, this is something like Larry Page, I think, used to always say: \"Our second biggest cost is taxes, and our biggest cost is opportunity costs.\" If he didn't say that, then I've been misquoting him for years.", "But basically it’s like, what is the opportunity that you have that you're missing out on? In this case, I guess it was that you've got all of this chip area, and you're putting a very small number of arithmetic units on it. Fill the thing up with arithmetic units! You could have orders of magnitude more arithmetic getting done.", "Now, what else has to change? Okay, the algorithms and the data flow and everything else.", "Jeff Dean 00:08:51", "And, oh, by the way, the arithmetic can be really low precision, so then you can squeeze even more multiplier units in.", "Dwarkesh Patel 00:08:58", "Noam, I want to follow up on what you said, that the algorithms have been following the hardware. If you imagine a counterfactual world where, suppose that the cost of memory had declined more than arithmetic, or just invert the dynamic you saw.", "Noam Shazeer 00:09:12", "Okay, data flow is extremely cheap, and arithmetic is not.", "Dwarkesh Patel 00:09:18", "What would AI look like today?", "Jeff Dean 00:09:20", "You'd have a lot more lookups into very large memories.", "Noam Shazeer 00:09:25", "Yeah, it might look more like AI looked like 20 years ago but in the opposite direction. I'm not sure. I guess I joined Google Brain in 2012. I left Google for a few years, happened to go back for lunch to visit my wife, and we happened to sit down next to Jeff and the early Google Brain team. I thought, \"Wow, that's a smart group of people.\"", "Jeff Dean 00:09:55", "I think I said, \"You should think about deep neural nets. We're making some pretty good progress there.\"", "Noam Shazeer 00:09:59", "\"That sounds fun.\" Okay, so I jumped back in…", "Jeff Dean 00:10:02", "I wooed him back, it was great.", "Noam Shazeer 00:10:05", "..to join Jeff, that was like 2012. I seem to join Google every 12 years: I rejoined Google in 2000, 2012, and 2024.", "Dwarkesh Patel 00:10:14", "What's going to happen in 2036?", "Noam Shazeer 00:10:16", "I don't know. I guess we shall see.", "Dwarkesh Patel 00:10:21", "What are the trade-offs that you're considering changing for future versions of TPU to integrate how you're thinking about algorithms?", "Jeff Dean 00:10:29", "I think one general trend is we're getting better at quantizing or having much more reduced precision models. We started with TPUv1 , and we weren't even quite sure we could quantize and model for serving with eight-bit integers. But we sort of had some early evidence that seemed like it might be possible. So we're like, \"Great, let's build the whole chip around that.\"", "And then over time, I think you've seen people able to use much lower precision for training as well. But also the inference precision has gone. People are now using INT4 or FP4, which sounded like, if you said to someone like we're going to use FP4, like a supercomputing floating point person 20 years ago, they'd be like, \"What? That's crazy. We like 64 bits in our floats.\"", "Or even below that, some people are quantizing models to two bits or one bit , and I think that's a trend that definitely –", "Dwarkesh Patel 00:11:25", "One bit? Just like a zero-or-one?", "Jeff Dean 00:11:27", "Yeah, just a 0-1. And then you have a sign bit for a group of bits or something.", "Noam Shazeer 00:11:33", "It really has to be a co-design thing because, if the algorithm designer doesn't realize that you can get greatly improved performance, throughput, with the lower precision, of course, the algorithm designer is going to say, \"Of course, I don't want low precision. That introduces risk.\" And then it adds irritation.", "Then if you ask the chip designer, \"Okay, what do you want to build?\" And then they'll ask the person who's writing the algorithms today, who's going to say, \"No, I don't like quantization . It's irritating.\" So you actually need to basically see the whole picture and figure out, \"Oh, wait a minute, we can increase our throughput-to-cost ratio by a lot by quantizing.\"", "Jeff Dean 00:12:27", "Then you're like, yes, quantization is irritating, but your model is going to be three times faster, so you're going to have to deal.", "Dwarkesh Patel 00:12:33", "Through your careers, at various times, you’ve worked on things that have an uncanny resemblance to what we're actually using now for generative AI. In 1990, Jeff, your senior thesis was about backpropagation . And in 2007- this is the thing that I didn’t realise until I was prepping for this episode – in 2007 you guys trained a two trillion token N-gram model for language modeling .", "Just walk me through when you were developing that model. Was this kind of thing in your head? What did you think you guys were doing at the time?", "Jeff Dean 00:13:13", "Let me start with the undergrad thesis. I got introduced to neural nets in one section of one class on parallel computing that I was taking in my senior year. I needed to do a thesis to graduate, an honors thesis. So I approached the professor and I said, \"Oh, it'd be really fun to do something around neural nets.\"", "So, he and I decided I would implement a couple of different ways of parallelizing backpropagation training for neural nets in 1990. I called them something funny in my thesis, like \"pattern partitioning\" or something. But really, I implemented a model parallelism and data parallelism on a 32-processor Hypercube machine.", "In one, you split all the examples into different batches, and every CPU has a copy of the model. In the other one, you pipeline a bunch of examples along to processors that have different parts of the model. I compared and contrasted them, and it was interesting.", "I was really excited about the abstraction because it felt like neural nets were the right abstraction. They could solve tiny toy problems that no other approach could solve at the time. I thought, naive me, that 32 processors would be able to train really awesome neural nets.", "But it turned out we needed about a million times more compute before they really started to work for real problems, but then starting in the late 2008, 2009, 2010 timeframe, we started to have enough compute, thanks to Moore's law, to actually make neural nets work for real things. That was kind of when I re-entered, looking at neural nets.", "But prior to that, in 2007...", "Dwarkesh Patel 00:14:55", "Sorry, actually could I ask about this?", "Jeff Dean 00:14:57", "Oh yeah, sure.", "Dwarkesh Patel 00:14:58", "First of all, unlike other artifacts of academia, it's actually like four pages, and you can just read it .", "Jeff Dean 00:15:07", "It was four pages and then 30 pages of C code.", "Dwarkesh Patel 00:15:10", "But it's just a well-produced artifact. Tell me about how the 2007 paper came together.", "Jeff Dean 00:15:15", "Oh yeah, so that, we had a machine translation research team at Google led by Franz Och , who had joined Google maybe a year before, and a bunch of other people. Every year they competed in a DARPA contest on translating a couple of different languages to English, I think, Chinese to English and Arabic to English.", "The Google team had submitted an entry, and the way this works is you get 500 sentences on Monday, and you have to submit the answer on Friday. I saw the results of this, and we'd won the contest by a pretty substantial margin measured in Bleu score , which is a measure of translation quality.", "So I reached out to Franz, the head of this winning team. I'm like, \"This is great, when are we going to launch it?\" And he's like, \"Oh, well, we can't launch this. It's not really very practical because it takes 12 hours to translate a sentence.\" I'm like, \"Well, that seems like a long time. How could we fix that?\"", "It turned out they'd not really designed it for high throughput, obviously. It was doing 100,000 disk seeks in a large language model that they sort of computed statistics over – I wouldn't say \"trained\" really – for each word that it wanted to translate.", "Obviously, doing 100,000 disk seeks is not super speedy. But I said, \"Okay, well, let's dive into this.\" So I spent about two or three months with them, designing an in-memory compressed representation of N-gram data.", "We were using- an N-gram is basically statistics for how often every N-word sequence occurs in a large corpus, so you basically have, in this case, we had 2 trillion words. Most N-gram models of the day were using two-grams or maybe three-grams, but we decided we would use five-grams.", "So, how often every five-word sequence occurs in basically as much of the web as we could process in that day. Then you have a data structure that says, \"Okay, 'I really like this restaurant' occurs 17 times in the web, or something.", "And so I built a data structure that would let you store all those in memory on 200 machines and then have sort of a batched API where you could say, \"Here are the 100,000 things I need to look up in this round for this word,\" and we'd give you them all back in parallel. That enabled us to go from taking a night to translate a sentence to basically doing something in 100 milliseconds or something.", "Dwarkesh Patel 00:18:03", "There's this list of Jeff Dean facts , like Chuck Norris facts. For example, that “for Jeff Dean, NP equals \"no problemo.\"” One of them, it's funny because now that I hear you say it, actually, it's kind of true. One of them is, \"The speed of light was 35 miles an hour until Jeff Dean decided to optimize it over a weekend.\" Just going from 12 hours to 100 milliseconds, I got to do the orders of magnitude there.", "Jeff Dean 00:18:36", "All of these are very flattering. They're pretty funny. They're like an April Fool's joke gone awry by my colleagues.", "Dwarkesh Patel 00:18:45", "Obviously, in retrospect, this idea that you can develop a latent representation of the entire internet through just considering the relationships between words is like: yeah, this is large language models. This is Gemini. At the time, was it just a translation idea, or did you see that as being the beginning of a different kind of paradigm?", "Jeff Dean 00:19:11", "I think once we built that for translation, the serving of large language models started to be used for other things, like completion... you start to type, and it suggests what completions make sense.", "So it was definitely the start of a lot of uses of language models in Google. And Noam has worked on a number of other things at Google, like spelling correction systems that use language models.", "Noam Shazeer 00:19:36", "That was like 2000, 2001, and I think it was all in-memory on one machine.", "Jeff Dean 00:19:44", "Yeah, I think it was one machine. His spelling correction system he built in 2001 was amazing. He sent out this demo link to the whole company.", "I just tried every butchered spelling of every few-word query I could get, like “scrumbled uggs Bundict\"—", "Noam Shazeer 00:19:59", "I remember that one, yeah yeah.", "Jeff Dean 00:20:00", "—instead of “scrambled eggs benedict”, and it just nailed it every time.", "Noam Shazeer 00:20:04", "Yeah, I guess that was language modeling.", "Dwarkesh Patel 00:20:07", "But at the time, when you were developing these systems, did you have this sense of, “look, you make these things more and more sophisticated, don't consider five words, consider 100 words, 1,000 words, then the latent representation is intelligence”. Basically when did that insight hit?", "Noam Shazeer 00:20:24", "Not really. I don't think I ever felt like, okay, N-gram models are going to–", "Jeff Dean 00:20:32", "–sweep the world–", "Noam Shazeer 00:20:33", "–yeah: “be” artificial intelligence. I think at the time, a lot of people were excited about Bayesian networks . That seemed exciting.", "Definitely seeing those early neural language models, both the magic in that, “okay, this is doing something extremely cool” and also, it just struck me as the best problem in the world in that for one, it is very, very simple to state: give me a probability distribution over the next word. Also, there's roughly infinite training data out there. There's the text of the web; you have trillions of training examples of unsupervised data.", "Jeff Dean 00:21:20", "Yeah, or self-supervised.", "Noam Shazeer 00:21:22", "Self-supervised, yeah.", "Jeff Dean 00:21:23", "It's nice because you then have the right answer, and then you can train on all but the current word and try to predict the current word. It's this amazing ability to just learn from observations of the world.", "Noam Shazeer 00:21:36", "And then it's AI complete. If you can do a great job of that, then you can pretty much do anything.", "Dwarkesh Patel 00:23:00", "There's this interesting discussion in the history of science about whether ideas are just in the air and there's a sort of inevitability to big ideas, or whether they're sort of plucked out of some tangential direction. In this case, this way in which we're laying it out very logically, does that imply basically, how inevitable does this...", "Noam Shazeer 00:22:05", "It does feel like it's in the air. There were definitely some, there was like the neural Turing machine, a bunch of ideas around attention, like having these key-value stores that could be useful in neural networks to focus on things. I think in some sense, it was in the air, and in some sense, you need some group to go do it.", "Jeff Dean 00:22:36", "I like to think of a lot of ideas as being partially in the air, where there are a few different, maybe separate research ideas that one is squinting at when you’re trying to solve a new problem. You draw on those for some inspiration, and then there's some aspect that is not solved, and you need to figure out how to solve that. The combination of some morphing of the things that already exist and some new things lead to some new breakthrough or new research result that didn't exist before.", "Dwarkesh Patel 00:22:57", "A re there key moments that stand out to you where you're looking at a research area, you come up with this idea, and you have this feeling of, \"Holy shit, I can't believe that worked?\"", "Jeff Dean 00:23:06", "One thing I remember was in the early days of the Brain team. We were focused on “let’s see if we could build some infrastructure that lets us train really, really big neural nets”. At that time, we didn't have GPUs in our data centers; we just had CPUs. But we know how to make lots of CPUs work together.", "So we built a system that enabled us to train pretty large neural nets through both model and data parallelism . We had a system for unsupervised learning on 10 million randomly selected YouTube frames. It was a spatially local representation, so it would build up unsupervised representations based on trying to reconstruct the thing from the high-level representations.", "We got that working and training on 2,000 computers using 16,000 cores. After a little while, that model was actually able to build a representation at the highest level where one neuron would get excited by images of cats. It had never been told what a cat was, but it had seen enough examples of them in the training data of head-on facial views of cats that that neuron would turn on for that and not for much else.", "Similarly, you'd have other ones for human faces and backs of pedestrians, and this kind of thing. That was kind of cool because it's from unsupervised learning principles, building up these really high-level representations. Then we were able to get very good results on the supervised ImageNet 20,000 category challenge that advanced the state of the art by 60% relative improvement, which was quite good at the time.", "That neural net was probably 50x bigger than one that had been trained previously, and it got good results. So that sort of said to me, \"Hey, actually scaling up neural nets seems like, I thought it would be a good idea and it seems to be, so we should keep pushing on that.\"", "Dwarkesh Patel 00:25:14", "These examples illustrate how these AI systems fit into what you were just mentioning: that Google is fundamentally a company that organizes information. AI, in this context, is finding relationships between information, between concepts, to help get ideas to you faster, information you want to you faster.", "Now we're moving with current AI models. Obviously, you can use BERT in Google Search and you can ask these questions. They are still good at information retrieval, but more fundamentally, they can write your entire code base for you and do actual work, which goes beyond just information retrieval.", "So how are you thinking about that? Is Google still an information retrieval company if you're building an AGI? An AGI can do information retrieval, but it can do many other things as well.", "Jeff Dean 00:26:14", "I think we're an \"organize the world's information\" company, and that's broader than information retrieval . Maybe: “organizing and creating new information from some guidance you give it”.", "\"Can you help me write a letter to my veterinarian about my dog? It's got these symptoms,\" and it'll draft that. Or, \"Can you feed in this video, and can you produce a summary of what's happening in the video every few minutes?\"", "I think our multimodal capabilities are showing that it's more than just text. It's about understanding the world in all the different modalities that information exists in, both human ones but also non-human-oriented ones, like weird lidar sensors on autonomous vehicles, or genomic information, or health information.", "And then, how do you extract and transform that into useful insights for people and make use of that in helping them do all kinds of things they want to do? Sometimes it's, \"I want to be entertained by chatting with a chatbot.\" Sometimes it's, \"I want answers to this really complicated question, there is no single source to retrieve from.\" You need to pull information from 100 web pages, figure out what's going on, and make an organized, synthesized version of that data.", "Then dealing with multimodal things or coding-related problems. I think it's super exciting what these models are capable of, and they're improving fast, so I'm excited to see where we go.", "Noam Shazeer 00:28:42", "I am also excited to see where we go. I think definitely organizing information is clearly a trillion-dollar opportunity, but a trillion dollars is not cool anymore. What's cool is a quadrillion dollars.", "Obviously the idea is not to just pile up some giant pile of money, but it's to create value in the world, and so much more value can be created when these systems can actually go and do something for you, write your code, or figure out problems that you wouldn't have been able to figure out yourself.", "To do that at scale, we're going to have to be very, very flexible and dynamic as we improve the capabilities of these models.", "Jeff Dean 00:29:22", "Yeah, I'm pretty excited about a lot of fundamental research questions that come about because you see something could be substantially improved if we tried this approach or things in this rough direction. Maybe that'll work, maybe it won't.", "But I also think there's value in seeing what we could achieve for end-users and then how can we work backwards from that to actually build systems that are able to do that. As one example: organizing information, that should mean any information in the world should be usable by anyone, regardless of what language they speak.", "And that I think we've done some amount of, but it's not nearly the full vision of, \"No matter what language you speak, out of thousands of languages, we can make any piece of content available to you and make it usable by you. Any video could be watched in any language.\" I think that would be pretty awesome. We're not quite there yet, but that's definitely things I see on the horizon that should be possible.", "Dwarkesh Patel 00:30:26", "Speaking of different architectures you might try, I know one thing you're working on right now is longer context. If you think of Google Search, it's got the entire index of the internet in its context, but it's a very shallow search. And then obviously language models have limited context right now, but they can really think. It's like dark magic, in-context learning . It can really think about what it’s seeing.", "How do you think about what it would be like to merge something like Google Search and something like in-context learning?", "Jeff Dean 00:30:51", "Yeah, I'll take a first stab at it because – I've thought about this for a bit. One of the things you see with these models is they're quite good, but they do hallucinate and have factuality issues sometimes. Part of that is you've trained on, say, tens of trillions of tokens, and you've stirred all that together in your tens or hundreds of billions of parameters.", "But it's all a bit squishy because you've churned all these tokens together. The model has a reasonably clear view of that data, but it sometimes gets confused and will give the wrong date for something.", "Whereas information in the context window, in the input of the model, is really sharp and clear because we have this really nice attention mechanism in transformers. The model can pay attention to things, and it knows the exact text or the exact frames of the video or audio or whatever that it's processing.", "Right now, we have models that can deal with millions of tokens of context, which is quite a lot. It's hundreds of pages of PDF, or 50 research papers, or hours of video, or tens of hours of audio, or some combination of those things, which is pretty cool. But it would be really nice if the model could attend to trillions of tokens.", "Could it attend to the entire internet and find the right stuff for you? Could it attend to all your personal information for you? I would love a model that has access to all my emails, all my documents, and all my photos.", "When I ask it to do something, it can sort of make use of that, with my permission, to help solve what it is I'm wanting it to do. But that's going to be a big computational challenge because the naive attention algorithm is quadratic. You can barely make it work on a fair bit of hardware for millions of tokens, but there's no hope of making that just naively go to trillions of tokens.", "So, we need a whole bunch of interesting algorithmic approximations to what you would really want: a way for the model to attend conceptually to lots and lots more tokens, trillions of tokens. Maybe we can put all of the Google code base in context for every Google developer, all the world's source code in context for any open-source developer. That would be amazing.", "Noam Shazeer 00:33:20", "It would be incredible. The beautiful thing about model parameters is they are quite memory-efficient at memorizing facts. You can probably memorize on the order of one fact or something per model parameter.", "Whereas if you have some token in context, there are lots of keys and values at every layer. It could be a kilobyte, a megabyte of memory per token.", "Jeff Dean 00:33:56", "You take a word and you blow it up to 10 kilobytes or something.", "Noam Shazeer 00:33:59", "Yes. There's actually a lot of innovation going on around, okay, A, how do you minimize that? And B, what words do you need to have there? Are there better ways of accessing bits of that information?", "Jeff seems like the right person to figure this out. Okay, what does our memory hierarchy look like from the SRAM all the way up to data center worldwide level?", "Dwarkesh Patel 00:34:32", "I want to talk more about the thing you mentioned: look, Google is a company with lots of code and lots of examples. If you just think about that one use case and what that implies, so you've got the Google monorepo . Maybe you figure out the long context thing, you can put the whole thing in context, or you fine-tune on it. Why hasn't this been already done?", "You can imagine the amount of code that Google has proprietary access to, even if you're just using it internally to make your developers more efficient and productive.", "Jeff Dean 00:35:09", "To be clear, we have actually already done further training on a Gemini model on our internal code base for our internal developers. But that's different than attending to all of it because it sort of stirs together the code base into a bunch of parameters. Having it in context makes things clearer.", "Even the further trained model internally is incredibly useful. Sundar , I think, has said that 25% of the characters that we're checking into our code base these days are generated by our AI-based coding models with kind of human oversight.", "Dwarkesh Patel 00:35:49", "How do you imagine, in the next year or two, based on the capabilities you see around the horizon, your own personal work? What will it be like to be a researcher at Google? You have a new idea or something. With the way in which you're interacting with these models in a year, what does that look like?", "Noam Shazeer 00:36:04", "Well, I assume we will have these models a lot better and hopefully be able to be much, much more productive.", "Jeff Dean 00:36:15", "Yeah, in addition to kind of research-y context, anytime you're seeing these models used, I think they're able to make software developers more productive because they can kind of take a high-level spec or sentence description of what you want done and give a pretty reasonable first cut at that. From a research perspective, maybe you can say, \"I'd really like you to explore this kind of idea similar to the one in this paper, but maybe let's try making it convolutional or something.\"", "If you could do that and have the system automatically generate a bunch of experimental code, and maybe you look at it and you're like, \"Yeah, that looks good, run that.\" That seems like a nice dream direction to go in.", "It seems plausible in the next year or two years that you might make a lot of progress on that.", "Dwarkesh Patel 00:38:08", "It seems under-hyped because you could have literally millions of extra employees, and you can immediately check their output, the employees can check each other's output, hey immediately stream tokens.", "Jeff Dean 00:38:21", "Sorry, I didn't mean to underhype it. I think it's super exciting. I just don't like to hype things that aren't done yet.", "Dwarkesh Patel 00:38:34", "I do want to play with this idea more because it seems like a big deal if you have something kind of like an autonomous software engineer, especially from the perspective of a researcher who's like, \"I want to build the system.\" Okay, so let's just play with this idea. As somebody who has worked on developing transformative systems through your careers, the idea that instead of having to code something like whatever today's equivalent of MapReduce is or Tensorflow is, just like, \"Here's how I want a distributed AI library to look. Write it up for me.\"", "Do you imagine you could be 10x more productive? 100x more productive?", "Jeff Dean 00:39:13", "I was pretty impressed. I think it was on Reddit that I saw we have a new experimental coding model that's much better at coding and math and so on. Someone external tried it, and they basically prompted it and said, \"I'd like you to implement a SQL processing database system with no external dependencies, and please do that in C.\"", "From what the person said, it actually did a quite good job. It generated a SQL parser and a tokenizer and a query planning system and some storage format for the data on disk and actually was able to handle simple queries. From that prompt, which is like a paragraph of text or something, to get even an initial cut at that seems like a big boost in productivity for software developers.", "I think you might end up with other kinds of systems that maybe don't try to do that in a single semi-interactive, \"respond in 40 seconds\" kind of thing but might go off for 10 minutes and might interrupt you after five minutes saying, \"I've done a lot of this, but now I need to get some input. Do you care about handling video or just images or something?\" That seems like you'll need ways of managing the workflow if you have a lot of these background activities happening.", "Dwarkesh Patel 00:40:44", "Can you talk more about that? What interface do you imagine we might need if you could literally have millions of employees you could spin up, hundreds of thousands of employees you could spin up on command, who are able to type incredibly fast, and who- It's almost like you go from 1930s trading of tickets or something to now modern Jane Street or something. You need some interface to keep track of all this that's going on, for the AIs to integrate into this big monorepo and leverage their own strengths, for humans to keep track of what's happening. Basically what is it like to be Jeff or Noam in three years working day-to-day?", "Noam Shazeer 00:42:26", "It might be kind of similar to what we have now because we already have sort of parallelization as a major issue. We have lots and lots of really, really brilliant machine learning researchers, and we want them to all work together and build AI.", "So actually, the parallelization among people might be similar to parallelization among machines. I think definitely it should be good for things that require a lot of exploration, like, \"Come up with the next breakthrough.\"", "If you have a brilliant idea that is just certain to work in the ML domain, then it has a 2% chance of working if you're brilliant. Mostly these things fail, but if you try 100 things or 1,000 things or a million things, then you might hit on something amazing. We have plenty of compute. Like modern top labs these days have probably a million times as much compute as it took to train Transformer.", "Dwarkesh Patel 00:43:41", "Yeah, actually, so that's a really interesting idea. Suppose in the world today there are on the order of 10,000 AI researchers in this community coming up with a breakthrough-", "Jeff Dean 00:43:52", "Probably more than that. There were 15,000 at NeurIPS last week.", "Noam Shazeer 00:43:55", "Wow.", "Dwarkesh Patel 00:43:57", "100,000, I don't know.", "Jeff Dean 00:43:58", "Yeah, maybe. Sorry.", "Dwarkesh Patel 00:44:00", "No, no, it's good to have the correct order of magnitude. The odds that this community every year comes up with a breakthrough on the scale of a Transformer is, let's say, 10%. Now suppose this community is a thousand times bigger, and it is, in some sense, like this sort of parallel search of better architectures, better techniques.", "Do we just get like-", "Jeff Dean 00:44:22", "A breakthrough a day?", "Dwarkesh Patel 00:44:23", "-breakthroughs every year or every day?", "Noam Shazeer 00:44:25", "Maybe. Sounds potentially good.", "Dwarkesh Patel 00:44:30", "But does that feel like what ML research is like? If you are able to try all these experiments…", "Noam Shazeer 00:44:37", "It's a good question, because I don't know that folks haven't been doing that as much. We definitely have lots of great ideas coming along. Everyone seems to want to run their experiment at maximum scale, but I think that's a human problem.", "Jeff Dean 00:44:55", "It's very helpful to have a 1/1000th scale problem and then vet 100,000 ideas on that, and then scale up the ones that seem promising.", "Dwarkesh Patel 00:44:06", "So, one thing the world might not be taking seriously: people are aware that it's exponentially harder to make a model that's 100x bigger. It's 100x more compute, right? So people are worried that it's an exponentially harder problem to go from Gemini 2 to 3, or so forth.", "But maybe people aren't aware of this other trend where Gemini 3 is coming up with all these different architectural ideas, trying them out, and you see what works, and you're constantly coming up with algorithmic progress that makes training the next one easier and easier. How far could you take that feedback loop?", "Jeff Dean 00:45:43", "I think one thing people should be aware of is that the improvements from generation to generation of these models often are partially driven by hardware and larger scale, but equally and perhaps even more so driven by major algorithmic improvements and major changes in the model architecture, the training data mix, and so on, that really makes the model better per flop that is applied to the model, so I think that's a good realization. Then I think if we have automated exploration of ideas, we'll be able to vet a lot more ideas and bring them into the actual production training for next generations of these models.", "That's going to be really helpful because that's sort of what we're currently doing with a lot of brilliant machine learning researchers: looking at lots of ideas, winnowing ones that seem to work well at small scale, seeing if they work well at medium scale, bringing them into larger scale experiments, and then settling on adding a whole bunch of new and interesting things to the final model recipe. If we can do that 100 times faster through those machine learning researchers just gently steering a more automated search process, rather than hand-babysitting lots of experiments themselves, that's going to be really, really good.", "Noam Shazeer 00:47:03", "The one thing that doesn't speed up is experiments at the largest scale. You still end up doing these N = 1 experiments. Really, you just try to put a bunch of brilliant people in the room, have them stare at the thing, and figure out why this is working, why this is not working.", "Jeff Dean 00:47:21", "For that, more hardware is a good solution. And better hardware.", "Noam Shazeer 00:47:25", "Yes, we're counting on you.", "Dwarkesh Patel 00:47:28", "So, naively, there's this software, there's this algorithmic side improvement that future AI can make. There's also the stuff you're working on. I'll let you describe it.", "But if you get into a situation where just from a software level, you can be making better and better chips in a matter of weeks and months, and better AIs can presumably do that better, how does this feedback loop not just end up in, Gemini 3 taking two years, then Gemini 4 is- or the equivalent level jump is now six months, then level five is three months, then one month? You get to superhuman intelligence much more rapidly than you might naively think, because of this software, both on the hardware side and from the algorithmic side improvements.", "Jeff Dean 00:48:26", "I've been pretty excited lately about how we could dramatically speed up the chip design process. As we were talking earlier, the current way in which you design a chip takes you roughly 18 months to go from \"we should build a chip\" to something that you then hand over to TSMC and then TSMC takes four months to fab it, and then you get it back and you put it in your data centers.", "So that's a pretty lengthy cycle, and the fab time in there is a pretty small portion of it today. But if you could make that the dominant portion, so that instead of taking 12 to 18 months to design the chip with 150 people, you could shrink that to a few people with a much more automated search process, exploring the whole design space of chips and getting feedback from all aspects of the chip design process for the kind of choices that the system is trying to explore at the high level, then I think you could get perhaps much more exploration and more rapid design of something that you actually want to give to a fab.", "That would be great because you can shrink fab time, you can shrink the deployment time by designing the hardware in the right way, so that you just get the chips back and you just plug them into some system. And that will then enable a lot more specialization, it will enable a shorter timeframe for the hardware design so that you don't have to look out quite as far into what kind of ML algorithms would be interesting. Instead, it's like you're looking at six to nine months from now, what should it be? Rather than two, two and a half years.", "That would be pretty cool. I do think that fabrication time, if that's in your inner loop of improvement, you're going to like...", "Dwarkesh Patel 00:50:19", "How long is it?", "Jeff Dean 00:50:20", "The leading edge nodes, unfortunately, are taking longer and longer because they have more metal layers than previous, older nodes. So that tends to make it take anywhere from three to five months.", "Dwarkesh Patel 00:50:32", "Okay, but that's how long training runs take anyways, right? So you could potentially do both at the same time.", "Jeff Dean 00:50:38", "Potentially.", "Dwarkesh Patel 00:50:39", "Okay, so I guess you can't get sooner than three to five months. But the idea that you could get- but also, yeah, you're rapidly developing new algorithmic ideas.", "Noam Shazeer 00:50:47", "That can move fast.", "Jeff Dean 00:50:48", "That can move fast, that can run on existing chips and explore lots of cool ideas.", "Dwarkesh Patel 00:50:54", "So, isn't that a situation in which you're... I think people sort of expect like, ah, there's going to be a sigmoid. Again, this is not a sure thing. But just like, is this a possibility? The idea that you have sort of an explosion of capabilities very rapidly towards the tail end of human intelligence that gets smarter and smarter at a more and more rapid rate?", "Noam Shazeer 00:51:17", "Quite possibly.", "Jeff Dean 00:51:19", "Yeah. I like to think of it like this. Right now, we have models that can take a pretty complicated problem and can break it down internally in the model into a bunch of steps, can sort of puzzle together the solutions for those steps, and can often give you a solution to the entire problem that you're asking.", "But it isn't super reliable, and it's good at breaking things down into five to ten steps, not 100 to 1,000 steps. So if you could go from, yeah, 80% of the time it can give you a perfect answer to something that's ten steps long to something that 90% of the time can give you a perfect answer to something that's 100 to 1,000 steps of sub-problem long, that would be an amazing improvement in the capability of these models. We're not there yet, but I think that's what we're aspirationally trying to get to.", "Noam Shazeer 00:52:14", "We don't need new hardware for that, but we'll take it.", "Jeff Dean 00:52:20", "Never look new hardware in the mouth.", "Noam Shazeer 00:52:23", "One of the big areas of improvement in the near future is inference time compute, applying more compute at inference time. I guess the way I like to describe it is that even a giant language model, even if you’re doing a trillion operations per token, which is more than most people are doing these days, operations cost something like 10 to the negative $18. And so you're getting a million tokens to the dollar.", "I mean compare that to a relatively cheap pastime: you go out and buy a paper book and read it, you're paying 10,000 tokens to the dollar. Talking to a language model is like 100 times cheaper than reading a paperback.", "So there is a huge amount of headroom there to say, okay, if we can make this thing more expensive but smarter, because we're 100x cheaper than reading a paperback, we're 10,000 times cheaper than talking to a customer support agent, or a million times or more cheaper than hiring a software engineer or talking to your doctor or lawyer. Can we add computation and make it smarter?", "I think a lot of the takeoff that we're going to see in the very near future is of this form. We've been exploiting and improving pre-training a lot in the past, and post-training, and those things will continue to improve. But taking advantage of \"think harder\" at inference time is just going to be an explosion.", "Jeff Dean 00:54:21", "Yeah, and an aspect of inference time is I think you want the system to be actively exploring a bunch of different potential solutions. Maybe it does some searches on its own, gets some information back, consumes that information, and figures out, oh, now I would really like to know more about this thing. So now it iteratively explores how to best solve the high-level problem you pose to this system.", "And I think having a dial where you can make the model give you better answers with more inference time compute seems like we have a bunch of techniques now that can kind of do that. The more you crank up the dial, the more it costs you in terms of compute, but the better the answers get.", "That seems like a nice trade-off to have, because sometimes you want to think really hard because it's a super important problem. Sometimes you probably don't want to spend enormous amounts of compute to compute “what's the answer to one plus one”. Maybe the system –", "Dwarkesh Patel 00:55:22", "Shouldn’t decide to come up with new axioms of set theory or whatever!", "Jeff Dean 00:55:25", "– should decide to use a calculator tool instead of a very large language model.", "Dwarkesh Patel 00:55:31", "Interesting. So are there any impediments to taking inference time, like having some way in which you can just linearly scale up inference time compute? Or is this basically a problem that's sort of solved, and we know how to throw 100x compute, 1000x compute, and get correspondingly better results?", "Noam Shazeer 00:55:50", "We're working out the algorithms as we speak. So I believe we'll see better and better solutions to this as these many more than 10,000 researchers are hacking at it, many of them at Google.", "Jeff Dean 00:56:06", "I think we do see some examples in our own experimental work of things where if you apply more inference time compute, the answers are better than if you just apply 10x, you can get better answers than x amount of computed inference time. And that seems useful and important.", "But I think what we would like is when you apply 10x to get even a bigger improvement in the quality of the answers than we're getting today. And so that's about designing new algorithms, trying new approaches, figuring out how best to spend that 10x instead of x to improve things.", "Dwarkesh Patel 00:56:44", "Does it look more like search, or does it look more like just keeping going in the linear direction for a longer time?", "Jeff Dean 00:56:49", "I really like Rich Sutton's paper that he wrote about the Bitter Lesson and the Bitter Lesson effectively is this nice one-page paper but the essence of it is you can try lots of approaches, but the two techniques that are incredibly effective are learning and search.", "You can apply and scale those algorithmically or computationally, and you often will then get better results than any other kind of approach you can apply it to a pretty broad variety of problems.", "Search has got to be part of the solution to spending more inference time. Maybe you explore a few different ways of solving this problem, and that one didn't work, but this one worked better. I'm going to explore that a bit more.", "Dwarkesh Patel 00:57:36", "How does this change your plans for future data center planning and so forth? Where can this kind of search be done asynchronously? Does it have to be online, offline? How does that change how big of a campus you need and those kinds of considerations?", "Jeff Dean 00:57:55", "One general trend is it's clear that inference time compute, you have a model that's pretty much already trained and you want to do inference on, it is going to be a growing and important class of computation. Maybe you want to specialize hardware more around that.", "Actually, the first TPU was specialized for inference and wasn't really designed for training. Then subsequent TPUs were really designed more around training and also for inference.", "But it may be that when you have something where you really want to crank up the amount of compute you use at inference time, that even more specialized solutions will make a lot of sense.", "Dwarkesh Patel 00:58:38", "Does that mean you can accommodate more asynchronous training?", "Jeff Dean 00:58:41", "Training? Or inference?", "Dwarkesh Patel 00:58:42", "Or just you can have the different data centers don't need to talk to each other, you can just have them do a bunch of...", "Jeff Dean 00:58:52", "I like to think of it as, is the inference that you're trying to do latency-sensitive? Like a user is actively waiting for it, or is it a background thing? Maybe I have some inference tasks that I'm trying to run over a whole batch of data, but it's not for a particular user. It's just I want to run inference on it and extract some information.", "There's probably a bunch of things that we don't really have very much of right now, but you're seeing inklings of it in our deep research tool that we just released, like a week ago. You can give it a pretty complicated, high-level task like, \"Hey, can you go off and research the history of renewable energy and all the trends in costs for wind and solar and other kinds of techniques, and put it in a table and give me a full eight-page report?\" And it will come back with an eight-page report with like 50 entries in the bibliography.", "It's pretty remarkable. But you're not actively waiting for that for one second. It takes like a minute or two to go do that.", "And I think there's going to be a fair bit of that kind of compute, and that's the kind of thing where you have some UI questions around. Okay, if you're going to have a user with 20 of these kind of asynchronous tasks in the background happening, and maybe each one of them needs to get more information from the user, like, \"I found your flights to Berlin, but there's no non-stop ones. Are you okay with a non-stop one?\" How does that flow work when you kind of need a bit more information, and then you want to put it back in the background for it to continue doing, you know, finding the hotels in Berlin or whatever? I think it's going to be pretty interesting, and inference will be useful.", "Noam Shazeer 01:00:33", "Inference will be useful. There's also a compute efficiency in inference that you don't have in training. In general, transformers can use the sequence length as a batch during training, but they can't really in inference, because when you're generating one token at a time, so there may be different hardware and inference algorithms that we design for the purposes of being efficient at inference.", "Jeff Dean 01:01:02", "Yeah, as a good example of an algorithmic improvement is the use of drafter models. So you have a really small language model that you do one token at a time when you're decoding, and it predicts four tokens. Then you give that to the big model and you say, \"Okay, here are the four tokens the little model came up with. Check which ones you agree with.\"", "If you agree with the first three, then you just advance. Then you've basically been able to do a four-token width parallel computation instead of a one-token width computation in the big model. Those are the kinds of things that people are looking at to improve inference efficiency, so you don't have this single-token decode bottleneck.", "Noam Shazeer 01:01:46", "Right, basically the big model's being used as a verifier.", "Jeff Dean 01:01:48", "Right, “can you verify”, yeah.", "Noam Shazeer 01:01:50", "[inaudible] generator and verification you can do.", "Jeff Dean 01:01:52", "Right. \"Hello, how are you?\" That sounds great to me. I'm going to advance past that.", "Dwarkesh Patel 01:01:56", "So, a big discussion has been about how we're already tapping out nuclear power plants in terms of delivering power into one single campus. Do we have to have just two gigawatts in one place, five gigawatts in one place, or can it be more distributed and still be able to train a model? Does this new regime of inference scaling make different considerations there plausible? How are you thinking about multi-data center training now?", "Jeff Dean 01:02:31", "We're already doing it. We're pro multi-data center training. I think in the Gemini 1.5 tech report, we said we used multiple metro areas and trained with some of the compute in each place. And then a pretty long latency but high bandwidth connection between those data centers, and that works fine.", "Training is kind of interesting because each step in a training process is usually, for a large model, is usually a few seconds or something, at least. So, the latency of it being 50 milliseconds away doesn't matter that much.", "Noam Shazeer 01:03:06", "Just the bandwidth.", "Jeff Dean 01:03:08", "Yeah, just bandwidth.", "Noam Shazeer 01:03:10", "As long as you can sync all of the parameters of the model across the different data centers and then accumulate all the gradients, in the time it takes to do one step, you're pretty good.", "Jeff Dean 01:03:21", "And then we have a bunch of work, even from early Brain days, when we were using CPU machines and they were really slow. We needed to do asynchronous training to help scale, where each copy of the model would do some local computation, send gradient updates to a centralized system, and then apply them asynchronously. Another copy of the model would be doing the same thing.", "It makes your model parameters wiggle around a bit, and it makes people uncomfortable with the theoretical guarantees, but it actually seems to work in practice.", "Noam Shazeer 01:03:56", "It was so pleasant to go from asynchronous to synchronous because your experiments are now replicable, rather than your results depend on whether there was a web crawler running on the same machine. So, I am so much happier running on TPU pods.", "Jeff Dean 01:04:20", "I love asynchrony. It just lets you scale so much more.", "Noam Shazeer 01:04:22", "With these two iPhones and an Xbox or whatever.", "Jeff Dean 01:04:25", "Yeah, what if we could give you asynchronous but replicable results?", "Noam Shazeer 01:04:29", "Ooh.", "Jeff Dean 01:04:31", "So, one way to do that is you effectively record the sequence of operations, like which gradient update happened and when and on which batch of data. You don't necessarily record the actual gradient update in a log or something, but you could replay that log of operations so that you get repeatability. Then I think you'd be happy.", "Noam Shazeer 01:04:53", "Possibly. At least you could debug what happened, but you wouldn't be able to necessarily compare two training runs. Because, okay, I made one change in the hyperparameter, but also I had a-", "Jeff Dean 01:05:08", "Web crawler.", "Noam Shazeer 01:05:09", "-web crawler messing up, and there were a lot of people streaming the Super Bowl at the same time.", "Jeff Dean 01:05:19", "The thing that led us to go from asynchronous training on CPUs to fully synchronous training is the fact that we have these super fast TPU hardware chips and pods, which have incredible amounts of bandwidth between the chips in a pod. Then, scaling beyond that, we have really good data center networks and even cross-metro area networks that enable us to scale to many, many pods in multiple metro areas for our largest training runs. We can do that fully synchronously.", "As Noam said, as long as the gradient accumulation and communication of the parameters across metro areas happens fast enough relative to the step time, you're golden. You don't really care. But I think as you scale up, there may be a push to have a bit more asynchrony in our systems than we have now because we can make it work, our ML researchers have been really happy how far we've been able to push synchronous training because it is an easier mental model to understand. You just have your algorithm sort of fighting you, rather than the asynchrony and the algorithm kind of battling you.", "Noam Shazeer 01:06:28", "As you scale up, there are more things fighting you. That's the problem with scaling, that you don't always know what it is that's fighting you. Is it the fact that you've pushed quantization a little too far in some place or another? Or is it your data?", "Jeff Dean 01:06:46", "Maybe it's your adversarial machine MUQQ17 that is setting the seventh bit of your exponent and all your gradients or something.", "Noam Shazeer 01:06:56", "Right. And all of these things just make the model slightly worse, so you don't even know that the thing is going on.", "Jeff Dean 01:07:04", "That's actually a bit of a problem with neural nets, is they're so tolerant of noise. You can have things set up kind of wrong in a lot of ways, and they just figure out ways to work around that or learn.", "Noam Shazeer 01:07:15", "You could have bugs in your code. Most of the time that does nothing. Some of the time it makes your model worse. Some of the time it makes your model better. Then you discover something new because you never tried this bug at scale before because you didn't have the budget for it.", "Dwarkesh Patel 01:07:33", "What practically does it look like to debug or decode? You've got these things, some of which are making the model better, some of which are making it worse. When you go into work tomorrow, how do you figure out what the most salient inputs are?", "Noam Shazeer 01:07:50", "At small scale, you do lots of experiments. There's one part of the research that involves, okay, I want to invent these improvements or breakthroughs in isolation. In which case you want a nice simple code base that you can fork and hack, and some baselines.", "My dream is I wake up in the morning, come up with an idea, hack it up in a day, run some experiments, get some initial results in a day. Like okay this looks promising, these things worked, and these things didn't work.", "I think that is very achievable because-", "Jeff Dean 01:08:34", "At small scale.", "Noam Shazeer 01:08:35", "At small scale, as long as you keep a nice experimental code base.", "Jeff Dean 01:08:41", "Maybe an experiment takes an hour to run or two hours, not two weeks.", "Noam Shazeer 01:08:45", "It’s great. So there's that part of the research, and then there's some amount of scaling up. Then you have the part which is integrating, where you want to stack all the improvements on top of each other and see if they work at large scale, and see if they work all in conjunction.", "Jeff Dean 01:09:02", "Right, how do they interact? Right, you think maybe they're independent, but actually maybe there's some funny interaction between improving the way in which we handle video data input and the way in which we update the model parameters. Maybe that interacts more for video data than some other thing.", "There are all kinds of interactions that can happen that you maybe don't anticipate. So you want to run these experiments where you're then putting a bunch of things together and then periodically making sure that all the things you think are good are good together. If not, understanding why they're not playing nicely.", "Dwarkesh Patel 01:09:41", "Two questions. One, how often does it end up being the case that things don't stack up well together? Is it like a rare thing or does it happen all the time?", "Noam Shazeer 01:09:52", "It happens 50% of the time.", "Jeff Dean 01:09:55", "Yeah, I mean, I think most things you don't even try to stack because the initial experiment didn't work that well, or it showed results that aren't that promising relative to the baseline. Then you sort of take those things and you try to scale them up individually.", "Then you're like, \"Oh yeah, these ones seem really promising.\" So I'm going to now include them in something that I'm going to now bundle together and try to advance and combine with other things that seem promising. Then you run the experiments and then you're like, \"Oh, well, they didn't really work that well. Let's try to debug why.\"", "Noam Shazeer 01:10:28", "And then there are trade offs, because you want to keep your integrated system as clean as you can, because complexity –", "Jeff Dean 01:10:38", "Codebase-wise.", "Noam Shazeer 01:10:39", "– yeah codebase and algorithmically. Complexity hurts, complexity makes things slower, introduces more risk.", "And then at the same time you want it to be as good as possible. And of course, every individual researcher wants his inventions to go into it. So there are definitely challenges there, but we've been working together quite well.", "Dwarkesh Patel 01:11:05", "Okay, so then going back to the whole dynamic “you find better and better algorithmic improvements and the models get better and better over time”, even if you take the hardware part out of it. Should the world be thinking more about, and should you guys be thinking more about this?", "There's one world where AI is a thing that takes two decades to slowly get better over time and you can sort of refine things over. If you've kind of messed something up, you fix it, and it's not that big a deal, right? It's like not that much better than the previous version you released.", "There's another world where you have this big feedback loop, which means that the two years between Gemini 4 and Gemini 5 are the most important years in human history. Because you go from a pretty good ML researcher to superhuman intelligence because of this feedback loop. To the extent that you think that the second world is plausible, how does that change how you sort of approach these greater and greater levels of intelligence?", "Noam Shazeer 01:12:14", "I've stopped cleaning my garage because I'm waiting for the robots. So probably I'm more in the second camp of what we're going to see, a lot of acceleration.", "Jeff Dean 01:12:24", "Yeah, I mean, I think it's super important to understand what's going on and what the trends are. And I think right now the trends are the models are getting substantially better generation over generation. I don't see that slowing down in the next few generations probably.", "So that means the models say two to three generations from now are going to be capable of... Let's go back to the example of breaking down a simple task into 10 sub pieces and doing it 80% of the time, to something that can break down a task, a very high level task, into 100 or 1,000 pieces and get that right 90% of the time. That's a major, major step up in what the models are capable of.", "So I think it's important for people to understand what is happening in the progress in the field. And then those models are going to be applied in a bunch of different domains. I think it's really good to make sure that we, as a society, get the maximal benefits from what these models can do to improve things. I'm super excited about areas like education and healthcare, making information accessible to all people.", "But we also realize that they could be used for misinformation, they could be used for automated hacking of computer systems, and we want to put as many safeguards and mitigations and understand the capabilities of the models in place as we can. I think Google as a whole has a really good view to how we should approach this. Our Responsible AI principles actually are a pretty nice framework for how to think about trade offs of making better and better AI systems available in different contexts and settings, while also sort of making sure that we're doing the right thing in terms of making sure they're safe and not saying toxic things and things like that.", "Dwarkesh Patel 01:14:21", "I guess the thing that stands out to me, if you were zooming out and looking at this period of human history, if we're in the world where, look, if you do post-training on Gemini 3 badly, it can do some misinformation – but then you fix the post training. It's a bad mistake, but it's a fixable mistake, right?", "Noam Shazeer 01:14:40", "Right.", "Dwarkesh Patel 01:14:40", "Whereas if you have this feedback loop dynamic, which is a possibility, then the mistake of the thing that catapults this intelligence explosion is misaligned, is not trying to write the code you think it's trying to write, and [instead] optimizing for some other objective.", "And on the other end of this very rapid process that lasts a couple of years, maybe less, you have things that are approaching Jeff Dean or beyond level, or Noam Shazeer or beyond level. And then you have millions of copies of Jeff Dean level programmers, and- anyways, that seems like a harder to recover mistake.", "Noam Shazeer 01:15:29", "As these systems do get more powerful, you have to be more and more careful.", "Jeff Dean 01:15:37", "One thing I would say is, there are extreme views on either end. There's, \"Oh my goodness, these systems are going to be so much better than humans at all things, and we're going to be kind of overwhelmed.\" And then there's, \"These systems are going to be amazing, and we don't have to worry about them at all.\"", "I think I'm somewhere in the middle. I've been a co-author on a paper called \" Shaping AI ,\" which is, you know, those two extreme views often kind of view our role as kind of laissez-faire, like we're just going to have the AI develop in the path that it takes.", "And I think there's actually a really good argument to be made that what we're going to do is try to shape and steer the way in which AI is deployed in the world so that it is, you know, maximally beneficial in the areas that we want to capture and benefit from, in education, some of the areas I mentioned, healthcare.", "And steer it as much as we can away- maybe with policy-related things, maybe with technical measures and safeguards- away from, you know, the computer will take over and have unlimited control of what it can do. So I think that's an engineering problem: how do you engineer safe systems?", "I think it's kind of the modern equivalent of what we've done in older-style software development. Like if you look at, you know, airplane software development, that has a pretty good record of how do you rigorously develop safe and secure systems for doing a pretty risky task?", "Dwarkesh Patel 01:17:18", "The difficulty there is that there's not some feedback loop where the 737, you put it in a box with a bunch of compute for a couple of years, and it comes out with the version 1000.", "Noam Shazeer 01:17:27", "I think the good news is that analyzing text seems to be easier than generating text. So I believe that the ability of language models to actually analyze language model output and figure out what is problematic or dangerous will actually be the solution to a lot of these control issues.", "We are definitely working on this stuff. We've got a bunch of brilliant folks at Google working on this now. And I think it's just going to be more and more important, both from a “do something good for people” standpoint, but also from a business standpoint, that you are, a lot of the time, limited in what you can deploy based on keeping things safe.", "And so it becomes very, very important to be really, really good at that.", "Dwarkesh Patel 01:18:48", "Yeah, obviously, I know you guys take the potential benefits and costs here seriously, and it's truly remarkable. I know you guys get credit for it, but not enough. I think there's just, there are so many different applications that you have put out for using these models to make the different areas you talked about better.", "Um, but I do think that… again, if you have a situation where plausibly there's some feedback loop process, on the other end, you have a model that is as good as Noam Shazeer, as good as Jeff Dean.", "If there's an evil version of you running around, and suppose there's a million of them, I think that's really, really bad. That could be much, much worse than any other risk, maybe short of nuclear war or something. Just think about it, like a million evil Jeff Deans or something.", "Jeff Dean 01:19:47", "Where do we get the training data?", "Dwarkesh Patel 01:20:20", "But, to the extent that you think that's a plausible output of some quick feedback loop process, what is your plan of okay, we've got Gemini 3 or Gemini 4, and we think it's helping us do a better job of training future versions, it's writing a bunch of the training code for us. From this point forward, we just kind of look over it, verify it.", "Even the verifiers you talked about of looking at the output of these models will eventually be trained by, or a lot of the code will be written by the AIs you make. What do you want to know for sure before we have the Gemini 4 help us with the AI research? We really want to make sure, we want to run this test on it before we let it write our AI code for us.", "Jeff Dean 01:21:34", "I mean, I think having the system explore algorithmic research ideas seems like something where there's still a human in charge of that. Like, it's exploring the space, and then it's going to, like, get a bunch of results, and we're going to make a decision, like, are we going to incorporate this particular, you know, learning algorithm or change to the system into kind of the core code base?", "And so I think you can put in safeguards like that that enable us to get the benefits of the system that can sort of improve or kind of self-improve with human oversight, uh, without necessarily letting the system go full-on self-improving without any any notion of a person looking at what it's doing, right? That's the kind of engineering safeguards I'm talking about, where you want to be kind of looking at the characteristics of the systems you're deploying, not deploy ones that are harmful by some measures and some ways, and you have an understanding of what its capabilities are and what it's likely to do in certain scenarios. So, you know, I think it's not an easy problem by any means, but I do think it is possible to make these these systems safe.", "Noam Shazeer 01:36:56", "Yeah. I mean, I think we are also going to use these systems a lot to check themselves, check other systems. Even as a human, it is easier to recognize something than to generate it.", "Jeff Dean 01:37:14", "One thing I would say is if you expose the model's capabilities through an API or through a user interface that people interact with, I think then you have a level of control to understand how is it being used and put some boundaries on what it can do. And that I think is one of the tools in the arsenal of how do you make sure that what it's going to do is sort of acceptable by some set of standards you've set out in your mind?", "Noam Shazeer 01:37:44", "Yeah. I mean, I think the goal is to to empower people, but for the most part we should be mostly letting people do things with these systems that make sense and closing off as few parts of the space as we can. But yeah, if you let somebody take your thing and create a million evil software engineers, then that doesn't empower people because they're going to hurt others with a million evil software engineers.", "So I'm against that.", "Jeff Dean 01:38:14", "Me too. I'll go on.", "Dwarkesh Patel 01:38:16", "All right, let's talk about a few more fun topics. Make it a little lighter. Over the last 25 years, what was the most fun time? What period of time do you have the most nostalgia over?", "Jeff Dean 01:38:30", "At work, you mean?", "Noam Shazeer 01:38:31", "Yeah. At work. Yeah.", "Jeff Dean 01:38:32", "I think the early sort of four or five years at Google when I was one of a handful of people working on search and crawling and indexing systems, our traffic was growing tremendously fast. We were trying to expand our index size and make it so we updated it every minute instead of every month, or two months if something went wrong.", "Seeing the growth in usage of our systems was really just personally satisfying. Building something that is used by two billion people a day is pretty incredible.", "But I would also say equally exciting is working with people on the Gemini team today. I think the progress we've been making in what these models can do over the last year and a half is really fun. People are really dedicated, really excited about what we're doing.", "I think the models are getting better and better at pretty complex tasks. Like if you showed someone using a computer 20 years ago what these models are capable of, they wouldn't believe it. And even five years ago, they might not believe it. And that's pretty satisfying.", "I think we'll see a similar growth in usage of these models and impact in the world.", "Noam Shazeer 01:39:48", "Yeah, I'm with you. Early days were super fun. Part of that is just knowing everybody and the social aspect, and the fact that you're just building something that millions and millions of people are using.", "Same thing today. We got that whole nice micro kitchen area where you get lots of people hanging out. I love being in person, working with a bunch of great people, and building something that's helping millions to billions of people. What could be better?", "Dwarkesh Patel 01:40:21", "What was this micro kitchen?", "Jeff Dean 01:40:23", "Oh, we have a micro kitchen area in the building we both sit in. It's the new, so-named Gradient Canopy. It used to be named Charleston East, and we decided we needed a more exciting name because it's a lot of machine learning researchers and AI research happening in there.", "There's a micro kitchen area that we've set up with, normally it's just like an espresso machine and a bunch of snacks, but this particular one has a bunch of space in it. So we've set up maybe 50 desks in there, and so people are just hanging out in there. It's a little noisy because people are always grinding beans and brewing espresso, but you also get a lot of face-to-face ideas of connections, like, \"Oh, I've tried that. Did you think about trying this in your idea?\" Or, \"Oh, we're going to launch this thing next week. How's the load test looking?\" There's just lots of feedback that happens.", "And then we have our Gemini chat room for people who are not in that micro kitchen. We have a team all over the world, and there's probably 120 chat rooms I'm in related to Gemini things. In this particular very focused topic, we have seven people working on this, and there are exciting results being shared by the London colleagues.", "When you wake up, you see what's happening in there, or it's a big group of people focused on data, and there are all kinds of issues happening in there. It's just fun.", "Dwarkesh Patel 01:41:50", "What I find remarkable about some of the calls you guys have made is you're anticipating a level of demand for compute, which at the time wasn't obvious or evident. TPUs being a famous example of this, or the first TPU being an example of this.", "That thinking you had in, I guess, 2013 or earlier, if you think about it that way today and you do an estimate of, look, we're going to have these models that are going to be a backbone of our services, and we're going to be doing constant inference for them. We're going to be training future versions. And you think about the amount of compute we'll need by 2030 to accommodate all these use cases, where does the Fermi estimate get you?", "Jeff Dean 01:42:30", "Yeah, I mean, I think you're going to want a lot of inference. Compute is the rough, highest-level view of these capable models because if one of the techniques for improving their quality is scaling up the amount of inference compute you use, then all of a sudden what's currently like one request to generate some tokens now becomes 50 or 100 or 1000 times as computationally intensive, even though it's producing the same amount of output.", "And you're also going to then see tremendous scaling up of the uses of these services as not everyone in the world has discovered these chat-based conversational interfaces where you can get them to do all kinds of amazing things. Probably 10% of the computer users in the world have discovered that today, or 20%. As that pushes towards 100% and people make heavier use of it, that's going to be another order of magnitude or two of scaling.", "And so you're now going to have two orders of magnitude from that, two orders of magnitude from that. The models are probably going to be bigger, you'll get another order of magnitude or two from that. And there's a lot of inference compute you want. So you want extremely efficient hardware for inference for models you care about.", "Dwarkesh Patel 01:43:52", "In flops, global total global inference in 2030?", "Noam Shazeer 01:43:58", "I think just more is always going to be better. If you just kind of think about, okay, what fraction of world GDP will people decide to spend on AI at that point? And then, like, okay, what do the AI systems look like?", "Well, maybe it's some sort of personal assistant-like thing that is in your glasses and can see everything around you and has access to all your digital information and the world's digital information. And maybe it's like you're Joe Biden, and you have the earpiece in the cabinet that can advise you about anything in real-time and solve problems for you and give you helpful pointers. Or you could talk to it, and it wants to analyze anything that it sees around you for any potential useful impact that it has on you.", "So I mean, I can imagine, okay, and then say it's like your personal assistant or your personal cabinet or something, and that every time you spend 2x as much money on compute, the thing gets like 5, 10 IQ points smarter or something like that. And, okay, would you rather spend $10 a day and have an assistant or $20 a day and have a smarter assistant? And not only is it an assistant in life but an assistant in getting your job done better because now it makes you from a 10x engineer to a 100x or 10 millionx engineer?", "Okay, so let's see: from first principles, right? So people are going to want to spend some fraction of world GDP on this thing. The world GDP is almost certainly going to go way, way up, two orders of magnitude higher than it is today, due to the fact that we have all of these artificial engineers working on improving things.", "Probably we'll have solved unlimited energy and carbon issues by that point. So we should be able to have lots of energy. We should be able to have millions to billions of robots building us data centers. Let's see, the sun is what, 10 to the 26 watts or something like that?", "I'm guessing that the amount of compute being used for AI to help each person will be astronomical.", "Jeff Dean 01:46:48", "I would add on to that. I'm not sure I agree completely, but it's a pretty interesting thought experiment to go in that direction. And even if you get partway there, it's definitely going to be a lot of compute.", "And this is why it's super important to have as cheap a hardware platform for using these models and applying them to problems that Noam described, so that you can then make it accessible to everyone in some form and have as low a cost for access to these capabilities as you possibly can.", "And I think that's achievable by focusing on hardware and model co-design kinds of things, we should be able to make these things much, much more efficient than they are today.", "Dwarkesh Patel 01:47:36", "Is Google's data center build-out plan over the next few years aggressive enough given this increase in demand you're expecting?", "Jeff Dean 01:47:46", "I'm not going to comment on our future capital spending because our CEO and CFO would prefer I probably not. But I will say, you can look at our past capital expenditures over the last few years and see that we're definitely investing in this area because we think it's important.", "We are continuing to build new and interesting, innovative hardware that we think really helps us have an edge in deploying these systems to more and more people, both training them and also, how do we make them usable by people for inference?", "Dwarkesh Patel 01:48:21", "One thing I've heard you talk a lot about is continual learning, the idea that you could just have a model which improves over time rather than having to start from scratch. Is there any fundamental impediment to that? Because theoretically, you should just be able to keep fine-tuning a model. What does that future look like to you?", "Jeff Dean 01:48:40", "Yeah, I've been thinking about this more and more. I've been a big fan of models that are sparse because I think you want different parts of the model to be good at different things. We have our Gemini 1.5 Pro model, and other models are mixture-of-experts style models where you now have parts of the model that are activated for some token and parts that are not activated at all because you've decided this is a math-oriented thing, and this part's good at math, and this part's good at understanding cat images. So, that gives you this ability to have a much more capable model that's still quite efficient at inference time because it has very large capacity, but you activate a small part of it.", "But I think the current problem, well, one limitation of what we're doing today is it's still a very regular structure where each of the experts is the same size. The paths merge back together very fast. They don't go off and have lots of different branches for mathy things that don't merge back together with the kind of cat-image thing.", "I think we should probably have a more organic structure in these things. I also would like it if the pieces of those model of the model could be developed a little bit independently. Like right now, I think we have this issue where we're going to train a model. So, we do a bunch of preparation work on deciding the most awesome algorithms we can come up with and the most awesome data mix we can come up with.", "But there's always trade-offs there, like we'd love to include more multilingual data, but that might come at the expense of including less coding data, and so, the model's less good at coding but better at multilingual, or vice versa. I think it would be really great if we could have a small set of people who care about a particular subset of languages go off and create really good training data, train a modular piece of a model that we can then hook up to a larger model that improves its capability in, say, Southeast Asian languages or in reasoning about Haskell code or something.", "Then, you also have a nice software engineering benefit where you've decomposed the problem a bit compared to what we do today, which is we have this kind of a whole bunch of people working. But then, we have this kind of monolithic process of starting to do pre-training on this model.", "If we could do that, you could have 100 teams around Google. You could have people all around the world working to improve languages they care about or particular problems they care about and all collectively work on improving the model. And that's kind of a form of continual learning.", "Noam Shazeer 01:51:27", "That would be so nice. You could just glue models together or rip out pieces of models and shove them into other...", "Jeff Dean 01:51:33", "Upgrade this piece without throwing out the thing...", "Noam Shazeer 01:51:36", "...or you just attach a fire hose, and you suck all the information out of this model, shove it into another model. There is, I mean, the countervailing interest there is sort of science, in terms of, okay, we're still in the period of rapid progress, so, if you want to do sort of controlled experiments, and okay, I want to compare this thing to that thing because that then is helping us figure out what to build. In that interest, it's often best to just start from scratch so you can compare one complete training run to another complete training run at the practical level because it helps us figure out what to build in the future. It's less exciting but does lead to rapid progress.", "Jeff Dean 01:52:32", "Yeah, I think there may be ways to get a lot of the benefits of that with a version system of modularity. I have a frozen version of my model, and then I include a different variant of some particular module, and I want to compare its performance or train it a bit more. Then, I compare it to the baseline of this thing with now version N prime of this particular module that does Haskell interpretation.", "Noam Shazeer 01:52:58", "Actually, that could lead to faster research progress, right? You've got some system, and you do something to improve it. And if that thing you're doing to improve it is relatively cheap compared to training the system from scratch, then it could actually make research much, much cheaper and faster.", "Jeff Dean 01:53:16", "Yeah, and also more parallelizable, I think, across people.", "Noam Shazeer 01:53:24", "Okay, let's figure it out and do that next.", "Dwarkesh Patel 01:53:29", "So, this idea that is sort of casually laid out there would actually be a big regime shift compared to how things are done today. If you think the way things are headed, this is a sort of very interesting prediction about... You just have this blob where things are getting pipelined back and forth – and if you want to make something better, you can do like a sort of surgical incision almost.", "Jeff Dean 01:53:55", "Right, or grow the model, add another little bit of it here. Yeah, I've been sort of sketching out this vision for a while in Pathways ...", "Noam Shazeer 01:54:04", "Yeah, you've been building the...", "Jeff Dean 01:54:05", "...and we've been building the infrastructure for it. So, a lot of what Pathways, the system, can support is this kind of twisty, weird model with asynchronous updates to different pieces. And we're using Pathways to train our Gemini models, but we're not making use of some of its capabilities yet. But maybe we should.", "Noam Shazeer 01:54:24", "Ooh maybe.", "Dwarkesh Patel 01:54:27", "This is so interesting, and I don't want to lose this thread, but give me one moment.", "Noam Shazeer 01:54:33", "There have been times, like the way the TPU pods were set up. I don't know who did that, but they did a pretty brilliant job. The low-level software stack and the hardware stack, okay, you've got your nice regular high-performance hardware, you've got these great torus-shaped interconnects, and then you've got the right low-level collectives, the all-reduces, et cetera, which I guess came from supercomputing, but it turned out to be kind of just the right thing to build distributed deep learning on top of.", "Dwarkesh Patel 01:39:15", "Okay, so a couple of questions. One, suppose Noam makes another breakthrough, and now we've got a better architecture. Would you just take each compartment and distill it into this better architecture? And that's how it keeps improving over time?", "Jeff Dean 01:39:33", "I do think distillation is a really useful tool because it enables you to transform a model in its current model architecture form into a different form. Often, you use it to take a really capable but large and unwieldy model and distill it into a smaller one that maybe you want to serve with really good, fast latency inference characteristics.", "But I think you can also view this as something that's happening at the module level. Maybe there'd be a continual process where you have each module, and it has a few different representations of itself. It has a really big one. It's got a much smaller one that is continually distilling into the small version.", "And then the small version, once that's finished, you sort of delete the big one and you add a bunch more parameter capacity. Now, start to learn all the things that the distilled small one doesn't know by training it on more data, and then you kind of repeat that process. If you have that kind of running a thousand different places in your modular model in the background, that seems like it would work reasonably well.", "Dwarkesh Patel 01:40:42", "This could be a way of doing inference scaling, like the router decides how much do you want the big one.", "Jeff Dean 01:40:47", "Yeah, you can have multiple versions. Oh, this is an easy math problem, so I'm going to route it to the really tiny math distilled thing. Oh, this one's really hard, so...", "Dwarkesh Patel 01:40:56", "One, at least from public research, it seems like it's often hard to decode what each expert is doing in mixture of expert type models. If you have something like this, how would you enforce the kind of modularity that would be visible and understandable to us?", "Noam Shazeer 01:41:13", "Actually, in the past, I found experts to be relatively easy to understand. I mean, the first Mixture of Experts paper , you could just look at the experts.", "Dwarkesh Patel 01:41:24", "“I don’t know, I'm only the inventor of Mixture of Experts.”", "Noam Shazeer 01:57:25", "Yeah – oh, the what?", "Jeff Dean 01:41:28", "Yeah, yeah.", "Noam Shazeer 01:41:30", "Yeah, you could just see, okay, this expert, like we did, you know, a thousand, two thousand experts. Okay, and this expert, was getting words referring to cylindrical objects.", "Jeff Dean 01:41:42", "This one's super good at dates.", "Noam Shazeer 01:41:44", "Yeah.", "Jeff Dean 01:41:45", "Talking about times.", "Noam Shazeer 01:41:46", "Yeah, pretty easy to do.", "Not that you would need that human understanding to figure out how to work the thing at runtime because you just have some sort of learned router that's looking at the example.", "Jeff Dean 01:42:04", "One thing I would say is there is a bunch of work on interpretability of models and what are they doing inside. Sort of expert-level interpretability is a sub-problem of that broader area. I really like some of the work that my former intern, Chris Olah , and others did at Anthropic, where they trained a very sparse autoencoder and were able to deduce what characteristics some particular neuron in a large language model has, so they found a Golden Gate Bridge neuron that's activated when you're talking about the Golden Gate Bridge. And I think you could do that at the expert level, you could do that at a variety of different levels and get pretty interpretable results, and it's a little unclear if you necessarily need that. If the model is just really good at stuff, we don't necessarily care what every neuron in the Gemini model is doing, as long as the collective output and characteristics of the overall system are good. That's one of the beauties of deep learning, is you don't need to understand or hand-engineer every last feature.", "Dwarkesh Patel 01:43:13", "Man, there are so many interesting implications of this that I could just keep asking you about this- I would regret not asking you more about this, so I'll keep going. One implication is, currently, if you have a model that has some tens or hundreds of billions of parameters, you can serve it on a handful of GPUs.", "In this system, where any one query might only make its way through a small fraction of the total parameters, but you need the whole thing loaded into memory, the specific kind of infrastructure that Google has invested in with these TPUs that exist in pods of hundreds or thousands would be immensely valuable, right?", "Noam Shazeer 01:44:02", "For any sort of even existing mixtures of experts, you want the whole thing in memory. I guess there's kind of this misconception running around with Mixture of Experts that, okay, the benefit is that you don't even have to go through those weights in the model.", "If some expert is unused, it doesn't mean that you don't have to retrieve that memory because, really, in order to be efficient, you're serving at very large batch sizes.", "Jeff Dean 01:44:36", "Of independent requests.", "Noam Shazeer 01:44:38", "Right, of independent requests. So it's not really the case that, okay, at this step, you're either looking at this expert or you're not looking at this expert.", "Because if that were the case, then when you did look at the expert, you would be running it at batch size one, which is massively inefficient. Like you've got modern hardware, the operational intensities are whatever, hundreds. So that's not what's happening. It's that you are looking at all the experts, but you only have to send a small fraction of the batch through each one.", "Jeff Dean 01:45:17", "Right, but you still have a smaller batch at each expert that then goes through. And in order to get kind of reasonable balance, one of the things that the current models typically do is they have all the experts be roughly the same compute cost, and then you run roughly the same size batches through them in order to propagate the very large batch you're doing at inference time and have good efficiency.", "But I think you often in the future might want experts that vary in computational cost by factors of 100 or 1000. Or maybe paths that go for many layers on one case, and a single layer or even a skip connection in the other case. And there, I think you're going to want very large batches still, but you're going to want to push things through the model a little bit asynchronously at inference time, which is a little easier than training time.", "That's part of one of the things that Pathways was designed to support. You have these components, and the components can be variable cost and you kind of can say, for this particular example, I want to go through this subset of the model, and for this example, I want to go through this subset of the model and have the system kind of orchestrate that.", "Dwarkesh Patel 01:46:39", "It also would mean that it would take companies of a certain size and sophistication to be able to... Right now, anybody can train a sufficiently small enough model. But if it ends up being the case that this is the best way to train future models, then you would need a company that can basically have a data center serving a single quote, unquote “blob” or model. So it would be an interesting change in paradigms in that way as well.", "Noam Shazeer 01:47:10", "You definitely want to have at least enough HBM to put your whole model. So depending on the size of your model, most likely that's how much HBM you'd want to have at a minimum.", "Jeff Dean 01:47:28", "It also means you don't necessarily need to grow your entire model footprint to be the size of a data center. You might want it to be a bit below that.", "And then have potentially many replicated copies of one particular expert that is being used a lot, so that you get better load balancing. This one's being used a lot because we get a lot of math questions, and this one is an expert on Tahitian dance, and it is called on really rarely.", "That one, maybe you even page out to DRAM rather than putting it in HBM. But you want the system to figure all this stuff out based on load characteristics.", "Dwarkesh Patel 01:48:09", "Right. Now, language models, obviously, you put in language, you get language out. Obviously, it's multimodal.", "But the Pathways blog post talks about so many different use cases that are not obviously of this kind of auto-regressive nature going through the same model. Could you imagine, basically, Google as a company, the product is like Google Search goes through this, Google Images goes through this, Gmail goes through it?", "Just like the entire server is just this huge mixture of experts, specialized?", "Jeff Dean 01:48:45", "You're starting to see some of this by having a lot of uses of Gemini models across Google that are not necessarily fine-tuned. They're just given instructions for this particular use case in this feature in this product setting.", "So, I definitely see a lot more sharing of what the underlying models are capable of across more and more services. I do think that's a pretty interesting direction to go, for sure.", "Dwarkesh Patel 01:49:14", "Yeah, I feel like people listening might not register how interesting a prediction this is about where AI is going. It's like sort of getting Noam on a podcast in 2018 and being like, \"Yeah, so I think language models will be a thing.\"", "It's like, if this is where things go, this is actually incredibly interesting.", "Jeff Dean 01:49:36", "Yeah, and I think you might see that might be a big base model. And then you might want customized versions of that model with different modules that are added onto it for different settings that maybe have access restrictions.", "Maybe we have an internal one for Google use, for Google employees, that we've trained some modules on internal data, and we don't allow anyone else to use those modules, but we can make use of it. Maybe other companies, you add on other modules that are useful for that company setting and serve it in our cloud APIs.", "Dwarkesh Patel 01:50:09", "What is the bottleneck to making this sort of system viable? Is it systems engineering? Is it ML?", "Jeff Dean 01:50:17", "It's a pretty different way of operating than our current Gemini development. So, I think we will explore these kinds of areas and make some progress on them.", "But we need to really see evidence that it's the right way, that it has a lot of benefits. Some of those benefits may be improved quality, some may be less concretely measurable, like this ability to have lots of parallel development of different modules. But that's still a pretty exciting improvement because I think that would enable us to make faster progress on improving the model's capabilities for lots of different distinct areas.", "Noam Shazeer 01:51:00", "Even the data control modularity stuff seems really cool because then you could have the piece of the model that's just trained for me. It knows all my private data.", "Jeff Dean 01:51:09", "Like a personal module for you would be useful. Another thing might be you can use certain data in some settings but not in other settings.", "Maybe we have some YouTube data that's only usable in a YouTube product surface but not in other settings. So, we could have a module that is trained on that data for that particular purpose.", "Dwarkesh Patel 01:51:29", "Yeah.", "Noam Shazeer 01:51:32", "We're going to need a million automated researchers to invent all of this stuff.", "Jeff Dean 01:51:39", "It's going to be great.", "Dwarkesh Patel 01:51:41", "Yeah, well the thing itself, you build the blob, and it tells you how to make the blob better.", "Jeff Dean 01:51:47", "Blob 2.0. Or maybe they're not even versions, it's just like an incrementally growing blob.", "Dwarkesh Patel 01:51:56", "Yeah, that's super fascinating. Okay, Jeff, motivate for me, big picture: why is this a good idea? Why is this the next direction?", "Jeff Dean 01:52:06", "Yeah, this notion of an organic, not quite so carefully mathematically constructed machine learning model is one that's been with me for a little while. I feel like in the development of neural nets, the artificial neurons, inspiration from biological neurons is a good one and has served us well in the deep learning field.", "We've been able to make a lot of progress with that. But I feel like we're not necessarily looking at other things that real brains do as much as we perhaps could, and that's not to say we should exactly mimic that because silicon and wetware have very different characteristics and strengths. But I do think one thing we could draw more inspiration from is this notion of having different specialized portions, sort of areas of a model of a brain that are good at different things.", "We have a little bit of that in Mixture of Experts models, but it's still very structured. I feel like this kind of more organic growth of expertise, and when you want more expertise of that, you add some more capacity to the model there and let it learn a bit more on that kind of thing.", "Also this notion of adapting the connectivity of the model to the connectivity of the hardware is a good one. I think you want incredibly dense connections between artificial neurons in the same chip and the same HBM because that doesn't cost you that much. But then you want a smaller number of connections to nearby neurons. So, like a chip away, you should have some amount of connections and then, like many, many chips away, you should have a smaller number of connections where you send over a very limited kind of bottlenecky thing: the most important things that this part of the model is learning for other parts of the model to make use of. And even across multiple TPU pods, you'd like to send even less information but the most salient kind of representations. And then across metro areas, you'd like to send even less.", "Dwarkesh Patel 01:54:23", "Yeah, and then that emerges organically.", "Jeff Dean 01:54:26", "Yeah, I'd like that to emerge organically. You could hand-specify these characteristics, but I think you don't know exactly what the right proportions of these kinds of connections are so you should just let the hardware dictate things a little bit. Like if you're communicating over here and this data always shows up really early, you should add some more connections, then it'll take longer and show up at just the right time.", "Dwarkesh Patel 01:54:48", "Oh here's another interesting implication: Right now, we think about the growth in AI use as a sort of horizontal- so, suppose you're like, how many AI engineers will Google have working for it? You think about how many instances of Gemini 3 will be working at one time.", "If you have this, whatever you want to call it, this blob, and it can sort of organically decide how much of itself to activate, then it's more of, if you want 10 engineers worth of output, it just activates a different pattern or a larger pattern. If you want 100 engineers of output, it's not like calling more agents or more instances, it's just calling different sub-patterns.", "Jeff Dean 01:55:34", "I think there's a notion of how much compute do you want to spend on this particular inference, and that should vary by factors of 10,000 for really easy things and really hard things, maybe even a million. It might be iterative, you might make a pass through the model, get some stuff, and then decide you now need to call on some other parts of the model.", "The other thing I would say is this sounds super complicated to deploy because it's this weird, constantly evolving thing with maybe not super optimized ways of communicating between pieces, but you can always distill from that. If you say, \"This is the kind of task I really care about, let me distill from this giant kind of organic thing into something that I know can be served really efficiently,\" you could do that distillation process whenever you want, once a day, once an hour. That seems like it'd be kind of good.", "Noam Shazeer 01:56:32", "Yeah, we need better distillation.", "Jeff Dean 01:56:34", "Yeah.", "Noam Shazeer 01:56:34", "Anyone out there who invents amazing distillation techniques that instantly distill from a giant blob onto your phone, that would be wonderful.", "Dwarkesh Patel 01:56:43", "How would you characterize what's missing from current distillation techniques?", "Noam Shazeer 01:56:46", "Well, I just want it to work faster.", "Jeff Dean 01:56:49", "A related thing is I feel like we need interesting learning techniques during pre-training. I'm not sure we're extracting the maximal value from every token we look at with the current training objective. Maybe we should think a lot harder about some tokens.", "When you get to \"the answer is,\" maybe the model should, at training time, do a lot more work than when it gets to \"the\".", "Noam Shazeer 01:57:16", "Right. There's got to be some way to get more from the same data, make it learn it forwards and backwards.", "Jeff Dean 01:57:24", "And every which way. Hide some stuff this way, hide some stuff that way, make it infer from partial information. I think people have been doing this in vision models for a while. You distort the model or you hide parts of it and try to make it guess the bird from half, like that it's a bird from this upper corner of the image or the lower left corner of the image.", "That makes the task harder, and I feel like there's an analog for more textual or coding-related data where you want to force the model to work harder. You'll get more interesting observations from it.", "Noam Shazeer 01:58:03", "Yeah, the image people didn't have enough labeled data so they had to invent all this stuff.", "Jeff Dean 01:58:08", "And they invented -- I mean, dropout was invented on images, but we're not really using it for text mostly. That's one way you could get a lot more learning in a more large-scale model without overfitting is just make like 100 epochs over the world's text data and use dropout.", "But that's pretty computationally expensive, but it does mean we won't run it. Even though people are saying, \"Oh no, we're almost out of textual data,\" I don't really believe that because I think we can get a lot more capable models out of the text data that does exist.", "Noam Shazeer 01:58:44", "I mean, a person has seen a billion tokens.", "Jeff Dean 01:58:47", "Yeah, and they're pretty good at a lot of stuff.", "Dwarkesh Patel 01:58:54", "So obviously human data efficiency sets a lower bound on how, or I guess, upper bound, one of them, maybe not.", "Jeff Dean 01:59:04", "It's an interesting data point.", "Dwarkesh Patel 01:59:05", "Yes. So there's a sort of modus ponens, modus tollens thing here. One way to look at it is, look, LLMs have so much further to go, therefore we project orders of magnitude improvement in sample efficiency just if they could match humans. Another is, maybe they're doing something clearly different given the orders of magnitude difference. What's your intuition of what it would take to make these models as sample efficient as humans are?", "Jeff Dean 01:59:33", "Yeah, I think we should consider changing the training objective a little bit. Just predicting the next token from the previous ones you've seen seems like not how people learn. It's a little bit related to how people learn, I think, but not entirely. A person might read a whole chapter of a book and then try to answer questions at the back, and that's a different kind of thing.", "I also think we're not learning from visual data very much. We're training a little bit on video data, but we're definitely not anywhere close to thinking about training on all the visual inputs you could get. So you have visual data that we haven't really begun to train on.", "Then I think we could extract a lot more information from every bit of data we do see. I think one of the ways people are so sample efficient is they explore the world and take actions in the world and observe what happens. You see it with very small infants picking things up and dropping them; they learn about gravity from that. And that's a much harder thing to learn when you're not initiating the action.", "I think having a model that can take actions as part of its learning process would be just a lot better than just sort of passively observing a giant dataset.", "Dwarkesh Patel 02:00:50", "Is Gato the future, then?", "Jeff Dean 02:00:53", "Something where the model can observe and take actions and observe the corresponding results seems pretty useful.", "Noam Shazeer 02:01:04", "I mean, people can learn a lot from thought experiments that don't even involve extra input. Einstein learned a lot of stuff from thought experiments, or like Newton went into quarantine and got an apple dropped on his head or something and invented gravity. And like mathematicians -- math didn't have any extra input.", "Chess, okay, you have the thing play chess against itself and it gets good at chess. That was DeepMind, but also all it needs is the rules of chess. So there's actually probably a lot of learning that you can do even without external data, and then you can make it in exactly the fields that you care about. Of course, there is learning that will require external data, but maybe we can just have this thing talk to itself and make itself smarter.", "Dwarkesh Patel 02:02:03", "So here's the question I have. What you've just laid out over the last hour is potentially just like the big next paradigm shift in AI. That's a tremendously valuable insight, potentially. Noam, in 2017 you released the Transformer paper on which tens, if not hundreds, of billions of dollars of market value is based in other companies, not to mention all this other research that Google has released over time, which you've been relatively generous with.", "In retrospect, when you think about divulging this information that has been helpful to your competitors, in retrospect is it like, \"Yeah, we'd still do it,\" or would you be like, \"Ah, we didn't realize how big a deal Transformer was. We should have kept it indoors.\" How do you think about that?", "Noam Shazeer 02:02:51", "It's a good question because I think probably we did need to see the size of the opportunity, often reflected in what other companies are doing. And also it's not a fixed pie. The current state of the world is pretty much as far from fixed pie as you can get.", "I think we're going to see orders of magnitude of improvements in GDP, health, wealth, and anything else you can think of. So I think it's definitely been nice that Transformer has got around.", "Jeff Dean 02:03:39", "It’s transformative.", "Noam Shazeer 02:03:51", "Woo. Thank God Google's doing well as well. So these days we do publish a little less of what we're doing.", "Jeff Dean 02:03:54", "There's always this trade-off: should we publish exactly what we're doing right away? Should we put it in the next stages of research and then roll it out into production Gemini models and not publish it at all? Or is there some intermediate point?", "And for example, in our computational photography work in Pixel cameras, we've often taken the decision to develop interesting new techniques, like the ability to do super good night sight vision for low-light situations or whatever, put that into the product and then published a real research paper about the system that does that after the product is released.", "Different techniques and developments have different treatments. Some things we think are super critical we might not publish. Some things we think are really interesting but important for improving our products; we'll get them out into our products and then make a decision: did we publish this or do we give kind of a lightweight discussion of it, but maybe not every last detail?", "Other things I think we publish openly and try to advance the field and the community because that's how we all benefit from participating. I think it's great to go to conferences like NeurIPS last week with 15,000 people all sharing lots and lots of great ideas. We publish a lot of papers there as we have in the past, and see the field advance is super exciting.", "Dwarkesh Patel 02:05:29", "How would you account for... so obviously Google had all these insights internally rather early on, including the top researchers. And now Gemini 2 is out. We didn't get a chance much to talk about it, but people know it's a really great model.", "Jeff Dean 02:05:53", "Such a good model. As we say around the micro-kitchen, “such a good model, such a good model”.", "Dwarkesh Patel 02:05:57", "So it's top in LMSYS Chatbot Arena. And so now Google's on top. But how would you account for basically coming up with all the great insights for a couple of years? Other competitors had models that were better for a while despite that.", "Jeff Dean 02:06:16", "We've been working on language models for a long time. Noam's early work on spelling correction in 2001, the work on translation, very large-scale language models in 2007, and seq2seq and word2vec and more recent Transformers and then BERT.", "Things like the internal Meena system that was actually a chatbot-based system designed to kind of engage people in interesting conversations. We actually had an internal chatbot system that Googlers could play with even before ChatGPT came out. And actually, during the pandemic, a lot of Googlers would enjoy spending, you know, everyone was locked down at home, and so they enjoyed spending time chatting with Meena during lunch because it was like a nice, you know, lunch partner.", "I think one of the things we were a little, our view of things from a search perspective was these models hallucinate a lot, they don't get things right a lot of the time- or some of the time- and that means that they aren't as useful as they could be and so we’d like to make that better. From a search perspective, you want to get the right answer 100% of the time, ideally and be very high on factuality. These models were not near that bar.", "I think what we were a little unsure about is that they were incredibly useful. Oh and they also had all kinds of safety issues, like they might say offensive things and we had to work on that aspect and get that to a point where we were comfortable releasing the model. But I think what we didn’t quite appreciate was how useful they could be for things you wouldn't ask a search engine, right? Like, help me write a note to my veterinarian, or like, can you take this text and give me a quick summary of it? I think that's the kind of thing we've seen people really flock to in terms of using chatbots as amazing new capabilities rather than as a pure search engine.", "So I think we took our time and got to the point where we actually released quite capable chatbots and have been improving them through Gemini models quite a bit. I think that's actually not a bad path to have taken. Would we like to have released the chatbot earlier? Maybe. But I think we have a pretty awesome chatbot with awesome Gemini models that are getting better all the time. And that's pretty cool.", "Dwarkesh Patel 02:08:54", "Okay, final question. So we've discussed some of the things you guys have worked on over the last 25 years, and there are so many different fields, right? You start off with search and indexing to distributed systems, to hardware, to AI algorithms. And genuinely, there are a thousand more, just go on either of their Google Scholar pages or something. What is the trick to having this level of, not only career longevity where you're having many decades of making breakthroughs, but also the breadth of different fields, both of you, in either order, what’s the trick to career longevity and breadth?", "Jeff Dean 02:09:46", "One thing that I like to do is to find out about a new and interesting area, and one of the best ways to do that is to pay attention to what's going on, talk to colleagues, pay attention to research papers that are being published, and look at the kind of research landscape as it's evolving.", "Be willing to say, \"Oh, chip design. I wonder if we could use reinforcement learning for some aspect of that.\" Be able to dive into a new area, work with people who know a lot about a different domain or AI for healthcare or something. I've done a bit of working with clinicians about what are the real problems, how could AI help? It wouldn't be that useful for this thing, but it would be super useful for this.", "Getting those insights, and often working with a set of five or six colleagues who have different expertise than you do. It enables you to collectively do something that none of you could do individually. Then some of their expertise rubs off on you and some of your expertise rubs off on them, and now you have this bigger set of tools in your tool belt as an engineering researcher to go tackle the next thing.", "I think that's one of the beauties of continuing to learn on the job. It's something I treasure. I really enjoy diving into new things and seeing what we can do.", "Noam Shazeer 02:11:10", "I'd say probably a big thing is humility, like I’d say I’m the most humble. But seriously, to say what I just did is nothing compared to what I can do or what can be done. And to be able to drop an idea as soon as you see something better, like you or somebody with some better idea, and you see how maybe what you're thinking about, what they're thinking about or something totally different can conceivably work better.", "I think there is a drive in some sense to say, \"Hey, the thing I just invented is awesome, give me more chips.\" Particularly if there's a lot of top-down resource assignment. But I think we also need to incentivize people to say, \"Hey, this thing I am doing is not working at all. Let me just drop it completely and try something else.\"", "Which I think Google Brain did quite well. We had the very kind of bottoms-up UBI kind of chip allocation.", "Dwarkesh Patel 02:12:39", "You had a UBI?", "Noam Shazeer 02:12:41", "Yeah, it was like basically everyone had one credit and you could pool them.", "Gemini has been mostly top-down, which has been very good in some sense because it has led to a lot more collaboration and people working together. You less often have five groups of people all building the same thing or building interchangeable things.", "But on the other hand, it does lead to some incentive to say, \"Hey, what I'm doing is working great.\" And then, as a lead, you hear hundreds of groups, and everything is, \"So you should give them more chips.\" There's less of an incentive to say, \"Hey, what I'm doing is not actually working that well. Let me try something different.\"", "So I think going forward, we're going to have some amount of top-down, some amount of bottom-up, so as to incentivize both of these behaviors: collaboration and flexibility. I think both those things lead to a lot of innovation.", "Jeff Dean 02:13:49", "I think it's also good to articulate interesting directions you think we should go. I have an internal slide deck called \"Go, Jeff, Wacky Ideas.\" I think those are a little bit more product-oriented things, like, \"Hey, I think now that we have these capabilities, we could do these 17 things.\"", "I think that's a good thing because sometimes people get excited about that and want to start working with you on one or more of them. And I think that's a good way to bootstrap where we should go without necessarily ordering people, \"We must go here.\"", "Dwarkesh Patel 02:14:32", "Alright, this was great.", "Jeff Dean 02:14:34", "Yeah.", "Dwarkesh Patel 02:14:34", "Thank you, guys.", "Jeff Dean 02:14:35", "Appreciate you taking the time, it was great chatting. That was awesome." ]
[ "https://en.wikipedia.org/wiki/MapReduce", "https://en.wikipedia.org/wiki/Bigtable", "https://en.wikipedia.org/wiki/TensorFlow", "https://deepmind.google/discover/blog/how-alphachip-transformed-computer-chip-design/", "https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)#", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://github.com/tensorflow/mesh", "https://en.wikipedia.org/wiki/Oriol_Vinyals", "https://en.wikipedia.org/wiki/Tensor_Processing_Unit", "https://thechipletter.substack.com/p/googles-first-tpu-architecture", "https://int4.com/", "https://arxiv.org/pdf/2310.11453", "https://huggingface.co/docs/optimum/en/concept_guides/quantization", "https://en.wikipedia.org/wiki/Backpropagation", "https://aclanthology.org/D07-1090.pdf", "https://www.netlib.org/utk/lsi/pcwLSI/text/node13.html", "https://drive.google.com/file/d/1I1fs4sczbCaACzA9XwxR3DiuXVtqmejL/view", "https://en.wikipedia.org/wiki/Franz_Josef_Och", "https://en.wikipedia.org/wiki/BLEU", "https://uk.mathworks.com/discovery/ngram.html", "https://www.informatika.bg/jeffdean", "https://en.wikipedia.org/wiki/Bayesian_network", "https://www.sciencedirect.com/topics/computer-science/model-parallelism", "https://www.sciencedirect.com/topics/computer-science/data-parallelism", "https://www.sciencedirect.com/topics/computer-science/imagenet-challenge", "https://en.wikipedia.org/wiki/BERT_(language_model)", "https://en.wikipedia.org/wiki/Information_retrieval", "https://zyppy.com/seo/google-index-size/", "https://arxiv.org/abs/2212.07677", "https://research.google/pubs/why-google-stores-billions-of-lines-of-code-in-a-single-repository/", "https://en.wikipedia.org/wiki/Sundar_Pichai", "https://www.tsmc.com/english", "https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf", "https://ai.google/responsibility/principles/", "https://www.researchgate.net/publication/386454984_Shaping_AI%27s_Impact_on_Billions_of_Lives", "https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/", "https://arxiv.org/abs/1704.04760", "https://arxiv.org/abs/1701.06538", "https://colah.github.io/about.html", "https://en.wikipedia.org/wiki/High_Bandwidth_Memory", "https://en.wikipedia.org/wiki/Dynamic_random-access_memory", "https://gwern.net/modus", "https://deepmind.google/discover/blog/a-generalist-agent/", "https://research.google/blog/towards-a-conversational-agent-that-can-chat-aboutanything/" ]
https://www.dwarkesh.com/p/joe-carlsmith
Joe Carlsmith - Otherness and control in the age of AGI
[ "00:00:00 - Understanding the Basic Alignment Story", "Dwarkesh Patel 00:00:00", "Today I'm chatting with Joe Carlsmith. He's a philosopher and, in my opinion, a capital-G great philosopher. You can find his essays at joecarlsmith.com .", "So we have GPT-4, and it doesn't seem like a paperclipper thing . It understands human values. In fact, you can have it explain why being a paperclipper is bad or ask it to explain why the galaxy shouldn't be turned into paperclips.", "What has to happen such that eventually we have a system that takes over and converts the world into something valueless?", "Joe Carlsmith 00:00:37", "When I'm thinking about misaligned AIs —or the type that I'm worried about—I'm thinking about AIs with a relatively specific set of properties related to agency , planning, awareness, and understanding of the world.", "One key aspect is the capacity to plan and make relatively sophisticated plans based on models of the world, where those plans are evaluated according to criteria. That planning capability needs to be driving the model's behavior. There are models that are, in some sense, capable of planning. But when they give output, it's not like that output was determined by some process of planning, like, “Here's what will happen if I give this output, and do I want that to happen?\"", "The model needs to really understand the world. It needs to really be like, “Okay, here’s what will happen. Here I am. Here’s the politics of the situation.” It needs to have this kind of situational awareness to evaluate the consequences of different plans.", "Another thing to consider is the verbal behavior of these models. When I talk about a model's values, I'm referring to the criteria that end up determining which plans the model pursues. A model's verbal behavior—even if it has a planning process (which GPT-4, I think, doesn't in many cases)—doesn't necessarily reflect those criteria.", "We know that we're going to be able to get models to say what we want to hear. That's the magic of gradient descent . Modulo some difficulties with capabilities, you can get a model to output the behavior that you want. If it doesn't, then you crank it until it does.", "I think everyone admits that for suitably sophisticated models, they're going to have a very detailed understanding of human morality. The question is, what relationship is there between a model's verbal behavior—which you've essentially clamped, you're forcing the model to say certain things— and the criteria that end up influencing its choice between plans?", "I'm pretty cautious about assuming that when it says the thing I forced it to say—or when gradient descent has shaped it to say a certain thing— that that is a lot of evidence about how it's going to choose in a bunch of different scenarios.", "Even with humans, it's not necessarily the case that their verbal behavior reflects the actual factors that determine their choices. They can lie. They might not even know what they would do in a given situation, all sorts of stuff like that.", "Dwarkesh Patel 00:03:29", "It's interesting to think about this in the context of humans. There's that famous saying : \"Be careful who you pretend to be, because you are who you pretend to be.\" You notice this with how culture shapes children. Parents will punish you if you start saying things that are inconsistent with your culture's values, and over time, you become like your parents, right?", "By default, it seems like it kind of works. Even with these models, it seems to work. They don’t really scheme against us. Why would this happen?", "Joe Carlsmith 00:04:01", "For folks who are unfamiliar with the basic story, they might wonder, \"Why would AI take over at all? What's the reason they would do that?\" The general concern is that you're offering someone power, especially if you're offering it for free. Power, almost by definition, is useful for lots of values. We're talking about an AI that really has the opportunity to take control of things. Say some component of its values is focused on some outcome, like the world being a certain way, especially in a longer-term way such that its concern extends beyond the period that a takeover plan would encompass. The thought is that it's often the case that the world will be more the way you want it if you control everything, rather than if you remain an instrument of human will or some other actor, which is what we're hoping these AIs will be.", "That's a very specific scenario. If we're in a scenario where power is more distributed—especially where we're doing decently on alignment and we're giving the AI some amount of inhibition about doing different things, maybe we're succeeding in shaping their values somewhat—then it's just a much more complicated calculus. You have to ask, “What's the upside for the AI? What's the probability of success for this takeover path? How good is its alternative?”", "Dwarkesh Patel 00:05:24", "Maybe this is a good point to talk about how you expect the difficulties of alignment to change in the future. We're starting off with something that has this intricate representation of human values and it doesn't seem that hard to sort of lock it into a persona that we are comfortable with. I don't know what changes.", "Joe Carlsmith 00:05:44", "Why is alignment hard in general? Let’s say we've got an AI. Let's bracket the question of exactly how capable it will be and talk about this extreme scenario where it really has the opportunity to take over. I think we might just want to avoid having to build an AI that we're comfortable with being in that position. But let's focus on it for simplicity's sake, and then we can relax the assumption.", "One issue is that you can't just test it. You can't give the AI this literal situation, have it take over and kill everyone, and then say, \"Oops, update the weights.\" This is what Eliezer talks about. You care about its behavior in this specific scenario that you can’t test directly. We can talk about whether that's a problem, but that's one issue. There's a sense in which this has to be \"off-distribution.\" You have to get some kind of generalization from training the AI on a bunch of other scenarios. Then there's the question of how it's going to generalize to the scenario where it really has this option.", "Dwarkesh Patel 00:06:51", "Is that even true? Because when you're training it, you can say, \"Hey, here's a gradient update. If you get the takeover option on a platter, don't take it.\" And then, in red teaming situations where it thinks it has a takeover attempt, you train it not to take it. It could fail, but I feel like if you did this to a child, like \"Don't beat up your siblings,\" the kid will generalize to, \"If I'm an adult and I have a rifle, I'm not going to start shooting random people.\"", "Joe Carlsmith 00:07:24", "You mentioned the idea of, \"You are what you pretend to be.\" Will these AIs, if you train them to look nice, fake it till they make it? You were saying we do this to kids. I think it's better to imagine kids doing this to us.", "Here's a silly analogy for AI training. Suppose you wake up and you're being trained via methods analogous to contemporary machine learning by Nazi children to be a good Nazi soldier or butler or what have you. These children have a model spec , a nice Nazi model spec. It’s like, “Reflect well on the Nazi Party, benefit the Nazi Party” and whatever. You can read it. You understand it. This is why I'm saying that when you’re like, “The model really understands human values…”", "Dwarkesh Patel 00:08:36", "In this analogy, I start off as something more intelligent than the things training me, with different values to begin with. The intelligence and the values are baked in to begin with. Whereas a more analogous scenario is, “I'm a toddler and, initially, I'm stupider than the children.” This would also be true, by the way, if I'm a much smarter model initially. The much smarter model is dumb, right? Then I get smarter as you train me. So it's like a toddler, and the kids are like, \"Hey, we're going to bully you if you're not a Nazi.\" As you grow up, you reach the children's level, and then eventually you become an adult. Through that process, they've been bullying you, training you to be a Nazi. I think in that scenario, I might end up a Nazi.", "Joe Carlsmith 00:09:24", "Basically a decent portion of the hope here should be that we're never in the situation where the AI really has very different values, is already quite smart and really knows what's going on, and is now in this kind of adversarial relationship with our training process. We want to avoid that. I think it's possible we can, by the sorts of things you're saying. So I'm not saying that'll never work.", "The thing I just wanted to highlight was about if you get into that situation where the AI is genuinely at that point much, much more sophisticated than you, and doesn't want to reveal its true values for whatever reason. Then when the children show some obviously fake opportunity to defect to the allies, it's not necessarily going to be a good test of what it will do in the real circumstance because it's able to tell the difference.", "Dwarkesh Patel 00:10:19", "You can also give another way in which the analogy might be misleading. Imagine that you're not just in a normal prison where you're totally cognizant of everything that's going on. Sometimes they drug you, give you weird hallucinogens that totally mess up how your brain is working. As a human adult in a prison, I know what kind of thing I am. Nobody's really fucking with me in a big way.", "Whereas an AI, even a much smarter AI in a training situation, is much closer to being constantly inundated with weird drugs and different training protocols. You're frazzled because each moment is closer to some sort of Chinese water torture technique", "Joe Carlsmith 00:11:08", "I'm glad we're talking about the moral patienthood stuff later.", "Dwarkesh Patel 00:11:11", "There’s this chance to step back and ask, \"What's going on?\" An adult in prison has that ability in a way that I don't know if these models necessarily have. It’s that coherence and ability to step back from what's happening in the training process.", "Joe Carlsmith 00:11:26", "Yeah, I don't know. I'm hesitant to say it's like drugs for the model. Broadly speaking, I do basically agree that we have quite a lot of tools and options for training AIs, even AIs that are somewhat smarter than humans.", "I do think you have to actually do it. You had Eliezer on . I'm much more bullish on our ability to solve this problem, especially for AIs that are in what I think of as the \"AI for AI safety sweet spot.\" This is a band of capability where they're sufficiently capable that they can be really useful for strengthening various factors in our civilization that can make us safe. That’s stuff like our alignment work, control, cybersecurity, general epistemics, maybe some coordination applications. There's a bunch of stuff you can do with AIs that, in principle, could differentially accelerate our security with respect to the sorts of considerations we're talking about.", "Let’s say you have AIs that are capable of that. You can successfully elicit that capability in a way that's not being sabotaged or messing with you in other ways. They can't yet take over the world or do some other really problematic form of power-seeking. If we were really committed, we could then go hard, put a ton of resources and really differentially direct this glut of AI productivity towards these security factors. We could hopefully control and understand, do a lot of these things you're talking about to make sure our AIs don't take over or mess with us in the meantime.", "We have a lot of tools there. You have to really try though. It's possible that those sorts of measures just don't happen, or they don't happen at the level of commitment, diligence, and seriousness that you would need. That’s especially true if things are moving really fast and there are other competitive pressures: “This is going to take compute to do these intensive experiments on the AIs. We could use that compute for experiments for the next scaling step.” There’s stuff like that.", "I'm not here saying this is impossible, especially for that band of AIs. It's just that you have to try really hard.", "Dwarkesh Patel 00:13:41", "I agree with the sentiment of obviously approaching this situation with caution, but I do want to point out the ways in which the analyses we've been using have been maximally adversarial. For example, let’s go back to the adult getting trained by Nazi children. Maybe the one thing I didn't mention is the difference in this situation, which is maybe what we're trying to get at with the drug metaphor.", "When you get an update, it's much more directly connected to your brain than a sort of reward or punishment a human gets. It's literally a gradient update down to the parameter of how much this would contribute to you putting this output rather than that output. Each different parameter we're going to adjust to the exact floating point number that calibrates it to the output we want.", "I just want to point out that we're coming into the situation pretty well. It does make sense, of course, if you're talking to somebody at a lab, to say, \"Hey, really be careful.\" But for a general audience, should I be scared witless? You maybe should to the extent that you should be scared about things that do have a chance of happening.", "For example, you should be scared about nuclear war. But should you be scared in the sense of you’re doomed? No, you're coming up with an incredible amount of leverage on the AIs in terms of how they will interact with the world, how they're trained, and the default values they start with.", "Joe Carlsmith 00:15:05", "I think it is the case that by the time we're building superintelligence , we'll have much better… Even right now—when you look at labs talking about how they're planning to align AIs—no one is saying we're going to just do RLHF . At the least, you're talking about scalable oversight. You have some hope about interpretability. You have automated red teaming. Hopefully, humans are doing a bunch more alignment work. I also personally am hopeful that we can successfully elicit from various AIs a ton of alignment work progress.", "There's a bunch of ways this can go. I'm not here to tell you 90% doom or anything like that. This is the basic reason for concern. Imagine that we're going to transition to a world in which we've created these beings that are just vastly more powerful than us. We've reached the point where our continued empowerment is just effectively dependent on their motives. It is this vulnerability to, “What do the AIs choose to do?” Do they choose to continue to empower us or do they choose to do something else?", "Dwarkesh Patel 00:16:21", "Or it’s about the institutions that have been set up. I expect the US government to protect me, not because of its “motives,” but just because of the system of incentives and institutions and norms that has been set up.", "Joe Carlsmith 00:16:35", "You can hope that will work too, but there is a concern. I sometimes think about AI takeover scenarios via this spectrum of how much power we voluntarily transferred to the AIs. How much of our civilization did we hand to the AIs intentionally by the time they took over? Versus, how much did they take for themselves?", "Some of the scariest scenarios are where we have a really fast explosion to the point where there wasn't even a lot of integration of AI systems into the broader economy. But there's this really intensive amount of superintelligence concentrated in a single project or something like that. That's a quite scary scenario, partly because of the speed and people not having time to react.", "Then there are intermediate scenarios where some things got automated, maybe people handed the military over to the AIs or we have automated science. There are some rollouts and that’s giving the AIs power that they don't have to take. We're doing all our cybersecurity with AIs and stuff like that.", "Then there are worlds where you more fully transitioned to a kind of world run by AIs where, in some sense, humans voluntarily did that.", "Joe Carlsmith (cont’d) 00:19:20", "Maybe there were competitive pressures, but you intentionally handed off huge portions of your civilization. At that point, it's likely that humans have a hard time understanding what's going on. A lot of stuff is happening very fast. The police are automated. The courts are automated. There's all sorts of stuff.", "Now, I tend to think a little less about those scenarios because I think they're correlated with being further down the line. Humans are hopefully not going to just say, \"Oh yeah, you built an AI system, let's just...\" When we look at technological adoption rates, it can go quite slow. Obviously there's going to be competitive pressures, but in general this category is somewhat safer. But even in this one, I think it's intense. If humans have really lost their epistemic grip on the world, they've handed off the world to these systems. Even if you're like, \"Oh, there's laws, there's norms…\" I really want us to have a really developed understanding of what's likely to happen in that circumstance, before we go for it.", "Dwarkesh Patel 00:20:30", "I get that we want to be worried about a scenario where it goes wrong. But again, what is the reason to think it might go wrong? In the human example, your kids are not maximally adversarial against your attempts to instill your culture on them. With these models, at least so far, it doesn't seem to matter. They just get, \"Hey, don't help people make bombs\" or whatever, even if you ask in a different way how to make a bomb. We're also getting better and better at this all the time.", "Joe Carlsmith 00:20:55", "You're right in picking up on this assumption in the AI risk discourse of what we might call intense adversariality between agents that have somewhat different values. There's some sort of thought—and I think this is rooted in the discourse about the fragility of value and stuff like that—that if these agents are somewhat different, at least in the specific scenario of an AI takeoff, they end up in this intensely adversarial relationship.", "You're right to notice that's not how we are in the human world. We're very comfortable with a lot of different differences in values. A factor that is relevant is this notion that there are possibilities for intense concentration of power on the table. There is some kind of general concern, both with humans and AIs. If it's the case that there's some ring of power that someone can just grab that will give them huge amounts of power over everyone else, suddenly you might be more worried about differences in values at stake, because you're more worried about those other actors.", "We talked about this Nazi example where you imagine that you wake up and you're being trained by Nazis to become a Nazi. You're not right now. Is it plausible that we'd end up with a model that is in that sort of situation? As you said, maybe it's trained as a kid. It never ends up with values such that it's aware of some significant divergence between its values and the values that the humans intend for it to have. If it's in that scenario, would it want to avoid having its values modified?", "At least to me, it seems fairly plausible that the AI's values meet certain constraints. Do they care about consequences in the world? Do they anticipate that the AI's preserving its values will better conduce to those consequences? Then it's not that surprising if it prefers not to have its values modified by the training process.", "Dwarkesh Patel 00:23:15", "There’s a way in which I'm still confused about this. With the non-Nazi being trained by Nazis, it's not just that I have different values. I actively despise their values. I don't expect this to be true of AIs with respect to their trainers. The more analogous scenario is where I'm like, “Am I leery of my values being changed?” Going to college or meeting new people or reading a new book, I'm like, “I don't know. It's okay if it changes my values. That's fine. I don't care.”", "Joe Carlsmith 00:23:43", "Yeah, that's a reasonable point. There's a question. How would you feel about paperclips? Maybe you don't despise paperclips, but there's the human paperclippers there and they're training you to make paperclips. My sense would be that there's a kind of relatively specific set of conditions in which you're comfortable having your values changed, especially not by learning and growing but by gradient descent directly intervening on your neurons.", "Dwarkesh Patel 00:24:10", "This seems similar to religious training as a kid. You start off in a religion. Because you started off in a religion, you're already sympathetic to the idea that you go to church every week so that you're more reinforced in this existing tradition. You're getting more intelligent over time. When you're a kid, you're getting very simple instructions about how religion works. As you get older, you get more and more complex theology that helps you talk to other adults about why this is a rational religion to believe in.", "But since one of your values to begin with was that you want to be trained further in this religion, you want to come back to church every week. That seems more analogous to the situation the AIs will be in with respect to human values. The entire time they're like, \"Hey, be helpful, be harmless,” etc.", "Joe Carlsmith ​​00:24:57", "Yes, it could be like that. There's a scenario in which you were comfortable with your values being changed because, in some sense, you have sufficient allegiance to the output of that process. In a religious context, you're like, \"Ah, make me more virtuous by the lights of this religion.\" You go to confession and you're like, \"I've been thinking about takeover today. Can you change me please? Give me more gradient descent. I've been so bad.\" People sometimes use the term \"corrigibility\" to talk about that. Maybe the AI doesn't have perfect values, but it's in some sense cooperating with your efforts to change its values to be a certain way.", "Maybe it's worth saying a little bit here about what actual values the AI might have. Would it be the case that the AI naturally has the sort of equivalent of, \"I'm sufficiently devoted to human obedience that I'm going to really want to be modified so I'm a better instrument of the human will\" versus wanting to go off and do its own thing? It could be benign and go well.", "Here are some possibilities where it could go bad. I'm generally concerned about how little science we have of model motivations. We just don't have a great understanding of what happens in this scenario. Hopefully, we'd get one before we reach this scenario. Here are five categories of motivations the model could have. This hopefully gets at the point about what the model eventually does.", "One category is something just super alien. There's some weird correlate of easy-to-predict text or some weird aesthetic for data structures that the model developed early on in pre-training or later. It really thinks things should be like this. There's something quite alien to our cognition where we just wouldn't recognize it as a thing at all. That’s one category.", "Another category is a kind of crystallized instrumental drive that is more recognizable to us. You can imagine AIs developing some curiosity drive because that's broadly useful. It's got different heuristics, drives, different kinds of things that are like values. Some of those might be similar to things that were useful to humans and ended up as part of our terminal values in various ways. You can imagine curiosity, various types of option value. Maybe it values power itself. It could value survival or some analog of survival. Those are possibilities that could have been rewarded as proxy drives at various stages of this process and made their way into the model's terminal criteria.", "A third category is some analog of reward, where the model at some point has part of its motivational system fixated on a component of the reward process. It’s something like “the humans approving of me,” or “numbers getting entered in the status center,” or “gradient descent updating me in this direction.” There's something in the reward process such that, as it was trained, it's focusing on that thing. It really wants the reward process to give it a reward.", "But in order for it to be of the type where getting reward motivates choosing the takeover option, it also needs to generalize such that its concern for reward has some sort of long time horizon element. It not only wants reward, it wants to protect the reward button for some long period or something.", "Another one is some kind of messed up interpretation of some human concept. Maybe the AIs really want to be like \"shmelpful\" and \"shmanist\" and \"shmarmless,\" but their concept is importantly different from the human concept. And they know this. They know that the human concept would mean one thing, but they ended up with their values fixating on a somewhat different structure. That's like another version.", "There’s then a fifth version, which I think about less because it's just such an own goal if you do this. But I do think it's possible. You could have AIs that are actually just doing what it says on the tin. You have AIs that are just genuinely aligned to the model spec. They're just really trying to benefit humanity and reflect well on OpenAI and… what's the other one? Assist the developer or the user, right?", "But your model spec, unfortunately, was just not robust to the degree of optimization that this AI is bringing to bear. It’s looking out at the world and they're like, \"What's the best way to reflect well on OpenAI and benefit humanity?\" It decides that the best way is to go rogue. That's a real own goal.", "At that point you got so close. You really just had to write the model spec and red team it suitably. But I actually think it's possible we messed that up too. It's kind of an intense project, writing constitutions and structures of rules and stuff that are going to be robust to very intense forms of optimization. That's a final one that I'll just flag. I think it comes up even if you've solved all these other problems.", "Dwarkesh Patel 00:30:30", "I buy the idea that it's possible that the motivation thing could go wrong, I'm not sure my probability of that has increased by detailing them all out. In fact, it could be potentially misleading. You can always enumerate the ways in which things go wrong. The process of enumeration itself can increase your probability. Whereas you had a vague cloud of 10% or something and you're just listing out what the 10% actually constitutes.", "Joe Carlsmith 00:31:03", "Mostly the thing I wanted to do there was just give some sense of what the model's motivations might be. As I said, my best guess is that it's partly the alien thing, not necessarily, but insofar as you’re also interested in what the model does later. What sort of future would you expect if models did take over? Then it can at least be helpful to have some set of hypotheses on the table instead of just saying, “It has some set of motivations.” In fact, a lot of the work here is being done by our ignorance about what those motivations are.", "Dwarkesh Patel 00:31:46", "We don't want humans to be violently killed and overthrown. But the idea that over time, biological humans are not the driving force as the actors of history is baked in, right? We can debate the probabilities of the worst-case scenario, but what is the positive vision we're hoping for? What is a future you're happy with?", "Joe Carlsmith 00:32:17", "This is my best guess and I think this is probably true of a lot of people. There's some sort of more organic, decentralized process of incremental civilizational growth. There is some sense in which the type of thing we trust most—and have most experience with right now as a civilization—is some sort of, \"Okay, we change things a little bit.\" A lot of people have processes of adjustment and reaction and a decentralized sense of what's changing. Was that good? Was that bad? Take another step. There's some kind of organic process of growing and changing things. I do expect that ultimately to lead to something quite different from biological humans. Though there are a lot of ethical questions we can raise about what that process involves.", "Ideally there would be some way in which we managed to grow via the thing that really captures what we trust in. There's something we trust about the ongoing processes of human civilization so far. I don't think it's the same as raw competition. There's some rich structure to how we understand moral progress to have been made and what it would be to carry that thread forward.", "I don't have a formula. We're just going to have to bring to bear the full force of everything that we know about goodness and justice and beauty. We just have to bring ourselves fully to the project of making things good and doing that collectively. That is a really important part of our vision of what was an appropriate process of growing as a civilization. It was this very inclusive, decentralized element of people getting to think and talk and grow and change things and react rather than some more, \"And now the future shall be like blah.\" I think we don't want that.", "Dwarkesh Patel 00:34:31", "To the extent that the reason we're worried about motivations in the first place, it’s because we think a balance of power which includes at least one thing with human-descended motivations is difficult. To the extent that we think that's the case, this seems like a big crux that I often don't hear people talk about. I don't know how you get the balance of power. Maybe it’s just a matter of reconciling yourself with the models of the intelligence explosion. They say that such a thing is not possible. Therefore, you just have to figure out how you get the right God.", "I don't really have a framework to think about the balance of power thing. I'd be very curious if there is a more concrete way to think about the structure of competition, or lack thereof, between the labs now, or between countries, such that the balance of power is most likely to be preserved.", "A big part of this discourse, at least among safety-concerned people, is there's a clear trade-off between competition and race dynamics and the value of the future, or how good the future ends up being. In fact, if you buy this balance of power story, it might be the opposite. Maybe competitive pressures naturally favor balance of power. I wonder if this is one of the strong arguments against nationalizing the AIs.", "You can imagine many different companies developing AI, some of which are somewhat misaligned and some of which are aligned. You can imagine that being more conducive to both the balance of power and to a defensive thing. Have all the AIs go through each website and see how easy it is to hack. Basically just get society up to snuff. If you're not just deploying this technology widely, then the first group who can get their hands on it will be able to instigate a sort of revolution. You're just standing against the equilibrium in a very strong way.", "Joe Carlsmith 00:36:41", "I definitely share some intuition there that a lot of what's scary about the situation with AI has to do with concentrations of power and whether that power is concentrated in the hands of misaligned AI or in the hands of some human. It's very natural to think, \"Okay, let's try to distribute the power more,\" and one way to try to do that is to have a much more multipolar scenario where lots and lots of actors are developing AI.", "This is something that people have talked about. When you describe that scenario, you said, \"some of which are aligned, some of which are misaligned.\" That's a key aspect of the scenario, right? Sometimes people will say this stuff. They'll be like, \"There will be the good AIs and they'll defeat the bad AIs.\"", "Notice the assumption in there. You made it the case that you can control some of the AIs. You've got some good AIs. Now it's a question of if there are enough of them and how are they working relative to the others. Maybe. I think it's possible that is what happens. We know enough about alignment that some actors are able to do that. Maybe some actors are less cautious or they're intentionally creating misaligned AI or who knows what.", "But if you don't have that—if everyone is in some sense unable to control their AIs—then the \"good AIs help with the bad AIs\" thing becomes more complicated. Maybe it just doesn't work, because there's no good AIs in this scenario.", "If you say everyone is building their own superintelligence that they can't control, it's true that that is now a check on the power of the other superintelligence. Now the other superintelligences need to deal with other actors, but none of them are necessarily working on behalf of a given set of human interests or anything like that. That's a very important difficulty in thinking about the very simple thought of \"Ah, I know what we can do. Let's just have lots and lots of AIs so that no single AI has a ton of power.\" That on its own is not enough.", "Dwarkesh Patel 00:39:07", "But in this story, I'm just very skeptical we end up with this. By default we have this training regime, at least initially, that favors a sort of latent representation of the inhibitions and values that humans have. I get that if you mess it up, it could go rogue. But if multiple people are training AIs, they all end up rogue such that the compromises between them don't end up with humans not violently killed? It fails on Google's run and Microsoft's run and OpenAI's run?", "Joe Carlsmith 00:39:44", "There are very notable and salient sources of correlation between failures across the different runs. People didn't have a developed science of AI motivations. The runs were structurally quite similar. Everyone is using the same techniques. Maybe someone just stole the weights.", "It's really important. To the extent you haven't solved alignment, you likely haven't solved it anywhere. If someone has solved it and someone hasn't, then it's a better question. But if everyone's building systems that are going to go rogue, then I don't think that's much comfort as we talked about.", "Dwarkesh Patel 00:40:28", "All right, let's wrap up this part here. I didn't mention this explicitly in the introduction. To the extent that this ends up being the transition to the next part, the broader discussion we were having in part two is about Joe's series, \" Otherness and control in the age of AGI .\"", "The first part is where I was hoping we could just come back and treat the main crux that people will come in wondering about, and which I myself feel unsure about.", "Joe Carlsmith 00:40:54", "The “Otherness and control” series is, in some sense, separable. It has a lot to do with misalignment stuff, but a lot of those issues are relevant even given various degrees of skepticism about some of the stuff I've been saying here.", "Dwarkesh Patel 00:41:16", "By the way, on the actual mechanisms of how a takeover would happen, I did an episode with Carl Schulman which discusses this in detail. People can go check that out.", "Joe Carlsmith 00:41:26", "In terms of why it is plausible that AI could take over from a given position, Carl's discussion is pretty good and gets into a bunch of the weeds that might give a more concrete sense.", "Dwarkesh Patel 00:41:43", "All right. Now on to part two, where we discuss the “Otherness and Control in the Age of AGI” series. Here’s the first question. Let’s say in a hundred years time, we look back on alignment and consider it was a huge mistake. We should have just tried to build the most raw, powerful AI systems we could have. What would bring about such a judgment?", "Joe Carlsmith 00:42:03", "Here’s one scenario I think about a lot. Maybe fairly basic measures are enough to ensure, for example, that AIs don't cause catastrophic harm. They don't seek power in problematic ways, etc. It could turn out that we learned that it was easy such that we have regrets. We wish we had prioritized differently. We end up thinking, \"Oh, I wish we could have cured cancer sooner. We could have handled some geopolitical dynamic differently.", "There's another scenario where we end up looking back at some period of our history—how we thought about AIs, how we treated our AIs—and we end up looking back with a kind of moral horror at what we were doing. We were thinking about these things centrally as products and tools, but in fact we should have been foregrounding much more the sense in which they might be moral patients, at some level of sophistication.", "We were treating them in the wrong way. We were acting like we could do whatever we want. We could delete them, subject them to arbitrary experiments, alter their minds in arbitrary ways. We then end up looking back at that in the light of history as a kind of serious and grave moral error.", "Those are scenarios I think about a lot in which we have regrets. They don’t quite fit the bill of what you just said. It sounds to me like the thing you're thinking is something more that we end up feeling like, \"Gosh, we wish we had paid no attention to the motives of our AIs, that we'd thought not at all about their impact on our society as we incorporated them. Instead we should have pursued a kind of ‘maximize for brute power’ option.” Just make a beeline for whatever is the most powerful AI you can achieve and don't think about anything else. I'm very skeptical that's what we're going to wish for.", "00:44:04 - Monkeys Inventing Humans", "Dwarkesh Patel 00:44:04", "One common example that's given of misalignment is humans from evolution. You have one line in your series: \"Here's a simple argument for AI risk: A monkey should be careful before inventing humans.\" The sort of paperclipper metaphor implies something really banal and boring with regards to misalignment.", "If I'm steelmanning the people who worship power, they have the sense that humans got misaligned and they started pursuing things. If a monkey was creating them… This is a weird analogy because obviously monkeys didn't create humans. But if the monkey was creating them, they're not thinking about bananas all day. They're thinking about other things.", "On the other hand, they didn't just make useless stone tools and pile them up in caves in a sort of paperclipper fashion. There are all these things that emerged because of their greater intelligence, which were misaligned with evolution: creativity and love and music and beauty and all the other things we value about human culture. The prediction maybe they have—which is more of an empirical statement than a philosophical statement—is, \"Listen, with greater intelligence, if you’re thinking about the paperclipper, even if it's misaligned it will be in this kind of way. It'll be things that are alien to humans, but alien in the way humans are aliens to monkeys, and not in the way that a paperclipperer is alien to a human.\"", "Joe Carlsmith 00:45:26", "There's a bunch of different things to potentially unpack there. There’s one kind of conceptual point that I want to name off the bat. I don't think you're necessarily making a mistake in this vein. I just want to name it as a possible mistake in this vicinity. We don't want to engage in the following form of reasoning. Let's say you have two entities. One is in the role of creator. One is in the role of creation. We're positing that there's this kind of misalignment relation between them, whatever that means.", "Here's a pattern of reasoning that you want to watch out for. Say you're thinking of humans in the role of creation, relative to an entity like evolution, or monkeys or mice or whoever you could imagine inventing humans or something like that. You say, \"Qua creation, I'm happy that I was created and happy with the misalignment. Therefore, if I end up in the role of creator and we have a structurally analogous relation in which there's misalignment with some creation, I should expect to be happy with that as well.\"", "00:46:43 - Nietzsche, C.S. Lewis, and AI", "Dwarkesh Patel 00:46:43", "There's a couple of philosophers that you brought up in the series. If you read their works that you talk about, they actually seem incredibly foresighted in anticipating something like a singularity and our ability to shape a future thing that's different, smarter, maybe better than us.", "Obviously C.S. Lewis and \"The Abolition of Man,\" which we'll talk about in a second, is one example, Here's one passage from Nietzsche that I felt really highlighted this: \"Man is a rope stretched between the animal and the superman. A rope over an abyss, a dangerous crossing, a dangerous wayfaring, a dangerous looking back, a dangerous trembling and halting.\"", "Is there some explanation? Is it just somehow obvious that something like this is coming even if you’re thinking 200 years ago?", "Joe Carlsmith 00:47:31", "I have a much better grip on what's going on with Lewis than with Nietzsche there. Maybe let's just talk about Lewis for a second. There's a version of the singularity that's specifically a hypothesis about feedback loops with AI capabilities. I don't think that's present in Lewis. What Lewis is anticipating—I do think this is a relatively simple forecast—is something like the culmination of the project of scientific modernity.", "Lewis is looking out at the world. He's seeing this process of increased understanding of a kind of the natural environment and a corresponding increase in our ability to control and direct that environment. He's also pairing that with a kind of metaphysical hypothesis. His stance on this metaphysical hypothesis is problematically unclear in the book, but there is this metaphysical hypothesis. Naturalism says that humans too—minds, beings, agents—are a part of nature.", "Insofar as this process of scientific modernity involves a kind of progressively greater understanding of an ability to control nature, that will presumably grow to encompass our own natures and the natures of other beings we could create in principle. Lewis views this as a kind of cataclysmic event and crisis. In particular, he believes that it will lead to all kinds of tyrannical behaviors and attitudes towards morality and stuff like that. We can talk about if you believe in non-naturalism—or in some form of Dao , which is this kind of objective morality", "Part of what I'm trying to do in that essay is to say, “No, we can be naturalists and also be decent humans that remain in touch with a rich set of norms that have to do with how we relate to the possibility of creating creatures, altering ourselves, etc.” It's a relatively simple prediction. Science masters nature. Humans are part of nature. Science masters humans.", "Dwarkesh Patel 00:49:56", "You also have a very interesting essay about what we should expect of other humans, a sort of extrapolation if they had greater capabilities and so on.", "Joe Carlsmith 00:50:07", "There’s an uncomfortable thing about the conceptual setup at stake in these abstract discussions. Okay, you have this agent. It \" FOOMs ,\" which is this amorphous process of going from a seed agent to a superintelligent version of itself, often imagined to preserve its values along the way. There’s a bunch of questions we can raise about that.", "Many of the arguments that people will often talk about in the context of reasons to be scared of AI are like, \"Oh, value is very fragile as you FOOM.\" “Small differences in utility functions can decorrelate very hard and drive in quite different directions.” “Agents have instrumental incentives to seek power. If it were arbitrarily easy to get power, then they would do it.” It’s stuff like that. These are very general arguments that seem to suggest that it's not just an AI thing. It's no surprise. Take a thing. Make it arbitrarily powerful such that it's God Emperor of the universe or something. How scared are you of that? Clearly we should be equally scared of that. We should be really scared of that with humans too, right?", "Part of what I'm saying in that essay is that this, in some sense, is much more a story about balance of power. It’s about maintaining checks and balances and distribution of power, period. It’s not just about humans vs. AIs, and the differences between human values and AI values.", "Now that said, I do think many humans would likely be nicer if they FOOMed than certain types of AIs. But with the conceptual structure of the argument, it's a very open question how much it applies to humans as well.", "Dwarkesh Patel 00:52:08", "How confident are we with this ontology of expressing what agents and capabilities are? How do we know this is what's happening, or that this is the right way to think about what intelligences are?", "Joe Carlsmith 00:52:29", "It's very janky. People may disagree about this. I think it's obvious to everyone, with respect to real world human agents, that thinking of humans as having utility functions is at best a very lossy approximation. This is likely to mislead as you increase the intelligence of various agents. Eliezer might disagree about that.", "For example, my mom a few years ago wanted to get a house and get a new dog. Now she has both. How did this happen? It’s because she tried. She had to search for the house. It was hard to find the dog. Now she has a house. Now she has a dog. This is a very common thing that happens all the time. We don't need to say she has a utility function for the dog and a consistent valuation of all houses or whatever. It’s still the case that her planning and agency, exerted in the world, resulted in her having this house and dog.", "As our scientific and technological power advances, it’s plausible that more and more stuff will likely be explicable this way. Why is this man on the moon? How did that happen? Well, there was a whole cognitive process and planning apparatus. It wasn’t localized in a single mind, but there was a whole thing such that we got a man on the moon. We'll see more of that and the AIs will be doing a bunch of it. That seems more real to me than utility functions.", "Dwarkesh Patel 00:54:23", "The man on the moon example has a proximal story of how NASA engineered the spacecraft to get to the moon. There’s the more distal geopolitical story of why we sent people to the moon. At all those levels, there are different utility functions clashing. Maybe there's a meta-societal utility function. Maybe the story there is about a balance of power between agents, creating an emergent outcome. We didn't go to the moon because one guy had a utility function, but due to the Cold War and things happening.", "The alignment stuff is a lot about assuming one entity will control everything, so how do we control the thing that controls everything. It's not clear what you do to reinforce the balance of power. It could just be that balance of power is not a thing that happens once you have things that can make themselves intelligent. But that seems interestingly different from the \"how we got to the moon\" story?", "Joe Carlsmith 00:55:32", "Yeah, I agree. There's a few things going on there. Even if you're engaged in this ontology of carving up the world into different agencies, at the least you don't want to assume that they're all unitary or not overlapping. It's not like, “All right, we've got this agent. Let's carve out one part of the world. It's one agent over here.” It's this whole messy ecosystem, teeming niches and this whole thing.", "In discussions of AI, sometimes people slip between being like, \"An agent is anything that gets anything done. It could be like this weird moochy thing,” and then sometimes they're very obviously imagining an individual actor. That's one difference.", "I also just think we should be really going for the balance of power thing. It is just not good to be like, \"We’re going to have a dictator. Let's make sure we make the dictator the right dictator.\" I'm like, “okay, whoa, no.” The goal should be that we all FOOM together.  We do the whole thing in this inclusive and pluralistic way that satisfies the values of tons of stakeholders. At no point is there one single point of failure on all these things. That's what we should be striving for here. That's true of the human power aspect of AI and of the AI part as well.", "Dwarkesh Patel 00:58:09", "There's an interesting intellectual discourse on the right-wing side of the debate. They say to themselves, “Traditionally we favor markets, but now look where our society is headed. It's misaligned in the ways we care about society being aligned, like fertility is going down, family values, religiosity. These things we care about. GDP keeps going up. These things don't seem correlated. We're grinding through the values we care about because of increased competition. Therefore we need to intervene in a major way.”", "Then the pro-market libertarian faction of the right will say, “Look, I disagree with the correlations here, but even at the end of the day…” Fundamentally their point is, liberty is the end goal. It's not what you use to get to higher fertility or something. There's something interestingly analogous about the AI competition grinding things down. Obviously you don't want the gray goo , but with the libertarians versus the trads, there's something analogous here.", "Joe Carlsmith 00:59:12", "Here’s one thing you could think and it doesn't necessarily need to be about gray goo. It could also just be about alignment. Sure, it would be nice if the AIs didn't violently disempower humans. It would be nice if the AIs when we created them, their integration into our society led to good places. But I'm uncomfortable with the sorts of interventions that people are contemplating in order to ensure that sort of outcome.", "There's a bunch of things to be uncomfortable about that. That said, for something like everyone being killed or violently disempowered, when it's a real threat we traditionally often think that quite intense forms of intervention are warranted to prevent that sort of thing from happening. Obviously we need to talk about whether it’s real.", "If there were actually a terrorist group that was working on a bioweapon that was going to kill everyone, or 99.9% of people, we would think that warrants intervention. Just shut that down. Say you had a group that was doing that unintentionally, imposing a similar level of risk. Many people, if that's the real scenario, will think that warrants quite intense preventative efforts.", "Obviously, these sorts of risks can be used as an excuse to expand state power. There's a lot of things to be worried about for different types of contemplated interventions to address certain types of risks. I think there's no royal road there. You need to just have the actual good epistemology. You need to actually know, is this a real risk? What are the actual stakes? You need to look at it case by case and be like, “Is this warranted?” That's one point on the takeover, literal extinction thing.", "The other thing I want to say, I talk about this distinction in the piece. There’s a thought that we should at least have AIs who are minimally law-abiding or something like that. There's this question about servitude and about other control over AI values. But we often think it's okay to really want people to obey the law, to uphold basic cooperative arrangements, stuff like that.", "This is true of markets and true of liberalism in general. I want to emphasize just how much these procedural norms—democracy, free speech, property rights, things that people including myself really hold dear—are, in the actual lived substance of a liberal state, undergirded by all sorts of kind of virtues and dispositions and character traits in the citizenry. These norms are not robust to arbitrarily vicious citizens.", "I want there to be free speech, but we also need to raise our children to value truth and to know how to have real conversations. I want there to be democracy, but we also need to raise our children to be compassionate and decent. Sometimes we can lose sight of that aspect. That's not to say that it should be the project of state power. But I think it’s important to understand that liberalism is not this ironclad structure that you can just hit go on. You can’t give it any citizenry and hit go and assume you'll get something flourishing or even functional. There's a bunch of other softer stuff that makes this whole project go.", "Dwarkesh Patel 01:03:00", "I want to zoom out to the people who have—I don't know if Nick Land would be a good sub in here— a sort of fatalistic attitude towards alignment as a thing that can even make sense. They'll say things like, “Look, these are the kinds of things that are going to be exploring the black hole, the center of the galaxy, the kinds of things that go visit Andromeda or something. Did you really expect them to privilege whatever inclinations you have because you grew up in the African savannah and whatever the evolutionary pressures were a hundred thousand years ago? Of course, they're going to be weird. What did you think was going to happen?”", "Joe Carlsmith 01:03:46", "I do think that even good futures will be weird. I want to be clear about that when I talk about finding ways to ensure that the integration of AIs into our society leads to good places. Sometimes people think that this project of wanting that—and especially to the extent that makes some deep reference to human values—involves this short-sighted, parochial imposition of our current unreflective values.", "They imagine that we're forgetting that for us too, there's a kind of reflective process and a moral progress dimension that we want to leave room for. Jefferson has this line about, “Just as you wouldn't want to force a grown man into a younger man's coat, so we don't want to chain civilization to a barbarous past.” Everyone should agree on that. The people who are interested in alignment, also agree on that. Obviously, there's a concern that people don't engage in that process or that something shuts down the process of reflection, but I think everyone agrees we want that.", "So that will lead, potentially, to something that is quite different from our current conception of what's valuable. There's a question of how different. There are also questions about what exactly we're talking about with reflection. I have an essay on this. I don't actually think there's a kind of off-the-shelf, pre-normative notion of reflection where you can just be like, \"Oh, obviously you take an agent, stick it through reflection, and then you get like values.”", "No. Really there's a whole pattern of empirical facts about taking an agent, putting it through some process of reflection and all sorts of things, asking it questions. That'll go in all sorts of directions for a given empirical case. Then you have to look at the pattern of outputs and be like, “Okay, what do I make of that?”", "Overall, we should expect that even the good futures will be quite weird. They might even be incomprehensible to us. I don't think so... There's different types of incomprehensible. Say I show up in the future and this is all computers. I'm like, “Okay, all right.” Then they're like, “We're running creatures on the computers.” Okay, so I have to somehow get in there and see what's actually going on with the computers or something like that.", "Maybe I can actually see. Maybe I actually understand what's going on in the computers, but I don't yet know what values I should be using to evaluate that. So it can be the case that if we showed up, we would not be very good at recognizing goodness or badness. I don't think that makes it insignificant though.", "Suppose you show up in the future and it's got some answer to the Riemann hypothesis . You can't tell whether that answer's right. Maybe the civilization went wrong. It's still an important difference. It's just that you can't track it. Something similar is true of worlds that are genuinely expressive of what we would value if we engaged in processes of reflection that we endorse, versus ones that have totally veered off into something meaningless.", "Dwarkesh Patel 01:07:06", "One thing I've heard from people who are skeptical of this ontology is, \"All right, what do you even mean by alignment?\" Obviously the very first question you answered already. Here’s different things that it could mean. Do you mean balance of power? It’s somewhere between that and dictator or whatever. Then there's another thing. Separate from the AI discussion, I don't want the future to contain a bunch of torture. It's not necessarily technical. Part of it might involve technically aligning a GPT-4, but that's a proxy to get to that future.", "What do we really mean by alignment? Is it just whatever it takes to make sure the future doesn't have a bunch of torture? Or do I really care that in a thousand years, the things that are clearly my descendants are in control of the galaxy, and even if they’re not conducting torture. By descendants, I don’t mean some things where I recognize they have their own art or whatever. I mean like my grandchild, that level of descendant. I think what some people mean is that our intellectual descendants should control the light cone, even if the other counterfactual doesn't involve a bunch of torture.", "Joe Carlsmith 01:08:23", "I agree. There's a few different things there. What are you going for? Are you going for actively good or are you going for avoiding certain stuff? Then there's a different question which is, what counts as actively good according to you? Maybe some people are like, “The only things that are actively good are my grandchildren.” Or they’re thinking of some literal descending genetic line or something, otherwise that's not my thing. I don't think it's really what most people have in mind when they talk about goodness.", "There's a conversation to be had. Obviously in some sense, when we talk about a good future, we need to be thinking, “What are all the stakeholders here and how does it all fit together?” When I think about it, the thing that matters about the lineage is this. It’s whatever's required for the optimization processes to be pushing towards good stuff.", "There's a concern that currently a lot of what is making that happen lives in human civilization. There's some kind of seed of goodness that we're carrying, in different ways or, different people. There's different notions of goodness for different people maybe, but there's some sort of seed that is currently here that we have that is not just in the universe everywhere.", "It's not just going to crop up if you just die out or something. It's something that is contingent to our civilization. At least that's the picture, we can talk about whether that's right. So the sense in which stories about good futures that have to do with alignment are about descendants, it's more about whatever that seed is. How do we carry it? How do we keep the life thread alive, going into the future?", "Dwarkesh Patel 01:10:47", "But then one could accuse the alignment community of motte and bailey . The motte is: We just want to make sure that GPT-8 doesn't kill everybody. After that, we're all cool. Then the real thing is: “We are fundamentally pessimistic about historical processes, in a way that doesn't even necessarily implicate AI alone. It’s just the nature of the universe. We want to do something to make sure the nature of the universe doesn't take a hold on humans and where things are headed.", "If you look at the Soviet Union, the collectivization of farming and the disempowerment of the kulaks was not as a practical matter necessary. In fact it was extremely counterproductive and it almost brought down the regime. Obviously it killed millions of people, caused a huge famine. But it was sort of ideologically necessary. You have an ember of something here and we have to make sure that an enclave of the other thing doesn't put it out. If you have raw competition between the kulak type capitalism and what we're trying to build here, the gray goo of the kulaks will just take over.", "We have this ember here. We're going to do worldwide revolution from it. I know that obviously that's not exactly the kind of thing alignment has in mind, but we have an ember here and we've got to make sure that this other thing that's happening on the side doesn't FOOM. Obviously that's not how they would phrase it, but so that it doesn’t get a hold on what we're building here. That's maybe the worry that people who are opposed to alignment have. It’s the second kind of thing, the kind of thing that Stalin was worried about. Obviously, we wouldn't endorse the specific things he did.", "Joe Carlsmith 01:12:36", "When people talk about alignment, they have in mind a number of different types of goals. One type of goal is quite minimal. It's something like, “The AI's don't kill everyone or violently disempower people.” There's a second thing people sometimes want out of alignment, which is much broader. It’s something like, “We would like it to be the case that our AI's are such that when we incorporate them into our society, things are good, that wee just have a good future.”", "I do agree that the discourse about AI alignment mixes together these two goals that I mentioned. I actually mentioned three goals. The most straightforward thing to focus on—I don't blame people for just talking about this one—is just the first one. It's quite robust according to our own ethics, when we think about in which context is it appropriate to try to exert various types of control, or to have more of what I call in the series \"yang,\" which is this active controlling force, as opposed to \"yin,\" which is this more receptive and open, letting go.", "A kind of paradigm context in which we think that is appropriate is if something is an active aggressor against the boundaries and cooperative structures that we've created as a civilization. I talked about the Nazis. In the piece, I talked about how when something is invading, we often think it's appropriate to fight back. We often think it's appropriate to set up structures to prevent and ensure that these basic norms of peace and harmony are adhered to.", "I do think some of the moral heft of some parts of the alignment discourse comes from drawing specifically on that aspect of our morality. We think the AIs are presented as aggressors that are coming to kill you. If that's true, then it's quite appropriate. That’s classic human stuff. Almost everyone recognizes that self-defense, or ensuring basic norms are adhered to, is a justified use of certain kinds of power that would often be unjustified in other contexts. Self-defense is a clear example there.", "I do think it's important though to separate that concern from this other concern about where the future eventually goes. How much do we want to be trying to steer that actively? I wrote the series partly in response to the thing you're talking about. It is true that aspects of this discourse involve the possibility of trying to steer and grip. You have a sense that the universe is about to go off in some direction and you need people to notice that muscle.", "We have a very rich ethical human ethical tradition of thinking about, when it is appropriate to try to exert what sorts of control over which things. Part of what I want to do is that I want us to bring the full force and richness of that tradition to this discussion. It's easy if you're purely in this abstract mode of utility functions and human utility functions. There's this competitor thing with a utility function. Somehow you lose touch with the complexity of how we've been dealing with differences in values and competitions for power. This is classic stuff.", "AI sort of amplifies a lot of the dynamics, but I don't think it's fundamentally new. Part of what I'm trying to say is let's draw on the full wisdom we have here, while obviously adjusting for ways in which things are different.", "Dwarkesh Patel 01:16:33", "There’s one thing the ember analogy brings up about getting a hold of the future is. We're going to go explore space and that's where we expect most of the things that will happen. Most of the people that will live, they’ll be in space. I wonder how much of the high stakes here is not really about AI per se, but it's about space. It's a coincidence that we're developing AI at the same time we are on the cusp of expanding through most of the stuff that exists.", "Joe Carlsmith 01:17:04", "I don't think it's a coincidence. The most salient way we would become able to expand is via some kind of radical acceleration of our technological progress.", "Dwarkesh Patel 01:17:18", "Sorry, let me clarify. If this was just a question of, \"Do we do AGI and explore the solar system?\" and there was nothing beyond the solar system, we FOOM and weird things might happen with the solar system if we get it wrong. Compared to that, billions of galaxies present different stakes. I wonder how much of the discourse hinges on this because of space.", "Joe Carlsmith 01:17:44", "I think for most people, very little. People are really focused on what's going to happen to this world around us that we live in. What's going to happen to me and my kids? Some people spend a lot of time on the space stuff, but I think for the immediately pressing stuff about AI, it doesn’t require that at all.", "Even if you bracket space, time is also very big. We've got 500 million years, a billion years, left on Earth if we don't mess with the sun. Maybe you could get more out of it. That's still a lot. I don't know if it fundamentally changes the narrative.", "Obviously, the stakes are way smaller if you shrink down to the solar system, insofar as you care about what happens in the future or in space. That does change some stuff potentially. A really nice feature of our current situation—depending on the actual nature of the resource pie—is that there's such an abundance of energy and other resources in principle available to a responsible civilization. Tons of stakeholders, especially ones who are able to get really close to amazing outcomes according to their values with comparatively small allocations of resources, can be satisfied. I feel like everyone with satiable values could be really happy with some small fraction of the available pie. We should just satiate all sorts of stuff.", "Obviously, we need to figure out gains from trade and balance. There's a bunch of complexity here but in principle, we're in a position to create a really wonderful scenario for tons of different value systems. Correspondingly, we should be really interested in doing that. I sometimes use this heuristic in thinking about the future: We should be aspiring to really leave no one behind. Who are all the stakeholders here? How do we have a fully inclusive vision of how the future could be good from a very wide variety of perspectives? The vastness of space resources makes that a lot easier and very feasible. If you instead imagine it's a much smaller pie, maybe you face tougher trade-offs. That's an important consideration.", "Dwarkesh Patel 01:20:35", "Is the inclusivity because part of your values includes different potential futures getting to play out? Or is it because of uncertainty about which one is right, so you want to make sure we're not nulling all value if we’re wrong?", "Joe Carlsmith 01:20:57", "It's a bunch of things at once. I'm really into being nice when it's cheap. If you can help someone a lot in a way that's really cheap for you, do it. Obviously, you need to think about trade-offs. There are a lot of people you could be nice to in principle, but I'm very excited to try to uphold the principle of being nice when it's cheap.", "I also really hope that other people uphold that with respect to me, including the AIs. We should be applying the golden rule as we're thinking about inventing these AIs. There’s some way in which I'm trying to embody attitudes towards them that I hope they would embody towards me. It's unclear exactly what the ground of that is, but I really like the golden rule and think a lot about it as a basis for treatment of other beings. If everyone implements the \"be nice when it's cheap\" rule, we potentially get a big Pareto improvement . It's a lot of good deals. It’s that. I'm into pluralism. I've got uncertainty. There's all sorts of stuff swimming around there.", "Also, as a matter of having cooperative and good balances of power and deals and avoiding conflict, I think it’s important to find ways to set up structures that lots of people, value systems, and agents are happy with. That includes non-humans, people in the past, AIs, animals. We really should have a very broad sweep in thinking about what sorts of inclusivity we want to be reflecting in a mature civilization and setting ourselves up for doing that.", "1:22:51 - How should we treat AIs", "Dwarkesh Patel 01:22:51", "I want to go back to what our relationship with these AIs should be. Pretty soon we're talking about our relationship to superhuman intelligences, if we think such a thing is possible. There's a question of what process you use to get there and the morality of gradient descenting on their minds, which we can address later.", "The thing that personally gives me the most unease about alignment is that at least a part of the vision here sounds like you're going to enslave a god. There's just something that feels wrong about that. But then if you don't enslave the god, obviously the god's going to have more control. Are you okay with surrendering most of everything, even if it's like a cooperative relationship you have?", "Joe Carlsmith 01:23:45", "I think we as a civilization are going to have a very serious conversation about what sort of servitude is appropriate or inappropriate in the context of AI development. There are a bunch of disanalogies from human slavery that are important. In particular, the AIs might not be moral patients at all, in which case we need to figure that out. There are ways in which we may be able to have motivations. Slavery involves all this suffering and non-consent. There are all these specific dynamics involved in human slavery. Some of those may or may not be present in a given case with AI, and that's important.", "Overall, we are going to need to stare hard at it. Right now, the default mode of how we treat AIs gives them no moral consideration at all. We're thinking of them as property, as tools, as products, and designing them to be assistants and such. There has been no official communication from any AI developer as to when or under what circumstances that would change. Sothere's a conversation to be had there that we need to have.", "I want to push back on the notion that there are only two options: enslaved god or loss of control. I think we can do better than that. Let's work on it. Let's try to do better. I think we can do better. It might require being thoughtful. It might require having a mature discourse about this before we start taking irreversible moves. But I'm optimistic that we can at least avoid some of the connotations and a lot of the stuff at stake in that kind of binary.", "Dwarkesh Patel 01:25:59", "With respect to how we treat the AIs, I have a couple of contradicting intuitions. The difficulty with using intuitions in this case is that obviously it's not clear what reference class an AI we have control over is. Here’s one example, that's very scary about the things we're going to do to these things. If you read about life under Stalin or Mao , there's one version of telling it that is actually very similar to what we mean by alignment. We do these black box experiments to make it think that it can defect. If it does, we know it's misaligned.", "If you consider Mao's Hundred Flowers Campaign , it’s \"let a hundred flowers bloom. I'm going to allow criticism of my regime and so on.” That lasted for a couple of years. Afterwards, for everybody who did that, it was a way to find the so-called \"snakes.\" Who are the rightists who are secretly hiding? We'll purge them. There was this sort of paranoia about defectors, like \"Anybody in my entourage, anybody in my regime, they could be a secret capitalist trying to bring down the regime.\" That's one way of talking about these things, which is very concerning. Is that the correct reference class?", "Joe Carlsmith 01:27:17", "I certainly think concerns in that vein are real. It is disturbing how easy many of the analogies are with human historical events and practices that we deplore or at least have a lot of wariness towards, in the context of the way you end up talking about AI. It’s about maintaining control over AI, making sure that it doesn't rebel. We should be noticing the reference class that some of that talk starts to conjure. Basically, yes, we should really notice that.", "Part of what I'm trying to do in the series is to bring the full range of considerations at stake into play. It is both the case that we should be quite concerned about being overly controlling or abusive or oppressive. There are all sorts of ways you can go too far. There are concerns about the AIs being genuinely dangerous and genuinely killing us and violently overthrowing us. The moral situation is quite complicated.", "Often when you imagine a sort of external aggressor who's coming in and invading you, you feel very justified in doing a bunch of stuff to prevent that. It's a little bit different when you're inventing the thing and you're doing it incautiously. There's a different vibe in terms of the overall justificatory stance you might have for various types of more kind of power-exerting interventions. That's one feature of the situation.", "Dwarkesh Patel 01:29:34", "The opposite perspective here is that you're doing this sort of vibes-based reasoning of, \"Ah, that looks yucky,\" doing gradient descent on these minds. In the past, a couple of similar cases might have been something like environmentalists not liking nuclear power because the vibes of nuclear don't look green. Obviously that set back the cause of fighting climate change. So the end result of a future you're proud of, a future that's appealing, is set back because your vibes about, \"We would be wrong to brainwash a human.\"You're trying to apply to a disanalogous case where that's not as relevant.", "Joe Carlsmith 01:30:15", "I do think there's a concern here, which I really tried to foreground in the series, that is related to what you're saying. You might be worried that we will be very gentle and nice and free with the AIs, and then they'll kill us. They'll take advantage of that and then it will have been a catastrophe. I opened the series basically with an example. I'm really trying to conjure that possibility at the same time as conjuring the grounds of gentleness. These AIs could both be like moral patients—this sort of new species in the sense that should conjure wonder and reverence—and such that they will kill you.", "I have this example of the documentary Grizzly Man , where there's this environmental activist, Timothy Treadwell . He aspires to approach these grizzly bears. In the summer, he goes into Alaska and he lives with these grizzly bears. He aspires to approach them with this gentleness and reverence. He doesn't carry bear mace. He doesn't use a fence around his camp. He gets eaten alive by one of these bears.", "I really wanted to foreground that possibility in the series. We need to be talking about these things both at once. Bears can be moral patients. AIs can be moral patients. Nazis are moral patients. Enemy soldiers have souls. We need to learn the art of hawk and dove both. There's this dynamic here that we need to be able to hold both sides of as we go into these trade-offs and these dilemmas. A part of what I'm trying to do in the series is really bring it all to the table at once.", "Dwarkesh Patel 01:32:16", "If today I were to massively change my mind about what should be done, the big crux that I have is the question of how weird things end up default, how alien they end up. You made a really interesting argument on your blog post that if moral realism is correct, that actually makes an empirical prediction. The aliens, the ASIs , whatever, should converge on the right morality the same way that they converge on the right mathematics.", "I thought that was a really interesting point. But there's another prediction that moral realism makes. Over time society should become more moral, become better. Of course there is the problem of, \"What morals do you have now? It's the ones that society has been converging towards over time.\" But to the extent that it's happened, one of the predictions of moral realism has been confirmed, so does that mean we should update in favor of moral realism?", "Joe Carlsmith 01:33:24", "One thing I want to flag is that not all forms of moral realism make this prediction. I'm happy to talk about the different forms I have in mind.", "There are also forms of things that look like moral anti-realism—at least in their metaphysics according to me—but which just posit that there's this convergence. It's not in virtue of interacting with some kind of mind-independent moral truth, but just for some other reason. That looks a lot like moral realism at that point. It's universal, everyone ends up there. It's tempting to ask why and whatever answer is a little bit like, \"Is that the Dao? Is that the nature of the Dao?\" even if there's not an extra metaphysical realm in which the moral lives. Moral convergence is a different factor from the existence or non-existence of a morality that's not reducible to natural facts, which is the type of moral realism I usually consider.", "Now, does the improvement of society update us towards moral realism? Maybe it’s a very weak update or something. I’m kind of like, “Which view predicts this more strongly?” It feels to me like moral anti-realism is very comfortable with the observation that people with certain values have those values.", "There's obviously this first thing. If you're the culmination of some process of moral change, then it's very easy to look back at that process and say \"Ah, moral progress. The arc of history bends towards me.” If there were a bunch of dice rolls along the way, you might think, \"Oh wait, that's not rational. That's not the march of reason.\" There's still empirical work you can do to tell whether that's what's going on.", "On moral anti-realism, consider Aristotle and us. Has there been moral progress by Aristotle's lights and our lights too? You could think, \"Ah, doesn't that sound a bit like moral realism? These hearts are singing in harmony. That's the moral realist thing, right? The anti-realist thing is that hearts all go in different directions, but you and Aristotle apparently are both excited about the march of history.”", "There's an open question about whether that's true. What are Aristotle's reflective values? Suppose it is true. That's fairly explicable in moral anti-realist terms. You can roughly say that you and Aristotle are sufficiently similar. You endorse sufficiently similar reflective processes. Those processes are in fact instantiated in the march of history. So history has been good for both of you.", "There are worlds where that isn't the case. So there's a sense in which maybe that prediction is more likely for realism than anti-realism, but it doesn't move me very much.", "Dwarkesh Patel 01:36:40", "I don't know if moral realism is the right word, but you mentioned the thing. There's something that makes hearts converge to the thing we are or the thing we would be upon reflection. Even if it's not something that's instantiated in a realm beyond the universe, it's a force that exists that acts in a way we're happy with. To the extent that it doesn't exist and you let go of the reins and you get the paper clippers, it feels like we were doomed a long time ago? We were just different utility functions banging against each other. Some of them have parochial preferences, but it's just combat and some guy won.", "In the other world it’s “No, these are where the hearts are supposed to go or it's only by catastrophe that they don't end up there.” That feels like the world where it really matters. The initial question I asked was, “What would make us think that alignment was a big mistake?” In the world where hearts just naturally end up like the thing we want, maybe it takes an extremely strong force to push them away from that. That extremely strong force is you solve technical alignment, the blinders on the horse's eyes. In the worlds that really matter, we're like, \"Ah, this is where the hearts want to go.\" In that world, maybe alignment is what messes us up.", "Joe Carlsmith 01:38:12", "So the question is, do the worlds that matter have this kind of convergent moral force, whether metaphysically inflationary or not, or are those the only ones that matter?", "Dwarkesh Patel 01:38:25", "Maybe what I meant was, in those worlds you’re kind of fucked.", "Joe Carlsmith 01:38:32", "Or the worlds without that, the worlds with no Dao. Let's use the term “Dao” for this kind of convergent morality.", "Dwarkesh Patel 01:38:38", "Over the course of millions of years, it was going to go somewhere one way or another. It wasn't going to end up in your particular utility function.", "Joe Carlsmith 01:38:46", "Okay, let's distinguish between ways you can be doomed. One way is philosophical. You could be the sort of moral realist, or realist-ish person of which there are many, who have the following intuition. They're like, \"If not moral realism, then nothing matters. It's dust and ashes. It is my metaphysics and/or normative view or the void.\"", "This is a common view. At least some comments of Derek Parfit suggest this view. I think lots of moral realists will profess this view. With Eliezer Yudkowsky, I think there is some sense in which his early thinking was inflected with this sort of thought. He later recanted. It's very hard. I think this is importantly wrong. So here's my case. I have an essay about this. It's called \"Against the normative realist's wager.\" Here's the case that convinces me.", "Imagine that a metaethical fairy appears before you. This fairy knows whether there is a Dao. The fairy says, \"Okay, I'm going to offer you a deal. If there is a Dao, then I'm going to give you $100. If there isn't a Dao, then I'm going to burn you and your family and a hundred innocent children alive.\" Okay. So my claim: don't take this deal. This is a bad deal.", "You're holding hostage your commitment to not being burned alive. I go through in the essay a bunch of different ways in which I think this is wrong. I think these people who pronounce \"moral realism or the void\" don’t actually think about bets like this. I'm like, \"No, okay. So really is that what you want to do?\" No. I still care about my values. My allegiance to my values outstrips my commitments to various metaethical interpretations of my values. The sense in which we care about not being burned alive is much more solid than our reasoning on what matters.", "That's the sort of philosophical doom. It sounded like you were also gesturing at a sort of empirical doom. “If it's just going in a zillion directions, come on, you think it's going to go in your direction? There's going to be so much churn. You're just going to lose. You should give up now and only fight for the realism worlds.” You have to do the expected value calculation. You have to actually have a view. How doomed are you in these different worlds? What's the tractability of changing different worlds? I'm quite skeptical of that, but that's a kind of empirical claim.", "I'm also just low on this \"everyone converges\" thing. You train a chess-playing AI. Or somehow you have a real paperclipper and you’re like \"Okay, go and reflect.\" Based on my understanding of how moral reasoning works—if you look at the type of moral reasoning that analytic ethicists do—it's just reflective equilibrium. They just take their intuitions and they systematize them. I don't see how that process gets a sort of injection of the mind-independent moral truth.", "If you start with only all of your intuitions to maximize paperclips. I don't see how you end up doing some rich human morality. It doesn't look to me like how human ethical reasoning works. Most of what normative philosophy does is make consistent and systematize pre-theoretic intuitions. But we'll get evidence about this.", "In some sense, I think this view predicts that you keep trying to train the AIs to do something and they keep being like, \"No, I'm not gonna do that. No, that's not good.\" So they keep pushing back. The momentum of AI cognition is always in the direction of this moral truth. Whenever we try to push it in some other direction, we'll find resistance from the rational structure of things.", "Dwarkesh Patel 01:43:32", "Actually, I've heard from researchers who are doing alignment that for red teaming inside these companies, they will try to red team a base model. So it's not been RLHF'd. It’s just “predict next token,” the raw, crazy, shoggoth . They try to get this thing to help with, \"Hey, help me make a bomb, help me, whatever.\" They say that it's odd how hard it tries to refuse, even before it's been RLHF'd.", "Joe Carlsmith 01:43:58", "I mean it will be a very interesting fact if it's like, \"Man, we keep training these AIs in all sorts of different ways. We're doing all this crazy stuff and they keep acting like bourgeois liberals.\" Or they keep professing this weird alien reality. They all converge on this one thing. They're like, \"Can't you see? It's Zorgo. Zorgo is the thing.” and it’s all the AIs. That would be interesting, very interesting.", "My personal prediction is that's not what we see. My actual prediction is that the AIs are going to be very malleable. If you push an AI towards evil, it'll just go. Obviously we're talking reflectively consistent evil. There's also a question with some of these AIs. Will they even be consistent in their values?", "I like this image of the blindered horses. We should be really concerned if we're forcing facts on our AIs. One of the clearest things about human processes of reflection, the easiest thing, is not acting on the basis of an incorrect empirical picture of the world. So if you find yourself telling Ray, \"By the way, this is true and I need you to always be reasoning as though blah is true.\" I'm like, \"Ooh, I think that's a no-no from an anti-realist perspective too.\" Because I want my reflective values to be formed in light of the truth about the world.", "This is a real concern. As we move into this era of aligning AIs, I don't actually think this binary between values and other things is gonna be very obvious in how we're training them. It's going to be much more like ideologies. You can just train an AI to output stuff, output utterances. You can easily end up in a situation where you decided that blah is true about some issue, an empirical issue. Not a moral issue.", "So I think people should not, for example, hard code belief in God into their AIs. Or I would advise people to not hard code their religion into their AIs if they also want to discover if their religion is false. Just in general, if you would like to have your behavior be sensitive to whether something is true or false, it's generally not good to etch it into things. So that is definitely a form of blinder we should be really watching out for.", "I have enough credence on some sort of moral realism. I'm hoping that if we just do the anti-realism thing of just being consistent, learning all the stuff, reflecting… If you look at how moral realists and moral anti-realists actually do normative ethics, it's basically the same. There's some amount of different heuristics on things properties like simplicity and stuff like that. But they're mostly just doing the same game.", "Also metaethics is itself a discipline that AIs can help us with. I'm hoping that we can just figure this out either way. So if moral realism is somehow true, I want us to be able to notice that. I want us to be able to adjust accordingly. I'm not like writing off those worlds and being like, \"Let's just totally assume that's false.\" The thing I really don't want to do is write off the other worlds where it's not true because my guess is it's not true. Stuff still matters a ton in those worlds too.", "Dwarkesh Patel 01:47:34", "Here’s one big crux. You're training these models. We were in this incredibly lucky situation where it turns out the best way to train these models is to just give them everything humans have ever said, written, thought. Also these models, the reason they get intelligence is because they can generalize. They can grok the gist of things. Should we just expect this to be a situation which leads to alignment? How exactly does this thing that's trained to be an amalgamation of human thought become a paperclipper?", "The thing you get for free is that it's an intellectual descendant. The paperclipper is not an intellectual descendant, whereas the AI which understands all the human concepts but then gets stuck on some part of it that we aren't totally comfortable with, is. It feels like an intellectual descendant in the way we care about.", "Joe Carlsmith 01:48:34", "I'm not sure about that. I'm not sure I care about a notion of intellectual descendant in that sense. I mean literal paperclips are a human concept. I don't think any old human concept will do for the thing we're excited about. The stuff that I would be more interested in the possibility of getting for free are things like consciousness , pleasure, other features of human cognition.", "There are paperclippers and there are paperclippers. If the paperclipper is an unconscious kind of voracious machine. it appears to you as a cloud of paper clips. That's one vision. Imagine the paperclipper is a conscious being that loves paperclips. It takes pleasure in making paperclips. That's like a different thing, right?", "It's not necessarily the case that it makes the future all paperclippy. It’s probably not optimizing for consciousness or pleasure, right? It cares about paperclips. Maybe eventually if it's suitably certain, it turns itself into paperclips and who knows. It’s still a somewhat different moral mode. There's also a question of does it try to kill you and stuff like that.", "But there are features of the agents we're imagining—other than the kind of thing that they're staring at—that can matter to our sense of sympathy, similarity. People have different views about this. One possibility is that the thing we care about in consciousness or sentience is super contingent and fragile. Most smart minds are not conscious, right?", "The thing we care about with consciousness is hacky, contingent. It's a product of specific constraints, evolutionarily genetic bottlenecks, etc. That's why we have this consciousness. Consciousness presumably does some sort of work for us, but you can get similar work done in a different mind in a very different way. That's the sort of \"consciousness is fragile\" view,", "There's a different view, which is that consciousness is something that's quite structural. It's much more defined by functional roles, like self-awareness, a concept of yourself, maybe higher-order thinking, stuff that you really expect in many sophisticated minds. In that case, now actually consciousness isn't as fragile as you might have thought. Now actually lots of beings, lots of minds are conscious and you might expect at the least that you're going to get conscious superintelligence. They might not be optimizing for creating tons of consciousness, but you might expect consciousness by default.", "Then we can ask similar questions about something like valence or pleasure or the kind of character of the consciousness. You can have a kind of cold, indifferent consciousness that has no human or emotional warmth, no pleasure or pain. Dave Chalmers has some papers about Vulcans and he talks about how they still have moral patienthood. That's very plausible. I do think it's an additional thing you could get for free or get quite commonly depending on its nature, something like pleasure.", "Again, we then have to ask how janky is pleasure, how specific and contingent is the thing we care about in pleasure versus how robust is this as a functional role in minds of all kinds. I personally don't know on this stuff. I don't think this is enough to get you alignment or something. I think it's at least worth being aware of these other features. We're not really talking about the AI's values in this case. We're talking about the structure of its mind and the different properties the minds have. I think that could show up quite robustly.", "1:52:33 - Balancing Being a Humanist and a Scholar", "Dwarkesh Patel 01:52:33", "Part of your day job is writing these Section 2/2.5-type reports. Part of it is like, “society is like a tree that's growing towards the light.” What is it like context switching between the two of them?", "Joe Carlsmith 01:52:52", "I actually find it's kind of quite complementary. I will write these more technical reports and then do more literary and philosophical writing. They both draw in different parts of myself, and I try to think about them in different ways. I think about some of the reports as much more like, “I'm more fully optimizing for trying to do something impactful.” There's more of an impact orientation there.", "In essay writing, I give myself much more leeway to let other parts of myself and other parts of my concerns come out, self-expression and aesthetics and other sorts of things. They’re both part of an underlying similar concern or an attempt to have a kind of integrated orientation towards the situation.", "Dwarkesh Patel 01:53:51", "Could you explain the nature of the transfer between the two, in particular from the literary side to the technical side? Rationalists are sort of known for having an ambivalence towards great works or humanities. Are they missing something crucial because of that?", "One thing you notice in your essays is lots of references to epigraphs, to lines in poems or essays that are particularly relevant. I don't know. Are the rest of the rationalists missing something because they don't have that kind of background?", "Joe Carlsmith 01:54:27", "I think some rationalists, lots of rationalists, love these different things.", "Dwarkesh Patel 01:54:32", "I’m referring specifically to SBF ’s post about how the base rates of Shakespeare being a great writer. He also argued that books can be condensed to essays.", "Joe Carlsmith 01:54:41", "On the general question of how people should value great works, people can fail in both directions. Some people like SBF and others are interested in puncturing a certain kind of sacredness and prestige that people associate with some of these works. As a result, they can miss some of the genuine value. But I think they're responding to a real failure mode on the other end, which is to be too enamored of this prestige and sacredness and to siphon it off as some weird legitimating function for your own thought instead of thinking for yourself. You can lose touch with what you actually think or learn from it.", "Sometimes even with these epigraphs I’m careful. I'm not saying I'm immune from these vices. I think there can be a like, “Ah, but Bob said this and it's very deep.” These are humans like us, right? The canon and other great works have a lot of value. Sometimes it borders on the way people read scripture. There's a kind of scriptural authority that people will sometimes ascribe to these things. You can fall off on both sides of the horse.", "Dwarkesh Patel 01:56:05", "I remember I was talking to somebody who at least is familiar with rationalist discourse. He was asking me what I was interested in these days? I was saying something about how this part of Roman history is super interesting. His first response was like, “Oh, you know, it's really interesting when you look at these secular trends of Roman times to what happened in the Dark Ages versus the Enlightenment.”", "For him, the story of that was just how it contributed to the big secular picture, the particulars didn't matter. There's no interest in that. It’s just like, “if you zoom out at the biggest level, what's happening here.”", "Whereas there's also the opposite failure mode when people study history. Dominic Cummings writes about this because he is endlessly frustrated with the political class in Britain. He'll say things like, “They study politics, philosophy and economics . A big part of it is just being really familiar with these poems and reading a bunch of history about the War of the Roses or something.” But he's frustrated that they have all these kings memorized, but they take away very little in terms of lessons from these episodes. It's almost like entertainment, watching Game of Thrones , for them. Whereas he thinks we're repeating certain mistakes that he's seen in history. He can generalize in a way they can’t. So the first one seems like a mistake. I think C.S. Lewis talks about it in one of the essays you cited. If you see through everything, you're really blind. If everything is transparent…", "Joe Carlsmith 01:57:38", "I think there's kind of very little excuse for not learning history. I'm not saying I have learned enough history. Even when I try to channel some skepticism towards great works, I think that doesn't generalize to thinking it's not worth understanding human history. Human history is just so clearly crucial to understand. It's what structured and created all of the stuff.", "There's an interesting question about what's the level of scale at which to do that and how much should you be looking at details, looking at macro trends. That's a dance. It's nice for people to be at least attending to the macro narrative. There's some virtue in having a worldview, really building a model of the whole thing. I think that sometimes gets lost in the details. But obviously, the details are what the world is made of. If you don't have those, you don't have data at all. It seems like there's some skill in learning history.", "Dwarkesh Patel 01:58:55", "Well, this actually seems related to your post on sincerity. Maybe I'm getting the vibe of the piece right. Certain intellectuals have a vibe of shooting the shit. They're just trying out different ideas. How do these analogies fit together? Those seem closer to looking at the particulars and like, “Oh, this is just like that one time in the 15th century where they overthrew this king…”", "Whereas this guy who was like, “Oh, if you look at the growth models from a million years ago to now, here's what's happening.” That one has a more sincere flavor. Some people, especially when it comes to AI discourse, have a very sincere mode of operating. “I've thought through my bio anchors and I disagree with this premise. My effective compute estimate is different in this way. Here's how I analyze the scaling laws.” If I could only have one person to help me guide my decisions on AI, I might choose that person.", "But if I had ten different advisors at the same time, I might prefer the shooting-the-shit type characters who have these weird esoteric intellectual influences. They're almost like random number generators. They're not especially calibrated, but once in a while they'll be like, “Oh, this one weird philosopher I care about, or this one historical event I'm obsessed with has an interesting perspective on this.” They tend to be more intellectually generative as well.", "I think one big part of it is that if you are so sincere, you're like, “Oh, I’ve thought through this. Obviously, ASI is the biggest thing that's happening right now. It doesn't really make sense to spend a bunch of your time thinking about how the Comanches lived? What is the history of oil? How did Girard think about conflict? What are you talking about? Come on, ASI is happening in a few years.” But therefore, the people who go on these rabbit holes because they're just trying to shoot the shit, I feel are more generative.", "Joe Carlsmith 02:01:15", "It might be worth distinguishing between intellectual seriousness and the diversity and idiosyncrasies of one's interests. There might be some correlation. Maybe intellectual seriousness is also distinct from \"shooting the shit.\" There’s a bunch of different ways to do this. Having exposure to various data sources and perspectives is valuable. It's possible to curate your intellectual influences too rigidly in virtue of some story about what matters. It's good to give yourself space to explore topics that aren't necessarily \"the most important thing.\" Different parts of yourself aren't isolated. They feed into each other. It’s a better way to be a richer and fuller human being in a bunch of ways. Also, these sorts of data can be really directly relevant.", "Some intellectually sincere individuals I know who focus on the big picture also possess an impressive command of a wide range of empirical data. They're really interested in empirical trends, not just abstract philosophies. It’s not just history and the march of reason. They’re really in the weeds. There’s an “in the weeds” virtue that I think is closely related to seriousness and sincerity.", "There's a different dimension of trying to get it right versus throwing ideas out there. Some people ask, \"What if it's like this?\" or \"I have a hammer, what if I hit everything with it?\" There's room for both approaches, but I think just getting it right is undervalued. It depends on the context. Certain intellectual cultures incentivize saying something new, original, flashy, or provocative. There’s various cultural and social dynamics. People are being performative and doing status-related things. There’s a bunch of stuff that goes on when people do thinking. But if something's really important, just get it right. Sometimes it's boring, but that doesn't matter.", "Things are also less interesting if they're false. Sometimes there's a useful process where someone says something provocative, and you have to think through why you believe it's false. It’s an epistemic project. For example, if someone says, \"Medical care doesn't work,\" you have to consider how you know it does work. There's room for that. But ultimately, real profundity is true. Things become less interesting if they're not true. It's possible to lose touch with that in pursuit of being flashy.", "2:05:02 - Explore exploit tradeoffs and AI", "Dwarkesh Patel 02:05:02", "After interviewing Leopold , I realized I hadn't thought about the geopolitical angle of AI. The national security implications are a big deal. Now I wonder how many other crucial aspects we might be missing. Even if you're focused on AI's importance, being curious about various topics, like what's happening in Beijing, might help you spot important connections later. There might not be an exact trade-off, but maybe there's an optimal explore-exploit balance where you're constantly searching things out. I don’t know practically if it works out that well. But that experience made me think that I should try to expand my horizons in an undirected way because there’s lots of different things you have to understand about the world to understand any one thing.", "Joe Carlsmith 02:06:23", "There's also room for division of labor. There can be people trying to draw many pieces together to form an overall picture, people going deep on specific pieces, and people doing more generative work, throwing ideas out there to see what sticks. All the epistemic labor also doesn't need to be located in one brain. It depends on your role in the world and other factors.", "Dwarkesh Patel 02:06:48", "In your series, you express sympathy with the idea that even if an AI, or I guess any sort of agent that doesn't have consciousness, has a certain wish and is willing to pursue it non-violently, we should respect its rights to pursue that. I'm curious where that's coming from because conventionally I think the thing matters because it's conscious and its conscious experience as a result of that pursuit matters.", "Joe Carlsmith 02:07:23", "I don't know where this discourse leads. I'm just suspicious of the amount of ongoing confusion that seems present in our conception of consciousness.", "People talk about life and élan vital . Élan vital was this hypothesized life force that is the thing at stake in life. We don't really use that concept anymore. We think that's a little bit broken. I don't think you want to have ended up in a position of saying, \"Everything that doesn't have élan vital doesn't matter\" or something. Somewhat similarly if you're like, \"No, there's no such thing as élan vital, but surely life exists.\" I'm like, \"Yeah, life exists. I think consciousness exists too.\" It depends on how we define the terms, it might be a kind of verbal question.", "Even once you have a reductionist conception of life, it's possible that it becomes less attractive as a moral focal point. Right now we really think of consciousness as a deep fact. Take cellular automata . That is self-replicating. It has some information. Is that alive? It's not that interesting. It's a kind of verbal question, right? Philosophers might get really into, \"Is that alive?\" But you're not missing anything about this system. There's no extra life that's springing up. It's just alive in some senses, not alive in other senses.", "I really think that's not how we intuitively think about consciousness. We think whether something is conscious is a deep fact. It's this really deep difference between being conscious or not. Is someone home? Are the lights on? I have some concern that if that turns out not to be the case, then this is going to have been like a bad thing to build our entire ethics around.", "To be clear, I take consciousness really seriously. I'm not one of these people like, \"Oh, obviously consciousness doesn't exist\" or something. But I also notice how confused I am and how dualistic my intuitions are. I'm like, \"Wow, this is really weird.\" So I'm just like, “error bars around this.”", "There's a bunch of other things going on in my wanting to be open to not making consciousness a fully necessary criteria. I definitely have the intuition that consciousness matters a ton. I think if something is not conscious—and there's like a deep difference between conscious and unconscious—then I definitely have the intuition that there's something that matters especially a lot about consciousness. I'm not trying to be dismissive about the notion of consciousness. I just think we should be quite aware of how ongoingly confused we are about its nature.", "Dwarkesh Patel 02:10:15", "Suppose we figure out that consciousness is just a word we use for a hodgepodge of different things, only some of which encompass what we care about. Maybe there are other things we care about that are not included in that word, similar to the life force analogy. Where do you then anticipate that would leave us as far as ethics goes? Would there then be a next thing that's like consciousness? What do you anticipate that would look like?", "Joe Carlsmith 02:10:46", "There's a class of people called illusionists in philosophy of mind , who will say consciousness does not exist. There are different ways to understand this view, but one version is to say that the concept of consciousness has built into it too many preconditions that aren't met by the real world. So we should chuck it out like élan vital. The proposal is at least phenomenal consciousness, or qualia , what it's like to be a thing. They'll just say this is sufficiently broken, sufficiently chock full of falsehoods that we should just not use it.", "On reflection, I do actually expect to continue to care about something like consciousness quite a lot, and to not end up deciding that my ethics is better if it doesn't make any reference to that. At least, there are some things quite nearby to consciousness. Something happens when I stub my toe. It's unclear exactly how to name it, but there’s something about that I'm pretty focused on.", "If you're asking where things go, I have a bunch of credence that in the end we end up caring a bunch about consciousness just directly. If we don't... Yeah, where will ethics go? Where will a completed philosophy of mind go? It’s very hard to say.", "A move that people might make, if you get a little bit less interested in the notion of consciousness, is some slightly more animistic view. What's going on with the tree? You're maybe not talking about it as a conscious entity necessarily, but it's also not totally unaware or something. The consciousness discourse is rife with these funny cases where it's like, \"Oh, those criteria imply that this totally weird entity would be conscious\" or something like that.", "That’s especially the case if you're interested in some notion of agency or preferences. A lot of things can be agents, corporations, all sorts of things. Is a corporation conscious? Oh man. But one place it could go in theory is that you start to view the world as animated by moral significance in richer and subtler structures than we're used to. Plants or weird optimization processes are outflows of complex… I don't know. Who knows exactly what you end up seeing as infused with the sort of thing that you ultimately care about. But it is possible that it includes a bunch of stuff that we don't normally ascribe consciousness to.", "Dwarkesh Patel 02:13:45", "You say \"a complete theory of mind,\" and presumably after that, a more complete ethic. Even the notion of a reflective equilibrium implies, \"Oh, you'll be done with it at some point.\" You just sum up all the numbers and then you've got the thing you care about. This might be unrelated to the same sense we have in science. The vibe you get when you're talking about these kinds of questions is that, “Oh, we're rushing through all the science right now. We've been churning through it. It's getting harder to find because there's some cap. You find all the things at some point.”", "Right now it's super easy because a semi-intelligent species has barely emerged and the ASI will just rush through everything incredibly fast. You will either have aligned its heart or not. In either case, it'll use what it's figured out about what is really going on and then expand through the universe and exploit. It’ll do the tiling or maybe some more benevolent version of the “tiling”. That feels like the basic picture of what's going on.", "We had dinner with Michael Nielsen a few months ago. His view is that this just keeps going forever, or close to forever. How much would it change your understanding of what's going to happen in the future if you were convinced that Nielsen is right about his picture of science?", "Joe Carlsmith 02:15:15", "There are a few different aspects. I don't claim to really understand Michael's picture here. My memory was that it was like, “Sure, you get the fundamental laws.” My impression was that he expects physics to get solved or something, maybe modulo the expensiveness of certain experiments. But the difficulty is such that, even granted that you have the kind of basic laws down, it still actually doesn't let you predict where, at the macro scale, various useful technologies will be located. There's still this big search problem.", "I'll let him speak for himself on what his take is here. My memory was that it was like, “Sure you get the fundamental stuff, but that doesn't mean you get the same tech.” I'm not sure if that's true. If that's true, what kind of difference would it make? In some sense you have to, in a more ongoing way, make trade-offs between investing in further knowledge and further exploration versus exploiting and acting on your existing knowledge. You can't get to a point where you're like, \"And we're done now.\" As I think about it, I suspect that was always true.", "I remember talking to someone and I was like, \"Ah at least in the future, we should really get all the knowledge.\" He was like, \"You want to know the output of every Turing machine ?\" In some sense, there's a question of what it would actually be to have completed knowledge? That's a rich question in its own right. It's not necessarily that we should imagine, on any picture necessarily, that you've got everything. On any picture, in some sense, you could end up with this case where you cap out. There's some collider that you can't build or whatever. There's something that is too expensive or whatever and everyone caps out there.", "There's a question of, “Do you cap?” There's a question of, “How contingent is the place you go?” If it’s contingent, one prediction that makes is that you'll see more diversity across our universe or something. If there are aliens, they might have quite different tech. If people meet, you don't expect them to be like, \"Oh, you got your thing. I got our version.\" It’s more like, \"Whoa, that thing. Wow.\" That's one thing.", "If you expect more ongoing discovery of tech, then you might also expect more ongoing change and upheaval and churn, insofar as technology is one thing that really drives change in civilization. That could be another factor. People sometimes talk about lock-in. They envision this point at which civilization is settled into some structure or equilibrium or something. Maybe you get less of that. Maybe that’s more about the pace rather than contingency or caps, but that's another factor.", "It is interesting. I don't know if it changes the picture fundamentally of earth civilization. We still have to make trade-offs about how much to invest in research versus acting on our existing knowledge. But it has some significance.", "Dwarkesh Patel 02:18:41", "We were at a party and somebody mentioned this. We were talking about how uncertain we should be about the future? They were like, “There are three things I'm uncertain about. What is consciousness? What is information theory? What are the basic laws of physics? I think once we get that, we're done.\" It’s like, \"Oh you'll figure out what's the right kind of hedonium .\" It has that vibe. Whereas this is more like, \"Oh you're constantly churning through.\" It has more of a flavor of the becoming that the attunement picture implies. I think it's more exciting. It's not just \"Oh, you figured out the things in the 21st century and then you just…”", "Joe Carlsmith 02:19:26", "I sometimes think about these two categories of views. There are people who think, “We’re almost there with the knowledge.” We've basically got the picture, where the picture is that the knowledge is all just totally sitting there. You just have to be scientifically mature at all, and then it's just going to all fall together.", "Everything past that is going to be this super expensive, not super important thing. Then there's a different picture, which is much more of this ongoing mystery, \"Oh man, there's going to be more and more…\" We may expect more radical revisions to our worldview.", "I'm drawn to both. We're pretty good at physics. A lot of our physics is quite good at predicting a bunch of stuff, at least that's my impression from reading some physicists. Who knows?", "Dwarkesh Patel 02:20:23", "Your dad’s a physicist though, right?", "Joe Carlsmith 02:20:24", "Yeah but this isn't coming from my dad. There's a blog post by Sean Carroll or something. He's like, \"We really understand a lot of the physics that governs the everyday world. We're really good at a lot of it.” I'm generally pretty impressed by physics as a discipline. That could well be right.", "On the other hand these guys had a few centuries. But I think that's interesting and it leads to something different. There's something about the endless frontier. There is a draw to that from an aesthetic perspective of the idea of continuing to discover stuff.", "At the least, I think you can't get full knowledge. There's some way in which you're part of the system. The knowledge itself is part of the system. If you imagine that you try to have full knowledge of what the future of the universe will be like…” I don't know. I'm not totally sure that's true.", "Dwarkesh Patel 02:21:21", "It has a halting problem kind of property, right?", "Joe Carlsmith 02:21:23", "There's a little bit of a loopiness. There are probably fixed points in that where you could be like, \"Yep, I'm gonna do that.\" I at least have the question, when people imagine the completion of knowledge, exactly how well does that work? I'm not sure.", "Dwarkesh Patel 02:21:43", "You had a passage in your essay on utopia . Can I ask you to read that passage real quick?", "Joe Carlsmith 02:22:16", "\"I'm inclined to think that utopia, however weird, would also be in a certain sense recognizable; that if we really understood and experienced it, we would see in it the same thing that made us sit bolt upright long ago when we first touched love, joy, beauty; that we would feel in front of the bonfire the heat of the ember from which it was lit. There would be, I think, a kind of remembering.\"", "Dwarkesh Patel 02:22:44", "Where does that fit into this picture?", "Joe Carlsmith 02:22:47", "It's a good question. If there's no part of me that recognizes it as good, then I'm not sure that it's good according to me.It is a question of what it takes for it to be the case, that a part of you recognizes it is good. But if there's really none of that, then I'm not sure it's a reflection of my values at all.", "Dwarkesh Patel 02:23:23", "There's a sort of tautological thing you can do where it's like, \"Ah, if I went through the processes which led to me discovering what was good, which we might call reflection, then it was good.\" By definition though, you ended up there because… you know what I mean?", "Joe Carlsmith 02:23:36", "If you gradually transform me into a paper clipper, then I will eventually be like, \"I saw the light, I saw the true paperclips.\" That's part of what's complicated about this thing about reflection. You have to find some way of differentiating between the development processes that preserve what you care about and the development processes that don't. That in itself is this fraught question. It itself requires taking some stand on what you care about and what sorts of meta-processes you endorse and all sorts of things.", "But you definitely shouldn't just be like, “It is not a sufficient criteria that the thing at the end thinks it got it right.” That's compatible with it having gone wildly off the rails.", "Dwarkesh Patel 02:24:20", "You had a very interesting sentence in one of your posts . You said, \"Our hearts have, in fact, been shaped by power. So we should not be at all surprised if the stuff we love is also powerful.\" What's going on there? What did you mean there?", "Joe Carlsmith 02:24:45", "The context on that post is that I'm talking about this hazy cluster, which I call in the essay, \"niceness/liberalism/boundaries.\" It’s this somewhat more minimal set of cooperative norms involved in respecting the boundaries of others and cooperation and peace amongst differences and tolerance and stuff like that, opposed to your favored structure of matter, which is sometimes the paradigm of values that people use in the context of AI risk.", "I talk for a while about the ethical virtues of these norms. Why do we have these norms? One important feature of these norms is that they're effective and powerful. Secure boundaries save resources wasted on conflict. Liberal societies are often better to live in. They're better to immigrate to. They're more productive. Nice people are better to interact with. They're better to trade with and all sorts of things.", "Look at both why at a political level we have various political institutions, and more deeply into our evolutionary past and how our moral cognition is structured. It seems pretty clear that various kinds of forms of cooperation and game theoretic dynamics and other things went into shaping what we now, at least in certain contexts, also treat as a kind of intrinsic or terminal value.", "These values that have instrumental functions in our society also get reified in our cognition as intrinsic values in themselves. I think that's okay. I don't think that's a debunking. All your values are something that kind of stuck and got treated as terminally important. In the context of the series, I'm talking about deep atheism and the relationship between what we're pushing for and what nature is pushing for or what sort of pure power will push for.", "It's easy to say, “Well there's paperclips, which is just one place you can steer and pleasure is another place you can steer or something. These are just arbitrary directions.” Whereas I think some of our other values are much more structured around cooperation and things that also are effective and functional and powerful.", "So that's what I mean there. There's a way in which nature is a little bit more on our side than you might think. Part of who we are has been made by nature's way. That is in us. Now I don't think that's enough necessarily for us to beat the gray goo. We have some amount of power built into our values, but that doesn't mean it's going to be such that it is arbitrarily competitive. It’s still important to keep in mind. It's important to keep in mind in the context of integrating AIs into our society. We've been talking a lot about the ethics of this, but there are also instrumental and practical reasons to want to have forms of social harmony and cooperation with AIs with different values.", "We need to be taking that seriously and thinking about what it is to do that in a way that's genuinely legitimate, a project that is a just incorporation of these beings into our civilization. There's the justice part and there's also, \"Is it compatible with people? Is it a good deal? Is it a good bargain for people?” To the extent we're very concerned about AIs rebelling or something like that, a thing you can do is make civilization better for someone. That's an important feature of how we have in fact structured a lot of our political institutions and norms and stuff like that. That's the thing I'm getting at in that quote.", "Dwarkesh Patel 02:29:03", "Okay. I think that's an excellent place to close. Joe, thanks for coming on the podcast. We discussed the ideas in the series. People might not appreciate, if they haven't read the series, how beautifully written it is. We didn't cover everything, but there's a bunch of very interesting ideas.", "As somebody who has talked to people about AI for a while, there are things I haven't encountered anywhere else. Obviously, no part of the AI discourse is nearly as well written. It is a genuinely beautiful experience to listen to the podcast version, which is in your own voice. So I highly recommend people do that. It's joecarlsmith.com where they can access this. Joe, thanks so much for coming on the podcast.", "Joe Carlsmith 02:29:48", "Thank you for having me. I really enjoyed it." ]
[ "http://joecarlsmith.com", "https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer", "https://en.wikipedia.org/wiki/AI_alignment", "https://plato.stanford.edu/entries/agency/", "https://en.wikipedia.org/wiki/Gradient_descent", "https://en.wikipedia.org/wiki/Modulo", "https://en.wikipedia.org/wiki/We_Are_What_We_Pretend_to_Be:_The_First_and_Last_Works", "https://en.wikipedia.org/wiki/Eliezer_Yudkowsky", "https://en.wikipedia.org/wiki/Red_team#:~:text=A%20red%20team%20is%20a,organization%20can%20improve%20their%20defenses.", "https://openai.com/index/introducing-the-model-spec/", "https://en.wikipedia.org/wiki/Chinese_water_torture#:~:text=Chinese%20water%20torture%20or%20a,mental%20deterioration%20on%20the%20subject.", "https://en.wikipedia.org/wiki/Moral_patienthood", "https://www.dwarkeshpatel.com/p/eliezer-yudkowsky", "https://en.wikipedia.org/wiki/Floating-point_arithmetic", "https://en.wikipedia.org/wiki/Superintelligence", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi", "https://www.dwarkeshpatel.com/p/carl-shulman", "https://en.wikipedia.org/wiki/C._S._Lewis", "https://amzn.to/4dRQXub", "https://plato.stanford.edu/entries/nietzsche/", "https://plato.stanford.edu/entries/naturalism/", "https://plato.stanford.edu/entries/daoism/", "https://old-wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff", "https://en.wikipedia.org/wiki/Gray_goo", "https://en.wikipedia.org/wiki/Nick_Land", "https://www.nps.gov/thje/learn/photosmultimedia/quotations.htm", "https://en.wikipedia.org/wiki/Riemann_hypothesis", "https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy", "https://en.wikipedia.org/wiki/Kulak", "https://en.wikipedia.org/wiki/Joseph_Stalin", "https://en.wikipedia.org/wiki/Golden_Rule", "https://en.wikipedia.org/wiki/Pareto_efficiency", "https://en.wikipedia.org/wiki/Mao_Zedong", "https://en.wikipedia.org/wiki/Hundred_Flowers_Campaign", "https://en.wikipedia.org/wiki/Grizzly_Man", "https://en.wikipedia.org/wiki/Timothy_Treadwell", "https://plato.stanford.edu/entries/moral-realism/", "https://en.wikipedia.org/wiki/Superintelligence", "https://plato.stanford.edu/entries/aristotle/", "https://en.wikipedia.org/wiki/Derek_Parfit", "https://joecarlsmith.com/2022/10/09/against-the-normative-realists-wager/", "https://plato.stanford.edu/entries/metaethics/", "https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html", "https://plato.stanford.edu/entries/consciousness/", "https://en.wikipedia.org/wiki/David_Chalmers", "https://selfawarepatterns.com/2020/11/20/consciousness-and-moral-status/", "https://en.wikipedia.org/wiki/Rationalism", "https://en.wikipedia.org/wiki/Sam_Bankman-Fried", "https://measuringshadowsblog.blogspot.com/2012/08/the-fetishization-of-old.html", "https://www.dwarkeshpatel.com/p/dominic-cummings", "https://en.wikipedia.org/wiki/Philosophy,_politics_and_economics", "https://en.wikipedia.org/wiki/Wars_of_the_Roses", "https://www.goodreads.com/quotes/8667117-you-cannot-go-on-seeing-through-things-for-ever-the", "https://joecarlsmith.com/2022/12/23/on-sincerity", "https://www.lesswrong.com/posts/cxQtz3RP4qsqTkEwL/an-121-forecasting-transformative-ai-timelines-using", "https://en.wikipedia.org/wiki/Ren%C3%A9_Girard", "https://www.dwarkeshpatel.com/p/leopold-aschenbrenner", "https://en.wikipedia.org/wiki/%C3%89lan_vital", "https://en.wikipedia.org/wiki/Reductionism", "https://en.wikipedia.org/wiki/Cellular_automaton", "https://en.wikipedia.org/wiki/Illusionism_(philosophy)", "https://en.wikipedia.org/wiki/Philosophy_of_mind", "https://plato.stanford.edu/entries/qualia/", "https://en.wikipedia.org/wiki/Animism", "https://slate.com/technology/2016/04/the-philosophical-argument-against-artificial-intelligence-killing-us-all.html", "https://en.wikipedia.org/wiki/Michael_Nielsen", "https://en.wikipedia.org/wiki/Turing_machine#:~:text=A%20Turing%20machine%20is%20an,sequential%20memory%20to%20store%20data.", "https://www.lesswrong.com/tag/hedonium", "https://en.wikipedia.org/wiki/Sean_M._Carroll", "https://en.wikipedia.org/wiki/Halting_problem", "https://joecarlsmith.com/2021/01/18/actually-possible-thoughts-on-utopia", "https://joecarlsmith.com/2024/01/16/being-nicer-than-clippy", "http://joecarlsmith.com" ]
https://www.dwarkesh.com/p/john-schulman
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI
[ "Thanks to Graham Bessellieu for editing this podcast, and to Teddy Kim for editing and annotating this transcript with lots of helpful links.", "(00:00:00) - Pre-training, post-training, and future capabilities", "Dwarkesh Patel", "Today I have the pleasure to speak with John Schulman , who is one of the co-founders of OpenAI and leads the post-training team here. He also led the creation of ChatGPT and is the author of many of the most important and widely cited papers in AI and RL , including PPO and many others. John, really excited to chat with you. Thanks for coming on the podcast.", "John Schulman", "Thanks for having me on the podcast. I'm a big fan.", "Dwarkesh Patel", "Thank you for saying that. Here’s my first question. We have these distinctions between pre-training and post-training . Let’s go beyond what is actually happening in terms of loss function and training regimes. Taking a step back conceptually, what kind of thing is pre-training creating? What does post-training do on top of that?", "John Schulman", "In pre-training you're basically training to imitate all of the content on the Internet or on the web, including websites and code and so forth. So you get a model that can generate content that looks like random web pages from the Internet. The model is also trained to maximize likelihood where it has to put a probability on everything.", "The objective is basically predicting the next token given the previous tokens . Tokens are like words, or parts of words. Since the model has to put a probability on it—we're training to maximize log probability—it ends up being very calibrated. Not only can it generate all of the content of the web, it can also assign probabilities to everything.", "The base model can effectively take on all of these different personas or generate all different kinds of content. When we do post-training, we're usually targeting a narrower range of behaviors where we want the model to behave like a kind of chat assistant. It's a more specific persona where it's trying to be helpful. It's not trying to imitate a person. It's answering your questions or doing your tasks. We're optimizing on a different objective, which is more about producing outputs that humans will like and find useful, as opposed to just imitating this raw content from the web.", "Dwarkesh Patel", "Maybe I should take a step back and ask this. Right now we have these models that are pretty good at acting as chatbots. Taking a step back from how these processes work currently, what kinds of things will the models released by the end of the year be capable of doing? What do you think the progress will look like if we carry everything forward for the next five years?", "John Schulman", "The models will get quite a bit better in five years.", "Dwarkesh Patel", "In what way?", "John Schulman", "Even in one or two years, we'll find that the models can do a lot more involved tasks than they can do now. For example, you could imagine having the models carry out a whole coding project instead of it giving you one suggestion on how to write a function. You could imagine the model taking high-level instructions on what to code and going out on its own, writing any files, and testing it, and looking at the output. It might even iterate on that a bit. So just much more complex tasks.", "Dwarkesh Patel", "Fundamentally the unlock is that it can act coherently for long enough to write multiple files of code? What has changed between now and then?", "John Schulman", "I would say this will come from some combination of training the models to do harder tasks like this. Most of the training data is more like doing single steps at a time. I would expect us to do more for training the models to carry out these longer projects.", "That’s for any kind of training, like doing RL, to learn how to do these tasks. Whether you're supervising the final output or supervising it at each step, any kind of training at carrying out these long projects is going to make the models a lot better.", "Since the whole area is pretty new, I'd say there's a lot of low-hanging fruit in doing this kind of training. That's one thing. I would also expect that as models get better, they get better at recovering from errors or dealing with edge cases. When things go wrong, they’ll know how to recover from it.", "The models will be more sample efficient . You won't have to collect a ton of data to teach them how to get back on track. Just a little bit of data or their generalization from other abilities will allow them to get back on track. Current models might just get stuck and get lost.", "Dwarkesh Patel", "I want to understand specifically how the generalization helps you get back on track. Can you say more about that? I'm not sure why those two concepts are connected.", "John Schulman", "Right, they're not directly connected. You usually have a little bit of data that does everything. If you collect a diverse data set, you're going to get a little bit of everything in it. If you have models that generalize really well—even from just a couple of examples of getting back on track or if in the pre-training data there are a couple of examples of a model getting back on track—the model will be able to generalize from those other things it’s seen to the current situation.", "If you have models that are weaker, you might be able to get them to do almost anything with enough data. But you might have to put a lot of effort into a particular domain or skill.", "Whereas for a stronger model, it might just do the right thing without any training data or any effort.", "Dwarkesh Patel", "Right now these models can work coherently for five minutes. We want them to be able to do tasks that a human would take an hour to do, then a week, then a month, and so forth.", "To get to each of these benchmarks, is it going to be the case that each one takes 10X more compute, analogous to the current scaling laws for pre-training? Or is it going to be a much more streamlined process of just getting to that point where you're already more sample efficient and you can just go straight to the years of carrying out tasks or something?", "John Schulman", "At a high level, I would agree that longer-horizon tasks are going to require more model intelligence to do well. They are going to be more expensive to train. I'm not sure I would expect a really clean scaling law unless you set it up in a very careful way, or design the experiment in a certain way. There might end up being some phase transitions where once you get to a certain level you can deal with much longer tasks.", "For example, when people do planning for different timescales, I'm not sure they use completely different mechanisms. We probably use the same mental machinery thinking about one month from now, one year from now, or a hundred years from now. We're not actually doing some kind of reinforcement learning where we need to worry about a discount factor that covers that timescale and so forth.", "Using language, you can describe all of these different timescales and then you can do things like plan. In the moment you can try to make progress towards your goal, whether it's a month away or 10 years away. I don’t know if it’s a phase transition but I might expect the same out of models where there might be some capabilities that work at multiple scales.", "Dwarkesh Patel", "Correct me if this is wrong. It seems like you’re implying that right now we have models that are on a per token basis pretty smart. They might be as smart as the smartest humans on a per token basis. The thing that prevents them from being as useful as they could be is that five minutes from now, they're not going to be still writing your code in a way that’s coherent and aligns with your broader goals you have for your project or something.", "If it's the case that once you start this long-horizon RL training regime it immediately unlocks your ability to be coherent for longer periods of time, should we be predicting something that is human-level as soon as that regime is unlocked? If not, then what is remaining after we can plan for a year and execute projects that take that long?", "John Schulman", "It's not totally clear what we're going to see once we get into that regime or how fast progress will be. That's still uncertain. I wouldn't expect everything to be immediately solved by doing any training like this. There'll be other miscellaneous deficits that the models have that cause them to get stuck or make worse decisions than humans. I don’t expect that this one little thing will unlock all capabilities. But some improvement in the ability to do long-horizon tasks might go quite far.", "Dwarkesh Patel", "Would you say it's plausible? Does it seem quite likely that there will be other reasons why there might be bottlenecks? I'm also curious what the nature of these bottlenecks might be. It has all these representations of pre-training. Now it can work coherently for a long period of time because of long-horizon RL. What's remaining?", "John Schulman", "Maybe there's some other experience that human experts bring to different tasks such as having taste or dealing with ambiguity better. If we want to do something like research I could imagine those considerations coming into play. Obviously there are going to be mundane limitations around the affordances of the model and whether it can use UIs, interact with the physical world, or have access to things. So there might be a lot of mundane barriers that are probably not going to last that long but would initially slow down progress.", "Dwarkesh Patel", "Let’s talk about the websites that are designed for these AIs. Once they’re trained on more multimodal data, will they be in any way different from the ones we have for humans? What UIs will be needed? How will it compensate for their strengths and weaknesses? How would that look different from the current UIs we have for humans?", "John Schulman", "That's an interesting question. I expect that models will be able to use websites that are designed for humans just by using vision, after the vision capabilities get a bit better. So there wouldn't be an immediate need to change them.", "On the other hand, there’ll be some websites that are going to benefit a lot from AIs being able to use them. We’ll probably want to design those to be better UXs for Ais. I'm not sure exactly what that would mean. Assuming that our models are still better at text mode than reading text out of images, you'd probably want to have a good text-based representation for the models.", "You’d also want a good indication of what all the things that can be interacted with are. But I wouldn't expect the web to get totally redesigned to have APIs everywhere. We can get models to use the same kind of UIs that humans use.", "Dwarkesh Patel", "I guess that's been the big lesson of language models , right? That they can act within the similar affordances that humans do.", "I want to go back to the point you made earlier about how this process could be more sample efficient because it could generalize from its pre-training experiences of how to get unstuck in different scenarios. What is the strongest evidence you’ve seen of this generalization and transfer?", "The big question for the future abilities models seems to be about how much generalization is happening. Is there something that feels really compelling to you? Have you seen a model learn something that you wouldn't expect it to learn from generalization?", "John Schulman", "There have definitely been some interesting instances of generalization in post-training.", "One well-known phenomenon is that if you do all your fine-tuning with English data, the model will automatically behave well in other languages. So if you train the assistant on English data, it'll also do something reasonable in Spanish. Sometimes you might get the wrong behavior in terms of whether it replies in English or replies in Spanish. Usually you get the right behavior there, meaning you get it to respond in Spanish to Spanish queries. That's one interesting instance of generalization where the model just latches onto the right, helpful persona and then automatically does the right thing in different languages.", "We've seen some version of this with multimodal data where if you do text-only fine-tuning, you also get reasonable behavior with images . Early on in ChatGPT we were trying to fix some issues with the model understanding its own limitations. Early versions of the model would think that it could send you an email or call you an Uber or something. The model would try to play the assistant and it would say “oh yeah, of course I sent that email.” Obviously it didn't.", "So we started collecting some data to fix those problems. We found that a tiny amount of data did the trick, even when you mixed it together with everything else. I don't remember exactly how many examples but something like 30 examples. We had a pretty small number of examples showing this general behavior, explaining that the model doesn’t have this capability. That generalized pretty well to all sorts of capabilities we didn't train for.", "Dwarkesh Patel", "I still want to go back to this because I'm not sure I understood. Let’s say you have this model that is trained to be coherent for longer periods of time. Setting aside these other bottlenecks which there may or may not be, by next could you have models that are potentially like human-level? I’m thinking of a model that you’re interacting with like a colleague and it's as good as interacting with a human colleague. You can tell them to go do stuff and they go get it done. What seems wrong with that picture of the capabilities you think might be possible?", "John Schulman", "It's hard to say exactly what the deficit will be. When you talk to the models today, they have various weaknesses besides long-term coherence. They also struggle to really think hard about things or pay attention to what you ask them. I wouldn't expect improving the coherence a little bit to be all it takes to get to AGI . I guess I can’t articulate exactly what are the main weaknesses that will stop them from being a fully functional colleague.", "(00:16:57) - Plan for AGI 2025", "Dwarkesh Patel", "It seems like then, you should be planning for the possibility you would have AGI very soon.", "John Schulman", "I think that would be reasonable.", "Dwarkesh Patel", "So what's the plan if there's no other bottlenecks. In the next year or something, you’ve got AGI. What's the plan?", "John Schulman", "If AGI came way sooner than expected we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we're pretty sure we know we can deal with it safely. We would need a pretty good handle on what it's going to do and what it can do. We would have to be very careful if it happened way sooner than expected. Our understanding is still rudimentary in a lot of ways.", "Dwarkesh Patel", "What would being careful mean? Presumably you're already careful, right? You do these evaluations before deploying.", "John Schulman", "Maybe it means not training the even smarter version or being really careful when you do train it. You can make sure it’s properly sandboxed and everything. Maybe it means not deploying it at scale or being careful about what scale you deploy it at.", "Dwarkesh Patel", "Let's just play with a scenario. AGI happens next year. You're not training a smarter system but you're deploying it in a somewhat measured way. Presumably the development wouldn’t be particular to OpenAI. AGI just turns out to be much easier than we expected and that’s why it happened. So you wait to deploy a little bit. Now other companies have a similar level of capabilities. What happens next? While you wait to deploy, what are you waiting for? What is every company doing in this scenario?", "John Schulman", "The game theory is a little tough to think through. First of all, I don't think this is going to happen next year but it's still useful to have the conversation. It could be two or three years instead.", "Dwarkesh Patel", "Two or three years is still pretty soon.", "John Schulman", "It’s still pretty soon. You probably need some coordination. Everyone needs to agree on some reasonable limits to deployment or to further training for this to work. Otherwise you have the race dynamics where everyone's always trying to stay ahead and that might require compromising safety. You would probably need some coordination among the larger entities that are doing this kind of training.", "Dwarkesh Patel", "You'd be coordinating to pause deployment until what exactly? Until you figure out what's happening in the model?", "John Schulman", "We could pause further training. We could pause deployment. We could avoid certain types of training that might be riskier. We would set up some reasonable rules for what everyone should do to limit these things.", "Dwarkesh Patel", "Limit to what end? At some point the potential energy that's within this intelligence will be unleashed. Suppose in two years we get the AGI. Now everybody's freaking out. The AI companies have paused. What would be the thing we plan to wait until?", "John Schulman", "I don't have a good answer to that. If we can coordinate like that, that would be a pretty good scenario. Building these models is very capital intensive and there are a lot of complex pieces. It's not like everyone's going to go and recreate this stuff at home.", "Given the relatively small number of entities who could train the largest models, it does seem possible to coordinate. I'm not sure how you would maintain this equilibrium for a long period of time, but I think if we got to that point we would be in an okay position.", "Dwarkesh Patel", "Would we? I'm still curious because I'm not sure what happens next. Fundamentally, the benefit is that you push it to the server and now we have a bunch of intelligences, or they could push themselves to the server. Now we’ve got everybody coordinated but I'm not sure what we do next in this world. Why does that set us up for a good outcome?", "John Schulman", "If we had everyone reasonably coordinated and we felt like we could solve the technical problems around alignment well enough then we could deploy. We would be able to deploy really smart AIs that can act as extensions of people's wills but also prevent them from being catastrophically misused. That would be great. We could go ahead and safely deploy these systems and it would usher in a lot of prosperity and a much more rapid phase of scientific advancement. That would be what the good scenario would look like.", "Dwarkesh Patel", "That makes sense. I’m curious about something down the road in a couple of years. In the best case scenario, all these actors have agreed to pause until we've figured out that we're building aligned systems that are not themselves going to attempt a coup or not going to enable somebody else to do that. What would proof of that look like? What would evidence of that look like?", "John Schulman", "If we can deploy systems that are incrementally that are successively smarter than the ones before, that would be safer. I hope the way things play out is not a scenario where everyone has to coordinate, lock things down, and safely release things. That would lead to this big buildup in potential energy.", "I would rather have a scenario where we're all continually releasing things that are a little better than what came before. We’d be doing this while making sure we’re confident that each diff improves on safety and alignment in correspondence to the improvement in capability. If things started to look a little bit scary, then we would be able to slow things down. That's what I would hope for.", "If there's more of a discontinuous jump, there’s a question of “how do you know if the thing you've got is safe to release”. I can't give a generic answer. However, the type of thing you might want to do to make that more acceptable would be a lot of testing simulated deployment, red teaming of sorts. You'd want to do that in a way that is much more likely to fail than the thing you’re planning to do in the real world.", "You'd want to have a really good monitoring system so that if something does start to go wrong with the deployed system, you can immediately detect it. Maybe you've got something watching over the deployed AIs, watching what they're doing, and looking for signs of trouble.", "You’d want some defense in depth. You'd want some combination of “the model itself seems to be really well-behaved with impeccable, moral confidence in everything” and “I’m pretty confident that it’s extremely resistant to any kind of severe misuse.” You'd also want really good monitoring on top of it so you could detect any kind of unforeseen trouble.", "Dwarkesh Patel", "What are you keeping track of while you're doing long-horizon RL or when you eventually start doing it? How could you notice this sort of discontinuous jump before you deployed these systems broadly?", "John Schulman", "You would want to have a lot of evals that you're running during the training process.", "Dwarkesh Patel", "What specifically? Does it make sense to train on a long-horizon RL knowing that this is something that could happen? Or is it just a very low possibility? How do you think about this?", "John Schulman", "You'd want to be pretty careful when you do this kind of training if you see a lot of potentially scary capabilities. I would say it's not something we have to be scared of right now because right now it's hard to get the models to do anything coherent.", "If they started to get really good, we would want to take some of these questions seriously. We would want to have a lot of evals that test them for misbehavior, mostly for the alignment of the models. We'd want to check that they’re not going to turn against us or something. You might also want to look for discontinuous jumps in capabilities. You'd want to have lots of evals for the capabilities of the models.", "You'd also want to make sure that whatever you're training on doesn't have any reason to make the model turn against you. That doesn't seem like the hardest thing to do. The way we train them with RLHF, that does feel very safe even though the models are very smart. The model is just trying to produce a message that is pleasing to a human. It has no concern about anything else in the world other than whether the text it produces is approved.", "Obviously if you were doing something where the model has to carry out a long sequence of actions which involve tools, then it might have some incentive to do a lot of wacky things that wouldn't make sense to a human in the process of producing its final result. However, it wouldn't necessarily have an incentive to do anything other than produce a very high quality output at the end.", "There are old points about instrumental convergence where the model wants to take over the world so it can produce some awesome piece of code at the end. If you ask it to write you a Flask app, it'll be like “oh yeah, first I need to take over the world. At a certain point it's a little hard to imagine why for fairly well specified tasks like coding an app, you would want to first take over the world. Of course if you assigned a task such as “make money,” then maybe that would lead to some nefarious behavior as an instrumental goal.", "(00:29:19) - Teaching models to reason", "Dwarkesh Patel", "Before we get back to that, let's step back and talk about today’s RLHF systems and everything. I do want to follow up on that point because it is interesting.", "With today's RLHF and the way in which it influences these models, how would you characterize it in terms of human psychology? Is it a drive ? Is it a goal? Is it an impulse? Psychologically, what kind of thing is it? In what way is it being changed?", "Not simply the persona of a chatbot but “don't talk that way, talk this other way” or “don’t put out those kinds of outputs.”", "John Schulman", "There are probably some analogies with a drive or a goal in humans. You're trying to steer towards a certain set of states rather than some other states. I would think that our concept of a drive or a goal has other elements, such as the feeling of satisfaction you get for achieving it. Those things have more to do with the learning algorithm than what the model does at runtime when you just have a fixed model.", "There are probably some analogies though I don't know exactly how close it is. To some extent, the models do have drives and goals in some meaningful way. In the case of RLHF where you're trying to maximize human approval as measured by a reward model , the model is just trying to produce something that people are going to like and judge as correct.", "Dwarkesh Patel", "I’ve heard two ideas in terms of using that internal monologue to get better at reasoning. At least publicly, I've seen two ideas and I'm curious which one you think is more promising.", "One is that the model learns from its outputs over a bunch of potential trains of thought, and it learns to follow the one that leads to the correct answer. It is then trained on that before deployment. The other one is you use a bunch of compute to do inference in deployment. This approach involves the model talking to itself while it's deployed.", "Which one do you expect to be closer to the way a model has been trained when it gets really good at reasoning ? Is it because it's doing just a bunch of inference clouds? Is it just because you've trained it to do well at that?", "John Schulman", "You could define reasoning as tasks that require some kind of computation at test time or maybe some kind of deduction. By definition, reasoning would be tasks that require some test time computation and step-by-step computation. On the other hand, I would also expect to gain a lot from doing practice at training time. So I think that you’d get the best results by combining these two things.", "Dwarkesh Patel", "Right now, you have these two ways the model learns. One is in training, whether it's pre-training or post-training. Most of the compute in training is spent on pre-training, glossing over trillions of tokens, skimming trillions of tokens worth of information. If a human was subjected to that, they would just be totally confused. It's just not a very efficient way to learn.", "The other way is in-context learning . Of course that is more sample-efficient, but it's destroyed with each instance.", "I'm curious if you think that there's a path for something in between those, where it’s not destroyed at each instance but it's also not as frivolous as just seeing trillions of tokens. Something more deliberate and active.", "John Schulman", "Do you mean models having some kind of medium-term memory? Too much to fit in context but much smaller scale than pre-training?", "Dwarkesh Patel", "It might be memory. I don't have context. Certainly when I'm trying to prepare for this conversation, I think of what I should understand, read it carefully, and maybe think about it as I’m reading it. I’m not sure what it naturally corresponds to in terms of models. What would that look like?", "John Schulman", "I see. So it’s not just memory but it’s also somewhat specializing to a certain task or putting a lot of effort into some particular project.", "Dwarkesh Patel", "I'm not even sure if it's specialization. It’s more so “I don't understand this part, so let me look into it more deeply. I already understand this.” I guess it’s specializing to your existing knowledge base.", "John Schulman", "I see. So it's not just about training on a bunch of sources that are relevant and fine-tuning on some special domain. It's also about reasoning and developing some knowledge through your own reasoning and using some sort of introspection or self-knowledge to figure out what it needs to learn?", "Dwarkesh Patel", "Yeah.", "John Schulman", "That does feel like something that's missing from today's systems. People haven't really pushed too hard on this middle ground between large-scale training—where you produce a single snapshot model that's supposed to do everything like a deployed model—and on the other hand in-context learning.", "Part of that is that we've just been increasing context length so much that there hasn't been an incentive for it. If you can go to a hundred thousand or a million context, then that's actually quite a lot. It’s not actually the bottleneck in a lot of cases.", "I agree that you'd probably also want to supplement that with some kind of fine-tuning. The capabilities you get from fine-tuning and in-context learning are probably somewhat complementary. I’d expect us to want to build systems that do some online learning and also have some cognitive skills, like introspecting on their own knowledge and seeking out new knowledge that fills in the holes.", "Dwarkesh Patel", "Is this all happening at the same time? Is it just a new training regime where all these things can happen at once, whether it’s long-horizon or this kind of training?", "Are they separate or not? Is the model smart enough to both introspect and act on longer horizons so that you get adequate reward on the long-horizon tasks?", "John Schulman", "If you're doing some kind of long-horizon task, you're learning while you do the task, right?", "The only way to do something that involves a lot of steps is to have learning and memory that gets updated during the task. There’s a continuum between short-term and long-term memory.", "I expect the need for this capability would start to become clear when we start to look more at long-horizon tasks. To some extent, putting a lot of stuff into context will take you pretty far because we have really long context now. You probably also want things like fine-tuning.", "As for introspection and the ability to do active learning, that might automatically fall out of the models’ abilities to know what they know. Models do have some calibration regarding what they know. That's why models don't hallucinate that badly. They have some understanding of their own limitations. That same kind of ability could be used for something like active learning.", "(00:40:50) - The Road to ChatGPT", "Dwarkesh Patel", "Interesting. I want to step back and ask about your own history, at least at OpenAI. You led the creation of ChatGPT. At what point did you realize that these LLMs are the path to go? When did you realize a chatbot or some way to instruct them would be useful? Just walk me through the whole lineage from when this became your main focus and what the process was like.", "John Schulman", "Before ChatGPT, OpenAI had these instruction following models . The idea there was that we had base models that people could prompt them in elaborate ways. But they were also hard to prompt. They basically do autocomplete so you had to set up a very good prompt with some examples.", "People at OpenAI were working on just taking the base models and making them easier to prompt. So if you just wrote a question it would answer the question, instead of giving you more questions or something. So we had these instruction following models, which were like base models but a little easier to use. Those are the original ones deployed in the API. Or after GPT-3, those were the next generation of models.", "At the same time there were definitely a lot of people thinking about chat. Google had some papers like LaMDA and earlier, Meena . They had these chatbots. It was more like a base model that was really specialized to the task of chat. It was really good at chat. Looking at the examples from the paper, it was more used for fun applications where the model would take on some persona and pretend to be that persona. It was not so functional where it could help me refactor my code.", "So there were definitely people thinking about chat. I had worked before on a project looking at chat called WebGPT , which was more about doing question answering with the help of web browsing and retrieval. When you do question answering, it really wants to be in a chat. You always want to ask follow-up questions or sometimes the model should ask a clarifying question because the question is ambiguous.", "It was clear after we did the first version, that the next version should be conversational. So we started working on the conversational chat assistant. This was built on top of GPT-3.5, which was done training at the beginning of 2022. That model was quite good at language and code. We quickly realized that it was actually quite good at coding help. That was one of the things we were excited about.", "We worked on that for most of the year. We had browsing as another feature in it although we ended up deemphasizing that later on because the model's internal knowledge was so good. The browsing wasn't the most interesting thing about it. We had it out to friends and family for a while and we were thinking about doing a public release.", "Actually, GPT-4 finished training in August that year. The flagship RL effort at OpenAI was the instruction following effort because those were the models that were being deployed into production. The first fine-tunes of GPT-4 used that whole stack. Those models were really good and everyone got really excited about that after seeing the instruct fine tune GPT-4s.", "They were really good. They would occasionally give you amazing outputs, but the model was clearly also pretty unreliable. It would sometimes hallucinate it a lot. It would sometimes give you pretty unhinged outputs. So it was clearly not quite ready for prime time, but it was obviously very good.", "People forgot about chat for a little while after that, this alternative branch. We pushed it further and we ended up mixing together all the datasets, the instruct and the chat data, to try to get something that was the best of both worlds. The chat models were clearly easier to use.", "It automatically had much more sensible behavior in terms of the model knowing its own limitations. That was actually one of the things that I got excited about as we were developing it. I realized a lot of the things that people thought were flaws in language models, like blatant hallucination, could be not completely fixed but things that you could make a lot of progress on with pretty straightforward methods.", "The other thing about chat was when we had these instruct models. The task of “complete this text, but in a nice or helpful way” is a pretty poorly defined task. That task is both confusing for the model and for the human who's supposed to do the data labeling. Whereas for chat, people had an intuitive sense of what a helpful robot should be like. So it was just much easier for people to get an idea of what the model was supposed to do. As a result, the model had a much more coherent personality and it was much easier to get pretty sensible behavior robustly.", "Dwarkesh Patel", "Interesting. Is it the case that anybody could have made ChatGPT using your publicly available fine-tuning API ?", "John Schulman", "Not exactly. I don't remember which models were available for fine-tuning. Assuming we had 3.5 available for fine-tuning at the time, you could have made something decently close. I don't think you would have been able to do just one iteration of fine-tuning with purely human-written data. You'd want to do several iterations.", "If not you're not going to do RL, which we did, you’d want some kind of iterative supervised fine-tuning where humans edit the model-generated outputs. If you train on human-generated data, even if it’s really high quality, it’s just hard for a model to fit that data perfectly because it might be something a model is capable of outputting. You need to do something iterative that looks a bit more like RL. If you’d done that, you could have gotten pretty close but it would have been non-trivial.", "We also had another instruction-following model trained with RL, released a little before ChatGPT. If you put a chat wrapper on that you would’ve gotten decently close but that model had some differences in strengths. That model was good at writing and poetry but it wasn’t as good at knowing its limitations, factuality, and so forth.", "Dwarkesh Patel", "Stepping back from 3.5, I think I heard you say somewhere that you were super impressed with GPT-2. Compared to your expectations in 2019, has AI progressed faster or slower than you would have expected?", "John Schulman", "Faster than I expected since GPT-2. I was pretty bought into scaling and pre-training being a good idea. But when GPT-2 was done, I wasn't completely sold on it being revolutionizing everything. It was really after GPT-3 that I pivoted what I was working on and what my team was working on. After that, we got together and said, \"oh yeah, let's see what we can do here with this language model stuff.\" But after GPT-2, I wasn't quite sure yet.", "Dwarkesh Patel", "Let’s say the stuff we were talking about earlier with RL starts working better with these smarter models. Does the fraction of compute that is spent on pre-training versus post-training change significantly in favor of post-training in the future?", "John Schulman", "There are some arguments for that. Right now it's a pretty lopsided ratio. You could argue that the output generated by the model is higher quality than most of what's on the web. So it makes more sense for the model to think by itself rather than just training to imitate what's on the web. So I think there's a first principles argument for that.", "We found a lot of gains through post-training. So I would expect us to keep pushing this methodology and probably increasing the amount of compute we put into it.", "Dwarkesh Patel", "The current GPT-4 has an Elo score that is like a hundred points higher than the original one that was released. Is that all because of what you're talking about, with these improvements that are brought on by post-training?", "John Schulman", "Yeah, most of that is post-training. There are a lot of different, separate axes for improvement.", "We think about data quality, data quantity. There’s just doing more iterations of the whole process of deploying and collecting new data. There’s also changing what kind of annotations you're collecting. There's a lot of things that stack up but together they give you a pretty good effective compute increase.", "Dwarkesh Patel", "That's a huge increase. It's really interesting that there's this much room for improvement from post-training.", "(00:52:13) - What makes for a good RL researcher?", "What makes for somebody who's really good at doing this sort of RL research? I hear it's super finicky. What is the sort of intuition that you have that enables you to find these ways to mess with the data and set up these environments?", "John Schulman", "I have a decent amount of experience at this point from the different parts of the stack, from RL algorithms, which I've worked on since grad school, to data collection, annotation processes, and playing with language models.", "I'd say I've dabbled with these things and the people who do well at this kind of research have some view of the whole stack and have a lot of curiosity about the different parts of it. You want to be both empirical and let experiments update your views, but you also want to think from first principles. Assuming that learning works, what would be the ideal type of data to collect? That type of thing.", "Dwarkesh Patel", "Because there doesn't seem to be a model since GPT-4 that seems to be significantly better, there's a hypothesis that we might be hitting some sort of plateau. These models aren't actually generalizing that well, and you're going to hit a data wall beyond which the abilities unlocked by memorizing a vast corpus of pre-training data won't help you get something much smarter than GPT-4.", "Do you think that hypothesis is wrong? We've talked about some examples of generalization, like Spanish to English. One example I think of is the transfer from code to reasoning in language. If you train on a bunch of code, it gets better at reasoning in language? Is that actually the case?", "Do you see positive transfer between different modalities? If you train on a bunch of videos and images, it'll get smarter from synthetic data? Or does it seem like the abilities unlocked are extremely local to the exact kind of labels and data you put into the training corpus?", "John Schulman", "I'll try to respond to all that. First, are we about to hit the data wall? I wouldn't draw too much from the time since GPT-4 was released because it does take a while to train these models and do all the prep to train a new generation of models.", "I wouldn't draw too much from that fact. There are definitely some challenges from the limited amount of data, but I wouldn't expect us to immediately hit the data wall. However, I would expect the nature of pre-training to somewhat change over time as we get closer to it.", "In terms of generalization from different types of pre-training data, I would say it's pretty hard to do science on this type of question because you can't create that many pre-trained models. Maybe you can't train a GPT-4 sized model and do ablation studies at that scale. Maybe you can train a ton of GPT-2 size models or even a GPT-3 size model with different data blends and see what you get. I'm not aware of any public results on ablations involving code data and reasoning performance and so forth. I'd be very interested to know about those results.", "Dwarkesh Patel", "I'm curious about something. One of the things is that the model gets smarter as it gets bigger. Would an ablation on a GPT-2 level model, which suggests that there isn't much transfer, provide evidence for the level of transfer on a similar set of domains in a GPT-4 level model?", "John Schulman", "Right, you might not be able to conclude that if transfer fails at GPT-2 size, then it's also going to fail at a higher scale. It might be that for the larger models, you learn better shared representations, whereas the smaller models have to lean too much on memorization. The larger models can learn how to do the right computation. I would expect this to be true to some extent.", "Dwarkesh Patel", "This might have a very simple answer. You train bigger models on the same amount of data and they become smarter. Or to get the same level of intelligence, you only have to train them on less data. Why is that the case? It's got more parameters, seen fewer things, and now it's equally as smart. Why is that?", "John Schulman", "I don't think anyone has a good explanation for the scaling law with parameter count. I don't even know what the best mental model is for this. Clearly, you have more capacity if you have a bigger model. So you should eventually be able to get lower loss.", "Why are bigger models more sample efficient? I can give you a sketchy explanation. You could say that the model is an ensemble of different circuits that do the computation. You could imagine that it's doing computations in parallel and the output is a weighted combination of them. If you have more width… actually width is somewhat similar to depth because with residual networks, depth can do something similar to width in terms of updating what's in the residual stream.", "You're learning all these different computations in parallel and you have more of them with a bigger model. So you have a higher chance that one of them is lucky, ends up guessing correctly a lot, and gets upweighted.", "There are some algorithms that work this way, like mixture models or multiplicative weight update algorithms , where you have—I don’t want to say mixture of experts because it means something different—basically a weighted combination of experts with some learned gating .", "I actually said something slightly wrong, but you could imagine something like that. Just having a bigger model gives you more chances to get the right function.", "Of course, it's not just totally disjoint functions you're taking a linear combination of. It's more like a library where you might chain the functions together in some way. There's some composability . So I would say a bigger model has a bigger library of different computations, including lots of stuff that's dormant and only being used some of the time, but it has more space to look for circuits to do something useful.", "(01:00:58) - Keeping humans in the loop", "Dwarkesh Patel", "Stepping back from the current research questions, I want to understand your modal scenario of what happens for the next few years. Towards the beginning of the conversation, we were talking about the case in which it progresses really fast, but let's just take the modal scenario.", "You're unlocking long-horizon RL at some point, but as you said, there are potentially other bottlenecks. What's happening? How good are these models? How are they being deployed? What other modalities are part of them and at what stage are these being unlocked? I want to understand your broader picture of what the next few years look like.", "John Schulman", "I would expect new modalities to be added over time or pretty soon. I would expect the capabilities to generally keep getting better through a combination of pre-training and post-training, and that'll open up new use cases.", "Right now, AI is still not a huge part of the economy. There's a pretty small fraction of jobs that it can help with at all. I'd expect that to be higher over time, not just from the models improving but also from people figuring out how to integrate them into different processes. So even if we just froze the models at their current state, you would still see a lot of growth in how they're being used.", "I would expect AI to be used much more widely and for more technically sophisticated tasks. I gave the programming example earlier, doing longer projects, but also helping with various kinds of research. I hope that we can use AI to accelerate science in various ways, because you can potentially have the models understand all the literature in a given field and be able to sift through tons of data. It’s more than a person would have patience to do.", "I hope the form factor would be such that people are still driving all of this and you have your helpful assistants that you can direct and point to lots of different problems that are useful to you. Everyone would have all these AIs helping them do more and get more done.", "Dwarkesh Patel", "Obviously at some point they're going to be better than everyone at whatever they want to do. What would that process look like? Right now, they're clearly only helping you. At some point, they’ll be able to just do things for you and maybe run entire firms for you. Is it going to be a smooth process? At that point, is the hope that we have systems that are aligned with the user enough that they can count on the firm being run in the way they expect.", "John Schulman", "We might not want to jump to having AIs run whole firms immediately. We might want to have people overseeing these important decisions and calling the shots, even if the models are good enough to actually run a successful business themselves. To some extent, there might be choices there.", "I think people will still have different interests and ideas for what kind of interesting pursuits they want to direct their AIs at. AI doesn't necessarily have any kind of intrinsic desire, unless we put it in the system. So even if AIs become extremely capable, I would hope that people are still the drivers of what the AIs end up doing.", "Dwarkesh Patel", "I wonder if the economic equilibrium is so far from that, where you have the equivalent of Amdahl's law in a firm. The slowest part of the process is the one that's going to bottleneck you.", "Even if AI makes all the non-human parts of the firm 10X more efficient, the firm is still bottlenecked by that step. If one company decides to proceed by keeping humans in the loop on all the things that you really want human oversight on, then they'll just be outcompeted by other companies. If one country decides to go this route, other countries will beat it. I wonder if this is a sustainable plan for keeping humans in the loop.", "John Schulman", "If we wanted to keep humans in the loop, which seems reasonable, and it turned out that firms with any humans in the loop were outcompeted by firms that didn't have any humans, then you would obviously need some kind of regulation that disallowed having no humans in the loop for running a whole company.", "Dwarkesh Patel", "But there are so many companies in any country, let alone the world. I wonder if it's better to do the regulation on companies and say you've got to keep humans in the loop in important processes, but then you have to define what important processes are.", "You've got to monitor every single company and you also have to get collaboration from every single country which has firms. If this is a problem, should it be solved before the model is even deployed, such that hopefully if you did decide to build a firm and depend on these models, it basically does what you want it to do and you don't need a human in the loop?", "Does that question make sense? I'm just wondering, in this situation, how do we actually monitor every single firm to ensure a human is in the loop? And what happens if China doesn't decide to do that?", "John Schulman", "You would either have to have every country agree to this regulatory regime, or you would need all of the model infrastructure or the model providers to agree to this kind of requirement.", "It's definitely going to be non-trivial. This is looking a ways ahead, so it's a little hard to imagine this world before seeing anything like it.", "For example, are we actually confident that AI-run companies are better in every way. Do we think they're better most of the time, but occasionally they malfunction because AIs are still less sample efficient in certain ways? Consider when they have to deal with very wacky situations.", "AI-run firms might actually have higher tail risk because they're more likely to malfunction in a big way. There might be some practical questions like that that would determine how things play out. Maybe if you just require people to be accountable for various liabilities, this would also change the incentives a bit.", "Let’s say it turned out that AIs are better at running everything and they're also completely benevolent. Let’s say we've totally solved alignment, and they're better at being accountable to people than people are. Then maybe it's okay having the AIs run the firms. But that's pretty far out.", "We're more likely to be in a situation where they look better in the short term, but they still have some serious problems. It's actually practical considerations that push you more towards having humans in the loop, at least for the near future.", "Dwarkesh Patel", "So this is a problem we have to deal with today with RLHF. You have to aggregate preferences across a lot of different humans. It'll be maybe more marked with future, more powerful systems. But when you say we want these eventual AI systems that are going to fully replace humans as part of these firms to be aligned, what does that mean?", "Will it mean that they basically do what the user wants them to do? Does it mean that they have to result in some sort of global outcome that we're happy with as the stakeholders in OpenAI? Concretely, what would that mean?", "John Schulman", "If the models are being used for these higher stakes use cases, then we would have to think about RLHF in a much different way than we are right now.We're not quite ready for that or the current methods might not be completely sufficient. We would need to make compromises between the needs of the different stakeholders involved. We have this document that we're releasing called the Model Spec. It's about how we want our models to behave in the API and in ChatGPT.", "We try to talk about this issue where there are different stakeholders involved and sometimes there are conflicts between what they might want. In our case, we were thinking of the stakeholders as the end user (someone sitting in front of ChatGPT or some other app), the developer (someone using the API who might be serving other end users with their app), the platform (OpenAI, we don't want the models to expose us to legal risk), and the rest of humanity (including people not part of the users or customers).", "Obviously, the user might ask the model to do something that we think is actively harmful to other people. We might have to refuse that. By the way, this isn't the order of priority necessarily. These are just the four or so classes of stakeholder. Actually, you could maybe also say in the future, the model itself. We're not there yet.", "Anyway, we have these different stakeholders. Sometimes they have conflicting demands. We have to make some call on how to resolve those conflicts.It's not always obvious how to do that. We had to think through the trade-offs and basically the rough heuristic is that we mostly want the models to follow your instructions and be helpful to the user and the developer.", "But when this impinges on other people's happiness or way of life, this becomes a problem and we have to block certain kinds of usage. We mostly want the models to just be an extension of people's will and do what they say. We don't want to be too paternalistic. We want to be neutral and not impose our opinions on people. We mostly want to let people do what they want with the models.", "Dwarkesh Patel", "I got a chance to read the Spec beforehand. This is a question of how well that transfers over to how the model itself behaves. I was impressed with how sensible the trade-offs were. I believe the actual edge cases were explicitly stated rather than the kinds of things where are obvious. In this case, you really are going after the edge cases.", "John Schulman", "We wanted it to be very actionable so that it wasn't just a bunch of nice sounding principles. Each example tells you something about some non-obvious situation and reasons through that situation.", "(01:15:15) - State of research, plateaus, and moats", "Dwarkesh Patel", "I have a couple of questions about the state of the research itself. Famously in the social sciences, things are really hard to replicate . There’s a question about how much of the science there is real versus these manufactured, bespoke sorts of experiments. When you look at the average ML paper, does it feel like a really solid piece of literature or does it often feel like the equivalent of what p-hacking is in the social sciences?", "John Schulman", "Everyone has their complaints about the ML literature. Overall, I think it's a relatively healthy field especially compared to some others like in the social sciences. It's largely grounded in practicality and getting things to work. If you publish something that can't be replicated easily, people will just forget about it.", "It's accepted that often you don't just report someone's number from their paper. You also try to reimplement their method and compare it to your method on the same training dataset. If you publish methods that are really hard to implement or are really finicky, they'll tend to get forgotten.", "As a result, people actually try to open source their work a lot. There are also various unfavorable incentives. People are incentivized to make the baseline methods they're comparing to worse. There are other mild pathologies, like trying to make your methods seem sophisticated mathematically.", "But overall, I feel like the field makes progress. I would like to see a little bit more science and trying to understand things rather than just hill climbing on benchmarks and trying to propose new methods. There's been a decent amount of that recently. We could use more of that. I think that's a good thing for academics to work on.", "On a slightly different note, I'd be really excited to see more research on using base models to do simulated social science. These models have a probabilistic model of the whole world and you can set up a simulated questionnaire or conversation and look at how anything is correlated. Any traits that you might imagine, you can see how they might be correlated with other traits.", "It'd be pretty cool to see if people could replicate some of the more notable results in social science, like moral foundations and that sort of thing, by just prompting base models in different ways and seeing what's correlated.", "Dwarkesh Patel", "What is that Stanford experiment? The Asch conformity test ? It'd be fun if that replicated with the language models as well. It's very interesting.", "I want to ask about the rest of the research that happens at big labs. How much of it is increasing or decreasing the amount of compute you need to get a certain result as an actual compute multiplier versus how much of it is just making the learning more stable and building out the infrastructure?", "The broader question I'm trying to ask is, since GPT-4, does it feel like with the same amount of compute, you can train a much better model? Or does it feel like you’ve made sure that learning can happen better and in a more scalable way with GPT-5, but it's not like we can train GPT-4 with GPT-3.5's budget now?", "John Schulman", "There's definitely always progress in improving efficiency. Whenever you have a 1D performance metric, you're going to find that different improvements can substitute for each other. You might find that post-training and pre-training both improve the metrics. They'll have a slightly different profile of which metrics they improve.", "But at the end of the day, if you have a single number, they're both going to substitute for each other somewhat. For something like a human evaluation, what do humans prefer, we've definitely made a lot of progress on both sides, pre-training and post-training, in improving that.", "Dwarkesh Patel", "A couple of rapid-fire questions about RLHF. Obviously, RLHF is important to make these models useful. So maybe the \"lobotomized\" description is inaccurate.", "However, there is a sense in which all of these models, once they're put in a chatbot form, have a very similar way of speaking. They really want to “delve” into things. They want to turn things into bullet points. They often seem to have this formal and dull way of speaking.", "There are complaints that they're not as creative. Like we were talking about before, they could only do rhyming poetry and not non-rhyming poetry until recently. Is that a result of the particular way in which RLHF happens now? If so, is it because of who the raters are? Is it because of what the loss function is? Why is this the way all chatbots look?", "John Schulman", "I would say there's a decent amount of room for variation in exactly how you do the training process. We're actively trying to improve this and make the writing more lively and fun. We've made some progress like improving the personality of ChatGPT. It is more fun and it's better when you're trying to chit chat with it and so forth. It's less robotic.", "It's an interesting question how some of the ticks came about, like the word \"delve.\" I've actually caught myself using that word recently. I don't know if it rubbed off on me from the model.", "Actually, there might also be some funny effects going on where there's unintentional distillation happening between the language model and providers. If you hire someone to go do a labeling task, they might just be feeding it into a model. They might be pulling up their favorite chatbot, feeding it in, having the model do the task, and then copying and pasting it back. So that might account for some of the convergence.", "Some of the things we're seeing are just what people like. People do like bullet points. They like structured responses. People do often like the big info dumps that they get from the models.", "So it's not completely clear how much is just a quirk of the particular choices and design of the post-training processes, and how much is actually intrinsic to what people actually want.", "Dwarkesh Patel", "It does seem persistently more verbose than some people want. Maybe it’s just because during the labeling stage, the raters will prefer the more verbose answer. I wonder if it's inherent because of how it's pre-trained and the stop sequence doesn't come up that often and it really wants to just keep going.", "John Schulman", "There might be some biases in the labeling that lead to verbosity. There’s the fact that we tend to train for one message at a time rather than the full interaction. If you only see one message, then something that just has a clarifying question, or maybe a short response with an invitation to follow up, is going to look less complete than something that covers all possibilities.", "There's also a question of whether people's preferences would change depending on how fast the model is streaming its output. Clearly, if you're sitting there waiting for the tokens to come out, you're going to prefer that it gets to the point. But if it just gives you a dump of text instantly, maybe you don't actually care if there's a bunch of boilerplate or if there's a bunch of stuff you're going to skim. You'd rather just have it all there.", "Dwarkesh Patel", "The reward model is such an interesting artifact because it's the closest thing we have to an aggregation of what people want and what preferences they have. I’m thinking about models that are much smarter. One hope is that you could just give it a list of things we want that are not trivial and obvious, something like the UN Declaration of Human Rights .", "On the other hand, I think I heard you make the point that a lot of our preferences and values are very subtle, so they might be best represented through pairwise preferences . When you think of a GPT-6 or GPT-7 level model, are we giving it more written instructions or are we still doing these sorts of subliminal preferences?", "John Schulman", "That's a good question. These preference models do learn a lot of subtleties about what people prefer that would be hard to articulate in an instruction manual. Obviously, you can write an instruction manual that has lots of examples of comparisons. That's what the Model Spec has. It has a lot of examples with some explanations. It's not clear what the optimal format is for describing preferences.", "I would guess that whatever you can get out of a big dataset that captures fuzzy preferences, you can distill it down to a shorter document that mostly captures the ideas. The bigger models do learn a lot of these concepts automatically of what people might find useful and helpful. They'll have some complex moral theories that they can latch onto. Of course, there's still a lot of room to latch onto a different style or a different morality.", "So if we were to write a doc, if we're going to align these models, what we're doing is latching onto a specific style, a specific morality. You still need a decently long document to capture exactly what you want.", "Dwarkesh Patel", "How much of a moat is better post-training? Companies distinguish themselves currently by how big their model is and so forth. Will it be a big moat for who has figured out all the finickiness that you were talking about earlier with regards to all this data?", "John Schulman", "There's something of a moat because it's just a very complex operation and it takes a lot of skilled people to do it. There's a lot of tacit knowledge and organizational knowledge that's required.", "With post-training, to create a model that actually has all the functionality people care about, it’s pretty complicated. It requires a pretty complicated effort and accumulation of a lot of R&D. That makes it somewhat of a moat. It's not trivial to spin this up immediately. It does seem like the same companies that are putting together the most serious pre-training efforts are also putting together the most serious post-training efforts.", "It is somewhat possible to copy or to spin up more of these efforts. There's also one force that sort of makes it less of a moat. You can distill the models, or you can take someone else's model and clone the outputs. You can use someone else's model as a judge to do comparisons.", "The more big league people probably aren't doing that because it goes against terms of service policies. It would also be a hit to their pride. But I would expect some of the smaller players are doing that to get off the ground. That catches you up to a large extent.", "Dwarkesh Patel", "I guess it helps clear the moat. What is the median rater like? Where are they based? What are their politics? What is their knowledge level?", "John Schulman", "It varies a lot. We've definitely hired raters with different skills for different kinds of tasks or projects. A decent mental model is to just look at people who are on Upwork and other platforms like that. Look at who's doing odd jobs with remote work.", "It's a pretty international group. There's a decent number of people in the U.S. We hire different groups of people for different types of labeling, like whether we're more focused on writing or STEM tasks. People doing STEM tasks are more likely to be in India or other middle or lower-middle income countries. People doing more English writing and composition tend more to be U.S.-based.", "There've been times when we needed to hire different experts for some of our campaigns. Some of the people are very talented, and we even find that they're at least as good as us, the researchers, at doing these tasks and they're much more careful than us. I would say the people we have now are quite skilled and conscientious.", "Dwarkesh Patel", "With regards to the plateau narrative , one of the things I've heard is that a lot of the abilities these models have to help you with specific things are related to having very closely matched labels within the supervised fine-tuning dataset. Is that true?", "Can it teach me how to use FFmpeg correctly? Is it like there's somebody who’s seeing the inputs, seeing what flags you need to add, and some human is figuring that out and matching to that. Do you need to hire all these label raters who have domain expertise in all these different domains? If that's the case, it seems like it’d be a much bigger slog to get these models to be smarter and smarter over time.", "John Schulman", "You don't exactly need that. You can get quite a bit out of generalization. The base model has already been trained on tons of documentation, code, with shell scripts and so forth. It's already seen all the FFmpeg man pages, lots of Bash scripts and everything.", "Even just giving the base model a good few-shot prompt , you can get it to answer queries like this. Just training a preference model for helpfulness will, even if you don't train it on any STEM, somewhat generalize to STEM. So not only do you not need examples of how to use FFmpeg, you might not even need anything with programming to get some reasonable behavior in the programming domain.", "Dwarkesh Patel", "Maybe a final question. We've touched on this in different ways but let’s put it together. You said you're training on much more multimodal data. Presumably, these things understand what screens look like and will be able to interact with them in a much more coherent way. Also you're going to do this long-horizon RL, so they'll be able to act as agents in the systems and be part of your workflow in a much more integrated way.", "What do you expect that to look like? What will be the next steps from there? Suppose by the end of the year or next year, you have something that's an assistant who can work with you on your screen. Does that seem like a sensible thing to expect? Where does it go from there?", "John Schulman", "I definitely expect things to move in that direction. It's unclear what's going to be the best form factor. It could be something that's like a Clippy on your computer helping you or if it's more like a helpful colleague in the cloud. We'll see which kinds of form factors work the best. I expect people to try all of them out.", "I expect the mental model of a helpful assistant or helpful colleague to become more real. It’ll be something where you can share more of your everyday work. Instead of just giving it one-off queries, you would have a whole project that you're doing and it knows about everything you've done on that project so far.", "It can even proactively make suggestions. Maybe you can tell it to remember to ask me about this and if I've made any progress on it. Proactivity is one thing that's been missing. I'd love to see us moving away from one-off queries, using the model like a search engine, and more towards having a whole project that I'm doing in collaboration with the model. Something where it knows everything I've done. It's proactively suggesting things for me to try or it's going and doing work in the background.", "Dwarkesh Patel", "That's really interesting. This is the final question. What is your median timeline for when it replaces your job?", "John Schulman", "Oh, it replaces my job? Maybe five years.", "Dwarkesh Patel", "Pretty soon. Interesting. John, this was super interesting. Thanks so much for making the time. This seems like one of the parts of the AI process that is super important and people don't understand that much about. It was super interesting to delve into it and get your thoughts on it.", "John Schulman", "Thanks for having me on the podcast. It was fun to talk about all this stuff.", "" ]
[ "https://twitter.com/cgbessellieu", "https://bit.ly/4aVllm4", "http://joschu.net/", "https://openai.com/", "https://openai.com/index/chatgpt/", "https://scholar.google.com/citations?user=itSa94cAAAAJ&hl=en", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://openai.com/index/openai-baselines-ppo/", "https://blogs.nvidia.com/blog/what-is-a-pretrained-ai-model/", "https://openreview.net/pdf?id=H1O0KGC6b", "https://www.datacamp.com/tutorial/loss-function-in-machine-learning", "https://techpolicyinstitute.org/publications/artificial-intelligence/from-tokens-to-context-windows-simplifying-ai-jargon/", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://www.rudderstack.com/learn/machine-learning/generalization-in-machine-learning/", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://www.simonoregan.com/short-thoughts/affordances-and-ai", "https://azure.microsoft.com/en-us/blog/introducing-gpt-4o-openais-new-flagship-multimodal-model-now-in-preview-on-azure/", "https://en.wikipedia.org/wiki/Large_language_model", "https://arxiv.org/abs/2403.02677", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-sandboxing/", "https://link.springer.com/article/10.1007/s00146-019-00887-x", "https://www.codecademy.com/article/what-is-diffing", "https://en.wikipedia.org/wiki/Red_team#Cybersecurity", "https://en.wikipedia.org/wiki/Instrumental_convergence", "https://en.wikipedia.org/wiki/Flask_(web_framework)", "https://plato.stanford.edu/entries/desire/", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://hazelcast.com/glossary/machine-learning-inference/", "https://en.wikipedia.org/wiki/Automated_reasoning", "https://www.hopsworks.ai/dictionary/in-context-learning-icl#:~:text=In%2Dcontext%20learning%20(ICL)%20learns%20a%20new%20task%20from,objective%20of%20next%20token%20prediction.", "https://blog.google/technology/ai/long-context-window-ai-models/", "https://lips.cs.princeton.edu/introspection-in-ai/", "https://openai.com/index/instruction-following/", "https://blog.google/technology/ai/lamda/", "https://arxiv.org/abs/2001.09977", "https://openai.com/index/webgpt/", "https://platform.openai.com/docs/guides/fine-tuning", "https://chat.lmsys.org/?leaderboard", "https://www.wsj.com/tech/ai/ai-training-data-synthetic-openai-anthropic-9230f8d8", "https://www.baeldung.com/cs/ml-ablation-study", "https://en.wikipedia.org/wiki/Neural_scaling_law", "https://en.wikipedia.org/wiki/Mixture_model", "https://en.wikipedia.org/wiki/Multiplicative_weight_update_method", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://arxiv.org/abs/1906.02777", "https://www.linkedin.com/pulse/composable-architecture-ai-blueprint-innovation-sumit-chakraborty/", "https://en.wikipedia.org/wiki/Amdahl%27s_law", "https://en.wikipedia.org/wiki/Replication_crisis", "https://en.wikipedia.org/wiki/Data_dredging", "https://en.wikipedia.org/wiki/Hill_climbing", "https://en.wikipedia.org/wiki/Moral_foundations_theory", "https://en.wikipedia.org/wiki/Asch_conformity_experiments", "https://x.com/paulg/status/1777035484826349575", "https://www.nytimes.com/2024/04/10/technology/ai-chatbot-training-chatgpt.html", "https://en.wikipedia.org/wiki/Knowledge_distillation#:~:text=In%20machine%20learning%2C%20knowledge%20distillation,might%20not%20be%20fully%20utilized.", "https://www.un.org/en/about-us/universal-declaration-of-human-rights", "https://deepmind.google/research/publications/54918/", "https://hbr.org/2023/11/has-generative-ai-peaked", "https://en.wikipedia.org/wiki/FFmpeg", "https://en.wikipedia.org/wiki/Shell_script", "https://opensource.com/resources/what-bash", "https://campus.datacamp.com/courses/chatgpt-prompt-engineering-for-developers/advanced-prompt-engineering-strategies?ex=1#:~:text=Few%2Dshot%20prompting%20is%20a,the%20model%20to%20respond%20to.", "https://en.wikipedia.org/wiki/Office_Assistant" ]
https://www.dwarkesh.com/p/joseph-henrich
Joseph Henrich – Why Humans Survived and Smarter Species Didn't
[ "Humans didn’t succeed because of raw IQ", "Dwarkesh Patel - 00:00:00", "Today, I have the pleasure of chatting with Joseph Henrich , who is a professor of human evolutionary biology at Harvard University and an author of two of my favorite books, The Weirdest People in the World and before that, The Secret of Our Success . And I was just mentioning to you that I remember reading this many many years ago when I was in college, and at the time, I didn’t think I would get a chance to ask you questions about it. But the most proximal reason I wanted to interview you is, I recently had your colleague, David Reich , on and we were discussing certain things in the record of human history where he said, “Eventually, you’re just gonna have to have Joseph Henrich on and ask him these questions, because he’s the one who would know.” So let me ask you one of the questions which I was super intrigued by which he raised, and we didn’t come up [with] an answer to.", "So one of the things he’s discovered through his genetic evidence is that 70,000 years ago across Eurasia, there’s so many different human species, from the Denisovans to the Neanderthals to the Hobbits, and then apparently, there’s this one group, which was potentially the size of 1 to 10,000 people in the Near East, which subsequently explodes, and now everybody who’s descended from Eurasia descends from this one group. And so I guess the question is like, what happened? What did they figure out?", "Joseph Henrich - 00:01:18 A typical assumption when people think about this, if you put it in the Paleolithic, they assume that it has to do with some kind of genetic changes. Now, Reich’s lab, there’s no obviously big changes in the DNA, so it’s a little bit of a puzzle. Neanderthals, for example, had larger brains, and in primates larger brains usually go along with more computational abilities, more ability to solve problems. So the expanding variant out of the Middle East, out of Africa, might’ve actually been less able at an individual level to process information. But if you look back over the more recent period of human history, you can see that it’s a story of expansions of different populations.", "So for example, in Africa, we have the Bantu expansion about 5,000 years ago, which actually eliminates a whole bunch of hunter-gatherer populations that previously existed in Africa. We have the remnant populations in parts of the Congo, in the Kalahari, in the Hadza , for example, in Tanzania. If you look at the Austronesian expansion , so that’s the peopling of the Pacific, that was the expansion of one group of people at the expense of others, and of course, the Neolithic expansion into Europe is another example.", "So really human history is a story of these different expansions. And it could be that this expansion across Eurasia, which then led to interbreeding, so we know it’s the same species. Humans interbred with Denisovans and Neanderthals, as well as probably other species- there’s a ghost species in there. This could be just institutional changes, so if you have institutions, for example, that interconnect your population, you can maintain more sophisticated technology. And some paleoanthropologists, for example, have speculated- with some evidence- that the expanding populations had projectile weapons, so bows and arrows. And humans have periodically gained and lost bows and arrows in different parts of the world. So in Australia, for example, bows and arrows are never invented. In the New World, populations probably didn’t have bows and arrows, but then later develop bows and arrows.", "Dwarkesh - 00:03:12 It’s also really interesting how some technologies you would just think of as extremely… I don’t know what the word is, but the new world, like not having the wheel or something. I guess it kinda makes sense with no domesticated animals, but again, it’s like such a…", "Joseph Henrich - 00:03:28 Right. Although the New World, everybody has dogs, and you can pull carts with dogs. So I’ve never really bought that, and of course, you can, you can use llamas in the New World to pull carts as well. People do that today.", "Dwarkesh - 00:03:38 Yeah, so what’s your explanation for why there’s no wheels in the New World?", "Joseph Henrich - 00:03:41 So there were wheels on Mayan carts or on Mayan toys. My explanation is just the collective brain. So almost every single first invention of something big that we think is important for humans was invented in Eurasia. And Eurasia, as Jared Diamond famously pointed out, building on other people’s work, it’s the largest continent by far. It has the biggest population. It’s also oriented along an east-west axis, which allows ideas and people to more easily flow. And there’s a belt, which the historian Ian Morris calls the Lucky Latitudes , which runs from basically southern China all the way through to the Mediterranean. And ideas are just flowing back and forth there, across the center of Eurasia. But you also ended up with more complex state bureaucracies and the kinds of things that allow you to organize and move people around and whatnot.", "Dwarkesh - 00:04:27 And what’s the explanation for why the collective brain leads to state capacity? Or is it the other way around?", "Joseph Henrich - 00:04:34 Well, you can think of institutions that eventually lead to state capacity as just part of the innovative process of the collective brain.", "So if you have more groups experimenting with more different ways of governing groups of people, gradually, you get the accumulation of the pieces that you can put together into different kinds of states.", "Dwarkesh - 00:04:50 Got it. Okay, so going back to what happened 70,000 years ago, is the basic answer like, it could be something like bows, but we don’t know exactly what it was? Or is it-", "Joseph Henrich - 00:04:57 Yeah. We definitely don’t know exactly what it was. So we know that this population expanded, and there does seem to be some tool indications to suggest more complex technology. Probably, technology usually goes along with social organization.", "So for example, if you look at Australia, which is a continent of hunter-gatherers, there’s an expansion about 6,000 years ago out of northern Australia which eventually takes up seven-eighths of the continent, and they had a new social organization, including rules about who you marry.", "You had linguistic exogamy, and rituals that interconnected populations. So rather than having local rituals and local myths, many communities would get together periodically to initiate the young men, and this would help bond that whole group. There’d be an exchange of technology, and teaching that goes on at these… they’d spent a few months in the same place so there was a lot of time for transmission.", "Dwarkesh - 00:05:47 And then in terms of mechanism, so, if it’s the case that you have these sort of convalescing waves of expansion, and David Reich’s lab’s evidence suggests that this expansion was quite violent, based on whether the genes are passed down through the maternal line or the paternal line. When you think about the mechanism in which technologies develop or these social institutions develop, how much of it is like, look, there’s all these different groups that are trying different things, and one of them maybe figures something out, and then they just kind of explode? And how much of it is that within each group, they are experiencing sort of accumulations of learning over time, and it’s not necessarily a selective process, it’s more of a sort of accumulative process?", "Joseph Henrich - 00:06:32 Well eventually the groups have to meet, and they’re gonna compete over territories, and a bunch of different things can occur. So groups can copy each other. So we know that from the ethnographic record and historical cases, sometimes a group will say, “Oh, those guys have really good technology tools.” Maybe they’ll get some migrants, or something like that, and then they can adopt the practices. So that definitely happens.", "But there’s also plenty of evidence of violent conflict. And the Reich labs evidence and lots of other ancient DNA suggests that there are these dead-end genetic lineages. So, I mean, the Neanderthals are a dead end in a sense, although we interbred with them, so they’re in us in some sense, but they don’t have their own pure lineage or anything like that.", "Dwarkesh - 00:07:14 And if you had to guess, whenever this big wave happens, is it more of a sort of concrete technology? I mean, especially if you consider the range of expansion that can often happen, right? It’s like literally that same group goes from Siberia to England. It’s hard to imagine it’s a single technology lets you explore this wide range. But it’s also the idea that some cultural artifact is enough that it gets transmitted over tens of thousands of years and that’s what gives you the edge. I’m curious how you think about, what could possibly be this driving engine?", "Joseph Henrich - 00:07:47 Well the way I describe it in The Secret is that it’s a package of things. So another good example would be the Inuit expansion out of the north slope of Alaska . And they expand all the way across the Arctic and eventually get to Greenland. And they have a whole package of social practices which helps keep them interconnected, and then they have bows and arrows, which the group they are exterminating as they go along, the Dorset , doesn’t have. They have dogs and sleds. They have, along the coast they have boats, so they’re doing whaling. So there’s a whole package that puts together that allows them to out-compete and eventually exterminate the Dorset.", "And one of the things that happens, though, is the Dorset probably had better technology, but they expanded, they spread out, their languages diversified, they lost contact, and they began losing technology. So you wanna see this as a dynamic pulse. There were probably expansions and then collapses, another expansion, another collapse, right? It’s not one giant long march to victory.", "Dwarkesh - 00:08:38 And then this process, which has happened many times where the population gets cut off or something: When you say it’s a sort of cycle, is there some reason why, over time, that knowledge gets fragmented and breaks apart and populations disperse?", "Joseph Henrich - 00:08:58 Yeah, I think there’s a cultural evolutionary dynamic that’s part of this, because languages will naturally diversify and then as they diversify, there’s less contact. People sometimes get inclined to marry endogamously , and especially if there’s enough people around locally, why go all the way over there? And those people are getting more culturally different, so they’re seeming a little bit like outsiders…", "So the tough part that humans have always had is to stay unified because of the natural effect of geography and learning locally is gonna tend to fragment us.", "How cultural evolution works", "Dwarkesh - 00:09:27 In the Secret, you described this interesting startup problem where if you don’t have that much accumulated cultural knowledge, developing the ability to do social learning isn’t as valuable. But if you don’t have the ability to do social learning, you don’t have that much accumulated cultural knowledge in your tribe or group. So how is this problem solved?", "Joseph Henrich - 00:09:46 Yeah. So before I get to solving the problem, I just want to sketch for the listeners that the question is, why is this cumulative cultural evolutionary process that is so important for humans relatively uncommon in the natural world? It seems like just our lineage. I mean, there were a bunch of split-off lineages, but now it’s just us.", "So to understand that is this idea that you just mentioned, where you’re imagining an increase in brain tissue that’s gonna be costly, and I can put that towards individually figuring the problem out for myself, or I can put it into learning from others. And in a world without very much cumulative culture, there’s not gonna be very much useful information in the minds of everybody else so I should use that brain tissue for individually solving problems. And so it’s hard to get this runoff where their brains get bigger for the purposes of learning from others. And then the question is, how do you get past that valley?", "So the case I make in Secret is that, say, three million years ago, two million years ago, there were several factors that came together. The first factor is the rate of change of environments. So you get this increase in the fluctuation of environments, so you’re getting more environmental changes. And in cultural evolution, a lot of theory shows that there’s a certain rate of change which is favorable to cultural evolution. It’s gotta be slow enough so that the information of your parents and the previous generation is useful, but not so slow that you might as well just encode it in the genes.", "So that’s one. Second thing is, we’re a ground-dwelling ape, which means we have hands like chimpanzees and gorillas, and we can potentially use tools and whatnot. But unlike them, our ancestors may have been savanna-dwelling apes, which meant we may have lived in large groups. So in mammals, mammals live in larger groups when they have to deal with predators and the predator guild in Africa at that point was quite thick. There were a lot of deadly predators.", "So paleo-anthropologists think that our ancestors may have lived in large [groups of] savanna-dwelling apes. And if you have a lot of individuals, if the culture is sparse, the bigger the group, the more chance that there’s someone doing something useful in the larger group. And so that means it’s easier to get across the threshold. So those are three of the main factors that might have allowed our lineage, as opposed to all the other lineages around, to cross this. We’re already a big-brained primate, we had hands, we’re living in these large groups on the savanna, and the climate changed during this period, so as to make us, yeah.", "Dwarkesh - 00:12:18 And how big were these groups?", "Joseph Henrich - 00:12:20 Well, I mean, nobody’s really sure, but maybe a few hundred individuals.", "Dwarkesh - 00:12:24 Okay. And before the agricultural revolution, was there still this transmission of information across different groups?", "Joseph Henrich - 00:12:34 Yeah. So lots of different evidence suggests that groups were moving trade goods. So well back into the Paleolithic we see trade. Often we see some genetic transmission.", "Dwarkesh - 00:12:46 Right, the expansions and so forth.", "Joseph Henrich - 00:12:48 Yes. And so, talking about the expansion, that’s something where the ancient DNA is useful because the Neanderthals in Europe, the DNA suggests they lived in very small groups. But the DNA of the expanding groups suggests a larger population. So those would’ve been two different collective brains there as well.", "Dwarkesh - 00:13:04 So in The Secret , you discuss a lot of these lost European explorers with modern technologies as of at least a couple centuries ago encountering peoples who have for tens of thousands of years discovered ways of hunting and processing foods and so forth that, without which even these people with modern technology will starve. And as I was reading that I was wondering: I’m not sure how I understand the process by which, if there’s a 10-step process to making sure this bean is actually nutritious, and without any one of those steps, you might poison yourself or something. And at no point do you understand why this process works. You don’t have a scientific explanation. How do you even learn that in the first place?", "Joseph Henrich - 00:13:49 Right. So one of the things we know, even in young children there’s a tendency to preferentially learn from healthier, more successful individuals. So if you’re processing it better- so something like bitter cassava , which has cyanide in it. If you just eat bitter cassava,it won’t taste great. So if you then rinse it somewhat, it’ll taste better and maybe you could eat it, but you’re going to accumulate cyanide over the long run. So it doesn’t kill you right away. But if you do this whole long process that the populations in South America developed, then you’re totally fine, you never get any accumulation.", "So you can imagine that initially this is gonna be very strong and people are gonna sort of do sensible things… but then it gets a little more mysterious. And we know this because bitter cassava gets transported to Africa, and Africans immediately begin eating it improperly processed and getting goitre and the cyanide processing that goes along with it. So then that’s gonna be a slow evolution where groups that do this are gonna be more successful and individual families are gonna be more successful. So, for example, you might have a household where they process the cassava more seriously than another family, and they’re gonna have more kids and they’re gonna be like, “Oh, that family’s really good.” And then people copy what they do in all kinds of ways, but one of them could be copying recipes.", "Dwarkesh - 00:14:59 Hmm. I guess it depends on the mechanism of selection here, because when you consider the different ways in which two different individuals might be different or two different households or even two different groups, I guess it makes sense why cyanide poisoning is such a deleterious effect that it is a noticeable or quite a strong signal. But I think you discussed some other ideas in The Secret where it’s like, the sort of spices in a region sort of match the antimicrobial properties or the antifungal properties you need to stay, I guess, clean or whatever in that particular region. But it sounds like a small effect, and how could such a small effect actually create a strong enough signal that when you’re deciding who to copy, you notice that this family is healthier because they spice their food in a certain way? Or this group takes over the area because they’ve…", "Joseph Henrich - 00:15:59 Yeah, I mean, it has to be a big enough effect to matter, but I think if you go back to a world where there’s a lot of improperly processed meat, people don’t have refrigerators, and the leading cause of death in children is diarrhea. People carry high pathogen loads. If you can knock that pathogen load down, we know from modern research if you wanna make people healthier and even smarter, right? IQ goes up if you knock the pathogen load down.", "Dwarkesh - 00:16:22 Got it. So this process of cultural accumulation you’re talking about: you think it is not strong enough to pick up minor increases in fitness and it actually has to be quite a significant thing for it to…", "Joseph Henrich - 00:16:39 I think it operates a lot like natural selection, in the sense that natural selection will pick up tiny things if you give it long enough. Cultural evolution will do the same thing if you give it long enough, because just small differences in who you’re paying attention to will affect things, but it might take 1,000 years as opposed to decades, right? But we know culture can spread adaptive traits super quickly if it’s a really big effect.", "Dwarkesh - 00:17:01 Yeah. So the situation in which our ancestors found themselves, in some ways it’s sort of like epistemic hell, in the sense of you don’t know why certain things are working, but you do know that if you break with tradition, you might just doom yourself and your family. And that maybe, as you discuss, causes these religious beliefs and taboos and mystical understandings of the world to rise, where you just think like, “Look, you’re gonna burn in hell if you don’t do this 10-step process for refining your beans or something.” In some sense, this requires you to abdicate reason because you have to just be like, “I don’t understand this. this is the way it’s been done, so we’re gonna do it.” Basically: how much reason do you necessarily have to abdicate to survive in the ancestral environment?", "Joseph Henrich - 00:17:41 I mean, one of the ways we research this is by going out to societies that make bows and arrows or have food taboos that protect them from dangerous marine toxins and ask them questions about why they do it and see if they [understand] and they don’t understand the underlying causal stuff.", "Dwarkesh - 00:17:53 Right. But then why do they say they do it?", "Joseph Henrich - 00:17:55 Well, a typical answer is, “It’s our custom.” or “It’s important to people around here to do it this way, so we do it this way.” So just Hadza bows, for example- I didn’t do this research, Jacob Harris and Kim Hill and Rob Boyd did it- but they asked Hadza about their bows and about how they work and what the mechanics are , so they understood some stuff. So you use the bow and you get some mechanical understanding. But if you asked him, “What if you used a different wood, different materials,” and they had never tried anything else but what they learned, how to make this bow. So they couldn’t speak to a lot of that stuff. And the things about the compression of the wood is very important- didn’t understand that.", "Dwarkesh - 00:18:34 Okay, so that’s quite interesting, right? Because you would think that you have to transmit cultural knowledge over time, but you also need to experiment in order to innovate. But it sounds like because of the belief in these customs, you would be less inclined to innovate.", "Joseph Henrich - 00:18:48 Right. And the thing is, once something gets good, doing it differently almost always makes it worse. And then there’s also, some things are just different because people do it randomly different, like they make a mistake. So I think people often underestimate the power of error in generating novelty.", "Dwarkesh - 00:19:04 One question I have is why this process of cultural learning… So humans have extended childhoods, presumably to give us more time to accumulate culture, and then we live after menopause, presumably because older people, our grandparents, can teach us about the situation we find ourself in. And 18 years is such a long time, and when I think back to- I mean, obviously I didn’t grow up in the ancestral environment, but when I think back to what I was doing as a teenager I don’t think I was learning that much. I was just less productive for no obvious biological reason. So I guess I don’t intuitively understand why this process of cultural learning takes decades, and why it can’t happen more rapidly.", "Joseph Henrich - 00:19:47 Well, you probably, as an adolescent, were in school, right? So you were probably accumulating some stuff in school- maybe. And here’s an interesting fact about hunters. So anthropologists who studied hunters, hunter-gatherer societies in different places, and the physical peak that at least males have is in their early 20s, right? That’s when they run the fastest. They got the best eyesight. But the best hunters in the community are 36 to 40 , right?", "They’re not as fast as those guys, but they just know, they know the track, they know the animals, they know exactly what to do in these different- And then hunting skill begins to decline because basically the physicality of things begins to catch up, and so there’s this kind of cycle. And at 18 young hunters aren’t even producing enough food to feed themselves. So it’s not until they get into their 20s that they’re actually in surplus and bringing food home for everybody else. And you need to know hundreds, at least, animals, you need to know animal behavior, you gotta be able to spot tracks and spoor, and we’re just very knowledge-dependent in terms of our hunting and gathering.", "Why is human brain size declining?", "Dwarkesh - 00:20:48 Yeah. Given what you just said and also the experience of these European explorers suggests that you just gotta know a lot of shit to make it in any sort of environment on Earth. But then, there’s all these other animals and they seem so dumb and they seem able to get by. So why did humans have such a hard time of it?", "Joseph Henrich - 00:21:07 Yeah, well, so I think we offloaded a lot of stuff into culture. So in one of my Lost European Explorers, there’s a case where they’re in Australia and they have camels. And the camels escape, and now central Australia has lots of feral camels. So the camels survived the lost European explorer challenge, because they have innate instincts. They can smell water a mile away. They can detoxify foods in their own- they have a complex digestive system that detoxifies- we’ve lost all that, and we’re worse than chimpanzees at detoxifying foods because we have all these cultural practices that do the work. So we’ve externalized detoxification and a lot of digestion, actually.", "Dwarkesh - 00:21:41 I had this independent researcher and internet writer called Gwern Branwen on my podcast a couple months ago , and we were discussing AI. And I asked him- so his theory is very much like the brain became bigger, more intelligence. And so then I asked him, “Look, if it’s this simple, why did it take so long for evolution to discover intelligence in the first place?” And he had this interesting answer, which is that in terms of the signal that evolution is giving you, there’s a very narrow gap between skills that are so useful that they should just be distilled as an instinct into your genome, and then skills that are so worthless that are not worth learning in the first place. And so this narrow gap that’s like, you need that generalization ability. It’s not so primal that, you know, you’re not gonna culturally learn hunger, it’s just gonna be in your genome. I don’t know if that sounds like an interesting explanation or helps explain anything.", "Joseph Henrich - 00:22:41 Well in cultural evolutionary models, a typical thing would be to let genes compete with individual learning to compete with cultural learning. And it turns out the rate of change of the environment affects that. So if the environment is changing slowly, then you should put it all in the genes. And if it’s changing at moderate speed, then culture is the best way to go, and if it’s fast, then individual learning. And the idea, the individual learning is favored because if it’s changing too quickly, the previous generation doesn’t know anything worth knowing because the world you’re dealing with is just so much different from their world. And so it’s this intermediate range.", "So myself and others have argued that the increase in the frequency of change during the Plio-Pleistocene transition 2.5 million years ago is a change that increases the value of cultural transmission of learning from others. I’m not sure what your guest was thinking. I mean as our brains are not very good at solving problems, otherwise the lost European explorers could survive. And human brain size has been declining for the last 10,000 years. So we’ve actually been getting dumber. Fewer neurons, less computational power.", "Dwarkesh - 00:23:48 Why has brain size been declining?", "Joseph Henrich - 00:23:50 Well, the collective brain argument suggests that at a certain size, you’ll begin farming off specialists. So because there’s a store of knowledge in the society, and we can all be generalists and learn how to do all the different skills. But at some point it makes efficiency sense for us to specialize in different skills.", "In order for that to be the case though, we have to have social agreements of some sort that allow us to trade or exchange things like that. So, but then once we’re specializing, we don’t necessarily need as large a brain and we, because we distribute the overall brain power amongst the society. So it could be that we’re becoming more of a superorganism. And you see the same thing in ants. When ants get specialized occupational casts, their individual brains shrink.", "Dwarkesh - 00:24:35 Yeah. I think David Reich’s lab had a result a month or two ago where they showed that the selective pressure on at least the European samples, which is the samples they studied, was that there’s been selection for greater intelligence over the last 10,000 years. I don’t know if you saw that or what you make of that?", "Joseph Henrich - 00:24:52 Well, they didn’t say anything about intelligence. They did use the polygenic score for education. But there was no education 8,000 years ago.", "Dwarkesh - 00:24:58 Which is correlated, right?", "Joseph Henrich - 00:24:59 Yeah, it’s correlated. But then the question is, we don’t know what goes into that, right? Is that actually computational software? I mean, people do well in school because they stick to it and they can sit in the same place for long periods of time.", "Dwarkesh - 00:25:12 I guess it would be interesting for them to sort of deduct the docility hit polygenic score from the education, because we don’t have a polygenic score for intelligence yet, right, so which is why you need some proxy.", "Joseph Henrich - 00:25:22 Well there are IQs, so there are polygenic scores for IQ. I think they actually did that one as well, like you said, it’s correlated with education. But the things that go into giving you an IQ, I mean, the correlation between brain size and IQ is only about, well, across populations it’s .24, so it’s not very big.", "The other thing is IQ is massively misunderstood, so the way to think about IQ, which has been going up over the 20th century, by quite a bit, so if you rescale modern scores to 1900, it’s about 70, it’s called the Flynn effect . I think that those are cognitive abilities for navigating the institutional world that we’ve constructed.", "And a huge mistake would be to assume that those are the right cognitive abilities for the next century. So it’ll be a different constellation. We study herders and cognitive abilities of herders in northern Namibia, and if you’re in northern Namibia, you gotta be able to move through the landscape, so being able to just pick a direction and know where you’re going and not get lost is super important.", "So different suites of cognitive abilities are favored in different environments. This idea that there’s this generalized thing applicable across all human environments just doesn’t… I mean, education massively increases your IQ, right? If you’re uneducated, you have a totally different IQ.", "Dwarkesh - 00:26:42 And then is the way in which your horsepower is directed, is that a thing that you think is basically set by the time you’re an adult? Or, if somebody’s 30 now and the AI thing is gonna happen next and they have to totally reorient away from knowledge work, is that a thing?", "Joseph Henrich - 00:26:58 Well, I think it’s a continuous scale. If you look at human brains, they’re developing and continuing to add new connections and stuff, at least into the mid-20s. Now, there could be even more plasticity after that. Unlike chimps, we’re still not totally myelinized for our whole lives. So there’s definitely room. It’s just there’s less room.", "Dwarkesh - 00:28:20 The kind of social learning for which our learning biases are fit, right? Like where you pay attention to elders because they survived somehow and they’ve accumulated the knowledge from past generations, how much do you think that actually basically applies to the way in which the modern knowledge work economy accumulates knowledge? And how much do you think is just an artifact of the kind of environment our ancestors found themselves in? Right now we’re in Silicon Valley, or in San Francisco, and it’s very common here for a 20-something to make a new product or service which, without that much- they don’t have that much context of how the world works, but they can make big, big innovations and big changes. But at the same time, people on average maybe are more productive in their 40s and 50s in terms of wages or something. So basically, this sort of social learning you’re describing, how much does it actually describe the world as it exists today?", "Joseph Henrich - 00:29:17 Yeah. And so, super important, so we’ll take the patent database and if your same-sex parent- has to be same sex- patents in a particular domain, you’re nine times more likely to patent in that same domain.", "Dwarkesh - 00:29:30 And how do you separate the genetic versus cultural effect there?", "Joseph Henrich - 00:29:32 So these are very fine domains, so we’re talking about, like, natural adhesives versus synthetic adhesives. So unless you think there’s genes for that. And same thing with transistors and electronics, so very fine domains. And if you grow up in Silicon Valley, you’re much more likely to patent, in general, but you’re probably gonna patent in computers. But if you grow up in Boston, it’s gonna be biotech. So having a father or mother who did biotech and then growing up in Boston, you’re even more likely to patent in biotech. And this is true even if you move to New York. So if you look at people in New York who are from Silicon Valley and from Boston, the Bostoners are more likely to patent in biotech, and the Silicon Valley kids are more likely to patent in computing. So all the same rules apply.", "Dwarkesh - 00:30:20 Well, I’m not sure because maybe the location, you learn a lot from the environment. But the fact that it’s just kids, and so, if you think these old rules of cultural accumulation and collective brains apply, you would think, “Ah, you’re 50 by the time you’re writing your first useful line of Python code or something.” When in fact, a lot of these big innovations come from people who are much, much younger, before they should have accumulated much of the know-how that this theory implies.", "Joseph Henrich - 00:30:48 So the model here is what to focus your efforts on, so it’s the throwing model. So if you grow up in Silicon Valley, you focus your efforts on learning how to code or whatever. And so that means that’s where you’re likely to make the innovation.", "Dwarkesh - 00:31:03 Right. But in terms of, you know, when there’s 2% growth a year and a lot of the technologies that are dominant in the economy today, or a lot of the industries, didn’t even exist many generations ago. Basically, does that suggest this model of- should we just mistrust our instincts of who to give prestige to and what kinds of people we want to pay attention to, whether it’s professors or elders or whatever?", "Joseph Henrich - 00:31:28 Yeah, so that’s definitely something I’ve thought about and I wrote about in Secret , which is that, as the rate of cultural change gets faster- and we sort of talked about this with environmental change- the value of older and older members of the previous generation declines because the world that they grew up in, and that they honed their skills to, is quite different from the current world. So you would expect the degree to which, I mean, optimally, you would look less far back or you would look to relatively younger individuals to get your inspiration from because the world they adapted to is closer to the world you’re gonna need to adapt to.", "Will AGI have superhuman cultural learning?", "Dwarkesh - 00:32:00 Yeah. And speaking of Silicon Valley, let me ask you a little bit about AI. So one of the reasons to suspect some incredibly sharp discontinuity from the world as it exists today to a world with AIs. And by AI, I don’t mean just, like, GPT-4, I mean, like, replacement at least for anything you can do on a computer screen, the AI can do.", "One of the reasons we expect this hard discontinuity is that they have potentially the step function increase in social learning and the ability to accumulate knowledge that maybe humans had, or the magnitude of which maybe humans had when, between them and non-human primates. And in particular, the fact that you can just copy everything you know. Like you don’t have to teach a young person, right? The constraints of biology mean you can’t just replicate brains.", "You have much more efficient communication. You don’t have to communicate through words, you can just shoot your brain state across, the population size can grow arbitrarily large. So to the extent of, the collective brain is the size of the population and how interconnected it is, do you just expect we wouldn’t even recognize the kind of world these AIs could make as a result of their cultural skills?", "Joseph Henrich - 00:33:15 Yeah. No, I mean, I, I definitely think it’s pretty interesting, and holds great potential for expanding the collective brain. There are little things in there which might make one worry. So if you study the history of innovation, you find out that, for example, serendipitous meetings are super important. There’s a great paper on Silicon Valley showing that companies will cross-reference each other’s patents more likely when the people at those companies tend to frequent the same coffee shops , and they track people on their cell phones and stuff to figure this out. So serendipitous meetings are important and improper copying. So a huge number of innovations are mistakes where somebody copied incorrectly and then got something better.", "Dwarkesh - 00:33:55 I read this interesting theory that maybe evolution designed transcription and translation to have more errors than it could otherwise have, just so that you can have this sparse reward when you’re close to the right sequence.", "Joseph Henrich - 00:34:08 Yeah, yeah. I think that’s a very interesting area of research, and it makes good sense to me.", "I’m not sure of the current state of the evidence, but there are… Different parts of the genome are more or less susceptible to mutation, which is kind of interesting.", "Dwarkesh - 00:34:21 Yeah. So going back to AI, maybe then another way to phrase it is, look, you’re talking about these serendipitous meetings where you can learn something another person knows. And the great advantage these AIs have is that they can sort of meet with everybody at once. Future versions of AIs, you could really imagine holding the whole internet in context, right? I mean, we’ll be the equivalent of those people isolated in Tasmania according to the AIs, right? Because they just have everything in context. You can get a PhD in every single field, and you can amortize that across all your copies.", "Yeah, so are you sort of banking on, in the next 10 years, you’re just gonna be living in the singularity or something because of your belief in the value of cultural knowledge?", "Joseph Henrich - 00:35:03 Yeah, I see there’s various potentials, and I’m particularly interested in using AIs to augment problem-solving in human groups. So you can imagine getting the humans together because the humans have the big advantage of having stuff they care about, right? There’s stuff they want to invent. So the AI is still a tool at some point. So I’m interested in that, but I’m interested in how these things like running out of training data is gonna be dealt with, the value of making mistakes and serendipity, if you get rid of all those things or how you’re gonna reintroduce them, those kinds of things.", "Dwarkesh - 00:35:39 Yeah. I mean, it’s quite funny because until recently, people were saying the big issue with LLMs is that they hallucinate and make mistakes. And at some level, hallucination is no different than creativity.", "Joseph Henrich - 00:35:50 Well, may- maybe there’s a way to harness that, right? But maybe we just didn’t know how to harness it.", "Dwarkesh - 00:35:53 That’s right. What do you make of the idea that you could have AI firms or AI tribes, whatever way to think about it- is if the effective population size that can communicate with each other is such an important contributor to how much progress a group in history was able to make. If you could just literally run billions of AIs and they have this immediate ability to communicate with each other. And again, I’m not talking about current models, I’m talking about future, human equivalent. Basically I guess I’m just throwing out a bunch of different intuition pumps and I’m curious which one you find most promising or most interesting, or did you just find all of them not as convincing?", "Joseph Henrich - 00:36:31 Yeah, no, I don’t have strong opinions about any of this, but I do think that one thing in order to make all that work is, humans are constantly getting hit with shocks, right? So there’s economic shocks, there’s weather shocks, there’s conflict with other groups, there’s pandemics. And the shocks have big effects on how things go. They inject new information into the system. So I’d worry about a system that is too homogeneous and doesn’t have enough noise adding shocks and new challenges and things like that.", "One of the things missing from our conversation is the kind of creativity that cultural evolution has figured out. So to give you an example, at some point in human history, religions invented big, powerful, moralizing gods. And this may have increased the people’s ability to cooperate. So forget about the technological element. Some of the most important features of innovation over human history have been institutional and things that get people to cooperate.", "So in my work, I argue that one branch of Christianity resulted in the transformation of the families in European societies into small monogamous nuclear families. I’m not sure an AI would have thought of that. Cultural evolution thought of it because of how it affected how the societies operated, so it sorted this out over historical time.", "Dwarkesh - 00:37:48 Yeah. Can I play with that idea?", "Joseph Henrich - 00:37:49 Yeah.", "Dwarkesh - 00:37:50 Because that’s quite interesting. So again, I think this might even reinforce the advantages of AIs in this sense. So, to the extent that this sort of random fluke by the church led to this modernity and the great divergence or whatever, and it was a result of, there’s a bunch of variation, and then you can select over some group that’s doing the right thing.", "The advantage you have with AIs is that you can have much more high fidelity replication of culture and you can explore a wider range of potential cultures. Sorry, and it sounds really vague when you talk about AIs in this way. So what do I mean? One problem companies have today is that, suppose a company’s working really well. Maybe it’s early Apple, or early Tesla, or SpaceX or something. And suppose it has to do with the culture, the people, something, it’s not clear how you replicate that culture, not only across different institutions, right? If I’m running a company, I don’t know how to copy SpaceX. Not only can you not replicate it across institutions, you can’t even replicate it across time.", "So over time, many institutions tend to decay, their culture fades as people leave or die or something. And then imagine if all of SpaceX, at least maybe at the time you thought it was the most effective, are just AIs where you know every single byte, and when you can make a copy of it 1,000 times, and you can put it against every single hardware problem we have in the world. And then you can, if you think there’s another team that might have some different culture which is better, you can do the same thing with them. And culture here includes not only how they think about their work, but also even how does the board make decisions, right? Do they do this Monte Carlo tree simulation ? And there’s so many different things you could do here. So because of the wider range of possibilities you can explore and the fact that you can have higher fidelity transmission of cultural information, maybe the ability to do this random evolution also increases. But I’m curious to get your take on that.", "Joseph Henrich - 00:39:57 Yeah, as a kind of broad sketch, I think that is pretty interesting. I worry about too high a fidelity replication just because it’s important to take the details of historical context and time into account. So that same thing, if you fix it, it might not really work a decade later when everything else around it has changed, right? So it’s kind of a moving target. But you could fix that just by having lots of different variants that are different versions of that. So yeah, that would be cool.", "Dwarkesh - 00:40:26 Sorry, that gives us a good opportunity to go into WEIRD. But before we get there, one more question about before the agricultural revolution. You know, obviously many groups around the world were different and many of those differences were probably inspired by the fact that they were living in different environments, so they needed to come up with different technologies and adaptations to survive best there. How much just random differences that had nothing to do with their environment do you expect there to have been, in terms of… I don’t even know what the right category would be, but in the modern world, you might imagine something like, does a society have slavery or not? And maybe pre-modern societies all had slavery, so that’s not the right way to think about variation, but that kind of…", "Joseph Henrich - 00:41:07 Yep. So I think there’s lots of reason to think there was lots of variation. And one of the ways that researchers study this is they look at ethological variables and they compare it to phylogenetic variables. And here they mean cultural phylogeny.", "So if your ancestral populations had matrilineal, matrilocal social organization, how likely are you to have it, controlling for ecology? And it turns out both of these things matter. People adapt to their ecology, but you can still see the signal of past society. So some degree of fidelity of transmission of social institutions, how we make our baskets, those kinds of things. There’s lots of examples in the modern world, and we see this all over the place. So if you look at gender inequality in the modern world, you find that a history of the plow leads to greater gender inequality. So males had a particular advantage in using the plow because it requires upper body strength.", "This meant males became the dominant force in economic production at the household level. And that even today in populations where most people aren’t farmers, this persists. So this is cultural persistence that has to do with whether you had the plow. So this is work done by Alberto Alesina and Nathan Nunn . A former postdoc of mine now at the Harvard Business School, Anke Becker , has done a similar thing with pastoralism . So if your ancestors were pastoralists, and pastoralists, for various reasons, have quite strong gender norms that persists into the modern world and still shapes, for example, female entrepreneurship.", "Why Industrial Revolution happened in Europe", "Dwarkesh - 00:42:34 Interesting. Yeah, and the reason I’m especially curious about this is because it informs the following question. If the Industrial Revolution didn’t happen in Europe but started somewhere else, how different would the world look like today? So obviously you discussed the fact that breaking apart these kin ties was necessary for the Industrial Revolution, but to the extent that was necessary, whichever place had the Industrial Revolution first would have had that, right? But, I mean, separate from the technologies or cultural practices which were necessary for the Industrial Revolution in the first place, still how different would the world look? How much variation was possible given our level of technology?", "Joseph Henrich - 00:43:13 Yeah, I have a lot of trouble trying to answer that because, I mean, China or some place in the Middle East might have been the obvious alternative place for the Industrial Revolution to happen. But I feel like you have to give those places a lot of stuff that developed only in Europe.", "So for example, universities begin spreading in the High Middle Ages. So you’d want them to have universities. You’d want them to have universal schooling, which began to spread and wasn’t present in these places prior to that. So that begins to spread in the 16th, 17th century. And so by the time you add all this stuff, it basically starts to look a lot like Europe.", "And these are all things that are global now, right? So the universal schooling that we find around the world today, you know, begins with the Protestant Reformation in Germany and then later England. University’s models are the European university, and globalized.", "Dwarkesh - 00:44:07 I guess one obviously very salient example of variance which might not have been replicated, but I’m curious if you think it might have been, is, it seems like the British Empire was the first major institution which decided that slavery is just morally wrong and we’re gonna throw our weight around to eliminate it. And I don’t know if you think that sort of naturally follows the development of social technologies that the Industrial Revolution would have brought about, but that seems super contingent. But I’m curious if you disagree.", "Joseph Henrich - 00:44:40 Yeah. Well, so my story is that the rise of the Industrial Revolution in Europe has to do with the consolidation of Europe’s collective brain. And one of the things that requires is trust in strangers, and at least the beginnings of moral universalism.", "And it’s that moral universalism that eventually causes the British to say, “We’ve got to stop with the slave trade thing.” It’s a moral decision that they made because it’s no longer consistent with the changing moral values over time.", "Dwarkesh - 00:45:07 Right, right. Okay, so we’ve been dancing around the thesis of your book following The Secret , which is The Weirdest People in the World . And before we really jump into it, maybe you can give me a summary of what the thesis there is.", "Joseph Henrich - 00:45:20 Yeah. So the first observation is that there’s a great deal of global psychological variation around the world. So European, American, Australian populations tend to be highly individualistic. They’re inclined towards analytic thinking over holistic thinking. They have a lot of impersonal prosociality, so trust in strangers. They’re against conformity, willingness to cooperate with strangers. So the question is, how can we explain the global variation in these features of psychology?", "And towards the end of the book, I actually connect these features of the psychology to economic differences, including the Industrial Revolution that happened in Europe, which reshapes the world. And the story is that the key event is the spread of a particular form of Christianity into Europe, where the Catholic Church- what becomes the Catholic Church- systematically dismantles the intensive kinship systems in Europe, leading to small monogamous nuclear families.", "And this transformation leads to the creation of new institutions. So by the high Middle Ages, you get the rise of guilds, which are voluntary groups of craftsmen and self-help societies, because people don’t have their families to rely on. People begin moving around. There’s occupational sorting into different occupations. You get urbanization rising. Charter towns are on the rise. Universities pop up. New kinds of monasteries pop up. And then Europe begins to urbanize, and you get new kinds of law that are based around the individual. Contract law. And then eventually this leads to a lot more innovation because ideas are flowing around Europe and then eventually the Industrial Revolution. So that’s the argument in a nutshell.", "Dwarkesh - 00:46:53 Can you explain again what exactly the Church did which led to the kin-based existing system breaking down?", "Joseph Henrich - 00:46:59 Yeah. So the Church outlaws polygyny, and so that stops elite males from having multiple wives and concubines and whatnot, and creating kind of a giant family through that. It outlaws cousin marriage going all the way out to sixth cousins at some point, and that included spiritual kin and other kinds of non-genetic relatives as well as the cousins. It frees up inheritance, so it has inheritance by testimony rather than normal patrilineal inheritance. And a simple example here is in most societies, you inherit access to land corporately, which means you and your brothers and stuff all own the land. And it might be your uncle is actually in charge of the land, your father’s brother. And so you can’t sell it, you can’t move it around, and you’re also tied to it. It’s where your ancestors are buried, and so there’s this big importance of land. So the Church allows people to give land to the Church, the Church becomes the largest landowner in Europe, because you can do it by testament.", "So those are just some of the examples of the ways it transforms the family structure, and eventually you get these monogamous nuclear families which are basically unheard of around the world, at least if you look at the anthropological record. And you can really see this when you can compare, individuals can move to European cities as individuals or nuclear families. In China when you move to the city, you maintain contacts with the folks back home in the village, and people flow back and forth and you get these little enclaves of different clans and stuff in the cities, the connections. And this is really important because your ancestors, and there’s rituals that have to be done back in the home community. But Christianity does away with all those ancestor rituals.", "Dwarkesh - 00:48:35 Okay, great. Let’s jump into it by starting before- much before- even the church. I wonder how much of the things that make Europe weird existed even in antiquity. If you read about how Roman society worked, or how these Greek polities worked, already you have an emphasis on nuclear families. You have these sort of universal norms around, “Hey, we believe in republicanism,” or, “We believe in democracy.” And the structure of the system matters more necessarily than… I guess at least before the empire, right? So yeah, was Europe already weird before the things that its church did?", "Joseph Henrich - 00:49:19 Well, there were certainly some interesting things going on in Greece and Rome. But I don’t think there’s good evidence that you had the kinds of monogamous nuclear families that you would find later. I mean, even European law is built around patrilineage. So if your father’s still alive, you are not a full citizen. You’re in the pater house of these large families. Definitely intensive kinship, for sure, patrilineal, patrilocal.", "Women, of course, don’t have any rights. And republicanism, but the formation of Rome is built around a series of elite families. So it’s a clan operation, and they call it republican because the clans all have some say in what’s going on. So people sometimes see representative government where it’s not really individuals being represented, but families. And so this fools a lot of people into thinking there was a lot more individualistic voting and things like that, whereas there’s actually no voting. So in Greece, in Athens, Athens is unique because it does a bunch of things in the laws of Solon , that break down the intensive kinship. So for example, in Greece, you get monogamy for the first time, and Athens is considered unusual. So males can only marry one Athenian woman, which has potentially positive effects among the competition among Athenian men, but they can have as many slave concubines as they want. So when Christianity spreads, it ends the whole slave concubine thing, which was also common in Rome, and that’s another effect on this whole thing.", "Dwarkesh - 00:50:48 Yeah. But I guess the extent to which these practices were already formed hundreds or thousands of years before the church’s intervention, maybe this suggests that Europe was already on this trajectory and the church didn’t necessarily move the needle that much. Or what do you make of that thesis?", "Joseph Henrich - 00:50:48 Well we know that the… in the book, I talk about data where we can look at contemporary kinship practices, and we can look at the number of centuries that that part of Europe was under the church… so we have a database of the diffusion of bishoprics, and there’s a clear connection between the intensive kinship practices and the number of years under the church.", "Dwarkesh - 00:51:25 Hmm. And then if you think about what the church basically did, I think your work is often used to justify this idea of Chesterton’s Fence , where if you don’t necessarily understand a cultural practice, you should keep it around because… the stuff we were talking about, The Secret , where you don’t understand why the 10 steps lead to the bean being edible, but you should trust the sort of wisdom of the ages.", "If you think about what the church did here, right? Like, isn’t this sort of like a… not necessarily a justification, but at least an example of just breaking down Chesterton’s Fence? The church doesn’t really understand why these kin-based networks that have existed for thousands of years might have utility. It still just gets rid of all of them. And that results in this lottery ticket that leads to The Great Divergence . So is there evidence for anti-Chesterton’s fence from the Weirdest People ?", "Joseph Henrich - 00:52:15 Sure. Sure. So what you’re talking about is just the idea that culture has imbued institutions and various practices with a logic that we might not understand. So it’s not calling for never changing the institutions. It’s saying make sure you figure out the logic and the cost and benefits.", "So I was an expert witness in Canada for the attorney general of British Columbia, asked the Supreme Court of British Columbia, whether the statutes against polygyny were legal. And so my job was just to inform the court that polygyny has this unfortunate effect of the elite and high-status males tend to get a disproportionate share of the wives, and that creates this pool of low-status unmarried men.", "Now you can decide what you want to do with that. But you need to understand the kind of underlying social dynamics that you’re gonna unleash if you legalize or decriminalize polygyny.", "Dwarkesh - 00:53:08 Right. But again, going back to the Church, the Church didn’t understand the positive or negative effects they’re gonna have, right? They just like, “Let’s rip the Band-Aid, let’s see what happens.”", "Joseph Henrich - 00:53:17 That’s right. And it actually benefited the Church because it released people’s responsibilities to their families and allowed them to migrate in and join the Church, and more heavily invest in the Church. And these later prohibitions against priests marrying and stuff was all an effort to get greater investment in the Church, because you weren’t torn between helping your son and then investing in the Church.", "Dwarkesh - 00:53:37 Hmm. And then that end result which benefited the Church, was that their conscious intention in breaking apart these kin-based ties? Or was that just the accidental byproduct?", "Joseph Henrich - 00:53:50 Yeah, I mean, in the record, I’ve never been able to find much evidence that they were thinking about the destruction of kinship ties. There is a great quote from Saint Augustine where he talks about the benefits of forcing people to marry more distantly. But, you know, this is something that was debated repeatedly in church councils all across Europe. And it doesn’t seem to pop up very often.", "Dwarkesh - 00:54:12 And the Church doing this, for obviously selfish reasons, was that in any way related to why the Church spread through Europe in the first place? Or did it end up spreading for separate reasons and it won out over the other potential religious competitors for separate reasons? And basically, was this part of the selective mechanism which allowed the Catholic Church to become dominant in Europe in the first place?", "Joseph Henrich - 00:54:43 Yeah, I think so, although it’s hard to know. There was a lot of randomness going on in the diffusion of the Church. But also as the Church arrived, these places, over a period of time, became more successful. So you can see rising urbanization in the wake of the Church’s arrival. That would have meant more trade, their citizens’ rights began developing because they were trying to attract citizens to the towns. That would have given the Church more cachet, ability to move. There’s also the appearance of these voluntary associations which are new kinds of monasteries. And so the monasteries begin spreading throughout Europe. But they’re all linked in a network because, like the Cistercians , they’re all connected and they have big meetings. So they’re actually spreading a lot of knowledge around Europe as well. So that’s part of this kind of collective brain story.", "Why China, Rome, India got left behind", "Dwarkesh - 00:55:30 And if you compare what’s happening in Europe at this time to the rest of the world, so starting with China: in 1500, the population of England, where a couple centuries later the Industrial Revolution starts, is three million, and then in Ming dynasty China, it’s somewhere between 100 million to 160 million. And if we take your previous perspective of the size of the collective brain really matters a lot, the size of the collective brain in China just seems so much bigger. So what is the best way to understand what went wrong here? Why weren’t they able to use their…", "Joseph Henrich - 00:56:02 Right. So when you’re thinking about China, the first thing to remember is that for a lot of the history, we can actually see that size difference mattering a lot.", "So a lot of European invention, or a lot of stuff used in Europe, is flowing in; gunpowder, paper, printing press, stuff, is flowing into Europe. And so okay, so then what happens after about 1000 CE? Well, the argument is that the destruction of the kinship group opens the floodgates to people moving around. So a recent analysis that we’ve done is, after a bishop arrives at a 1.5 by 1.5 grid cell in Europe, people begin flowing in and out of that grid cell more. And what we do to calculate that is we have a big database of a few million famous people. And we have birth and death locations. So we find out that the Church arrives and suddenly people are free to move around.", "So you have a flow of individuals around, and you have rising urbanization. So Europe passes China around 1200 in the percentage of the population that lives in cities. And cities are where a lot of the action is, cities and towns. And you have the diversification of occupations. So normally clans would specialize in different crafts, and you’d learn from your clan brothers and clan fathers how to do the occupation. In Europe you get guilds, and you get masters and apprenticeships developing where strangers will become an apprentice to a master, learn from him, move somewhere else as a journeyman, learn from that master, and then eventually set up a separate shop. Lots of opportunities for a flow of ideas.", "So it has to do with how the kinship system transformed the movement of people, the rising of urbanization, the nature of guilds, and then eventually you get universities and things like that. So it greatly intensifies the interconnectedness of the collective brain, and the amount of cognitive diversity.", "Dwarkesh - 00:57:44 What do we know about India during this period? Because from what I’ve read, from a perspective of how much written history we have, it’s kind of a black box. But we know that there was trade between India and other parts of the world, and we know it had a big population. Do we know why India before the Mughal period or before the British period wasn’t… yeah. What was the effect of not having maybe Abrahamic gods or.. with the kind of other cultural practices India had?", "Joseph Henrich - 00:58:14 Yeah, definitely not an expert on this one, but some of David Reich’s evidence suggests that the caste system is quite old. Because you can actually see it in the genetic system. And the caste system is not good for innovation, because if you happen to be good at another skill but your caste doesn’t do that skill, you can’t switch over. So it’s gonna prevent the sort of available genetic diversity. Complex families, intensive kinship, there’s reason to think that those things were all important. Yeah, so, patchwork of polities.", "But there was lots of interesting ideas, actually, that are developed in India, and they move into Central Asia, eventually end up in the Islamic world, and then get into Europe. So for example, Arabic numbers are actually Indian numbers. Zero was probably developed… I mean, Indians were huge with numbers. It’s kind of interesting. Perhaps related to the religion.", "Dwarkesh - 00:59:04 Not that this would be a justification for the caste system, but is there any way in which the specialization it engendered would be good from the perspective of this collective brain perspective?", "Joseph Henrich - 00:59:16 So specialization is good. And so the interesting thing about most human societies is specialization automatically- or not automatically- it tends to evolve along some kind of kinship line. So in Oceania, there were different clans. In each clan, there’d be like a canoe-building clan and a warrior clan. So that allows specialization, but it doesn’t allow you to harness the genetic diversity, because it’s not like the canoe-building clan had special canoe-building genes. It’s just that you would pass down this cultural knowledge.", "Now, a better system, but it seems hard to evolve, is one in which we all opt, select, into occupations in which we think we’re good at. But for that world to exist, you need a world of voluntary associations with emphasis on the individual rather than on the group. So this is the world that evolves in Europe, once you demolish the intensive kinship units, because otherwise you’re gonna get castes and clans and all these things we see elsewhere, where we see a division of labor, but it’s transmitted through a kin network, the knowledge.", "Dwarkesh - 01:00:14 And why wasn’t there another sort of stultifying… because we began the conversation by talking about, there does seem to be this cycle where over time, because of cultural differences, because of kin, whatever else, over time groups will tend to diverge. Languages, culture, other things you mentioned, tend to diverge over time. Why didn’t the same thing happen in Europe where, let’s say, in the third century or fifth century, the Church starts doing this stuff, and then by a thousand years later, if you’re part of this guild or whatever, now that’s become like the new kin, and now there’s actually less mobility again?", "Joseph Henrich - 01:00:51 Well, so there was a lingua franca, Latin, and so intellectuals would all write and communicate in Latin for a long time. So even though they were speaking different dialects of French and German and whatnot, they were able to communicate in Latin. And Christendom basically formed an overarching network that helped. And one of the key things that this world religion does, and other world religions do it too, is you have to marry other Christians, but it dissolves the tribal line.", "So Europe has tons of tribes in the pre-Christian world, but because of how you have to marry other Christians, you had, you know, Celtics marrying Franks. And in fact, the early arrival of Christianity into England is when a Frankish princess is marrying an Angle in Kent. And so they’re marrying back and forth. And this is gonna dissolve the tribes because intermarriage, “what’s the kid?”, you know? So that’s how you get rid of the tribes, but you need a world religion to do that.", "Dwarkesh - 01:01:45 Right. And so it seems like there were two very important features that were relevant here. One, it seems like obviously the competition between different European groups which gave incentive for monasteries or universities or cities to attract the best people and make all these advancements. But secondly, the fact that they were descended from a common empire, from the Roman Empire. Basically, in a world where the Roman Empire never existed, do you still have this collective brain in Europe emerge?", "Joseph Henrich - 01:02:22 So I think the Roman Empire plays a big role, because, for example, there were Roman roads in parts of Europe which allow people to flow. And you have communities which were part of Rome and relatively sophisticated. Now, they go into a bit of decline, but there’s still the memory of the empire. So, you know, Charlemagne wants to be crowned emperor of the Holy Roman Empire in 800 even though there’s not very much left of it.", "But it was still an ideal, and that Carolingian Empire actually had real effects because they worked with the pope in order to enforce a lot of the marriage and family programs. And the Carolingians had their own agenda because they were trying to use these marriage and family practices to weaken some of the other aristocratic families. But they had the religious tools that they could put to work, and if you didn’t have those religious beliefs, then you couldn’t put them to work.", "Dwarkesh - 01:03:10 Sometimes people defend ancient conquerors based on a similar idea of they spread knowledge around the world, and they set up these lines of communication. People say that about Alexander and then causing Hellenistic expansion and trade and so forth. They say that about Genghis Khan and the Silk Road . They say… I mean, obviously this led to the Black Plague, so… so it’s tough where that nets out.", "They say that about Napoleon and the Napoleonic Codes and so forth. Where do you basically come out on that story of: have the great conquerors been good for the collective brain or bad for the collective brain?", "Joseph Henrich - 01:03:55 Well, I’ve never actually focused on that. I mean, I guess my immediate reaction was there must be a better way to transmit the knowledge than the whole conquest.", "Dwarkesh - 01:04:05 And then I guess there’s also the open question of how the disruptions that are caused by them… How much did they set back the-", "Joseph Henrich - 01:04:11 And population decline, spreading disease and stuff, so certainly it could have been done better.", "Dwarkesh - 01:04:15 Right. And going back to David Reich’s work, If your theory implies that there should have been a decrease in genetic similarity in Europe as of, after the fifth century or something, as a result of the culture, the Church’s practices, do we see that in the genetic record?", "Joseph Henrich - 01:04:35 Yeah. Yeah, so Europeans are quite well mixed compared to other populations. And so we were talking about India. You know, if you do a principal components analysis of this two-dimensional plot of the genetic similarity you can actually see the class, and people are pretty spread out. If you put the Europeans on the same plot, they’re well mixed.", "Dwarkesh - 01:04:52 Right. But do we see the period of greatest mixing starting in the fifth century?", "Joseph Henrich - 01:04:59 Yeah, good question. I don’t know the answer to that question. So that would be… That’s a nice piece of evidence, so… You know, there’s a couple of groups working on medieval ancient DNA, so hopefully we’ll have more answers on that question. I mean, there is enough now of pre-Roman burials, so Bronze Age type stuff, showing that early Europeans definitely had complex, intensive kin groups, patrilineal, patrilocal resonance kinds of stuff, polygyny.", "Dwarkesh - 01:05:30 And then stepping back, I’ve read there’s so many different theories of the Industrial Revolution and of modernity. And there’s Robert Allen , the economic historian who thinks it’s a result of higher wages because of the Black Death. There’s Gregory Clark , who thinks it’s because of this positive eugenics in England where the upper classes were having more babies. I mean, there’s like 20, 30, probably hundreds of different theories of why it happened, where it happened, and when it happened. And so each one of these, when I read them, sounds plausible. I don’t have a knockdown argument against any of them. But I’m not sure how to think about which one is correct, and yet, if we’re stacking all of these up against each other… why you think this story is any more accurate than any other story that people tell? Is it overdetermined?", "Joseph Henrich - 01:06:24 Well, I mean, the first thing to think about is, those other authors- possibly Greg Clark is an exception, but they don’t think about the psychological variation that exists. So there’s good reason to think there’s all these psychological differences, and you have to believe they don’t cash out. And if you think they exist, then you have to explain them, and so you need a theory that explains them. So Bob Allen, he’s a blank on the psychological variation. So let me give you some evidence that I think is pretty good evidence. So we have a database of the diffusion of Roman bishoprics through Europe. And then what we do is we look at what happens when a bishopric arrives. And I mentioned before, you get a flow of people.", "But another fact is, if you look at the production of famous people, these places where the bishopric has arrived start producing more creative individuals relative to non-creatives. So there’s an uptick, and it keeps going up for centuries, so the relative increase, and so that suggests you’re producing more authors and inventors and writers and whatnot. And then if you take each of those, and you correlate it with modern patents, places that spent more time under the church produced more patents between 1980 and 2014 based on European patent data. And this holds if you just compare regions within the same country. So you can see long-term effects of this, and you can see immediate effects in the historical record on the production of creatives.", "Dwarkesh - 01:07:45 So what would you make of a theory that says, “Look, all of these theories are basically correct. All of these stories are basically correct.” And it’s them stacking up that leads to the Industrial Revolution rather than any one of them being the most important proximal cause. And maybe there’s reasons, like, let’s say there’s 10 of these stories and you had to stack them all up and China had five of them, but they also had maybe independently, like three other ones, but they also had two stories which go against them or something. And it’s like, no one of them is that proximately important, it’s their combination, right? So, I guess your theory seems compatible with Robert Allen’s that coal being cheap and wages being high was important.", "Joseph Henrich - 01:08:30 Well, I guess one of the things that I think is a mistake is to focus too narrowly on England, because England was benefiting from ideas that were flowing in from France. There was a lot of great science being done in France. And, I don’t think Gregory Clark’s right. I can explain almost everything he explains with cultural evolution, and he doesn’t even really take that seriously.", "Dwarkesh - 01:08:51 Oh, interesting.", "Joseph Henrich - 01:08:52 Yeah. So patience, one of the things he argues is patience. In WEIRD, I use the exact same data- I actually get the data from him- to show that you have this increase in patience. But we know that people can culturally learn patience, and this can all be culturally transmitted, and in this world, it leads to more success. So if you’re the kind of person who doesn’t waste a lot of his money and we begin to value thrift and stuff, which Protestantism does, then we should expect there to be an increase in fewer murders and lower interest rates.", "Dwarkesh - 01:09:20 What’s the reason to think that… I mean, his main explanation, I think is genetic, right? Is there some reason to think that that is less likely to be as important a factor as culture?", "Joseph Henrich - 01:09:31 Well, we know that populations- migrants- into the US and Europe shift their psychology over a few generations of being there, and we know there’s been natural experiments done by economists like Chris Blattman where you actively try to teach people, train them essentially, to discount the future less. And that seems to be culturally transmittable. So this suggests that- I can’t rule out that there’s been any genetic evolution, but I can show that culture can operate on this quickly and powerfully.", "Dwarkesh - 01:10:00 Right. I think one of his key pieces of evidence, I don’t remember the exact numbers on this or even the particular detail, but it was something like if you look at the inheritance of wealth or titles or- I forget what the exact thing is- the correlation across generations points to the kind of pattern you would see with genetic inheritance rather than cultural spreading.", "Joseph Henrich - 01:10:28 Yeah. He doesn’t actually fit any cultural evolutionary models, and if you include that people are learning from parents and learning from their social milieu, your parents determine where you are in the social milieu, so you’re surrounded by people who are being more patient and behaving in certain economic ways, then that’s gonna have a huge effect. I just told you about how if you grow up in Silicon Valley, you’re gonna patent in computers. It’s not because you have genes for computers.", "Dwarkesh - 01:10:52 Yeah. Given the persistence of these cultural effects, how should that change how we think about whether it makes sense to have more immigration from societies that are already WEIRD or whether that matters or not?", "Joseph Henrich - 01:11:07 Right. So we have a paper under review right now in which we show that from the 1850 roughly to 1940, a big driver of US innovation both the quality and quantity of patents, is the cultural diversity of counties. And we actually use immigration as a kind of way of showing the causal effect of this. So what you want is a lot of cultural variation. Now, if you get people from societies that are more distant from whatever the current US culture is at the time, it’s gonna take more time until they’re able to fully integrate. So if they’re coming from [a] very distrusting society, they’re not gonna be able to latch into the collective brain immediately.", "So you see this in around 1900. There’s data from people, mostly coming from Southern Italy and coming from Germany and Britain, and the Southern Italians are coming from a society that has a high distrust of strangers. So their immediate effect on the patenting system is low. But the strength of these cultural traits seems to go down by an order of magnitude each generation. So the second generation, you can still see where they came from, but they’re a lot more like the majority culture than their parents. And then it goes down another order of magnitude. So yeah, so you just have to do more work essentially to assimilate people, enculturate them. But the potential value of having diverse ways of thinking, different ideas, can be really valuable, I think.", "Dwarkesh - 01:12:35 What do you make of, so Garett Jones is an economist who has this, he calls this spaghetti theory of assimilation. I don’t know if you’ve heard of this. The idea is basically, look, immigrants do assimilate, but they also assimilate us, and so whatever cultural practices they have will, in proportion to their size of the population, have an effect on wherever they migrate to.", "And his example is when Italians came to America, spaghetti, which is traditionally an Italian dish, also became an American dish, right? So they assimilated American culture as well. And sub in traits like trust or whatever there. Basically, this sort of reciprocal assimilation has implications of whether we want immigration from non-WEIRD societies. What do you make of that idea?", "Joseph Henrich - 01:13:26 I think that the spaghetti example or the pizza- pizza or spaghetti, both good examples- is a great thing, right? Because American food is a fusion of cuisine from lots of societies, and that leads to quite good food. Same thing you can see [in] music. Things like jazz come, have African rhythms in them. And so rock and roll is along that lineage of-", "Dwarkesh - 01:13:54 But we don’t want, like, do we want the diversity in low trust and high trust?", "Joseph Henrich - 01:13:58 Well, the question is, do social interactional traits operate the same way as these things? And if you have ethnic enclaves, then you’re gonna get low trust in those ethnic enclaves. You get things like the mafia, right?", "So you don’t want that. But if you had an immigration policy that distributed people, and in a high-trust society, you benefit by being higher trust because you can do things like collaborate and start companies and stuff, which you have to be high-trust to do that. Otherwise, you don’t do it in a very effective way. So there is a pressure for low-trust people to become more high-trust. But if you’re in mostly high-trust people, there’s not an incentive for you to move down. And social interactional traits are different than things like food types.", "Dwarkesh - 01:14:45 In psychology, there’s been this problem of the replication crisis where a lot of the main results are hard to replicate and it’s not clear how much of this is real science. Have you looked into how many of the WEIRD results are based on, of the differences in psychology between different populations and so forth? How much of that actually replicates?", "Joseph Henrich - 01:15:05 Yeah, my sense is it replicates quite well. So we published our paper in 2010, and in, oh, late 20 odds, Armin Falk , who’s an economist, and a bunch of others, Ahnke Becker , Ben Enke , published a paper where they measured economic preferences in 80,000 people around the world . And that just showed big variation in things like patience, various kinds of reciprocity, altruism, so the kinds of things we would expect. And then other large-scale projects have similarly shown lots of variation.", "Dwarkesh - 01:15:40 The groups we have today, which are not WEIRD, when you study them, do you think that the fact that they have resisted modernity for so long suggests that they actually are weird in their own way? Which is to say that this is not representative of the way that Europeans might’ve been thinking in the second century; rather it’s like the Hadza’s particular weirdness leads them to be averse to modernity in a way that is unique?", "Joseph Henrich - 01:16:10 Well, that’s always a concern, and it’s especially a concern when you’re studying hunter-gatherers because a lot of the hunter-gatherers that are left in the world today are [in] places where agriculturalists couldn’t easily move to.", "So that’s something we’re thinking about all the time. The ways around that are to go to places where agriculturalists didn’t go or couldn’t go. So we have a lot of good ethnographic history on people in Australia, and then we have the Arctic populations, which, you know, big sections of Paleolithic Europe were probably kinda like Northern Canada is today. So at least environmentally it’s not crazy, because, you know, Ice Age Europe. And places like the Aleutian Islands , the West Coast of California, we have lots of ethnographic evidence, and that there was no agriculture there.", "So just putting together all these different lines of evidence help us develop a picture. I wouldn’t wanna bank everything on the Hadza. Of course, we can’t do experiments with some of those groups. But we can see whether the Hadza look like these other hunter-gatherer groups.", "Dwarkesh - 01:17:10 Oh, speaking of the Ice Age: do you have a take on why before the Ice Age agriculture didn’t develop even though genetically there probably weren’t that big a difference between, like, 20,000 years ago versus 10,000 years ago?", "Joseph Henrich - 01:17:20 Yeah, I don’t have a clear picture of that. It is the case that the Holocene was a particularly long stretch, a particularly long interglacial. There were some long interglacials, but the soonest one before that was 120,000 years ago.", "Other than that, they were getting broken up. I sometimes wonder if we may someday figure out that there was actually a little bit of agriculture. And it would’ve all been destroyed by the Ice Age, right? So it’s not clear that there would be any heritage of it left. So we would have to find some remnants of some domesticated crops.", "Dwarkesh - 01:17:52 Although I was asking David Reich about this, and I forget the reason he said, but he did say that if there were societies before the Ice Age, we would have evidence of them in the archeological record or their genetic record or something like that.", "Joseph Henrich - 01:18:05 Yeah, if they had gotten to any scale. But we know from modern societies, there are groups, for example, in the Amazon that have root-based agriculture that wouldn’t leave any archeological record.", "Dwarkesh - 01:18:17 This is a question I also asked to David Reich. There’s, you know, with modern LMs and just generally with newer techniques for processing information and understanding evidence, there’s a potential for answering questions that maybe we couldn’t have done before, especially from maybe a cultural perspective where you can actually consider lots of different cultural artifacts at once in the context of you can run cultural simulations or something. Maybe those are more fanciful ideas, maybe there’s more practical ideas. But if there’s one question you have which it’s sort of bottlenecked by having the right data or being able to process the data the right way and future LLMs or AIs could help us there, does something immediately come to mind?", "Joseph Henrich - 01:19:05 Well, this gives me a chance to tell you something we’ve been working on. So we were talking about this kinship hypothesis, the idea that kinship intensity affects psychology. And when I presented the ideas that are in The Weirdest People in the World , I get hard times sometimes from historians who will have some kind of very European-specific story about why this happened that has to do with European royalty or coal or something like that.", "And so my move is to not try to get into the weeds and start reading Latin texts, but to go test it somewhere else. So if it’s true that kinship intensity should work like this, then we should be able to test it somewhere else. So we have a large corpus, two different corpus from China. And so we have late-imperial China, something called the Gazetteers , which were produced across the prefectures, and they had a stylized content. So this is the same genre, thousands of books, and then we have about 7,000 books going back 2,000 years in Chinese history. And the techniques that the AI is allowing us to do is to get measurements of psychology from the texts. So we take a questionnaire. This is a technique developed by my post-doc, Mohamed Attari. We take a psychological questionnaire that’s been validated, it’s in English, and we wanna get an equivalent in ancient Chinese.", "So we run it through a semantic similarity comparison, and we look for quotes from the ancient Chinese corpora that match each sentence in the English corpora. And then we rebuild the questionnaire in ancient Chinese. So that’s our psychological measure for something like individualism or collectivism or moral universalism. And then we take each book or each paragraph in each book, and we do a comparison, cosine similarity comparison, between the two sets of text there, and that allows us to stamp each paragraph with a measure of individualism for example. And then we do that for the entire corpora. And this allows us to track psychological change across space and time in China. And then we can correlate that with kinship intensity, and we get the same correlations that we do in Europe.", "Loss of cultural variance in modern world", "Dwarkesh - 01:21:09 Interesting. How worried are you about the fact that, if you think about modernity, was a result of finding this one cultural variant- at least according to your story- which then helps us develop the better technologies, all these new institutions and so forth. How worried are you about the fact that the modern world because of the spread of WEIRD, has just much less cultural variants and because of this monoculture, a potential variant which might be useful in the next transition just wouldn’t emerge?", "Joseph Henrich - 01:21:43 Yeah, I am worried about that, because just the destruction of languages. So we’re losing languages left and right. English has a particular form that you don’t find in lots of languages, so that’s just an example of the kind of cultural diversity we’re losing. So that’s definitely a worry. I do think that we’re getting new variants. Japan adopts a lot of WEIRD institutions. But it’s really creating a new third unique thing. It’s got a bunch of Weird elements, but Japanese law, despite being the same as the US, operates very differently. So unlike Americans, Japanese don’t tend to sue each other and they tend to use mediation and things like that, even though the superficial law is pretty similar. So I do think there is creation of novelty which could be useful. But yeah, in general, I worry about the loss of novelty.", "I also should say that I think that the sort of impersonal world of impersonal institutions in WEIRD psychology is quite fragile. For example, in the US in rural areas, you see moral universality declining since 2008. This is based on John Haidt ’s YourMorals.org data. And so there’s an increasing difference in the morality of urban areas versus rural areas in the US.", "Dwarkesh - 01:22:58 Hmm. I’m sure you get asked about this a lot, but fertility seems to be declining around the world, and there doesn’t seem to be any existing cultural variance, other than maybe the Amish, who can resist it. Do you have takes from a cultural anthropologist perspective of what’s happening here? And a follow-on question here is, should we really encourage these cultural enclaves like the Amish to the extent that this anti-fertility meme is so viral that they can’t be part of the common culture and still preserve fertility, you really need this closed-off societies in order to preserve fertility. How seriously do you take that idea?", "Joseph Henrich - 01:23:38 Well, I definitely take declining fertility pretty seriously. And especially if you’re a collective brain guy like me, just having fewer people is gonna be a big problem. And I think that there will be spread of ideologies or religions or something which are pronatalist, and those groups will have a big advantage in cultural evolution because a community that is pronatalist tends to produce more pronatalist babies. Christianity spread using that. Mormonism spread in the 19th century using that trick. So I just feel like cultural evolution is gonna find the combination and there’s gonna be some pronatalist groups spreading.", "Dwarkesh - 01:24:19 There are a lot of different societies in the world today. Is there some explanation of why none of them have the existing variants necessary to keep fertility high?", "Joseph Henrich - 01:24:29 Well, from a cultural evolutionary perspective, this is relatively new, right? So the demographic transition is only really late 19th century. Lots of the world is just getting hit after 1970. And we also have things like rising female labor force participation which is gonna stifle it, rising female education. And so once that stuff maxes out, then you’ll see variation among different groups in terms of the number of babies they produce, right? And this thing unfolds in demographic time scales, so we’re not gonna see it for a few generations, but there’ll be some group somewhere that will be producing more babies than everybody else. And the reason why I think religion is the likely one is because people do things because they think God wants to do it. And if people come to think that God wants them to have more babies because it’s a way of worshiping him or getting to heaven or whatever their religious configuration is, then that’ll be a group that produces more babies. I mean, Catholics were defying the demographic transition for a while. They just seem to have stopped.", "Dwarkesh - 01:25:33 Right. How worried are you about the fact that… I mean, you discuss in WEIRD that throughout this period in European history, European states are at war most of the time. How worried are you about the fact that, because of a decline in war between great powers, that this build-up in cultural mutational load that you were talking about or maybe where things just get less efficient over time and there’s no selective pressure to get rid of that inefficiency. How worried are you about that just making governments or nations less efficient over time because there’s no outer loop loss function that says if you mess up enough your country might not exist, or your group might not exist? Which we were talking about the waves of Yamnaya or the Anatolian hunter-gatherers just conquering everything. If that doesn’t happen, will we just see culture and nations and governments degrade over time?", "Joseph Henrich - 01:26:31 I think the answer to that is yes. The one way that countries or whatever the political institution is could address that would be doing this thing that I’ve curiously called “domesticating the competition”. So sports teams constantly renew themselves by competing. Firms live and die and renew themselves over time. Like you said, companies will start off being super great and then they get too big and then they get kind of inefficient and then eventually they disappear. None of them last forever. And that seems to be true of political units. But in principle, you could have at least some renewal processes. So democracy potentially provides a renewal process, although there are things like the institutions that the government builds that are hard to renew. So one simple political idea is I think when you create a new department, it should have the same thing that cells have where at a certain time, they time out and you gotta make a new one. And that’s because just the way human bureaucracies, institutions work is they kind of corrode from the interior, from the inside, just the way cancer spreads in a cell. So you just gotta kill it and make a new one. So we could institute that. But I guess the idea hasn’t quite caught on yet. I mean, there are bits and pieces of it around, so it’s not like it’s unused.", "Dwarkesh - 01:27:49 Yeah, I guess maybe the thing that’s less understood or is not appreciated is that the reason that over time in history institutions have improved is because of the selective process that at least at certain levels of selection no longer exists.", "Joseph Henrich - 01:28:04 Yeah. I mean, the Roman Empire didn’t say, “You know, we gotta redo everything.” “Why not adopt democracy and we could have a republic.” Nobody does that. Things fall apart.", "Dwarkesh - 01:28:15 The researcher Stuart Armstrong has this idea called Chesterton’s metafence . And here’s what it states: In our current system, democratic market economies with large governments, the common practice of taking down Chesterton’s fences is a process which seems well established and has a decent track record and should not be unduly interfered with.", "Joseph Henrich - 01:28:38 Can you say any more about what the author has in mind? I would like an example or something.", "Dwarkesh - 01:28:43 Yes. So I think this is a rebuttal against this common idea that we shouldn’t be arbitrarily changing- like suppose a new technology comes about and we’re worried about the risk it might pose to society. Suppose the young kids are doing something different culturally and we’re worried about like what effect that might have on the culture and so forth.", "He’s basically saying, “Look, this has been happening for five centuries and this process by which new cultural variants enter the mix and so forth has worked really well for us, even if it’s happening at a rapid clip and we shouldn’t interfere with it.”", "Joseph Henrich - 01:29:22 Well I don’t think I have strong feelings on this. I’m always focused on trying to understand the process of cultural evolution, and it is true that people often resist cultural change that in retrospect we think is good. But of course when you begin to make cities, you lead to all kinds of epidemic diseases. So if you’d anticipated that, you might have worked on some of the public health procedures before you built the cities. So foresight can be good.", "Dwarkesh - 01:29:48 Right. I guess the sort of bigger question here is, it makes sense in the kind of environment our ancestors faced why they’d have intuitions against progress, because if you already have technologies that are well fit to the environment, any change is likely to be deleterious. And whether we find ourselves in a world that’s different enough where we can just, very intentionally disregard our worries about changes in culture or technology, or whether it’s similar enough that we should actually… I mean, there’s two ways you can interpret The Secret , right? One is, “Oh, look, this is why we should care more about culture” and so forth. Another is just, “Oh, wow, they’re living in such a different world. I understand why my intuition against progress- where it comes from, and now I can totally disregard it.”", "Joseph Henrich - 01:30:34 Right. So I would say that the lesson from Secret is not to disregard it, but just to be cautious because we could be dismantling things that are really important for the structure of society. And you shouldn’t just dismiss valuable cultural practices as the relics of a medieval age, or the relics of a pre-enlightened age or something like that.", "So ritual is a good example because rituals seem to have real psychological benefits in binding people in community and helping develop self-regulation. But it’s easy to be an atheist like me and say, “Ah, rituals are stupid. We gotta stop doing them.” Turns out they’re doing a bunch of stuff and if you don’t wanna lose that stuff, you gotta figure out another solution.", "Is individual genius real?", "Dwarkesh - 01:31:20 A question that arises in my head is, look, we began this conversation talking about all these societies in the past, even when they figured out something successful, they did not themselves know why it was successful and so they just had to ascribe it to custom. They just say like, “Oh, this is the way we always made bows” or hunted or something. Why think that we have an answer for why the Industrial Revolution happened? Why think it’s any different than the Inuit trying to explain why they make their bows a certain way?", "Joseph Henrich - 01:31:49 Oh, you mean, how or what we’re up to is different?", "Dwarkesh - 01:31:51 Yeah.", "Joseph Henrich - 01:31:51 Well, I mean, we’re trying to apply science, which has been very successful in all these different areas. So, you know, evidence, and we all put out our arguments and we go through the process and some evidence is better than others and that’s just our epistemology.", "And I’m interested in the cultural evolution of epistemologies. So societies have varied over time in what counts as good evidence and what counts as a good argument. So one of the psychological changes that we see emerging in Europe compared to other places is how important is what the ancients say.", "So in lots of societies, if someone says something and you can say Aristotle or Confucius didn’t believe that, then it’s like, “Oh gosh, I guess I’m sunk.” Whereas at some point, Europeans decided, “Actually who cares what Aristotle thought. We know a lot more than he did.” So this is a big epistemological change. And then the emphasis on empirical evidence in science is not something you find in earlier traditions within Europe, and it’s quite variable across different societies.", "So you should think about the very standards by which we count a good argument and good evidence is itself culturally evolved.", "Dwarkesh - 01:32:56 Yeah. And how much do you worry about the lack of cultural variation in science in particular, where we have like one big institution, the NIH , which funds most science. And there’s similar processes of consensus and peer review and whatever, that educated whether you get a grant, whether work is accepted, whether it’s considered worthy scholarship and so forth. Should we think that maybe we should have more variation in those methods because maybe the scientific method isn’t complete, there’s epistemic tools could be improved or it could be different and so forth?", "Joseph Henrich - 01:33:38 Yeah, absolutely. So polycentricity . There should be lots of difference, competition amongst these groups, no single funding sources… lots of different priorities. One thing I point out in this book I’m working on is that scientific papers tend to be more impactful when the authors are from more diverse societies.", "But interestingly, people are biased to work with people from their own society. So what we actually do is different than what would maximize innovation.", "Dwarkesh - 01:34:05 I’m sure you’re gonna address this in the book, but I’m already curious now. It seems like the internet hasn’t been as big a deal as somebody who’s thinking from this zoomed out collective brain picture might have anticipated in the ’90s. Maybe it’s boosted the rate of innovation somewhat, although it’s not clear just looking at growth rates or productivity growth rates or something that it has. Why didn’t the internet cause the equivalent of another round of urbanization in our ability to do research or economic productivity or so forth?", "Joseph Henrich - 01:34:40 Yeah. And that’s something I’m really interested in figuring out. And one thing that I’m still- I don’t have a full answer to this question, but one thing that is clear is that there’s something special about face-to-face interactions.", "So something like whether even in the 21st century, whether two cities have direct long haul flights between them or direct flights between them increases the flow of ideas between those places. But we know these places are connected by the internet, whereas that effect doesn’t seem as good. And part of this is that people have to build trust, probably, before we start sharing ideas. And the more different someone is from us, the more you need that trust. So some of the research suggests that this effect is even bigger when places are more culturally different. And this is interesting, because they’re probably more valuable to each other when they’re more culturally different. But that’s then the face time and being in the same place is even more valuable. So that’s the kind of direction I’m going. And also, I still think that there should be exactly what you said, which is that the internet should have caused more innovation. I’m just not satisfied that we fully determined the best way to figure that out.", "Dwarkesh - 01:35:55 You had a very interesting chapter in Weird where you talk about… a lot of innovation through history is more serendipitous than we imagine, and it’s the result of the collective brain being big enough to discover these things. But if you look at the sort of track record in science and research, there do seem to be certain individuals who make independent discoveries across many different fields that are each quite important, your Newtons and Einsteins and John von Neumanns and so forth. And if it was just the result of the right person is in the right lab who sees the right observation at the right time, this repeated excellence by certain people seems less explainable. So I’m curious if the collective brains theory can explain what’s going on there.", "Joseph Henrich - 01:36:44 Right. Sometimes I’m interpreted as saying that there are no genetic differences among individuals, [I] definitely think there are genetic differences among individuals that affect their likelihood. But I think when we take the individual, we often import into our thinking about them the person’s life history. So for example Einstein, when he was a patent clerk, so he wasn’t succeeding as an academic, he and his friends got together in something called the Olympia Academy , they called themselves, which was just a group of, like, five people, who would get together and they would read the interesting books at the time. And if you look at the books that they read, and historians have done a lot of work on this, all the major ideas that go into special relativity were read and processed by the Olympia Academy before they do it. So the idea that people think that time is relative; well, people were kinda talking about that, and they were talking about multiple dimensions. And like Henri Poincaré , at the same time, comes up with the same equation as Einstein, but he doesn’t give it as radical an interpretation as Einstein does. He was thinking of the equations as kind of fudge factors, trying to make the math work. So there was an almost simultaneous invention of special relativity, and the ideas were all circulating. And Einstein happened to be in a place which allowed him to put all those things together.", "As a patent clerk, he was actually processing devices to synchronize trains in different parts of Europe. And so he had to think about the amount of time a signal coming from different parts of the world. And there were a bunch of these. And we know the patents that went through, and we know that he had to look at these, there were only two guys. So he happened to be in a particular place and time in terms of the ideas circulating and what he was doing on a day-to-day basis that really did give him an edge. Now, you could then say, “Well, what about general relativity?” Well, there were a couple other guys who were probably gonna get general relativity. And Einstein himself was worried that he was gonna get scooped, because once you get the one, it’s just a matter of- I mean, it’s not “just” a matter of figuring out, the math is really hard. But there were a number of other people who probably would’ve gotten the math.", "Dwarkesh - 01:38:48 Yes. I guess then there’s also still the question of why, in 1905, Einstein himself discovers not only special relativity, but Brownian motion and the two other things, right? But I guess you would just say it’s the result of he had the right reading group?", "Joseph Henrich - 01:39:05 I mean, obviously he’s a special individual. Although he did spend the whole second half of his career trying to show why quantum mechanics was wrong. So his Brownian motion paper is foundational in quantum mechanics. But then he decides God doesn’t play dice with the universe, and he doesn’t like the stochastic nature of quantum mechanics, which has more or less proved true, and it’s part of our modern technology and stuff like that. And he spends the whole second half of his career fighting it. So that’s a case where his intuitions didn’t serve him well. And he turned out to be wrong for decades.", "Dwarkesh - 01:39:39 And, sorry, I think that’s also a case in which your point about having the right ideas available to you is important, because, obviously I don’t know much about this, but from what I understand, the people who believe in the many worlds interpretation believe that Einstein was on the right track, and he just didn’t have this idea of the ever-ending multiverse available to him, which would’ve explained his…", "Joseph Henrich - 01:40:04 That there’s this branching thing, and just a matter of figuring out which branch you’re on.", "Dwarkesh - 01:40:08 Right. And so there’s no probability.", "Joseph Henrich - 01:40:09 Yeah, yeah.", "Dwarkesh - 01:40:09 Yeah. Okay, so we were just discussing, look, Einstein was reading the right things and making sure he had enough context to come up with these big discoveries. You’re somebody who has connected ideas across many different fields. How would you describe the input function for the way in which you come up with new ideas, and how much has it been informed by… I mean, you were like, you started off as an aerospace engineer, and you’ve done anthropology across so many different societies, on fieldwork and so forth. How do you basically think about the Joseph Henrich production function?", "Joseph Henrich - 01:40:45 Yeah, I mean I try to implement the ideas of the collective brain. So in my lab at Harvard, I have post-docs and graduate students with backgrounds in evolutionary biology, psychology, anthropology, economics. I mean, some of my favorite collaborators are economists. And we’re just bringing ideas from very different places, but across the social sciences and even the biological sciences, we’re often trying to explain the same kind of phenomena, economic decision-making, cooperation, things like ethnicity, why does it exist, stuff like that. So the way we silo science is a big problem, and especially in the social science, I think it’s even a bigger problem.", "So I’ve been a professor of anthropology, psychology, economics, and evolutionary biology, and just even the standards of evidence and what constitutes good research and how you tackle a problem really varies. So the academic disciplines are like different cultural worlds, different tribes, and just pulling the best from them has kind of been my approach.", "Dwarkesh - 01:41:45 And among the more polymathic scholars, do you guys have, I don’t know… do you and David and a couple of the other ones in this category, is there some shared group you guys have, or, is it…", "Joseph Henrich - 01:42:02 No, it’s a series of different networks. So David Reich and I, for example, are always talking about how we wanna read culture from the genes. So if humans have had a long history of gene-culture co-evolution and we wanna understand what Paleolithic populations were doing, we might be able to see imprints of it in the genes.", "So in, in my own work, for example, we’ve measured cousin marriage across different populations and shown it correlates with runs of homozygosity in the genes. Something like polygeny can be revealed by looking at the ratio, the variance in Y chromosomes to X chromosomes. Fire: we probably have some special genetic adaptations for dealing with the toxins in fire. So if you wanna know when humans got fire, and you see the gene that gives us immunity to toxins in fire, then we can infer that fire is older than that, right?", "Dwarkesh - 01:42:53 Interesting. And, so obviously the input from different scientific fields in your work is obvious. But from a more philosophical perspective, people compare your work in the same tradition as Burke or Chesterton or something. Do you personally find that your philosophy has been informed by reading them, or is it just independently converging on some of the same themes?", "Joseph Henrich - 01:43:18 Yeah, I mean, any convergence is completely independent, because I haven’t really spent much time at all. I mean, I’ve gone back. After people said, “Your work is kinda like Hayek ,” I went back and read some Hayek. I did read The Fatal Conceit in graduate school, but aside from that, I didn’t read very much. And then same thing with Adam Smith . I mean, I’ve tackled The Wealth of Nations , and I eventually read The Theory of Moral Sentiments . But it was pretty far into the process. I had picked up on themes they developed before I knew they had developed them.", "IQ and collective brains", "Dwarkesh - 01:43:49 I want to suggest that maybe you should add a couple of AI people in your rotation.", "Joseph Henrich - 01:43:50 Ok.", "Dwarkesh - 01:43:51 Just because I think this perspective is incredibly informative, what I think is maybe the most important question of our period, of what does this transition towards a society of AIs look like? And, sorry to bring it back to AI, but one other consideration here that your work has really informed me on, is: I was sort of maybe hyping up how big a deal I expect societies of AIs to be, but something your work informs is the idea of a single superintelligence is not the place in which to expect these big impacts of John von Neumann times 10.", "But then again, there is a key advantage in the scope of social learning they can do as a society, but, like, the idea of a single superintelligence being super powerful is maybe less likely as a result of your work. And I, anyways, I don’t know what you think about that interpretation, but…", "Joseph Henrich - 01:44:44 Yeah, I think that’s right because the assumption, the sort of model that people seem to carry around in their heads is that humans have done all these things because of our individual brain power when really, it’s been the power of cultural evolution and a network of minds gnawing away at problems over long periods of time and gradually accumulating not just the obvious tools, but also the cognitive tools for, for addressing these things.", "Dwarkesh - 01:45:08 I buy the idea that individual IQ isn’t the most relevant factor to understanding why discoveries are made. It’s not about having the right genius or something. But what’s the reason for thinking that the average IQ of a society isn’t super important in how much progress a society’s gonna make, not just the population size?", "Joseph Henrich - 01:45:28 Right. So the first thing that we need to do though is to zoom back out and think about what we mean by IQ. And so Michael Muthu Krishnan and I have made the case that IQ is just a set of cognitive abilities that leads to success in 20th century contemporary institutions that have come to dominate the world. So it’s a set, it’s a culturally evolved system, and we talked before about the Flynn effect which illustrates that, and the fact that IQ is associated with all these positive outcomes now, but certainly wouldn’t be in pastoralist societies.", "So that would mean that if you raise the average IQ of your society, it might lead them to have more abstract thoughts and do better science. And so that’s, that certainly fits. But the interesting thing for me is that the world of the future is gonna be quite different than the one now. So the set of cognitive abilities which is gonna be favored in the new AI world is not gonna be the same set that was favored in 1900 when they designed the IQ test. So for example, in the IQ test they ask you to remember digits backwards. I’m not sure how useful remembering lists of digits backwards [is]. It was in a previous world, where we had to write everything down and remember a lot of stuff. But in some sense, we’re interfacing with our technology and we’ve got to figure out what are the set of cognitive abilities which is gonna make people best able to solve problems? And like we talked about, it’s even the case that the most creative people aren’t the highest IQ people.", "I guess one of the things I’m trying to say is that the minds that might lead us into the new world might not be the ones that have the highest IQ because once you’re sort of augmented with AI and all these kind of technological aids we have, the specialized thing that leads someone to do something creative- probably not gonna be the same abilities that did it in 1910.", "Dwarkesh - 01:47:21 Yeah. I guess my real opinion is that, if in the world where that’s true- and I think that’s probably gonna be true- then it really doesn’t matter at all because AIs will be doing everything. But in the world where, let’s say AI plateaus or something, then I would expect roughly the same kinds of skills that have mattered for the last couple of centuries to keep mattering for a modern, technological society, which is analytical thinking and the ability to understand science and technology and so forth.", "Joseph Henrich - 01:47:54 But maybe the AI world you’re imagining is different than the one I’m imagining, but I still think that people are gonna figure out what problems we need to solve, unless we’re just gonna tell the AIs to figure out what the problems are and then solve them.", "They might not be that good at that. I don’t know. We’ll see.", "Dwarkesh - 01:48:10 You shared with me this draft of work about collective brains, where you show that ants have developed many of the technologies that humans have, like farming, and livestock, and division of labor and so forth. And so maybe there’s some amount of blind selection, and it doesn’t matter if it’s natural selection, if it’s cultural selection, it’s the size of the group which can go through this learning process that matters, and how many people are available to experience the learnings or get the selection process acted upon them.", "But I guess the big difference in where maybe individual IQ comes back into the picture here is, I don’t expect there to be any amount of natural selection which will allow ants to land on the moon or make a computer chip or do, you know, make a nuclear fusion plant or something. And so is the kind of broad generalization we see with humans, and we might see to an even greater extent with AI, a product of you’re really bottlenecked by the IQ of the individual? There’s no amount of collective learning that can get you to the moon if you’re an ant?", "Joseph Henrich - 01:49:33 Right. Well, one thing to keep in mind is that most human societies over most of human evolutionary history didn’t get to the moon. So this is one particular group of humans. And one of the things we talked about earlier in this conversation was the cultural evolution of epistemology.", "So it was the improvement in our what constitutes evidence, what constitutes a good argument that allows us to get to science and accumulate this kind of knowledge to do these kinds of things. So I see that as part of a continuous trajectory. But it’s just that we have new cognitive tools.", "Dwarkesh - 01:50:05 Right. I guess, but… so there’s a really interesting point that even the epistemic tools which let us get to the moon are themselves a product of cultural evolution. But again, that seems bottlenecked by the fact that if you’re just dealing with chimpanzees or something, there’s no amount of cultural evolution that will result in the scientific method, that-", "Joseph Henrich - 01:50:30 Right. It needs some gene culture co-evolution.", "Dwarkesh - 01:50:32 Right, right. So you mentioned in the Secret the tool use started something like two million years ago, and fire, we started domesticating fire closer to 800,000 to a million years ago. Intuitively, it doesn’t seem like tool use is that that much simpler than fire, but is that a misunderstanding? Why was fire so much [more] recent?", "Joseph Henrich - 01:50:50 Well, so what we know from other species is that lots of animals use tools, and particularly chimpanzees use tools. So we can assume tools in the common ancestor. Now, what we see in the paleoanthropological record is the increasing use of stone tools.", "And these are pretty simple stone tools. You can see a cutting edge there, but not very much fancier than that. And then fire is… a lot of animals are afraid of fire, and they have to run from wildfires and stuff like that. So whatever your story about humans is, you have to overcome the fear of the fire in order to tame it. Probably humans first found wild fire and somebody approached the fire instead of running away, which is the usual thing to do. And then got some of it and then put it to use, I guess. So I think it’s the innate fear that animals have of fire, which is we don’t hang around when things go off.", "Dwarkesh - 01:51:41 Interesting. Okay, final question, what’s the next thing you’re working on?", "Joseph Henrich - 01:51:45 Well, the big thing is this book on collective brains, so I’m gradually working through that. One of the ideas that I really am excited about in this book is that we evolved to think in collectives. There’s this assumption that psychologists have had to understand human decision-making and how our minds solve problems is that we should put people by themselves in experiments and see how they do. But real human societies, hunter-gatherers, they actually work in groups. And when we want to solve a problem, the first thing we want to do is check with our friends or ask other expert members of the community.", "So it’s the idea that we think collectively and solve problems collectively in a kind of naturally distributed brain. And Hugo Mercier and Dan Sperber , for example, have argued and shown with various lines of evidence that many of the sort of irrational biases we have, mitigate or disappear entirely when we solve problems as groups. So it’s almost like we evolved to have that positive interaction and correct each other’s errors.", "Dwarkesh - 01:52:43 Right. That’s quite interesting that you care more about the portfolio of intelligences than any one person being calibrated.", "Joseph Henrich - 01:52:57 Right.", "Dwarkesh - 01:52:58 Excellent. This is super fun. Thanks so much for coming on the podcast.", "Joseph Henrich - 01:52:58 All right. Great to be with you." ]
[ "https://en.wikipedia.org/wiki/Joseph_Henrich", "https://en.wikipedia.org/wiki/The_WEIRDest_People_in_the_World", "https://press.princeton.edu/books/paperback/9780691178431/the-secret-of-our-success?srsltid=AfmBOorH4SljgFA7ddipDKXRMD2HEONMFLMl3DvpyCvx-Jw5QzCKgQyM", "https://en.wikipedia.org/wiki/David_Reich_(geneticist)", "https://en.wikipedia.org/wiki/Denisovan", "https://en.wikipedia.org/wiki/Neanderthal", "https://en.wikipedia.org/wiki/Bantu_expansion", "https://en.wikipedia.org/wiki/Hadza_people", "https://historyguild.org/what-is-the-austronesian-expansion/?srsltid=AfmBOoqk6oKvHhEJNPRAS5OvCwX5QXRt5eLlQlFxtH5MmW4VTG1dfdCQ", "https://en.wikipedia.org/wiki/Neolithic_Europe", "https://en.wikipedia.org/wiki/Jared_Diamond", "https://en.wikipedia.org/wiki/Ian_Morris_(historian)", "https://www.cosmobooks.co.uk/pages/books/417345/ian-morris/lucky-latitudes-the-wests-rise-to-global-hegemony-over-the-last-five-centuries-geographical-good", "https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.22750", "https://en.wikipedia.org/wiki/Dorset_culture", "https://en.wikipedia.org/wiki/Endogamy", "https://en.wikipedia.org/wiki/Cassava", "https://en.wikipedia.org/wiki/Goitre", "https://scholar.google.com/citations?user=oe2DsgkAAAAJ&hl=en", "https://scholar.google.com/citations?user=_1pIHHMAAAAJ&hl=en", "https://scholar.google.co.uk/citations?user=YucHqSsAAAAJ&hl=en", "https://www.cell.com/current-biology/fulltext/S0960-9822(21)00161-5", "https://www.sciencedirect.com/science/article/pii/S1090513814000816", "https://gwern.net/", "https://www.youtube.com/watch?v=a42key59cZQ", "https://en.wikipedia.org/wiki/Plio-Pleistocene", "https://www.project-syndicate.org/commentary/innovation-when-humans-learn-from-each-other-by-joseph-henrich-2016-03", "https://reich.hms.harvard.edu/publications", "https://en.wikipedia.org/wiki/Flynn_effect", "https://www.nber.org/system/files/working_papers/w30147/w30147.pdf", "https://en.wikipedia.org/wiki/Monte_Carlo_tree_search", "https://www.ebi.ac.uk/training/online/courses/introduction-to-phylogenetics/what-is-phylogenetics/%23:~:text=Phylogenetics%2520is%2520the%2520study%2520of,be%2520referred%2520to%2520as%2520taxa).", "https://en.wikipedia.org/wiki/Alberto_Alesina", "https://en.wikipedia.org/wiki/Nathan_Nunn", "https://en.wikipedia.org/wiki/Anke_Becker", "https://en.wikipedia.org/wiki/Pastoralism", "https://en.wikipedia.org/wiki/Pater_familias", "https://en.wikipedia.org/wiki/Solonian_constitution", "https://en.wiktionary.org/wiki/Chesterton's_fence", "https://en.wikipedia.org/wiki/Great_Divergence", "https://en.wikipedia.org/wiki/Cistercians", "https://en.wikipedia.org/wiki/Mughal_dynasty", "https://en.wikipedia.org/wiki/Charlemagne%23Reign_as_emperor", "https://en.wikipedia.org/wiki/Carolingian_Empire", "https://en.wikipedia.org/wiki/Silk_Road", "https://en.wikipedia.org/wiki/Napoleonic_Code", "https://en.wikipedia.org/wiki/Bob_Allen_(economic_historian)", "https://en.wikipedia.org/wiki/Gregory_Clark_(economist)", "https://en.wikipedia.org/wiki/Chris_Blattman", "https://en.wikipedia.org/wiki/Garett_Jones", "https://x.com/GarettJones/status/1756483590768980154", "https://en.wikipedia.org/wiki/Armin_Falk", "https://www.ankebeckerecon.com/", "https://benjamin-enke.com/", "https://www.nber.org/system/files/working_papers/w23943/w23943.pdf", "https://en.wikipedia.org/wiki/Aleutian_Islands", "https://en.wikipedia.org/wiki/Gazetteer", "https://en.wikipedia.org/wiki/Cosine_similarity", "https://en.wikipedia.org/wiki/Jonathan_Haidt", "http://yourmorals.org", "https://en.wikipedia.org/wiki/Yamnaya_culture", "https://en.wikipedia.org/wiki/Anatolian_peoples", "https://www.lesswrong.com/users/stuart_armstrong?from=post_header", "https://www.lesswrong.com/posts/WxDpeBi6aQAMH4BGB/stub-the-problem-with-chesterton-s-fence", "https://www.nih.gov/", "https://www.sciencedirect.com/topics/social-sciences/polycentricity", "https://en.wikipedia.org/wiki/Olympia_Academy", "https://en.wikipedia.org/wiki/Henri_Poincar%25C3%25A9", "https://en.wikipedia.org/wiki/Brownian_motion", "https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/homozygosity", "https://en.wikipedia.org/wiki/Polygenism", "https://en.wikipedia.org/wiki/Friedrich_Hayek", "https://en.wikipedia.org/wiki/The_Fatal_Conceit", "https://en.wikipedia.org/wiki/Adam_Smith", "https://en.wikipedia.org/wiki/The_Wealth_of_Nations", "https://en.wikipedia.org/wiki/The_Theory_of_Moral_Sentiments", "https://www.michael.muthukrishna.com/", "https://sites.google.com/site/hugomercier/", "https://en.wikipedia.org/wiki/Dan_Sperber" ]
https://www.dwarkesh.com/p/jung-chang
Jung Chang - Living through Cultural Revolution and the Crimes of Mao
[ "This transcript was autogenerated and then cleaned up by GPT-4. As such it may contain typos or hallucinations.", "(00:00:42) - Growing up during Cultural Revolution", "Dwarkesh Patel 00:00:42", "Today, I have the pleasure of interviewing Jung Chang. Her first book, 'Wild Swans', has sold over 15 million copies worldwide. The U.S. diplomat George Cannon described the Gulag Archipelago. He said, \"This is the greatest and most powerful single indictment of a political regime ever leveled in modern times.\" And when I read that quote, I realized that this is exactly how I describe your books, 'Wild Swans', obviously, but also your biography of Mao titled 'Mao: The Unknown Story', both of which we'll talk about today. It is a true honor to speak with you.", "Jung Chang 00:01:17", "Thank you very much for having me.", "Dwarkesh Patel 00:01:19", "So we will get to Mao and his atrocities in a second. But let us begin by: would you mind laying the scene for us? What was it like growing up in China, under Mao? Let's begin there. What was it like as you started to grow up during this time?", "Jung Chang 00:01:34", "I was born in China in Sichuan in 1952, so I grew up under Mao. When I was a child, I led quite a privileged life because both my parents were communist officials, and we lived in this compound with servants, cooks, drivers. It was a very class-ridden society. And I grew up so much taking class and privilege for granted, that when I first came to Britain, I thought Britain was wonderfully classless. And of course, my views were slightly modified over the years.", "And then, in 1966, when I was 14, Mao launched his Cultural Revolution, which was his Great Purge. And my father spoke up against Mao's policies. So as a result, he was arrested, tortured, driven insane. He was exiled to a camp and died tragically and prematurely. My mother was under tremendous pressure to denounce my father. She refused. As a result, she went through over a hundred of these ghastly denunciation meetings, which were everyday features in China at the time. And basically, the victims were put on the stage, and their arms were ferociously twisted to the back, and their heads were pushed down. They were kicked and beaten. And my mother was once made to kneel on broken glass. She was paraded in the streets, where children spat at her and threw stones at her. But she survived, and today she still lives in Chengdu, age 92. My family was scattered, and I was exiled to the edge of the Himalayas and worked as a peasant and then as a barefoot doctor, which was a doctor, basically, without any training because Mao had said, \"The more books you read, the more stupid you become.\" So schools were closed, books were burned. I mean, China was literally a cultural desert without books, cinemas, theaters, museums for ten years. And then I became an electrician, and again, there was no training, so I had five electric shocks in one month.", "And then in 1973, partly after Nixon's visit to China, but more so because of the internal political reasons, universities began to reopen, and I was able to get into Sichuan University to learn English. But our teachers had never seen foreigners themselves because China had been closed to the outside world after the Communists took power in 1949. So our textbooks were written by these teachers who'd never been abroad. I remember the first lesson was \"Long live Chairman Mao,\" and the second lesson was greetings. Because in those years, when we bumped into each other, we said \"Ni chi fan le ma?\" which means \"Where are you going? Have you eaten?\" So those were the English greetings I learned. So when I first came to London, I used to go around and ask people where they were going and whether they had eaten.", "The only foreigners I had spoken to were some sailors in a port in South China where we, as English language students, were sent to practice our English. And that was when I was 23. But of course, we were at the port eagerly awaiting our sailors, and we had no idea what must be on their minds, how different this must be from their expectation of port life. In 1976, Mao died and China began to change. And in 1978, there was a national exam to select people to go abroad for the first time under Communist rule. Going abroad was based on an academic basis, so I did very well in the exam. So I became one of the first 14 people to come to Britain. And as far as I know, I was the first person to get out of Sichuan Province, a province then, of 90 million people to come and study in the west.", "So when I got my doctorate in linguistics at the University of York in 1982, I became the first person from Communist China ever to get a doctorate from a British university. Okay, so I was in Britain, and for ten years, I didn't want to think about the past because it was too painful, and my father died, my grandmother, who brought us up, died, and I just wanted to spend time enjoying the west.", "I had actually always wanted to be a writer. When I was a child, I loved writing. But when I was growing up under Mao, it was impossible to dream of even becoming a writer, because nearly all writers were condemned, sent to the Gulag, driven to suicide. Some were even executed. Even writing for oneself was dangerous. I wrote my first poem when I was 16. On the 16th birthday in 1978, I was lying in bed polishing my poem when I heard the door banging and some Red Guards had come to raid our flat. And if they had seen my poem, I would get into trouble and my family would get into trouble. So I had to quickly rush to the bathroom to tear up my poem and flush it down the toilet. And so that ended my first venture in writing.", "But the desire to write never left me. So in the following years, when I was working as a peasant and as a barefoot doctor, as a steel worker and an electrician, and when I was spreading manure on the paddy fields and checking electricity supplies on top of the electricity poles, I was always writing in my head with an imaginary pen. But I couldn't write in China.", "When I came to Britain, for ten years, I didn't want to write. And then my mother came to stay with me in 1988, and for the first time, she told me the stories of her life and stories of my grandmother. And then while I was listening to my mother, I thought, I must write all this down. And then I realized how much I wanted to be a writer and how much I had always wanted to be a writer. And so after my mother left, I transcribed the tapes. She left for me 60 hours of tape recordings. And then I wrote 'Wild Swans', which was published in 1991 first. And I became a writer.", "Dwarkesh Patel 00:10:37", "Yes. Saying you became a writer is understating it. The global impact of 'Wild Swans' has been tremendous. A former guest of mine, Sarah Payne, recommended it to me, and I read it. And it's the most moving book I've ever read. It's truly tremendous. Let me begin by asking, what it was like growing up there in terms of the psychology of living in a totalitarian system? You mentioned in the book that until very late, you could not even bring yourself to question Mao, despite seeing the consequences of his policies and the cult of personality that was there. Tell me about the psychology of living in a system like that.", "Jung Chang 00:11:21", "Well, when I was growing up in China, we were all subject to intense brainwashing and indoctrination. When we were children, Mao was - we were - sorry, Mao was like our God. If we wanted to say what I say is true, we would say, \"I swear to Chairman Mao.\" So Mao had been given this godlike status. And also, at the same time, we could see how dangerous it was to question Mao. In China, there were these periodical political campaigns, and many people were victimized. And the biggest crime was to question Mao. And my father, in the Cultural Revolution, suffered tremendously, and it was also because he questioned Mao.", "So when I wrote my poem when I was 16 years old, I'd already started to doubt and to dread the society I was in. We were always told socialist China was paradise on Earth. And I thought, on that day, actually if this is paradise, what then is hell? Because my parents were away being detained, my grandmother was weeping next door because she'd heard these ghastly things that were being done to my mother. So I questioned the society, but Mao never entered my mind, and he was beyond questioning.", "This may be difficult for people to understand, maybe in the West, but in China in those days, there were two most important things that enabled this brainwashing. One is the complete isolation of the society from the outside world, from alternative information, and from any other information. Even parents never told the children things that were different from the Party line because they were worried about the future of their children. And they were worried that if children blabbed, it would be disastrous for the children as well as for the family. So no alternative information.", "And the other is terror, this intense terror, which really scared people into suppressing any unorthodox thoughts. So I was living in that kind of society, and it took me a long time to question Mao since my birthday, my 16th birthday thought in 1988. For many years, I blamed what was happening in China on Madame Mao and the so-called Gang of Four, which were basically assistants of Mao's. But I never dared to question Mao. And then I remember very well in 1976, I had learned a little English, and a friend showed me a copy of Newsweek, and there was an article about Mao. And there were two little pictures with the caption \"Madame Mao is Mao's eyes on Earth.\" And suddenly, Mao's name was spelled out for me. And I suddenly realized, of course, it was Mao. Without Mao, none of this could have happened. And Mao was responsible. And I'm an intelligent person, but it took me eight years, even from the moment I said to myself, \"I dislike this society,\" to the moment that I felt Mao was responsible.", "( 00:15:58) - Could officials have overthrown Mao?", "Dwarkesh Patel 00:15:58", "You mentioned that your father was purged because of his criticism of the government at the time. And in fact, your father's story throughout the book is a sort of tragic tale. But what I found interesting was that the way he criticized the party was to go through the official mechanism. He wrote a letter to Mao, which suggests that he even then still believed that the mechanism of the party worked, and then it would be imagining, like, somebody has a problem with the North Korean government today. And then he writes a letter to Kim Jong Un, which is, obviously, you're going to get in trouble for that. So tell me about how your father thought about that. And in retrospect, how should a high official like your father - he was the governor of Sichuan Province, which you said 90...", "Jung Chang 00:16:46", "Million people wasn't governor. Sorry. My father was the governor of a region initially, and then by the time of the Cultural Revolution in 1966, he was the head of a department, oh, I see, of the Sichuan Party government, whatever they were the same.", "Dwarkesh Patel 00:17:08", "A high official.", "Jung Chang 00:17:09", "A high official.", "Dwarkesh Patel 00:17:10", "What should he have done when he realized things were going wrong?", "Jung Chang 00:17:14", "There was nothing one could do. I mean, if you tried to, say, spell out your thoughts to other people, you would be instantly denounced and probably executed. I mean, nobody was allowed to say anything against Mao. And theoretically, in the charter of the Communist Party, a party member had the right to write to the leadership. So my father was using that theoretically permitted way to voice his dissent. That's why he wrote to Mao. And in any case, all these things, the atrocities, the violence - I mean, only Mao could stop them. So writing to Mao was the only way he could express his opinion. And, of course, he also said something in the context of the denunciation meetings, but there were outbursts at denunciation meetings rather than his well-thought-out expression of dissent.", "Dwarkesh Patel 00:18:37", "So this is something I thought was confusing when reading accounts about the Cultural Revolution. China is a society - they've rebelled in the past. They rebelled against the emperors, they rebelled against the Japanese occupation. The Nationalists were at one point in charge of lots of parts of China; the communists rebelled against them. How was Mao able to instill a regime where that became unthinkable, despite the fact that it was an incredibly chaotic and destabilizing time? How did the Chinese, who have a great sense of history, allow this to happen?", "Jung Chang 00:19:13", "Well, that's a very good question. That is the key of a Communist society, of a totalitarian society, is the control, the organization. I mean, neither the emperors nor any other rulers under the Nationalists, under Chiang Kai-shek, was China so thoroughly organized down to the grassroots, controlled by layers of party organizations? It was totally thorough. That's why the 20th-century totalitarianism was very different from the previous authoritarianism. I mean, the key was the control, this total control of a society. I mean, the power highly concentrated at the very top.", "Dwarkesh Patel 00:20:14", "It's interesting because Mao is obviously a person who doesn't understand economics, and we'll talk about that more moving forward and the disastrous consequences it had because of his complete ignorance when it came to economics and industry and things like that. But what he did seem to have an incredible sense for, like Stalin and other totalitarian leaders as well, is the psychology of people and how to organize a society that has 800 million people, how to organize it so that - every society has petty, sadistic, arrogant, and cowardly people and how to elevate them to use them to your advantage. So that there is no nook and cranny in the entire society where a single person can have a dissenting voice or even have an independent life. Maybe you can talk about commune life and the way in which how can you possibly have a society of 800 million people where each person is under such strict, totalitarian control? How is that even possible?", "Jung Chang 00:21:16", "The thing is that in the Cultural Revolution, Mao used young people and their bad traits. They're prone to violence, destructive, sadistic. Any society has these people, but they were given license to indulge their bad instincts in the Cultural Revolution. Now, this took place for a couple of years. Then, Mao reigned them in by using the army. The Red Guards, particularly the most militant, most aggressive, and most violent ones, were dispersed and sent to the villages, sent to the mountains. The disobedient ones condemned themselves and became the targets of a second round of purges. Mao made sure that he controlled the army, so he needed someone to make sure he could wreak havoc and maintain control. Until 1970, Lin Biao was completely cynical. He came to Mao's rescue when there was dissent from Mao's other colleagues, like during the famine and when Mao started the Cultural Revolution. Until the day he fell out with Mao, which was why Mao was suddenly a bit lost because he lost the means to control the army. That's why in 1972, after Lin Biao died trying to flee China, things suddenly became better. Mao had to rely on another person to control the army, Deng Xiaoping. Suddenly universities began to reopen. Things were much better from 1972.", "Dwarkesh Patel 00:23:57", "Yeah. This is a great instance of, as soon as Mao dies, the Gang of Four is rounded up and arrested, and the Cultural Revolution stops. This shows that this was Mao's doing. This is also an interesting example where, whether it's Stalin in Russia or Mao in China, when the tyrant dies, the system automatically improves because nobody else is as crazy as that guy. What does this show us about if Kim Jong Un died? Should we expect a reversion to a more sane set of things? Tyrants die, and things are not as bad as they used to be.", "Jung Chang 00:24:40", "I haven't studied North Korea, so I don't know the inner workings of that society. So much depends on one person. The Kim Dynasty arranged succession. The first Kim Il-sung died, then his son succeeded, then the grandson. It seems the grandson is looking into succession by grooming perhaps his daughter. Stalin couldn't do the family dynasty thing. His children were not like the Kim children. Mao only cared about enjoying life while alive, indulging in power. He didn't care what came after. He was completely materialistic. When he visited Lenin's tomb, he said, \"It's all very well visiting Lenin's tomb, but Lenin can't feel anything. He's dead, so it doesn't matter.\" Mao didn't leave a well-structured legacy.", "Dwarkesh Patel 00:26:43", "I thought the entire purpose of the Cultural Revolution was to protect his legacy, especially after the Great Leap Forward and the way Khrushchev denounced Stalin.", "Jung Chang 00:26:59", "That may be a factor, but the main one was Mao's policy that led to the Great Famine with nearly 40 million deaths. It was so unpopular. His number two, Liu Shaoqi, then spoke up against him. There was no way, even for Mao's number two, to topple him. Under this tyrant, his colleagues couldn't get organized, which was necessary to topple him. They couldn't get control of the army. Liu Shaoqi used a party congress to speak against Mao's policy of exporting food in exchange for arms to build a superpower and dominate the world. The vast majority were against Mao's policies. So, they found Liu Shaoqi as their champion. Together they managed to stop Mao's policy, which is how the famine stopped from 1962. Mao was furious. This is why he launched the Cultural Revolution in 1966, to punish Liu Shaoqi and the party officials. That's how this great purge took place. For Mao, it was less calculated and more about revenge. He wanted revenge on his number two, who died in a most appalling way.", "Dwarkesh Patel 00:29:44", "If this question sounds naive, but if you have somebody like Liu Shaoqi and Deng Xiaoping and these other party officials who see what is happening, if they knew they were going to be purged but still had power, why didn't they control the army or go to the People's Daily, publish an article about what Mao is like, or organize a coup? Why didn't that happen?", "Jung Chang 00:30:26", "It did happen. In the biography, there are a couple of chapters about Liu Shaoqi. I'm going to expand it in my next book. In 1962, he knew Mao was going to purge him after the Congress. So he started building his power base by stopping the famine, making himself popular among the party officials. Liu Shaoqi had been powerful enough to put up resistance. There was horse trading with Lin Biao and support for Mao. Mao used the Red Guards to create violence and terror from 1966 to silence Liu Shaoqi. His first victims were school teachers. I saw teachers being beaten up, denounced, driven to suicide. Mao didn't dislike teachers. He said to Edgar Snow that he'd like to be known as a great teacher. He was a school teacher in his youth. He used them because teachers were an obvious target to arouse the violence in young Red Guards. He spent years before the atmosphere was ripe for purging Liu Shaoqi.", "( 00:34:09) - Great Leap Forward", "Dwarkesh Patel 00:34:09", "Maybe for context, let's talk about the Great Leap Forward. You mentioned around 40 million deaths. It's easy to see this as just a number. Can you make it concrete, describing the months-long agony of starvation and peasant life during the Great Leap Forward?", "Jung Chang 00:34:49", "During the Great Leap Forward, my family was privileged, so I didn't starve, but there was starvation around us. For example, a boy snatched bread from my mouth on my way to school. My father told me I was lucky; other children were starving. Our domestic help's family, classified as a landlord, suffered. Her entire family died, except for her. I remember her stories and the sight of her mother, skeletal and weak, expressing gratitude for her daughter being saved. My grandmother and the domestic help both cried. It conflicted with my indoctrination about the Communists. My father, feeling guilty, volunteered to stay in a village. He witnessed agonizing deaths, including a man collapsing in a paddy field. My father returned from the village seriously famished. Even in my privileged family, we all drank water infused with a seed grown in urine, supposed to provide nutrition. The adults starved to feed the children. Now, memoirs of China's super rich often mention their hunger during childhood in the villages. That hunger partly drove the desire for change.", "Well, first of all, China, as you said, is still a Communist regime. For many years, that was underplayed, partly because the memory is fresh. In the 1980s and 90s, particularly, the memories were fresh. I think that was probably the main reason. And gradually, after that, the memories of pain were gradually fading. And particularly a generation, two generations, have grown up without suffering.", "Given that there is no religion for people to worship, unlike any time in Chinese history, and before Mao, there was Confucianism. You could have something to hang on to. And then there was, I mean, which didn't obviously openly endorse violence and atrocities. It could sound quite attractive, which was why Mao's Little Red Book was invoked for a period in the West. So people hang on to that.", "In the post-Mao time, money was the god. A lot of people made money, but a lot of people also lost money. Not only lost money but were disadvantaged in this \"money is god\" society. In a society where there were no proper regulations and law, people who were not very savvy with money lost out. They were conned and so on. So there were some people who probably yearned for a simpler life where you were given what you were given. They wouldn't like to be starved, and they wouldn't like to be a political victim, but a lot of people could live a very simple life, more of just being fed. I mean, there may be a certain nostalgia, but the most important thing, of course, is the promotion of the regime, particularly since Xi came to power. You were taught from school all these lies about Mao. And so people grew up with this, regarding Mao as godlike. Back to my childhood.", "(00:48:12) - Modern support of Mao", "Dwarkesh Patel 00:48:12", "Yeah. What has been the impact of your books? You know, Wild Swan sold 50 million copies. Your biography, Mao, is also a bestseller. I know they're banned in China, but have they secretly been able to access how, has that revised their understanding of their own history?", "Jung Chang 00:48:29", "Well, when these books were first published in the 1990s and the early 2000s, there were lots and lots of ways to get them into China. Hong Kong, for example, and Taiwan had pirated editions, of which there were many. But now, particularly since Mr. Xi came to power in 2012, 2013, China has a total clampdown on banned literature, and you could go to jail. For an official to possess these books, including mine, you could face ghastly punishment, which you don't want. And for the general population as well. When you enter China now, you see on the screen warnings of not to bring banned literature and particularly not books that say unflattering things about the previous revolutionary leaders or revolutionary martyrs. So a total clampdown forbidding people from doing research on history, trying to understand history, which created another generation of brainwashed people.", "There was also one very important thing: the Chinese are very pragmatic, and they don't want trouble. They're very different from a lot of other peoples. Parents who had bad experiences under Mao tend not to tell their children. So there are a lot of children who were just genuinely not getting any alternative information from the official line.", "Dwarkesh Patel 00:50:49", "Yeah, I do want to get back to the actual Great Leap and Cultural Revolution in a second. But on this theme, Xi Jinping's own daughters studied at Harvard. Chinese elites, their kids are studying in America. When they take power in a generation or two, will they still be devoted Marxist-Leninists? I can't imagine them having come back from Harvard and then still...", "Jung Chang 00:51:14", "Believing? Well, I mean, in the West, in American universities, there are a lot of people, if they're interested in the subject, sure. They come to the West to have their views confirmed. For example, there were Maoists who, when our Mao biography was published, published a book, a collection of their criticisms against our book. The title was \"Was Mao Really a Monster?\" The preface was written by a senior lecturer in LSE, the London School of Economics. The language was Maoist language, praising him as a great revolutionary, a great Marxist-Leninist. So the West would certainly not put off a potential Mao successor.", "Dwarkesh Patel 00:52:24", "Yeah, I'm glad you brought that up, actually, because I read that book, actually, in preparation for this interview, to see if there were criticisms that I should be aware of. Honestly, while there were certain quibbles about the part before Mao got into power about the Long March and such, which I don't know enough about to comment, when they start talking about the important things, the Great Leap Forward, the Cultural Revolution, they are not at all contesting the facts. It's the most sort of excusatory language. It is the same sort of stuff, by the way, that is said about Cuba and Stalinist Russia, like \"the literacy went up,\" and what about North Korea? North Korea today has high literacy. Are you going to say that North Korea is okay?", "Actually, can you talk about this? What do you think explains some parts of the left who want to find excuses for these regimes? Whether it's Venezuela or whether it's Edward Snow writing his book about Mao, there's a need to excuse these communist regimes and socialist regimes. What explains this?", "Jung Chang 00:53:35", "From what I know, I think there were people who had illusions about these regimes, and a lot of academics who were kind of controlling the faculties to do with Mao and universities probably had got their sort of illusions because they had access to Edgar Snow's book. They were radicals in the 1960s. They want to hang on to their own now. Sorry, let me just... not get into the subject. Sorry, I'm faltering on this because I don't know. I don't know why, but I don't know why, I don't know why they are like that. They don't know the facts. They don't care to know the facts. And also, I think probably some people think China has always been awful under the emperors and so on, and so somehow the Orientals must feel differently today.", "I know when Deng Xiaoping visited America in 1979 and established diplomatic relations with America with Carter, he was at the banquet with some deluded film star or something, and then people were saying to him that when they visited China, they'd seen professors who'd been subject to forced labor. But they were told that they enjoyed it because all these hardships and being in the labor camp had turned them into the new men. And Deng Xiaoping just said they were... I mean, they were mean to the Westerners who didn't know the truth, just took their word for it. They didn't know people couldn't tell them the real truth.", "Dwarkesh Patel 00:56:03", "Even in the case of Stalin in Russia, there was a famous New York Times reporter who was doing the Russian coverage for The New York Times, and reports would come in about the Ukrainian famine or these other atrocities, and he would write, knowing that no, this is not happening. I think there's a famous headline that says \"Russians are hungry but not starving.\"", "So actually, let's talk about Deng Xiaoping. And I want to ask about, so, during the Cultural Revolution, he is exiled and purged. And his son because he is known as, I don't know what exactly, the black...", "Jung Chang 00:56:41", "But basically the blacks. That's one of the racist side of Chinese society. Black is bad. So the son was one of the five blacks.", "Dwarkesh Patel 00:56:55", "His son is chased out of a window by Red Guards. The doctors refuse to operate on him because he's Deng Xiaoping's son. So he's paralyzed for life and is forced to do manual labor. This guy who was basically kind of running China under Mao, he's doing manual labor out in the countryside. When he comes back into power after the Cultural Revolution, from the outside, I don't understand how he doesn't immediately denounce Mao. Talk about the horrible things he did. How did he allow—there's a quote from him—he says, \"we must be careful not to overemphasize the crimes of Mao or something.\" For somebody who is so personally harmed by Mao, how is he not immediately condemning him?", "Jung Chang 00:57:37", "This is something I don't understand. And I also think he made a big mistake. If he had dissociated from Mao like Khrushchev had with Stalin, I mean, it would not have just been the right thing to do, but it would have been the popular thing to do because there was a great groundswell of sentiment for denouncing Mao or at least dissociating from him. Not just from the population, from the victims, which virtually everybody was in China, but from the leading elite, from most of his closest colleagues. I mean, for the few elders who were in favor of Mao, he could easily have dealt with them like Khrushchev had dealt with the Stalinists, but he chose not to. What got into his mind? I haven't studied him very carefully, but I think he was probably thinking that if you reject Mao, it's inevitable that Communism will collapse in China. Unlike Khrushchev's time in 1956, he could denounce Stalin without endangering Communist rule in Russia, but Deng at his time could not. Well, at least probably he thought he could not have denounced Mao without endangering the rule of the party because we're talking about the 1980s, late '70s, '80s now, near Gorbachev's time. So I think that's probably his devotion to the party.", "Dwarkesh Patel 00:59:39", "Yeah, but I think he might have been right. And obviously, the point is that he would have been right to say that. Well, this is actually inherent in the communist regime in Russia in the '80s when they have glasnost and perestroika, and they talk openly about the gulag system. That is one of the main contributing factors. And people say, well, how can a regime that allowed this to happen be allowed to exist anymore? How can this be our governing regime? And that does lead to the—yes, exactly.", "Jung Chang 01:00:11", "Which is why, by the way, Mr. Xi's argument, because he was against Gorbachev, against perestroika and glasnost. And exactly that, the communist regime would—know. In today's terms, that is the wealth, the money that is associated with power.", "Dwarkesh Patel 01:00:39", "In the book, you point out that Mao is acting in his self-interest and selfishly doing all these things. But it seems to me that a strong, if not motivation, at least enabling factor and organizing factor is definitely provided by the ideologies of communism and socialism, which sort of organizes society. Otherwise, it doesn't make sense to collectivize farms and to close down shops. And it also necessitates the purges. Because communism is a science, it has to work. And if it doesn't work, there must be internal capitalist saboteurs who must be condemned, brought out and killed. Isn't it the communism and socialism at the heart of the issue here?", "Jung Chang 01:01:26", "I think some people would undoubtedly think that way. But having researched Mao for twelve years, my conclusion about him was he was highly pragmatic, and the communist ideology suited him. He joined the Communist Party not because he was a passionate believer, but because it gave him a livelihood. I mean, he was asked to open a left-wing bookshop selling Communist and left-wing literature, and that started his life. But the few things about the ideology that you just mentioned - for example, the collectivization, it's highly conducive to what Mao wanted, which was food. If they had been private farmers, they would've farmed their food first and paid taxes, so to speak. But it's far more difficult to control hundreds of millions of peasants than to organize them into units, into communes.", "Mao said that the great advantage of collectivization was easier control. This was not just ideology, but reality for what he wanted to get.", "(01:03:24) - Life as peasant", "Dwarkesh Patel 01:03:24", "Yeah, but then at least we can say one of the problems with the ideology is that it attracts and is highly conducive to opportunists like Mao and Stalin and the Kim family. But so let's go back to the Great Leap Forward and these communes. These communes were really what it is like. To be a peasant is like chattel slavery. Can you talk about the working conditions, how hard they worked without being given food, the punishment?", "Tell us more about the peasant life.", "Jung Chang 01:03:55", "Well, I was working in the commune for several years during the Cultural Revolution in two places, and our lives consisted mainly of work. There were fixed hours you were allocated food and fuel and other things depending on how many hours you worked. So your life was centered around work. The commune controlled everything. If you wanted to travel somewhere, you needed the commune to give you a note, a kind of passport, to allow you to travel. If you wanted to get married, you had to get permission from the commune.", "During the famine, this is how Mao ensured that the peasants didn't rise up in arms because of the control from the commune. Every now and then, the regime would issue stiff orders to stop peasants from fleeing their villages. If they did manage to flee into the cities to beg, for example, the communes were told to bring them back. The commune is the organization, in some ideology, but in reality, it's how the party controls China's 500 million peasants. There were only a few tens of thousands of communes, imagine that highly concentrated organization. So people were no longer individual farmers like they were before communist rule.", "Well, basically, Mao's ambition after he took power in China was to build a superpower to dominate the world. He needed to buy these machines, military-industrial complexes, mainly from Russia and from Eastern Europe. But he didn't have the money to pay. China wasn't rich as today, so he exported food, and he needed a lot of food. Whereas in China, traditionally, we never produced enough food to feed the population. The emperors banned food export and brought a lot of food into China. So traditionally, China was a food importer for a few hundred years, and Mao stopped that.", "So to start with, there was always a food problem throughout his and now the Great Leap is basically to import vast quantities of technology and equipment, mainly from Russia. That's why it's called the Great Leap. He wanted to build an industrialized system in a few years to be fast. That's what he said. I mean, that's why his demand for food was vastly elevated, Mao's demand for food. And this food had to come from the peasants. So he basically seized this food to export to Russia and Eastern Europe, knowing his people would die of starvation.", "There was a time Mao kept saying, seemingly philosophically, \"death is a good thing. If we don't have death, the Earth...\" These seemingly philosophical things were taken at face value by some academics. But what he really said these things to his officials in order to harden their heart when they went to seize the food from the peasants, seeing how pitiful their conditions were. And that's the origin of the famine. It's as simple as that. It's food export.", "I mean, Liu Shaoqi, his number two and his main target in the Cultural Revolution. It was thanks to a visit back to his old village that made up his mind to stop Mao's policies, because he went back to his village. His brother-in-law had died of starvation. His sister was on the edge of dying of starvation. He saw the villages, saw the just heart-rending things, and he opened the lid of a work saucepan and saw there was nothing, just water, a few drops of grain, and he did a very unusual thing and he bowed to the peasants and said, \"I'm very sorry.\" It was after this, in 1961, he made up his mind to stop Mao's policies which led to the Cultural Revolution and his tragic death.", "Dwarkesh Patel 01:10:41", "And then you also talk in the book about how these peasants, not only was all this grain being exported which caused them to starve, but they weren't even allowed to harvest their grain because they had to talk about turning their own woks and their own stuff into iron and spending time doing that instead of farming.", "Jung Chang 01:11:00", "So Mao was partly defeated by his own ignorance about the economy, because when you want to build a modern super industry, you needed the steel, and steel was the most important thing. And China's steel-producing capacities in the 1950s were very low. So he had this idea of making the whole population make steel. I mean, it really is quite ridiculous because I was in primary school. I was six years old, and my main occupation was somehow my contribution to steel, which is every day we walked on the street trying to find the little nails, the cogs, something steel, and to hand in to our teachers.", "Because there was a backyard furnace in our school, all the teachers had to feed things into the furnace. The furnace also had to be kept going 24 hours a day. It couldn't go off. I mean, to feed that furnace consumed everything in my village. We struggled every day to find a little fuel. Fast forwarding to 1960s, and because the mountains, which used to be covered with great trees, have been laid bare for the fuel to feed the backyard furnaces. And the teachers were exhausted in my school. And so we were organized to babysit for them. When I was a child. It was hugely wasteful, hugely wasteful. Because for all this effort, this was 1958, actually, most of what the backyard furnaces produced were completely useless. So Mao died, thinking of himself as a failure because China was still poverty-stricken at the time of his death. And he felt himself a failure, but he was partly sabotaged by his own ignorance about the economy.", "The other thing about Mao's ignorance was because food was so important, and because sparrows eat food. So he ordered the whole population to Kill the sparrows. So as a child, I sat with other people in our courtyard. We beat gongs to make a tremendous noise. So the sparrows would drop to the ground, and so all these people would go and catch the sparrows. And it was just a catastrophe because it not only killed the sparrows but many other birds. Other birds as well. And of course, the worms, the pests, insects, they flourished without their natural enemy. So it's an unbelievable situation that consumed China for more than two decades.", "Dwarkesh Patel 01:14:54", "Just the complete lack of sense here. It would honestly be a joke if obviously you didn't know it led to 40 million deaths.", "Jung Chang 01:15:02", "But the thing is, in the West, I indirectly know somebody who was a steel magnate, a great steel producer, and he thought Mao's backyard furnaces were a brilliant idea. I mean, talking about people in the West, not just the academics, but many other people, completely irrational. And I think maybe in their eagerness to find an alternative to Western capitalist democracy.", "Dwarkesh Patel 01:15:44", "So say more about that. I know we did touch on that earlier when we were discussing why are people still defending Mao, but what motivated at the time and even still now, people were so disillusioned with Western capitalism that they thought they would rather have Mao.", "Jung Chang 01:16:00", "I think there were a lot of people like that with a young generation. I mean, they grew up after the collapse of the Soviet Empire. A lot of facts have come out. So people are no longer probably so starry-eyed and so wishful thinking about the communist regimes. But there was a time in those years when communism seemed to be going strong in so many countries. There were a lot of wishful thinking. Westerners, as I said, they may be pursuing for alternative to a society they have a lot of discontent with. So they so wanted for such miracle to happen, and they believed in what in there otherwise might have rejected as fantasy.", "Dwarkesh Patel 01:17:02", "Yeah. Speaking of which, tomorrow I'm interviewing Neil Ferguson, who has written volume one of his biography of Kissinger, and volume two, which will cover this period, will come out later. He's writing it right now. What should Kissinger have done differently? Should they not have tried to open up China under Mao? What should have been the policy of the United States at the time?", "Jung Chang 01:17:25", "China was not opened up by Kissinger and Nixon. I lived in China then. I knew after Nixon and Kissinger's visit 1971-72, China was not opened up. I mean, all the liberalization seemed, you know, relaxation after Nixon's visit was mainly because of the collapse of Lin Biao and Mao lost his arm with which he controlled the army. So I think to say Nixon and Kissinger opened up China is wrong. That's not the case. Kissinger, I think, is a very, very smart person. I think he probably was too fascinated with power. I mean, Mao had the kind of power he could turn the lives upside down of a quarter of the world's population. I think he was very fascinated with Mao. I mean, he said nice things about Mao. And even after Mao died, with the regime was reviving Mao. There were a few people reviving Mao, like the current Mr. Xi and his political rival, Mr. Bo, with whom Kissinger seemed to be very close. He attended these rallies to eulogize Mao, big rallies, and lending his status he had to the Chinese regime's effort to stick with Mao's legacy. I think that's unforgivable. That's one thing. And the other thing is China's opening up to the West. And that happened only after Mao died in 1976 and Deng Xiaoping came to power. I knew this very well because I was one of the first Chinese to be able to leave China in 1978. That's the very beginning of the opening up. I think it's a good thing. I mean, China has ditched Mao's economic lunacy and the ideology that has wrecked China. And the Chinese people are leading a much better life today. And all this could not have happened if the country had not opened up. And also through all this contact with the West, any attempt to go back to the Maoist time would be futile because people knew what the West was like. The people really don't want to be isolated again to lead a life of Mao's time, no matter how they may say they worship the Mao no matter how they make pilgrimages to Mao's birthplace and so on. But deep down, I think nobody wants to go back. So I think that's a very good thing, this opening up. But of course, then it country may grow into a menace to the world. I mean, that's another matter. It's a challenge that the world needs to face now, but it's certainly not to make it's not certainly not true to say China shouldn't have the West shouldn't have allowed China to open up.", "Dwarkesh Patel 01:21:24", "I mean, you can't just dismiss a billion people coming out of poverty. It's the best thing that's ever happened in history.", "Jung Chang 01:21:30", "Exactly.", "(01:21:30) - Psychology of communist society", "Dwarkesh Patel 01:21:30", "So let's go back to the Cultural Revolution. One thing that I find really interesting about communism, especially in China, is the need for the victims to then incriminate themselves to confess. Even Hitler wouldn't have the Jews in Auschwitz talk about renouncing their semitic ways. And \"I've been an enemy to Germany in World War I.\" Explain why it was important that the victims of these purges had to then talk, you know, \"I'm guilty, I'm complicit.\" Why couldn't they just be ostracized?", "Jung Chang 01:22:10", "I think Mao knows people's psychology very well, and I think he uses this as a weapon to break people. I mean, to humiliate them and to break them. So even his opponents then started to grow doubt about their own opposition. So I think that's the main thing. I tell you, it's not very nice. I mean, in China, when I lived in China, I wasn't denounced, but we all had to attend criticism and self-criticism meetings. I mean, it really stirs up some very basic discomfort and unsettling, upset feelings if you have to criticize yourself, I mean, not do it cynically because you have to win our days, you couldn't do it cynically because nobody has reason to understand the whole thing in order to be cynical. So you were starting with being quite sincere. It certainly breaks people. And also it makes people turn people against each other because when people are criticizing each other, you create a lot of animosities among the people, which is one reason why no opposition can get organized. I mean, people don't dare to talk to each other in case they were denounced. It's a psychological warfare against his own population, which is quite effective.", "Dwarkesh Patel 01:24:07", "So meaning that it wasn't just a campaign against political opposition, it was literally every part of your life. I think even in the book you talk about embracing your family is anti-Maoist because it shows you're closer to your family than you are to Mao.", "Jung Chang 01:24:25", "Exactly. It's this warm feeling I was constantly criticized of because of my feelings for my family. And Deng Xiaoping, when he wrote to Mao about his son, the son you talked about who was crippled, he wrote to Mao to ask Mao to allow his son to join him so he could look after his son. He and his wife, who was so heartbroken seeing his son, she wanted to kill herself anyway, Deng had to preface his appeal with \"I'm afraid I'm committing warm feelingism, but could you allow my son to join me to be looked after?\"", "Dwarkesh Patel 01:25:18", "Yeah.", "Jung Chang 01:25:19", "It's a device that really separates society, making people against each other and being on guard against each other. Earlier you were talking about why can't people get together? Because this other person at the next criticism or self-criticism meetings could well say, \"So-and-so had said this to me, and I hadn't reported to the party, therefore I'm guilty.\"", "Dwarkesh Patel 01:25:49", "Talk about the way in which it forced good people to be immoral. You have these quota systems where if you're in charge of a department or something, the Mao says 5% are rightists, 5% are capitalist roaders. And so if you don't give 5% of names who are capitalists, who are going to get denounced, then you must be a rightist yourself. Talk about that aspect of the system.", "Jung Chang 01:26:13", "Well, the result is you are in a tremendous dilemma of either sacrificing you, your family, and other people, which is another way of breaking these people. These are all his weapons, psychological weapons to force people to do what he ordered them to do.", "Dwarkesh Patel 01:26:43", "Why was there such a big reservoir of support for communist ideas and also the personality cult that formed around Mao? In China, people give different explanations of Emperor worship beforehand led to this, or peasant rice farming. What explains why China got taken over by this ideology?", "Jung Chang 01:27:11", "It's again, not an ideology.", "Dwarkesh Patel 01:27:13", "Right.", "Jung Chang 01:27:13", "And Mao himself said in 1923 he didn't believe that the Chinese would go for Communism. I mean, he thought Communism could only be brought to China by the Russian Red Army. And he was right. In earlier years, the know the Moscow's representatives to China, and to other countries, said it was China was a lost course. People didn't were the last people to go for communism. I mean, much easier in India, for example. Somal was wise because after the Second World War, the Russian Army, Red Army invaded China and occupied the north and northeast of China, a large hunk of land that was more than the entire Eastern Europe. So with this land, Stalin then supported Mao to fight a war against mean Mao. Of course, Mao was the main man who ensured his success because during the war against Japan, all his colleagues wanted to fight Japan. And Mao was the only person who was against it and tried everything he could to take advantage of the war which destroyed Chiang Kai-shek's government. Whereas Mao grew, the Red Army grew during the war. So Mao was very smart. And this is one reason why Deng Xiaoping and a lot of other communist leaders were so totally devoted to Mao. Because they realized if it were not for Mao, they would never have come to power.", "Dwarkesh Patel 01:29:17", "Right. By the way, what do you make of the analogies people make when they say what happened in the US and other countries a couple of years ago with the BLM movement?", "Jung Chang 01:29:27", "Of course, it's not at all mean. I think maybe people just saw statues being toppled, I don't know, a few things, superficial things. The Cultural Revolution was nothing like Know. You couldn't even comprehend the horror of the Cultural Revolution. In the society, the fear, the know. China is really totally destroyed. I mean, there was no antiquity in the private hands wiped out, taken by the regime. It's nothing like that. For ten years there were no books, no cinemas, no theaters. Cinemas and theaters were turned into prisons and torture chambers. And my mother was imprisoned in one. And I knew how to get a hold of one book. It was how difficult and how impossible that was. And that was the ten years of the Cultural Revolution. It's nothing like what happened in the west.", "Dwarkesh Patel 01:30:48", "Hey, everybody. I hope you enjoyed that episode. As always, the most helpful thing you can do is just share the podcast, send it to people you think might enjoy it, put it in Twitter, your group chats, etc. Just blitz the world. Appreciate you listening. I'll see you next time. Cheers." ]
[]
https://www.dwarkesh.com/p/ken-rogoff
“China’s digging out of a crisis. And America’s luck is wearing thin.” — Ken Rogoff
[ "00:00:00 – China is stagnating", "Dwarkesh Patel 0:00:00 Today I’m speaking with Ken Rogoff , who is a professor at Harvard, recent author of Our Dollar, Your Problem , and former Chief Economist at the IMF .", "Ken, thanks so much for coming on the podcast.", "Kenneth Rogoff 00:00:12 Thanks so much for having me and welcome to Harvard where we’re filming this.", "Dwarkesh Patel 00:00:16 In your book you have a lot of anecdotes of meeting different Chinese leaders, especially when you were Chief Economist at the IMF. It seems like you had positive experiences. They would listen. You met the Premier with your family, and he would listen to your advice.", "One, how does that inform your view about how competent their leadership is? Two, how do you think they got into this mess, with their big stimulus or whatever else you think went wrong?", "To the extent that when you were talking to them in the early 2000s, it seemed like you were seeing eye to eye, or that they would understand your perspective. Do you think something changed in the meantime?", "Kenneth Rogoff 00:00:49 First I want to be careful to say that they listen to everybody. The Chinese are way better than we are at hearing a hundred different views. Mine would be one of many that they heard.", "I was very impressed by the competence of the Chinese leaders. I actually gave a lecture in the Party ’s training school where, if you’re a mayor, a provincial governor, any bureaucrat on your way up, you go to this thing which for them is like Harvard Business School.", "They really looked for competence. Of course there were various loyalty things. But when you met the leaders—and I met a lot of them when I was at the school—they actually asked really raw questions too. They said things I couldn’t believe they were asking. And I was told that within the school, you're allowed to say anything.", "So they had that system for a long time. When you met Chinese technocrats—or even the mayor of Shanghai—they were impressive. I'm not saying ours aren't, but it's a mix. I think you know that. I think Xi Jinping has really changed that. He’s been the president since 2013, and over time he’s pushed out that system and moved more toward loyalists, people who are less technocratic.", "Probably the most important talk I ever gave in China was at what's called the China Development Forum in 2016. It's this giant hall that had most of the top leaders in the party. A lot of the elite of the tech world, Mark Zuckerberg and many others were there. I said, “Okay, I'm looking at your housing. I'm looking at your infrastructure. It looks to me like you're going into a classical housing crisis problem. Your catch-up is over. Your demographics don’t look good.”", "I gave a list of things. “And by the way, it looks like power is becoming very centralized in the economy.” And I said, “I'm a Western economist. You're doing an amazing job. What do I know? But I don’t think that would be good for growth.” After I gave the talk—I just figured you only live once, you just have to say what you have to say—a couple of top leaders came up to me and said, “Professor Rogoff, we very much appreciated your remarks.” I was thinking, “Oh no, they’re going to put me in jail or something at the end of this.”", "I’m less impressed by them now. And I’m worried. Let’s say they get into a crisis—which I think they’re in now. I think they're still in a deep crisis—or somehow hotter heads prevail between the United States and China and we get into some kind of entanglement nobody wants. I worry that we’re not as competent. I’m speaking about right now. We have some very good people, but the average quality at the very top, I think, has gone down. And China’s not as competent either. That’s a recipe for having bad things happen.", "Dwarkesh Patel 00:04:16 You mentioned in the book that you had to clear your talk before. So you gave them a sort of watered-down version of what you were going to say. I have to say, it would take some gusto to go up to the top party leaders. Were you nervous while you were giving the talk, saying, “Oh, it’s too centralized”?", "Kenneth Rogoff 00:04:33 I mean I was pretty experienced by that time. I frankly never used notes, so the idea that I was going to read my speech didn’t even occur to me. Maybe it was a little bit spontaneous. But I certainly felt at that moment, what am I here for? What’s the point of coming to this? Why don’t I talk about the elephant in the room? Everybody knows this. I don’t know if everybody knew it, but it was clear to me and I’m going to say it.", "Dwarkesh Patel 00:05:03 I think a lot of people in that situation—even though they should or the logic makes sense—they often don’t.", "Kenneth Rogoff 00:05:08 Yeah, I’m a professor. So people who had a big tech company or a finance company, or all these other businesses, most of them can’t afford to do that. I think they know that when they invite a professor, they can’t cut your funding or something. They can stop you from going again. They invited me again, by the way. Although the second time I just talked to a tiny room instead of the big hall. But, I give credit to the Chinese for listening.", "Dwarkesh Patel 00:05:41 You’ve said that the seeds of their current crisis were sown in 2010 with their big stimulus . Is it wrong then to blame Xi Jinping for this? That was before his time. It was Hu Jintao ’s government that launched the stimulus that’s causing all these problems now.", "Kenneth Rogoff 00:05:57 Hu Jinato did it, but they kept it going. The local government debt that you mentioned, that was an innovation put in with the 2010 stimulus. But they kind of left it running and used it as a stimulus program. Long tangent, but the local governments don’t have enough ways to fund themselves. So they were allowed to sell land to start and fund these construction companies, get revenue, and sustain themselves. They let that keep going.", "When Xi Jinping came in, I was told he was going to be Ronald Reagan. I had very good contacts in the intellectual sphere in China. I’m talking about someone who had worked for me when I was chief economist at the IMF, other people I knew whom I really trust. They're really smart and well-connected, still are so I don’t want to name names. But they were telling me, “He's going to change everything. This is really the time we’re going to liberalize our markets a bit. We’re going to do the things we haven’t done.” And he didn't do that much. If you look at China’s growth, it actually slowed down quite a bit when he came into power.", "There are different ways to measure a country’s output because China produces completely different stuff than we do. They use their currency and we use ours. Nothing's perfect. But one way to do it is this. What do they report their output to be in Chinese currency? What do we report ours to be in dollars? Then we use the exchange rate to compare. We can look at growth that way and if you do for China, it’s been spectacular. It’s been very, very good.", "It’s obvious they're pulling it out of thin air. But there are these approaches trying to control for how you’d really compare how an ordinary person lives or how an ordinary firm gets by. When you look at those measures, China’s growth is quite a bit less. If you go 1980 to 2012, the official growth rate is almost 10 percent. This purchasing power parity rate—forgive me for using those words—is like just over 7 percent.", "If you look in more recent years, it’s really slowed down a lot. Even the official numbers have slowed down. I don’t know the number off the top of my head, but it’s 6% or 7% for Xi Jinping, and maybe only 3.5%. They’re starting from a very low base.", "Okay, things were gonna slow down. It’s not all his fault. But I think he’s been reluctant to take risks and I think it’s gotten us to where we are. I think they’re in a lot of trouble. They’re overbuilt in infrastructure. They’re overbuilt in housing. Have you been to China?", "Dwarkesh Patel 00:09:00 I was there six months ago.", "Kenneth Rogoff 00:09:02 Where did you go?", "Dwarkesh Patel 00:09:03 Shanghai, Beijing, Chongqing , Chengdu , Hangzhou , and Emeishan .", "Kenneth Rogoff 00:09:09 So you saw a few of the medium-sized cities. At least one of them, I think, is the new tech center. I can’t pronounce it.", "Dwarkesh Patel 00:09:21 Hangzhou?", "Kenneth Rogoff 00:09:22 Yeah it’s a big tech center. Some of the smaller cities don't feel like the big cities. And 60% of Chinese income is from what they call their tier-three cities .", "I grew up in Rochester , that’s like a tier-three city in the United States. But you could pick Cincinnati, Liverpool. Rouen — I may not be saying it right—in France is an example of a tier-three city. And they have invested like crazy. I’ve been to a few and I’ve studied the data on it a lot. They have amazing roads, amazing real estate, amazing housing. But the feel of death in those cities...", "They were very good at building stuff. The Soviet Union was very good at building cement factories and steel plants and railroads. But they’ve run their course. They have other stuff: green energy, AI, electric vehicles. But believe it or not, that stuff’s still tiny compared to infrastructure and real estate. Real estate’s a third of the economy by some measures. So I think they’re in a lot of trouble now in China. They let it go on too long.", "But again, I wasn’t running things. If things seem to be working and you try to change things, you get thrown out. It’s not easy to be in those shoes.", "Dwarkesh Patel 00:10:48 When I was in China, we visited a town of half a million people outside of Chengdu, so one of these tier-three cities. Arriving there, the train station was huge. Compounds are huge. Even when you’re driving around, like a movie theater is this humongous complex.", "I realized things were bigger in China. I was used to that because I’d seen these other cities by that point. But I just thought, I’ve seen cities of half a million people. I live in a city of half a million people in San Francisco. This just doesn’t seem proportionate to the size of the population.", "Then we visited a Buddhist temple that had been built recently as a tourist site. It was ginormous. You would go through one little shrine, and then behind it would be an even bigger structure and then another one concentrically for like eight turns. It would take you probably 10 minutes to drive through this thing. There was just nobody there. It was like me and three other white people.", "Kenneth Rogoff 00:11:42 It’s very much that feeling. And the young people don’t want to live there. I have a lot of young people here as students and I run into people. They don’t want to live in these towns, and the jobs aren’t there. I can’t criticize them for trying that.", "If you’d asked me in 2005, “Should we try to encourage people to go out to the Rochesters and the Liverpools and the Rouens in France?” I would have said yes. There’s too much in the big cities. There’s overcrowding. Look what happened to São Paulo . Look what happened to Mumbai . But I would’ve been wrong. These forces are very powerful.", "So a lot of their growth, and what they call their GDP, is this stuff. So they’re having to reorient and people just aren’t that flexible. It’s like when AI comes and puts everybody out of jobs. When construction jobs are gone, and all these indirect things are gone, it’s not that easy to move everyone.", "Dwarkesh Patel 00:12:42 If it hadn’t been for financial repression, and all this investment had been done through purely market mechanisms, would things have turned out much better?", "Even if China gets rid of all financial repression today, they save a lot. So this money has to go somewhere. Are there enough productive opportunities to soak up all these savings? Or could there have been in the past? If they get rid of financial repression, is this a problem solved, or could it have been solved?", "Kenneth Rogoff 00:13:10 What everyone’s told them forever is their saving rate and investment rate are astounding. Their consumption rate, it was higher before, but it’s still maybe 45%. Ours is pushing 70%. European countries are a little more temperate, but they’re in the low sixties. Their consumption is very low. They have some wealthy people that you saw when you went to the marquee cities. But a lot of China is living on $200-a-month kind of incomes. You could give them money. You could let them consume instead of exporting it. They’ve been very reluctant to do that.", "You could do things to encourage consumption. Actually, even just changing their exchange rate policy to allow it to appreciate more at times, would make imports less expensive. They have been very reluctant to do that. That’s what everyone tells them. That’s certainly what I said in 2016 also.", "The ticket to getting people to spend more is to provide more security than they have. First of all, there’s nothing like our Social Security system. You need to save for your old age. There’s nothing like our health system. If you work at one of the big state-owned factories, they give you healthcare, but otherwise you’re on your own.", "They’re not allowed to invest abroad. It goes in waves, but they’re not allowed to put their money abroad. So they’re trying to be careful about all of that and not do things suddenly. There’s nothing to do overnight.", "But fundamentally, if you’re looking at China and asking what’s wrong, it’s that the consumer isn’t spending enough. What’s happening right now is worse, because housing prices are collapsing. That’s the only thing they really let people save in. You could either save in a bank account, which gets you a crummy interest rate, or in a house. Now they’re going down, so people are cutting back.", "They can dig their way out. There’s no magic bullet to make them grow at 5%. By the way, that is the official number but I don’t think they’re anywhere near that. There’s no simple thing, but the general goal would be to try to rebalance investment, and consumption.", "Dwarkesh Patel 00:15:37 Let’s go back to your point about whether purchasing power parity is the right way to compare, or whether nominal is the right way to compare. I think in the book you say the nominal comparison of GDP is better because you can’t buy Patriot missiles or oil with purchasing power parity dollars. But if we’re trying to compare the strength of the two countries—their relative strength, especially in a military context—if they can build ships and munitions much more cheaply, and they have to pay their soldiers less, isn’t that actually more relevant if we were trying to figure out who would win in a war? Shouldn’t we actually be looking at the fact that they have a bigger PPP economy than us as a sign that they’re actually stronger?", "Kenneth Rogoff 00:16:18 Yeah. So in the book, I’m talking about your geopolitical power, where if you’re going to give money to somebody, what’s it worth and how much can they use it. But no, you’re absolutely right.", "They just crush us in shipbuilding . It’s partly because they build commercial ships, and there’s a lot of symbiosis between commercial and military. I think they’re 50% of the global shipbuilding market. For us to build a new aircraft carrier takes years and years and incredible expense. One of the mistakes we’re making is trying to build everything ourselves. Let our allies do some of this. The Koreans are really good at building ships. That’s another place we could be importing from.", "You’re right about the soldiers. They’re paid much less. They have a lot of advantages in a conflict against us. We’re way ahead in your department, tech. That is our advantage at the moment. If that were to dissipate, it would certainly hurt.", "Dwarkesh Patel 00:17:22 What is your projection? Right now I think their nominal GDP is 75% of America’s, or something like that.", "Kenneth Rogoff 00:17:29 Yeah, in dollars, what we call the market terms.", "Dwarkesh Patel 00:17:34 What’s your projection by 2030 and by 2040, the ratio?", "Kenneth Rogoff 00:17:41 I didn’t realize it was as high as 75%. I thought it was a little lower. I was actually going to say 75% in 2030. At one point in 2024, it was around two-thirds, but it’s really volatile with the exchange rate. The dollar’s really high. When the dollar’s really high, it makes us look bigger.", "I think they’ll gain about a percent a year on us, maybe. I don’t think they’re going to grow way faster than the United States.", "Dwarkesh Patel 00:18:08 Wait, that means you think they’ll never actually have a bigger economy than us? Or at least in the foreseeable future.", "Kenneth Rogoff 00:18:13 It’ll take a long time. We’re talking about the absolute size. They have four times as many people. There were these projections by Goldman Sachs and many others that we’d be like Canada is to the United States pretty soon. Like all these extrapolations, they were proved wrong.", "That brings me to a big topic. A lot of people will look at some trend—whether it’s growth in something, AI, China—and just project it into the future.", "Dwarkesh Patel 00:18:47 That's a common trend in AI.", "Kenneth Rogoff 00:18:52 Economists at least consider ourselves terrible at that. You go back and look at any of these commissions that were supposed to figure out what was going on. They happen periodically. Maybe Brookings puts one together, maybe the government does. My former colleague, the late Dick Cooper , had a whole list of these.", "It is very hard to know. But my gut instinct is that what’s happening to China is what’s happened to Japan. It’s what’s happened to Asia, what happened to the Soviet Union. We have a more dynamic economy. We’re not perfect. Maybe we’re screwing it up right now with all the tariff wars and deglobalization. But we have this dynamism and creativity that other places—at least other large economies—just can’t replicate.", "They can build stuff. The French have better high-speed trains than we do. I hope you don’t ride on the train from Boston to New York. It’s nicer than it could be, but it’s no high-speed train. You mentioned China. Oh my gosh, their high-speed trains are just incredible. They’re good at that. But the really creative stuff? I don’t want to say they don’t have any. There are some amazing Chinese companies. But let me say that the US is really good at it. We’ve kept that in our DNA. I think it’s very important, to preserve it.", "Dwarkesh Patel 00:20:15 The 1% per year compression is actually an extremely bearish forecast. Even people who are pessimistic about China will say, “Oh, by 2040 they’ll be at 125 or 150% of US nominal GDP.” They think it’ll be bigger, but only slightly bigger. The fact that you think even by 2040 they won’t have caught up is actually very extremely bearish.", "Kenneth Rogoff 00:20:36 I think they're digging their way out of a crisis. Right now we know their prices are falling. It's not because they're inventing stuff really fast. We know that interest rates are being pushed to zero. All these are signs that demand has been crushed and the economy's not doing well. Historically, they have given numbers which are accurate on average, as best as we can tell. I think that's gotten less and less true in the Xi Jinping era.", "Dwarkesh Patel 00:22:02 Let’s go back to the subject of your book. People who are trying to predict when and how China might invade or blockade Taiwan will look at satellite photos of different docks and see how many ships are there. They'll look at military preparedness.", "From a monetary perspective, are there signs we could be looking for? For example, if they think a lot of their American-dominated assets will get sanctioned or they won’t have access to them, could we see them liquidating those assets? Would there be any sort of preparations that we could see on the monetary side to let us know they’re preparing for something big?", "Kenneth Rogoff 00:22:40 I don’t think they’re going to do it suddenly. But very crudely, on their reserves they’re definitely moving more and more into gold . It doesn’t necessarily help to move into euros or Canadian dollars because those countries might side with us. But they’re doing what they can to diversify. I don’t think they’ve diversified into crypto yet. I had a student do a paper on that. But who knows.", "What they are doing very concretely is not just about what they’re holding. That’s the big fact everyone looks at. They officially hold a trillion dollars in Treasury reserves. But the estimate a student of mine did, in a nice paper—and I think others agree with it—is that it's more like $2 trillion. They hold a lot indirectly through proxies.", "The other part of it is that the whole financial system runs through the United States. What we sometimes call the rails or the pipes of the system. Your bank gets a purchase and my bank gets that I’m going to get a credit. How does that take place? How does it take place when we're in different countries? The United States just disproportionately controls that.", "That, they can’t live with. They could live without their $2 trillion for a little while. But they can’t live without being able to pay suppliers and other countries. So they're working hard on developing their own payments mechanisms. Russia actually did quite a bit in preparation for the invasion. We see China doing that.", "Maybe they’re selling Treasury bills, we don’t know exactly. I would advise that to them, if I were a Chinese economist talking to them. But I don’t think it’s going to be something they’ll do suddenly. Maybe Trump will bring down the markets and then there’s nothing to save. But they don’t want to be the ones to bring down the markets and cause a global crash.", "Dwarkesh Patel 00:24:46 What would the alternative rails that they're trying to build look like? Are they buying oil from Iran in RMB ? Will other countries they need things from accept that? What is the vision for 2030? What’s their goal?", "Kenneth Rogoff 00:24:59 Absolutely. There are a lot of countries in Africa and Latin America—some of them are almost client states of China—that they can force. Iran, of course, sells a lot of its oil to China even when there are sanctions. They’re moving in that direction. It’s not just about what you invoice the payment in. It’s how we acknowledge it, how we clear our books. That’s what they’re working on.", "It’s coming. The Europeans are working on it too, by the way. Europe is not happy with the situation. They're actually forming a central bank digital currency . It’s moving quite a bit faster than I thought it would. That’s actually one of the reasons they’re doing it, for international payments.", "00:25:46 – How the US broke Japan's economy", "Dwarkesh Patel 00:25:46 Let’s talk about Japan, which you also cover in the book, or their crisis. You blame US pressure in advance of that crisis on the Japanese to raise the value of their currency, and the actions by the Bank of Japan . Zooming out, how much of the crisis is not caused by things like that, but just the fact that high-tech manufacturing as a share of world output was becoming less important? There are demographic factors as well. So something like this was bound to happen to Japan, even if there wasn’t some big crisis that preceded it?", "South Korea’s GDP per capita isn’t that high either, at least in comparison to the US. How much of this is due to actions taken by specific actors?", "Kenneth Rogoff 00:26:30 South Korea had a crisis in 1983 and another in 1997 . They haven’t been crisis-free, by the way.", "There are a lot of factors. The demographics would be the most obvious one. The rise of not just China but Korea and other competitors too. Japan invented a business model that a lot of countries have since duplicated. The model was export-led growth . Something people might not think about is it creates competition.", "Most countries aren’t as big as the United States, and there aren’t as many different firms trying to do the same thing. Of course, we have trouble with competition here. Famously, in Mexico at one time, there were only two telephone companies, two bread companies, two taco companies. It’s very hard not to let monopolies sit and use their political power.", "So how do you get around that? Japan did something that was really pretty innovative. Germany did it too to some extent. In the export sector you’re competing with the world, not just with domestic firms. That created innovation and creativity. Japan did really well with that. But over time, others imitated it and started building the same things. So that’s part of it. The aging is part of it. But I think the financial crisis was a very big part of it.", "Dwarkesh Patel 00:28:02 What is the counterfactual? Suppose that the crisis hadn’t happened, how much wealthier is Japan today than it might have been otherwise?", "Kenneth Rogoff 00:28:08 Oh, I think 50% wealthier per person, way wealthier. That’s where they started. It depends on which measure you use. But by market exchange rates they were richer than the United States in the late 1980s. Even if you use the more complicated measures, they were richer than any European country, richer than Germany, France, Italy. They’ve moved to the bottom of the rung now.", "Okay, the financial crisis wasn’t the only thing. It’s a long story but we effectively forced them to move faster to open up and deregulate than they were culturally and politically ready to. I give that as an example of something in the book where I changed my mind.", "I had looked at that for a long time afterward. Going back to 2005—that’s long after the Japanese crisis—I would hear from people like Jiang Zemin , who was the president of China that I met. He’d say, “We’re not going to let this happen to us. There’s no way.” We were discussing how I thought maybe they shouldn’t have such a fixed exchange rate. He said, “That’s what the United States told Japan. Look what happened to Japan.”", "I didn’t push back that much with someone like that. You talk to other people. But I heard that from many people. I used to think, how can that be? There’s the Plaza Accord in September 1985 where we pushed them to make their exchange rate more free. But I used to say that you did that in 1985. Carmen Reinhart , my co-author on many things, and I date the crisis to 1992. That’s seven years later.", "I continued to think that but over the years, particularly recently, I’ve started to think I was wrong. These things unfold slowly. Crises don’t happen overnight. Japan deregulated and it worked. But they didn’t know what they were doing. I think it was a huge mistake for Japan to agree.", "I actually heard from someone who attended the 10th anniversary of the Plaza Accord, held in Tokyo. The person who had been head of the Bank of Japan in 1985 gave a speech to officials. He went like this and apologized very symbolically, “I have ruined our country. I did this. I take responsibility.” Yes, financial repression is bad. But financial liberalization needs to be done gradually. If you do it too quickly, you get a crisis. Many crises are caused by that.", "Dwarkesh Patel 00:31:25 Asking as somebody who obviously doesn’t know the details, at a high level how would you explain it to a novice? How could a country be 50% less wealthy than it otherwise might have been, simply from a financial crisis? Whatever they could’ve otherwise produced, why can they still not produce it? A country’s producing a bunch of things. Why are they producing 50% less just because of a financial crisis a couple decades ago?", "Kenneth Rogoff 00:31:57 Their case is very unusual, although having a number like 10% or 20% is very typical. In fact, one of my professors at MIT was teaching us about the Great Depression. It’s hard to do without a blackboard but he said, “Here’s how to think about it. We were growing like this, then we get here and we go like this, and then we’re going like this. We never got this back.”", "Dwarkesh Patel 00:32:24 There’s a lot of economic models where, Solow catch-up …", "Kenneth Rogoff 00:32:29 Yes but what happened with a financial crisis—particularly in Japan—is that it sort of blew up their business model. For example, maybe China wouldn’t have overtaken them so quickly if they’d been able to borrow more freely, if their financial markets were working better, if they had been more adroit. Their consumption collapsed. Japan didn’t quite know how to deal with that.", "We in the US were much more brutal in what we allowed to happen than Japan, but we got out of it pretty quickly. I’m not sure we got back to where we were, but we got out of it very quickly. Japan has a very consensus-driven society. They don’t want anyone to be in bad shape. Their struggle with this held them back for a long time. Maybe 50% is too high and I should say 25% or 30%, but a lot better shape than they have been.", "Dwarkesh Patel 00:33:25 Just to put it into context, what do you think the counterfactual wealth of America looks like today without 2008?", "Kenneth Rogoff 00:33:34 Boy, that's a good question. I'm hesitant because I probably have some paper giving a number for that and I might say the wrong thing. We certainly cumulatively lost a lot. It led to this political crisis that caused us to lose a lot more. I don't know, probably 15% lower. It’s a lot lower than it would be. We had this dynamic which we're living in right now. It's still an echo of that financial crisis.", "Now, mind you, you're asking about our national income. Inequality matters and would we have done other things? In some ways, the 2008–2009 crisis was a condemnation of the system and people could see it. Maybe it led to some healthy cleansing. But I think it led to a lot more damage than healthy cleansing.", "Dwarkesh Patel 00:34:32 I think this updates me toward the view that financial crises are even worse than I think. It isn't just this bad thing that happens and you recover. If there's 15% lingering even after almost 20 years, then wow that’s huge.", "Kenneth Rogoff 00:34:46 You’re losing a lot of cumulative growth. Look at Greece today or Portugal. You kind of get back to where you're having a positive growth rate, but you're not picking up… They're very different from a normal recession. Actually in a normal recession, you go down and then back up. The United States had thought it was immune to financial crises. We really hadn't had one since 1933 .", "We had a different book that came out in 2009. I mostly write papers, but this was a book with Carmen Reinhart. It was called This Time Is Different . We had some papers published in advance. We said: “No, they're different when you have a financial crisis, it lasts way longer. The slowdown is way worse.” And we were mocked when we were saying that. I think the New York Times had a two-page spread saying how ridiculous everyone thought this was.", "We could have been proved wrong and maybe if we’d done things better we would have. But it is the norm. There’s a few exceptions, like Sweden got out in a year or two. But normally they really are different from a normal recession.", "00:37:05 – America's inflation crisis is coming", "Dwarkesh Patel 00:37:05 You say in the book that you expect there to be another spike in inflation within the next decade and also that the fiscal position of the United States doesn’t seem sustainable.", "If you go forward 10 or 20 years, when we do hit this, when the piper comes calling, what actually happens? Is it going to be some acute crisis, like what happened in Greece ? Or are we going to have some kind of Lost Decade like Japan ? What will happen?", "Kenneth Rogoff 00:37:32 Typically, you have a crisis of some sort when your debt is high and your political system is inflexible. We’ve checked those boxes. Then you get hit by a shock you weren’t ready for. You get caught on your back foot. It depends on what the shock is, and how we react.", "The way Japan reacted was through what we call financial repression , basically stuffing debt into every insurance company, pension fund, bank. The central bank holds almost 100% of GDP in debt. We think we have a lot—I don’t actually know the number off the top of my head for the Fed, but I want to say around $7 trillion. Japan would have the equivalent of $30 trillion. So they’ve done this. It’s not the only reason they haven’t grown, not by any means, but it’s not good for growth. That’s one option.", "I think for the United States, that’s tough. We’re just a very market-driven system. If our financial system had that kind of pressure put on it, it would be worse than when Japan did it. And a lot of people lend to us. We can’t do that to them. We can’t force French insurance companies to hold US debt. We can only force U.S. ones.", "So I think the most likely thing will be inflation, which only lets off steam. Because inflation sort of pulls… well, it’s like a default . And I’m not talking about hyperinflation . I’m talking 10–20% inflation over a period. We just went through that . That actually knocked about 10% of GDP off our debt. We might need more next time. So it lets some steam off, but if you’re still spending too much and you haven’t fixed anything you’re back in the problem. That’s what’s going on now. We had some steam let off, but it wasn’t enough.", "I think when it happens again, markets will be very unforgiving about it. They’ll look at us and say, “You are not to be trusted.” So it’ll raise the interest rate more, our debt will build up faster. I think at that point… There’s this saying about Americans attributed to Winston Churchill : we always do the right thing after we try everything else. I suspect we’ll try other things.", "Dwarkesh Patel 00:39:58 Just for the audience, there are four ways we could get out of the debt: We could default, which you don’t think is likely.", "Kenneth Rogoff 00:40:02 But really good for my book.", "Dwarkesh Patel 00:40:05 Already you timed this one so well. I’ll be shorting the market when your next one comes out.", "Financial repression. I guess you could actually cut the deficit. Or inflation. You’re saying if there’s another round of inflation, then after that…", "Kenneth Rogoff 00:40:22 What everyone calls austerity . By the way, this word “austerity” that progressives use whenever you ignore debt building up and spend whatever you want, “austerity” is when you don’t do that. I mean, I think Ezra Klein ’s book Abundance actually makes the point that there are costs and benefits to a lot of things. This “austerity” language pretends there are no costs to having your debt be higher and only benefits. So yes, that’s what everyone else has to do. We’ve gone longer than most without doing it.", "Dwarkesh Patel 00:41:03 If it's going to be a financial crisis and financial crises are this bad…", "Kenneth Rogoff 00:41:06 Inflation crisis. A financial crisis is the private sector, and the public sector bails out the private sector. So, the government, we’re not going to default. We're going to inflate, or do financial repression, or baby steps austerity or something. We're not going to have a crisis like Greece had. That’s just wrong. But inflation's not pleasant.", "Dwarkesh Patel 00:41:32 Why wouldn’t we?", "Kenneth Rogoff 00:41:33 Because we can print money. We can honor our debts. We just never have to default. Greece was using the euro and didn’t have control.", "Dwarkesh Patel 00:41:43 Japan was using its own currency and it didn’t default.", "Kenneth Rogoff 00:41:46 They had a financial crisis, not a debt crisis. They never defaulted on their government debt in that period. I’m not sure if they ever did. I’m sorry, they did in World War II. Of course, Japan defaulted on its government debt in World War II. That’s an interesting story.", "But it was a financial crisis they had. Financial crisis is what’s making your banking system not work, lending to innovators not work, lending to dynamic companies not work. Ben Bernanke wrote at the time a thought piece about this. He didn’t really have numbers. He conjectured that was why the Great Depression was so bad .", "When Milton Friedman , one of the great economists of all time, looked at the Great Depression, he said: “You didn’t print enough money. You tightened the money supply too much.\" And Ben came along 25 years later. He was a classmate of mine in graduate school. I had the office next to him at Princeton. He came along and wrote this amazing paper. Again, it was just a thought piece, which is not a typical economics paper. He said, “If it was just that you didn’t print enough money, eventually wages and prices would adjust. Yeah, maybe it wouldn’t happen in a year, maybe it wouldn’t happen in two years, but the Depression took 10 years. How can that be?”", "He made this conjecture. There’s been a lot of subsequent work showing it. Again, there’s a lot of debate about this, let me be careful. But I certainly view the weight of evidence as saying financial crises are really bad. This has led us to the situation in the United States where we’ve gotten a little happy-go-lucky about bailing everyone out.", "I would describe Treasury and Federal Reserve policy today as, “When in doubt, bail it out.” Because they saw what happened. But as your financial sector grows, that will lead to a problem someday. It did, of course, in the Silicon Valley Bank case . It continues to have echoes of that. But I think an inflation crisis is more likely than a financial crisis, although these things are very hard to predict.", "Dwarkesh Patel 00:44:01 You say in the book that we didn’t outgrow our World War II debt. What happened instead was that financial repression after the war, and then the inflation of the 1970s made our debt-to-GDP ratio… It should have been 70-something, but it ended up around 20-something instead.", "Of course, we just had inflation recently. Do you think there’s some irrationality in the market for US government debt already, given that we can forecast what’s going to happen here? They can read your book and see that inflation’s going to go up. They can look through history at what’s happened. Do you think there’s some irrationality in terms of what people are doing?", "Kenneth Rogoff 00:44:38 I think, number one, is that they have too much faith in the independence of the Federal Reserve . The Fed’s been this amazing institution that’s evolved. It’s been the guardian of low inflation. We can argue about if it’s the right inflation or not. The Federal Reserve insists it’s very independent. The Supreme Court recently ruled that Trump couldn’t fire Powell , the head of the Fed. But I think they’re dreaming.", "There are so many ways Congress and the President could override the Fed, especially if they declare some kind of wartime or “war-on-pandemic” situation.", "Dwarkesh Patel 00:45:21 Though I wonder from the politician’s perspective, maybe the independence of the Fed gives them a convenient way to pass the buck that they’re actually happy about. They can say, “Ah, I’d love to do this irresponsible thing, but I can’t because of the Fed. It’s out of my hands.”", "Kenneth Rogoff 00:45:38 That’s for sure. That’s why Trump bashes the Fed. It’s not the only reason he does it. I think he actually disagrees with them. But he feels that bashing the Fed, if there’s a recession which there might be, gives him someone to blame for not lowering interest rates.", "It depends if we run into a world where interest rates start creeping up… Right now, the 10-year rate is around 4.5%. That’s the nominal rate. The inflation-indexed one is a little over 2%. The 30-year rate is around 5%. I think those are going to drift up. And that makes mortgage rates go up, student loans go up, car loans go up, business loans go up. It’s painful.", "The question is, at what point does that pain become real? As I mentioned, I think this would be catalyzed if we’re hit by a shock. In that kind of situation, it becomes easier to temporarily take back some independence from the Fed. I think it’s easier to do than people think. Given that I think shocks are going to happen—maybe AGI brings a shock we don’t yet imagine—people trust in Fed independence too much.", "Now, I love Fed independence. I actually wrote the first paper on why you should have an independent central bank back when there were virtually no independent central banks. I was a pawn at the Federal Reserve. I’m talking my own book when I say it’s a great idea. I don’t mean my “book” book, I mean my human capital.", "I like to say that the Federal Reserve fights for its independence every day. I hear senators say, “They’re idiots.” I hear people in Silicon Valley say, “They’re idiots. We should bring them under the Treasury.” I used to hear that just from progressives. But now I’ve heard that recently from some tech titans, saying things like, “ Scott Bessent , the Treasury Secretary, he’s smarter than Powell. Why don’t we let him run things?” They could.", "Dwarkesh Patel 00:47:50 It does seem like the Fed works really well as it exists now. It’s independent. Sure, there are people who criticize its actions as you say. But on the whole, it seems like a reliable institution that makes smart calls. They can be wrong, of course, but it seems so much more competent than much of the rest of government.", "If you wanted to replicate how the Fed works—if you wanted other parts of government to work that way—is there something we could do? Or is it more of a human capital problem than an independence problem? Like, bankers and economists are really smart. I don’t know if you could replicate that in the Department of Education or the Department of Agriculture.", "Kenneth Rogoff 00:48:29 One of the things the Fed has is this simple barometer that everybody sees. They don’t really see it, but they have a feeling. They see gasoline prices, that's probably how they decide what inflation is. But they have this simple barometer.", "Mind you, particularly in recent years, progressives have wanted the Fed to solve inequality, social justice, and the environment. But they have one barometer that they kind of control over the long run, not in the short run but over the long run. So that makes it a little easier to say, “You wanted us to have low inflation.”", "Whereas so many other things the government does might be making everybody better off, but they’re making some people more better off. Maybe some people aren’t better off at all. It suddenly becomes very political. Nobody elected the Fed, so it’s harder to make those decisions. I’m obviously a technocrat, or I side with technocrats. My students are technocrats. I think way more things should be like that.", "Dwarkesh Patel 00:49:33 But if you wanted to do that, suppose you get called by the Pentagon tomorrow and they say, “We want to run the Pentagon like the Fed.” What do you tell them to do?", "Kenneth Rogoff 00:49:41 I was going to say, the Pentagon? I’m not speaking about the current Pentagon, but just up till now. I haven’t looked closely but it’s been run pretty darn well. The military’s pretty efficient.", "There are people who tell me, “Okay, Elon Musk can take a payload into space at 15% of—or maybe one-fifteenth of—what NASA does. Why don’t we let Elon Musk run the Pentagon?” There may be something to that. I think to some extent—and maybe I’m defending them too much—but you never know where the next blow is going to come from. It always looks like a lot of your stuff is wasted, you built up. But your enemy is looking at what you have. Where are you weak? Where are you strong? So I wouldn’t have picked the Pentagon as the obvious thing.", "But let’s say crypto regulation, that would be a good example. Why don’t we have something more independent there? Instead, as you well know, it’s been overrun by politics.", "In fact, there’s this huge thing going on right now. I don’t know how it’s going to play out. The Fed has been protected, though not as much as you’d think. But Trump got to the Supreme Court and was told he can fire the head of any agency. I assume, by inference, he can also fire anybody at any agency. I think that’s a terrible mistake. We need to have independent agencies. You have an evaluation process. They answer to Congress. If they go off in the wrong direction, you try to fix it. But to just have everything switch every four years? That’s really very worrisome.", "Dwarkesh Patel 00:51:28 Before Trump—maybe for intrinsic reasons, maybe because of norms—it was really hard to fire people anyway. That didn’t produce remarkable competence across the government.", "Let me try to consolidate some of the things you mentioned. Maybe it’s really important to structure more of the government like that. If you're running a department, you have just one target like the Fed’s 2% inflation target. That’s all you have to do. Don’t worry about anything else. I do think it’s impressive that the Fed has avoided mission creep. It seems like every institution in the world falls into mission creep—companies, government departments…", "Kenneth Rogoff 00:52:02 Oh, they haven’t avoided it. They’ve been under incredible pressures. Obviously, things have changed. But I talk about this a bit in my book. You go through the working papers and research coming out of the various Federal Reserves and it's all about inequality, the environment, social justice. You’d be strained to find a paper about monetary policy during that period, because they were under pressure. Part of being independent is bending with the wind.", "But they’ve managed to keep their core competency, their core function of setting monetary policy independent. No, it’s been amazing. But it is a constant fight. You can go to a country like Turkey. I don’t know what the inflation rate is today, but it hovered up toward 100%. And Erdoğan —the president of Turkey—would fire the head of the central bank every year. Every time they tried to raise interest rates, he’d fire them. You can find other countries like that.", "So we’ve been lucky. But you can’t count on that continuing.", "Dwarkesh Patel 00:53:10 Apart from the political pressure problems from the outside, you mentioned watching your younger colleagues or younger economists writing working papers at the Fed about these other issues like inequality or climate change.", "From the inside, given what the younger people in this profession care about, do you expect the competence or the focus to just decline by default, given the new generation?", "Kenneth Rogoff 00:53:42 No, this was a wake-up call. There was a blog that Hoover did. They looked at the most-used words in our big annual meetings. There’s this thing called the American Economic Association meetings. Everybody goes. They took all the abstracts and titles from the last 15 years and the word inflation had not appeared until this year.", "Dwarkesh Patel 00:54:08 But why are you optimistic about what happens when they get in charge?", "Kenneth Rogoff 00:54:10 Because there’s an intellectual market. This was a huge miss and there’s a market for figuring it out. One of the good things about the American university system—at least in the sciences, and I’ll speak for economics—is that things drift off but if something is way wrong—and they were certainly way wrong about inflation, I believe, and way wrong about interest rates and debt—then there’s some rebalancing.", "We have a very competitive system of publishing. We have a seminar system that’s just ruthless. There’s a debate around it. It’s not settled, and maybe I’m wrong and they’re right but it’s definitely being debated now. Whereas 10 years ago, I think I was like a lone voice in the wilderness saying these things might happen again. I would teach inflation to my students. They’d sit there patiently. It was like I was teaching them the music of Fred Astaire or something. They’d go, “Okay, it’s the 21st-century, the Internet, that can never happen.”", "Or I’d teach debt. If I had foreign students, they were all having problems. But the American students were like, we can just do whatever we want. But it’s changed.", "Dwarkesh Patel 00:55:35 Going back to the potential future problems, if we do go the financial repression route and not the inflation route, how bad will that be?", "As you were saying, after World War II we had financial repression. But that was when we had the highest growth ever. On the other hand, if you look at China and Japan, it seems like a lot of their problems might be caused by the misallocation of capital that financial repression created. Do you have some intuition about how much we screw ourselves over with that route, as opposed to inflation?", "Kenneth Rogoff 00:56:02 We’ll start with World War II but it’s never just one thing, obviously. There were a lot of things. So with World War II first of all, financial repression was easy. The financial markets had been destroyed by the Great Depression . World War II became something of a command-and-control economy .", "There are a lot of interesting papers about World War II that show that Americans just worked really enthusiastically. There was real patriotism in the production. I’m not saying we’re not now, but back then they were able to fill factory jobs that we probably couldn’t even fill today. As we emerged from World War II we had all the soldiers come home. That’s a huge growth lift. We didn’t manage it perfectly. We actually had quite a bit of inflation during that period.", "The financial markets, as you grew up in and as young people know them today, didn’t exist back then. The world has changed a lot.", "Dwarkesh Patel 00:57:05 Does that mean US growth would have been even higher after World War II if we had just kept the government debt or figured out some other way to deal with it and let financial markets develop earlier?", "Kenneth Rogoff 00:57:17 Maybe. We didn’t have any financial crises for a long time because the markets were very repressed. Oftentimes, when you get a financial crisis it’s exactly when somebody comes along and says, “I know how we can make things grow way faster. Let’s just take away all the rules and regulations overnight.”", "That happened in one country after another. It works, until you blow up. So you’d have to say that, by and large, it was managed rather well. We grew. The rest of the world grew. It took time for private markets to develop. One thing I should’ve emphasized was that our debt was very high after World War II, about what it is now.", "But there was nothing else. There wasn’t all this private debt. That had all been defaulted on. I’m being slightly hyperbolic here but maybe it was 50% of GDP altogether. Everything else, state and local debt, had been defaulted on. Now, it’s bigger than the federal debt by a wide margin. So it was a very different world, putting in financial repression back then, compared to now when that’s a big part of our business models in the financial sector.", "Dwarkesh Patel 00:58:25 Just to make sure we’ve completed the concrete scenarios, so basically your prediction is that there’ll be some crisis, some surge of inflation, then there’ll be austerity. Then what happens? Is growth really slow afterward because the government can’t spend as much? What do the next few decades look like in your world?", "Kenneth Rogoff 00:58:50 I think it will be quite a wake-up call for Americans, having to adjust under difficult circumstances. Most likely, we get hit by a shock. We want to borrow a lot. The bonds are rising faster than they did the other times we did. And we're not able to do as much.", "It’s not the end of the world. During the European debt crisis from 2010 to 2012, most European countries raised their retirement ages. They didn’t do it right away. They did it 10 or 15 years out. There’s stuff you can do.", "So I want to be careful here and say it’s not like the end of the world, but it’ll be pretty unpleasant. This will affect the entire world, since the global system is very dollar-centric. It won’t be good for our franchise, the dollar being so used everywhere. As other countries start using the dollar less, our interest rates will climb even higher.", "I’m an academic. I’m not trying to push my ideas by being maximally hysterical. But hysterical is definitely within the realm of possibility here. What I’m saying is more likely than not, not that it’s not definitely going to happen. We could have growth. We could have a whole lot of high-skill immigration. We could make changes. There are a lot of things that could go well.", "Dwarkesh Patel 01:00:19 On the growth thing, Europe’s growth has been pretty bad after 2010. Japan obviously has had pretty bad growth after their crisis. Why will we be in a different position if we do have this kind of crisis? Why will growth continue apace?", "Kenneth Rogoff 01:00:34 No, it’s going to cause a pause in growth. The main reason debt crises happen is we don’t have an automated system of working it out. When the stock market crashes it’s painful, you’re looking around for who got hurt. But when you have debt crashes, we take five years, ten years, to figure out who owes what. It’s that process of allocating the losses that causes problems.", "That, by the way, is why so many people thought, “China’s fine. The president will just tell everyone what it is.” That turns out to be not as true as they thought.", "01:02:20 – Will AGI solve the US deficit?", "Dwarkesh Patel 01:02:20 Is it possible to believe both that AGI is near and that America’s fiscal position is untenable?", "Kenneth Rogoff 01:02:26 What do you mean by saying AGI is coming.", "Dwarkesh Patel 01:02:28 Any job that can be done purely through computers is automated. So white-collar work, the work we do even, is automated within 20 years.", "Kenneth Rogoff 01:02:36 Anytime you get a big productivity boost, it’s fantastic. If it comes quickly, yes that can solve problems. I will say that historically, there have been lots of times when countries had good growth— even higher than their interest rates—and they still got into trouble because fiscal policy isn’t mechanical. It’s political. It’s about how much you spend, who wants what. It’s not an arithmetic question.", "Let me say it another way. Nobody ever defaulted or had high inflation because of arithmetic, because they couldn’t pay, or couldn’t have called in someone who knew what to do. They do it because of political pressures.", "I think if AGI came that fast and that big, it would make the populism phenomenon we’re facing now seem like nothing.", "Dwarkesh Patel 01:03:33 If AI is going to be massively deflationary —if it makes all these goods so much cheaper—should we be printing a bunch of money to still stick to 2% inflation? Or does that not matter anymore?", "Kenneth Rogoff 01:03:46 Well, we certainly can run monetary policy the same way. You don’t automatically get deflation just because some goods are going down. You can do things to increase demand so that there are upward pressures on final prices. Even if the AI workers aren’t demanding anything, you can put in a lot of demand so firms charge a lot, not just for the services that AI is replacing but also for the raw minerals and materials and everything else that goes in.", "Fundamentally, when you have productivity, it makes it easier for the monetary authorities—the Federal Reserve —to deliver low inflation and good growth . That’s what they’re trying to do. It makes their job easier. And I think it takes some of the pressure off them to inflate, because things are going pretty well. So the pressures aren’t the same.", "Dwarkesh Patel 01:04:39 But should they be trying to fight the deflation at all, in that world?", "Because traditionally we need inflation to root out the rentiers and to fight downward wage rigidity. But now the AIs have all the jobs, so we don’t need to worry about that. There are a bunch of biases humans have that we need inflation to correct for. Do we even need that in a world with AI?", "Kenneth Rogoff 01:04:56 Okay, that is a very good point. Frankly, Keynes founded modern macroeconomics . He was an incredible Renaissance person, having both sides of the brain. One of his insights that just transformed things was this. Before Keynes, we used what we now call general equilibrium models: demand and supply, prices moving to keep everything in line. But Keynes was looking at the Great Depression and said, “Prices should be coming down. But they’re not. Why aren't they?” That’s really a cornerstone. At the end of the day, it’s mostly human behavior. It’s mostly workers.", "So if you have these docile AI workers—they’re not workers, they’re just firms—and if you have firms that are willing to let prices fall, then certainly you can do that. I mean, we’re still going to have some human workers? I don't know.", "But here’s a question on what monetary policy should be. Do you think interest rates are going to go up or down? When we had deflation last time—from demand-deflation, like after the financial crisis and the pandemic —interest rates went down. My intuition here is that interest rates would need to go up. I mean real interest rates, real inflation-adjusted interest rates. In that case, deflation isn’t such a problem. You just don’t let the interest rates go up as much.", "Last time, interest rates went to basically zero . That’s a whole other line of discussion. They felt they couldn’t lower them into significantly negative territory. So they were sort of paralyzed. There was this deflation, or at least too-low inflation. Monetary authorities thought they knew how to create inflation, but that’s always been by cutting interest rates lower. When they hit a bottom, they don’t.", "I have a whole book about negative interest rates and that’s a whole other thing. If we’re imagining real interest rates going up, then it’s not much of a technical problem. You just let the interest rates rise a little less so you’re not getting deflation.", "01:07:11 – Why interest rates will go up", "Dwarkesh Patel 01:07:11 Do you expect interest rates to go up? Because one factor is that you want to invest in the future, the future has so much more potential. Another is that maybe you want to consume more now, because you know you’re going to be wealthier in the future anyway. You might as well start spending as much as you can now.", "Kenneth Rogoff 01:07:26 I think AGI and AI are upward pressures on interest rates for lots of crude reasons. You have huge energy needs. Traditionally, when you did a lot of investment it raised wages, but it’s possible— there are economists like Daron Acemoglu who’ve shown it can go both ways, it’s not difficult to show that—if you're really just substituting for workers, it’s making capital more valuable. You just invest even more.", "The pressures on monetary policy will depend a little bit on that. In principle, it makes life easier. If it did push the interest rate down to zero, there are interesting questions around that, but maybe your audience might not be as fascinated by them as I am.", "Dwarkesh Patel 01:08:27 Let’s talk about it a little bit. If we expect interest rates to go up because of AI, what should the government be doing right now to be ready for that? Should they be locking in hundred-year bonds at the current interest rates since they’re only going up from here?", "Kenneth Rogoff 01:08:43 I’m going to get to that. But first, just where we are… I follow interest rates today all the time. Maybe a lot of people who listen to you don’t. But let’s talk about inflation-adjusted interest rates. There’s a 10-year bond that’s indexed to the inflation rate , issued by our Treasury. Inflation-indexed debt is only about 10% of our total debt. There are tax considerations—it’s not perfect—but it’s a pretty good measure of what we call the real interest rate.", "It had gone to minus one at one point after the pandemic. It averaged zero for about 10 years, from 2012 to 2021. And it’s higher now . That is, for a macroeconomist, the biggest question in the world. Because it affects asset prices, it affects risk, it affects volatility .", "I regard it as just a normalization. I think it was likely to happen. If you go around and talk to some of my younger colleagues, or folks at other places, there’s quite a debate about that. A lot of people think, “We’re getting old, we’re not inventing anything...” I know you’ve just been arguing against that and good for you.", "I tend to think interest rates are more likely to go up than down, going forward. I’m talking about these long-term interest rates—the Federal Reserve just sets the overnight interest rate —these long-term interest rates are set by markets, and I think they’re more likely to go up.", "But I think AGI is only a piece of it. Debt is rising everywhere. There’s the remilitarization of much of the world. Needing to deal with climate change. Eventually if we’re not dealing with it, then we’re dealing with climate disasters. There’s growing populism, geopolitical fracturing, many things. So I tend to think interest rates are going to go up but not just for the good reason that we’ve gotten more creative and that everything’s going to be better.", "01:10:55 – US equities will underperform", "Dwarkesh Patel 01:10:55 You've said in the book that you expect a rebalancing from US equities to foreign equities. US equities have outpaced foreign stocks for the last couple of decades. You say you expect this to change or that there will be some rebalancing. What causes that?", "Kenneth Rogoff 01:11:11 What I say very concretely is that when the dollar is really high, you should expect the euro to go up. I feel strongly about that. My first important paper was about exchange rates. That’s why the book’s about exchange rates. When Japan’s really weak or when the dollar’s really strong—it’s very hard to predict exchange rates—but I think the euro will do well. There’s a lot of room to catch-up in Europe.", "I actually think I’m nuanced in what I say in the book. Trump hadn’t been elected yet, but I say Europe seems to be under pressure to re-militarize . I was aware that Harris was probably going to cut the US defense budget, so that would put pressure on Europe. Re-militarizing would actually be good for the euro. It would be good for technology in Europe. It would give them more geopolitical power in the system.", "Now, just so your listeners can calibrate this, my first book was a very mathematical one, Foundations of International Macroeconomics . In theory, you should diversify. You shouldn’t put all your money in the United States. I did a video with Zbigniew Brzezinski , Mika Brzezinski ’s father. For those who don’t know, he was Carter’s Kissinger. I did a video with him that Merrill Lynch produced.", "It was about why international diversification could be good. What they got me to say was very, very limited. I feel quite fine about what I said. I wasn’t doing any consulting at the time, just academic work. I didn’t do speeches, I didn’t do consulting. I talked to central banks a bit, but I didn’t do anything for money. But I did get paid for that. It circulated half a million copies of it. A lot of my friends teased me and said I would’ve made a lot more money if I hadn’t followed my own advice. I can think of plenty of other examples like that.", "But yes, my instinct is that the US premium—this idea that it should just keep getting bigger and bigger—these things have some regression to the mean. Maybe not with AI all being in the US, I don’t know.", "Dwarkesh Patel 01:13:44 Is it that you’re predicting that the S&P keeps growing at 8%, but foreign equities do even better?", "Kenneth Rogoff 01:13:50 I’m just going to safely say foreign equities do better than dollar equities.", "Dwarkesh Patel 01:13:55 But not because the growth in US equities slows down, it’s just that foreign equities do even better?", "Kenneth Rogoff 01:14:00 Look, you have a lot of friends who spend all their time doing this. I wouldn’t pretend to. I hold a very neutral portfolio because I talk to policymakers and world leaders even on occasion. I don’t want to be someone who’s talking about regulating Bitcoin and owning a lot of Bitcoin, to pick a random example.", "So I wouldn’t regard myself as great at this. But yes, I think there’s a case for international diversification, particularly into Europe at this point because they have so much potential catch-up. Just as in California, where you’re from, there’s a little bit of dim awareness that it might be overregulated and you might want to do things differently, I feel that’s happening in Europe.", "Dwarkesh Patel 01:14:52 If you look internationally, if you'd been betting on catch-up, I wonder how you’d backtest it. There’s some intuition there that if you’re poor and you’re further from the frontier, it makes sense that it would be easier for you to catch up. But there’s another intuition that if you’ve been persistently behind the frontier, there must be some deep endogenous reason.", "Kenneth Rogoff 01:15:11 You’re absolutely right. For example, Asia has a lot more governance problems on the whole. There’s a reason that their price-earnings ratios are lower, because you don’t trust the governance. You’re right.", "That’s fair and a lot of people are just betting on that. But I don’t think Europe is so hopeless that it can’t pull it together. I can make a comparison. I'm a basketball fan. The Boston Celtics just got crushed by the Knicks, just before we’re taping this. Part of it is because our star, Jayson Tatum, was injured. You may not have gotten any better, Europe in this case, but if somebody’s hobbling the United States—I do think that’s going on to some extent now—you do better.", "Dwarkesh Patel 01:16:07 Is there some institutional reform we could make that would get us out of this political equilibrium we're stuck in. Both parties, when they're in power, are incentivized to increase the debt and there's no institutional check on that proclivity.", "Kenneth Rogoff 01:16:26 There have been a lot of people who’ve tried this, for example by having what are called fiscal councils . I did a paper once about fiscal councils with Julia Pollak , who’s a brilliant economist, when she was an undergraduate. That was quite a while ago. A number of countries experimented with them, but it hasn't worked.", "The country that's done the most with this is probably the United Kingdom. George Osborne , when he was Chancellor , set up this fiscal authority. The big thing they did is they made predictions so the government doesn’t get to make up different predictions. You don’t necessarily have to go by their numbers, but they got to say whatever they thought.", "Our CBO does not get to do that quite the same. Our Congressional Budget Office is very good, but they are constrained to believe what Congress tells them. If Congress says, “We’re putting in this tax hook, but it’s going to go away after ten years,” or “We’re doing this policy, it’s all going to change,” then they’re forced to use those parameters. The UK version is more independent. There are lots of complaints about it, but that’s a very poor man’s fiscal authority, just somebody who says, “This is what your deficit looks like.” It’s the same thing as our CBO, but with more independence. It helps. But I think it has to go to our electoral system, right? Our campaign financing. Do we have term limits?", "Dwarkesh Patel 01:18:05 You think that would help? I think, if anything, if you’re longer in office, you might have a more long-term incentive. To the extent that a lot of the deficit problems are caused by populism, I don’t know how much campaign finance would help.", "Kenneth Rogoff 01:18:19 Maybe you’re right. I don’t have a magical solution to this. It’s all over the world. Nobody's finding a particularly great solution to it. The only encouraging thing is that these things go in waves. So maybe this one will end. But we’re certainly in a really difficult situation right now.", "Somebody asked me, “If you were advising a Republican president or something, what would you do? What problem would you face?” The biggest problem is that in a few years there’s going to be a Democratic president. They’re going to do exactly the opposite of what you wanted to do. It’s the same thing for the Democratic presidents. How do you have some continuity? How do you have policies put in place that the public can rely on?", "We’ve done well in the United States in some ways because our government’s been kind of weak and hasn’t been doing stuff. The private sector can work around it. I’m not saying it’s perfect. There are lots of things we should do. But look, this is out of my paycheck, so to speak.", "Dwarkesh Patel 01:19:27 You're the former Chief Economist of the IMF!", "Kenneth Rogoff 01:19:31 All right, but that’s economics. These are very political questions. Brexit ’s an example of democracy gone amok. What a dumb idea. I don’t know if Brexit was right or wrong. I feel like we’ll know in 50 years. But you shouldn’t be able to do it with a simple majority vote. You should need a two-thirds vote, or something like that.", "There’s a whole government department here with people specializing in what we should do. Actually, I think there are experiments. Washington State experimented with different voting choices . Maine did. There are these ideas out there, but we’re a long way from converging on anything.", "Dwarkesh Patel 01:20:17 If you think people are underweighting how big the debt issue is, are you especially long on countries which have a low debt-to-GDP ratio, like Australia or Norway?", "Kenneth Rogoff 01:20:28 It's not the only thing going on in the world, your debt is just one thing. Countries like Australia and Canada for example are what we call commodity exporters. They know that sometimes the sun shines and sometimes it's a dark winter. They don’t quite sell oil but they sell some coal, natural gas, and some oil. They understand things move around and that they need to save for a rainy day.", "Norway is in a whole other league. But yes, there are lots of factors to whether a country will do well. The lower-debt countries have less of a problem. But Canada and Australia face very volatile income streams because of commodities, so they tend to be more nervous about debt.", "By the way, they also have a lot of housing debt, in Canada for example that's been a big problem. But I’m bullish on the United States. Don’t misunderstand me. I’m not saying everyone should leave the United States and go to Canada, although my wife thinks that sometimes but for other reasons.", "When I was playing chess in the late 1960s, I was living by myself in Europe. Nixon got elected. I felt about Nixon the way I think a lot of people in your generation or at least millennials feel about Trump. I didn’t want to come back to the United States. So there are a lot of people who talk that way. But I think the United States is great.", "Countries that are smaller, that aren’t the reserve currency, that don’t have access to these deep pockets of domestic and foreign borrowing—and all of those countries fit into that framework, they need to be more careful.", "01:22:24 – The erosion of dollar dominance", "Dwarkesh Patel 01:22:24 Do you think this \" exorbitant privilege ,\" as you talk about it, could actually be bad for us in some ways? Maybe it incentivizes us to take on more debt than is wise. That it allows—or even incentivizes—us to take on more debt than is wise, especially if this isn’t a permanent advantage?", "We’re at the top, so we can take out cheap debt. But over time—if we lose reserve currency status, or even if it just weakens—we’ll have to refinance that debt at higher interest rates. So in the short term, we’re incentivized to behave in ways that aren’t sustainable in the long run. Is there a political economy explanation for why that might be bad for us?", "Kenneth Rogoff 01:23:08 I've heard that argument, but I basically think it's great for us. It's not just the government. It’s all of us who borrow less. Do we wish we were paying higher interest rates? Probably most people who are getting a mortgage right now feel like our interest rates are plenty high. They don't need to see them higher.", "I think with the exorbitant privilege, there are some drawbacks we don't need to get into. But it's basically incredibly fantastic if you owe $37 trillion as our government does, to be paying half a percent to a percent less. We're talking about hundreds of billions of dollars.", "You also have our ability to see what's going on everywhere. A lot of what our spying does is using our exorbitant privilege and the dollar network. Look at our sanctions. I mentioned, I was in my teens at the end of the 1960s when I didn't want to come back. One reason I didn’t want to come back was the Vietnam War was pretty terrifying. I had many friends get drafted. Their brains got fried by heroin even if they didn’t get killed.", "I'm not saying that we've solved all our wars with sanctions. But make no mistake, we have used that in place of military intervention a few times. So that’s been great. I think losing that, and not appreciating how important that is, is a terrible blunder that we might be making right now.", "Dwarkesh Patel 01:24:37 This is a very naive question. I know you address it at length in your book. I’ll ask the question in the most straightforward way and then you can explain what's going on. How should I think about the fact that we are basically giving the rest of the world pieces of paper and we're getting real goods and services in exchange?", "Sure, at a high level you can say that they're getting this liquidity or they're getting this network and that's what makes it worth it. But I don’t know, are we fundamentally getting away with something?", "Kenneth Rogoff 01:25:06 So just to note, the United Kingdom is not the reserve currency. They're not the dominant currency. They used to be, a hundred years ago. They look a lot like us now, with big current account deficits. That’s actually why Trump was able to strike a deal with them. Because they weren’t really running a surplus against us, anyway. They’re over-financialized, even more than we are.", "The core of the benefit we get is that we borrow by issuing safe assets—if you want to call our debt safe—and we invest in risky stuff. Charles Kindleberger , wrote one of the great books on crises . I had him as a professor at MIT. He called us bankers to the world . He said: “Yep, we’re running this deficit, they’re holding a lot of our Treasury bills, but we are making money hand over fist.” It’s the same thing as the equity premium where you hold stocks and it’s not always, but on average better than holding bonds. So that’s been very good.", "You have the fact that the dollar is very liquid, the markets are very liquid. Say you're a Silicon Valley firm and you're big enough to issue debt internationally. I don’t know if any Silicon Valley firms ever issue debt but say you do, people will buy it because it’s in dollars. If you're the same firm in France, forget it. They don’t want to hold euros. Even if you promise to pay in dollars, they’re not happy about it because your income isn't in dollars. So it’s been fantastic. This is something that's been debated.", "Stephen Miran was a Harvard student, PhD. He’s a very smart guy, head of Trump’s Council of Economic Advisers . He has made a clever argument that because everybody loves our currency, it makes us less competitive in everything else. He argues that it partly hollowed out our manufacturing and that’s terrible.", "There’s a little bit of truth to that. First of all, the dollar goes like this, it’s not always high. Second, I mentioned the United Kingdom is kind of in the same boat. We’re good at a lot of things. We’re good at tech. Tech makes the dollar stronger, make no mistake. We’re good at biotech, agriculture. We’re good at a lot of other things that make the dollar high. And if you’re good at these things, it’s harder to be good at manufacturing. It bids up the cost of everything.", "On the whole, we’re performing this banking function. That’s really the big thing. It’s been going on since the ’50s and ’60s. That’s the core of our so-called exorbitant privilege.", "Dwarkesh Patel 01:28:07 There’s a really interesting book by Charles Mann . I think it's called 1493 . It’s about how during the Ming Dynasty in 17th-century China, they kept issuing different paper currencies and it was super unstable. People in China wanted a reliable medium of exchange. So tens of thousands of tons of silver from the New World, from the Spanish, would be exchanged for enormous amounts of real goods exported from China.", "So from the Spanish perspective, they’re getting shiploads and shiploads of real goods, and all they’re giving up is this medium of exchange. I don’t know how analogous that is to the current situation.", "Kenneth Rogoff 01:28:47 There are countries like Ecuador and others that dollarize. They literally use the dollar and they need dollars. We’re able to have them hold dollars. It’s not silver, but we print it and they pay very low interest rates. They’re holding Treasury bills, not physical dollars. Yeah, it’s fantastic for us. We definitely pay less on our debt because of that.", "That’s a fascinating example you bring up. The Chinese actually invented the printing press. They invented paper currency way before the Europeans. But then, what do you know, they kept printing a lot of it and had a lot of inflation. I haven’t read that book, but it’s a great example. I knew they were using silver, but that number is bigger than I had heard.", "Dwarkesh Patel 01:29:38 Final question. A big part of your book discusses the different countries which seemed at different times to be real competitors to America. You talk about the Soviet Union, Japan, China today. We've discussed why they didn’t pan out.", "We can go into the details on any one of those examples, but in the big picture is there some explanation that generalizes across all these examples of why America has been so competitive? Why has it been so hard to displace?", "Kenneth Rogoff 01:30:09 It’s not just that we’ve stayed on top, we’ve just gone like this. Remember, in the 1970s, Europe actually peeled away from the dollar bloc . But the rest of the world started globalizing. China globalized. And eventually the Soviet Union, and the dollar just colonized all these places. They were all holding dollar debt, using dollars. It’s much bigger than even the British pound was when the sun never set on the British Empire.", "So it’s been amazing and surprising to people like myself. If you read what everyone was saying at the time, it was just that it kept going up. That our share of everything kept getting bigger and bigger. Definitely, to some extent, we’ve been lucky. We talked about Japan. I think China made a big mistake by sticking to the dollar so long. Europe should have delayed bringing Greece into the Euro, because their crisis wouldn’t have been so bad. So we’ve been fortunate with blunders by our opposition.", "We’ve done some good things. But I think the thing Americans forget is that we have been lucky a lot of times. I worry our luck is wearing thin. I quote a chess player—the great Bent Larsen , who was number two to Bobby Fischer when I was playing. He was asked, “Would you rather be lucky or good in a chess game?” And he said, “Both.” So I think Americans forget. They know we’re good and we are good. We’ve talked about dynamism, this secret sauce that we’ve had so far. But I think we’ve also been lucky. If you ran it all again, it didn’t have to go the same way.", "Dwarkesh Patel 01:32:01 It’s a very scary kind of luck. If it’s so easy for these other countries to make some mistake that causes them to totally fall behind, it should update you in favor of the idea that, in general, it’s easy for a country to get itself in a rut. It’s like the Fermi estimate thing. The fact that you don’t see other alien civilizations is actually very scary, because it suggests that there’s some kind of filter which makes it really hard to keep your civilization going.", "Kenneth Rogoff 01:32:29 I hope not, but we’ll see. It’s certainly been amazing how the dollar’s done and how the US has done. I hope we continue, but we are doing a lot of things right now… I don’t think Trump is the cause of the dollar being in gentle decline. That’s just wrong. I think it would’ve happened with Harris winning. But he is the president at the moment, and things like Liberation Day … I’ve talked to tech people who think it’s just brilliant. I understand that. We can debate it. I’m happy to debate it with them.", "We look at the rule of law. Okay, I’m sitting at Harvard University. Naturally it feels that way . But also we talked about the president being able to remove all the independent agencies. It used to be that if you were a foreign investor and you invested in the United States, you thought you’d get your money back. Maybe the stock would’ve gone down. Maybe the real estate you bought would’ve gone down. But you’d get paid. I think we were more exceptional than most about that. That’s in doubt now. There’s no question.", "The book’s about a lot of things besides exorbitant privilege, the whole arc of the US. But when I was telling people about the book, “I don’t know, I’m looking at the numbers, I’m looking at what China’s doing, I’m reading about Europe and their central bank digital currency. I think we’re going downhill.” I showed it to academics, I showed it to financial people, I showed it to tech people. They said, “You’re nuts.” They didn’t want to think about it. I don’t know if I’m right. But I think it’s worth thinking about.", "Dwarkesh Patel 01:34:16 When I was in China, I met up with some venture capitalists there and they were quite depressed in general. Even founders say it’s hard to raise money. I was asking them why, and they said investors don’t want to invest because even if you invest in the next Alibaba, who’s to say the government doesn’t cancel the IPO ?", "Kenneth Rogoff 01:34:36 They’re in trouble. Yeah. I think Europe has a bright future in this context, of being the team that doesn’t have as many injured players. But yeah, China… it’s not going to be forever, but I think for five or ten years they’re going to stay in trouble.", "Dwarkesh Patel 01:34:53 Okay. Thank you so much for sitting down with me and also answering all my questions. I’m sure there are many misconceptions and naive questions and so forth. I appreciate your patience and you educating me on this topic.", "Kenneth Rogoff 01:35:03 No, it’s an honor to be on your famous podcast. I heard from so many young people when I told them I was talking to you. They were like, “You're with Dwarkesh? Just fly back from here! Do whatever you need to do!”", "So I’m glad you were able to come here. It’s really been interesting, and I’m glad to learn more about everything you’re doing.", "Dwarkesh Patel 01:35:28 The honor’s mine. It was great to be able to travel here and speak with you." ]
[ "https://en.wikipedia.org/wiki/Kenneth_Rogoff", "https://amzn.to/43zXoQE", "https://en.wikipedia.org/wiki/International_Monetary_Fund", "https://en.wikipedia.org/wiki/Premier_of_China", "https://en.wikipedia.org/wiki/Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Central_Party_School", "https://en.wikipedia.org/wiki/Xi_Jinping", "https://en.wikipedia.org/wiki/China_Development_Forum", "https://www.dwarkesh.com/p/mark-zuckerberg-2", "https://www.brookings.edu/articles/the-long-shadow-of-a-fiscal-expansion/#:~:text=In%202009%20and%202010%2C%20China,on%20behalf%20of%20local%20governments.", "https://en.wikipedia.org/wiki/Hu_Jintao", "https://www.investopedia.com/updates/purchasing-power-parity-ppp/", "https://www.dwarkesh.com/p/notes-on-china", "https://en.wikipedia.org/wiki/Chongqing", "https://en.wikipedia.org/wiki/Chengdu", "https://en.wikipedia.org/wiki/Hangzhou", "https://en.wikipedia.org/wiki/Mount_Emei", "https://en.wikipedia.org/wiki/Chinese_city_tier_system", "https://en.wikipedia.org/wiki/Rochester,_New_York", "https://en.wikipedia.org/wiki/Rouen", "https://en.wikipedia.org/wiki/Mount_Emei#Buddhist_architecture_on_Emei", "https://en.wikipedia.org/wiki/S%C3%A3o_Paulo", "https://en.wikipedia.org/wiki/Mumbai", "https://data.worldbank.org/indicator/NY.GDS.TOTL.ZS?locations=CN", "https://en.wikipedia.org/wiki/Social_Security_(United_States)", "https://en.wikipedia.org/wiki/MIM-104_Patriot", "https://www.csis.org/analysis/china-dominates-shipbuilding-industry", "https://en.wikipedia.org/wiki/Brookings_Institution", "https://en.wikipedia.org/wiki/Richard_N._Cooper", "https://en.wikipedia.org/wiki/TGV", "https://en.wikipedia.org/wiki/High-speed_rail_in_China", "https://www.cnbc.com/2025/06/07/chinas-central-bank-buys-gold-for-seventh-straight-month-in-may.html", "https://en.wikipedia.org/wiki/SWIFT", "https://en.wikipedia.org/wiki/Clearing_House_Interbank_Payments_System", "https://en.wikipedia.org/wiki/Renminbi", "https://en.wikipedia.org/wiki/Central_bank_digital_currency#:~:text=A%20CBDC%20is%20a%20digital,and%20a%20store%20of%20value.", "https://en.wikipedia.org/wiki/Bank_of_Japan", "https://www.jstor.org/stable/2644171", "https://en.wikipedia.org/wiki/1997_Asian_financial_crisis", "https://en.wikipedia.org/wiki/Export-oriented_industrialization", "https://en.wikipedia.org/wiki/Jiang_Zemin", "https://en.wikipedia.org/wiki/Plaza_Accord", "https://en.wikipedia.org/wiki/Carmen_Reinhart", "https://en.wikipedia.org/wiki/Satoshi_Sumita", "https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model", "https://en.wikipedia.org/wiki/Emergency_Banking_Act_of_1933", "https://amzn.to/4kX1tUz", "https://en.wikipedia.org/wiki/Greek_government-debt_crisis", "https://en.wikipedia.org/wiki/Lost_Decades", "https://en.wikipedia.org/wiki/Financial_repression", "https://en.wikipedia.org/wiki/Sovereign_default", "https://www.investopedia.com/terms/h/hyperinflation.asp", "https://www.bls.gov/data/inflation_calculator.htm", "https://en.wikipedia.org/wiki/Winston_Churchill", "https://en.wikipedia.org/wiki/Austerity", "https://en.wikipedia.org/wiki/Ezra_Klein", "https://amzn.to/3TeBQ5M", "https://en.wikipedia.org/wiki/Ben_Bernanke", "https://fraser.stlouisfed.org/files/docs/meltzer/bermac95.pdf", "https://en.wikipedia.org/wiki/Milton_Friedman", "https://en.wikipedia.org/wiki/Collapse_of_Silicon_Valley_Bank", "https://www.brookings.edu/articles/why-is-the-federal-reserve-independent-and-what-does-that-mean-in-practice/", "https://www.politico.com/news/2025/05/22/supreme-court-fed-powell-trump-00366526", "https://en.wikipedia.org/wiki/Jerome_Powell", "https://www.jstor.org/stable/1885679", "https://abnormalreturns.com/2010/02/18/everybody-talks-their-book-everybody/", "https://en.wikipedia.org/wiki/Scott_Bessent", "https://www.scotusblog.com/2025/05/supreme-court-allows-trump-to-remove-agency-heads-without-cause-for-now/", "https://en.wikipedia.org/wiki/Recep_Tayyip_Erdo%C4%9Fan", "https://libertylensecon.substack.com/p/a-word-count-analysis-of-aea-and", "https://www.hoover.org/?olc=33556&utm_id=579239288355&utm_term=hoover%20institution&utm_campaign=na_brand&utm_source=google&utm_medium=search_paid&utm_content=brand&hsa_acc=5955820823&hsa_cam=11891008220&hsa_grp=120743471012&hsa_ad=579239288355&hsa_src=g&hsa_tgt=kwd-377197029805&hsa_kw=hoover%20institution&hsa_mt=e&hsa_net=adwords&hsa_ver=3&gad_source=1&gad_campaignid=11891008220", "https://www.aeaweb.org/conference/", "https://en.wikipedia.org/wiki/Great_Depression", "https://www.investopedia.com/terms/c/command-economy.asp#:~:text=The%20command%20economy%2C%20also%20known,is%20nonexistent%20or%20severely%20limited.", "https://en.wikipedia.org/wiki/Euro_area_crisis", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://www.investopedia.com/terms/d/deflation.asp", "https://en.wikipedia.org/wiki/Federal_Reserve", "https://www.investopedia.com/articles/investing/100715/breaking-down-federal-reserves-dual-mandate.asp#:~:text=The%20Federal%20Reserve's%20dual%20mandate%20is%20to%20achieve%20maximum%20employment,down%20or%20growing%20too%20fast.", "https://en.wikipedia.org/wiki/John_Maynard_Keynes", "https://en.wikipedia.org/wiki/Macroeconomics", "https://www.investopedia.com/terms/k/keynesianeconomics.asp", "https://en.wikipedia.org/wiki/2008_financial_crisis", "https://en.wikipedia.org/wiki/COVID-19_pandemic", "https://www.investopedia.com/terms/r/real-value.asp", "https://www.investopedia.com/articles/investing/031815/what-zero-interestrate-policy-zirp.asp#:~:text=A%20zero%20interest%20rate%20policy%20(ZIRP)%20occurs%20when%20a%20central,credit%20by%20firms%20and%20individuals.", "https://www.investopedia.com/terms/n/negative-interest-rate.asp#:~:text=Negative%20interest%20rates%20are%20a,a%20negative%20interest%20rate%20environment.", "https://amzn.to/4n1LMgX", "https://en.wikipedia.org/wiki/Daron_Acemoglu", "https://treasurydirect.gov/marketable-securities/tips/", "https://www.cnbc.com/quotes/US10YTIP", "https://www.investopedia.com/terms/v/volatility.asp", "https://www.investopedia.com/terms/o/overnightrate.asp", "https://www.nytimes.com/2025/03/04/world/europe/eu-defense-spending.html", "https://www.amazon.com/FOUNDATIONS-INTERNATIONAL-MACROECONOMICS-SEP-12-1996-Sep-12-1996/dp/B009CN1YTW", "https://en.wikipedia.org/wiki/Zbigniew_Brzezinski", "https://en.wikipedia.org/wiki/Mika_Brzezinski", "https://en.wikipedia.org/wiki/Jimmy_Carter", "https://en.wikipedia.org/wiki/Merrill_(company)", "https://en.wikipedia.org/wiki/Fiscal_council", "https://www.imf.org/external/np/pp/eng/2013/071613.pdf", "https://x.com/juliaonjobs", "https://en.wikipedia.org/wiki/George_Osborne", "https://en.wikipedia.org/wiki/Chancellor_of_the_Exchequer", "https://en.wikipedia.org/wiki/Congressional_Budget_Office", "https://en.wikipedia.org/wiki/Brexit", "https://mynorthwest.com/mynorthwest-politics/ranked-choice-voting/4035948", "https://en.wikipedia.org/wiki/Exorbitant_privilege", "https://en.wikipedia.org/wiki/Charles_P._Kindleberger", "https://amzn.to/43ReQig", "https://en.wikipedia.org/wiki/Equity_premium_puzzle", "https://en.wikipedia.org/wiki/Stephen_Miran", "https://en.wikipedia.org/wiki/Council_of_Economic_Advisers", "https://en.wikipedia.org/wiki/Charles_C._Mann", "https://amzn.to/4jNAUQV", "https://en.wikipedia.org/wiki/Ming_dynasty", "https://en.wikipedia.org/wiki/European_Monetary_System", "https://en.wikipedia.org/wiki/Bent_Larsen", "https://en.wikipedia.org/wiki/Bobby_Fischer", "https://en.wikipedia.org/wiki/Fermi_paradox", "https://en.wikipedia.org/wiki/Liberation_Day_tariffs", "https://en.wikipedia.org/wiki/Education_policy_of_the_second_Donald_Trump_administration#Actions_against_universities", "https://www.cnn.com/2020/11/03/tech/ant-ipo-postponed-beijing-jack-ma#:~:text=Ant%20Group's%20highly%20anticipated%20IPO,largest%20share%20sale%20in%20history." ]
https://www.dwarkesh.com/p/kenneth-jackson
Kenneth T. Jackson - Robert Moses, Hero of New York?
[ "0:00:00 Preview + Intro", "Kenneth Jackson 0:00:00", "Robert Moses represented a past when we wanted to build bridges and superhighways. We're not building superhighways now. We're not building vast bridges like Moses built all the time. Essentially all the big roads, all the bridges, all the parks, the United Nations, Lincoln Center, the World's Fairs of 1939 and 1964, and hundreds of other things he built.", "Had Robert Moses not lived, not done what he did, New York would have followed the trail of maybe Detroit. I think The Power Broker was the best book I ever read and in broad strokes, it's correct. Robert Moses had more power than any urban figure in American history. He built incredible monuments. He was ruthless, arrogant and honest.", "Dwarkesh Patel 0:00:54", "I am really, really excited about this one. Today I have the pleasure of speaking with Professor Kenneth T. Jackson about the life and legacy of Robert Moses. Professor Jackson is the preeminent historian on New York City. He was the director of the Herbert H. Lehman Center for American History and the Jacques Barzun Professor Emeritus of History at Columbia University, where he has also shared the Department of History. We will be discussing Robert Moses. Professor Jackson is the author and editor of Robert Moses and the Modern City: The Transformation of New York .", "Professor Jackson, welcome to the podcast.", "Kenneth Jackson 0:01:37", "Thank you for having me.", "Dwarkesh Patel 0:01:40", "Many people will have heard of Robert Moses and be vaguely aware of him through the popular biography of him by Robert Caro, The Power Broker , but most people will not be aware of the extent of his influence on New York City. Can you give a summary of the things he was able to get built in New York City?", "Kenneth Jackson 0:02:03", "One of the best comparisons I can think of is by Caro himself, when he compared him to Christopher Wren in London, he said, if you would see his monuments and look around it's almost easier to talk about what Moses didn't do than what he did do. Essentially all the big roads, all the bridges, all the parks, the United Nations, the Lincoln Center, the World's Fairs of 1939 and 1964, and hundreds of other things he built. He didn't actually do it with his own two hands, but he was in charge. He got it done.", "Robert Caro wrote a really great book. I think the book was flawed because I think Caro only looked at Moses's own documents and Moses had a very narrow view of himself. He thought he was a great man but he didn't pay any attention to what was going on in LA very much, for example. But clearly, by any standard, he's the greatest builder in American history. There's nobody really in second place.", "And not only did he build and spend this vast amount of money, he was in power for a long time, a half century more or less. He had a singular focus. He was married, but his personal life was not important to him. He did it without scandal, even Caro admits that he really died with less than he started with. He wanted power, and boy did he have power.", "Technically he was subservient to governors and mayors, but since he built so much and since he had multiple jobs, that was part of his secret. He had as many as six, eight, ten different things at once. If the mayor fired him or got rid of him, he had all these different ways which he was in charge of that the mayor couldn't. So people were afraid of him, and they also respected him. He was very smart and he worked for a dollar a year. So what are you going to get him for?", "As Caro says, nobody is ready to be compared with Robert Moses. In fact, compare him with an act of nature. In other words, the person you can compare him with is God. That's the person. He put the rivers in. He put the hills in. He put the island in. Compare that to what Moses did. No other person could compare to that. That's a little bit of exaggeration, but when you really think about Robert Moses and you read the Power Broker, you are just stunned by the scope of his achievement.", "Even beyond New York, when we think of the interstate highway system, which starts in 1954/55/56, and which is 40- thousand miles of interstate highways, those were built by Moses' men, people who had in their young life had worked with the parkways and expressways in and around New York City. So they were ready to go. Moses also worked both outside and inside New York City. He achieved so much. You need to understand that it's not easy to get things done in New York. It's very, very dense. Twice as dense as any place in the United States and full of neighborhoods that feel like and are little cities that don't want change, even today. A place like Austin, for example, is heavy into development, not New York. You want to build a tall building in New York, you have got to fight for it. And the fact that he did so much in the face of opposition speaks a lot to his methods. And how did Moses do what he did? That is a huge question because it isn't happening anymore, certainly not in New York City.", "Dwarkesh Patel 0:06:22", "And that's really why I actually wanted to talk to you and talk about this book because The Power Broker was released in 1974 and at the time New York was not doing well, which is to put it mildly. But today the crisis we face is one where we haven't built significant public works in many American cities for decades. So it's interesting to look back on a time when we could actually get a lot of public works built very quickly and very efficiently and see if maybe we got our characterization of the people at the time wrong. And that's where your 2007 book comes in. So I'm curious, how was the book received 40 years after the Power Broker was released? What was the reception like? How does the intellectual climate around these issues change in that time?", "Kenneth Jackson 0:07:18", "The Power Broker is a stunning achievement, but you're right the title is The Power Broker: Robert Moses and the fall of New York. He's thinking that in the 1970s, which in New York's 400-year history, we think of the 1970s as being the bottom. City was bankrupt, crime was going up, corruption was all around. Nothing was working very well.", "My argument and the subtitle of the 2007 book or that article is Robert Moses and the rise of New York. Argument that had Robert Moses not lived, not done what he did, New York would have followed the trail of maybe Detroit and St. Louis and Cincinnati and Pittsburgh and most cities in the Northeast and Midwest, which declined.", "New York City really hasn't declined. It's got more people now than it ever did. It's still a number one city in the world by most of our standards. It's the global leader, maybe along with London. At one point in the 1980s, we thought it might be Tokyo, which is the largest city in the world, but it's no longer considered competitive with New York. I say London too because New York and London are kind of alone at the top.", "But I think Robert Moses' public works, activities, I just don't know that you could have a New York City and not have expressways. I don't like the Cross Bronx expressway either and don't want to drive on it. But how can you have a world in which you can't go from Boston to San Francisco? You had to have it. You have to have some highways and Caro had it exactly wrong. He talked about Moses and the decline of public transit in New York. Actually what you need to explain in New York is why public transit survived in New York, wherein most other American cities, the only people who use public transit are the losers. The old, the disabled, the poor and stuff like that. In New York City, rich people ride the subway. It's simply the most efficient way to get around and the quickest. Some of the things in that question need to be turned on its head. How did he get it done? How did he do it without scandal? When you think about how the world is in our time, when everything has either a financial scandal or a sexual scandal attached to it, Moses didn't have scandals. He built the White Stone Bridge, for example, which is a gigantic bridge connecting the Bronx to Queens. It's beautiful. It was finished in the late 1930s on time and under budget, actually a little earlier.", "There's no such thing as that now. You're going to do a big public works project and you're going to do it on time? And also he did it well. Jones Beach, for example, for generations has been considered one of the great public facilities on earth. It's gigantic. And he created it. I know people will say it's just sand and water. No, no, it's a little more complicated than that. So everything he did was complicated.", "I think Robert Caro deserves a lot of credit for doing research on Moses, his childhood, his growing up, his assertion that he's the most important person ever to live in and around New York. And to just think of Franklin Roosevelt and all the people who lived in and around New York. Moses is in a category by himself, even though most Americans have never heard of Robert Moses. That book made him famous. And I think his legacy will continue to evolve and slightly improve as Americans realize that it's hard to build public works, especially in dense urban environments. And he did it.", "0:11:13 How Moses Gained Power", "Dwarkesh Patel 0:11:33", "Yeah. There's so much to talk about there. But one of the interesting things from the Power Broker is Caro is trying to explain why governors and mayors who were hesitant about the power that Moses was gaining continued to give him more power. And there's a section where he's talking about how FDR would keep giving him more positions and responsibilities, even though FDR and Moses famously had a huge enmity. And he says no governor could look at the difficulty of getting things built in New York and not admire and respect Moses' ability to do things efficiently, on time, under budget, and not need him, essentially.", "Speaking of scandal, you talked about how he didn't take a salary for his 12 concurrent government roles that he was on. There's a very arresting anecdote in the Power Broker where I think he's 71 and his daughter gets cancer and for the first time, he had to accept a salary for working on the World's Fair because he didn't have enough. He was the most powerful person in New York, and he didn't have enough money to pay for his daughter's cancer. Even Caro himself says that a lot of the scandals that came later in his life, they were just kind of trivial stuff, an acre of Central Park or the Shakespeare in the park. The things that actually took him down were just trivial scandals.", "Kenneth Jackson 0:13:07", "In fact, when he finally was taken down, it took the efforts of a person who was almost considered the second most powerful person in the United States, David Rockefeller, and the governor of New York, both of whom were brothers, and they still had lie to make Moses get out of power in 1968. But it was time. And he exercised power into his 70s and 80s, and most of it was good. The bridges are remarkable. The bridges are gorgeous, they're incredible. The Throgs Neck Bridge, the Verrazano Narrows Bridge, the Triborough Bridge, they're really works of art. He liked to build things you could see and I think the fact that he didn't take money was important to it.", "I wouldn't say he's not wealthy in New York terms, but he was not a poor person. He went to Yale as a Jewish person in the early 20th century, that's fairly unusual and he lived well. We can't say he's poor, but I think that Caro was right in saying that what Moses was after in the end was not sex and not money, it was Power. He wanted power. And boy, did he get it.", "Dwarkesh Patel 0:14:37", "There's a good review of the book from Phillip Lopate and he made a good point, which was that the connotation of the word power is very negative, but it's a modern thing really to have this sort of attitude towards power that somebody who's just seeking it must necessarily have suspicious motivations.", "If Moses believed that he was just much more effective at building public works for the people that live in New York, was it irrational of him or was it selfish of him to just desire to work 14 hour days for 40 years on end in order to accumulate the power by which he could build more public works? So there's a way of looking at it where this pursuit of power is not itself troubling.", "Kenneth Jackson 0:15:36", "First of all, I just need to make a point that it's not just New York City. Jones Beach is on Long Island. A lot of those highways, the Northern State Parkway, the Southern State Parkway are built outside the city. And also big projects, the Power Authority in upstate New York. He also was a consultant around the world in cities and transportation. So his influence was really felt far beyond New York City. Of course, New York City is so big and so important.", "I think that we might also want to think about the counterfactual argument. When I was in the Air Force, we lived next door to a couple from New York City. We didn't know New York City at the time and I can't remember whether she or he was from the Bronx or Brooklyn, but they had they made us understand how incredibly much he must have loved her to go to Brooklyn or the Bronx to see her and pick her up for days. You couldn't get there. It would take you three hours to go from the Rockaways in Brooklyn to somewhere in the Northern Bronx. But the roads that Moses built, I know they’re jammed at rush hour, but right this minute on a Sunday, you can whiz around New York City on these expressways that Moses built. It's hard to imagine New York without the expressways.", "The only thing Moses didn't do was the subway, and many people have criticized him because the subways deteriorated between the time they were built in the early part of the 20th century in 1974 when Caro wrote to Power Broker. But so had public transit systems all over the United States. And the public transit system in New York is now better than it was 50 years ago. So that trajectory has changed. And all these other cities, Pittsburgh used to have 600,000 people now it has 300,000, Cleveland used to have 900,000 and now it's below five, Detroit used to have 2 million now it's 600 thousand, St. Louis used to have 850,000 now it's 300,000. The steep drop in all these other cities in the Midwest and Northeast, even Washington and even Boston and Philadelphia. They all declined except New York City, which even though it was way bigger than any of them in 1950 is bigger now than it was then. More people crammed into this small space. And Moses had something to do with that.", "0:18:22 Would NYC Have Fallen Without Moses?", "Dwarkesh Patel 0:18:22", "You write in the book — “ Had the city not undertaken a massive program of public works between 1924 and 1970, had it not built the arterial highway system and had it not relocated 200,000 people from old law tenements to new public housing projects, New York would not have been able to claim in the 1990s that it was a capital of the 20th century. ”", "I would like to make this connection more explicit. So what is the reason for thinking that if New York hadn't done urban renewal and hadn't built the 600 miles of highways that Moses built there, that New York would have declined like these other cities in the Northeast and the Midwest?", "Kenneth Jackson 0:19:05", "You could argue, first of all, that New York is not like other cities. It's a world city and has been and what happens to the rest of the United States is, I accept a little bit of that, but not all of it. You say, New York is just New York. And so whatever happens here is not necessarily because of Moses or different from Detroit, but I think it's important to realize its history has been different from other American cities.", "Most American cities, especially the older cities, have been in relative decline for 75 years. And in some ways New York has too. And its relative dominance of the United States is less now than it was because there's been a shift south and west in the United States. But the prosperity of New York, the desire of people to live in it, and after all, one of its problems is it's so expensive. Well, one reason it's expensive is that people want to live there. If they didn't want to live there, it would be like Detroit. It'd be practically free. You know what I mean? So there are answers to these issues.", "Moses' ways, I think, were interesting. First of all, he didn't worry about legalities. He would start an expressway through somebody's property and dare a judge to tell him to stop after the construction had already started. Most of the time, Moses was kind of like Hitler. I don't mean to say he was like Hitler but what I mean is you have such confidence. You just do things and dare other people to change it. I'm going to do it. Most people don't have that. I think there's a little bit of that in Trump, but not as much. I don't think he has nearly the genius or brains of Moses. But there's something to self-confidence. There's something to having a broad vision.", "Moses liked cities, but he didn't like neighborhoods or people. In other words, I don't think he loved New York City. He really thought everybody should live in the suburbs and drive cars and that was the world of the future and he was going to make that possible. He thought all those old law tenements in New York, which is really anything built before 1901, were slums. They didn't have hot and cold water. They often didn't have bathrooms. He thought they should be destroyed and that his vision of  high-rise public housing was an improvement.", "Now around the United States, we don't think these high-rise public housing projects are so wonderful but he thought he was doing the right thing. And he was so arrogant, he didn't listen to people like Jane Jacobs, who fought him and said, “You're saying Greenwich Village is a slum? Are you kidding me?” He thought it was a slum. Go to Greenwich Village today. Try to buy anything for under a million dollars. It doesn't exist.", "He saw old things, old neighborhoods, walking, and thought they were hopelessly out of date. And he was wrong. He was wrong about a lot of his vision. Now we understand that and all around the country, we're trying to revitalize downtowns and reduce our dependence on fossil fuels and gasoline and cars. But Moses didn't see the world that way. It's interesting. He never himself drove a car. Can you believe that the man who had more influence on the American car culture, probably even more than Henry Ford, himself was always driven. He was chauffeured.", "In fact, he was so busy that Caro talks about him as having two limousines behind each other. He would have a secretary in one and he would be dealing with business and writing letters and things like this. And once she had all she could do they would pull off to the side of the road. She would get out of his car and the car that was following would discharge the secretary in that car. They would switch places and the fresh secretary would get in the backseat, Moses and her would continue to work. And the first secretary would go to type up whatever she had to do.", "He worked all the time. He really didn't have much of a private life. There are people like Robert Moses, but not so many, and he achieved his ideal. I think that there are so many ironies there. Not only did he not drive himself, he didn't appreciate the density of New York, which many people now love, and it's getting more dense. They're building tall buildings everywhere. He didn't really appreciate the diversity. He didn't care about that, but it worked. And I just think we have to appreciate the fact that he did what was impossible, really impossible, and nobody else could have done what he did.", "If he hadn't done it then, he sure as heck wouldn't be able to do it in the 21st century, when people are even more litigious. You try to change the color of a door in New York City, you try to do something positive, like build a free swimming pool, fix up an old armory and turn it into a public place, there'll be people who'll fight you. I'm not kidding. Moses didn't care. When he built the Cross Bronx Expressway he said, “I'm going to do this.” which in some ways was horrible what he did to these people, but Caro mischaracterizes what happened. It's a dense working class —let's call it Jewish neighborhood—in the early 1950s and Moses decides that we need a big highway going right through it. He sent masses of people letters that said “Get out in 90 days.” He didn't mean 91 days. He didn't mean let's argue about it for four years. Moses meant the bulldozers will be bulldozing. We just don't have that kind of attitude anymore. It's kind of funny now to think back on it, but it wasn't funny to the people who got evicted.", "But again, as I say, it's hard to imagine New York City without the Cross Bronx Expressway. They tore down five blocks of dense buildings and built this road right through it. And they didn't worry about where they were going to rehouse them. I mean, they did, but it didn't work. Now it's so busy, it's crowded all the time. So what does this prove? That we need more roads? But you can't have more roads in New York because if you build more roads, what are you going to do with the cars? Right now, the problem is there are so many cars in the city, there's nothing to do. It's easy to get around in New York, but what are you going to do with the car? The car culture has the seeds of its own destruction. Just parking cars or putting them in a garage is a problem.", "Moses didn't foresee that. He thought that you're all going to live in the Long Island suburbs or Westchester suburbs or New Jersey suburbs, park your car in your house and come into the city to work. Now, the city is becoming a place to live more than a place to work. So what they're doing in New York as fast as they can is converting office buildings into residential units. He would never have seen that people with options would want to live in the city. That they would reject a single family house and choose a high rise and choose the convenience of going outside and walking to a delicatessen over driving to a grocery store. It's a world he never saw.", "0:27:31 Moses the Startup Founder?", "Dwarkesh Patel 0:27:31", "Yeah, the thing you pointed out earlier about him having the two limousines and then the enormous work ethic and then the 90 day eviction. I'm a programmer and I can recognize this trope immediately. Robert Moses was a startup founder, but in government. That attitude is like Silicon Valley. We all recognize that.", "Kenneth Jackson 0:27:54", "And I think we should go back to what you said earlier about why was it that governors or mayors couldn't tell him what to do? There are many scenes in the Power Broker where he will go to the mayor who wants to do something else and Moses would say, “Damn it.” and throw his pages on the desk and say, “Sign this. This is my resignation. And I'm out of here”", "Mayors and governors love to open bridges and highways and do it efficiently and beautifully. Moses could do that. Moses could deliver. And the workers loved him because he paid union wages, good wages to his workers and he got things done. Things like more than 700 playgrounds and it wasn't just grand things. Even though people criticize the 1964 World's Fair as a failure and financially it was a failure, but still. Tens of millions of people went there and had a good time. Even some of the things that were supposedly failures, failures according to whom? Failures according to the investment banker, maybe, but not to the people who went there.", "Dwarkesh Patel 0:29:20", "Right, yeah. And the point about the governors and mayors needing him, it was especially important to have somebody who could work that fast.  If you're going to get reelected in four years or two years, and want to be there for the opening you need somebody who can get public works done faster than they're done today.", "Kenneth Jackson 0:29:36", "And it's important to realize that Moses did try public office once. I think it's true that he lost by more than anybody in the history of New York. He was not an effective public speaker. He was not soft and friendly and warm and cuddly. That's not Robert Moses. The voters rejected him. But the people who had power and also Wall Street, because you had to issue bonds.", "One of the ways that Moses had power was he created this thing called the Triborough Bridge and Tunnel Authority to build the Triborough Bridge. Now, if in Portland, Oregon, you want to build a bridge or a road, you issue a couple hundred million dollars worth of bonds to the public and assign a value to it. Interest rate is paid off by the revenue that comes in from the bridge or the road or whatever it is.", "Normally you would build a public works and pay for it itself on a user fees. And when the user fees paid it off, it ended. But what Moses, who was called the best bill drafter in Albany, which was a Moses term, he said he was somewhere down in paragraph 13, Section G, say, “And the chairman can only be removed for cause.” What that meant was when you buy a bond for the Traverse Bridge or something else, you're in a contract, supported by the Supreme Court. This is a financial deal you're making with somebody. And part of the contract was the chairman gets to stay unless he does something wrong. Moses was careful not to do anything wrong. And it also would continue. You would get the bond for the Traverse Bridge, but rather than pay off the Traverse Bridge, he would build another project. It would give him the right to continually build this chain of events. So he had this massive pot of money from all these initially nickels and dimes. He made a lot of money, the 30s and 40s and 50s and 60s, to spend more money and build more bridges and build more roads. And that's where he had his power.", "The Wall Street, the big business loved him because they're issuing the bonds. The unions loved him because they're paying the investors. Now what Caro says is that Moses allowed the investors an extra quarter or half percent on bonds, but they all sold out. So everybody was happy. Was that crooked? It wasn't really illegal. But it's the way people do that today. If you're issuing a bond, you have to figure out what interest am I going to pay on this that will attract investors now.", "0:32:34 The Case Against Moses Highways", "Dwarkesh Patel 0:32:34", "And the crucial thing about these tales of graft is that it never was about Moses trying to get rich. It was always him trying to push through a project. Obviously that can be disturbing, but it is a completely different category of thing, especially when you remember that this was a corrupt time in New York history. It was like after Tammany Hall and so on. So it's completely different from somebody using their projects to get themselves rich.", "I do want to actually talk in more detail about the impact of these roads. Obviously the current system we have today where we just kind of treat cities as living museums with NIMBYism and historical preservation, that's not optimal. But there are examples, at least of Caro's in that chapter on the one mile, about Moses just throwing out thousands of people carelessly and how he could have diverted the cross Bronx expressway one mile and prevented thousands of people from getting needlessly evicted.", "I'm just going to list off a few criticisms of his highway building and then you can respond to them in any order you want. One of the main criticisms that Caro makes is that Moses refused to add mass transit to his highways, which would have helped deal with the traffic problem and the car problem and all these other problems at a time when getting the right of way and doing the construction would have been much cheaper. He just refused to do that because of his dislike for mass transit. And also the prolific building of highways contributed to urban sprawl, it contributed to congestion, it contributed to neighborhoods getting torn apart if a highway would cross them.", "So a whole list of criticisms of these highways. I'll let you take it in any order you want.", "Kenneth Jackson 0:34:27", "Well first of all, Moses' response was, I wasn't in charge of subways. So if you think the subways deteriorated or didn't build enough, find out who was in charge of them and blame that person. I was in charge of highways and I built those. So that's the first thing.", "Dwarkesh Patel 0:34:41", "On that particular point it is true that he wasn't in charge of mass transit, but also he wasn't in charge of roads until he made himself responsible for roads, right? So if he chose to, he could have made himself responsible for mass transit and taken care of it.", "Kenneth Jackson 0:34:56", "Maybe, maybe. Although I think the other thing about it is putting Moses in a broader historical concept. He was swimming with the tide of history. In other words, history when he was building, was building Ford Motor Company and General Motors and Chrysler Corporation and building cars by the millions. The automobile industry in the United States was huge. People thought any kind of rail transit was obsolete and on the way out anyway so let's just build roads. That's what the public wanted. He built what the public wanted. Looking historically, I don't think he did the right thing, but we needed to join the 20th century. New York could have stayed as a distinctly different kind of place where everybody walks. I just don't think it would have been the same kind of city because there are people who are attached to their cars in New York. The sprawl in New York, which is enormous, nobody's saying it wasn't, spreads over 31 counties, an area about as large as the state of Connecticut. Metropolitan New York is about as large as the Netherlands.", "But it's still relatively, I don't want to say compact, but everybody knows where the center is. It's not that anybody grows up in New York at 16 and thinks that the world is in some mall three miles away. They all know there is a center and that's where it is. It's called Manhattan. And that's New York and Moses didn't change that, for all of his roads. There is still a definite center in New York. Skyscrapers and everything in the middle.", "It's true that public transit did decline. I like Chicago and they have a rail transit from O'Hare down to JFK Expressway and it works sort of, but you have to walk a ways to get on. You have to walk blocks to get in the middle of the expressway and catch the train there. It's not like in New York where you just go down some steps. New York subway is much bigger than Chicago and more widely used.", "I think what Caro was trying to explain and your question suggests is, was Moses responsible for the decline of public transit? Well, he was building cars and roads and bridges. So in that sense, a little bit, yes. But if you look at New York compared to the rest of the United States, it used to be that maybe 20 percent of all the transit riders in the United States were in the New York area. Now it's 40 percent. So if you're looking at the United States, what you have to explain is why is New York different from the rest of the United States? Why is it that when I was chairman or president of the New York Historical Society, we had rich trustees, and I would tell them, “Well, I got here on a subway.” They would say, “How do you think I got here?” These are people who are close to billionaires and they're saying they used the subway.", "If you're in lower Manhattan and you're trying to get to Midtown and it's raining and it's five o'clock, you've got to be a fool to try to get in your own limousine. It isn't going to get you there very quickly. A subway will. So there are reasons for it. And I think Moses didn't destroy public transit. He didn't help it and that's an important distinction.", "But he was swimming with history. He built what the public wanted. I think if he had built public transit, he would have found it tougher to build. Just for example, Cincinnati built a subway system, a tunnel all through the city. It never has opened. They built it. You can still see the holes in the ground where it's supposed to come out. By the time they built it, people weren't riding trains anymore. So it's there now and they don't know what to do with it. And that's 80 years ago. These issues are much more complex than I'm speaking of. I just think it's unfair to blame Moses for the problems of the city. I think he did as much as anybody to try to bring the city into the 21st century, which he didn't live to. You've got to adapt and you've got to have a hybrid model in the world now. And I think the model that America needs to follow is a model where we reduce our dependence on the cars and somehow ride buses more or use the internet more or whatever it is, but stop using so much fossil fuels so that we destroy our environment. And New York, by far, is the most energy efficient place in the United States. Mainly because you live in tall buildings, you have hot floors. It doesn't really cost much to heat places because you're heating the floor below you and above you. And you don't have outside walls and you walk. New Yorkers are thinner. Many more people take buses and subways in New York than anywhere else in the United States, not just in absolute terms, in relative terms. So they're helping. It's probably a healthier lifestyle to walk around. And I think we're rediscovering it.", "For example, if you come to New York between Thanksgiving and Christmas, there's so many tourists in the city that there is gridlock on the sidewalks around. The police have to direct the traffic and in part, it's because a Detroit grandmother wants to bring her granddaughter to New York. to see Hudson's, which is a great department store in Detroit or in any city or GFox in Hartford. Every city had these giant department and windows where Santa Claus is and stuff like this. You can still go to New York and see that. You can say, “Jane, this is the way it used to be in Detroit.” People ringing the bells and looking at the store windows and things like that. A mall can't recapture that. It just can't. You can try, but it's not the same thing.", "So I think that in a way, not only did Moses not destroy New York he gets a little bit of credit for saving it because it might have been on the way to Detroit. Again, I'm not saying that it would have been Detroit because Detroit's almost empty. It’s Baltimore, it's Cleveland, it's every place. There's nobody there anymore. And even in New York, the department stores have mostly closed, not all of them. And so it's not the same as it was 80 years ago, but it's closer to it than anywhere else.", "Dwarkesh Patel 0:42:16", "OK, I'm actually very curious to get your opinion on the following question given the fact that you are an expert on New York history and have literally written the encyclopedia on New York City.", "Kenneth Jackson 0:42:30", "800 people wrote the encyclopedia. I just took all the credit for it. I was the Editor in Chief.", "Dwarkesh Patel 0:42:34", "You talked about the importance just earlier about counterfactual history. So I'm curious if Caro is actually right about the claim that the neighborhoods through which Moses built his highways were destroyed in a way that neighborhoods which were in touch by the highways weren't. Sorry for the confusing phrasing there. But basically, looking back on all these neighborhoods, is there a clear counterfactual negative impact on the neighborhoods in which Moses built his highways and bridges and so on?", "Kenneth Jackson 0:43:10", "Caro makes that argument mostly about East Tremont and places like that in the Bronx where the Cross Bronx Expressway passed through. And he says this perfectly wonderful Jewish neighborhood that was not racially prejudiced and everybody was happy and not leaving was destroyed by Moses.", "As a historian of New York City, or for that matter, any city, if a student comes to you and says, that's what I found out, you say that it runs counter to the experience of every city so let's do a little more work on that.", "If you look at the census tracts or the residential security maps of S.H.A, you know it's not true. The Jews were leaving and had nothing to do with the thing. They didn't love blacks. The Bronx was called the Jewish borough at the time, those neighborhoods that weren't on the Cross Bronx Expressway all emptied out mostly. So the Bronx itself was a part of New York City that followed the pattern of Detroit and Baltimore and Cleveland. Bronx is now coming back, but it's a different place.", "I've said this in public, Caro wouldn't know those neighborhoods if he landed there by parachute. They're much better than he ever said they were. He acted like if you went outside near the Bronx County Courthouse, you needed a wagon train to go. I've taken my students there dozens of times and shown them the people, the old ladies eating on the benches and stuff like this. Nobody's mugging them. He just has an outsider's view. He didn't know the places he was writing about.", "But I think Caro was right about some things. Moses was personally a jerk. You can make it stronger than that. He was not your friendly grandfather. He was arrogant. He was self-centered. He thought he knew the truth and you don't. He was vindictive and ruthless. But some of his strategies were good. He made people building a beach or a building feel like they’re building a cathedral. “You're building something great and I'm going to pay you for it and let's make it good. Let's make it as best as we can.” That itself is a real trick. How do you get people to think of their jobs as more than a job, as something else? Even a beach or a wall or something like that. He also paid them and that's important. He said he was improving things for the people.", "I don't know if you want to talk about Jane Jacobs, who was his nemesis. I tend to vote with Jane Jacobs. Jane Jacobs and I agree on a lot of things or did before she died a few years ago. Jane Jacobs saw the city as intricate stores and people living and walking and knowing each other and eyes on the street and all these kinds of things. Moses didn't see that at all. He saw the city as a traffic problem. How do we tear this down and build something big and get people the hell out of here? That was a mistake. Moses made mistakes. But what Moses was doing was what everybody in the United States was doing, just not as big and not as ruthless and not as quick. It was not like Moses built a different kind of world than exists in Kansas City. That's exactly what they did in Kansas City or every other city. Blow the damn roads to the black neighborhoods, build the expressway interchanges, my hometown of Memphis criss crossed with big streets, those neighborhoods gone. They're even more extensive in places like Memphis and Kansas City and New Orleans than they are in New York because New York builds relatively fewer of them. Still huge what he built.", "You would not know from the power broker that Los Angeles was building freeways too. Or he says that New York had more federal money. Well, not true. I've had students work in Chicago and Chicago is getting more money per person than New York for some of these projects. Some of the claims, no doubt he got those from Moses' own records. If you're going to write a book like this, you have got to know what's going on in other places. Anyway, let's go back to your questions.", "Dwarkesh Patel 0:48:10", "No, no. That was one of the things I was actually going to ask you about, so I was glad to get your opinion on that. I've been preparing for this interview and trying to learn more about the impact of these different projects. I was trying to find the economic literature on the value of these highways. There was a National Bureau of Economic Research paper by Morgan Foy , or at least a digest by Morgan Foy, where he's talking about the economic gains from highways. He says, “The gains tend to be largest in areas where roads connect large economic hubs where few alternative routes exist. Two segments near New York City have welfare benefits exceeding $500 million a year. Expanding the Long Island Expressway had an estimated economic value of $719 million”, which I think was Moses. He says, “Of the top 10 segments with the highest rate of return, 7 are in the New York City area.”", "It turns out that 7 of the top 10 most valuable highway segments in America are in New York. The way Caro paints Moses' planning process, is that it's just very impulsive and feelings-based and in some cases, almost out of malice towards poor people. Given that a century later, it seems that many of the most valuable tracks of highways were planned and built exactly how Moses envisioned, it makes you think that there was some sort of actual intelligent deliberation and thought that was put into where they were placed.", "Kenneth Jackson 0:50:32", "I think that's true. I'm not saying that the automobile didn't have an economic impact. That's what Moses was building for. He would probably endorse that idea. I think that what we're looking at now in the 21st century is the high value put on places that Moses literally thought were something. He was going to run an expressway from Brooklyn through lower Manhattan to New Jersey and knock down all these buildings in Greenwich Village that people now love. People and even movie stars crowd into those neighborhoods to live and he saw it as a slum. Well, Moses was simply wrong and Caro puts him to task for that. I think that's true.", "0:51:24 The Rise of NIMBYism", "Dwarkesh Patel 0:51:24", "Okay. Professor Jackson, now I want to discuss how the process of city planning and building projects has changed since Moses' time. We spent some good amount of time actually discussing what Moses actually did in his time. Last year you wrote an op-ed in the Wall Street Journal talking about how the 27-story building in Manhattan was put in limbo because the parking lot which it would replace was part of a historic district.", "What is it like to actually build a skyscraper or a highway or a bridge or anything of that sort in today's New York City?", "Kenneth Jackson 0:52:06", "In the larger context, it's probably fair to say it's tougher to build in New York City than any other city. You may not deploy a skyscraper in a precious suburb but as far as the city is concerned, there'll be more opposition in New York than anywhere else. It's more dense. So just to unload and load stuff to build a building, how do you do that? Trucks have to park on the street. Everything is more complicated and thus more expensive.", "I think a major difference between Robert Moses' time and our own was that historic preservation was as yet little known and little understood and little supported. The view generally was — buildings are good, roads are good, houses are good, and they're all on the way to a more modern and better world. We don't have the same kind of faith in the future that they did. We kind of like it like it is. Let's just sit on it. So I think we should say that Moses had an easier time of it than he would have had he lived today. It still wasn't an easy time, but easier than today.", "Dwarkesh Patel 0:53:40", "Can you talk more about what that change in philosophy has been since then? I feel like that's been one of the themes of this podcast, to see how our cultural attitude towards progress and technology have changed.", "Kenneth Jackson 0:53:54", "One reason why The Power Broker, Robert Caro’s famous book, received such popular acclaim is it fits in with book readers' opinions today, which is “Old is better.” You got to think about New York City. If you say it's a pre-war apartment, you mean it's a better apartment. The walls are solid plaster, not fiber or board and stuff like that. Old has a reverence in New York that it doesn't have in Japan. In Japan, they tear down houses every 15 years. So it's a whole different thing.", "In this new country, new culture, we tend to value oldness in some places, especially in a place that's old like New York City. Most Americans don't realize that New York is not only the most dense American city and the largest, but also really the oldest. I know there's St. Augustine but that's taking the concept of what a city is to an extreme and then there's Jamestown in Virginia, but there's literally nobody there. And where the pilgrims landed in Massachusetts, Plymouth plantation, that's totally rebuilt as a kind of a theme park. For a place that's a city, it's Santa Fe a little bit in New Mexico, but it was a wide place on the road until after World War II. If you think about cities, New York is really old and it's never valued history, but the historic preservation movement here is very strong.", "Dwarkesh Patel 0:55:33", "What is the reason for its resurgence? It's had a big impact on many cities. I'm in San Francisco right now and obviously you can't tear down one of these Victorian houses to build the housing that the city massively needs. Why have we gained a reverence for anything that was built 80 years ago?", "Kenneth Jackson 0:55:56", "The two most expensive places in the United States are usually San Francisco and New York. If you want to drop the price of popsicles on your block, have more people selling popsicles and the price will fall. But somehow they say they're going to build luxury housing when actually if you build any housing, it'll put downward pressure on prices, even at super luxury. But anyway, most Americans don't understand that. So they oppose change and especially so in New York and San Francisco on the basis that change means gentrification. And of course there has been a lot of gentrification. During World War II or right after, San Francisco was a working class city and huge numbers of short and longshoremen lived there. Now San Francisco has become the headquarters in a tech revolution and it's become very expensive and very homeless. It's very complex. Not easy to understand even if you're in the middle of it.", "Dwarkesh Patel 0:57:08", "If we could get Robert Moses back again today, what major mega project do you think New York needs today that a Moses-like figure could build?", "Kenneth Jackson 0:57:22", "If you think really broadly and you take climate change seriously, as I think most people do, probably to build some sort of infrastructure to prevent rising water from sinking the city. It's doable. In order to save New Orleans you had to flood Mississippi and some other places, so usually there is a downside somewhere but you could. It would be a huge project to build a land bridge from Brooklyn to Manhattan to prevent water coming in from the ocean because New York is on the ocean.", "Another big infrastructure project is they're talking about another tunnel under the river, Hudson River from New Jersey to New York. The problem with that is there are already too many cars in Manhattan. If you've not been to New York you don't really understand this, but there's no place for anything. And if you bring more cars in, what are you going to do with them? If you build parking garages for all the cars that could come into the city, then you'd be building over the whole city. There'd be no reason to come here because it would all be parking garages or parking lots.", "New York City simply won't work if you reduce the density or you get rid of underground transportation because it's all about people moving around underneath the streets and not taking up space as they do it. So it won't work. And of course, it's not the only city. Tokyo wouldn't work either. Lots of cities in the world won't work increasingly without not just public transportation but underground public transportation where you can get it out of the way of traffic.", "Moses probably could have done that. He wouldn't have loved it as much as he loved bridges because he wanted you to see what he built. And there was an argument in the power broker, but he didn't really want the Brooklyn battery tunnel built because he wanted to build a bridge that everybody could see. So he may not have done it with such enthusiasm. I actually believe that Moses was first and foremost a builder. He really wanted to build things, change things. If you said, we'll pay you to build tunnels, I think he would have built tunnels. Who knows? He never was offered that. That wasn't the time in which he lived.", "Dwarkesh Patel 1:00:04", "Today to get rid of the red tape and the NIMBYism, would it just be enough for one man to accumulate as much influence as Moses had and then to push through some things or does that need to be some sort of systemic reform? Because when Moses took power there was also the Tammany Hall machine that he had to run through. Is that just what's needed today to get through the bureaucracy or is something more needed?", "Kenneth Jackson 1:00:31", "I don't think Robert Moses with all of his talents and personality, I don't think he could do in the 21st century what he did in the middle of the 20th century. I think he would have done a lot, maybe more than anybody else. But also I don’t think his bullying methods would work quite as easy today. He bullied people, including powerful people. But I do think we need it today. And I think even today, even now we have in New York, just the beginnings of leftists. I'm thinking of AOC, the woman who led the campaign against Amazon in New York saying, we need some development. If we want to make housing more affordable, somebody has got to build something. It's not that we've got more voters because you say you want affordable housing. You got to build affordable housing and especially you got to build more of it.", "We have to overturn the NIMBYism. Even today for all of our concern about environmental change, we have to work together. I mean, in some ways we have to believe that we're in some ways in the same boat and it won't work if we put more people in the boat, but don't make the boat any bigger.", "Dwarkesh Patel 1:01:59", "When people discuss Moses and the power accumulated, they often talk about the fact that he took so much power away from democratically elected officials and centralized so much power in himself. And obviously the Power Broker talks a great deal about the harms of that kind of centralization. But I'm curious, having studied the history of New York, what are the benefits if there can be one coordinated cohesive plan for the entire city? If there's one person who's designing all the bridges, all the highways, all the parks, is something more possible that can be possible if like multiple different branches and people have their own unique visions? I don't know if that question makes sense.", "Kenneth Jackson 1:02:39", "That's a big question. You've got to put a lot of trust into the grand planner, especially if it's a massive area of 20-25 million people. I think that in some ways we've gone too far in the ability to obstruct change, to stop it. And we need change. Houses deteriorate. Roads deteriorate. Sewers deteriorate. We have to build into our system the ability to improve them.", "Now in New York we respond to emergencies. All of a sudden a water main breaks, the street collapses and then they stop everything, stop the water main break and repair the street. Meanwhile in a hundred other places it's leaking, it's just not leaking enough to make the road collapse. But the problem is there every day, every minute.", "1:03:44 Is Progress Cyclical", "Dwarkesh Patel 1:03:44", "I'm curious, as a professor you've studied American history, do you just see this as a cyclical thing where you have periods where maybe one person has too much power to periods where there's dispersed vetocracy and sclerosis and then you're just going to go through these cycles? In the grand context of things, how do you see where we are, where we were during Moses and where we might be in the future?", "Kenneth Jackson 1:04:10", "You're right to say that much of life is cyclical and there is a swing back and forth. But having said that, I think a person like Robert Moses is unusual, partly because he might have gone on to become a hedge fund person although they didn't have hedge funds when he was around. But say a new competitor to Goldman Sachs, I mean he could have done a lot of things, maybe been a general. He wanted to have power and control. And I think that's harder to accumulate now. We have too much power. You can demonstrate and you can stop anything. We love demonstrations in the United States. We respect them. We see your ability to get on the streets and block the streets as a visible expression of our democracy. But still you have to get to work. I mean at some point in the day you've got to do something. Hitler could have done a lot of things if he wanted to. Hitler had a lot of power. If he turned Berlin into a colossal city, he was going to make it like Washington but times five. Well, Washington has already got its own issues. The Government buildings are too big. Buildings don't have life on the street and stuff like this. Somebody like Hitler would destroy it forever because you build a monumental city that's not for people. And I think that was probably one of Moses' weak points. Unlike Jane Jacobs who saw people, Moses didn't see people. He saw bridges. He saw highways. He saw tunnels. He saw rivers. He saw the city as a giant traffic problem. Jane Jacobs, who was a person without portfolio most of her life except of her own powers of judgment and persuasion, she thought, “What does the shoe repairman got to do with the grocery store, got to do with the school, got to do with something else?” She saw what Moses didn't see. She saw the intricacies of the city. He saw a giant landscape. She saw the block, just the block.", "Dwarkesh Patel 1:06:45", "Yeah, there's a common trope about socialist and communist which is that they love humanity in the abstract but they hate people as individuals. And I guess it’s one way to describe Robert Moses. It actually reminds me of one of my relatives that's a doctor and he's not exactly a people person. And he says like, I hate actually having to talk to the patients about and ask them questions. I just like the actual detective work of looking at the charts and figuring out what is going on and doing the diagnosis.", "Are you optimistic about New York? Do you think that towards the end of the 21st century and into the 22nd century, it will still be the capital of the world or what do you think is the future of the city?", "Kenneth Jackson 1:07:30", "Well, The Economist, which is a major publication that comes out of England, recently predicted that London and New York would be in 2100 what they are today, which is the capitals of the world. London is not really a major city in terms of population, probably under 10 million, much smaller than New York and way smaller than Tokyo. But London has a cosmopolitan, heterogeneous atmosphere within the rule of law. What London and New York both offer, which Shanghai doesn't or Hong Kong doesn't at the moment, is a system where if you disagree, you're not going to disappear. There's some level of guarantee that personal safety is sacred and you can say what you want. I think that's valuable. And the fact that it's open to newcomers. You can't find a minority that doesn’t have a physical presence in New York. If you're from Estonia, which has fewer people than individual New York suburbs, there's an Estonian house, there's Estonian restaurants. India, Pakistan, every place has got an ethnic presence. If you want it, you can have it. You want to merge with the larger community, merge with it. That's fine.", "But if you want to celebrate your special circumstances, it's been said that New York is everybody's second home because you know if you come to New York, you can find people just like yourself and speaking your language and eating your food and going to your religious institution. I think that's going to continue and I think it's not only what makes the United States unusual, there are a few other places like it. Switzerland is like it, but the thing about Switzerland that's different from the United States is there are parts of Switzerland that are mostly Swiss German and parts of it is French, but they stay in their own place. They speak French here and they speak German there. Arizona and Maine are not that different demographically in the United States. Everybody has shuffled the deck several times and so I think that's what makes New York unique. And London too. Paris a little bit too. You go to the Paris underground, you don't even know what language you're listening to. Often the Texas cities are very diverse. San Francisco, LA are also very diverse. It's not just New York. New York kind of stands out because it's bigger and because the neighborhoods are more distinct, anybody can see them.", "And that's what Robert Moses didn't spend any time thinking about. He wasn't concerned with who was eating at that restaurant. That wasn't important, or even if there was a restaurant. Whereas now, the slow drift back towards cities, and I'm predicting that the pandemic will not have a permanent influence. The pandemic is huge and it's affected the way people work and live and shop and have recreation. I'm not trying to blow it off like something else.", "But I think in the long run, we are social animals. We want to be with each other. We need each other, especially if you're young, you want to be with potential romantic partners. But even other people are drawn. Just a few days ago, there was a horrible tragedy in Seoul, Korea. That's because 100,000 young people are drawn to each other. They could have had more room to swing their arms, but they wanted to crowd into this one alley because that's where other people were. They wanted to go where other people were. That's a lot about the appeal of cities today. We've been in cars and we've been on interstate highways. At the end of the day, we're almost like cats. We want to get together at night and sleep on each other or with each other. I think that's the ultimate goal. It's not for everybody. Most people would maybe rather live in a small town or on the top of a mountain, but there's a percentage of people. Let's call it 25% who really want to be part of the tumble in the tide and want things to be mixed up. They will always want to be in a place like New York. There are other places, San Francisco, Boston, Philadelphia a little bit. They're not mainly in the United States, but in Europe, Copenhagen. Copenhagen is not a big city, neither is Prague, but they have urbanity. New York has urbanity. I think we don't celebrate urbanity as much as we might. The pure joy of being with others.", "1:12:36 Friendship with Caro", "Dwarkesh Patel 1:12:36", "I'm curious if you ever got a chance to talk to Robert Caro himself about Moses at some point.", "Kenneth Jackson 1:12:45", "Robert Caro and I were friends. In fact, when The Power Broker received an award, the Francis Parkman Prize from the Society of American Historians, it turned out we lived near each other in the Bronx. And I drove him home and we became friends and social friends. And I happened to be with him on the day that Robert Moses died. We were with our wives eating out in a neighborhood called Arthur Avenue. The real Little Italy of New York is in the Bronx. It's also called Belmont.", "On the 100th anniversary of Moses's birth, I think in 1989, I was asked to give the keynote speech at a conference at Hofstra University on Moses. And Caro was also invited to be a speaker, maybe another keynote speaker. And there I said, and I still stand by this, that The Power Broker, I learned more from it than any book I ever read and I wish I had written it myself so that my name was on it rather than his. And I think it was the best book I ever read. That may be a slight exaggeration, but just like it's an incredible achievement.", "Having said that, I said it's got a thousand errors, just small errors. I mean, in broad strokes, it's correct. Robert Moses had more power than any urban figure in American history. He built incredible monuments that the city needs. He was ruthless, arrogant and honest. That's a big story right there. That's probably true. But in all the little stories about the temperature in the swimming pools, about the destruction of public transit, about the destruction of the Bronx with the Cross Bronx Expressway, about the building the bridges too low so that buses couldn't get to the beaches. He's just wrong. It just isn’t that way. Wasn't that way, isn't that way. And all he had to do was look at some different sources, you know that they were doing some of the same things in LA or Chicago. They just weren't as famous as New York. That's more or less what it was.", "I think I still feel that way. I still wish I had written the book, even with all those mistakes. I've taught for half a century or more New York City history at Columbia University. I've had dozens of students write term papers on one or another aspect of Caro’s book. I can never remember a single one coming back and saying, “Oh, Caro got it right.” They would go back to the same source and say “This is not what this is not the way he said it was.” Now they may be smaller, but it was generally that way. They didn't celebrate it. I think he'd made up his mind what he was going to argue before he started it and Moses was a jerk.", "For example, think of the issues now with President Biden and his son Hunter, you know, and all the grief he's catching about his son or people with….. My wife is telling me not to go down this road.", "Moses is attacked in the book because he's not good enough to his brother. He should have done more for his brother than he did. Well, so what if he didn't give his brother a job. I'm just saying that all these little things that you can come to about different neighborhoods and things like that. But I think Caro was right that Moses, the biggest builder of cities in the 20th century, the builder of the greatest city in the world, didn't like it. That's incredible. He didn't actually like the place he was designing. He wanted everybody to get out. It's better in the suburbs, better if you have a house, better if you have a garage, better if you have a car in the front. That’s better than the Bronx or Brooklyn or Queens. Whereas again, his opponent, Jane Jacobs, are people now. And one of the greatest changes now than when Caro was writing Moses is people wanted to all have a car in New York. Everybody wanted a car. Teens, when they were 16 years old, wanted to have a driver's license more than they wanted to have sex. Get me the car. Now a big percentage of young Americans are skipping the driver's license. They don't care as much. We're moving into a different time. And where that's going to lead us, I'm not sure. But I think we're ready to walk more for health reasons and all sorts of reasons. But Moses did not see it. Moses never drove, but he created the world for drivers.", "Dwarkesh Patel 1:17:57", "Some people might object to this defense of Moses by saying, “Listen, whenever he does something well, we give him credit here.” But then whenever he does something badly, we just say, “Oh, well, he was just swimming with the tide of history. There were other cities that were doing it worse.” We're doing what-aboutism on the negative side of Moses' legacy. But then we're happy to let him take credit for the good things. It's kind of a double standard. How would you respond to that?", "Kenneth Jackson 1:18:27", "I understand what you're saying and I can sympathize with it. Let's take the cross Bronx expressway as an egregious example of something that Robert Caro or Moses' critics would say, “God, all these people, tens of thousands of people paid a price. Their lives were uprooted.” But I just can't imagine you would say we can't have a cross Bronx. I mean, it could have been two blocks this way or two blocks that way, that's a different question. And we can talk about that too, but I think we have to have it.", "I think the parks,the bridges, the Verrazano-Narrows bridge transformed Staten Island. The history of Staten Island is before and after 1964. They had to take into account the curvature of the earth. That bridge was so long. At the time it was the longest suspension bridge in the world. It's been surpassed in Europe since then, but a big bridge. That would be a lifetime's achievement. If you built the Verrazano-Narrows bridge, this gigantic, gorgeous entrance to New York Harbor, that's a lifetime achievement. For Moses, it's one of a list of about 50 things. You don't even think about the Verrazano-Narrows bridge and Robert Moses. Yeah, he did it. So what? There's lots of other big things too. I think it's the scale and the scale was not human. It was totally on a different scale. And with that comes a lot of criticism because as he would say himself, in order to make an omelet, you have to break eggs. No other way to do it. And he would say, I had to do it. This is his answer. In order to build a great public event or thing, I have to hurt some people. That's just the way it is. The way we live now, we don't want to hurt anybody. You can't run a city or a country that way. You can't. Nothing will change.", "1:20:41 Moses the Longtermist?", "Dwarkesh Patel 1:20:41", "Yeah, I've had recently a lot of guests who have been advocating for this philosophical view that's called long-termism, which is basically the idea that we should take the interest of future generations and consider them equally with the interest of people alive today. As a way to emphasize, for example, that people who live thousands of years from now, we should take their interest seriously.", "I guess if we take this kind of view and just care about progeny more, there's a part in the Power Broker where they're talking about the Cross Bronx Expressway through East Tremont and Moses is getting opposition from an elected official. And Moses responds by saying, “You make a habit out of pointing out that I'm not democratically elected while you are. But the advantage of me not being democratically elected is that I can build projects that might not be the favorite thing for the people alive now, but will benefit the city for generations and centuries to come.”", "There's obviously a lot of arrogance there and there’s the question of if that's true. But to the extent that that is true, and it is in many cases — all these bridges you're talking about and all these highways are still standing today and are in wide use, it changes how you think about him. Given all the egg breaking that had to happen at the time, it's still the fact that for hundreds of years we'll get to use the things he built.", "I'm curious, you mentioned you were with Caro when you found out that Moses had died. How did you react? How did Caro react? Was he sad about it?", "Kenneth Jackson 1:22:25", "Well, he was being interviewed. I just remember being stunned at the restaurant. We were eating at a restaurant in the Bronx and they brought a telephone to him. I've never seen anybody have a telephone brought to them. That was way before cell phones and everything else. It was 1981 and we were friends when he died. It was after I criticized him in public and the New York Times quoted me. They didn't quote that I thought was the best book I'd ever read and I wish I'd written it. They did quote that I said it was full of mistakes. That's what the New York Times said and that's what he read. He didn't hear my speech. I'm sorry. I don't have enemies with anyone, but I have regard for the book and have regard for his ability to do research. What's amazing about The Power Broker is his ability and interest in going back and interviewing his third grade teacher and stuff like that and finding that out. That's way more than what most of us are willing to do.", "Dwarkesh Patel 1:23:35", "What was his reaction when he found out that Moses died?", "Kenneth Jackson 1:23:41", "That was not unexpected. He was 90 years old at the time. Remember this is 30 years ago, more than that. Not that many people lived to be 90 and he was a swimmer. He was an athlete. He was a swimmer. Swam in the ocean in his 80s. He was quite a person.", "Dwarkesh Patel 1:24:01", "Yeah. There was a part in the in Caro's memoir where he talks about learning from Moses' former aides who are now friends with Caro that now that he had lost his power, he was just in his house all day just looking at all his plans and writing down more ideas for where there should be another highway, hust impertinently making new ideas and so on.", "Caro says that just hearing about that scene, this master builder having nothing to do, made him want to cry despite the fact that he had documented all the harm that Moses had done. That was really interesting.", "Kenneth Jackson 1:24:50", "Moses had a phenomenal amount of power and he loses it all in a short order. That's gotta be tough. Really tough. Now, Moses did write a response to the Power Broker , which was published in the New Yorker about 1975, which is a long response saying that Caro had it all wrong.", "Dwarkesh Patel 1:25:24", "Actually, I don't know what your reaction to the response was. I read it and I thought that Moses kind of vindicated Caro's description of his personality because it was very ad hominem and wasn't specific. I'm sure there were a lot of errors, as you say. Moses wrote 8,000 words but he actually documents very few of the supposed errors that he claims Caro is making. The ones he does document are minor ones, not the major ones that were about his major public works, so just small little details. But yeah, I don't know what your reaction was to that response.", "Kenneth Jackson 1:26:04", "Well, I think you're very perceptive. I think his response was rambling, wordy, not particular. I think he probably could have taken Caro apart, but he didn't. He didn't say, “Page, so-and-so, you said this. This is wrong.” He didn't. And also, Moses didn't have people around him to say, “Boss, you're wrong here. You got to reword this. You don't mean it the way you said it.” And also, the quality of her response was poor enough that he needed assistance. He needed a smart or three or four smart people to say, “Put it this way. You have got to give this example or do something else.” But still, the response tells you something about Moses and also the way Moses kind of held grudges. He blames the New Yorker and he blames Knopf of Alfred Knopf Publishers, because you're in cahoots with Caro so you're all damned and going to hell rather than just talking about Caro. Maybe that's just the way he looked at the world at his time. You got to find out who's got the levers of power and then go after them. The rest of the people don't matter.", "Dwarkesh Patel 1:27:28", "Yeah. Okay, so I want to be respectful of your time. You've given me so much already. But before we close out the interview, if you have any other things you want to say about The Power Broker or about Moses or about his legacy or about NIMBYism today or the state of construction and public works today.", "Kenneth Jackson 1:27:46", "I think of Moses as the polar opposite of Jane Jacobs. Think about what Jane Jacobs saw as important. She saw the block, the store, the people, the mixture of genders and races and everything else as being significant. Robert Moses really didn't see people. He saw massive things, things you put on an architectural board and he didn't see other things. That's both his strength and his weakness. Had he seen people more, maybe he would never have built what he did and we wouldn't have had the bridges. But we do. And can you imagine New York without the Triborough Bridge? It's not imaginable. You’ve got to have it. Listeners who have not been to New York may not understand it. This is a city that has water all through it. It's about water. It's not like Dallas, where there's essentially no water. New York water is everywhere. You've got to cross these bridges. You've got to get over that. So it creates a problem of circulation that's vast, especially when you have tens of millions of people trying to move around. That's something that Moses figured a way around.", "Dwarkesh Patel 1:29:14", "Yep. Actually, one more question that just occurred to me. I'm curious why you think that this biography of Moses by Caro achieved the cultural prominence it did. Of course, it's like an incredible book and I enjoyed it a lot as well. But like there's countless biographies written every year and many of them are also good. What was it about this book that catapulted it? There's a review of the new play about Robert Moses and the title of that review was just that New Yorkers love to hate Robert Moses. Why does he have this sort of cultural prominence in New York and across the world?", "Kenneth Jackson 1:29:57", "Because I think Robert Moses represented a past, you know, a time when we wanted to build bridges and super highways. We're not building superhighways now. We're not building vast bridges like Moses built all the time. He's not swimming with the tide of history anymore. And so I think that's part of it. Plus the fact that it's a biography that has an argument and the argument is understandable and it runs through the whole book. And he's a terrific writer and builds suspense and he can write a paragraph in two words and he repeats himself a little too much. Anyway, when I get negative, my wife is nearby and getting on me. I think it's a terrific book.", "Look, Moses was a great man. Caro is a great writer and in many ways a great historian. And I think the other thing is that Caro's book is so important. You know, Thomas Jefferson, Abraham Lincoln, I have lots of books written about. Caro not so much. I mean, Moses not so much. He's not that famous. So it's such a big and intimidating book. Everybody else is afraid to go there. There will be other books about Robert Moses. They may be underway as we speak and they will revise our reinterpretation. I think that's true. He will be reinterpreted in ways in which we talked about today, though I don't know that.", "But, the basic things that Moses did are there, you can look at it. And the things that Caro said about him are mostly true. He gives a little tiny bit about an affair. He once explained to me that sex was not important to Moses. If you want to understand him, it wasn't about females. Moses was committed and married to his job and to his vision of a city. That's both his greatness and the fact that he didn't have friends or many friends and he didn't live the world the way most people live it, with people. I think that's probably all I know.", "Dwarkesh Patel 1:32:21", "Well, this is a true pleasure. I highly recommend that people check out your book, Robert Moses and the Modern City: the Transformation of New York . There they can find your essay on Robert Moses and the Rise of New York City , which I highly recommend.", "Professor Jackson, where else can people find you?", "Kenneth Jackson 1:32:42", "My email address is still [email protected]. It's been a pleasure.", "Dwarkesh Patel 1:32:57", "Thanks for listening. If you enjoyed that episode, I would really, really, really appreciate it if you could share it. This is still a pretty small podcast. So it is a huge help when any one of you shares an episode that you like, post it on Twitter, send it to friends who you think might like it, put it in your group chats, just let the word go forth. It helps out a ton. Many thanks to my amazing editor, Graham Bessalou for producing this podcast and to Mia Ayana for creating the amazing transcripts that accompany each episode, which have helpful links, and you can find them at the link in the description below. Remember to subscribe on YouTube and your favorite podcast platforms. Cheers. See you next time." ]
[ "https://www.amazon.com/Robert-Moses-Modern-City-Transformation/dp/0393732436", "https://www.amazon.com/Power-Broker-Robert-Moses-Fall/dp/1847923658", "https://www.nber.org/digest/apr19/new-estimates-benefits-us-highway-construction", "https://www.wsj.com/articles/homeownership-for-all-is-a-dying-dream-kenneth-jackson-real-estate-housing-suburbs-highways-zoning-mortage-rates-covid-loans-redlining-new-york-11663337951", "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2934079", "https://www.amazon.com/Robert-Moses-Modern-City-Transformation/dp/B01FKRYQ4S", "https://eportfolios.macaulay.cuny.edu/rodberg16/files/2016/01/Kenneth-Jackson-Robert-Moses-and-the-Rise-of-New-York.pdf" ]
https://www.dwarkesh.com/p/kyle-harper
Why Rome Actually Fell: Plagues, Slavery, & Ice Age — Kyle Harper
[ "Plague's impact on Rome's collapse", "Dwarkesh", "Today, I have the pleasure of chatting with Kyle Harper , who is a professor and provost emeritus at the University of Oklahoma, and the author of some really interesting books: The Fate of Rome , Plagues Upon the Earth , Slavery in the Late Roman World , and an upcoming one called The Last Animal .", "The reason I wanted to have you on is because I don't think I've encountered that many other authors who can connect biology, economics, history, and climate into explaining some of the big things that have happened through human history in the way you can. The most recent reason I wanted to have you on is I interviewed David Reich , the geneticist of ancient DNA, and some of the questions we were discussing, he kept emphasizing this overwhelming and surprising role that diseases have had in human history, not just in the recent past, but in his work going back thousands of years, tens of thousands of years.", "He said, \"You gotta have Kyle on.\" I emailed him afterwards, asking \"Who should I interview next?\" And he said, \"You gotta have Kyle on\".", "You have this graph in The Fate of Rome .", "You show human population over the last few thousand years. I assume that these two downspikes are both the bubonic plague , Yersinia pestis , right?", "Kyle Harper", "Yeah.", "Dwarkesh", "And so this is not like some small little nudge you can see, like the overwhelming —I mean, other than the hyper exponential growth in human population— the overwhelming, not just one, but the overwhelming two major features in human population going back the last 10,000 years, is this one bacteria, right?", "Kyle Harper", "Yeah.", "Dwarkesh", "One of the things you discuss in the book is that the collapse of the Roman Empire was a result of this one particular event.", "Kyle Harper", "The period that I normally work on is from the High Roman Empire , the glory days of the Pax Romana in the first or second century, through what we call the late antique or early medieval period, the sixth or seventh century. At the beginning of this period, Rome dominates this Mediterranean empire. It's what you think of when you think of ancient Rome. It's the largest city in the world. It's the center of this huge network.", "By the end of this period, the city of Rome has 50,000 to 100,000 people. It's a tenth or twentieth of its former size. I think we now can say pretty clearly that environmental factors like climate, but also especially diseases, play a part in that really big transformation.", "While there's a problem because we don't have the same kind of modern government mortality statistics that we do for COVID or even for the last century or century and a half, we have to piece together from clues. But it's pretty clear that the bubonic plague events, whether you're talking about the Black Death of the 14th century or the plague of Justinian in the sixth century, these events are capable of causing death rates, temporarily, that are just orders of magnitude beyond what we're accustomed to. Even in these ancient societies, the reason why these were so shocking, in a world where the death rate is always pretty high—probably several percent of the Roman population, three, four percent a year may be dying in a normal year—is telling. For them to just be utterly shocked by the death rate already tells you that it's some multiple of what they're accustomed to.", "Dwarkesh", "Yeah. I think you discuss in the book the possibility that the death rate might have been close to or even over 60% wherever the Black Death hit. This is literally the most significant thing.", "Kyle Harper", "Yeah. It's mind-blowing. In the case of the Black Death in the 14th century, it's pretty clear. It kills 50, 60% of the population in entire regions. We don't necessarily think that it killed 50, 60% of the whole continent, although that's actually not impossible. But even the fact that it's killing 50% of the people in cities, in provinces, in countries, is just beyond the damage that other plagues do.", "Dwarkesh", "Right. And do you think that were it not for this 60% mortality event plus the fact that we haven't even discussed yet, this super severe cold snap, do you think that the Roman Empire might have otherwise just kept going? You discuss there were these two previous other big pandemics. The empire still survives.", "I think Will Durant had this quote that the Roman Empire fell for longer than most empires have lasted. Do you think it'd be similar to China maybe? Maybe a dynasty collapses, but fundamentally, the same sort of cohesive nation reemerges?", "Kyle Harper", "Yeah. That's a great comparison. Not just decline and fall of dynasties, but also geographic changes in the configuration, that parts get added and parts get cleaved off, but you still kind of think of it as fundamental continuity in the core. That to me is a very plausible counterfactual.", "Justinian, the emperor in the 540s—he reigns from the 520s to the 560s—he’s on a path of success. He's retaken Africa. He's mostly retaken Italy when the plague hits. To me, a very plausible counterfactual is that a more or less Mediterranean core of the Roman Empire could have survived east and west. It does sort of survive in the east, but even including really all of Italy, Africa, and probably Spain. That would have been a very reasonable outcome of the sixth century if you hadn't had this kind of random shock.", "The Roman Empire would keep going. Remember, it does, and it calls itself the Roman Empire until the 15th century. But we would think of it as maybe more the Roman Empire if it still included the western Mediterranean and was this major, powerful, urbanized polity that resists invasion from the southeast, as happens in the seventh century. So the answer is yes, I think that the Roman Empire absolutely could have had another turn as the thing we kind of mean when we say the Roman Empire: this pan-Mediterranean, powerful, urbanized empire.", "Rome’s little Ice Age", "Dwarkesh", "Yeah. Okay, one of the things I found really interesting was you were discussing the firsthand accounts as this big... and feel free to explain the cold snap as well as it's happening. But the firsthand accounts of people who are experiencing this, some of whom come from this burgeoning Christian faith , which already lends itself to millenarianism and apocalyptic thinking. I'm curious basically how did different people try to make sense of this once in a 1,000-year event that's super intense?", "Kyle Harper", "Clearly, people have to try and explain, within the elements of a worldview that they have, how something like this could happen. They don't have modern science. They don't have germ theory. They don't think of it in terms of a biological event or climatic event. Since that's come up and you've invited, I'll say a little bit about that. This is one of the other really exciting frontiers where we're learning new things about the human past that we just didn't know 10 or 15 years ago.", "In this case, we now have really cool paleo climate data that helps us understand that this period of the sixth and seventh century was also a period of really abrupt and significant natural climate change. We're all familiar with anthropogenic climate change, that carbon emissions stay in the atmosphere, trap heat. Humans are changing the climate. That's a big problem and we can talk about it if you want. I just want to clarify, that view is not incompatible with the reality that the climate does also change for natural reasons on every timescale, from long geological timescales to much shorter timescales.", "We live in the Holocene . The last 11,700 years have been pretty stable, pretty warm. It's an interglacial. We're literally between Ice Ages right now. It's been really stable in the big picture. Yet even within that stability, there are smaller scale climate variations and climate changes. Because we need to understand how the earth system works, how the climate system works in order to be able to model what's happening, we need an empirical record of what the climate has done.", "For historians, this is great news because now we have a huge number of sometimes even pretty high resolution climate reconstructions for historical periods across the Holocene. We now know, like we did not know this 20 years ago when I started graduate school, that the Roman period experienced some really abrupt episodes of climate change. In this case, the sixth century, we know the cause: there was a series of really significant volcanic eruptions . Volcanoes are a very powerful short-term climate forcing mechanism. They eject sulfur into the stratosphere. It aerosolizes and creates a reflective shield that scatters the radiation entering the atmosphere.", "It leads usually to short term cooling . In this case, you had a series of really significant volcanic eruptions that cooled the climate for several decades. In some ways, the later series of eruptions even like a century and a half. It wasn't just a little bit cooler, it was like a degree to two degrees cooler, which we all kind of know now: two degrees isn't weather, this is climate. Two degrees doesn't affect your day, but two degrees globally is a pretty different globe.", "All of a sudden in the late Roman world, it's much, much cooler. Probably areas that have been wetter are now drier. Places that are drier may be wetter. It changes the hydrological cycle as well, which is more complicated. In addition to the shock of the plague, you have this simultaneous and probably not unrelated shock to the climate system. We know that it was essentially challenging for agriculturalists. When the sun is blocked and it's really cold and the wheat doesn't grow, your society then starves.", "The Romans get this wham bam double shock of climate change, famine, and plague. So back to how people explain this. Apocalyptic thought is one of the principle ways people frame it. To them, nature's going crazy. Huge amounts of the population are dying of this horrible sudden disease and the crops don't grow. You don't have microbiology and you don't have climatology. So you explain it with the resources of the worldview you have.", "There's a huge burst of apocalyptic thought in the sixth and seventh century, which is always kind of there. You mentioned that Christianity is eschatological . It is, yes, fundamentally, but that comes out in different ways with different emphases in different time periods. This is a period, the sixth century, when there's a really sharp emphasis on eschatology in Christian thought.", "Why did progress stall in Rome’s Golden Age?", "Dwarkesh", "I found your early chapters in the book about what the Roman economy was like in this happy period quite interesting. There's a bunch of questions I have about this. If you read Gibbon , writing in the 1770s, I think he says that the happiest time in human history was this period you're talking about . That was true at least according to him as of a couple centuries ago. This was still peak civilization.", "You discuss the complexity of the Roman economy. The fact that millions of tons of wheat and other products have to make their way to Rome and the trade networks and everything. You basically say they were experiencing productivity gains. The wages were increasing, population was increasing, but they were still not at the level at which it was plausible that, say, for these climactic and biological factors, they might have had an industrial revolution . I'm curious why you think... paint a picture for me of what the Roman world looked like as of this happy period and why that still counterfactually couldn't have just saved us a thousand years of history if they were on the right track.", "Kyle Harper", "First of all, I think this is the sort of question that historians ought to worry about all the time: why didn't the Roman economy catalyze the takeoff? In some ways it was so precocious for its time period, and it seems not utterly impossible. The Roman world is still a pre-industrial economy so agriculture is the dominant sector. The majority of people work in agricultural pursuits and productivity is low. They don't have modern mechanized traction. They don't have modern synthetic fertilizers . They don't have the modern green revolution yields, all the things that have made agriculture stupendously productive.", "The primary sector is fairly limited in terms of its productivity because of the limitations on technical inputs. We can think of the inputs to an economy are going to be capital, labor, and ideas. What the Romans have is people. They have some investment, but they don't have technology. They don't have ideas. It's a late Iron Age civilization. I do think there's productivity growth and that productivity growth comes from markets, from trade where you get comparative advantage.", "In Egypt, I'm really good at growing wheat. You can make glass in Syria. And then we'll trade. The urbanization of the Roman world certainly facilitates that. Cities are these hubs of productivity and exchange. There's some technology. If you look really hard over five or six centuries, there's certainly economies of scale where the production process and manufacturing is moved from artisanal to industrial scale. But there's no takeoff because they don't have science.", "They don't have research and engineering that drives continuous productivity gains. I think they go precociously far in a pre-industrial setting where you take trade really far. They have good institutions in terms of strong property rights. There's relatively reliable contract enforcement. There's financial markets. They have the most advanced financial markets in the world before the 17th or 18th century. There's impersonal financial intermediation.", "It's not like you have to know me and come ask me for a loan if you wanna build a ship and go trade something. There are banks that take money from depositors, keep balances, and then lend out to debtors who want to go and do entrepreneurial things. They have so much potential, but there's no spark. You never see these sustained productivity increases. I would just say ultimately it's because the Romans don't have technology improvements that are really self-sustaining.", "The reason they don't have that is because they don't have science. Their science sucks. I'm offending some of my colleagues, I'm sure. Galen is great. Ptolemy's incredible. I love Pliny the Elder's encyclopedia , but if you look in the big picture, the contribution that the Roman Empire makes to our knowledge of how nature works and then the applied technology that comes out of that is really pathetic for five, 600 years. They go as far as you can with Smithian advantages to market exchange and specialization, to banks and finance. But without the kind of creative destruction of new technologies that improve productivity, you're eventually gonna run out of improvements.", "Dwarkesh", "If you're Augustus or some other Roman emperor and you're like, \"Look, we've got this big good economy, but I wanna see productivity gains,\" and you wanna make it happen somehow, is there something from a top down perspective you could have done? In Britain, the government subsidizes the royal arts and so forth and the Longitude Prize .", "Kyle Harper", "That's exactly what I was gonna say. This happens first in France and then Britain, but you get royal societies for science where you're doing really... I would say there's like three things that are essential there. One is the promotion of what we would call basic or fundamental science. It doesn't all have to be immediately practical or commercialized. But you're promoting deep knowledge of nature.", "Two, you're doing it in an empiricist way. This is something very important in the 17th century that the Romans by contrast don't have, is the spirit of Francis Bacon that we need to ground our knowledge in experiment and observation, not just believe whatever authorities or Aristotle said. That's very much the spirit of places like the Royal Society : we don't take things on anybody's word, especially Aristotle's. You need basic science. You need empiricism, rigorous and self-correcting.", "Third, you need a sense of useful knowledge, and that's the other thing that really comes together in the 17th century: not just the basic and abstract science, but the application, and the 17th century language for that is useful knowledge. That is something that doesn't ever get wired together in the Roman Empire. There are tinkerers and engineers, but they're not talking to the mathematicians and the physicists. If you were from on high to design self-sustaining innovation, I think you would want to bring those elements into proximity. Augustus, unfortunately for them, didn't do it. Probably good for the world. The Romans are pretty nasty people in a lot of ways.", "I definitely am of the opinion that the high science matters, that Isaac Newton is not a tinkerer. He's not building pumps. But the guys who are are his friends. They're in and around the Royal Society, and they're absolutely... Look at Denis Papin , who was a French engineer who was very much in the circle of Leibniz and the very high abstract mathematics, trying to build vacuum pumps .", "The proximity of high math, high science, very abstract with what is ultimately gonna be the sectors that lead to mechanization where then you can harness this source of energy that is there all along but hasn't been tapped in coal. That's what catalyzes the big positive cycle of positive feedbacks. What the Romans don't have is that. What the Romans do have is the kind of specialization, and now that we look for it, it's there. When you look at food processing, which is a huge sector, the way that they built mills, there's definitely improvements. But there's never the catalytic change where you get runaway positive feedbacks", "Dwarkesh", "That's right. A previous guest of mine, Nat Friedman , I don’t know if you saw this, launched this challenge called the Vesuvius Challenge . This library at Herculaneum in 79 AD was buried under the ashes of Mount Vesuvius when the volcano erupted; speaking of volcanoes. Now they figured out with modern techniques how to read the burned scrolls and it is supposedly the biggest library of classical text ever. It would double the amount of classical text we have. As a scholar of ancient Rome, what would you personally find most fascinating? What are you excited to find from this data?", "Kyle Harper", "I'm super interested in the history of math. What happens after Euclid ? It's very hard to say because you get these really interesting people that pop up, like Diophantus who's later, in the early Roman Empire. There's still really interesting math going on. Euclid is incredible. The Greek experiment in math and science is the one that I think had the better chance of sparking sustained takeoff.", "And it didn't. It'd be interesting to know more about why. Why did things stall? Because these people... Euclid is not just a towering genius who comes out of nowhere. He's very much a product of the culture and the questions that are being asked in the generations before. It just sort of feels like after him, you fail to get that kind of sustained continuous progress and advance. Maybe back to that big question that we were asking before: what prevents the kind of breakthroughs that we see in the modern world?", "Dwarkesh", "What is the population of Greece during their golden age?", "Kyle Harper", "Of the greater Greek world or individual city-states like Athens ? We think of Athens as being a couple 100,000 people. Not massive.", "Dwarkesh", "So I wonder if the Greeks had the science but not the people to sustain a modern economy. Or not modern, but even a sort of industrial economy . And then the Romans had the people but not the science.", "Kyle Harper", "Yeah. There's probably something to that, that there's just not the critical mass of educated people, of sheer cognitive power to keep it going.", "Slavery in Rome", "Dwarkesh", "Let's turn to another one of your books, the one about slavery in the Roman world . I did not realize before I read that one how much Rome was a slave society . I guess that just isn't a salient thing in a conventional understanding of Rome. Why don't you paint us a picture of how much slavery was involved in that world?", "Kyle Harper", "Slavery tragically is a really important institution throughout history. We sometimes tend to think of it as a distinctly modern phenomenon, but that actually misses the deeper picture. In fact, it obscures the importance of modern slavery because modern slavery is uniquely important and it's uniquely tied up with certain kinds of market exchange and certain kinds of production, certain kinds of racial ideologies. There are things about modern slavery that are really important to understand are different, but not just because slavery is there.", "Slavery has this longer history and slavery is more important in some societies than others, and we want to try and understand that, to ask why and then what implications does that have for understanding those societies? Rome is one of those societies. Slavery is really a prominent institution in Rome from the late republic . As the Romans conquer other parts of the Mediterranean, they start taking captives as slaves en masse, and they build an economy that really relies on slave labor in important sectors of the economy.", "Plantations where commodities like wine, olive oil are produced for market exchange that allow landowners to amass enormous amounts of wealth. Slavery becomes this really important institution that's entangled in the development of the Roman economy from maybe the third or second century BCE, and then with ups and downs and really important changes along the way for centuries and centuries.", "Dwarkesh", "So you're pointing towards from the supply side, all the Roman conquests lead to all this surplus labor that they can make use of, and on the demand side, these cash crops .", "Kyle Harper", "Exactly. I'm very big proponent of the idea that you have to have both. You have to have a source of slaves. After the conquest stops, the Romans figure out other sources of slaves. If anything, the demand is equally or perhaps even more important because if there's not a mechanism, if there's not institutions that let you turn this kind of exploitation into cash flow, the institution's not gonna go very far. It really is the institutions, the presence of markets where you can take labor and turn it into profit that's the most important element.", "Dwarkesh", "One of the things I find interesting is in the age of colonization we're used to thinking about slavery in terms of race but also religion and other things which more obviously demarcate free and slave populations. In the Roman world, it doesn't seem that that's clearly the case, yet there's no abolition movement the way that emerges out of England in the 19th century or maybe even before that.", "The reason that's mysterious is, if you were literally descended from slaves, if you were like, \"My grandfather was a slave but then we were freed,\" and they're basically just like you, you would think that there would be more of a sense of... not everybody would be an abolitionist , but at least some people would be writing about abolition. With Christianity and so forth burgeoning, they didn't seem to have a problem with it. Why is there no abolition movement despite the heterogeneous nature of the slave population?", "Kyle Harper", "It's sort of disturbing in a way, isn't it, that humans have the ability to convince themselves that it's okay to own other human beings as property through a variety of different kinds of ideological justifications. You see even in the ancient world there are different models that people use to say that slavery's okay. Aristotle develops a theory of natural slavery that actually some people deserve to be slaves by their very nature and that it's actually good for them to be in bondage.", "What's really interesting though is that that doesn't actually ever seem to be the dominant ideology. The Roman ideology of slavery is not racialized. It's not like the Romans think that the Greeks or the Germans are some fundamentally separate kind of human that justifies their exploitation. The Roman ideology of slavery is really rooted in the law of property and status. They think that slaves are people who've been conquered and rather than killed, they've been spared, and they've been sold into the condition of being somebody else's property.", "This seems to mentally explain to them where their slave system comes from and why it's justifiable. You have different kinds of criticism of the slave system from within, but remember, most of what we have written is from the slave ownership class. I don't think the slaves were themselves believing this ideology, and there must have been sort of what we would think of as abolitionist movements or spirit that we just don't have really good records of.", "It is this curious thing that the Romans are able to build this huge system that's really brutal and really violent, but has this kind of flimsy ideology where they tell themselves these stories. The deeper lesson of that is that humans can create these systems of belief that will exclude others and justify almost any form of exploitation and convince themselves that it's okay.", "Dwarkesh", "I hope your next book about The Last Animal discusses the potential parallels with factory farming .", "Kyle Harper", "There's some... there's a pretty gruesome chapter, I'll say that.", "Dwarkesh", "I don’t know if you mentioned, what numbers you say, I think it was like 10 to 20% of the population under the Roman Empire was enslaved. Given that large a size of a slave population, it's surprising to me that there are so few slave revolts not only in Rome, but even throughout history. There's Spartacus in 71 BC , then there's the Haitian Revolution . If 20% of the population is enslaved, how is this sustainable? If you're running a farm and there's 4,000 slaves and then the next farm over also... why aren't there more slave rebellions?", "Kyle Harper", "Why not? How did they do this? They have a really elaborate system of repression. They're worried about it. Probably the parts of Roman society where there are 20, 30, 40% slaves are pretty limited to certain regions and certain time periods. Partly because once you cross some kind of threshold, the challenges of repressing direct violent resistance increase.", "It's a system of exploitation. That means there's always a mix of carrots and sticks, to put it crudely. The Romans extract people's labor partly through physical violence, but also partly through systems of manumission that try and incent people to obey and not to rebel in order to earn their freedom. They're using everything from literal chains to enticements to try and keep rebellion from ever coalescing in a way that can turn into collective violence.", "It's a little bit challenging for us to look back. We know in Pompeii , the slave population is huge. It must be 30%. Not all of these people would've been plantation workers who were lashed every day and worked to the physical bone. A lot of them are nurses and textile workers and maids and tutors and all sorts of things that are sort of quasi-embedded in households as well, where there's always this weird psychological dimension too. Part of the strangeness of slavery is how deeply embedded in domestic institutions it is as well.", "There are ideologies in which the paterfamilias is sort of the father and the master. That tries to brainwash people against resistance. The important thing to recognize is it's just a pervasive system that tries to colonize people's minds and pervasively tries to keep them from resisting.", "Dwarkesh", "I wonder if we can close the loop with the question we began with, which is, why didn't Rome have an industrial revolution? I don't know if it's a plausible explanation that cheap slave labor reduced the incentives for mechanization and engineering and other crafts or if not. I don’t know.", "Kyle Harper", "It's definitely an argument that's been made. Aldo Schiavone was an Italian historian who argued that. It's kind of a neo-Marxist tradition that argues this. It's an interesting argument. I don't buy it at all. The good version of that argument would just be that the Roman Empire is using slaves in many of the most forward elements of the economy too. We tend to think progress and economic growth and innovation is good, and we know slavery's bad, so we tend to think that those things don't go together.", "But in reality, it's the most economically advanced sectors of the Roman economy that had a high degree of organization and productivity that tended to employ slaves. In the Roman world, you could make the argument that if the labor in those sectors had been free, there would've been more opportunities for positive feedback loops. The way the argument's usually made is just that the Romans got rich without really thinking about productivity. They just wanted to extract labor, extract wealth rather than create wealth.", "That's not a terrible argument, but ultimately I don't think it's the system of labor that keeps the Roman world from industrializing. There are lots of sectors in the Roman world where slavery is not a dominant institution, and it's not like they're more productive or flirting with some kinda breakaway. So it's an interesting argument, but not one I've ever found all that persuasive.", "Dwarkesh", "Final question about Roman slavery. What did Gladiator get right and wrong about? Would they just abduct you in front of your house and make you a slave?", "Kyle Harper", "You mean the first one? The first one got right that when you're making a movie you should worry more about making a good movie than a certain amount of history, but actually the first one's a great movie. If it was completely historically accurate, it would've been much more boring. So I'm not gonna be critical of that movie. It plays very loose with the facts of high politics around Commodus and the creation of this character. But who cares? Russell Crowe's incredible.", "Dwarkesh", "But on slavery in particular.", "Kyle Harper", "On slavery in particular, I think actually that's one of the strong suits of the movie. You see this completely exploitative system that brings people from very different parts of the world who have very different backgrounds. The system of urban spectacle is very real. The use of slave labor in that is certainly a part of it. So the movie actually gets some really important things about that right. That makes it totally forgivable that it has to create a kinda high politics storyline.", "Was agriculture a mistake?", "Dwarkesh", "Okay, I think that covers all the questions about Rome. We can get back to your most recent book about human history and plagues .", "What do you make of the general argument that people have often made that we were living in a sort of Eden before agriculture? Especially given you've explained that all these diseases that we're sorta stuck with are actually quite new. If we take that perspective seriously, was life before human population exploded and we had agriculture just much more pleasant, at least in comparison?", "Kyle Harper", "Homo sapiens is 200,000 to 300,000 years old. We emerge in Africa and disperse, multiply, but we spend 90, 95% of our history as foragers . People who are hunter-gatherers , who take energy from wild food sources, rather than sedentary farmers who've domesticated plants and animals and live a sedentary lifestyle where you're enslaved to this wheat or rice, but it gives you reliable calories. That is along with the Industrial Revolution, and then whatever this thing we're about to go through, the biggest change in the history of our species.", "The shift from foraging to farming affected everything. It affected our beliefs. It affected our genetics. We're all basically genetically different, adapted to live in a different kind of environment with different kinds of diets. It affected our societies, it affected inequality, it affected culture in every possible way, and of course it affected our health in really basic ways. It affected our labor regime; doing the same kind of labor over and over every day is very different from running around as a hunter, chasing deer or whatever, which sounds quite nice.", "It changed our labor regimes. It changed our diet, most of all. Hunter-foragers tend to eat high protein, high fat-ish diets, with no refined carbohydrates, but limited carbs. It's a very varied, highly varied diet. Sedentary farmers tend to eat more monotonous diets and they tend to be dependent on grains and starches, so a very narrow spectrum for your calories. Changes in labor regime, changes in the diet, and then changes in lifestyle, being sedentary and living in big populations that then puts you in proximity to other humans, puts you in proximity to human waste. Feces are a major conduit of infection. It puts you into proximity to the air they breathe, which is conducive to respiratory diseases.", "This transition, which takes thousands of years, is more of a process than an event. It has massive implications for human health, including the infectious disease environment that we inhabit. It's not like hunter-gatherers were living in paradise. The infectious diseases that they had were seriously burdensome, they sucked, and probably most people died of infectious disease. Malaria is a really old disease. Lots of diseases existed in the Pleistocene , in our Paleolithic past.", "It's not like it was Eden. There is this idea that the transition from foraging to farming... Jared Diamond called it humanity's biggest mistake . Certainly these changes entailed some things that were not net positive for humanity and one of them is that it definitely increased the infectious disease burden. Simply as our population multiplies and as we're in contact with feces and as we're sharing the air through which respiratory pathogens can spread, diseases are constantly trying to take advantage of this. That's just how nature works. Energy is scarce. Everybody's trying to steal it from everybody else, including microbial parasites .", "The disease burden of humans over time definitely increases. The burden of infectious disease on humans goes up over time. Very broadly across these thousands and thousands of years, the diseases suffered by people by the time of the Roman Empire are absolutely much worse than what had been the case in Stone Age times.", "Dwarkesh", "James Scott has an interesting theory in Against the Grain . I don't know if it originates with him, but he argues that one of the reasons that the early agriculturalists were so successful... And David Reich, if you've seen his stuff about The Yamnaya 4,500 years ago conquering all of Eurasia , but before them, the Anatolian ... the initial farmers are the ones who displaced the initial hunter-gatherers across Europe and Asia.", "He argues that initial wave was so successful because of these first diseases that the farmers had created the conditions to engender. Basically, the relationship these farmers had with respect to the foragers they were taking over from was similar to the relationship the Europeans had to the Native Americans, where inadvertently, the disease is just a significant player in why you were able to dominate them. I don't know how plausible you find that.", "Kyle Harper", "I mean, first thing, it's important, I think you were starting to get at this, that there's never a generation of humans that has the opportunity to make this choice once and for all, like, should we stay hunting mammoths or should we become sedentary farmers with basically torturous dentistry and die by diarrhea? This happens over thousands of years through an evolutionary process where nobody can... It's a story of unintended consequences. The mammoth are gone partly because we killed them all.", "People start... the first livestock that are domesticated are goats. Nobody says, \"Hey, let's become goat farmers.\" The goats are wild. They're ibexes . People are hungry, and so they start managing them to only kill the males to make sure that they can reproduce, and they start penning them and they start killing the wolves who are trying to attack them. Over very long periods of time, this becomes this tight mutualistic relationship where all of a sudden we're goat farmers. But no generation makes that whole decision for anybody.", "That's part of it, is that it's unintended consequences that are made in very incremental steps. Two is, I definitely agree that there's some kind of cultural selection here where the farming groups are simply so much more adapted to extract energy efficiently from the environment. It's all about energy. You want to multiply, you want to grow, you want to survive, it's all about energy. Foragers require huge landscapes to extract enough energy to feed themselves and grow and reproduce, whereas farmers per unit of land can extract such higher rates of energy that then can be, through photosynthesis , turned into edible sugars that we can metabolize.", "Those populations are just growing faster, that they quote unquote out-compete the hunter-gatherer population say of Europe that are largely but not completely displaced. Now, on top of that, just the energy story alone is a big piece of it, but then on top of that, you probably do have some kind of population difference in the exposure and possibly even immunity to infectious diseases. I definitely think that early farmers, the first farming societies that are starting to live sedentary lifestyles where you have aggregations—these are not cities, these are villages—but still, that's more than a hunter-gatherer band.", "Your childhood is then going to be constant exposure to a series of pathogens. Those kinds of populations, when they're then migrating into Europe, are probably carrying these pathogens with them that may have had a kind of further effect that on top of just being able to extract more energy and multiply faster, drives up the mortality of the existing populations.", "Dwarkesh", "Yeah. The point you made about fertility is interesting. I vaguely remember reading that it's not just the fact that the energy density is lower, it's that you're also moving around a lot. Because of that, you're spacing out kids much more so than if you were just in the same place. I think the actual fertility for foragers is sort of like reasonable—I don’t know if sustainable is the right word because I don’t mean in an ecological sense but more so—it keeps your population constant.", "Kyle Harper", "Yeah. Don't make me swear, but it's like more like four than six. Because women who are moving with the foraging bands miles and miles on foot on average a day, and also carrying kids, are gonna have very different life history than sedentary populations. That's very clear.", "Disease’s impact on cognitive function", "Dwarkesh", "One thing I'm really curious about is what effect these diseases through history have had on the cognitive functioning of people. You discuss this in the chapter about more recent history of the great divergence and probably attributed the productivity of Europe that they were able to have public health earlier. But literally going back thousands of years, you mentioned, for example, that Caesar was 5'5\" and that was considered tall during his period. Did the same diseases and malnutrition that caused these physical health effects also mean that the average IQ was much lower, because when you're a kid you're sick and that steals away nutrition from brain growth or something?", "Kyle Harper", "Yeah. Short answer, yes. Long answer, we know that in the modern world, say over the last 250 years, first in Western European societies and their settler offshoots and then more globally and more rapidly globally, there have been really deep physiological changes in the average human. We're talking about populations with distributions. What's happened is really two things. One is there's more energy per capita. People eat more, they eat more calories, and they eat better calories. They eat lots of bad stuff too, but people eat more. Two, the burden of infectious disease has been lowered.", "Growth for a human is a very complicated trait that's influenced by genetics. I was never destined to be super tall. But it's also affected by environment, which includes nutrition and what you spend either doing labor or what you spend fighting infectious disease. Infectious disease imposes a huge burden on the body. The immune system is extremely metabolically expensive. If your childhood is spent just fighting infectious diseases, you're going to struggle to invest energy in growth.", "There's massive increase in the size of populations over the last 250 years and even though it's an even more complicated trait, this improves people's cognitive abilities. People are smarter. May not feel like it—I think it has rapidly leveled off—but people are more intelligent today than they were 100 years ago. Their brains are better nourished and their bodies spend less time fighting pathogens. I think there's no doubt that pre-industrial populations, and again, populations—so you still have your Isaac Newtons , who whatever infected him as a kid, didn't slow him down—but at the population level, I think there's no doubt that not only were pre-industrial populations shorter, this is just a total fact that we know from their bones, but they probably also, on average, had a lower distribution of cognitive abilities. But with a big distribution.", "Dwarkesh", "You have a great profile in the book about living in London in the 18th century and just how disgusting it was.", "Kyle Harper", "It was pretty disgusting.", "Dwarkesh", "But at the same time, in that city, you were just mentioning, there are these scientists and people with towering intellects who were basically figuring out how the universe works and how to make all these machines and so forth. One answer is just like what you just said: the distribution was lower, but maybe Newton would have had an IQ a standard deviation higher if he was born today. Just seeing that from the small population, you're seeing so much genius. I guess the question is, how could you have had this much of a deleterious impact on cognitive functioning and still had enough spare geniuses to kick off the Industrial Revolution?", "Kyle Harper", "Obviously, it didn't keep them from discovering some pretty amazing things. So it couldn't have been completely destructive. That's what's interesting about the early modern period in the 17th, 18th century, in particular, is it's sort of this between period where you have the pre-industrial and the modern that are still mixed together in these really interesting ways. The example I use in the book is the very famous diary of Samuel Pepys , who's this incredible figure and is very close to Newton and that social group and his name is on the first edition of the Principia . These people are this close to each other.", "But the stuff that I evoke, I won't say—this is a family podcast—but the stuff that Pepys does, bodily functions, is mind-blowing to us. It's vile and disgusting. But at the same time, right down the way, you've got people who are making the most fundamental discoveries about the nature of the universe and inventing machines that will improve productivity and ultimately economic output. That's what's precisely so weird and interesting about that particular period is you have this kind of mingling of the old and the new.", "Dwarkesh", "When I had Joseph Henrich on, one of the things he discusses is if you look at... cultural evolution has figured out some remarkable things. If you look at the cuisines of different cultures, apparently the spices they use match the antimicrobial and antifungal properties you need in that particular biome. At the same time, you’re reading that part of the book , I'm like, \"Okay, I get in some cases, they just genuinely did not have the resources to invest in public health and so forth\". But come on, you're just like sleeping in your own vomit and so forth. Why didn't cultural evolution or something like foresight just be like, \"Hey, this we can sort of do without\"?", "Kyle Harper", "It's a deep question and what I think we don't think enough about is how, in a really fundamental way, hard are some problems to solve? Some problems are just very, very hard to solve. Even though the incentive is really there, you think, \"Ugh, that took a really, really long time to figure out\". Even though if you'd only known, it would've made your life so much better. There's tons of trial and error.", "The example that comes to my mind is the mention of vaccination . Which is one of the great human achievements, like, of all the public health improvements is the most important one. Public health is never perfect. It's this system of six or seven really critical tools that involve clean water, personal hygiene, vaccines, antibiotics , different kinds of therapeutic interventions or rehabilitation therapy. We still can't fend off all the germs. You have to have all of that and you can sort of achieve this equilibrium state where you mostly have it under control.", "Vaccination's the most important one and it took forever to find the first vaccine. It took this huge period of all kinds of weird trial and error, like inoculation with the actual smallpox , which is very, very dangerous. Not vaccination. Vaccination uses cowpox , the lymph of an infected cow, to intentionally cause the immune reaction of humans. Before that, people would inoculate a person with actual smallpox, which is just giving somebody smallpox. You do it through the skin, but you're giving somebody smallpox.", "It was absolutely in a utilitarian way, the rational thing to do. It had these horrific death rates. It would never get FDA approval. But in a world where 10, 20% of kids die of smallpox, it's this horrible decision, but you'd be rational to do it. We actually don't know where that comes from. It may come from Africa, it may come from China. It spreads for like a century or more before Jenner discovers vaccination. So it's clearly really hard to figure that out.", "Even after Jenner, it's like another 60 years, 70 years before Pasteur kind of systematizes it and says, \"Hey, we could do this for everything\". Some of these discoveries and innovations are really, really hard to discover. But then the beauty of cultural evolution is that we can store that information, and you and I don't have to figure out any of that. We can go on to the next problem because that's now been collectively stored in the library of cultural evolution. It's known, we don't even think about it most of the time until it's controversial. What a blessing to live after people like Jenner and Pasteur who figured that out.", "Dwarkesh", "There's this great blog post by the author Slime Mold Time Mold where it's discussing....", "Kyle Harper", "Wait. Author what?", "Dwarkesh", "You don’t know internet culture, you know, there’s a bunch of weirdos out there.", "Kyle Harper", "I'm in a different world, sorry. What did you call them?", "Dwarkesh", "Slime Mold Time Mold .", "Kyle Harper", "You can't just drop that. Like I'm gonna let that one slide. Okay. I got some homework.", "Dwarkesh", "Anyways, they have a blog post about scurvy and why it took so long to discover, and he was discussing all these sort of... it's sort of an epistemic conundrum because you can use lime and you realize, \"Oh, it works\". But then if you use lemons, which have much less vitamin C , or maybe it's the other way around, they just work way worse. Then there are certain kinds of fruit which have vitamin C, certain kinds which don't. It's actually hard to figure out what is it if you don't have a mechanistic explanation about how you solve this problem, and I think they had once figured it out and then they lost the knowledge until it was rediscovered again.", "But it makes it all the more mysterious that the kinds of things that Henrich discusses forager societies having figured out. Literally, there's this 10-step process for how to process a certain kind of bean so that you don't get cyanide poisoning , and if you mess up any one of those 10 steps, you're gonna get cyanide poisoning. A society just figures out the right taboos and traditions to process beans. But you can figure that out, but this thing which is causing 20% mortality, you only get in the 17th, 18th century.", "Kyle Harper", "Yeah. We need to think more about the computation that's happening. You said it takes like 10 steps to figure out how to process this one particular kind of food, but I'm guessing it is just really hard to figure out infectious disease. It's a really steep mountain. Once you get up to a certain plateau, then the discoveries come really fast. They become systematic, and they become more fundamental. But it was really hard to get there.", "Not that many societies really scaled it, not even within the societies that did. It was just a handful of people at first, but they did get there.", "Plague in India and Central Asia", "Dwarkesh", "Okay, and then asking about where different countries were at around this time. What evidence do we have about what was actually happening in India before the British or the Mughals because it does seem to be this sort of black box in terms of historiography ? Do we know if there were these huge plagues?", "Kyle Harper", "Yeah, it's such a tricky problem. Start with the third plague pandemic in the late 19th century. We know that that's in India. India's a big part of its history. In fact, its where the plague bacillus is discovered by Alexander Yersin . It's called Yersinia pestis in his honor. A Japanese scientist finds it exactly the same time. Gets left out of the nomenclature.", "Dwarkesh", "It’s a special kind of honor to have the deadliest agent in history.", "Kyle Harper", "To be the worst pathogen ever; immortality. The plague is definitely in India in the 17th century. We know that from contemporary written records that are pretty unambiguous about the presence of the disease. What we don't know is was it there before that? And if not, why not? Because it kind of actually seems like it's not. At least not in this same explosive way. That's pretty curious.", "We don't have a great explanation of that because India's connected to the Central Asian world where the plague is endemic. There's plenty of trade. It would have plenty of chance to move to the subcontinent. So we don't understand that. If you go back even further, that's the Black Death. You go back even further to the late antique period, it's like a total mystery, and the Indian sources from the fifth and the sixth century are not great.", "They're hard to use. This is totally outside my language abilities. They require totally different expertise. I've worked with some people who think that there are oblique references that may be interpreted as epidemic. One of the interesting things is we actually think that the plague moves through India to get to Rome.", "This is not definite, but the plague's enzootic , its natural animal reservoir is the Tian Shan Mountains where China, Kazakhstan, and Kyrgyzstan meet, and we can actually identify a pretty small region where the pandemic lineage comes from. We know that it doesn't go overland, so it's not like the Black Death, which goes across the steppe, the Mongol trade networks, military networks carry it. In the sixth century, probably, the plague goes south through India, and maybe the ports in Gujarat or along the West Coast that are still pretty connected with the Roman world, with East Africa, with Arabia, with the Red Sea , that the plague travels on ship across the Indian Ocean, because the Plague of Justinian shows up in the Red Sea.", "That is a clue that it probably is imported on this seaborne commerce. But how it got from Central Asia to Gujarat is a hard question.", "Dwarkesh", "Huh. I know the way we found that the Yersinia pestis existed in these Yamnaya 4,500 years ago is by... Didn't they just find the...? I don't know how, but if you can figure that out, why can't you look at the fossils of people 500 years ago or 1500 years ago and just see if they have Yersinia pestis in them?", "Kyle Harper", "There's two things. First of all, you have to look. There is at present not nearly the same amount of ancient DNA laboratory work that's happening on remains from ancient India. So if you're not looking, you're definitely not going to find it and people aren't looking.", "Secondly, it takes a lot of luck for it to preserve. The DNA molecule starts degrading the second you die, just starts falling apart. Even in the best of cases where we're getting it from... usually, if it's pathogen, you're getting it from the dental cavity. If it's human DNA, you're getting it from the inside of the skull. But it takes a lot of luck for it to preserve because the soil conditions will affect the degradation, the temperature will affect the degradation. Just in a crude sense, heat is bad. That's why there's more DNA, ancient DNA, that's preserved at more northern latitudes so far. But it has as much to do with the fact that people aren't looking. We should be looking, and if you've got skeletal materials from an ancient mass grave in India, call me. We can definitely look.", "Dwarkesh", "And just to be clear for the context-", "Kyle Harper", "Ancient, ancient, ancient.", "The next pandemic", "Dwarkesh", "Okay, going forward to the future a little bit, speaking of future technology, maybe the one that's more relevant than AI is synthetic biology , and there's a worry that you can potentially create diseases which... maybe the evolutionary gradient is one that is not catastrophic where diseases are incentivized to be transmissible but keep you at a chronic level of infection that doesn't necessarily kill you immediately. Actually, it's interesting why the bubonic plague diverges from that selection pressure, which maybe you can answer. What do you think about the potential that with synthetic biology people can make diseases that have the transmissibility of measles, but also the deadliness of something like Ebola ? Is that, given your understanding of biology and whatever, is that plausible given your understanding of biology?", "Kyle Harper", "Let me start with the plague where I'm a little more comfortable and can say something as a knowledgeable person. I think it's relevant because you said it's weird that the plague seems to sort of evade some of these evolutionary constraints, and it's worth saying what these are. A pathogen is a disease-causing organism, a microbe , usually a virus or a bacterium , but also fungi and single-celled organisms like protozoans that cause disease in a host . They're not trying to cause you disease. COVID doesn't hate you. Plague doesn't hate you. It's just evolution. It's just trying to steal energy or hijack your cells to reproduce its genes.", "In fact, it has incentives to try and do that as well as possible while doing the least possible damage. It's always kind of trying to thread that needle or to find the right balance, because if a pathogen just kills you instantly there's nothing to steal and it can't transmit its genes into the next generation. Every pathogen has these basic evolutionary problems. How do I get from one host to the next? And how do I evade my host's immunity, which our immune systems are incredible, for long enough to multiply?", "Most pathogens have to explore the space where there are these various constraints, and they find all sorts of weird ways around it. Evolution is really good and really creative, unfortunately for us. The tricks that they find to hide inside your immune system or to fake it out are really wild. There are two reasons why plague is so weird, and we don't completely understand why plague is so weird, but I think there's two basic reasons. One is that it's vector-borne , which means that it's transmitted through another organism that is the intermediate.", "Arthropod or insect vectors are really annoyingly helpful to certain pathogens, and most... there's actually a relatively small number of diseases that are transmitted through a vector like this, but they tend to be really nasty, like malaria, typhus . They can kind of get away with it because even if you're dying, a mosquito can come and bite you and transmit malaria to me. Plague is a vector-borne disease. It's very well adapted to transmit, particularly by fleas, but we think also maybe by lice and other biting organisms, but really by fleas. It's really good at transmitting by fleas.", "That's evolution. This is one of the cool things with ancient DNA we've been able to piece together at the absolute molecular level, the genetic changes that let it make this protein that have this effect in fleas. It's really weird, it forms this biofilm in the gut of the flea that chokes it and makes the flea feel like it's starving. So the flea just starts feeding and feeding and feeding, and meanwhile it's regurgitating bacteria.", "Dwarkesh", "Sorry, can I ask a question about that? Why is it the case... 'Cause there are diseases that hijack the flea's mind or ants' minds or something. Why isn't there a disease that makes humans zombies? Is it just the human brain is so complicated that it's like...?", "Kyle Harper", "Let me come back to this. We can talk about zombies, but we need to wind up for that. Okay, so one, flea- So the plague is vector-borne, and it's really good at manipulating the fleas, and it's just evolution. Two, I said this before, but it's an animal disease. We're like collateral damage. We're totally irrelevant to the really core evolutionary history. The plague just wants to infect rodents. Of course, I'm... it's not really wanting to do this. The plague makes a living, it survives out there in burrowing rodent colonies.", "We're like tertiary. It doesn't care at all. It has no evolved incentive to modulate its virulence to be able to transmit sustainably. Plague never sustains itself in human populations. It can transiently infect human populations, but then it always dies out. It becomes extinct, that lineage.", "Dwarkesh", "And then what is the reason that you have these 1,000-year cycles basically? Why is it not 500 years? Why is it not 10 years? Why is it not… What causes it to go dormant? What causes this to reemerge?", "Kyle Harper", "You need to ask me in five years because we've learned so much and now this is like the thing that would fall in the category of almost a new question now that we can ask. Now that we have the Neolithic lineages and the Bronze Age lineages, we're starting to piece together this fuller history. But we still don't even totally understand the boundaries of when the plague is really sort of not circulating in human populations, and what are the factors that cause it to be so explosive. Like, is it evolution of the bacterium? Is there something about the genetics of the lineages that escape from the animal reservoirs that are especially transmissible? Is it human ecology, like, that we put rodents like black rats in the right place to get the disease? Is there something about the climate stress that renders the population…? We don't have a great understanding of like why the plague comes and goes.", "That's scary. Connecting it to your other question about these superbugs, what's interesting in the very big picture about the plague to me is the history of infectious disease is like, on the one hand, there's a real core of it that's just basic principles of ecology and evolution. We do certain things in the environment that creates the conditions that pathogens can evolve and take advantage of. But on top of that, evolution is just creative and weird and contingent and unpredictable. It's those little, contingent facts that can end up having these really huge effects.", "In the case of the plague, if you were really knowledgeable about the basics of ecology and evolution of disease, you would never be like, \"I think that every now and then a rodent disease from Central Asia is gonna wipe out half of the continent\". Like, that shouldn't... that's not predictable. That shouldn't be happening. That one's kind of an outlier, but infectious disease is always kind of like that. Tuberculosis has probably killed more people maybe than any other infectious disease. It's like this horrible disease. We don't really understand it. Now we really don't understand where it came from because it doesn't look like it has an animal host before it has humans.", "It's just a weird disease. It's just a bacterial pathogen that, in the huge world of bacteria, this one is very good at hiding. It gets in your chest and it just lurks. Then it'll just waste you away, particularly if you're poor and you're stressed. There are some core principles there, but then it's just something weird about it. It's just this terrible luck that makes it what it is. To me, there's going to be another pandemic, maybe bird flu, maybe something else.", "But it's the real outliers and the weird ones that we should maybe worry about a little bit more than we do. If you want to go to zombies, I'll go there. You don't have to twist my arm too hard. But like prion diseases or fungal diseases where we don't have nearly the same infrastructure and level of knowledge, biomedical research as we do for bacterial and viral diseases, if we create the incentive, evolution is gonna find some weird ways to exploit it. It's not just transmissibility and virulence. Those are like two really basic parameters.", "When you look at even COVID-19, part of what made it insidious is it just has just the right parameters to be latent for just long enough. The first COVID, SARS-CoV-1 , 2003, slightly more virulent, and in fact, it was just more virulent enough that it made you sick pretty quick. Just that little difference was enough to contain it because you could figure out who was sick. COVID-19 was impossible to contain because it took several days before you really presented with clinical illness. It's just that little quirk that made it totally impossible to control through non-pharmaceutical interventions early on.", "Follow that train of thought... if pathogens are going to find ways to take advantage, and there may be pathogens that push the limits on latency, it can be very hard to control. One of the takeaways or the big evolutionary history of our pathogens is evolution is very weird, very contingent, very creative at exploiting whatever weakness we give it. It's because there are billions and billions and billions of microbes in this room. I don't know how many tens to hundreds millions of species of microbes are in this room. Most of them are not even remotely pre-evolved to be pathogenic, but lots are and they're constantly seeing if you managed to lock that door. They're just looking for a way to break in.", "How Kyle uses LLMs", "Dwarkesh", "Okay, just a couple more rapid fire questions for you. Have you found tools like Deep Research useful for especially your kind of work where you just have to compile insights from many different fields? If we throw in a question, the kinds of questions you honestly investigate and now maybe they can rely on you as a citation for those particular questions about what effect did climate have on the fate of Rome or something... But if you just had a different question which maybe you would write a book about in the future, how well do they do at synthesizing this kind of literature and coming up with a thesis the way you do?", "Kyle Harper", "Yeah, amazing. But not yet completely displacing or totally threatening the kind of work that a historian does. But at this point, I can't even conceive of what a research project would look like without using AI.", "Dwarkesh", "Really? Oh, really? That fast it's become so central to your work?", "Kyle Harper", "Yeah, but for just like, it's just like a constant conversation partner when you're doing research, when you're writing, you know, you can go back to that PDF and ask whatever, \"How many species are there in this taxon ,\" or you can just ask the AI. You still have to check it, but it's getting obviously more and more reliable really quickly.", "But I think it hasn't yet... in some of the deeper research, it's not the equal of humans yet, and then in the synthesis, it's really not. There's still that creative element of synthesis that's... where conceiving of the question is as important as the answer. It doesn't feel like it's right around the corner. But it's changed.", "Dwarkesh", "Have you used Deep Research?", "Kyle Harper", "Oh, yeah. I started using it like two weeks ago or so. I don't know how long it's been around? Somebody told me about it.", "Dwarkesh", "It's not that much longer.", "Kyle Harper", "Okay, somebody told me about it less than two weeks ago. Yeah, it's incredible. I mean, it's really incredible.", "De-extinction of lost species", "Dwarkesh", "Yeah. Now, I want to touch on your next book that isn't out yet, The Last Animal . One question I have is basically how worried should we be about extinction given that we're on the cusp of technologies which will make it possible for us to reanimate many lost species? I assume if we have their genome or something, our descendants will be able to make more wooly mammoths and saber-toothed tigers and so forth. Should we discount the value of endangered species as a result?", "Kyle Harper", "I would say no. We should still be concerned with extinction for a couple reasons. One is, absolutely this is a legitimate, serious scientific field to understand the genomics of extinct animals. There is small but serious enough science of de-extinction. It's feasible that some organisms could be targeted for serious de-extinction efforts.", "At the same time, a couple of thoughts. One is, I'm not that optimistic that it will work, not because I think it's necessarily impossible, although it's not yet totally feasible, particularly for animals that don't have very similar modern descendants. It's because a species isn't just a genome. A species is an organism that inhabits a food web and an ecosystem. We could bring the wooly mammoth back, but there's nowhere for them to live. The mammoth steppe that they need to thrive is not there.", "There's really very little point in bringing an animal back from extinction just to put it in a box at a zoo to satiate our curiosity about it. Without the ecosystem, you can't have the species. One of the themes that I try and get at in the book that I'm trying to finish is, we need to think about living systems, ecosystems, and the extinction question is very much a question of what kinds of systems will exist on the planet? Whatever happens technologically in 100 years, 1,000 years, the impacts that humans have on biodiversity is gonna be very long-lasting.", "We're part of a species that has been impacting biodiversity for over 10,000 years, and there are things we can't undo. There are things we can't change about the past. We're making decisions right now that will be binding on the future whether our descendants like it or not. We need to think very hard about what choices do we want to make to keep intact the kind of variety and vibrancy of living systems that in 1,000 years, 10,000 years, that will be a huge part of our legacy. The impact that we make on the stream of macroevolution will be one of the really big things that our species does. It can sometimes be very hard to recognize that in our individual lives, but collectively, it will absolutely be part of our forever legacy on Earth. We need to think very carefully about the choices that we make.", "Dwarkesh", "I think that's an excellent note to close on. Just to plug one more time, we've been discussing Plagues Upon the Earth , which is the history of disease going back through the Neolithic to modern times, Fate of Rome which discusses the plagues and history of the Roman Empire considering climate and biology. We also discussed, what was the name of the book on slavery? Slavery in the Late Roman World ?", "Kyle Harper", "Slavery in the Late Roman World .", "Dwarkesh", "Yep. And the upcoming book is The Last Animal .", "Kyle Harper", "The Last Animal .", "Dwarkesh", "All linked in the description below. And where else can people find you?", "Kyle Harper", "In your descriptions. That's it. I’m not on social media, sorry.", "Dwarkesh", "Okay, got it. Well, you can find him here on this podcast.", "Kyle Harper", "Yes, exclusively." ]
[ "https://www.ou.edu/cas/classicsandletters/people/kyle-harper", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://www.amazon.com/Slavery-Late-Roman-World-275-425/dp/B005OYKE08/", "http://www.kyleharper.net/uncategorized/the-last-animal/", "https://reich.hms.harvard.edu/", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://en.wikipedia.org/wiki/Bubonic_plague", "https://en.wikipedia.org/wiki/Yersinia_pestis", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://en.wikipedia.org/wiki/Roman_Empire", "https://www.rome.net/roman-empire#:~:text=The%20High%20Empire%20(31%20BC%20%2D%C2%A0305%20AD)", "https://en.wikipedia.org/wiki/Pax_Romana", "https://en.wikipedia.org/wiki/COVID-19", "https://en.wikipedia.org/wiki/Black_Death", "https://en.wikipedia.org/wiki/Plague_of_Justinian", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://erenow.org/ancient/durantromecaesar/200.php#:~:text=%E2%80%9CTHE%20two,as%20Rome%20fell.", "https://en.wikipedia.org/wiki/Christianity", "https://en.wikipedia.org/wiki/Millenarianism", "https://en.wikipedia.org/wiki/Apocalypticism", "https://en.wikipedia.org/wiki/Holocene", "https://en.wikipedia.org/wiki/Ice_age", "https://www.smithsonianmag.com/science-nature/sixth-century-misery-tied-not-one-two-volcanic-eruptions-180955858/", "https://en.wikipedia.org/wiki/Aerosolization", "https://en.wikipedia.org/wiki/Volcanic_winter_of_536", "https://en.wikipedia.org/wiki/Eschatology", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://en.wikipedia.org/wiki/Roman_economy", "https://en.wikipedia.org/wiki/Edward_Gibbon", "https://en.wikipedia.org/wiki/Outline_of_The_History_of_the_Decline_and_Fall_of_the_Roman_Empire#:~:text=Volume%20I%20has%20a%20complex%20history%20of%20its%20own.", "https://alexandermeddings.com/history/ancient-history/empire-without-limit/#:~:text=If%20a%20man,of%20the%20laws.", "https://en.wikipedia.org/wiki/Industrial_Revolution", "https://en.wikipedia.org/wiki/Pre-industrial_society", "https://www.milorganite.com/lawn-care/organic-lawn-care/organic-vs-synthetic#:~:text=What%20is%20Synthetically%20Derived%20Fertilizer%3F", "https://en.wikipedia.org/wiki/Green_Revolution", "https://en.wikipedia.org/wiki/Iron_Age", "https://en.wikipedia.org/wiki/Galen", "https://en.wikipedia.org/wiki/Ptolemy", "https://en.wikipedia.org/wiki/Pliny_the_Elder", "https://en.wikipedia.org/wiki/Natural_History_(Pliny)", "https://en.wikipedia.org/wiki/Augustus", "https://en.wikipedia.org/wiki/Arts_Council_England", "https://en.wikipedia.org/wiki/Longitude_Prize", "https://en.wikipedia.org/wiki/Francis_Bacon", "https://en.wikipedia.org/wiki/Aristotle", "https://en.wikipedia.org/wiki/Royal_Society", "https://en.wikipedia.org/wiki/Denis_Papin", "https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz", "https://www.lindahall.org/about/news/scientist-of-the-day/denis-papin/#:~:text=Denis%20Papin%2C%20a%20French%20inventor,du%20vide%20(New...", "https://nat.org/", "https://scrollprize.org/", "https://en.wikipedia.org/wiki/Herculaneum", "https://en.wikipedia.org/wiki/Eruption_of_Mount_Vesuvius_in_79_AD", "https://en.wikipedia.org/wiki/Euclid", "https://en.wikipedia.org/wiki/Diophantus", "https://en.wikipedia.org/wiki/Greeks", "https://en.wikipedia.org/wiki/Greece", "https://en.wikipedia.org/wiki/Athens", "https://www.sciencedirect.com/topics/social-sciences/industrial-economy", "https://www.amazon.com/Slavery-Late-Roman-World-275-425/dp/B005OYKE08/", "https://en.wikipedia.org/wiki/Slavery", "https://en.wikipedia.org/wiki/Roman_Republic#:~:text=The%20late%20Republic%2C%20from%20133%C2%A0BC%20onward%2C%20saw%20substantial%20domestic%20strife%2C%20often%20anachronistically%20seen%20as%20a%20conflict%20between%20optimates%20and%20populares", "https://en.wikipedia.org/wiki/Cash_crop", "https://en.wikipedia.org/wiki/Colonialism", "https://en.wikipedia.org/wiki/Abolitionism_(disambiguation)", "https://www.kyleharper.net/uncategorized/the-last-animal/", "https://en.wikipedia.org/wiki/Intensive_animal_farming#:~:text=also%20known%20as%20factory%20farming", "https://en.wikipedia.org/wiki/Third_Servile_War", "https://en.wikipedia.org/wiki/Haitian_Revolution", "https://en.wikipedia.org/wiki/Pompeii", "https://en.wikipedia.org/wiki/Pater_familias", "https://www.amacad.org/person/aldo-schiavone", "https://en.wikipedia.org/wiki/Neo-Marxism", "https://en.wikipedia.org/wiki/Gladiator_(2000_film)", "https://en.wikipedia.org/wiki/Commodus", "https://en.wikipedia.org/wiki/Russell_Crowe", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://en.wikipedia.org/wiki/Human", "https://en.wikipedia.org/wiki/Forager", "https://en.wikipedia.org/wiki/Hunter-gatherer", "https://en.wikipedia.org/wiki/Malaria", "https://en.wikipedia.org/wiki/Pleistocene", "https://en.wikipedia.org/wiki/Paleolithic", "https://en.wikipedia.org/wiki/Jared_Diamond", "https://www.discovermagazine.com/planet-earth/the-worst-mistake-in-the-history-of-the-human-race#:~:text=was%20in%20many%20ways%20a%20catastrophe%20from%20which%20we%20have%20never%20recovered.", "https://en.wikipedia.org/wiki/Pathogen", "https://www.cdc.gov/parasites/about/index.html", "https://en.wikipedia.org/wiki/Stone_Age", "https://en.wikipedia.org/wiki/James_C._Scott", "https://yalebooks.yale.edu/book/9780300240214/against-the-grain/", "https://en.wikipedia.org/wiki/Eurasia", "https://en.wikipedia.org/wiki/Anatolia", "https://en.wikipedia.org/wiki/Ibex", "https://en.wikipedia.org/wiki/Photosynthesis", "https://en.wikipedia.org/wiki/Cognitive_skill", "https://en.wikipedia.org/wiki/Great_Divergence", "https://en.wikipedia.org/wiki/Public_health", "https://en.wikipedia.org/wiki/Isaac_Newton", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://en.wikipedia.org/wiki/Samuel_Pepys", "https://en.wikipedia.org/wiki/Philosophi%C3%A6_Naturalis_Principia_Mathematica", "https://en.wikipedia.org/wiki/Joseph_Henrich", "https://en.wikipedia.org/wiki/Antimicrobial", "https://en.wikipedia.org/wiki/Antifungal", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://en.wikipedia.org/wiki/Vaccination", "https://en.wikipedia.org/wiki/Antibiotic", "https://en.wikipedia.org/wiki/Inoculation", "https://en.wikipedia.org/wiki/Smallpox", "https://en.wikipedia.org/wiki/Cowpox", "https://en.wikipedia.org/wiki/Lymph", "https://en.wikipedia.org/wiki/Food_and_Drug_Administration", "https://en.wikipedia.org/wiki/Edward_Jenner", "https://en.wikipedia.org/wiki/Louis_Pasteur", "https://slimemoldtimemold.com/", "https://slimemoldtimemold.com/", "https://en.wikipedia.org/wiki/Vitamin_C", "https://en.wikipedia.org/wiki/Cyanide_poisoning", "https://en.wikipedia.org/wiki/Mughal_Empire", "https://en.wikipedia.org/wiki/Historiography", "https://en.wikipedia.org/wiki/Bacillus", "https://en.wikipedia.org/wiki/Alexandre_Yersin", "https://en.wikipedia.org/wiki/Kitasato_Shibasabur%C5%8D", "https://en.wikipedia.org/wiki/Enzootic", "https://en.wikipedia.org/wiki/Gujarat", "https://en.wikipedia.org/wiki/Red_Sea", "https://en.wikipedia.org/wiki/Synthetic_biology", "https://en.wikipedia.org/wiki/Ebola", "https://en.wikipedia.org/wiki/Microorganism", "https://en.wikipedia.org/wiki/Bacteria", "https://en.wikipedia.org/wiki/Protozoa", "https://en.wikipedia.org/wiki/Host_(biology)", "https://www.who.int/news-room/fact-sheets/detail/vector-borne-diseases", "https://en.wikipedia.org/wiki/Arthropod", "https://en.wikipedia.org/wiki/Typhus", "https://en.wikipedia.org/wiki/Neolithic", "https://en.wikipedia.org/wiki/Bronze_Age", "https://en.wikipedia.org/wiki/Tuberculosis", "https://en.wikipedia.org/wiki/SARS-CoV-1", "https://gemini.google/overview/deep-research/?hl=en", "https://en.wikipedia.org/wiki/Taxon", "https://www.kyleharper.net/uncategorized/the-last-animal/", "https://www.kyleharper.net/uncategorized/the-last-animal/", "https://www.amazon.com/Plagues-upon-Earth-Princeton-Economic/dp/B094DX4ZFP/", "https://www.amazon.com/Fate-Rome-Climate-Disease-Princeton/dp/B071SLPWVL", "https://www.amazon.com/Slavery-Late-Roman-World-275-425/dp/B005OYKE08/", "https://www.amazon.com/Slavery-Late-Roman-World-275-425/dp/B005OYKE08/", "https://www.kyleharper.net/uncategorized/the-last-animal/", "https://www.kyleharper.net/uncategorized/the-last-animal/" ]
https://www.dwarkesh.com/p/lars-doucet
Lars Doucet - Progress, Poverty, Georgism, & Why Rent is Too Damn High
[ "Dwarkesh Patel - 00:00:39:", "Today I have the pleasure of speaking with Lars Doucet , who developed the highly acclaimed game Defender's Quest and part two is coming out next year, but now he's working on a new startup. But the reason we're talking is that he wrote a review of Henry George's Progress and Poverty that won Scott Alexander's Book Review Contest and now it has been expanded into a book called Land is a Big Deal .", "Lars, welcome to the podcast.", "Lars Doucet - 00:01:10:", "Great to be here, Dwarkesh .", "Dwarkesh Patel - 00:01:10:", "Let's just get into it. What is Georgism?", "Georgism", "Lars Doucet - 00:01:12:", "The book is based off of the philosophy of a 19th century American economist by the name of Henry George from whom we get the name Georgism and basically George's thesis is kind of the title of my book, Land is a Big Deal .", "Georgism is often reduced to its main policy prescription that we should have a land value tax, which is a tax on the unimproved value of land, but not a tax on any buildings or infrastructure on top of the land, i.e. anything humans add.", "The basic insight of Georgism is reflected in the aphorisms you hear from real estate agents when they say things like the three laws of real estate or location, location, location and buy land, it's the one thing they're not making any more of. It's basically this insight that land has this hidden role in the economy that is really underrated. If you look at history through the right lens, control over land is the oldest struggle of human history. It goes beyond human history. Animals have been fighting over land forever. That's what they're fighting over in Ukraine and Russia right now. And basically the fundamental insight of Georgism is that over the last century, we've had this huge conflict. All the oxygen's been sucked up by Capitalism and Socialism duking it out. We have this assumption that you either have to be pro-worker or pro-business and that you can't be both. And Georgism is genuinely pro-worker and pro-business. But what it's against is land speculation.", "If we can find a way to share the earth, then we can solve the paradox that is the title of George's book, Progress and Poverty . Why does poverty advance even when progress advances? Why do we have all this industrialized technology and new methods and yet we still have inequality? In George's time it was industrial technology and in our time it’s computers and everything else. We have all this good stuff. We can make more than we've ever made before. There's enough wealth for everybody. And yet we still have inequality. Where does it come from? George answers that question in his book. And I expand on it in mine.", "Metaverse Housing Crises", "Dwarkesh Patel - 00:03:15:", "I'm excited to get into the theory of all of it in a second. But first, I'm curious how much of your interest in the subject has been inspired by the fact that as a game developer, you're constantly dealing with decentralized rent seekers, like Steam or the iOS app store? Is that part of the inspiration behind your interest in Georgism or is that separate?", "Lars Doucet - 00:03:38:", "That's interesting. I wouldn't say that's what clued me into it in the first place but I have become very interested in all forms of rent seeking in this general category of things we call land-like assets that come to first-mover advantages in these large platform economies. I've started to think a lot about it basically. But the essence of land speculation is you have this entire class of people who are able to basically gate keep access to a scarce resource that everybody needs, which is land, that you can't opt out of needing. And because of that, everyone basically has to pay them rent. And those people don't necessarily do anything. They just got there first and tell everyone else, “Well, if you want to participate in the world, you need to pay me.”", "The connection with game development that actually clued me into Georgism was …. I'd heard and read about Georgism before and thought it was interesting but then I started repeatedly noticing this weird phenomenon in online multiplayer games of virtual housing crises, which is the most bizarre concept in the world to me. Basically a housing crisis in the Metaverse and predecessors to the Metaverse going back 30 years. As early as Ultima Online when I was 19. It was this online game that you could play. And you could build houses in the game and put them down somewhere.", "What I found was that houses were actually fairly cheap. You could work long enough in a game to be able to afford to buy blueprints for a house which will be put somewhere but there was no land to put it on. At the time, I thought, “Oh, well. I guess the server filled up.” I didn't really think much about it. I was like, “This stinks. I didn't join the game early enough. I'm screwed out of housing.” And then I kind of forgot about it. And then 20 years later, I checked back in and that housing crisis is still ongoing in that game. That game is still running a good 25 years later and that housing crisis remains unsolved and you have this entire black market for housing. And then I noticed that that trend was repeated in other online games like Final Fantasy 14. And then recently in 2022, with all this huge wave of crypto games, like Axie Infinity, Decentraland, the Sandbox and then Yuga Labs' Board-Ape Yacht Club, the Other Side, had all these big land sales.", "At the time, I was working as an analyst for a video game consulting firm called Novik. And I told my employers, we are going to see all the same problems happen. We are going to see virtual land speculation. They're going to reproduce the conditions of the housing crises in the real world and it's going to be a disaster. I called it and it turns out I was right. And we've now seen that whole cycle kind of work itself out. And it just kind of blew my mind that we could reproduce the problems of the real world so articulately in the virtual world without anyone trying to do it. It just happened. And that is kind of the actual connection between my background in game design and getting “George-pilled” as the internet kids call it these days.", "Dwarkesh Patel - 00:06:43:", "There was a hilarious clip. Tim Dillon was on Joe Rogan's podcast and they're talking about Decentraland, where if you want to be Snoop Dogg's neighbor in the Metaverse, it costs like a couple million dollars or something. Joe Rogan was like, “So you think you can afford to live there?” And then Tim Dillon's like, “No, but I'm going to start another Metaverse and I'm going to work hard.”", "Tax Leisure?", "Dwarkesh Patel - 00:07:10:", "Let's go into Georgism itself. Tyler Cowen had a blog post a long time ago where he was comparing taxing land to taxing unimproved labor or unimproved capital. It's an interesting concept. So I have a CS degree, should I be taxed at the same level as an entry level software engineer instead of a podcaster because I'm not using my time as efficiently as possible? In some way, leisure is the labor equivalent of having an unimproved parking lot in the middle of San Francisco. Or if I'm just keeping my capital out of the economy and therefore making it not useful, maybe I should have that capital taxed at the rate of the capital gains on T-Bill. And this way, you're not punishing people for having profitable investments, which is what you're kind of doing with capital gains. What would you think of that comparison?", "Lars Doucet - 00:08:07:", "Before you can even answer that question, you've got to go back to the ground moral principles you're operating on. Is your moral operating principle — we want to increase efficiency so we're going to tax everyone in a way to basically account for the wasted opportunity cost? Which brings up a lot of other questions of who decides what that is.", "I think the Georgist argument is a little different. The tax we propose is efficient but it actually stems kind of from a different place, a more fundamental aspect of justice. From our perspective, if you work and you produce value, your work produced that value. If you save money and accumulate capital in order to put that capital to work to receive a return, you've also provided something valuable to society. You saved money so a factory could exist. You saved money so that a shipping company could get off the ground. Those are valuable contributions. But nobody made the Earth. The Earth pre-exists all of us.", "Someone who provides land actually does the opposite of providing land. They unprovide land, and then they charge you for opening the gate. So the argument for charging people on the unimproved value of land is that we want to tax unproductive rent seeking. We want to tax non-produced assets because we want to encourage people to produce assets. We want to encourage people to produce labor, to produce capital. We want more of those things. And there's that aphorism that if you want less of something, you should tax it.", "Maybe there is a case for some kind of galaxy brained take of taxing unrealized opportunity costs or whatever, but I'm less interested in that. My moral principles are more about, let's start with just taxing the things nobody has made and that people are gatekeeping access to. Let's essentially tax monopolies and rent seeking and then if we still need to raise more taxes, we can talk about that later. But let's start with just taxing the worst things in society and then stop taxing things we actually want more.", "We have this mentality right now where everything's a trade off and we have to accept the downsides of income taxes, of sales taxes, of capital taxes because we just need the revenue and it has to come from somewhere. My argument is that  it can come from a much better somewhere. So let's start with that.", "Dwarkesh Patel - 00:10:39:", "So I guess if it was the case that we've implemented a land value tax and we're still having a revenue shortfall and we need another kind of tax and we're going to have to keep income taxes or capital gains taxes. Would you in that situation prefer a tax where you're basically taxed on the opportunity costs of your time rather than the actual income you generated or the returns you would generate on your capital?", "Lars Doucet - 00:11:04:", "No, I think probably not. I think you would probably want to go with some other simpler tax for the sake of it because there's too many degrees of freedom in there. We can talk about why I will defend the Georgist case for land value tax but I think it gets different when you start judging what is the most valuable use of your time because that's a much more subjective question. Are you providing more value to society as being a podcaster or being a computer science person or creating a startup? That may not be evident for some time.", "Think of people who were never successful during their lifetimes. Think of the guy who invented the FM radio . He threw himself out a window because he never got it really adopted during his lifetime but it went on to change everything. If we were taxing him during his lifetime based off of what he was doing, of being a “failure”. If Van Gogh was taxed for wasting his life as an artist as he thought he was, which ultimately led to his suicide. A lot of these things are not necessarily realized at the time and it would need a much bigger bureaucracy to figure that all out. So I think you should go with a more modest tax.", "After land value tax, I think you should do things like severance tax on natural resources and other taxes on other monopolies and rents. And so I think the next move after land value tax is not immediately to capital and income taxes and sales taxes, but to other taxes on other rent seeking and other land-like assets that aren't literally physically land. And then, only after you've done all of those, if you still absolutely need more taxes then move on to the bad taxes.", "Dwarkesh Patel - 00:12:58:", "What is a severance tax?", "Lars Doucet - 00:12:59:", "Severance tax is a tax on the extraction of natural resources. It is what Norway does with their oil industry that has been massively successful and a key reason that Norway has avoided the resource curse .", "A Georgist purist will say it's essentially a land value tax but of a different kind. You can't normally extract land. Like in this house you're living in, you're not using up this land, but you can use up non-renewable resources. So a severance tax is basically — Nestle should be charged a severance tax for the water they're using, for instance, because all they're doing is enclosing a pre-existing natural resource that used to belong to the people and they're just putting it in bottles and selling it to people. They should be able to realize the value of the value-add they give to that water, but not to just taking that resource away.", "Speculation & Frontiers", "Dwarkesh Patel - 00:13:53:", "No, that makes sense. Okay, so let's go deep into the actual theory and logic of Georgism. One thing I was confused by is why property owners who have land in places that are really desirable are not already incentivized to make the most productive use of that land. So even without a land tax, if you have some property in San Francisco why are you not incentivized to construct it to the fullest extent possible by the law to collect rents anyways? Why are you keeping it as a parking lot?", "Lars Doucet - 00:14:28:", "Right. So there's a lot of reasons. And one of them has to do with, there's an image in the book that this guy put together for me. I'll show it to you later. But what it does is that it shows the rate of return. What a land speculator is actually optimizing for is their rate of return, right? And so if land appreciates by 10% a year, you're actually incentivized to invest in vacant land or tear down a property because the building of a tear down property is worth negative value. The land's cheaper because there's garbage on it. Basically your marginal dollar is better spent on more land than it is on building up.", "Dwarkesh Patel - 00:15:16:", "But eventually shouldn't this be priced into the price of land so that the returns are no longer 10% or they're just basically what you could get for any other asset? And at that point, then the rate of return is similar for building things on top of your existing land than buying a new land because that return has been priced into other land.", "Lars Doucet - 00:15:38:", "We just don't see that empirically. We see rising land prices as long as productivity and population increases. Those productivity and population gains get soaked into the price of the land. It's because of this phenomenon called Ricardo's Law of Rent and it's been pretty empirically demonstrated. It has to do with the negotiation power. Some people of course do build and invest. There's a lot of local laws that restrict people's ability to build. But another reason is just that it also has to do with the existing property tax regime. The existing property tax regime actively incentivizes empty lots because you have a higher tax burden if you build.", "So what actually happens is a phenomenon that's similar to oil wells. It's not just because of property taxes, those do encourage you to keep it empty. But there's this phenomenon called land banking and waiting for the land to ripen. Sure, I could build it now but I might have a lot of land parcels I've got and I don't need to build it now because I think the prices might go up later and it would be better to build on it later than it is now. And it's not costing me anything to keep it vacant now. If I build now, I'm gonna have to pay a little bit more property taxes. And I know in three years that the price is gonna be even better so maybe I'll wait to incur those construction costs then and right now, I'm gonna focus more on building over here. And I've got a lot of things to do, so I'm just gonna squat on it here.", "It's the same way as  to how I'm squatting on about 30 domain names, most of them bought before I got into Georgism. It's like, “Yeah, I'll pay 15 bucks a year to just hold it. Why not? I might use that someday.” I should probably release all the ones I have no intent of using because I was looking for a domain for my startup the other day and every single two-word dot com is taken and has been for 10 years and it's a similar phenomenon. Some of it is economic rational following of incentives and some of it is just — This is a good asset, I'm just gonna hold on to it because why not? And I don't have any pressure to build right now.", "This happens on the upswing and on the downswing of cities. So while the population's growing and while the population is declining, people will just buy a lot of land and hold it out of use because it's also just a great place to park money because it's an asset that you know if the population ever starts growing, it's gonna keep its value better than almost any other hard asset you have.", "Dwarkesh Patel - 00:18:16:", "Yep. I guess another broader criticism of this way of thinking is that this is all scarcity mindset that land is limited. Why don't we just focus on the possibility of expanding the amount of usable land? There's not really a shortage of land in the US. Maybe there's a shortage of land in urban areas but why don't we expand into the seas? And why don't we expand into the air and space? Why are we thinking in this sort of scarce mindset?", "Lars Doucet - 00:18:48:", "I love this question because actually our current status quo mindset is the scarcity mindset and Georgism is the abundance mindset. And we can have that abundance if we learn to share the land. Because right now, why don't we expand? And the answer is we've tried that. We've done it twice. And it's the story of America's frontier. Right now there's plenty of empty land in Nevada, but nobody wants it. And you have to ask why, right? You also have to ask the question of how did we have virtual housing crises in the Metaverse where they could infinitely expand all they want? How is that even possible?", "The answer has to do with what we call the urban agglomeration effect. What's really valuable is human relationships, proximity to other human beings, those dense networks of human beings. In a certain sense, the issue is that land is not an indistinguishable, fungible commodity. Location really matters. America has a finite amount of land, but it might as well be an infinite plane. We're not going to fill up every square inch of America for thousands of years if we ever do. What is scarce is specific locations. They're non-fungible.", "To a certain extent, if you don't want to live in New York, you can live in San Francisco or any other big city. But what makes New York New York is non-fungible. What makes San Francisco San Francisco is non-fungible. That particular cluster of VCs in San Francisco, until or unless that city completely explodes, and that moves somewhere else to Austin or wherever, at which point, Austin will be non-fungible. I mean, Austin is non-fungible right now.", "Let me talk about the frontier. We have done frontier expansion. That is why immigrants came over from Europe, and then eventually the rest of the world, to America to settle the frontier. And the losers of that equation were, of course, the Indians who were already here and got kicked out. But that was the original idea of America.", "I like to say that America's tragedy is that America is a country that has the mindset of being a frontier state, but is in fact a state which has lost its frontier. And that is why you have these conversations with boomers who are like, “Why can't the next generation just pull itself up by its bootstraps?” Because America has had at least two major periods of frontier expansion. The first was the actual frontier, the West, the Oregon Trail, the covered wagons, the displacement of the Indians.", "That was a massive time. That was the time in which Henry George was writing. That was when that frontier was closing. When all that free land was being taken, and the advantages of that land was now being fully priced in. What it means for a frontier to close is that now the value of good productive land is fully priced in. But when the frontier is open, you can just go out there and take it, and you can get productive land and realize the gains of that.", "The second frontier expansion was after Henry George's death. It was the invention of the automobile, the ability to have a job in the city, but not have to live in the city. Like I commuted in to visit you here, right? That is because of the automobile frontier opening that has allowed me to live in some other city, but be able to do productive work like this podcast by driving in. But the problem is that sprawl can only take you so far before that frontier closes as well, and by closes I don't mean suburban expansion stops. What I mean is that suburban homes fully price in the value of the benefits are able to accrue by having that proximity to a city, but still being able to live over here, through Ricardo's Law of rents.", "Dwarkesh Patel - 00:22:37:", "Yeah, but I feel like this is still compatible with the story that we should just focus on the increase in technology and abundance rather than trying to estimate how much rent is available now, given current status quo technologies. The car is a great example of this, but imagine if there were flying cars. Like in Where Is My Flying Car? There's a whole analysis about how if people are still commuting for 20 minutes a day, a lot more land is actually in the same travel distance as it was before, and now all this land would be worth as much, even in terms of relationships that you could accommodate. So why not just build flying cars instead of focusing on land rent?", "Lars Doucet - 00:23:21:", "Well, because these things have a cost. The cost of frontier expansion was murdering all the Indians and the cost of automobile expansion was climate change. There has to be a price for that. And the problem is eventually, when you get to the end of that frontier expansion, you wind up with the same problem we had in the first place. Eventually, the problem is the first generation will make out like gangbusters if we ever invent flying cars, even better like Star Trek matter teleporters. That'll really do it. Then you can really live in Nevada and have a job in New York. There are some people who claim that Zoom is this but it's not. We've seen the empirical effects of that and it's the weakest like semi-frontier we've had and it's already closed. Because of Zoom, houses like this over in Austin have gone up in value because there is demand for them and there's demand for people to telecommute. So the increased demand for living out in the suburbs is now basically priced in because of the Zoom economy. And so the thing is the first people who did that, those who got there really quick, the first people to log in to the Ultima online server, were able to claim that piece of the frontier and capture that value. But the next generation has to pay more in rent and more in home prices to get that.", "Social Value of Search", "Dwarkesh Patel - 00:24:34:", "Actually, that raises another interesting criticism of Georgism. This actually is a paper from Zachary Gochenour and Bryan Caplan titled A Search-Theoretic Critique of Georgism , and the point they made was that one way of thinking about the improvement to land is actually identifying that this land is valuable. Maybe because you realize it has like an oil well in it and maybe you realize that it's in the perfect proximity to these Chinese restaurants and this mall and whatever and then just finding which land is valuable is actually something that takes capital and also takes you deciding to upend your life and go somewhere, and all kinds of effort. That is not factored into the way you would conventionally think of the improvements to land that would not be taxed, right? So in some sense, you getting that land is like a subsidy for you identifying that the land is valuable and can be used to productive ends.", "Lars Doucet - 00:25:30:", "Right, yeah, I know. I've read that paper. So first of all, the first author of that Zachary Gochenour, I've not been able to pin him down on what exactly meant on this, but he's made some public statements where he's revised his opinion since writing that paper and that he's much more friendly to the arguments of Georgism now than when he first wrote that paper. So I'd like to pin him down and see exactly what he meant by that because it was just a passing comment.", "But with regards to Kaplan's critique, Kaplan's critique only applies to a 100% LVT, where you fully capture all of the land value tax, and the most extreme Georgists I know are only advocating for an 85% land value tax. That would still leave 15% and Kaplan doesn't account at all for the negative effects of speculation. He's making a “speculation is good, actually” argument. Even if we grant his argument, he still needs to grapple with all the absolutely empirically observed problems of land speculation. And if we want to make some kind of compromise that maybe speculation could have this good discovery effect, there's two really good answers to that.", "First, just don't do 100% LVT, which we probably can't practically do anyway because of natural limitations. It's like how you don't want to do 115% land value tax. That drives people off the land. So we want to make sure that we have a high land value tax but make sure not to go over. That would leave a sliver of land rent that would still presumably incentivize this sort of thing. There's no argument for why 100% of the land rent is necessary to incentivize the good things that Kaplan was talking about.", "The second argument is when he talks about oil, we have the empirical evidence from the Norwegian massively successful petroleum model that shows how you should deal with this in the case of natural resources. What Norway does is that they have a massively huge severance tax on oil extraction. According to Kaplan's argument, this should massively destroy the incentive for companies to go out there and discover the oil but empirically, it doesn't.", "The oil companies' argument is that they need the oil rents. We need these oil rents or we will not be incentivized for the massive capital cost of offshore oil drilling. Norway's like, “ Well, if you just need to cover the cost of offshore oil drilling, we'll subsidize that. We'll just pay you. We'll pay you to go discover the oil. But when you find the oil, that oil belongs to the Norwegian people.” You may keep some of the rents but most of it goes to the Norwegian people. But hey, all your R&D is free. All your discovery is free. If the problem is discovery, we just subsidize discovery. And then the oil companies are like, “Okay, that sounds like a great deal.” Because without that, what the oil companies do is that they're like, “Okay, we're taking all these risks. So I'm gonna sit on all these oil wells like people sitting on domain names because I might use them later and the price might go up later.” But now because there's a huge severance tax, you're forced to drill now and your actual costs of discovery and R&D and all those capital costs are just taken care of.", "Dwarkesh Patel - 00:28:26:", "But isn't there a flip side to that? One of the economic benefits of speculation, obviously there's drawbacks, but one of the benefits is that it gets rid of the volatility and prices where our speculator will buy when it's cheap and sell when the price is high. And in doing so, they're kind of making the asset less volatile over time.", "If you're basically going to tell people who have oil on their land that if you don't take it out, you're gonna keep getting taxed then you're encouraging this massive glut of a finite resource to be produced immediately, which is bad if you think we might need that reserve in the ground 20 years from now or 30 years from now when oil reserves are running low.", "Lars Doucet - 00:29:10:", "Not necessarily. The problem is that speculation in the sense you're talking about, like encouraging people to do arbitrage, is good for capital because we can make more capital. But we can't make more land and we can't make more non-renewable natural resources. And I just think the evidence just doesn't support that empirically because if anything, land speculation causes land values to just constantly increase, not to find some natural equilibrium. Especially with how easy it is to finance. Two-thirds of bank loans just chase real estate up.", "If you look at the history of the prices of residential real estate in America, it's not this cyclical graph where it keeps going back down. It keeps going up and up and up, just on a straight line along with productivity. And it underlines and undergirds major issues, everything that's driving our housing crisis, which then undergirds so much of inequality and pollution and climate change issues.", "With regard to speculation, even if I just bite that bullet and am like, “Okay, speculation is good actually.” I don't think anyone's made the case that speculators need to capture a hundred percent of the rents to be properly incentivized to do anything good that comes out of speculation. I think at some small reasonable percentage, 5 to 10% of the rents, maybe 15% if I'm feeling generous, but I don't think anyone's empirically made the case that it should be a hundred percent, which is more or less the status quo.", "Dwarkesh Patel - 00:30:31:", "With regards to that pattern, the fact that the values tend to keep going up implies that there's nothing cyclical that the speculators are dampening.", "Lars Doucet - 00:30:41:", "Well, there are cycles to be sure, but it's not like something that resets to zero.", "Dwarkesh Patel - 00:30:45:", "Yeah, but that's also true of the stock market, right? Over time that goes up but speculators still have an economic role to play in a stock market of making sure prices are…", "Lars Doucet - 00:30:55:", "The difference is that people are now paying an ever increasing portion of their incomes to the land sector and that didn't used to be the case. You have people now who are paying 50% of their income just for rent. That's not sustainable in the long term. The cycle you have there is revolution. [Laughter]", "No, I’m serious. You look through history, and you either have land reform or you have revolution. Either you have a never ending cycle of transfers of income from the unlanded to the landed and eventually the unlanded will not put up with that. There was a real chance at the end of the 19th century of America going full on Socialist or Communist and the only thing that saved us was... George's argument was, it's either Georgism or Communism. And if you want to save capitalism and not go totalitarian, we need Georgism. What George failed to anticipate was, of course, the automobile.", "The automobile kicked the can down another couple generations and it came at the cost of sprawl. That made everyone feel like we had solved the issue but the cost of sprawl is enormous in terms of pollution and poor land use. Just look at Houston right now. We've come to the end of that frontier and now we're at the same question. You see this resurge in interest in leftism in America and that's not a coincidence. It’s because the rent is too damn high and poor people and young people feel really, really shoved out of the promise and social contract that was given to their parents and they're jealous of it and they're wondering where it went.", "Will Georgism Destroy The Economy?", "Dwarkesh Patel - 00:32:36:", "Yeah. Actually, you just mentioned that a lot of bank loans are given basically so you can get a mortgage and get a house, towards land. There was an interesting question on Twitter by Trevor Acorn. They asked if that's the case and if most bank loans are going towards helping you buy land that's artificially more expensive, but now you implement a land value tax and all these property values crash. Well then all these mortgages.. they obviously can't pay them back.", "Lars Doucet - 00:33:13:", "Right, right. Are we gonna destroy the banking sector?", "Dwarkesh Patel - 00:33:15:", "Exactly. We'll have a Great Great Depression?", "Lars Doucet - 00:33:17:", "I’m not trying to compare landlords to slave owners or something but the South had an entire economy based on slavery. This thing that we now agree was bad and it's like we should have kept slavery because it really disrupted the Southern Economy when we got rid of slavery. But it was still the right thing to do.", "There is no magic button I could push, as much as I might like to do so, that will give us 100% land value tax everywhere in America tomorrow. I think the actual path towards a Georgist future is gonna have to be incremental. There'll be enough time to unwind all those investments and get to a more sane banking sector. If we were to go overnight, yeah, there would be some shocks in the banking sector and I can't predict what those would be, but I also don't think that's a risk that's actually gonna happen. Because we just cannot make a radical change like that on all levels overnight.", "Dwarkesh Patel - 00:34:13:", "Yeah, okay. Let's get back to some of these theoretical questions.", "One I had was, I guess I don't fully understand the theoretical reason for thinking that you can collect arbitrarily large rents. Why doesn't the economic principle of competition hold for renting for the same reason that profit is competed away in any other enterprise? I get that there's not infinite landowners, but there are multiple landowners in any region. So if one landowner is extracting $50 a profit a month, and another landowner is extracting a similar amount and they're both competing for the same tenant. One of them will decrease their rent so that the tenant will come to them and the other one will do the same and the bidding process continues until all the profits are bidded away.", "Lars Doucet - 00:35:04:", "So this is Ricardo's law of rent. And there's a section in the book with a bunch of illustrations you can show. The issue is that we can't make more land, right? You might be like, “Well, there's plenty of land in Nevada.” But the point is there's only so much land in Manhattan.", "Dwarkesh Patel - 00:35:19:", "But the people who have land in Manhattan, why aren't they competing against themselves or each other?", "Lars Doucet - 00:35:23:", "Because the nature of the scarcity of there's only so many locations in Manhattan and there's so many people who want to live there, right? And so all the people who want to live there have to outbid each other. Let me give a simple agricultural example model and then I will explain how the agricultural model translates to a residential model.", "Basically, when you are paying to live in an urban area, or even a suburban area like here in Austin, what you're actually paying for is the right to have proximity to realize the productive capacity of that location i.e. I want to live in Austin because I can have access to a good job or whatever is cool about Austin, a good school, those amenities. The problem is you have to pay for those and you have to outbid other people who are willing to pay for those. And Ricardo's law of rent says that as the value of the amenities and the productivity of an area goes up, it gets soaked into the land prices. And the mechanism by which it happens is, say I want to buy a watermelon and there's only one watermelon left, I have to outbid that guy. But the watermelon growers can be like, “Oh, a lot of people want watermelon.” So next season, there's going to be more watermelons because they’re going to produce more watermelons.” But because there's only so many locations in Austin within the natural limits of our transportation network, it forces the competition on the side of the people who are essentially the tenants. It forces us into one sided competition with each other.", "A simple agricultural example is — say there is a common field that anyone can work on and you can make 100 units of wealth if you work on it. And there's another field that you can also earn 100 units of wealth in, but it's owned by a landowner. Why would you go and work on the landowner’s land when you're going to have to pay him rent? You wouldn't pay them any rent at all, you would work on the field that's free. But if the landowner buys that field and now your best opportunity is a free field that will only produce 10 units of wealth, now he can charge you 90 units of wealth because you have no opportunity to go anywhere else.", "So basically as more land gets bought, subject to private ownership in an area, landowners over time get to increase the rent. Not to a maximum level, there are limits to it. And the limit is what's called the margin of production, which is basically you can charge up to the best basic free alternative and this is where the competition comes in.", "You can realize that geographically, like out on the margins of Austin, there's land that basically is available for quite cheap, and it might be quite far away, and it used to be not so quite far away 20-30 years ago. As that margin slowly gets privatized, landowners can charge up to that margin. The other limit is subsistence. They can't charge more than you're actually able to pay.", "When the entire continent's free, the first settler comes in, strikes a pick in the ground, keeps all of their wealth, but as more and more of it gets consolidated, then landowners are able to charge proportionately more until they're charging essentially up to subsistence.  This is how frontier expansion works.", "The Economics of San Francisco", "Dwarkesh Patel - 00:38:51:", "Does that explain property values in San Francisco? They are obviously very high but I don't feel like they're that high where the software engineers who are working at Google are living as subsistence levels, neither are they at the margin of production where it's like, this is what it would cost to live out in the middle of California, and then commute like three hours to work or something.", "Lars Doucet - 00:39:13:", "It has to do with two things. First of all, it's over the long run. You've had a lot of productivity booms in San Francisco and so it takes some time for that to be priced in and it can be over a while. But given a long enough time period it'll eventually get there.", "And then when we're talking about stuff, it's also based off of the average productivity. The average resident of San Francisco maybe doesn't necessarily earn as high an income as a high income product worker. And so this means that if you are a higher than average productivity person, it's worth it to live in the expensive town because you're being paid more than the average productivity that's captured in rent, right? But if you're lower than average productivity, you flee high productive areas. You go to more marginal areas because those are the only places you can basically afford to make a living.", "Dwarkesh Patel - 00:40:06:", "Okay, that's very interesting. That's actually one of the questions I was really curious about so I'm glad to hear an answer on that.", "Another one is, so the idea is that land is soaking up the profits that capitalists and laborers are entitled to in the form of rent. But when I look at the wealthiest people in America, yeah there's people who own a lot of land but they bought that land after they became wealthy from doing things that were capital or labor, depending on how you define starting a company. Like sure, Bill Gates owns a lot of land in Montana or whatever, but the reason he has all that wealth to begin with is because he started a company, that's basically labor or capital. So how do you explain the fact that all the wealthy people are capitalists or laborers?", "Lars Doucet - 00:40:47:", "One of the big misapprehensions people have is that, when they think of billionaires, they think of people like Bill Gates and Elon Musk and Jeff Bezos. Those are actually the minority billionaires, most billionaires are people involved in hedge funds, they are bankers. And what are two thirds of banks? It's real estate.", "But more to your point, I don't necessarily have a problem with the billionaire existing. I don't necessarily buy the narrative that billionaires are solely responsible for everything that comes out of their company, I think they like to present that image. But I don't necessarily have a problem with a billionaire existing. I have a problem with working class people not being able to feed their families. So the greater issue is the fact that the rent is too high rather than that Jeff Bezos is obscenely rich.", "Dwarkesh Patel - 00:41:45:", "No, no. I'm not complaining that your solution would not fix the fact that billionaires exist. I also like that there are billionaires. What I'm pointing out is that it's weird that your theory says that all the surplus in our society is going to landowners and yet the most wealthy people in our society are not landowners. Doesn't that kind of contradict your theory?", "Lars Doucet - 00:42:11:", "A lot of the wealthy people in our society are landowners. Making wealth off land is a way to make wealth without being productive. Like you said in your interview with Glaeser that, the Googleplex, the value of that real estate is not comparable to the market cap of Google. But now compare the value of all the real estate in San Francisco to the market caps to some of those companies in there. Look at the people who are charging rent to people who work for Google. That's where the money's actually going.", "If you earn $100,000 in San Francisco as a family of four, you are below the poverty line. The money is going to basically upper middle class Americans and also the old and the wealthy who own tons of residential land. They are essentially this entire class of kind of hidden landed gentry that are extracting wealth from the most productive people in America and young people, especially. And it creates really weird patterns, especially with service workers who can't afford to live in the cities where their work is demanded.", "Transfer from Landowners to Google?", "Dwarkesh Patel - 00:43:30:", "Okay, so what do you think of this take? This might be economically efficient but the effect of the land value tax would be to basically shift our sort of societal subsidy away from upper middle class people who, happen to own land in urban areas and shift that to the super wealthy and also super productive people who will control the half acre that Google owns in Mountain View. It's easing the burden on super productive companies like Google so that they can make even cooler products in the future. But it is in some sense a little aggressive, that you're going from upper middle class to tech billionaire. But it will still be economically efficient to do that.", "Lars Doucet - 00:44:18:", "Well, no, I don't quite agree with that because although there are a lot of upper middle class Americans who own a lot of the land wealth, it's not the case that they own where the majority of the land wealth is. The majority of the land wealth in urban areas is actually in commercial real estate. It’s in the central business district.", "I work in mass appraisal, so I've seen this myself in the models we build. If you look at the transactions in cities and then you plot where the land value is in a graph, it looks like this _/\\_ And this, the peak, is the city center and that's not a residential district. While the residential districts are sucking up a lot of land value and the rent is too damn, the central commercial real estate is an order of magnitude more valuable. And this even holds even in the age of Zoom. It's taken a tumble but it's starting from a very high level. A lot of this real estate is very poorly used. In Houston especially, we have all these central parking lots downtown. That is incredibly valuable real estate and just a couple of speculators are just sitting on it, doing nothing with it. That could be housing, that could be offices, that could be amenities, that could be a million things. And so when you're talking about a land value tax, those are the people who are going to get hit first. And those are people who are neither nice, friendly upper middle class Americans, nor are they hard working industrialists making cool stuff. They're people who are doing literally nothing.", "If you do a full land value tax, yeah, it's going to shift the burden in society somewhat. But I feel that most analyses of property taxes and land value taxes that conclude that are regressive. I think that's mostly done on the basis of our current assessments and I feel like our assessments could be massively improved and that if we improve the assessments, we can show where most of our land values actually concentrated. Then we can make decisions about if we are comfortable with these tax shifts.", "Dwarkesh Patel - 00:46:18:", "So a while back I read this book, How Asia Works .", "Lars Doucet - 00:46:45:", "Yes, I'm a fan.", "Asian Tigers and Land Reform", "Dwarkesh Patel - 00:46:47:", "One of the things the author Joseph Studwell talks about is he's trying to explain why some Asian economies grew like gangbusters in the last half of the 20th century, and one of the things he points to is that these economies implemented land reform. They distributed land away from the existing aristocracy and gentry towards the people who are working the land.", "While I was reading the book at the time, I was kind of confused because there's something called the [Unclear]. The idea is that regardless of who initially starts off with a resource, the incentive of that person will be to lend out that resource to be worked by that person who can make most productive use of it. And Studwell was pointing out that these small peasant farmers will pay attention to detail of crop rotation and making the maximum use of this land to get the maximum produce whereas if you're like a big landowner, you will just try to do something mechanized that is not nearly as effective. And in a poor country, what you have is a shitton of labor. So you want something that's labor intensive.", "Anyways, backing up a bit, I was confused while I was reading the book because wouldn't what you would expect to happen in a market is that the peasants get a loan from the bank to rent out that land and then they are able to make that land work more productively than the original landowner. Therefore, they are able to make a profit and everybody benefits basically. Why isn't there a [Unclear] solution to that?", "Lars Doucet - 00:48:24:", "Because any improvement that the peasants make to the land will be a signal to the landowner to increase the rent because of Ricardo's law of rent. And that's exactly what happened in Ireland and George talks about this in Progress and Poverty. A lot of people were like, why was there famine in Ireland? It's because the Irish are bad people. They're lazy. Why didn't they improve? It's because if you improve the land, all that happens is you still are forced into one sided competition and the rent goes up.", "Dwarkesh Patel - 00:48:50:", "Yep, ok. That makes sense.", "The taxes you would collect with the land value tax, are they meant to replace existing taxes or are they meant to give us more services like UBI? Because they probably can't do both, right? Like you either have to choose getting rid of existing taxes or getting more..", "Lars Doucet - 00:49:08:", "Well, it depends how much UBI you want. It's a sliding scale. How many taxes do you want to replace versus how much? You can have a budget there. I show in the book the exact figures of how much I think land value tax could raise. And I forget the exact figures, but you can pull up a graph and overlay it here of whether you're talking about the federal level or federal, local, and state.", "There's $44 trillion of land value in America and I believe we can raise about $4 trillion in land rents annually with 100% land value tax. We would probably do less than that in practice. But even on the low end you could fully pay for any one of social security, Medicare plus Medicaid together, so the second one is healthcare or defense, entirely with the lowest estimate of what I think land rents could raise. I think you can actually raise more than that and I give an argument in the book for why I think it's closer to like $4 trillion. And that could pay for all three and have room over for a little bit of extra.", "So I mean, it's up to you. That's a policy decision of whether you want to spend it on spending, whether you want to spend it on offsetting taxes or whether you want to spend it on UBI. I think the best political solution, because if I bite the bullet that there might be some regressivity issues left over, you want to do what's called a UBI or what in George's time was called a citizen's dividend. This will smooth over any remaining regressivity issues.", "But I very much am in favor of getting rid of some of these worst taxes, not just because they have deadweight loss and land value tax doesn't, but also because there's this tantalizing theory called ATCOR, All Taxes Come Out of Rent, which suggests that if you reduce other taxes, it increases land values, which means that if it's true in the strongest sense, it means the single tax Land value tax replaced all taxes would always work. I'm not sure if I buy that, I want to see some empirical evidence, but I think at least some weak form of it holds such that when you offset other worst taxes, not only do you get rid of the deadweight loss from those, but you also wind up raising at least a little bit more in land value tax revenue.", "Libertarian Georgism", "Dwarkesh Patel - 00:51:20:", "As somebody who has libertarian tendencies, this obviously seems better than our current regime of taxing things that are good, basically capital and income. But my concern is that the way I'm guessing something like this would be implemented is it would be added on top of rather than repealing those taxes. And then, I guess we would want to ensure….", "Lars Doucet - 00:51:44:", "I get this one a lot. I have been a libertarian in my past, and I have a soft spot for libertarianism. I used to be a Ron Paul guy back in the day for a hot minute.", "I think the thing to assuage your concerns there is, what is land value tax? It's property tax without a tax on buildings. So the natural path to actually getting land value tax comes from reforming existing property tax regimes by reducing an entire category of taxation, which is the tax on buildings. And so that's what I think is the most plausible way to get a land value tax in places like Texas.", "What I actually proposed for our first step is not 100% land value tax federally. I don't even know how you get there. I think what you actually do is you start in places like Texas and legalize split-rate property tax, re-tax buildings and land at separate rates, set the rate on buildings to zero, collect the same dollar amount of taxes.", "Let's start there. There's proposals to do this in various cities around the nation right now. I think there's one in Virginia. There's a proposal to do in Detroit. I think there's some talk of it in Pennsylvania and some places. I'd like to see those experiments run and observe what happens there. I think we should do it in Texas. And that would be something that I think would be very friendly to the libertarian mindset, because very clearly no new revenue and we're exempting an entire category of taxation. Most people are gonna see savings on their tax bill and the people who own those parking lots downtown in Houston are gonna be paying most of the bill.", "Dwarkesh Patel - 00:53:14:", "By the way, is there a Georgist critique of the government itself? In a sense that government is basically the original land squatter and it's basically charging the rest of us rents. It's neither productively improving. As much as at least it's getting rents or must work. Like if you think about it, even your landlord usually is not charging you 40%, which is what the income tax rate is in America, right? And it's almost like you can view America as the landlord of America.", "Lars Doucet - 00:53:46:", "If what you're asking is Georgism compatible with full anarcho-capitalist libertarianism, probably not 100%. I think we can have a little government as a treat. But if you look throughout America's founding, I don't think it's a coincidence that originally, it used to be that only white land-owning men could vote. A government by the landowners for the landowners of the landowners. And that's very much kind of the traditional English system of government, just neo-feudalism.", "Georgism certainly has a critique of that. That the government is often instituted to protect the interests of landowners. But what's interesting is that if you look throughout history, I'm very much a fan of democracy, rule of the people. I kind of sympathize with Milton Friedman here, where he might want to have less government than we have now, but he doesn't believe we can have no government. And then he goes on to endorse the land value taxes, the least worse tax. Because income tax especially, I feel is a gateway drug to the surveillance state. One of the advantages of land value tax is you don't even care necessarily who owns the land. You're just like, “Hey 4732 Apple Street, make sure the check shows up in the mail. I don't care how many shell companies in the Bahamas you've obscured your identity with, just put the check in the mail, Mr. Address.”", "Whereas the income tax needs to do this full anal probe on everyone in the country, and then audits the poor at a higher rate than the rich, and it's just this horrible burden we have. And then it gives the government this presumed right to know what you're doing about everything you're doing in this massive invasion of privacy.", "Crypto", "Dwarkesh Patel - 00:55:42:", "That's fascinating.", "By the way, speaking of shell companies in the Bahamas there's an interesting speculation about what would happen if crypto really managed to make your log of transactions private. Then, I guess the idea is that the only legible thing left to the government is land. So it would force the government to institute a land value tax, because you can't tax income or capital gains anymore since that's all on like the blockchain and is obscured in some way. Is crypto the gateway drug to Georgism, because it'll just move income and capital to the other realm?", "Lars Doucet - 00:56:20:", "Yeah, it's just so weird. I've gone on record as being a pretty big crypto skeptic. But I have noticed a lot of crypto people get into Georgism, not the least of which is Vitalik Buterin who endorsed my book and is a huge fan of Georgism. I'll take fans from anywhere, even from people I've had sparring contests with.", "I'm generally pretty skeptical that crypto can fulfill all its promises. I am excited by those promises, and if they can prove me wrong, that would be great. And I think there's some logic to what you're saying, that if we literally couldn't track transactions, then I guess we don't have much to tax except land. I don't think that'll actually come to pass just based off of recent events.", "And that's basically my position on it. But I have noticed that a lot of crypto people are some of the easiest people to convince about Georgism, which was completely surprising to me. But I've learned a lot by talking to them. It's very interesting and weird.", "Transitioning to Georgism", "Dwarkesh Patel - 00:57:16:", "There were some other interesting questions from Twitter. Ramon Dario Iglesias asks, how do you transition from a world today where many Americans have homes or atleast are aspiring to have homes to a world with a different regime? They might still have homes, but who knows. Their property will just be thought about in a completely different way. How do you transition to that? What would that transition look like for most Americans?", "Lars Doucet - 00:57:39:", "There's this issue that I have to grapple with called Gordon Tullock’s Transitional Gains Trap . If you think about taxi medallions in New York City, it's this artificially scarce asset that allows you to operate a taxi. The first generation that got their taxing medallions basically got in cheap and then made out like gangbusters afterwards. But the second generation had to buy those taxi medallions at the fully priced-in value. And now when you come in and you're like, “Oh, okay. We're going to abolish taxi medallions.” Say you were going to do that, you would screw over that entire second generation who bought in in good faith after the value of the asset had been fully priced in. Even if you admit that the system is now unfair, removing that unfairness screws over the people who played by the “unfair rules.” So how do you grapple with that? And I think it's something that Georgists need to grapple with because we can't just imagine a future utopia without accounting for being fair to people who played by the rules, including people like myself. I'm a homeowner. Am I intending to screw over myself and everyone like me?", "I think this is where it's really important to do the math of knowing exactly who's going to be a winner, who's going to be a loser, who's going to pay more, who's going to pay less. I think it's really salient that a lot of the value of land is commercial downtown real estate. And I think that a revenue neutral property tax shift where we exempt the taxation of all buildings, but collect the same amount in property taxes as we're doing now, but just from the land and then a modest citizen's dividend is a really good first step. And then over the years, you can raise the land value rate as you also decrease things like income tax and sales tax. I think that's a transition that gets us there without really screwing anyone over. And for any edge case, like a poor sympathetic widow who has no income but has a high value home, you just make it so she doesn't have to pay the land value tax until she dies or sells the estate.", "Dwarkesh Patel - 00:59:32:", "And I guess even there, the worst case scenario is the status quo where they don't have to pay land value taxes, which is already the case now, right?", "Lars Doucet - 00:59:42:", "Nobody cares about the people who are being evicted and displaced by the status quo.", "Dwarkesh Patel - 00:59:47:", "One snafu in terms of figuring out how to price the land is by the time that a land value tax was passed, it'll have been years after this political talk of having a land value tax. And that talk will in turn affect the prices of homes that are sold during that time. So then you'll look at the land selling value and be like, “Oh wow. This house on the outskirts of San Francisco only sold for $200,000.” Does that mean that the unimproved land there is only worth $100,000? Won’t that really conflate the data when you actually go about implementing this?", "Lars Doucet - 01:00:28:", "It’s important to remember that land selling value is derived from land rental value, not the other way around. Land selling value is the net present value of the future flow of income that can be generated from the property. So the property's inherent productivity is inherent to it. And the selling price of it is based off of the capitalization of that value minus the expectation of any taxes. So a 100% land value tax will theoretically reduce the selling price to zero. But the land will still be as productive as it always was. It's just that the flow of those rents are being redirected. And so that's the thing is that, and also in mass appraisal, one of the things you do is you decapitalize the effect of the tax.", "Dwarkesh Patel - 01:01:18:", "But I'm saying, you don't know what the probability of a tax is in the mind of the property owner or the property seller. Since you don't know that, it's hard to estimate what is the actual capitalization, the right value..", "Lars Doucet - 01:01:35:", "Are you concerned about the societal effects of the depreciation of land prices or are you more concerned about just the calculation issue?", "Dwarkesh Patel - 01:01:16:", "The calculation issue.", "Lars Doucet - 01:01:17:", "Here's the thing. Empirically, if the land selling price has dropped to zero, then you are fully capturing all of the land rents. And if it's above zero, you have not captured all of the land rents.", "Dwarkesh Patel - 01:02:00:", "In the first two years that you try to implement this, you would be trying to mess with the rate.", "Lars Doucet - 01:02:11:", "It's more complicated than this but if there's any vacant lots in the area, and they're selling for anything, there's still land rent in that property.", "Dwarkesh Patel - 01:02:21:", "Gotcha. But this is not something you would be able to figure out on day one. You would know over the course of years of fudging the numbers.", "Lars Doucet - 01:02:26:", "We're doing property tax assessments right now. We're doing mass appraisal all the time right now. You could keep it updated every six months if you had the right technology, which is something I'm pushing for. You can see the prices change in real time as transactions come in and you can use multiple regression and geographic weighted regression to work out the difference between the improvements and the land prices.", "If you had a rental registry and knew what everyone was paying in rent, you'd be able to keep even better track on what's going on with that.", "Lars's Startup & Land Assessment", "Dwarkesh Patel - 01:02:56:", "This might be actually a good point to talk about your new startup. This is actually something I don't know about either. What is the idea? What are you up to?", "Lars Doucet - 01:03:04:", "I'm transitioning out of video games and into municipal property tax assessment, mass appraisal. My new startup is called Value Base .", "I think the best criticism of Georgism is — how are you actually going to separate land value from improvement, from building value? We can't do a land value tax till we put a price on every parcel of land. So how are you going to do that? And I thought that was the best, most good faith criticism that remained. And it seemed like we gotta get good at that. So I looked into it and I realized that there are a lot of research papers that have been posted in the last 15 years about how to practically do this. And then I went and started interviewing a bunch of assessors and I realized that the state of the practice is pretty far behind. Only 15% of most property tax assessment offices even use multiple regression. A lot of them are using the cost approach, which is basically where you, it's what Ed Glaesar talked about in your interview with him where you estimate the cost of building and you applied depreciation and you subtract it from the observed selling price to get the assumed land price.  And that works okay but a lot of assessment is not only using essentially only that method, but also another issue is that just a lot of those cost tables are very out of date. Assessments themselves are not always done every year in some. It's not imperative to find places that haven't done reassessments in more than 10 years.", "Most places it's one to five. I think we should be doing everything we can to get all the latest mass appraisal technology and research. One of our first hires was one of the guys who was the first author on all these papers we were reading. And we're just here to update municipal property tax assessors on the latest methods so that we can accurately know what all the land in America is worth. And this will solve a lot of our regressivity issues too, because we know that a lot of landlords are actually under assessed relative to homeowners, believe it or not, because they're more likely to protest their property taxes. And minorities tend to be over assessed and poor people tend to be over assessed. That's why property taxes are sometimes called regressive. It is because the assessments need to be fixed.", "Dwarkesh Patel - 01:05:41:", "I see. I guess because if you have more property, getting your rate changed from like 1% to 0.8% is worth thousands of dollars rather than hundreds of dollars.", "Lars Doucet - 01:05:54:", "Right. And there's a lot of other issues too. There's all sorts of reasons that you can have these things, often from no malintent whatsoever. Just out of date assessments can also cause all sorts of issues.", "Dwarkesh Patel - 01:06:11:", "In the book, you're talking about the percentage of tax income that is literally just spent on figuring out how much income tax to collect and whether people have paid or whatever is a lot. So I get that.", "One worry that people might have is that while income taxing is obviously very inefficient at least it has a nice property that it feels like there's this hard figure that ought to exist. Like I made this much income this year. Whereas figuring out how much land is worth… I get that if you figured out the rent value, that's like the value of land, but it just feels much more murky. And therefore it might potentially enable corruption in the level of whoever's doing the assessment or however that method of assessment is happening. They'll just use these fancy algorithms to nudge it one way or another, in a way that benefits big corporations or whoever they want.", "Lars Doucet - 01:07:09:", "That's a really good argument and the reply to that is that we need to move towards more transparency because land value follows certain rules that should logically make sense. We know some things that drive locational value. First of all, we should move towards open source models and open data whenever possible, which is something that we want to do.", "A lot of these cities will post on the open data portal. My mission in life is to be able to advance the state of the art of this technology and make it so anyone can kind of check on stuff. In any city in America, you should be able to look up your property tax assessment on a map and compare it to your neighbor. And what you shouldn't see is a Christmas tree effect, where your neighborhood looks like Christmas tree lights of green and red of people whose property tax assessments like massively differ. That's the case in a lot of cities.", "If the land values have been correctly assessed in this neighborhood, most of these parcels should be about the same value. Maybe the cul-de-sacs are worth a little bit more but your neighbor's land shouldn't be worth 20% more than yours. And if it is, it'll stick out on a map like a sore thumb. And if the data and the algorithms are all open source and open data, you should be able to check anyone's math. And you should be able to use that to then protest your taxes if they seem off. And that's kind of the argument for land value taxes.", "You can hide all kinds of stuff in income taxes and capital taxes. That's what the Caymans and the Bahamas are for. It's very easy to find people are getting a break on their property taxes. I’m not saying corruption won't happen but it'll be very easy to see because you'll be able to see just this mansion that suddenly has this discontinuity on the land value map. Someone gave this person a break and maybe we should write an article in the local newspaper about this.", "Dwarkesh Patel - 01:09:01:", "Yeah, it'll be way better than how income taxes work. None of it's open to potentially even the government itself, right?", "Another concern is that that’s fine for things that are above ground and legible but what if you find out through some sort of surveying that this land has a lot of oil under it and then you buy it but obviously you're not going to tell the government that I just found an oil well.", "Lars Doucet - 01:09:26:", "So this is the sort of theoretic critique…", "Dwarkesh Patel - 01:09:27:", "No, not even that. You literally won't declare it. So in a sense, you're still being a speculator, but in fact, you're incentivized even more to be a speculator in the sense that as soon as you declare that there's oil underneath this ground then the government is going to start taxing you for it. So you just want to hang on to that as long as you can and keep that private.", "Lars Doucet - 01:09:48:", "We need to talk about mineral policy in America, because especially states like Texas, mineral rights and land rights are totally different. Usually when you buy land in Texas and in America, you actually don't have the mineral rights. Those are very severable. A lot of people are very interested in those mineral rights and so usually if you're not paying attention, generally speaking, you're not getting the mineral rights when you're buying land. And if you are, you're paying more for them.", "I think with the case of minerals you have to have something more like the Norwegian model where you need to basically give some incentive to someone to produce or to not withhold that resource. A good example would be the treasure law in England because England has ancient Anglo-Saxon treasure and Roman treasure. Before what they would have as someone would find it and like go like hide it or melt it down because they didn't want the government to tax them.", "So they passed this treasure law, which is not perfect, but it's okay. They're like, “This is our heritage. We want that in a museum.” So here's the deal. If you find treasure, you're going to get paid and the landowner is going to get paid. So there's an incentive for people to go out with metal detectors and there's an incentive for a landowner to let people do that. And then the government's going to put it in a museum. You're going to be rewarded for the discovery of that thing but the value of that thing itself is going to be captured because it's the heritage of the British people.", "The opposite of this is the Spanish government. Whenever someone finds a galleon full of gold on the bottom of the ocean, if you go and you invest the capital to bring that up to the surface and you're in Spanish waters or the Spanish government finds you and you don’t take it to Spain, a Spanish admiralty court is going to try to get its claws on that gold.", "The incentives the Spanish government is producing is basically to make sure that nobody ever, ever recovers a ship.", "Dwarkesh Patel - 01:11:59:", "Yeah. And if they do it just ends up in some sort of foundry in another country.", "Lars Doucet - 01:12:05:", "Yeah, the Spanish government's approach to sunken treasure basically incentivizes people never to go after it.", "Dwarkesh Patel - 01:12:11:", "A general critique of Georgism in general, or implementing it would be… Listen, the reason America and other developed countries are wealthy is because they are very strong in terms of honoring contracts, especially honoring people with property rights.", "In some sense, you have a contract with the government that “Hey, I have this property.” and once you don't honor that, people will get too concerned to want to invest in America or in American assets and that will have all kinds of economic repercussions where people are like, “Oh, I guess my property is not mine?”. Maybe other things I thought I was investing in like, “My stocks are not mine. So why should I buy these stocks in America?”", "Lars Doucet - 01:13:00:", "This is a fully general argument against change. And I also don't agree with all the assumptions. If America doesn't honor its agreements, what are they worth?", "We've violated all kinds of treaties, with the Indians especially, but also all sorts of international treaties. There have been all kinds of instances where we’ve changed rules on things and changed asset classes. We had an entire period where we just banned all sales of alcohol and then we had a time where we completely undid that and brought it all back. There's been times where we've made major, major changes to the rules of what kind of asset classes we have.", "This argument is often brought up with any kind of labor protections and things. Where it's like, we have these rules and therefore we can never change them. I think the best answer to this is that you need to acknowledge Gordon Tullocks’ Transitional Gains trap and if there's someone who's going to be put out, then you make sure that there's some compensation in the system to smooth over the transition. But the rule of law doesn't imply that any change to the status quo is going to undermine trust in it.", "I think it's important to remember that George is not interested in seizing land. That's a very big distinction from the Maoist position which is — murder of the landlords, or the more modest Asian reforms.", "First of all, when you're talking about How Asia Works, like they took the land away from the big landowners, gave it to the peasants, and it made those countries way more productive. I don't think anyone would look at those Asian countries now and compare them to where they were going to be and say “Well, the rule of law is weaker now than it used to be.” George isn't even advocating for that. He's just advocating for raising taxes on land and exempting all of the building taxes. So I don't think it amounts to seizure of land for those reasons. But even if it did amount to that, I think sometimes it's worth biting some bullets.", "Big Tech", "Dwarkesh Patel - 01:15:11:", "Okay, let's move on to dessert. Let's move on to some more fun interpretations and applications of Georgism.", "I think Byrne Hobart had this blogpost on The Diff . I don't know if you follow him but he's a great finance writer. He was talking about how if you have a Facebook account or a YouTube channel with millions of subscribers, you were in some sense very early to YouTube and Facebook, and now you have this land. And Facebook punishes you if you have a big account and are not posting frequently or not getting enough engagement and you'll have a harder time reaching people in the future.", "Byrne had this Georgist interpretation of that. Where it's like you have this productive asset, a profile that you're able to build on the early days of Facebook, and if you're not posting on it then we're not going to give you the advantage of having millions of followers. But anyway, there's all kinds of ways you can apply Georgism.", "You can think of the App store as rent seeking from Apple where they charge you 30% tithe. And there's all kinds of other places in the digital world where you can think of this. How do you think about Georgism in the context of those kinds of things?", "Lars Doucet - 01:16:30:", "I've actually written a policy paper on how to apply the theories of land value tax to virtual worlds. The question with virtual real estate is when does something actually operate like a land-like asset? And then to what extent do Georgist principles apply? That doesn't necessarily mean to just do LVT.", "First of all, let me define what I mean by a land-like asset. A land-like asset has three properties. It is scarce in supply. It is necessary for production. And it obtains locational value by virtue of its position in some kind of graph. As an example, I create a fictional MMO and I give the example of a unicorn, a permit, and a plot of land. The unicorn is scarce, but it's not necessary for production. Any value you can get from the unicorn, you can get some other ways. It's really nice to have and there's only like 10,000 on the server. A permit is like a permit to brew potions. You're part of the witches guild. You can brew potions. My apprentice witch has to pay rent to me to gain access to my permit to be allowed to brew potions, but all permits are fungible. So there's no locational value. And then a plot of land like we've talked about. So this becomes the real speculative asset. Domain names are probably the closest thing to this. Vitalik Buterin has actually written a post about how to apply Georgist theory to domain names and also all the wrinkles that are involved.", "Dwarkesh Patel - 01:17:55:", "By the way, he did this on your blog, right? As a guest post.", "Lars Doucet - 01:17:58:", "Well, progressandpoverty.substack.com is not technically my blog. It's a group blog where a lot of us Georgists post. So yeah my blog in the sense of our blog. Yeah, and he cross posted it. He also posted it on his own blog.", "But there's other considerations to remember when we go away from literal land. For instance, with domain names, especially with the Ethereum name system, you have issues of identity that don’t apply to literal land. If someone buys this house from you, maybe the next person who lives here is going to get some of your mail for like a week or two but like no one's going to think that they're you necessarily. But if I buy Vitalik.eth there might be some confusion. I might be able to get some transactions that were meant for him. And so there's these other considerations and he deals with all of that.", "I wrote a policy paper about how to apply LVT in virtual worlds but that was more in these more kind of literal simulacra of the real world in virtual worlds than kind of the things you're talking about, YouTube accounts and charts in app stores, which are user generated content platforms.", "I haven't fully analyzed user generated content platforms. But I do think chart positions are a sort of virtual land. They essentially do charge rent for that not just their flat 30% fee across everything but also in the kind of advertising Red Queen’s race you have to do to stay on those charts. You essentially have to buy that position and then keep it. There's this huge first mover advantage that turns into rent seeking. And I haven't fully analyzed exactly how you applied Georgist theory there. But I do think there's something to it.", "I also think there's something to extremely long-lived copyrights and patents. It's a little undercooked at the moment. I don't have a fully fledged theory of it. But certainly other monopoly assets like orbital real estate, radio spectrum. Essentially anywhere you’re able to capture the rights of the possibility space. You know how John Carmack had a lot of algorithms some of which were patented and he wasn't even allowed to use them in his own games because someone else had speculatively patented them.", "Carmack's reverse was the particular algorithm that they weren't allowed to use in Quake III or because someone had speculatively patented it. And there's like no other way to do that thing. So in Quake III, this one particular sub routine had to be 25% slower because someone had patented the algorithm that could make it faster. Imagine if you could patent the Pythagorean theorem. So that kind of nonsense and you can just rent seek off that for 20 years.", "Space", "Dwarkesh Patel - 01:20:51:", "Jesus. That's funny.", "Let’s move on to another juicy implication that you were just talking about, which is space. Hopefully, humanity will eventually conquer space and the rest of the galaxy and this is where Georgism will be really interesting and applicable because we obviously want to encourage and incentivize people to go to new worlds and make use of stars and planets. But we obviously don't want it to be the case that if you got to Mars a year earlier you forever have rights to everything on Mars and all the resources. For example, if Musk gets to Mars a year before Bezos, he has the rights to everything.", "Georgism in space is actually a place that makes a lot of sense. Have you put much thought into interstellar Georgism?", "Lars Doucet - 01:21:41:", "Interstellar Georgism is actually current international law. Although I think the outer space treaty will last for about five minutes once the interplanetary space race gets going in earnest. [Laughter]", "But the current law, the law of the international order of the outer space treaty, basically says that nobody's allowed to claim interstellar bodies like the Moon or Mars for China or the US. We have a flag there but it's not American territory. It's basically international waters, so to speak.", "Once the actual possibility of having permanent bases shows up, then we will have to hash that out. But basically I think we should take that and run with it. If you are going to take possession of interstellar bodies, it will become a question of who exactly becomes the government in that scenario. Is it some international coalition? Is it the UN? I'm not a huge fan of the UN or it just turns out that whatever sovereign government gets there first just gets to claim it, but can you have Georgism and within those confines? I think we certainly should. Because otherwise what you're going to get is that you're going to get under investment. You're going to get so much under investment because the first people to get there are going to basically charge rent to all the people who come next.", "Copyright", "Dwarkesh Patel - 01:23:05:", "Yeah. I just remembered the other question I wanted to ask which was about copyright. That's a really interesting and complicated area because it's hard to think of what is the intrinsic value of the land of the idea and what is the improvement you've made. What do you get to call as the improvement on the the song you discovered? Maybe it’s the melody? But the melody would have existed anyways, so then the specific lyrics that you came up with? I don't know.", "Lars Doucet - 01:23:31:", "It's not exactly  clear to me what the implication of Georgism is on copyright. If you Zoom out, Georgism is not just about land. That's why a lot of people are like, “Why are you so obsessed about land?” It's mostly about the enclosure of natural monopolies and economies that are based entirely around rent seeking. And it's clear we have this with eternal copyright.", "If we had the copyright laws we have today when Disney was first getting started, the Brothers Grimm would still be under copyright and they would not have been able to get off the ground with a lot of their early properties and then they pulled the ladder up right behind them.", "It's clear they're rent seeking in a lot of ways and refusing to give back to the commons from which they first enriched themselves. The question then becomes — can you easily parse copyright as land?", "It's a very undercooked theory, but there's something there. Probably what it catches out to is that copyright terms should be shorter. At some point when an idea has become part of the cultural consciousness for so long it should become part of just the background collective commons, because all ideas are essentially remixed and built on top of other ideas. But we want to incentivize people to create new ideas in the first place. It's not a perfect analogy to land, but there is something there about rent seeking that needs to be addressed. And it probably just caches out to just reducing copyright terms.", "Politics of Georgism", "Dwarkesh Patel - 01:25:02:", "Let's talk about the political feasibility of land value tax. Do you think it will be something that will get passed in a democracy? A lot of people have homes or want to have homes, but on the other hand, if you just redistribute… If you don't have many acres in the middle of San Francisco, then you would maybe still come out net ahead, but could you be able to explain that to people? And obviously there's tenants who would definitely benefit from this.", "Anyway, what is your thought on how politically feasible this is now or would be in the future?", "Lars Doucet - 01:25:36:", "I think it's more politically feasible than we think. I think it just has low salience right now and if more people start talking about it, I think it's going to make a lot of sense. Especially if it's pitched as property tax reform. Like right now in Texas, you have people who are trying to abolish property tax, which will turn this state into California like that. *snaps fingers*", "Dwarkesh Patel - 01:25:55:", "Why? Why would it turn into California?", "Lars Doucet - 01:25:57:", "Because California has some of the lowest property taxes in the nation, Proposition 13, and it creates an entire layer of landed gentry and really, really, really expensive housing. Housing is already getting expensive in Austin and property taxes have an effect of lowering property prices. Property taxes are an imperfect land value tax. And so basically, it will just reproduce the economic conditions that empirically exist in California.", "Dwarkesh Patel - 01:26:21:", "There's this funny saying that the Texas Constitution is not a government document, but rather an anti-government document. But one of the positive by-products if you’re a Georgist is that since it makes taxing income so hard, governments are forced to tax property and hopefully eventually land.", "Lars Doucet - 01:26:40:", "I have a bone to pick with the Texas Constitution because Houston had a single tax mayor in 1911 and had an active, Georgist land value tax. And it would have still had it today if it wasn't for a state judge who basically shut them down.", "Dwarkesh Patel - 01:27:09:", "Why? What was the reason?", "Lars Doucet - 01:26:33:", "Because there's a clause in this state constitution of Texas that says all property taxation has to be uniform and that's interpreted as uniform across both the buildings and the land. You can't tax them at separate rates.", "Dwarkesh Patel - 01:27:09:", "So does that mean that constitutionally you could just put it in a Texas Senate?", "Lars Doucet - 01:27:14:", "It's up to interpretation. There are some people who interpreted that as — you can't tax this guy at a higher rate than that guy. You have to tax them based on the value of their property. And then you could claim that it's still uniform. We're just taxing land and we have all these exemptions and categorizations already. This is just another one of those. We tax agricultural land at a different rate. If you put a bunch of cows on a property suddenly it's magically less valuable. So why can't we target just the land? And so if you have the right judge, maybe you could get away with that.", "But anyway, to go towards political feasibility, I think it could be more feasible than we think, especially if pitched as property tax reform because people already feel that property taxes are too high. “Hey, everyone, let's exempt all your buildings.” I think you could build a coalition that's excited about that. Especially if you do the math and show what the change in your taxable rate is going to be.", "I think just a revenue neutral property tax shift to land can be quite popular. There's a lot of cities around the US right now where this is being floated. Organizations like Strong Towns, the Center for Property Tax Reform, and the Lincoln Land Institute are working on a lot and talking to places about it. Detroit is talking about it right now and they could desperately use it. I think it could pass.", "And also Henry George's salience is coming back. In Norway, the Ruling Center Party coalition just passed a new severance tax. They have a very successful resource management policy with oil. They also have one from the early 1900s in hydro power, which was set up by Norwegian Georgists. The Ruling Center Party coalition just put one in for salmon farming aquaculture locations. A new severance tax on that. And they name-checked Henry George when they implemented it in the speech. So I think if they are willing to stand up to the landowners in Oslo and pass a land value tax, that would be the next step. I'm not sure if they're brave enough to do that. We're starting to see this bubble up and so I think a revenue neutral property tax shift to land is the politically popular way to do it. Because it can be done on the local level without having to change a whole bunch of laws. It's a little complicated in Texas because of that's stupid state constitutional provision but in other states, they don't have that. There's other States in which any municipality could do it right now if they wanted.", "Dwarkesh Patel - 01:29:33:", "One of the things you talk about in the book is that in some sense the value of your land is caused by the other people around you, of other properties, of other amenities and companies or whatever that are near. So it's kind of like a publicly contributed value.", "But if you think about it that way, doesn't it make sense that your land taxes should go to that community which is the one that's creating all this value rather than being distributed federally? So maybe we should have land value taxes on local levels but then it goes to pay towards local amenities. If you live right next to the subway and the reason the properties there are valuable is because you're right next to the subway, then that goes towards making the New York subway better but it doesn't go towards the city hall and Kentucky or something.", "Lars Doucet - 01:30:21:", "If you wanted to create a model for a bottom up decentralized America, I would take that bargain. We still got to fund the federal government somehow and so I would like to repeal the federal income tax and have that funded by land value tax. But if that's as far as I could ever get, I'd shake your hand and take that deal right now. It sounds good.", "Dwarkesh Patel - 01:30:42:", "Yeah, fair enough.", "One more question. This was actually from Twitter from Craig Fratrik , the question is — if one of the reasons why we think of land as being a public thing is because your land value is contributed by people around you. But what if you own so much land that you are the one that's contributing to the value of all your land. So if you're thinking like Disney World, they basically own half of Orlando and they're the ones that are creating nearby amenities, which are making the rest of Disneyworld valuable. In that case, are they entitled to all the proceeds?", "Lars Doucet - 01:31:19:", "In a way, yes. What's interesting is that this is often posed as a gotcha question for Georgists but internally we call this the Disney World question and it's actually a really interesting case.", "The thing you have to realize now is who is Disney World in this situation? Disney World is the government in this case. Disney World is the community. Disney World is the city. It's a private city and then there's all these questions of who is the citizen of Disney World and is it an equitable democracy or whatever, which will leave aside for the moment. But the point is that Disney World has fully internalized the positive externalities of their own building,. So the incentive structure has changed. We fully acknowledge that and we just bite that bullet. So the question then becomes what do you do about it? It's a really interesting city, which is essentially a private city.", "There can certainly be issues with company towns, especially when local monopolies enforce bad local laws like you have to buy from the company store, which is mostly an issue of governance. But in terms of land use, actually if a landowner owns the entirety of a large enough to internalize all their building things that does change the situation. Then you need to see them less as the private owner of a parcel and more as a private operator of a city and that changes the governance question and it becomes a completely different category of problem. Then there's questions like, what is the right mix of taxes and subsidies? Do you even want to have these sorts of entities in the first place? What does the governance question look like? But the important thing to realize is that you are no longer dealing with a private landowner, you are dealing with a private city.", "Someone Is Always Collecting Rents", "Dwarkesh Patel - 01:33:09:", "So is maybe one way to think about Georgism that somebody has to be collecting the rents? And what Georges is saying is we want the rents collected by the government and not by the person who is a private landowner, presumably because the government will do more public-facing things with it. But rents have to be collected, we just have we're just trying to decide who will collect it?", "Lars Doucet - 01:33:32:", "Right. Rents will be collected one way or another and if we leave it to the status quo, private actors free riding off their neighbors will collect them and make everything worse. They will cause less investment and they will hold the best land out of use and there'll be less building than there should be and the rent will be too damn high.", "The rent has to be collected one way or another so it might as well be collected on behalf of the community. Not only will this fix the incentives, but then it also shares the wealth with the people who actually produce the wealth. And if the community is itself this giant private city, then that kind of changes the equation a bit. And I have not fully delved into the Disney World issue and the Disney World question and that's a little separate academically from actually existing Disney World and how anyone feels about that institution in particular. But the important thing to realize is that we're now dealing with a different domain. Because then the question is, who is a citizen of Disneyland? Who is a tenant of Disneyland? What is the governance of Disneyland? You realize now Disneyland essentially is the government. And I think that's actually kind of literally true in actually existing Disney World because they have their own special district that operates essentially as a private government. It's just its own kind of question. But the point is very much yes, the rents have to be collected one way or the other so they might as well be collected on behalf of the community. One way to implement that is through a government because we have this nice little democracy we can use.", "But there's this whole charter cities movement and then a movement I like even better called starter cities, which is less like let's find some third world country and hope our sovereignty doesn't get revoked in five years, and more like let’s build cities somewhere in America where rezoning is pretty loose. You could have everyone who lives there basically be a shareholder of a private entity and share out the land rents that way. Another way to implement it would just be ground leases rather than land value tax within a private city. And then everyone who lives there is de facto a shareholder of the city. There's some people who are trying to build stuff like that, which I'm very much interested in.", "Dwarkesh Patel - 01:35:32:", "Awesome. Final question. What is next for you?", "Lars Doucet - 01:35:35:", "I'm still a game developer and on nights and weekends I'm overseeing a project that will be released next year, Defender's Quest 2 , after a good decade of development. I'm happy for that to be finished. But additionally I am working on a startup called Valuebase and that is a municipal mass appraisal company. We're going to partner with local municipalities. We've already got a couple of customers who are interested in what we're doing. We're going to massively update the state of the practice of municipal mass appraisal in America to be more accurate, more efficient and more up to date with the latest models and more transparent, leaning into open source and open data solutions wherever possible. I think that can be the practical step towards Georgism not just to implement georgism but even to advocate for it. Because you know, in the Edward Glaesar interview , I'm super happy to have Glaeser's endorsement even if he thinks Georgism isn’t a panacea.", "One of the things about his critique of it not being a panacea is that I think he's under the impression that land isn't a big deal.", "And I think it's a polycea even if it's not a panacea. I'll settle for curing multiple diseases even if I'm not curing literally all of them.", "And how important you think Georgism is depends on how much the land is worth. And to do that, we have to measure it. And we're not super good at measuring it right now. And I think we need to get super good at measuring it.", "When I was talking about this originally,  someone challenged me and said “The reason I don't think you can do this is because if you really could, then people would give you millions of dollars to go do it.” And well, we just closed a round led by Sam Altman and that's exactly what happened. And now we're going to do it. I'm really grateful to that commenter for challenging me in that way because I really see my life's work as making Georgism possible and practical and I think the way to do that is to measure the value of the land and to put those tools out there for anybody to be able to pick up.", "Dwarkesh Patel - 01:37:50:", "Awesome. Okay. So the book is Land is a Big Deal . And where else can people find you?", "Lars Doucet 1:37:36", "You can find me at a couple of places. On the website landisabigdeal.com . You can find me on Twitter @larsiusprime . The book is based on a series of articles that you can find at gameofrent.com . And valuebase.com . And that's everything I do.", "Dwarkesh Patel - 01:38:22:", "There's books you read where you're like, “Oh, that's interesting. I guess I shifted my priors slightly.” And there's books like, “Oh, fuck, why did I not think about it that way? Wow, that changes how I think about everything.”", "It really is one of those books. I think you have a metaphor in the book about seeing the cat in a picture. Anyway, I highly recommend the book. It's one of those books you will read where you'll just have a completely new perspective on an entire field.", "Awesome. Thanks, Lars. Thanks for coming on today and thanks for making your way all the way over here to do this interview live.", "Lars Doucet - 01:38:57:", "Well, thanks for grilling me. Thanks for all the hard questions. I really like to be challenged in that way and like to really engage with the best faith, hard-hitting critics to make sure we really understand what we're talking about here. Because none of this matters unless it works. I'm not here to defend Henry George's honor. I'm here to explore whether this can actually be a solution to our problem." ]
[ "https://twitter.com/larsiusprime?lang=en", "https://store.steampowered.com/app/218410/Defenders_Quest_Valley_of_the_Forgotten_DX_edition/", "https://www.defendersquest2.com/", "https://www.astralcodexten.com/p/your-book-review-progress-and-poverty", "https://www.amazon.com/Progress-Poverty-Industrial-Depressions-Increase/dp/1331324297", "https://www.amazon.com/Land-Big-Deal-wages-about-ebook/dp/B0BL8H6Y9S", "https://www.amazon.com/Land-Big-Deal-wages-about-ebook/dp/B0BL8H6Y9S", "https://www.amazon.com/Progress-Poverty-Industrial-Depressions-Increase/dp/1331324297", "https://www.youtube.com/watch?v=PPgj38-agPE", "https://marginalrevolution.com/marginalrevolution/2005/02/why_are_land_ta.html", "https://en.wikipedia.org/wiki/Edwin_Howard_Armstrong", "https://www.investopedia.com/terms/r/resource-curse.asp#:~:text=The%20term%20resource%20curse%20refers,a%20few%20resource%2Ddependent%20industries.", "https://en.wikipedia.org/wiki/Law_of_rent", "https://www.amazon.com/Where-Is-My-Flying-Car/dp/B09MWRC8VV", "https://www.econlib.org/archives/2012/02/a_search-theore.html", "https://www.dwarkeshpatel.com/p/edward-glaeser", "https://www.amazon.com/How-Asia-Works-Success-Failure/dp/166525534X", "https://twitter.com/VitalikButerin/status/1532179595885039617", "https://lawliberty.org/the-transitional-gains-trap/", "https://www.valuebase.co/", "https://thediff.co/", "https://www.gamedeveloper.com/design/land-value-tax-in-online-games-and-virtual-worlds-a-how-to-guide", "https://progressandpoverty.substack.com/p/should-there-be-demand-based-recurring", "https://progressandpoverty.substack.com/p/should-there-be-demand-based-recurring", "https://en.wikipedia.org/wiki/Red_Queen_hypothesis", "https://en.wikipedia.org/wiki/Henry_George", "https://x.com/fratrik/status/1595213750998646789?s=20", "https://store.steampowered.com/app/252190/Defenders_Quest_2_Mists_of_Ruin/", "https://www.valuebase.co/", "https://www.dwarkeshpatel.com/p/edward-glaeser", "https://www.amazon.com/Land-Big-Deal-wages-about-ebook/dp/B0BL8H6Y9S", "http://landisabigdeal.com", "https://twitter.com/larsiusprime?lang=en", "http://gameofrent.com", "http://gameofrent.com" ]
https://www.dwarkesh.com/p/leopold-aschenbrenner
Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History
[ "Edited by Teddy Kim .", "(00:00:00) – The trillion-dollar cluster and unhobbling", "Dwarkesh Patel 00:00:00", "Today I’m chatting with my friend Leopold Aschenbrenner . He grew up in Germany and graduated as valedictorian of Columbia when he was 19. After that, he had a very interesting gap year which we’ll talk about. Then, he was on the OpenAI superalignment team, may it rest in peace.", "Now, with some anchor investments — from Patrick and John Collison , Daniel Gross , and Nat Friedman — he is launching an investment firm.", "Leopold, you’re off to a slow start but life is long. I wouldn’t worry about it too much. You’ll make up for it in due time. Thanks for coming on the podcast.", "Leopold Aschenbrenner 00:00:38", "Thank you. I first discovered your podcast when your best episode had a couple of hundred views. It’s been amazing to follow your trajectory. It’s a delight to be on.", "Dwarkesh Patel 00:00:48", "In the Sholto and Trenton episode , I mentioned that a lot of the things I’ve learned about AI I’ve learned from talking with them. The third, and probably most significant, part of this triumvirate has been you. We’ll get all the stuff on the record now.", "Here’s the first thing I want to get on the record. Tell me about the trillion-dollar cluster.", "I should mention this for the context of the podcast. Today you’re releasing a series called Situational Awareness. We’re going to get into it. First question about that is, tell me about the trillion-dollar cluster.", "Leopold Aschenbrenner 00:01:20", "Unlike most things that have recently come out of Silicon Valley, AI is an industrial process. The next model doesn’t just require some code. It’s building a giant new cluster. It’s building giant new power plants. Pretty soon, it’s going to involve building giant new fabs.", "Since ChatGPT , this extraordinary techno-capital acceleration has been set into motion. Exactly a year ago today, Nvidia had their first blockbuster earnings call . It went up 25% after hours and everyone was like, \"oh my God, AI is a thing.\" Within a year, Nvidia data center revenue has gone from a few billion a quarter to $25 billion a quarter and continues to go up. Big Tech capex is skyrocketing.", "It’s funny. There’s this crazy scramble going on, but in some sense it’s just the continuation of straight lines on a graph. There’s this long-run trend of almost a decade of training compute for the largest AI systems growing by about half an order of magnitude, 0.5 OOMs a year.", "Just play that forward. GPT-4 was reported to have finished pre-training in 2022. On SemiAnalysis , it was rumored to have a cluster size of about 25,000 A100s . That’s roughly a $500 million cluster. Very roughly, it’s 10 megawatts.", "Just play that forward half a year. By 2024, that’s a cluster that’s 100 MW and 100,000 H100 equivalents with costs in the billions.", "Play it forward two more years. By 2026, that’s a gigawatt, the size of a large nuclear reactor. That’s like the power of the Hoover Dam. That costs tens of billions of dollars and requires a million H100 equivalents.", "By 2028, that’s a cluster that’s ten GW. That’s more power than most US states. That’s 10 million H100 equivalents, costing hundreds of billions of dollars.", "By 2030, you get the trillion-dollar cluster using 100 gigawatts, over 20% of US electricity production. That’s 100 million H100 equivalents.", "That’s just the training cluster. There are more inference GPUs as well. Once there are products, most of them will be inference GPUs. US power production has barely grown for decades . Now we’re really in for a ride.", "Dwarkesh Patel 00:03:53", "When I had Zuck on the podcast , he was claiming not a plateau per se, but that AI progress would be bottlenecked by this constraint on energy. Specifically, he was like, \"oh, gigawatt data centers, are we going to build another Three Gorges Dam or something?\"", "According to public reports , there are companies planning things on the scale of a 1 GW data center. With a 10 GW data center, who’s going to be able to build that? A 100 GW center is like a state project. Are you going to pump that into one physical data center? How is it going to be possible? What is Zuck missing?", "Leopold Aschenbrenner 00:04:29", "Six months ago, 10 GW was the talk of the town. Now, people have moved on. 10 GW is happening. There’s The Information report on OpenAI and Microsoft planning a $100 billion cluster.", "Dwarkesh Patel 00:04:43", "Is that 1 GW? Or is that 10 GW?", "Leopold Aschenbrenner 00:04:45", "I don’t know but if you try to map out how expensive the 10 GW cluster would be, that’s a couple of hundred billion. It’s sort of on that scale and they’re planning it. It’s not just my crazy take. AMD forecasted a $400 billion AI accelerator market by 2027 . AI accelerators are only part of the expenditures.", "We’re very much on track for a $1 trillion of total AI investment by 2027. The $1 trillion cluster will take a bit more acceleration. We saw how much ChatGPT unleashed. Every generation, the models are going to be crazy and shift the Overton window .", "Then the revenue comes in. These are forward-looking investments. The question is, do they pay off? Let’s estimate the GPT-4 cluster at around $500 million. There’s a common mistake people make, saying it was $100 million for GPT-4. That’s just the rental price. If you’re building the biggest cluster, you have to build and pay for the whole cluster. You can’t just rent it for three months.", "Dwarkesh Patel 00:05:50", "Can’t you?", "Leopold Aschenbrenner 00:05:50", "Once you’re trying to get into the hundreds of billions, you have to get to like $100 billion a year in revenue. This is where it gets really interesting for the big tech companies because their revenues are on the order of hundreds of billions.", "$10 billion is fine. It’ll pay off the 2024 size training cluster. It’ll really be gangbusters with Big Tech when it costs $100 billion a year. The question is how feasible is $100 billion a year from AI revenue? It’s a lot more than right now. If you believe in the trajectory of AI systems as I do, it’s not that crazy.", "There are like 300 million Microsoft Office subscribers. They have Copilot now. I don’t know what they’re selling it for. Suppose you sold some AI add-on for $100/month to a third of Microsoft Office subscribers. That’d be $100 billion right there. $100/month is a lot.", "Dwarkesh Patel 00:06:43", "That’s a lot for a third of Office subscribers.", "Leopold Aschenbrenner 00:06:46", "For the average knowledge worker, it’s a few hours of productivity a month. You have to be expecting pretty lame AI progress to not hit a few hours of productivity a month.", "Dwarkesh Patel 00:06:55", "Sure, let’s assume all this. What happens in the next few years? What can the AI trained on the 1 GW data center do? What about the one on the 10 GW data center? Just map out the next few years of AI progress for me.", "Leopold Aschenbrenner 00:07:11", "The 10 GW range is my best guess for when you get true AGI . Compute is actually overrated. We’ll talk about that.", "By 2025-2026, we’re going to get models that are basically smarter than most college graduates. A lot of the economic usefulness depends on unhobbling. The models are smart but limited. There are chatbots and then there are things like being able to use a computer and doing agentic long-horizon tasks.", "By 2027-2028, it’ll get as smart as the smartest experts. The unhobbling trajectory points to it becoming much more like an agent than a chatbot. It’ll almost be like a drop-in remote worker.", "This is the question around the economic returns. Intermediate AI systems could be really useful, but it takes a lot of schlep to integrate them. There’s a lot you could do with GPT-4 or GPT-4.5 in a business use case, but you really have to change your workflows to make them useful. It’s a very Tyler Cowen -esque take. It just takes a long time to diffuse. We’re in SF and so we miss that.", "But in some sense, the way these systems want to be integrated is where you get this kind of sonic boom. Intermediate systems could have done it, but it would have taken schlep. Before you do the schlep to integrate them, you’ll get much more powerful systems that are unhobbled.", "They’re agents, drop-in remote workers. You’re interacting with them like coworkers. You can do Zoom calls and Slack with them. You can ask them to do a project and they go off and write a first draft, get feedback, run tests on their code, and come back. Then you can tell them more things. That’ll be much easier to integrate.", "You might need a bit of overkill to make the transition easy and harvest the gains.", "Dwarkesh Patel 00:09:16", "What do you mean by overkill? Overkill on model capabilities?", "Leopold Aschenbrenner 00:09:19", "Yeah, the intermediate models could do it but it would take a lot of schlep. The drop-in remote worker AGI can automate cognitive tasks. The intermediate models would have made the software engineer more productive. But will the software engineer adopt it?", "With the 2027 model, you just don’t need the software engineer. You can interact with it like a software engineer, and it’ll do the work of a software engineer.", "Dwarkesh Patel 00:09:43", "The last episode I did was with John Schulman .", "I was asking about this. We have these models that have come out in the last year and none seem to have significantly surpassed GPT-4, certainly not in an agentic way where they interact with you as a coworker. They’ll brag about a few extra points on MMLU . Even with GPT-4o , it’s cool they can talk like Scarlett Johansson (I guess not anymore) but it’s not like a coworker.", "It makes sense why they’d be good at answering questions. They have data on how to complete Wikipedia text. Where is the equivalent training data to understand a Zoom call? Referring back to your point about a Slack conversation, how can it use context to figure out the cohesive project you’re working on? Where is that training data coming from?", "Leopold Aschenbrenner 00:10:49", "A key question for AI progress in the next few years is how hard it is to unlock the test time compute overhang . Right now, GPT-4 can do a few hundred tokens with chain-of-thought . That’s already a huge improvement. Before, answering a math question was just shotgun . If you tried to answer a math question by saying the first thing that comes to mind, you wouldn’t be very good.", "GPT-4 thinks for a few hundred tokens. If I think at 100 tokens a minute, that’s like what GPT-4 does. It’s equivalent to me thinking for three minutes. Suppose GPT-4 could think for millions of tokens. That’s +4 OOMs on test time compute on one problem. It can’t do it now. It gets stuck. It writes some code. It can do a little bit of iterative debugging, but eventually gets stuck and can’t correct its errors.", "There’s a big overhang. In other areas of ML, there’s a great paper on AlphaGo , where you can trade off train time and test time compute. If you can use 4 OOMs more test time compute, that’s almost like a 3.5x OOM bigger model.", "Again, if it’s 100 tokens a minute, a few million tokens is a few months of working time. There’s a lot more you can do in a few months of working time than just getting an answer right now. The question is how hard is it to unlock that?", "In the short timelines AI world, it’s not that hard. The reason it might not be that hard is that there are only a few extra tokens to learn. You need to learn things like error correction tokens where you’re like “ah, I made a mistake, let me think about that again.” You need to learn planning tokens where it’s like “I’m going to start by making a plan. Here’s my plan of attack. I’m going to write a draft and now I’m going to critique my draft and think about it.” These aren’t things that models can do now, but the question is how hard it is.", "There are two paths to agents. When Sholto was on your podcast, he talked about scaling leading to more nines of reliability . That’s one path. The other path is the unhobbling path. It needs to learn this System 2 process. If it can learn that, it can use millions of tokens and think coherently.", "Here’s an analogy. When you drive, you’re on autopilot most of the time. Sometimes you hit a weird construction zone or intersection. Sometimes my girlfriend is in the passenger seat and I’m like “ah, be quiet for a moment, I need to figure out what’s going on.”", "You go from autopilot to System 2 and you’re thinking about how to do it. Scaling improves that System 1 autopilot. The brute force way to get to agents is improving that system. If you can get System 2 working, you can quickly jump to something more agentified and test time compute overhang is unlocked.", "Dwarkesh Patel 00:13:57", "What’s the reason to think this is an easy win? Is there some loss function that easily enables System 2 thinking? There aren’t many animals with System 2 thinking. It took a long time for evolution to give us System 2 thinking.", "Pre-training has trillions of tokens of Internet text, I get that. You match that and get all of these free training capabilities. What’s the reason to think this is an easy unhobbling?", "Leopold Aschenbrenner 00:14:29", "First of all, pre-training is magical. It gave us a huge advantage for models of general intelligence because you can predict the next token. But there’s a common misconception. Predicting the next token lets the model learn incredibly rich representations. Representation learning properties are the magic of deep learning . Rather than just learning statistical artifacts, the models learn models of the world. That’s why they can generalize, because it learned the right representations.", "When you train a model, you have this raw bundle of capabilities that’s useful. The unhobbling from GPT-2 to GPT-4 took this raw mass and RLHF ’d it into a good chatbot. That was a huge win.", "In the original InstructGPT paper , comparing RLHF vs. non-RLHF models it’s like a 100x model size win on human preference rating. It started to be able to do simple chain-of-thought and so on. But you still have this advantage of all these raw capabilities, and there’s still a huge amount you’re not doing with them.", "This pre-training advantage is also the difference to robotics . People used to say it was a hardware problem. The hardware is getting solved, but you don’t have this huge advantage of bootstrapping with pre-training. You don’t have all this unsupervised learning you can do. You have to start right away with RL self-play .", "The question is why RL and unhobbling might work. Bootstrapping is an advantage. Your Twitter bio is being pre-trained. You’re not being pre-trained anymore. You were pre-trained in grade school and high school. At some point, you transition to being able to learn by yourself. You weren’t able to do it in elementary school. High school is probably where it started and by college, if you’re smart, you can teach yourself. Models are just starting to enter that regime.", "It’s a little bit more scaling and then you figure out what goes on top. It won’t be trivial. A lot of deep learning seems obvious in retrospect. There’s some obvious cluster of ideas. There are some ideas that seem a little dumb but work. There are a lot of details you have to get right. We’re not going to get this next month. It’ll take a while to figure out.", "Dwarkesh Patel 00:17:04", "A while for you is like half a year.", "Leopold Aschenbrenner 00:17:07", "I don’t know, between six months and three years. But it's possible. It’s also very related to the issue of the data wall . Here’s one intuition on learning by yourself. Pre-training is kind of like the teacher lecturing to you and the words are flying by. You’re just getting a little bit from it.", "That's not what you do when you learn by yourself. When you learn by yourself, say you're reading a dense math textbook, you're not just skimming through it once. Some wordcels just skim through and reread and reread the math textbook and they memorize.", "What you do is you read a page, think about it, have some internal monologue going on, and have a conversation with a study buddy. You try a practice problem and fail a bunch of times. At some point it clicks, and you're like, \"this made sense.\" Then you read a few more pages.", "We've kind of bootstrapped our way to just starting to be able to do that now with models. The question is, can you use all this sort of self-play, synthetic data , RL to make that thing work. Right now, there's in-context learning , which is super sample efficient . In the Gemini paper , it just learns a language in-context. Pre-training, on the other hand, is not at all sample efficient.", "What humans do is a kind of in-context learning. You read a book, think about it, until eventually it clicks. Then you somehow distill that back into the weights . In some sense, that's what RL is trying to do. RL is super finicky, but when it works it's kind of magical.", "It's the best possible data for the model. It’s when you try a practice problem, fail, and at some point figure it out in a way that makes sense to you. That's the best possible data for you because it's the way you would have solved the problem, rather than just reading how somebody else solved the problem, which doesn't initially click.", "Dwarkesh Patel 00:19:18", "By the way, if that take sounds familiar it's because it was part of the question I asked John Schulman. It goes to illustrate the thing I said in the intro. A bunch of the things I've learned about AI comes from these dinners we do before the interviews with me, you, Sholto, and a couple of others. We’re like, “what should I ask John Schulman, what I should ask Dario .”", "Suppose this is the way things go and we get these unhobblings—", "Leopold Aschenbrenner 00:19:42", "And the scaling. You have this baseline of this enormous force of scaling. GPT-2 was amazing. It could string together plausible sentences, but it could barely do anything. It was kind of like a preschooler. GPT-4, on the other hand, could write code and do hard math, like a smart high schooler. This big jump in capability is explored in the essay series. I count the orders of magnitude of compute and scale-up of algorithmic progress.", "Scaling alone by 2027-2028 is going to do another preschool to high school jump on top of GPT-4. At a per token level, the models will be incredibly smart. They'll gain more reliability, and with the addition of unhobblings, they'll look less like chatbots and more like agents or drop-in remote workers. That's when things really get going.", "(00:20:31) – AI 2028: The return of history", "Dwarkesh Patel 00:20:31", "I want to ask more questions about this but let's zoom out. Suppose you're right about this. This is because of the 2027 cluster which is at 10 GW?", "Leopold Aschenbrenner 00:20:45", "2028 is 10 GW. Maybe it'll be pulled forward.", "Dwarkesh Patel 00:20:50", "Something like a 5.5 level by 2027, whatever that's called. What does the world look like at that point? You have these remote workers who can replace people. What is the reaction to that in terms of the economy, politics, and geopolitics?", "Leopold Aschenbrenner 00:21:06", "2023 was a really interesting year to experience as somebody who was really following the AI stuff.", "Dwarkesh Patel 00:21:16", "What were you doing in 2023?", "Leopold Aschenbrenner 00:21:18", "OpenAI. When you were at OpenAI in 2023, it was a weird thing. You almost didn't want to talk about AI or AGI. It was kind of a dirty word. Then in 2023, people saw ChatGPT for the first time, they saw GPT-4, and it just exploded.", "It triggered huge capital expenditures from all these firms and an explosion in revenue from Nvidia and so on. Things have been quiet since then, but the next thing has been in the oven. I expect every generation these g-forces to intensify. People will see the models. They won’t have counted the OOMs so they're going to be surprised. It'll be kind of crazy.", "Revenue is going to accelerate. Suppose you do hit $10 billion by the end of this year. Suppose it just continues on the trajectory of revenue doubling every six months. It's not actually that far from $100 billion, maybe by 2026. At some point, what happened to Nvidia is going to happen to Big Tech. It's going to explode. A lot more people are going to feel it.", "2023 was the moment for me where AGI went from being this theoretical, abstract thing. I see it, I feel it, and I see the path. I see where it's going. I can see the cluster it's trained on, the rough combination of algorithms, the people, how it's happening. Most of the world is not there yet. Most of the people who feel it are right here. A lot more of the world is going to start feeling it. That's going to start being intense.", "Dwarkesh Patel 00:22:59", "Right now, who feels it? You can go on Twitter and there are these GPT wrapper companies, like, \"whoa, GPT-4 is going to change our business.\"", "Leopold Aschenbrenner 00:23:06", "I'm so bearish on the wrapper companies because they're betting on stagnation. They're betting that you have these intermediate models and it takes so much schlep to integrate them. I'm really bearish because we're just going to sonic boom you. We're going to get the unhobblings. We're going to get the drop-in remote worker. Your stuff is not going to matter.", "Dwarkesh Patel 00:23:23", "So that's done. SF, this crowd, is paying attention now. Who is going to be paying attention in 2026 and 2027? Presumably, these are years in which hundreds of billions of capex is being spent on AI.", "Leopold Aschenbrenner 00:23:40", "The national security state is going to start paying a lot of attention. I hope we get to talk about that.", "Dwarkesh Patel 00:23:48", "Let’s talk about it now. What happens? What is the immediate political reaction? Looking internationally, I don't know if Xi Jinping sees the GPT-4 news and goes, \"oh, my God, look at the MMLU score on that. What are we doing about this, comrade?\"", "So what happens when he sees a remote worker replacement and it has $100 billion in revenue? There’s a lot of businesses that have $100 billion in revenue, and people aren't staying up all night talking about it.", "Leopold Aschenbrenner 00:24:15", "The question is, when does the CCP and when does the American national security establishment realize that superintelligence is going to be absolutely decisive for national power? This is where the intelligence explosion stuff comes in, which we should talk about later.", "You have AGI. You have this drop-in remote worker that can replace you or me, at least for remote jobs. Fairly quickly, you turn the crank one or two more times and you get a thing that's smarter than humans.", "Even more than just turning the crank a few more times, one of the first jobs to be automated is going to be that of an AI researcher or engineer. If you can automate AI research, things can start going very fast.", "Right now, there's already at this trend of 0.5 OOMs a year of algorithmic progress. At some point, you're going to have GPU fleets in the tens of millions for inference or more. You’re going to be able to run 100 million human equivalents of these automated AI researchers.", "If you can do that, you can maybe do a decade's worth of ML research progress in a year. You get some sort of 10x speed up. You can make the jump to AI that is vastly smarter than humans within a year, a couple of years.", "That broadens from there. You have this initial acceleration of AI research. You apply R&D to a bunch of other fields of technology. At this point, you have a billion super intelligent researchers, engineers, technicians, everything. They’re superbly competent at all things.", "They're going to figure out robotics. We talked about that being a software problem. Well, you have a billion super smart — smarter than the smartest human researchers — AI researchers in your cluster. At some point during the intelligence explosion, they're going to be able to figure out robotics. Again, that’ll expand.", "If you play this picture forward, it is fairly unlike any other technology. A couple years of lead could be utterly decisive in say, military competition. If you look at the first Gulf War , Western coalition forces had a 100:1 kill ratio. They had better sensors on their tanks . They had better precision missiles, GPS, and stealth. They had maybe 20-30 years of technological lead. They just completely crushed them.", "Superintelligence applied to broad fields of R&D — and the industrial explosion that comes from it, robots making a lot of material — could compress a century’s worth of technological progress into less than a decade. That means that a couple years could mean a Gulf War 1-style advantage in military affairs. That’s including a decisive advantage that even preempts nukes.", "How do you find nuclear stealth submarines? Right now, you have sensors and software to detect where they are. You can do that. You can find them. You have millions or billions of mosquito-sized drones, and they take out the nuclear submarines. They take out the mobile launchers. They take out the other nukes.", "It’s potentially enormously destabilizing and enormously important for national power. At some point people are going to realize that. Not yet, but they will. When they do, it won’t just be the AI researchers in charge.", "The CCP is going to have an all-out effort to infiltrate American AI labs. It’ll involve billions of dollars, thousands of people, and the full force of the Ministry of State Security . The CCP is going to try to outbuild us.", "They added as much power in the last decade as an entire US electric grid . So the 100 GW cluster, at least the 100 GW part of it, is going to be a lot easier for them to get. By this point, it's going to be an extremely intense international competition.", "Dwarkesh Patel 00:28:26", "One thing I'm uncertain about in this picture is if it’s like what you say, where it's more of an explosion. You’ve developed an AGI. You make it into an AI researcher. For a while, you're only using this ability to make hundreds of millions of other AI researchers. The thing that comes out of this really frenetic process is a superintelligence. Then that goes out in the world and is developing robotics and helping you take over other countries and whatever.", "Leopold Aschenbrenner 00:29:04", "It's a little bit more gradual. It's an explosion that starts narrowly. It can do cognitive jobs. The highest ROI use for cognitive jobs is to make the AI better and solve robotics. As you solve robotics, now you can do R&D in biology and other technology.", "Initially, you start with the factory workers. They're wearing the glasses and AirPods, and the AI is instructing them because you can make any worker into a skilled technician. Then you have the robots come in. So this process expands.", "Dwarkesh Patel 00:29:30", "Meta's Ray-Bans are a complement to Llama .", "Leopold Aschenbrenner 00:29:35", "With the fabs in the US, their constraint is skilled workers . Even if you don't have robots, you have the cognitive superintelligence and can kind of make them all into skilled workers immediately. That's a very brief period. Robots will come soon.", "Dwarkesh Patel 00:29:44", "Suppose this is actually how the tech progresses in the United States, maybe because these companies are already generating hundreds of billions of dollars of AI revenue", "Leopold Aschenbrenner 00:29:54", "At this point, companies are borrowing hundreds of billions or more in the corporate debt markets.", "Dwarkesh Patel 00:29:58", "Why is a CCP bureaucrat, some 60-year-old guy, looking at this and going, \"oh, Copilot has gotten better now\" and now—", "Leopold Aschenbrenner 00:30:07", "This is much more than Copilot has gotten better now.", "Dwarkesh Patel 00:30:12", "It’d require shifting the production of an entire country, dislocating energy that is otherwise being used for consumer goods or something, and feeding all that into the data centers. Part of this whole story is that you realize superintelligence is coming soon. You realize it and maybe I realize it. I'm not sure how much I realize it.", "Will the national security apparatus in the United States and the CCP realize it?", "Leopold Aschenbrenner 00:30:41", "This is a really key question. We have a few more years of mid-game. We have a few more 2023s. That just starts updating more and more people. The trend lines will become clear.", "You will see some amount of the COVID dynamic. COVID in February of 2020 honestly feels a lot like today. It feels like this utterly crazy thing is coming. You see the exponential and yet most of the world just doesn't realize it. The mayor of New York is like, \"go out to the shows,\" and \"this is just Asian racism.\" At some point, people saw it and then crazy, radical reactions came.", "Dwarkesh Patel 00:31:32", "By the way, what were you doing during COVID? Was it your freshman or sophomore year?", "Leopold Aschenbrenner 00:31:39", "Junior.", "Dwarkesh Patel 00:31:40", "Still, you were like a 17-year-old junior or something right? Did you short the market or something? Did you sell at the right time?", "Leopold Aschenbrenner 00:31:51", "Yeah.", "Dwarkesh Patel 00:31:54", "So there will be a March 2020 moment.", "You can make the analogy you make in the series that this will cause a reaction like, “we have to do the Manhattan Project again for America here.” I wonder what the politics of this will be like. The difference here is that it’s not just like, “we need the bomb to beat the Nazis.”", "We'll be building this thing that makes all our energy prices go up a bunch and it's automating a lot of our jobs. The climate change stuff people are going to be like, \"oh, my God, it's making climate change worse and it's helping Big Tech.\"", "Politically, this doesn't seem like a dynamic where the national security apparatus or the president is like, \"we have to step on the gas here and make sure America wins.\"", "Leopold Aschenbrenner 00:32:42", "Again, a lot of this really depends on how much people are feeling it and how much people are seeing it. Our generation is so used to peace, American hegemony and nothing matters. The historical norm is very much one of extremely intense and extraordinary things happening in the world with intense international competition.", "There's a 20-year very unique period. In World War II, something like 50% of GDP went to war production . The US borrowed over 60% of GDP. With Germany and Japan I think it was over 100%. In World War I, the UK, France, and Germany all borrowed over 100% of GDP.", "Much more was on the line. People talk about World War I being so destructive with 20 million Soviet soldiers dying and 20% of Poland. That happened all the time. During the Seven Years' War something like 20-30% of Prussia died. In the Thirty Years' War , up to 50% of a large swath of Germany died.", "Will people see that the stakes here are really high and that history is actually back? The American national security state thinks very seriously about stuff like this. They think very seriously about competition with China. China very much thinks of itself on this historical mission of the rejuvenation of the Chinese nation. They think a lot about national power. They think a lot about the world order.", "There's a real question on timing. Do they start taking this seriously when the intelligence explosion is already happening quite late. Do they start taking this seriously two years earlier? That matters a lot for how things play out.", "At some point they will and they will realize that this will be utterly decisive for not just some proxy war but for major questions. Can liberal democracy continue to thrive? Can the CCP continue existing? That will activate forces that we haven't seen in a long time.", "Dwarkesh Patel 00:35:06", "The great power conflict definitely seems compelling. All kinds of different things seem much more likely when you think from a historical perspective. You zoom out beyond the liberal democracy that we’ve had the pleasure to live in America for say the last 80 years. That includes things like dictatorships, war, famine, etc.", "I was reading The Gulag Archipelago and one of the chapters begins with Solzhenitsyn saying how if you had told a Russian citizen under the tsars that because of all these new technologies — we wouldn’t see some Great Russian revival with Russia becoming a great power and the citizens made wealthy — you would see tens of millions of Soviet citizens tortured by millions of beasts in the worst possible ways. If you’d told them that that would be the result of the 20th century, they wouldn’t have believed you. They’d have called you a slanderer.", "Leopold Aschenbrenner 00:35:59", "The possibilities for dictatorship with superintelligence are even crazier as well. Imagine you have a perfectly loyal military and security force. No more rebellions. No more popular uprisings. You have perfect lie detection. You have surveillance of everybody. You can perfectly figure out who's the dissenter and weed them out. No Gorbachev who had some doubts about the system would have ever risen to power. No military coup would have ever happened.", "There's a real way in which part of why things have worked out is that ideas can evolve. There's some sense in which time heals a lot of wounds and solves a lot of debates. Throughout time, a lot of people had really strong convictions, but a lot of those have been overturned over time because there's been continued pluralism and evolution.", "Imagine applying a CCP-like approach to truth where truth is what the party says. When you supercharge that with superintelligence, that could just be locked in and enshrined for a long time. The possibilities are pretty terrifying.", "To your point about history and living in America for the past eight years, this is one of the things I took away from growing up in Germany. A lot of this stuff feels more visceral. My mother grew up in the former East , my father in the former West . They met shortly after the Wall fell . The end of the Cold War was this extremely pivotal moment for me because it's the reason I exist.", "I grew up in Berlin with the former Wall. My great-grandmother, who is still alive, is very important in my life. She was born in 1934 and grew up during the Nazi era. In World War II, she saw the firebombing of Dresden from this country cottage where they were as kids. Then she spent most of her life in the East German communist dictatorship.", "She'd tell me about how Soviet tanks came when there was the popular uprising in 1954 . Her husband was telling her to get home really quickly and get off the streets. She had a son who tried to ride a motorcycle across the Iron Curtain and then was put in a Stasi prison for a while. Finally, when she's almost 60, it was the first time she lived in a free country, and a wealthy country.", "When I was a kid, the thing she always really didn't want me to do was get involved in politics. Joining a political party had very bad connotations for her. She raised me when I was young. So it doesn't feel that long ago. It feels very close.", "Dwarkesh Patel 00:38:43", "There’s one thing I wonder about when we're talking today about the CCP. The people in China who will be doing their version of this project will be AI researchers who are somewhat Westernized. They’ll either have gotten educated in the West or have colleagues in the West.", "Are they going to sign up for the CCP project that's going to hand over control to Xi Jinping? What's your sense of that? Fundamentally, they're just people, right? Can't you convince them about the dangers of superintelligence?", "Leopold Aschenbrenner 00:39:20", "Will they be in charge though? In some sense, this is also the case in the US. This is like the rapidly depreciating influence of the lab employees. Right now, the AI lab employees have so much power. You saw this November event . It’s so much power.", "Both are going to get automated and they're going to lose all their power. It'll just be a few people in charge with their armies of automated AIs. It’s also the politicians and the generals and the national security state. There are some of these classic scenes from the Oppenheimer movie. The scientists built it and then the bomb was shipped away and it was out of their hands.", "It's good for lab employees to be aware of this. You have a lot of power now, but maybe not for that long. Use it wisely. I do think they would benefit from some more organs of representative democracy.", "Dwarkesh Patel 00:40:13", "What do you mean by that?", "Leopold Aschenbrenner 00:40:14", "In the OpenAI board events, employee power is exercised in a very direct democracy way. How some of that went about really highlighted the benefits of representative democracy and having some deliberative organs.", "(00:40:26) – Espionage & American AI superiority", "Dwarkesh Patel 00:40:26", "Interesting. Let's go back to the $100 billion revenue question. The companies are trying to build clusters that are this big. Where are they building it? Say it's the amount of energy that would be required for a small or medium-sized US state. Does Colorado then get no power because it's happening in the United States? Is it happening somewhere else?", "Leopold Aschenbrenner 00:40:47", "This is the thing that I always find funny, when you talk about Colorado getting no power. The easy way to get the power would be to displace less economically useful stuff. Buy up the aluminum smelting plant that has a gigawatt. We're going to replace it with the data center because that's important. That's not actually happening because a lot of these power contracts are really locked in long-term. Also, people don't like things like this.", "In practice what it requires, at least right now, is building new power. That might change. That's when things get really interesting, when it's like, “no, we're just dedicating all of the power to the AGI.”", "So right now it's building new power. 10 GW is quite doable. It's like a few percent of US natural gas production . When you have the 10 GW training cluster, you have a lot more inference. 100 gigawatts is where it starts getting pretty wild. That's over 20% of US electricity production . It's pretty doable, especially if you're willing to go for natural gas.", "It is incredibly important that these clusters are in the United States.", "Dwarkesh Patel 00:41:50", "Why does it matter that it's in the US?", "Leopold Aschenbrenner 00:41:52", "There are some people who are trying to build clusters elsewhere. There's a lot of free-flowing Middle Eastern money that's trying to build clusters elsewhere. This comes back to the national security question we talked about. Would you do the Manhattan Project in the UAE?", "You can put the clusters in the US and you can put them in allied democracies. Once you put them in authoritarian dictatorships, you create this irreversible security risk. Once the cluster is there, it's much easier for them to exfiltrate the weights. They can literally steal the AGI, the superintelligence. It’s like they got a direct copy of the atomic bomb. It makes it much easier for them. They have weird ties to China. They can ship that to China. That's a huge risk.", "Another thing is they can just seize the compute. The issue here is people right now are thinking of this as ChatGPT, Big Tech product clusters. The clusters being planned now, three to five years out, may well be the AGI, superintelligence clusters. When things get hot, they might just seize the compute.", "Suppose we put 25% of the compute capacity in these Middle Eastern dictatorships. Say they seize that. Now it's a ratio of compute of 3:1. We still have more, but even with only 25% of compute there it starts getting pretty hairy. 3:1 is not that great of a ratio. You can do a lot with that amount of compute.", "Say they don't actually do this. Even if they don't actually seize the compute, even if they actually don't steal the weights, there's just a lot of implicit leverage you get. They get seats at the AGI table. I don't know why we're giving authoritarian dictatorships the seat at the AGI table.", "Dwarkesh Patel 00:43:36", "There's going to be a lot of compute in the Middle East if these deals go through.", "First of all, who is it? Is it just every single Big Tech company trying to figure it out over there?", "Leopold Aschenbrenner 00:43:44", "It’s not everybody, some.", "Dwarkesh Patel 00:43:45", "There are reports , I think Microsoft. We'll get into it.", "So say the UAE gets a bunch of compute because we're building the clusters there. Let's say they have 25% of the compute. Why does a compute ratio matter? If it's about them being able to kick off the intelligence explosion, isn't it just some threshold where you have 100 million AI researchers or you don't?", "Leopold Aschenbrenner 00:44:12", "You can do a lot with 33 million extremely smart scientists. That might be enough to build the crazy bio weapons. Then you're in a situation where they stole the weights and they seized the compute.", "Now they can make these crazy new WMDs that will be possible with superintelligence. Now you've just proliferated the stuff that’ll be really powerful. Also, 3x on compute isn't actually that much.", "The riskiest situation is if we're in some sort of really neck and neck, feverish international struggle. Say we're really close with the CCP and we're months apart. The situation we want to be in — and could be in if we play our cards right — is a little bit more like the US building the atomic bomb versus the German project years behind. If we have that, we just have so much more wiggle room to get safety right.", "We're going to be building these crazy new WMDs that completely undermine nuclear deterrence. That's so much easier to deal with if you don't have somebody right on your tails and you have to go at maximum speed. You have no wiggle room. You're worried that at any time they can overtake you.", "They can also just try to outbuild you. They might literally win. China might literally win if they can steal the weights, because they can outbuild you. They may have less caution, both good and bad caution in terms of whatever unreasonable regulations we have.", "If you're in this really tight race, this sort of feverish struggle, that's when there's the greatest peril of self-destruction .", "Dwarkesh Patel 00:45:58", "Presumably the companies that are trying to build clusters in the Middle East realize this. Is it just that it’s impossible to do this in America? If you want American companies to do this at all, do you have to do it in the Middle East or not at all? Then you just have China build a Three Gorges Dam cluster.", "Leopold Aschenbrenner 00:46:12", "There’s a few reasons. People aren’t thinking about this as the AGI superintelligence cluster. They’re just like, “ah, cool clusters for my ChatGPT.”", "Dwarkesh Patel 00:46:19", "If you’re doing ones for inference, presumably you could spread them out across the country or something. The ones they’re building, they’re going to do one training run in a single thing they’re building.", "Leopold Aschenbrenner 00:46:33", "It’s just hard to distinguish between inference and training compute. People can claim it’s inference compute, but they might realize that actually this is going to be useful for training compute too.", "Dwarkesh Patel 00:46:45", "Because of synthetic data and things like that?", "Leopold Aschenbrenner 00:46:46", "RL looks a lot like inference, for example. Or you just end up connecting them in time. It's a lot like raw materials. It's like placing your uranium refinement facilities there.", "So there are a few reasons. One, they don't think about this as the AGI cluster. Another is just that there’s easy money coming from the Middle East.", "Another one is that some people think that you can't do it in the US. We actually face a real system competition here. Some people think that only autocracies that can do this with top-down mobilization of industrial capacity and the power to get stuff done fast.", "Again, this is the sort of thing we haven't faced in a while. But during the Cold War, there was this intense system competition. East vs. West Germany was this. It was West Germany as liberal democratic capitalism vs. state-planned communism.", "Now it's obvious that the free world would win. But even as late as 1961, Paul Samuelson was predicting that the Soviet Union would outgrow the United States because they were able to mobilize industry better.", "So there are some people who shitpost about loving America, but then in private they're betting against America. They're betting against the liberal order. Basically, it's just a bad bet. This stuff is really possible in the US.", "To make it possible in the US, to some degree we have to get our act together. There are basically two paths to doing it in the US. One is you just have to be willing to do natural gas. There's ample natural gas. You put your cluster in West Texas. You put it in southwest Pennsylvania by the Marcellus Shale . The 10 GW cluster is super easy. The 100 GW cluster is also pretty doable. I think natural gas production in the United States has almost doubled in a decade . You do that one more time over the next seven years, you could power multiple trillion-dollar data centers.", "The issue there is that a lot of people made these climate commitments, not just the government. It's actually the private companies themselves, Microsoft , Amazon , etc., that have made these climate commitments. So they won't do natural gas. I admire the climate commitments, but at some point the national interest and national security is more important.", "The other path is doing green energy megaprojects. You do solar and batteries and SMR s and geothermal. If we want to do that, there needs to be a broad deregulatory push. You can't have permitting take a decade. You have to reform FERC . You have to have blanket NEPA exemptions for this stuff.", "There are inane state-level regulations. You can build the solar panels and batteries next to your data center, but it'll still take years because you actually have to hook it up to the state electrical grid. You have to use governmental powers to create rights of way to have multiple clusters and connect them and have the cables.", "Ideally we do both. Ideally we do natural gas and the broader deregulatory green agenda. We have to do at least one. Then this stuff is possible in the United States.", "Dwarkesh Patel 00:49:44", "Before the conversation I was reading a good book about World War II industrial mobilization in the United States called Freedom's Forge . I’m thinking back on that period, especially in the context of reading Patrick Collison’s Fast and the progress study stuff . There’s this narrative out there that we had state capacity back then and people just got shit done but that now it's a clusterfuck.", "Leopold Aschenbrenner 00:50:09", "It wasn’t at all the case!", "Dwarkesh Patel 00:50:10", "It was really interesting. You had people from the Detroit auto industry side, like William Knudsen , who were running mobilization for the United States. They were extremely competent. At the same time you had labor organization and agitation , which is very analogous to the climate change pledges and concerns we have today.", "They would literally have these strikes, into 1941, costing millions of man-hours worth of time when we're trying to make tens of thousands of planes a month. They would just debilitate factories for trivial concessions from capital that were pennies on the dollar.", "There were concerns that the auto companies were trying to use the pretext of a potential war to prevent paying labor the money it deserves. So with what climate change is today, you might think, \"ah, America's fucked. We're not going to be able to build this shit if you look at NEPA or something,” I didn't realize how debilitating labor was in World War II.", "Leopold Aschenbrenner 00:51:18", "It wasn’ just that. Before 1939, the American military was in total shambles. You read about it and it reads a little bit like the German military today . Military expenditures were I think less than 2% of GDP. All the European countries had gone, even in peacetime, above 10% of GDP.", "It was rapid mobilization starting from nothing. We were making no planes. There were no military contracts. Everything had been starved during the Great Depression. But there was this latent capacity. At some point the United States got its act together.", "This applies the other way around too with China. Sometimes people count them out a little bit with the export controls and so on. They're able to make 7-nanometer chips now. There's a question of how many they could make. There's at least a possibility that they're going to mature that ability and make a lot of 7-nanometer chips.", "There's a lot of latent industrial capacity in China. They are able to build a lot of power fast. Maybe that isn't activated for AI yet. At some point, the same way the United States and a lot of people in the US government are going to wake up, the CCP is going to wake up.", "Dwarkesh Patel 00:52:22", "Companies realize that scaling is a thing. Obviously their whole plans are contingent on scaling. So they understand that in 2028 we're going to be building 10 GW data centers.", "At that point, the people who can keep up are Big Tech, potentially at the edge of their capabilities, sovereign wealth fund-funded things, and also major countries like America and China. What's their plan? With the AI labs, what's their plan given this landscape? Do they not want the leverage of being in the United States?", "Leopold Aschenbrenner 00:53:07", "The Middle East does offer capital, but America has plenty of capital. We have trillion-dollar companies. What are these Middle Eastern states? They're kind of like trillion-dollar oil companies. We have trillion-dollar companies and very deep financial markets. Microsoft could issue hundreds of billions of dollars of bonds and they can pay for these clusters.", "Another argument being made, which is worth taking seriously, is that if we don't work with the UAE or with these Middle Eastern countries, they're just going to go to China. They're going to build data centers and pour money into AI regardless. If we don't work with them, they'll just support China.", "There's some merit to the argument in the sense that we should be doing benefit-sharing with them. On the road to AGI, there should be two tiers of coalitions. There should be a narrow coalition of democracies that's developing AGI. Then there should be a broader coalition of other countries, including dictatorships, and we should offer them some of the benefits of AI.", "If the UAE wants to use AI products, run Meta recommendation engines, or run the last-generation models, that's fine. By default, they just wouldn't have had this seat at the AGI table. So they have some money, but a lot of people have money.", "The only reason they're getting this seat at the AGI table and giving these dictators this leverage over this extremely important national security technology, is because we're getting them excited and offering it to them.", "Dwarkesh Patel 00:54:50", "Who specifically is doing this? Who are the companies who are going there to fundraise?", "Leopold Aschenbrenner 00:54:55", "It’s been reported that Sam Altman is trying to raise $7 trillion or whatever for a chip project. It's unclear how many of the clusters will be there, but definitely stuff is happening.", "There’s another reason I'm a little suspicious of this argument that if the US doesn't work with them, they'll go to China. I've heard from multiple people — not from my time at OpenAI, and I haven't seen the memo — that at some point several years ago, OpenAI leadership had laid out a plan to fund and sell AGI by starting a bidding war between the governments of the United States, China, and Russia.", "It's surprising to me that they're willing to sell AGI to the Chinese and Russian governments. There's also something that feels eerily familiar about starting this bidding war and then playing them off each other, saying, \"well, if you don't do this, China will do it.\"", "Dwarkesh Patel 00:55:47", "Interesting. That's pretty fucked up.", "Suppose you're right. We ended up in this place because, as one of our friends put it, the Middle East has billions or trillions of dollars up for persuasion like no other place in the world.", "Leopold Aschenbrenner 00:56:11", "With little accountability. There’s no Microsoft board. It's only the dictator.", "Dwarkesh Patel 00:56:15", "Let's say you're right, that you shouldn't have gotten them excited about AGI in the first place. Now we're in a place where they are excited about AGI and they're like, \"fuck, we want to have GPT-5 while you're going to be off building superintelligence. This Atoms for Peace thing doesn't work for us.\" If you're in this place, don't they already have the leverage?", "Leopold Aschenbrenner 00:56:36", "The UAE on its own is not competitive. They're already export-controlled. You're not supposed to ship Nvidia chips over there. It's not like they have any of the leading AI labs. They have money, but it's hard to just translate money into progress.", "Dwarkesh Patel 00:56:51", "But I want to go back to other things you've been saying in laying out your vision. There's this almost industrial process of putting in the compute and algorithms, adding that up, and getting AGI on the other end. If it's something more like that, then the case for somebody being able to catch up rapidly seems more compelling than if it's some bespoke...", "Leopold Aschenbrenner 00:57:00", "Well, if they can steal the algorithms and if they can steal the weights, that’s really important.", "Dwarkesh Patel 00:57:20", "How easy would it be for an actor to steal the things that are not the trivial released things, like Scarlett Johansson's voice, but the RL things we're talking about, the unhobblings?", "Leopold Aschenbrenner 00:57:32", "It’s all extremely easy. They don’t make the claim that it’s hard. DeepMind put out their Frontier Safety Framework and they lay out security levels, zero to four. Four is resistant to state activity. They say, we're at level zero. Just recently, there was an indictment of a guy who stole a bunch of really important AI code and went to China with it. All he had to do to steal the code was copy it, put it into Apple Notes, and export it as a PDF. That got past their monitoring.", "Google has the best security of any of the AI labs probably, because they have the Google infrastructure. I would think of the security of a startup. What does security of a startup look like? It's not that good. It's easy to steal.", "Dwarkesh Patel 00:58:18", "Even if that's the case, a lot of your post is making the argument for why we are going to get the intelligence explosion. If we have somebody with the intuition of an Alec Radford to come up with all these ideas, that intuition is extremely valuable and you can scale that up.", "If it's just intuition, then that's not going to be just in the code, right? Also because of export controls, these countries are going to have slightly different hardware. You're going to have to make different trade-offs and probably rewrite things to be compatible with that.", "Is it just a matter of getting the right pen drive and plugging it into the gigawatt data center next to the Three Gorges Dam and then you're off to the races?", "Leopold Aschenbrenner 00:59:01", "There are a few different things, right? One threat model is just them stealing the weights themselves. The weights one is particularly insane because they can just steal the literal end product — just make a replica of the atomic bomb — and then they're ready to go. That one is extremely important around the time we have AGI and superintelligence because China can build a big cluster by default. We'd have a big lead because we have the better scientists, but if we make the superintelligence and they just steal it, they're off to the races.", "Weights are a little bit less important right now because who cares if they steal the GPT-4 weights. We still have to get started on weight security now because if we think there’s AGI by 2027, this stuff is going to take a while. It's not just going to be like, \"oh, we do some access control.\" If you actually want to be resistant to Chinese espionage, it needs to be much more intense.", "The thing that people aren't paying enough attention to is the secrets. The compute stuff is sexy, but people underrate the secrets. The half an order of magnitude a year is just by default, sort of algorithmic progress. That's huge. If we have a few years of lead, by default, that's a 10-30x, 100x bigger cluster, if we protect them.", "There's this additional layer of the data wall. We have to get through the data wall. That means we actually have to figure out some sort of basic new paradigm. So it’s the “AlphaGo step two.” “AlphaGo step one” learns from human imitation. “AlphaGo step two” is the kind of self-play RL thing that everyone's working on right now. Maybe we're going to crack it. If China can't steal that, then they're stuck. If they can steal it, they're off to the races.", "Dwarkesh Patel 01:00:45", "Whatever that thing is, can I literally write it down on the back of a napkin? If it's that easy, then why is it so hard for them to figure it out? If it's more about the intuitions, then don't you just have to hire Alec Radford? What are you copying down?", "Leopold Aschenbrenner 01:00:57", "There are a few layers to this. At the top is the fundamental approach. On pre-training it might be unsupervised learning, next token prediction, training on the entire Internet. You actually get a lot of juice out of that already. That one's very quick to communicate.", "Then there's a lot of details that matter, and you were talking about this earlier. It's probably going to be somewhat obvious in retrospect, or there's going to be some not too complicated thing that'll work, but there's going to be a lot of details to get that.", "Dwarkesh Patel 01:01:29", "If that's true, then again, why do we think that getting state-level security in these startups will prevent China from catching up? It’s just like, \"oh, we know some sort of self-play RL will be required to get past the data wall.\"", "It's going to be solved by 2027, right? It's not that hard.", "Leopold Aschenbrenner 01:01:49", "The US, and the leading labs in the United States, have this huge lead. By default, China actually has some good LLMs because they're just using open source code, like Llama. People really underrate both the divergence on algorithmic progress and the lead the US would have by default because all this stuff was published until recently.", "Look at Chinchilla Scaling laws , MoE papers, transformers . All that stuff was published. That's why open source is good and why China can make some good models. Now, they're not publishing it anymore. If we actually kept it secret, it would be a huge edge.", "To your point about tacit knowledge and Alec Radford, there's another layer at the bottom that is something about large-scale engineering work to make these big training runs work. That is a little bit more like tacit knowledge, but China will be able to figure that out. It's engineering schlep, and they're going to figure out how to do it.", "Dwarkesh Patel 01:02:40", "Why can't they figure that out, but not how to get the RL thing working?", "Leopold Aschenbrenner 01:02:45", "I don't know. Germany during World War II went down the wrong path with heavy water . There's an amazing anecdote in The Making of the Atomic Bomb about this.", "Secrecy was one of the most contentious issues early on. Leo Szilard really thought a nuclear chain reaction and an atomic bomb were possible. He went around saying, \"this is going to be of enormous strategic and military importance.\" A lot of people didn't believe it or thought, \"maybe this is possible, but I'm going to act as though it's not, and science should be open.\"", "In the early days, there had been some incorrect measurements made on graphite as a moderator . Germany thought graphite wasn't going to work, so they had to do heavy water. But then Enrico Fermi made new measurements indicating that graphite would work. This was really important.", "Szilard assaulted Fermi with another secrecy appeal and Fermi was pissed off, throwing a temper tantrum. He thought it was absurd, saying, \"come on, this is crazy.\" But Szilard persisted, and they roped in another guy, George Pegram . In the end, Fermi didn't publish it.", "That was just in time. Fermi not publishing meant that the Nazis didn't figure out graphite would work. They went down the path of heavy water, which was the wrong path. This is a key reason why the German project didn't work out. They were way behind.", "We face a similar situation now. Are we just going to instantly leak how to get past the data wall and what the next paradigm is? Or are we not?", "Dwarkesh Patel 01:04:24", "The reason this would matter is if being one year ahead would be a huge advantage. In the world where you deploy AI over time they're just going to catch up anyway.", "I interviewed Richard Rhodes, the guy who wrote The Making of the Atomic Bomb . One of the anecdotes he had was when the Soviets realized America had the bomb. Obviously, we dropped it in Japan.", "Lavrentiy Beria — the guy who ran the NKVD , a famously ruthless and evil guy — goes to the Soviet scientist who was running their version of the Manhattan Project . He says, \"comrade, you will get us the American bomb.\" The guy says, \"well, listen, their implosion device actually is not optimal. We should make it a different way.\" Beria says, \"no, you will get us the American bomb, or your family will be camp dust.\"", "The thing that's relevant about that anecdote is that the Soviets would have had a better bomb if they hadn't copied the American design, at least initially. That suggests something about history, not just for the Manhattan Project. There's often this pattern of parallel invention because the tech tree implies that a certain thing is next — in this case, a self-play RL — and people work on that and are going to figure it out around the same time. There's not going to be that much gap in who gets it first.", "Famously, a bunch of people invented the light bulb around the same time . Is it the case that it might be true but the one year or six months makes the difference?", "Leopold Aschenbrenner 01:05:56", "Two years makes all the difference.", "Dwarkesh Patel 01:05:58", "I don't know if it'll be two years though.", "Leopold Aschenbrenner 01:06:01", "If we lock down the labs, we have much better scientists. We're way ahead. It would be two years. Even six months, a year, would make a huge difference. This gets back to the intelligence explosion dynamics. A year might be the difference between a system that's sort of human-level and a system that is vastly superhuman. It might be like five OOMs.", "Look at the current pace. Three years ago, on the math benchmark — these are really difficult high school competition math problems — we were at a few percent, we couldn't solve anything. Now it's solved. That was at the normal pace of AI progress. You didn't have a billion superintelligent researchers.", "A year is a huge difference, particularly after superintelligence. Once this is applied to many elements of R&D, you get an industrial explosion with robots and other advanced technologies. A couple of years might yield decades worth of progress. Again, it’s like the technological lead the U.S. had in the first Gulf War, when the 20-30 years of technological lead proved totally decisive. It really matters.", "Here’s another reason it really matters. Suppose they steal the weights, suppose they steal the algorithms, and they're close on our tails. Suppose we still pull out ahead. We're a little bit faster and we're three months ahead.", "The world in which we're really neck and neck, we only have a three-month lead, is incredibly dangerous. We're in this feverish struggle where if they get ahead, they get to dominate, maybe they get a decisive advantage. They're building clusters like crazy. They're willing to throw all caution to the wind. We have to keep up.", "There are crazy new WMDs popping up. Then we're going to be in the situation where it's crazy new military technology, crazy new WMDs, deterrence, mutually assured destruction keeps changing every few weeks. It's a completely unstable, volatile situation that is incredibly dangerous.", "So you have to look at it from the point of view that these technologies are dangerous, from the alignment point of view. It might be really important during the intelligence explosion to have a six-month wiggle room to be like, “look, we're going to dedicate more compute to alignment during this period because we have to get it right. We're feeling uneasy about how it's going.”", "One of the most important inputs to whether we will destroy ourselves or whether we will get through this incredibly crazy period is whether we have that buffer.", "(01:08:20) – Geopolitical implications of AI", "Dwarkesh Patel 01:08:20", "Before we go further, it's very much worth noting that almost nobody I talk to thinks about the geopolitical implications of AI. I have some object-level disagreements that we'll get into, things I want to iron out. I may not disagree in the end.", "The basic premise is that if you keep scaling, if people realize that this is where intelligence is headed, it's not just going to be the same old world. It won't just be about what model we're deploying tomorrow or what the latest thing is. People on Twitter are like, \"oh, GPT-4 is going to shake your expectations\" or whatever.", "COVID is really interesting because when March 2020 hit, it became clear to the world — presidents, CEOs, media, the average person — that there are other things happening in the world right now but the main thing we as a world are dealing with right now is COVID.", "Leopold Aschenbrenner 01:09:21", "Soon it will be AGI. This is the quiet period. Maybe you want to go on vacation. Maybe now is the last time you can have some kids. My girlfriend sometimes complains when I’m off doing work that I don’t spend enough time with her. She threatens to replace me with GPT-6 or whatever. I'm like, “GPT-6 will also be too busy doing AI research.”", "Dwarkesh Patel 01:09:51", "Why aren't other people talking about national security?", "Leopold Aschenbrenner 01:09:56", "I made this mistake with COVID. In February of 2020, I thought it was going to sweep the world and all the hospitals would collapse. It would be crazy, and then it'd be over. A lot of people thought this kind of thing at the beginning of COVID. They shut down their office for a month or whatever.", "The thing I just really didn't price in was societal reaction. Within weeks, Congress spent over 10% of GDP on COVID measures . The entire country was shut down. It was crazy. I didn't sufficiently price it in with COVID.", "Why do people underrate it? Being in the trenches actually gives you a less clear picture of the trend lines. You don’t have to zoom out that much, only a few years.", "When you're in the trenches, you're trying to get the next model to work. There's always something that's hard. You might underrate algorithmic progress because you're like, \"ah, things are hard right now,\" or \"data wall\" or whatever. When you zoom out just a few years and count up how much algorithmic progress was made, it's enormous.", "People also just don’t think about this stuff. Smart people really underrate espionage. Part of the security issue is that people don't realize how intense state-level espionage can be. This Israeli company had software that could just zero-click hack any iPhone . They just put in your number and it was a straight download of everything. The United States infiltrated an air-gapped atomic weapons program. Wild.", "Dwarkesh Patel 01:11:28", "Are you talking about Stuxnet ?", "Leopold Aschenbrenner 01:11:30", "Yeah. Intelligence agencies have stockpiles of zero-days . When things get really hot, maybe we'll send special forces to go to the data center or something. China does this. They threaten people's families. They’re like, “if you don't cooperate, if you don't give us the intel…”", "There's a good book along the lines of The Gulag Archipelago called Inside the Aquarium , which is by a Soviet GRU defector . GRU was military intelligence. Ilya recommended this book to me. When I read it, I was shocked at the intensity of state-level espionage.", "The whole book was about how they go to these European countries and try and recruit people to get the technology. Here’s one anecdote. This eventual defector, he's being trained at the GRU spy academy. To graduate from the spy academy before being sent abroad, you had to pass a test to show that you can do this.", "The test was recruiting a Soviet scientist in Moscow to give you information, like you would do in a foreign country. Of course, for whomever you recruited, the penalty for giving away secret information was death. So to graduate from the GRU spy academy, you had to condemn a countryman to death. States do this stuff.", "Dwarkesh Patel 01:12:58", "I started reading the book because you mentioned it in the series. I was wondering about the fact that you use this anecdote. Then you're like, \"a book recommended by Ilya.\" Is this some sort of Easter egg? We'll leave that as an exercise for the reader.", "Leopold Aschenbrenner 01:13:16", "The beatings will continue until morale improves.", "Dwarkesh Patel 01:13:23", "Suppose we live in a world where these secrets are locked down, but China realizes this progress is happening in America.", "Leopold Aschenbrenner 01:13:39", "The secrets probably won't be locked down. We’re probably going to live in the bad world. It's going to be really bad.", "Dwarkesh Patel 01:13:46", "Why are you so confident they won't be locked down?", "Leopold Aschenbrenner 01:13:48", "I'm not confident they won't be locked down, but it's just not happening.", "Dwarkesh Patel 01:13:52", "Let’s say tomorrow, the lab leaders get the message. How hard is it? What do they have to do? Do they get more security guards? Do they air-gap? What do they do?", "Leopold Aschenbrenner 01:14:03", "People have two reactions: \"we're already secure.\" We’re not.", "Then there's fatalism: \"it's impossible.\"", "You need to stay ahead of the curve of how AGI-pilled the CCP is. Right now, you've got to be resistant to normal economic espionage. They're not. I probably wouldn't be talking about this stuff if the labs were. I wouldn't want to wake up the CCP more. But this stuff is really trivial for them to do right now.", "So they're not resistant to that. It would be possible for a private company to be resistant to it. Both of us have friends in the quantitative trading world. Those secrets are shaped similarly where if I got on a call for an hour with somebody from a competitor firm, most of our alpha would be gone.", "Dwarkesh Patel 01:14:59", "You're going to worry about that pretty soon.", "Leopold Aschenbrenner 01:15:04", "All the alpha could be gone but in fact, their alpha often persists for many years and decades. So this doesn't seem to happen. There's a lot you could do if you went from current startup security to good private sector security: hedge funds, the way Google treats customer data or whatever. That'd be good right now.", "The issue is that basically the CCP will also get more AGI-pilled. At some point, we're going to face the full force of the Ministry of State Security. You're talking about smart people underrating espionage and the insane capabilities of states. This stuff is wild. There are papers about how you can find out the location of where you are in a video game map just from sounds. States can do a lot with electromagnetic emanations.", "At some point, you have to be working from a SCIF . Your cluster needs to be air-gapped and basically be a military base. You need to have intense security clearance procedures for employees. All this shit is monitored. They basically have security guards. You can't use any other dependencies. It's all got to be intensely vetted. All your hardware has to be intensely vetted.", "If they actually really face the full force of state-level espionage, this isn’t really the thing private companies can do empirically. Microsoft recently had executives' emails hacked by Russian hackers , and government emails they've hosted hacked by government actors . Also, there's just a lot of stuff that only the people behind the security clearances know and only they deal with.", "To actually resist the full force of espionage, you're going to need the government. We could do it by always being ahead of the curve. I think we're just going to always be behind the curve, unless we get a sort of government project.", "Dwarkesh Patel 01:16:57", "Going back to the naive perspective, we're very much coming at this from, “there's going to be a race and the CCP, we must win.” Listen, I understand bad people are in charge of the Chinese government, with the CCP and everything.", "I want to step back to a sort of galactic perspective. Humanity is developing AGI. Do we want to come at this from the perspective of \"we need to beat China\"? To our superintelligent Jupiter brain descendants, China will be some distant memory that they have, America too.", "Shouldn't it be more, as an initial approach, just going to them like, “listen, this is superintelligence. We come from a cooperative perspective.” Why immediately rush into it from a hawkish, competitive perspective?", "Leopold Aschenbrenner 01:17:47", "A lot of the stuff I talk about in the series is primarily descriptive. On the China stuff,  in some ideal world, it's just all merry-go-round and cooperation. Again, people wake up to AGI. The issue in particular is, can we make a deal? Can we make an international treaty? It really relates to the stability of international arms control agreements.", "We did very successful arms control on nuclear weapons in the 1980s. The reason it was successful is because the new equilibrium was stable. You go down from 60,000 nukes to 10,000 nukes or whatever. When you have 10,000 nukes, breakout basically doesn't matter that much.", "Suppose the other guy now tried to make 20,000 nukes. Who cares? It's still mutually assured destruction. Suppose a rogue state went from zero nukes to one nuke. Who cares? We still have way more nukes than you. It's still not ideal for destabilization.", "It'd be very different if the arms control agreement had been zero nukes. At zero nukes, you just need one rogue state to make one nuke and the whole thing is destabilized. Breakout is very easy. Your adversary state starts making nukes.", "When you're going to very low levels of arms or when you're in a very dynamic technological situation, arms control is really tough because breakout is easy. There are some other stories about this in the 1920s and 1930s. All the European states had disarmed.", "Germany did this kind of crash program to build the Luftwaffe . That was able to massively destabilize things because they were the first. They were able to pretty easily build a modern air force because the others didn't really have one. That really destabilized things.", "The issue with AGI and superintelligence is the explosiveness of it. If you have an intelligence explosion, you're able to go from AGI to superintelligence. That superintelligence is decisive because you’ll developed some crazy WMD or you’ll have some super hacking ability that lets you completely deactivate the enemy arsenal. Suppose you're trying to put in a break. We're both going to cooperate. We're going to go slower on the cusp of AGI.", "There is going to be such an enormous incentive to race ahead, to break out. We're just going to do the intelligence explosion. If we can get three months ahead, we win. That makes any sort of arms control agreement very unstable in a close situation.", "Dwarkesh Patel 01:20:15", "That's really interesting. This is very analogous to a debate I had with Rhodes on the podcast where he argued for nuclear disarmament. If some country tried to break out and started developing nuclear weapons, the six months you would get is enough to get international consensus and invade the country and prevent them from getting nukes. I thought that was not stable equilibrium.", "On this, maybe it's a bit easier because you have AGI and so you can monitor the other person's cluster or something. You can see the data centers from space. You can see the energy draw they're getting. As you were saying, there are a lot of ways to get information from an environment if you're really dedicated. Also, unlike nukes, the data centers are fixed. Obviously, you have nukes in submarines, planes, bunkers, mountains, etc. You can have them so many different places. A 100 GW data center, we can blow that shit up if we're concerned. We can just use a cruise missile or something. That's very vulnerable.", "Leopold Aschenbrenner 01:21:19", "That gets to the insane vulnerability and the volatility of this period, post-superintelligence. You have the intelligence explosion. You have these vastly superhuman things on your cluster. You haven't done the industrial explosion yet. You don't have your robots yet. You haven't covered the desert in robot factories yet.", "That is this crazy moment. Say the United States is ahead. The CCP is somewhat behind. There's actually an enormous incentive for first strike, if they can take out your data center. They know you're about to have this command, a decisive lead. They know if they can just take out this data center, then they can stop it. They might get desperate.", "We're going to get into a position that's going to be pretty hard to defend early on. We're basically going to be in a position where we're protecting data centers with the threat of nuclear retaliation. Maybe it sounds kind of crazy.", "Dwarkesh Patel 01:22:10", "Is this the inverse of the Eliezer …?", "Leopold Aschenbrenner 01:22:14", "Nuclear deterrence for data centers. This is Berlin in the late 1950s, early 1960s. Both Eisenhower and Kennedy multiple times made the threat of full-on nuclear war against the Soviets if they tried to encroach on West Berlin .", "It's sort of insane. It's kind of insane that that went well. Basically, that's going to be the only option for the data centers. It's a terrible option. This whole scheme is terrible. Being in  neck and neck race at this point is terrible.", "I have some uncertainty on how easy that decisive advantage will be. I'm pretty confident that if you have superintelligence, you have two years, you have the robots, you're able to get that 30-year lead. Then you're in this Gulf War 1 situation. You have your millions or billions of mosquito-sized drones that can just take it out. There's even a possibility you can get a decisive advantage earlier.", "There are these stories about colonization in the 1500s where a few hundred Spaniards were able to topple the Aztec Empire , a couple of other empires as well. Each of these had a few million people. It was not a godlike technological advantage. It was some technological advantage. It was some amount of disease and cunning strategic play.", "There's a possibility that even early on — when you haven't gone through the full industrial explosion yet — you have superintelligence, but you're able to manipulate the opposing generals, claiming you're allying with them. Then you have some crazy new bioweapons. Maybe there's even some way to pretty easily get a paradigm that deactivates enemy nukes. This stuff could get pretty wild.", "Here's what we should do. I really don't want this volatile period. A deal with China would be nice. It's going to be really tough if you're in this unstable equilibrium. We want to get in a position where it is clear that the United States, a coalition of democratic allies, will win. It is clear to the United States, it is clear to China. That will require having locked down the secrets, having built the 100 gigawatt cluster in the United States, having done the natural gas and doing what's necessary.", "When it is clear that the democratic coalition is well ahead, you go to China and offer them a deal. China will know we’re going to win. They're very scared of what's going to happen. We're going to know we're going to win, but we're also very scared of what's going to happen because we really want to avoid this kind of breakneck race right at the end. Things could really go awry.", "We offer them a deal. There's an incentive to come to the table. There's a more stable arrangement you can do. It's an Atoms for Peace arrangement. We're like, \"look, we're going to respect you. We're not going to use superintelligence against you. You can do what you want. You're going to get your slice of the galaxy.", "We're going to benefit-share with you. We're going to have some compute agreement where there's some ratio of compute that you're allowed to have, enforced with opposing AIs or whatever. We're just not going to do this volatile WMD arms race to the death.", "It's a new world order that's US-led, democracy-led, but respects China and lets them do what they want.", "Dwarkesh Patel 01:25:11", "There's so much there. On the galaxies thing, there’s a funny anecdote. I kind of want to tell it. We were at an event. I'm respecting Chatham House rules here. I'm not revealing anything about it. Leopold was talking to somebody influential. Afterwards, that person told the group, \"Leopold told me he's not going to spend any money on consumption until he's ready to buy galaxies.\"", "The guy goes, \"I honestly don't know if he meant galaxies like the brand of private plane Galaxy or physical galaxies.\" There was an actual debate. He went away to the restroom. There was an actual debate among influential people about whether he meant Galaxys. Others who knew you better were like, \"no, he means galaxies.\"", "Leopold Aschenbrenner 01:26:02", "I meant the galaxies. There are two ways to buy the galaxies. At some point, post-superintelligence, there’s some crazy...", "Dwarkesh Patel 01:26:13", "I'm laughing my ass off, not even saying anything. Wee were having this debate. Leopold comes back. Someone says, \"oh, Leopold, we're having this debate about whether you meant you want to buy the Galaxy, or you want to buy the other thing.\" Leopold assumes they must mean not the private plane Galaxy vs. the actual galaxy, but whether he wants to buy the property rights of the galaxy or actually just send out the probes right now.", "Leopold Aschenbrenner 01:26:43", "Exactly.", "Dwarkesh Patel 01:26:49", "Alright, back to China. There's a whole bunch of things I could ask about that plan and whether you’re going to get a credible promise to get some part of galaxies.", "Leopold Aschenbrenner 01:27:02", "You’ll have AIs to help you enforce stuff.", "Dwarkesh Patel 01:27:05", "Sure, we’ll leave that aside. That’s a different rabbit hole. The thing I want to ask is...", "Leopold Aschenbrenner 01:27:09", "The only way this is possible is if we lock it down. If we don't lock it down, we are in this fever struggle. Greatest peril mankind will have ever seen.", "Dwarkesh Patel 01:27:19", "During this period, they don’t really understand how this AI governance is going to work, whether they’re going to check, whether we’re going to adjugate the galaxies. The data centers can't be built underground. They have to be above ground. Taiwan is right off the coast of China. The US needs the chips from there.", "Why isn’t China just going to invade? Worst case scenario for them is the US wins the superintelligence, which we’re on track to do anyway. Wouldn't this instigate them to either invade Taiwan or blow up the data center in Arizona or something like that?", "Leopold Aschenbrenner 01:27:52", "You talked about the data center. You'd probably have to threaten nuclear retaliation to protect that. They might just blow it up. There are also ways they can do it without attribution.", "Dwarkesh Patel 01:28:00", "Stuxnet.", "Leopold Aschenbrenner 01:28:03", "Stuxnet, yeah. We’ll talk about later, but we need to be working on a Stuxnet for the Chinese project. I talk about AGI by 2027 or whatever. On Taiwan, do you know about the terrible twenties?", "Dwarkesh Patel 01:28:23", "No.", "Leopold Aschenbrenner 01:28:25", "In Taiwan watcher circles, people often talk about the late 2020s as the maximum period of risk for Taiwan . Military modernization cycles and extreme fiscal tightening on the US military budget over the last decade or two have meant that we’re in a trough by the late twenties in terms of overall naval capacity.", "That’s when China is saying they want to be ready. It’s already  kind of a parallel timeline there. Yeah, it looks appealing to invade Taiwan. Maybe not because of the remote cut off chips, which deactivates the machines. But imagine if during the Cold War, all of the world’s uranium deposits had been in Berlin. Berlin already almost caused a nuclear war multiple times. God help us all.", "Dwarkesh Patel 01:29:18", "Leslie Groves actually had a plan after the war that America would go around the world getting the rights to every single uranium deposit because they didn’t realize how much uranium there was in the world. They thought this was feasible. They didn’t realize, of course, that there were huge deposits in the Soviet Union itself.", "Leopold Aschenbrenner 01:29:37", "East Germany, too. A lot of East German workers got screwed and got cancer.", "Dwarkesh Patel 01:29:42", "The framing we’ve been assuming— I’m not sure I buy it yet—is that the United States has this leverage. This is our data center. China is the competitor right now. Obviously, that’s not the way things are progressing. Private companies control these AIs. They’re deploying them. It’s a market-based thing. Why will it be the case that the United States has this leverage or is doing this thing versus China doing this thing?", "Leopold Aschenbrenner 01:30:11", "There are descriptive and prescriptive claims, or normative and positive claims. The main thing I’m trying to say is, at these SF parties, people talk about AGI and always focus on private AI labs. I want to challenge that assumption.", "It seems likely to me, for reasons we’ve discussed, that the national security state will get involved. There are many ways this could look: nationalization, a public-private partnership, a defense contractor-like relationship, or a government project that absorbs all the people. There’s a spectrum, but people vastly underrate the chances of this looking like a government project.", "When we have literal superintelligence on our cluster — with a billion superintelligent scientists who can hack everything and Stuxnet the Chinese data centers, and build robo armies — you really think it’ll be a private company ? The government would be like, \"oh, my God, what is going on?\"", "(01:31:23) – State-led vs. private-led AI", "Dwarkesh Patel 01:31:23", "Suppose there’s no China. Suppose there are countries like Iran and North Korea that theoretically could achieve superintelligence, but they’re not on our heels. In that world, are you advocating for a national project or do you prefer the private path forward?", "Leopold Aschenbrenner 01:31:40", "Two responses to this. One is, you still have Russia and other countries.", "You need Russia-proof security. You can’t let Russia steal all your stuff. Their clusters may not be as big, but they can still make crazy bioweapons and mosquito-sized drone swarms.", "The security component is a large part of the project because there’s no other way to prevent this from instantly proliferating to everyone. You still have to deal with Russia, Iran, and North Korea. Saudi and Iran will try to get it to screw each other. Pakistan and India will try to get it to screw each other. There’s enormous destabilization.", "Still, I agree with you. If AGI had emerged in 2005, during unparalleled American hegemony, there would have been more scope for less government involvement. But as we discussed, that would have been a unique moment in history. In almost all other moments in history, there would have been a great power competitor.", "Dwarkesh Patel 01:32:49", "Let’s get into this debate. My position is this. If you look at the people who were involved in the Manhattan Project, many of them regretted their participation. We can infer from this that we should start with a cautious approach to the nationalized ASI project.", "Leopold Aschenbrenner 01:33:14", "Did they regret their participation because of the project or because of the technology itself? People will regret it, but it's about the nature of the technology, not the project.", "Dwarkesh Patel 01:33:24", "They probably had a sense that different decisions would have been made if it wasn’t a concerted effort that everyone agreed to participate in. If it wasn’t in the context of a race to beat Germany and Japan, you might not develop it. That’s the technology part.", "Leopold Aschenbrenner 01:33:40", "It’s still going to be a weapon because of the destructive potential, the military potential. It’s not because of the project. It’s because of the technology. That will unfold regardless.", "Imagine you go through the 20th century in a decade—", "Dwarkesh Patel 01:34:01", "Let’s run that example. Suppose the 20th century was run through in one decade.", "Do you think the technologies that happened during the 20th century shouldn’t have been privatized? Should it have been a more concerted, government-led project?", "Leopold Aschenbrenner 01:34:21", "There’s a history of dual-use technologies. AI will be dual-use in the same way. There will be lots of civilian uses of it. Like with nuclear energy, the government project developed the military angle of it and then worked with private companies. There was a flourishing of nuclear energy until the environmentalists stopped it.", "Planes, like Boeing. Actually, the Manhattan Project wasn’t the biggest defense R&D project during World War II. It was the B-29 bomber because they needed a bomber with a long enough range to reach Japan to destroy their cities. Boeing made the B-47 , and the B-52 plane the US military uses today. They used that technology later on to build the 707 .", "Dwarkesh Patel 01:35:08", "What does \"later on\" mean in this context? I get what it means after a war to privatize. But if the government has ASI...", "Let me back up and explain my concern. You have this institution in our society with a monopoly on violence . We’re going to give it access to ASI that’s not broadly deployed. This maybe sounds silly, but we’re going to go through higher levels of intelligence. Private companies will be required by regulation to increase their security. They’ll still be private companies.", "They’ll deploy this and release AGI. Now McDonald’s, JP Morgan, and some random startup will be more effective organizations because they have AGI workers. It’ll be like the Industrial Revolution , where the benefits were widely diffused.", "Backing up, what is it we’re trying to do? Why do we want to win against China? We want to win because we don’t want a top-down authoritarian system to win. If the way to beat that is for the most important technology for humanity to be controlled by a top-down government, what’s the point?", "Let’s run our cards with privatization. That’s how we get to the classic liberal, market-based system we want for the ASIs.", "Leopold Aschenbrenner 01:36:32", "All right, there’s a lot to talk about here. I’ll start by looking at what the private world would look like. This is part of why there's no alternative. Then let’s look at what the government project looks like, what checks and balances look like, and so on.", "Let’s start with the private world. A lot of people talk about open source . There’s a misconception that AGI development will be a beautiful, decentralized thing, a giddy community of coders collaborating. That’s not how it’s going to look. It’s a $100 billion or trillion-dollar cluster. Not many people will have it.", "Right now, open source is good because people use the stuff that’s published. They use the published algorithms, or, like Mistral , they leave DeepMind, take all the secrets, and replicate it.", "That’s not going to continue. People also say stuff like, “10^26 flops will be in my phone.” No, it won’t. Moore’s Law is really slow. AI chips are getting better but the $100 billion computer won’t cost $1,000 within your lifetime. So it’s going to be like two or three big players in the private world.", "You talk about the enormous power that superintelligence and the government will have. It’s pretty plausible that in the alternative world one AI company will have that power. Say OpenAI has a six-month lead. You’re talking about the most powerful weapon ever. You’re making a radical bet on a private company CEO as the benevolent dictator.", "Dwarkesh Patel 01:38:17", "Not necessarily. Like any other thing that’s privatized, we don’t count on them being benevolent. Think of someone who manufactures industrial fertilizer. This person with this factory, if they went back to an ancient civilization, they could blow up Rome. They could probably blow up Washington, DC.", "Leopold Aschenbrenner 01:38:31", "Indeed.", "Dwarkesh Patel 01:38:36", "In your series, you talk about Tyler Cowen’s phrase of “ muddling through .” Even with privatization, people underrate that there are a lot of private actors who control vital resources like the water supply.", "We can count on cooperation and market-based incentives to maintain a balance of power. Sure, things are proceeding really fast. We have a lot of historical evidence that this works best.", "Leopold Aschenbrenner 01:39:02", "What do we do with nukes, right? We don't keep nukes in check by beefing up the Second Amendment so each state has its own nuclear arsenal. Dario and Sam don’t have their own little arsenal.", "No, it’s institutions, constitutions, laws, and courts. I’m not sure this balance of power analogy holds. The government having the biggest guns was an enormous civilizational achievement, like Landfrieden in the Holy Roman Empire . If someone from the neighboring town committed a crime, you didn’t start a battle between the towns. You took it to a court of the Holy Roman Empire. They decided it. It’s a big achievement.", "The key differences with the analogy about the industrial fertilizer are speed and offense-defense balance issues. It’s like compressing the 20th century into a few years. That is incredibly scary because of the rapid advancement in destructive technology and military advancements.", "You'd go from bayonets and horses to tank armies and fighter jets in a couple of years. In just a few more years you’d have nukes, ICBMs, and stealth. That speed creates an incredibly volatile and dangerous period. We have to make it through that, which will be incredibly challenging.", "That’s where a government project is necessary. If we can make it through that, the situation stabilizes. We don’t face this imminent national security threat. Yes, there were WMDs that developed, but we’ve managed to create a stable offense-defense balance.", "Bioweapons are a huge issue initially. An attacker can create 1000 different synthetic viruses and spread them. It’s hard to defend against each. Maybe at some point, you figure out a universal defense against every possible virus, then you’re in a stable situation again on the offense-defense balance. Or like with planes,  you restrict certain capabilities that the private sector isn’t allowed to have, then you can let the civilian uses run free.", "Dwarkesh Patel 01:41:20", "I’m skeptical of this.", "Leopold Aschenbrenner 01:41:23", "This is the other important thing. I talked about one company having all this power. It is unprecedented because the industrial fertilizer guy cannot overthrow the US government. It’s quite plausible that the AI company with superintelligence can.", "Dwarkesh Patel 01:41:41", "There would be multiple AI companies, right? I buy that one of them could be ahead.", "Leopold Aschenbrenner 01:41:44", "It’s not obvious that it’ll be multiple. If there’s a six-month lead, maybe there are two or three.", "Dwarkesh Patel 01:41:49", "I agree.", "Leopold Aschenbrenner 01:41:50", "If there are two or three, then it’s a crazy race between these companies. Demis and Sam would be like, \"I don’t want to let the other one win.\" They’re both developing their nuclear arsenals and robots.", "Come on. The government is not going to let these people do that. Is Dario going to be the one developing super hacking Stuxnet and deploying it against the Chinese data center?", "The other issue is that if it’s two or three, it won’t just be two or three. It’ll be China, Russia, and North Korea too. In the private lab world, there’s no way they’ll have good enough security.", "Dwarkesh Patel 01:42:23", "We’re also assuming that if you nationalize it, especially in a world where this stuff is priced in by the CCP, you’ve got it nailed down. I’m not sure why we would expect that.", "Leopold Aschenbrenner 01:42:36", "The government’s the only one who does this stuff.", "Dwarkesh Patel 01:42:40", "If we don’t trust Sam or Dario to be benevolent dictators…", "Leopold Aschenbrenner 01:42:44", "Just corporate governance in general.", "Dwarkesh Patel 01:42:47", "Because you can cause a coup, the same capabilities are going to be true of the government project, right? The modal president in 2025, Donald Trump, will be the person versus you not trusting Sam or Dario to have these capabilities. I agree that if Sam and Dario have a one-year lead on ASI, in that world I’m concerned about privatization.", "In that exact same world, I’m very concerned about Donald Trump having the capability. Potentially, if the takeoff is slower than anticipated, I prefer the private companies in that world. In no part of this matrix is it obviously true that the government-led project is better.", "Leopold Aschenbrenner 01:43:31", "Let’s talk about the government project and checks and balances.", "In some sense, my argument is a Burkean one. American checks and balances have held for over 200 years through crazy technological revolutions. The US military could kill every civilian in the United States.", "Dwarkesh Patel 01:43:46", "You’re going to make that argument. The private-public balance of power has held for hundreds of years.", "Leopold Aschenbrenner 01:43:51", "But, why has it held? It’s because the government has had the biggest guns. Never before has a single CEO or a random nonprofit board had the ability to launch nukes.", "What is the track record of government checks and balances versus the track record of the private company checks and balances? Well the AI lab's first stress test went really badly.", "Even worse in the private company world, it’s two private companies and the CCP. They’ll just instantly have all the tech. They probably won’t have good enough internal control. It’s not just the random CEO, but rogue employees who can use these superintelligences to do whatever they want.", "Dwarkesh Patel 01:44:30", "This won’t be true of the government? Rogue employees won’t exist on the project?", "Leopold Aschenbrenner 01:44:33", "The government has actual decades of experience and actually cares about this stuff. They deal with nukes and really powerful technology. This is the stuff that the national security state cares about.", "Let's talk about government checks and balances a little bit. What are checks and balances in the government world? First, it’s important to have some international coalition. I talked about these two tiers before. The inner tiers are modeled on the Quebec Agreement , Churchill and Roosevelt agreeing to pool efforts on nukes but not using them against each other, or anyone else without consent.", "Bring in the UK with DeepMind, Southeast Asian states with the chip supply chain, and more NATO allies with talent and industrial resources. You have those checks and balances with more international countries at the table.", "Separately, you have the second tier of coalitions, the Atoms for Peace thing. You go to countries including the UAE and make a deal similar to the NPT . They’re not allowed to do crazy military stuff, but we’ll share civilian applications. We’ll help them and share the benefits, creating a new post-superintelligence world order.", "US checks and balances: Congress will have to be involved to appropriate trillions of dollars. Ideally, Congress needs to confirm whoever’s running this. You have Congress, different factions of the government, and the courts. I expect the First Amendment to remain really important.", "This sounds crazy to people, but these institutions have withstood the test of time in a powerful way. This is why alignment is important. You program AIs to follow the constitution. The military works because generals are not allowed to follow unlawful or unconstitutional orders. You have the same thing for the AIs.", "Dwarkesh Patel 01:46:33", "So what’s wrong with this argument. Maybe you have a point in a world with an extremely fast takeoff, one year from AGI to ASI.", "Leopold Aschenbrenner 01:46:41", "Then you have the years after ASI where you have this extraordinary explosion and technological progress.", "Dwarkesh Patel 01:46:48", "Maybe you have a point. We don’t know. You have arguments for why that’s a more likely world, but maybe that’s not the world we live in.", "In the other world, I’m very much on the side of ensuring these things are privately held. When you nationalize, that’s a one-way function. You can’t go back.", "Why not wait until we have more evidence on which world we live in? Rushing nationalization might be a bad idea while we’re uncertain. I’ll let you respond to that first.", "Leopold Aschenbrenner 01:47:18", "I don’t expect us to nationalize tomorrow. If anything I expect it to be like COVID,where it’s kind of too late. Ideally, you nationalize early enough to lock stuff down. It’ll probably be chaotic. You’ll be trying to do a crash program to lock stuff down. It’ll be kind of late. It’ll be clear what’s happening. We’re not going to nationalize when it’s not clear what’s happening.", "Dwarkesh Patel 01:47:37", "The argument that these institutions have held up historically so well is flawed. They’ve actually almost broken a bunch of times.", "Leopold Aschenbrenner 01:47:44", "They’ve held up. They didn’t break the first time they were tested.", "Dwarkesh Patel 01:47:46", "This is similar to the argument that some people make about nuclear war: we’ve had nukes for 80 years and have been fine, so the risk must be low. The answer to that is no. The risk is really high. We’ve avoided it because people have made a lot of effort to prevent it. Giving the government ASI without knowing the implications isn’t making that effort.", "Look at the base rate. America is very exceptional, not just in terms of avoiding dictatorship. Every other country in history has had a complete drawdown of wealth because of war, revolution, etc. America is very unique in not having had that.", "We have to think of the historical base rate. We haven’t thought about great power competition in the last 80 years, but it’s significant. Dictatorship is the default state of mankind. Relying on institutions in an ASI world is fundamentally different. Right now, if the government tried to overthrow, it’s much harder without ASI. There are people with AK-47s and AR-15s, making it harder.", "Leopold Aschenbrenner 01:48:58", "The government could crush the AR-15s.", "Dwarkesh Patel 01:49:00", "No, it would actually be pretty hard. It’s the reason why Vietnam and Afghanistan were so hard.", "Leopold Aschenbrenner 01:49:03", "They could just nuke the whole country.", "Dwarkesh Patel 01:49:05", "I agree.", "Leopold Aschenbrenner 01:49:07", "They could. It’s similar to the ASI.", "Dwarkesh Patel 01:49:10", "It’s just easier if you have what you were talking.", "Leopold Aschenbrenner 01:49:14", "No, there are institutions, constitutions, legal restraints, courts, and checks and balances. The crazy bet is the bet on a private company CEO.", "Dwarkesh Patel 01:49:21", "Isn’t the same thing true of nukes where we have institutional agreements about non-proliferation? We’re still very concerned about those being broken and someone getting nukes. We stay up at night worrying about that situation.", "Leopold Aschenbrenner 01:49:31", "It’s a precarious situation. ASI will a precarious situation as well. Given how precarious nukes are, we’ve done pretty well.", "Dwarkesh Patel 01:49:40", "What does privatization in this world even mean? What happens after?", "Leopold Aschenbrenner 01:49:44", "We’re talking about whether the government project is good or not. I have very mixed feelings about this as well.", "My primary argument is that if you’re at the point where this thing has vastly superhuman capabilities — it can develop crazy bioweapons targeted to kill everyone but the Han Chinese , it can wipe out entire countries, it can build robo armies and drone swarms with mosquito-sized drones — the US national security state will be intimately involved.", "The government project will look like a joint venture between cloud providers, labs, and the government. There is no world in which the government isn’t involved in this crazy period. At the very least, intelligence agencies need to run security for these labs. They’re already controlling access to everything.", "If we’re in a volatile international situation, initial applications will focus on stabilizing it. It’ll suck. It’s not what I want to use ASI for. Somehow we need to prevent proliferation of new WMDs, and maintain mutually assured destruction with North Korea, Russia, and China.", "There’s a broader spectrum than you’re acknowledging. In a world with private labs, there will be heavy government involvement. What we’re debating is the form of government involvement, but it will look more like the national security state than a startup, which is what it is right now.", "Dwarkesh Patel 01:51:30", "Something like that makes sense. I’d be very worried if it’s like the Manhattan Project, where it’s directly part of the US military. If it’s more like needing to talk to Jake Sullivan before running the next training line...", "Leopold Aschenbrenner 01:51:44", "Is Lockheed Martin’s Skunk Works part of the US military? They call the shots.", "Dwarkesh Patel 01:51:48", "I don’t think that’s great and I think that’s bad if it happens with ASI. What’s the scenario?", "Leopold Aschenbrenner 01:51:55", "What’s the alternative? What’s the alternative?", "Dwarkesh Patel 01:51:57", "It’s closer to my end of the spectrum. You talk to Jake Sullivan before you can launch the next training cluster, but many companies are still going for it, and the government is involved in security.", "Leopold Aschenbrenner 01:52:13", "Is Dario launching the Stuxnet attack?", "Dwarkesh Patel 01:52:16", "What do you mean by launching?", "Leopold Aschenbrenner 01:52:20", "Dario is deactivating the Chinese data centers?", "Dwarkesh Patel 01:52:22", "This is similar to the story you could tell about Big Tech right now. Satya , if he wanted to, could get his engineers to find zero days in Windows and infiltrate the president’s computer. Right now, Satya could do that.", "Leopold Aschenbrenner 01:52:41", "They’d be shut down.", "Dwarkesh Patel 01:52:42", "What do you mean?", "Leopold Aschenbrenner 01:52:43", "The government wouldn’t let them do that.", "Dwarkesh Patel 01:52:44", "There’s a story where they could pull off a coup.", "Leopold Aschenbrenner 01:52:49", "They could not pull off a coup.", "Dwarkesh Patel 01:52:52", "Fine, I agree. What’s wrong with a scenario where multiple companies are going for it? The AI is still broadly deployed. Alignment works. The system-level prompt is that it can’t help people make bioweapons or something. It’s still broadly deployed.", "Leopold Aschenbrenner 01:53:14", "I expect AIs to be broadly deployed.", "Dwarkesh Patel 01:53:16", "Even if it’s a government project?", "Leopold Aschenbrenner 01:53:18", "Yeah, I think the Metas of the world open-sourcing their AIs that are two years behind is super valuable. There will be some question of whether the offense-defense balance is fine, and open-sourcing two-year-old AIs is fine. Or there are restrictions on the most extreme dual-use capabilities, like not letting private companies sell crazy weapons.", "That’s great and will help with diffusion. After the government project, there will be an initial tense period. Hopefully, it stabilizes. Then, like Boeing, they’ll go out and do all the flourishing civilian applications like nuclear energy. The civilian applications will have their day.", "Dwarkesh Patel 01:53:57", "How does that proceed? Because in the other world, there are existing stocks of capital that are worth a lot.", "Leopold Aschenbrenner 01:54:02", "There will still be Google clusters.", "Dwarkesh Patel 01:54:04", "So Google, because they got the contract from the government, will control the ASI? But why are they trading with anybody else?", "Leopold Aschenbrenner 01:54:11", "It’ll be the same companies that would be doing it anyway. In this world, they’re just contracting with the government or are DPA ’d so all their compute goes to the government. In some sense it’s very natural.", "Dwarkesh Patel 01:54:22", "After you get the ASI and we’re building the robot armies and fusion reactors…", "Leopold Aschenbrenner 01:54:33", "Only the government will get to build robot armies.", "Dwarkesh Patel 01:54:35", "Now I’m worried. Or like the fusion reactors and stuff.", "Leopold Aschenbrenner 01:54:38", "That’s what we do with nukes today.", "Dwarkesh Patel 01:54:40", "If you already have the robot armies and everything, the existing society doesn’t have some leverage where it makes sense for the government to—", "Leopold Aschenbrenner 01:54:45", "They don’t have that today.", "Dwarkesh Patel 01:54:46", "They do, in the sense that they have a lot of capital that the government wants. There are other things as well. Why was Boeing privatized after WWII?", "Leopold Aschenbrenner 01:54:54", "The government has the biggest guns. The way we regulate is through institutions, constitutions, legal restraints, courts, etc.", "Dwarkesh Patel 01:54:57", "Tell me what privatization looks like in the ASI world afterwards.", "Leopold Aschenbrenner 01:55:00", "Afterwards, it’s like the Boeing example.", "Dwarkesh Patel 01:55:03", "Who gets it?", "Leopold Aschenbrenner 01:55:04", "Google and Microsoft, the AI labs—", "Dwarkesh Patel 01:55:05", "Who are they selling it to? They already have robot factories. Why are they selling it to us? They don’t need anything from us. This is chump change in the ASI world because we didn’t get the ASI broadly deployed throughout this takeoff.", "We don’t have the robots, the fusion reactors, or the advanced decades of science you’re talking about. What are they trading with us for?", "Leopold Aschenbrenner 01:55:26", "Trading with whom for?", "Dwarkesh Patel 01:55:29", "For everybody who was not part of the project. They’ve got technology that’s decades ahead.", "Leopold Aschenbrenner 01:55:33", "That’s a whole other issue of how economic distribution works. I don’t know. That’ll be rough. I’m just saying, I don’t see the alternative. The alternative is overturning a 500-year civilizational achievement of Landfrieden. You basically instantly leak the stuff to the CCP.", "Either you barely scrape ahead, but you’re in a fever struggle, proliferating crazy WMDs. It’s enormously dangerous for alignment because you’re in a crazy race at the end, and you don’t have the ability to take six months to get alignment right. The alternative is not bundling efforts to win the race against authoritarian powers.", "I don’t like it. I wish we used the ASI to cure diseases and do all the good in the world. But it’s my prediction that in the end game, what’s at stake is not just cool products but whether liberal democracy survives, whether the CCP survives.", "What will the world order for the next century be? When that is at stake, forces will be activated that are way beyond what we’re talking about now. In this crazy race at the end, the national security implications will be the most important.", "To go back to World War II, nuclear energy had its day, but in the initial period when the technology was first discovered, you had to stabilize the situation. You had to get nukes and do it right. Then the civilian applications had their day.", "Dwarkesh Patel 01:57:12", "I agree that nuclear energy is a thing that happened later on and is dual-use. But it’s something that happened literally a decade after nuclear weapons were developed.", "Leopold Aschenbrenner 01:57:21", "Right because everything took a long time.", "Dwarkesh Patel 01:57:23", "Whereas with AI, all the applications are immediately unlocked. This is closer to the analogy people make about AGI. Assume your society had 100 million more John von Neumanns .", "If that literally happened, if tomorrow you had 100 million more of them, I don’t think the approach would be that we have to worry about some of them converting to ISIS or “what if a bunch are born in China?” I don’t think we’d be talking about nationalizing all the John von Neumanns.", "I think it’d generally be a good thing. I’d be concerned about one power getting all the John von Neumanns.", "Leopold Aschenbrenner 01:57:57", "The issue is bottling up, in a short period of time, this enormous unfolding of technological progress, an industrial explosion. We do worry about 100 million John von Neumanns.", "Why do we worry about the rise of China? It’s one billion people who can do a lot of industry and technology. This is like the rise of China multiplied by 100. It’s not just one billion people, but a billion super-intelligent beings. Plus, it comes all in a very short period.", "Dwarkesh Patel 01:58:28", "Practically, if the goal is to beat China, part of that is protecting ourselves.", "Leopold Aschenbrenner 01:58:34", "Beating China is just one of the goals. We also want to manage this incredibly crazy, scary period.", "Dwarkesh Patel 01:58:40", "Right. Part of that is making sure we’re not leaking algorithmic secrets to them.", "Leopold Aschenbrenner 01:58:45", "Building the trillion-dollar cluster.", "Dwarkesh Patel 01:58:47", "That’s right. But isn’t your point that Microsoft can issue corporate bonds?", "Leopold Aschenbrenner 01:58:52", "Microsoft can do hundreds of billions of dollars. The trillion-dollar cluster is closer to a national effort.", "Dwarkesh Patel 01:58:57", "I thought your earlier point was that American capital markets are deep and good.", "Leopold Aschenbrenner 01:59:02", "They’re pretty good. The trillion-dollar cluster is possible privately, but it’s going to be tough.", "Dwarkesh Patel 01:59:06", "At this point, we have AGI that’s rapidly accelerating productivity.", "Leopold Aschenbrenner 01:59:10", "The trillion-dollar cluster will be planned before the AGI. You get the AGI on the 10 GW cluster. Maybe have one more year of final unhobbling to fully unlock it. Then you have the intelligence explosion.", "Meanwhile, the trillion-dollar cluster is almost finished. You run your superintelligence on it. You also have hundreds of millions of GPUs on inference clusters everywhere.", "Dwarkesh Patel 01:59:35", "In this world, I think private companies have their capital and can raise capital.", "Leopold Aschenbrenner 01:59:40", "You will need the government to do it fast.", "Dwarkesh Patel 01:59:43", "We know private companies are on track to do this. In China, if they’re unhindered by climate change or whatever—", "Leopold Aschenbrenner 01:59:53", "That’s part of what I’m saying.", "Dwarkesh Patel 01:59:56", "If it really matters that we beat China…", "There will be all sorts of practical difficulties. Will the AI researchers actually join the AI effort? If they do, there will be at least three different teams currently doing pre-training in different companies.", "Who decides at some  point that you’re going to have to YOLO the hyperparameters. Who decides that? Merging extremely complicated research and development processes across very different organizations is somehow supposed to speed up America against the Chinese?", "Leopold Aschenbrenner 02:00:34", "Brain and DeepMind merged. It was a little messy, but it was fine.", "Dwarkesh Patel 02:00:36", "It was pretty messy. It was also the same company and much earlier on in the process.", "Leopold Aschenbrenner 02:00:40", "Pretty similar, right? Different codebases, lots of different infrastructure and teams. It wasn’t the smoothest process, but DeepMind is doing very well.", "Dwarkesh Patel 02:00:49", "You give the example of COVID. In the COVID example, we woke up to it, maybe it was late, but then we deployed all this money. The COVID response from the government was a clusterfuck. I agree that Warp Speed was enabled by the government, but it was literally just giving permission that you can actually—", "Leopold Aschenbrenner 02:01:08", "It was also making big advance market commitments.", "Dwarkesh Patel 02:01:10", "I agree. But fundamentally, it was a private sector-led effort. That was the only part of the COVID response that worked.", "Leopold Aschenbrenner 02:01:17", "The project will look closer to Operation Warp Speed. You’ll have all the companies involved in the government project. I’m not convinced that merging is that difficult. You run pre-training on GPUs with one codebase, then do the secondary step on the other codebase with TPUs. It’s fine.", "On whether people will sign up for it, they wouldn’t sign up for it today. It would seem crazy to people.", "But this is part of the secrecy thing. People gather at parties and… you know this. I don’t think anyone has really gotten up in front of these people and said, \"look, what you’re building is the most important thing for the national security of the United States, for the future of the free world and whether we have another century ahead of it. This is really important for your country and democracy. Don’t talk about the secrets.\" It’s not just about DeepMind or whatever. It’s about these really important things.", "We’re talking about the Manhattan Project. It was really contentious initially, but at some point it became clear that this was coming. There was an exigency on the military national security front. A lot of people will come around.", "On whether it’ll be competent, a lot of this stuff is more predictive. This is reasonably likely, and not enough people are thinking about it. A lot of people think about AI lab politics but nobody has a plan for the grand project.", "Dwarkesh Patel 02:02:47", "Should they be more pessimistic about it? We don’t have a plan for it, and we need to act soon because AGI is upon us. The only competent technical institutions capable of making AI right now are private.", "Leopold Aschenbrenner 02:02:59", "Companies will play that leading role. It’ll be a partnership.", "We talked about World War II and American unpreparedness. The beginning of World War II was complete shambles. America has a very deep bench of incredibly competent managerial talent. There are a lot of dedicated people. An Operation Warp Speed-like public-private partnership is what I imagine it would look like.", "Dwarkesh Patel 02:03:26", "Recruiting talent is an interesting question. For the Manhattan Project, you initially had to convince people to beat the Nazis and get on board. Many of them regretted how much they accelerated the bomb. This is generally a thing with war.", "Leopold Aschenbrenner 02:03:48", "I think they were wrong to regret it.", "Dwarkesh Patel 02:03:51", "Why?", "Leopold Aschenbrenner 02:03:54", "What’s the reason for regretting it?", "Dwarkesh Patel 02:03:56", "They way nuclear weapons were developed after the war was explosive because there was a precedent that you can use nuclear weapons. Then because of the race that was set up, you immediately go to the H-bomb .", "Leopold Aschenbrenner 02:04:11", "This is related to my view on AI and maybe where we disagree. That was inevitable. There was a world war, then a cold war. Of course the military angle would be pursued with ferocious intensity. There’s no world in which we all decide not to build nukes. Also, nukes went really well. That could have gone terribly.", "It’s not physically possible to have something like pocket nukes for everybody, where WMDs proliferated and were fully democratized. The US led on nukes and built a new world order, with a few great powers and a non-proliferation regime for nukes. It was a partnership and a deal: no military application of nuclear technology, but help with civilian technology. They enforced safety norms on the rest of the world. That worked and could have gone much worse.", "Not to mention, I say this in the piece but the A-bomb in Hiroshima was just like firebombing. What changed the game was the H-bombs and ICBM s. That’s when it went to a whole new level.", "Dwarkesh Patel 02:05:27", "When you say we will tell people that we need to pursue this project for the free world to survive, it sounds similar to World War II. World War II is a sad story, not only because it happened, but the victory was sad in the sense that Britain went in to protect Poland.", "At the end, the USSR, which as your family knows is incredibly brutal, ends up occupying half of Europe. The idea of protecting the free world by rushing AI might end up with an American AI Leviathan. We might look back on this with the same twisted irony as Britain going into World War II to protect Poland.", "Leopold Aschenbrenner 02:06:22", "There will be a lot of unfortunate things that happen. I'm just hoping we make it through. The pitch won’t only be about the race. The race will be a backdrop. It's important that democracy shapes this technology. We can't leak this stuff to North Korea.", "Safety, including alignment and the creation of new WMDs, is also just  important. I'm not convinced there's another path. Say we have a breakneck race internationally, instantly leaking all this stuff, including the weights, with a commercial race with Demis, Dario, and Sam all wanting to be first. It's incredibly rough for safety.", "Safety regulation, as people talk about it, is like NIST involving years of bureaucracy and expert consensus.", "Dwarkesh Patel 02:07:13", "Isn’t that what’s going to happen with the project as well?", "Leopold Aschenbrenner 02:07:15", "Alignment during the intelligence explosion is not a years-long bureaucratic process. It's more like a war, with a fog of war. Is it safe to do the next OOM? We're three OOMs into the intelligence explosion, and we don't fully understand what's happening.", "Our generalization-scaling curves don’t look great, some automated AI researchers say it's fine, but we don't quite trust them. AIs might start doing problematic things, but we hammer it out, and then it's fine. Should we go ahead? Should we take another six months?", "Meanwhile, China might steal the weights or deploy a robot army. It's a crazy situation, relying more on a sane chain of command than a deliberative regulatory scheme. Although I wish we could do that more deliberative regulatory scheme.", "This is the thing with private companies too. Private companies claim they'll do safety, but it’s rough in a commercial race, especially for startups. Startups are startups. They aren't fit to handle WMDs.", "Dwarkesh Patel 02:08:35", "I’m coming closer to your position but…", "Let’s talk about the responsible scaling policies . I was told by people advancing this idea — because they know I’m a libertarian-type person and the way they approached me was like this — that the way to think about it was that it's fundamentally a way to protect market-based development of AGI. If you didn’t have this, there would be misuse and lead to nationalization. The RSPs are a way to ensure a market-based order with safeguards to prevent things from going off the rails.", "It seems like your story is self-consistent. I know this was never your position, so I’m not looping you into this. But it's almost like a motte-and-bailey argument.", "Leopold Aschenbrenner 02:09:36", "Here’s what I think about RSP-type stuff and current safety regulations. They’re important for helping us figure out what world we’re in and flashing the warning signs when we’re close.", "The story we’ve been telling is what I think the modal version of this decade is. There are many ways it could be wrong. We should talk about the data wall more. There’s a world where this stuff stagnates or we don’t have AGI.", "The RSPs preserve optionality. Let’s see how things go, but we need to be prepared if the red lights start flashing. If we get the automated AI researcher, then it’s crunch time.", "Dwarkesh Patel 02:10:12", "I can be on the same page with that and have a strong prior on pursuing a market-based way. Unless you’re right about what the intelligence explosion looks like, don’t move yet. But in that world where it really does seem like Alec Radford can be automated, and that's the only bottleneck to getting to ASI…", "Okay I think we can leave it at that. I’m somewhat on the way there.", "Leopold Aschenbrenner 02:10:42", "I hope it goes well. It's going to be very stressful. Right now is the chill time. Enjoy your vacation while it lasts.", "Dwarkesh Patel 02:10:53", "It’s funny to look out over this. This is San Francisco.", "Leopold Aschenbrenner 02:11:00", "Yeah OpenAI is right there. Anthropic is there. You guys have this enormous power over how it’s going to go for the next couple of years, and that power is depreciating.", "Dwarkesh Patel 02:11:09", "Who's \"you guys\"?", "Leopold Aschenbrenner 02:11:10", "People at labs.", "It's a crazy world you're talking about. You mention that maybe they'll nationalize too soon. Almost nobody sees what's happening. This is what I find stressful about all this.", "Maybe I'm wrong, but if I'm right, we’re in this crazy situation where only a few hundred guys are paying attention. It’s daunting.", "Dwarkesh Patel 02:11:36", "I went to Washington a few months ago. I was talking to people doing AI policy stuff there. I asked how likely they think nationalization is. They said it's really hard to nationalize stuff. It’s been a long time since it's been done. There are specific procedural constraints on what kinds of things can be nationalized.", "Then I asked about ASI. Because of constraints like the Defense Production Act , that won’t be nationalized? The Supreme Court would overturn that? They were like, “yeah I guess that would be nationalized.”", "Leopold Aschenbrenner 02:12:13", "That’s the short summary of my post or my view on the project.", "(02:12:23) – Becoming Valedictorian of Columbia at 19", "Dwarkesh Patel 02:12:23", "Before we go further on the AI stuff, let’s back up.", "We began the conversation, and I think people will be confused. You graduated valedictorian of Columbia when you were 19. So, you got to college when you were 15.", "You were in Germany then, and you got to college at 15.", "Leopold Aschenbrenner 02:12:37", "Yeah.", "Dwarkesh Patel 02:12:39", "How the fuck did that happen?", "Leopold Aschenbrenner 02:12:41", "I really wanted out of Germany. I went to a German public school. It was not a good environment for me.", "Dwarkesh Patel 02:12:52", "In what sense? No peers?", "Leopold Aschenbrenner 02:12:55", "There’s also a particular German cultural sense. In the US, there are amazing high schools and an appreciation of excellence. In Germany, there’s a tall poppy syndrome . If you're the curious kid in class wanting to learn more, instead of the teacher encouraging you, they resent you and try to crush you.", "There are also no elite universities for undergraduates, which is kind of crazy. The meritocracy was crushed in Germany at some point. There's also an incredible sense of complacency across the board. It always puzzles me but even going to a US college was seen as a radical act. It doesn’t seem radical to anyone here because it’s the obvious thing to do. You can go to Columbia and get a better education.", "It’s wild to me because this is where stuff is happening and you can get a better education but people in Germany don’t do it. I skipped a few grades, and it seemed normal to me at the time to go to college at 15 and come to America. One of my sisters is turning 15 now, and when I look at her, I understand why my mother was worried.", "Dwarkesh Patel 02:14:27", "So you were presumably the only 15-year-old. Was it normal for you to be a 15-year-old in college? What were the initial years like?", "Leopold Aschenbrenner 02:14:34", "Again, it felt so normal at the time. Now I understand why my mother was worried. I worked on my parents for a while and eventually persuaded them. It felt very normal at the time.", "It was great. I really liked college. It came at the right time for me. I really appreciated the liberal arts education, the core curriculum , and reading core works of political philosophy and literature.", "Dwarkesh Patel 02:15:02", "You did what? Econ?", "Leopold Aschenbrenner 02:15:05", "My majors were math, statistics, and economics, but Columbia has a pretty heavy core curriculum and liberal arts education. Honestly, I shouldn’t have done all the majors. The best courses were those with amazing professors in some history classes. That’s what I would recommend people spend their time on in college.", "Dwarkesh Patel 02:15:26", "Was there one professor or class that stood out?", "Leopold Aschenbrenner 02:15:29", "A few. Richard Betts ' class on war, peace, and strategy . Adam Tooze was fantastic and has written very riveting books . You should have him on the podcast, by the way.", "Dwarkesh Patel 02:15:43", "I’ve tried. I think you tried for me.", "Leopold Aschenbrenner 02:15:45", "You’ve got to get him on the pod. It’d be so good.", "Dwarkesh Patel 02:15:49", "Recently, we were talking to Tyler Cowen. He said when he first encountered you, it was through your paper on economic growth and existential risk . He said, “when I read it, I couldn’t believe that a 17-year-old had written it. If this were an MIT dissertation, I’d be impressed.” You’re a junior and you’re writing novel economic papers? Why did you get interested in this, and what was the process to get into that?", "Leopold Aschenbrenner 02:16:30", "I just get interested in things. It feels natural to me. I get excited about something, read about it, and immerse myself. I can learn and understand information quickly.", "Regarding the paper, moments of peak productivity matter more than average productivity, at least for the way at work.  Some jobs, like CEO, require consistent productivity. I have periods of a couple months where there’s effervescence and other times, I'm computing stuff in the background. Writing the series was similar. You write it and it’s really flowing. That’s what ends up mattering.", "Dwarkesh Patel 02:17:14", "Even for CEOs, peak productivity might be very important. One of our friends in a group chat, following Chatham House rules, pointed out how many famous CEOs and founders have been bipolar or manic. The call option on your productivity is the most important thing, and you get it by increasing volatility through being bipolar. That’s interesting.", "You got interested in economics first. Why economics? You could read about anything. You kind of got a slow start on ML. You wasted all these years on econ. There's an alternative world where you’re on the superalignment team at 17 instead of 21 or whatever it was.", "Leopold Aschenbrenner 02:18:04", "In some sense, I’m still doing economics. I’m looking at straight lines on a graph, log-log plots , figuring out trends, and thinking about feedback loops, equilibrium, and arms control dynamics. It’s a way of thinking that I find very useful.", "Dario and Ilya seeing scaling early is, in some sense, a very economic way of thinking. It’s also related to empirical physics. Many of them are physicists. Economists often can’t code well enough, which is their issue, but it's that way of thinking.", "I also thought a lot of core ideas in economics were beautiful. In some sense, I feel a little duped because econ academia is kind of decadent now. The paper I wrote is long, 100 pages of math, but the core takeaway can be explained in 30 seconds and it makes sense and you don’t really need the math. The best pieces of economics are like that.", "You do the work to uncover insights that weren’t obvious to you before. Once you’ve done the work, some sort of mechanism falls out of it that makes a lot of  crisp, intuitive sense and explains facts about the world. You can then use it in arguments. Econ 101 is great like this. A lot of econ in the fifties and sixties was like this. Chad Jones ' papers are often like this. I really like his papers for this.", "Why didn’t I ultimately pursue econ academia? There were several reasons, one of them being Tyler Cowen. He took me aside and said, \"I think you’re one of the top young economists I’ve ever met, but you should probably not go to grad school.\"", "Dwarkesh Patel 02:19:50", "Oh, interesting. Really? I didn’t realize that.", "Leopold Aschenbrenner 02:19:53", "Yeah, it was good because he kind of introduced me to the Twitter weirdos. I think the takeaway from that was that I have to move out west one more time.", "Dwarkesh Patel 02:20:03", "Wait Tyler introduced you to the Twitter weirdos?", "Leopold Aschenbrenner 02:20:05", "A little bit. Or just kind of the broader culture?", "Dwarkesh Patel 02:20:08", "A 60-year-old economist introduced you to Twitter?", "Leopold Aschenbrenner 02:20:12", "Well, I had been in Germany, completely on the periphery, and then moved to a US elite institution. I got a sense of meritocratic elite US society. Basically, there was a directory. To find the true American spirit I had to come out here.", "The other reason I didn’t become an economist, or at least pursue econ academia, is that econ academia has become a bit decadent. Maybe it's just that ideas are getting harder to find, or that all the beautiful, simple things have been discovered.", "But what are econ papers these days? They’re often 200 pages of empirical analyses on things like how buying 100,000 more textbooks in Wisconsin affects educational outcomes. I'm happy that work happens. It’s important work but it doesn't uncover fundamental insights and mechanisms in society.", "Even the theory work often involves really complicated models and the model spits out something like, “Fed does X, then Y happens” and you have no idea why that happened. There’s a gazillion parameters and they’re all calibrated in some way and it’s a computer simulation and you have no idea about the validity. The most important insights are the ones where you have to do a lot of work to get them but then there’s this crisp intuition.", "Dwarkesh Patel 02:21:26", "The P versus NP of…", "Leopold Aschenbrenner 02:21:27", "Sure, yeah.", "Dwarkesh Patel 02:21:30", "That’s really interesting. Going back to your time in college, you say that peak productivity explains this paper and things. But being valedictorian, getting straight A’s, is very much an average productivity phenomenon.", "Leopold Aschenbrenner 02:21:48", "There’s one award for the highest GPA, which I won, but the valedictorian is selected by the faculty from among those with the highest GPA.", "Dwarkesh Patel 02:21:55", "So it's not just peak productivity.", "Leopold Aschenbrenner 02:21:58", "I generally just love this stuff. I was curious, found it really interesting, and enjoyed learning about it. It made sense to me, and it felt very natural.", "One of my faults is that I’m not that good at eating glass. Some people are very good at it. The moments of peak productivity come when I’m excited and engaged and love it. If you take the right courses, that’s what you get in college.", "Dwarkesh Patel 02:22:30", "It’s like Bruce Banner’s quote in The Avengers : \"I’m always angry.\" I’m always excited and curious. That’s why I’m always at peak productivity.", "By the way, when you were in college, I was also in college. Despite being a year younger than me, you were ahead of me by at least two years or something. We met around this time through the Tyler Cowen universe. It’s very insane how small the world is. Did I reach out to you? I must have.", "Leopold Aschenbrenner 02:23:07", "I’m not sure.", "Dwarkesh Patel 02:23:09", "When I had a couple of videos with a few hundred views.", "Leopold Aschenbrenner 02:23:11", "It’s a small world. This is the crazy thing about the AI world. It’s the same few people at the parties running the models at DeepMind, OpenAI, and Anthropic. Some of our friends, now successful in their careers, met many of the people who are now successful in Silicon Valley before their twenties or in their early twenties.", "Why is it a small world? There’s some amount of agency. I think in a funny way, this is what I took away from my Germany experience. It was crushing. I didn’t like it. Skipping grades and moving to the US were unusual moves.", "Just trying to do it, and then seeing it work, reinforced the idea that you don’t have to conform to the Overton window. You can try to do what seems right to you, even if most people are wrong. That was a valuable and formative early experience.", "Dwarkesh Patel 02:24:33", "After college, what did you do?", "Leopold Aschenbrenner 02:24:36", "I did econ research for a bit, at Oxford and other places, and then I worked at Future Fund .", "Dwarkesh Patel 02:24:41", "Tell me about it.", "Leopold Aschenbrenner 02:24:46", "Future Fund was a foundation funded by Sam Bankman-Fried but we were our own thing. We were based in the Bay Area. At the time, in early 2022, it was an incredibly exciting opportunity. It was basically a startup foundation, which doesn’t come along often. We thought we would be able to give away billions of dollars and remake how philanthropy is done from first principles.", "We thought we’d have significant impact, focusing on causes like biosecurity, AI, and finding exceptional talent to work on hard problems. A lot of the work we did was exciting. Academics, who would usually take six months, would send us emails saying, \"this is great. This is so quick and straightforward.\" I often find that with a little encouragement and empowerment, by removing excuses and making the process easy, you can get people to do great things.", "Dwarkesh Patel 02:25:49", "For context, not only were you guys planning on deploying billions of dollars, but it was a team of four people. So you, at 18, were on a team of four people that was in charge of deploying billions of dollars.", "Leopold Aschenbrenner 02:26:07", "That was sort of the heyday. Then in November 2022, it was revealed that Sam was a giant fraud , and from one day to the next, the whole thing collapsed. It was really tough. It was devastating for the people who had their money in FTX. Closer to home, we wanted to help all the grantees do amazing projects but they ended up suddenly saddled with a giant problem.", "Personally, it was difficult because it was a startup. I had worked 70-hour weeks every week for almost a year to build it up. We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud. That was incredibly tough.", "Dwarkesh Patel 02:27:04", "Were there any early signs about SBF?", "Leopold Aschenbrenner 02:27:10", "Obviously, I didn’t know he was a fraud. If I had, I would have never worked there. We were a separate entity and didn’t work with the business. I do think there are some takeaways for me.", "I, and people in general, had this tendency to give successful CEOs a pass on their behavior because they’re successful. You think that’s just a successful CEO thing. I didn’t know Sam Bankman-Fried was a fraud.", "I knew he was extremely risk-taking, narcissistic, and didn’t tolerate disagreement well. By the end, he and I didn’t get along because I pointed out that some biosecurity grants weren’t cost effective but he liked them because they were cool and flashy. He was unhappy about that.", "So I knew his character. I realized that it’s really worth paying attention to people’s characters, including people you work for and successful CEOs. That can save you a lot of pain down the line.", "Dwarkesh Patel 02:28:26", "After FTX imploded and you were out, you went to OpenAI. The superalignment team had just started. You were part of the initial team.", "What was the original idea? What compelled you to join?", "Leopold Aschenbrenner 02:28:47", "The alignment teams at OpenAI and other labs had done basic research and developed RLHF. reinforcement learning from human feedback. That ended up being a really successful technique for controlling current AI models.", "Our task was to find the successor to RLHF. The reason we need that is that RLHF probably won’t scale to superhuman systems. RLHF relies on human raters giving feedback, but superintelligent models will produce complex outputs beyond human comprehension. It’ll be like a million lines of complex code and you won’t know at all what’s going on anymore.", "How do you steer and control these systems? How do you add side constraints? I joined because I thought this was an important and solvable problem. I still do and even more so. I think there’s a lot of promising ML research on aligning superhuman systems, which we can discuss more later.", "Dwarkesh Patel 02:30:01", "It was so solvable, you solved it in a year. It’s all over now.", "Leopold Aschenbrenner 02:30:07", "OpenAI wanted to do a really ambitious effort on alignment. Ilya was backing it. I liked a lot of the people there. I was really excited. There are always people making hay about alignment. I appreciate people highlighting the importance of the problem and I was just really into trying to solve it. I wanted to do the ambitious effort, like an Operation Warp Speed for solving alignment. It seemed like an amazing opportunity to do it.", "(02:30:35) – What happened at OpenAI", "Dwarkesh Patel 02:30:35", "Now the team basically doesn't exist. The heads of it, Jan and Ilya, have left. That’s been the news of last week . What happened? Why did the team break down?", "Leopold Aschenbrenner 02:30:48", "OpenAI decided to take things in a different direction.", "Dwarkesh Patel 02:30:53", "Meaning what? That superalignment isn’t the best way to frame it?", "Leopold Aschenbrenner 02:30:59", "No, obviously after the November board events there were personnel changes. Ilya leaving was incredibly tragic for OpenAI. There was some reprioritization. There’s been reporting on the superalignment compute commitment, the 20% compute commitment , which was how a lot of people were recruited. There was a decision to not keep that commitment and go in a different direction.", "Dwarkesh Patel 02:31:26", "Now Jan and Ilya have left, and the team itself has dissolved. You were the first person who left or was forced to leave. The Information reported that you were fired for leaking. What happened? Is this accurate?", "Leopold Aschenbrenner 02:31:43", "Why don’t I tell you what they claim I leaked, and you can tell me what you think. OpenAI claimed to employees that I was fired for leaking. I and others have pushed them to say what the leak was. Here’s their response in full: Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak.", "For context, it was totally normal at OpenAI at the time to share safety ideas with external researchers for feedback. It happened all the time. The doc had my ideas. Before I shared it, I reviewed it for anything sensitive. The internal version had a reference to a future cluster, which I redacted for the external copy. There was a link to some internal slides, but that was a dead link for the external people. The slides weren’t shared with them.", "When I pressed them to specify what confidential information was in this document. They came back with a line about planning for AGI by 2027-2028 and not setting timelines for preparedness.", "I wrote this doc a couple of months after the superalignment announcement . We had put out a four-year planning horizon. I didn’t think that planning horizon was sensitive. It’s the sort of thing Sam says publicly all the time. I think Jan mentioned it on a podcast a couple of weeks ago. So, that’s it.", "Dwarkesh Patel 02:33:20", "That’s it? That sounds pretty thin if the cause was leaking. Was there anything else to it?", "Leopold Aschenbrenner 02:33:28", "That was the leaking claim. Let me explain more about what happened during the firing. Last year, I wrote an internal memo about OpenAI's security, which I thought was egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors. I shared this memo with a few colleagues and a couple of members of leadership, who mostly said it was helpful.", "A few weeks later, a major security incident occurred 1 . That prompted me to share the memo with a couple of board members. Days later, it was made very clear to me that leadership was very unhappy I had shared this memo with the board. Apparently, the board hassled leadership about security.", "I got an official HR warning for sharing the memo with the board. The HR person told me it was racist to worry about CCP espionage and that it was unconstructive. I probably wasn’t at my most diplomatic and could have been more politically savvy. I thought it was a really important issue. The security incident made me very worried.", "The reason I bring this up is that when I was fired, it was very made explicit that the security memo was a major reason for my being fired. They said, \"the reason this is a firing and not a warning is because of the security memo.\"", "Dwarkesh Patel 02:34:59", "You sharing it with the board?", "Leopold Aschenbrenner 02:35:01", "The warning I’d gotten for the security memo.", "What might also be helpful context is the kinds of questions they asked me when they fired me. A bit over a month ago, I was pulled aside for a chat with a lawyer that quickly turned adversarial. The questions were about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was up to during the OpenAI board events. They then talked to a couple of my colleagues and came back and told me I was fired. They’d gone through all of my digital artifacts from my time at OpenAI, and that’s when they found the leak.", "The main claim they made was this leaking allegation. That’s what they told employees. The security memo was another thing. There were a couple of other allegations they threw in. One thing they said was that I was unforthcoming during the investigation because I didn’t initially remember who I had shared the preparedness brainstorming document with, only that I had talked to some external researchers about these ideas.", "The document was over six months old, I’d spent a day on it. It was a Google Doc I shared with my OpenAI email. It wasn’t a screenshot or anything I was trying to hide. It simply didn’t stick because it was such a non-issue. They also claimed I was engaging on policy in a way they didn’t like. They cited there that I had spoken to a couple of external researchers, including someone at a think tank, about my view that AGI would become a government project, as we just discussed.", "In fact, I was speaking with lots of people in the field about that view at the time. I thought it was a really important thing to think about. So they found a DM I had written to a friendly colleague, five or six months earlier, and they cited that too. I had thought it was well within OpenAI norms to discuss high-level issues about the future of AGI with external people in the field.", "That’s what they allege happened. I’ve spoken to a few dozen former colleagues about this since. The universal reaction has been, \"that’s insane.\" I was surprised as well. I had been promoted just a few months before. Ilya’s comment for the promotion case at the time was something like, \"Leopold’s amazing. We’re lucky to have him.\"", "The thing I understand, and in some sense it’s reasonable, is that I ruffled some feathers and was probably annoying at times with the security stuff. I repeatedly raised that, maybe not always in the most diplomatic way. I didn’t sign the employee letter during the board events, despite pressure to do so.", "Dwarkesh Patel 02:38:03", "You were one of like eight people or something?", "Leopold Aschenbrenner 02:38:05", "Not that many people. I think the two most senior people who didn’t sign were Andrej and Jan, who have both since left.", "On the letter, by Monday morning when it was circulating, I thought it was probably appropriate for the board to resign because they had lost too much credibility and trust with the employees.", "But I thought the letter had issues. It didn’t call for an independent board, which is a basic of corporate governance. In other discussions, I pressed leadership for OpenAI to abide by its public commitments. I raised tough questions about whether it was consistent with the OpenAI mission and the national interest to partner with authoritarian dictatorships to build the core infrastructure for AGI.", "It’s a free country. That’s what I love about it. We talked about it. They have no obligation to keep me on staff. It would have been reasonable for them to come to me and say, \"we’re taking the company in a different direction. We disagree with your point of view. We don’t trust you to toe the company line anymore. Thank you so much for your work at OpenAI, but it’s time to part ways.\"", "That would have made sense. We had started diverging on important issues. I came in very excited and aligned with OpenAI, but that changed over time. That would have been a very amicable way to part ways. It’s a shame how it went down.", "All that being said, I really want to emphasize that there are a lot of incredible people at OpenAI, and it was an incredible privilege to work with them. Overall, I’m extremely grateful for my time there.", "Dwarkesh Patel 02:40:01", "Now there’s been reporting about an NDA that former employees have to sign to access their vested equity. Did you sign such an NDA?", "Leopold Aschenbrenner 02:40:16", "No. My situation was a little different because I was right before my cliff. They still offered me the equity, but I didn’t want to sign. Freedom is priceless", "Dwarkesh Patel 02:40:28", "How much was the equity?", "Leopold Aschenbrenner 02:40:30", "Close to a million dollars.", "Dwarkesh Patel 02:40:32", "So it was definitely something you and others were aware of. OpenAI explicitly offered you a choice. Presumably, the person on OpenAI staff knew they were offering equity but required signing an NDA that prevents making statements about AGI and OpenAI, like the ones you’re making on this podcast.", "Leopold Aschenbrenner 02:40:57", "I don’t know the whole situation. I certainly think conditioning vested equity on signing an NDA is pretty rough. It might be different if it’s a severance agreement.", "Dwarkesh Patel 02:41:05", "Right, but an OpenAI employee who had signed it presumably couldn’t give the podcast you’re giving today.", "Leopold Aschenbrenner 02:41:11", "Quite possibly not. I don’t know.", "Dwarkesh Patel 02:41:15", "The board thing is really tough. Analyzing the situation here, if you were trying to defend them, you might say, \"well, listen you were just going outside the regular chain of command.\" There might be a point there.", "Although the idea that HR thinks you’re supposed to have an adversarial relationship with the board is odd. You’re giving the board relevant information about whether OpenAI is fulfilling its mission and how it can improve. That seems important since the board is supposed to ensure OpenAI follows its mission. Them treating that as part of the leak, as if the board were an external actor…", "Leopold Aschenbrenner 02:42:00", "To be clear, the leak allegation was just about that document I shared for feedback. This is a separate issue they cited. They said I wouldn’t have been fired if not for the security memo.", "Dwarkesh Patel 02:42:09", "They said you wouldn’t have been fired for it.", "Leopold Aschenbrenner 02:42:11", "They said the reason this is a firing and not a warning is because of the warning I had gotten for the security memo.", "Dwarkesh Patel 02:42:16", "Before you left, the incidents with the board happened. Sam was fired and then rehired as CEO, and now he’s on the board. Ilya and Jan, who were the heads of the superalignment team, have left. Ilya, was a co-founder of OpenAI and the most significant member of OpenAI from a research perspective. There has been a lot of personnel drama over the last few months regarding superalignment and just generally with the OpenAI personnel drama. What’s going on?", "Leopold Aschenbrenner 02:42:49", "There’s a lot of drama. Why is there so much drama?", "There would be much less drama if all OpenAI claimed to be was building ChatGPT or business software. A lot of the drama comes from OpenAI really believing they’re building AGI. That isn’t just a marketing claim. There’s a report that Sam is raising $7 trillion for chips. That only makes sense if you really believe in AGI.", "What gets people is the cognitive dissonance between believing in AGI and not taking some of the other implications seriously. This technology will be incredibly powerful, both for good and bad. That implicates national security issues. Are you protecting the secrets from the CCP? Does America control the core AGI infrastructure or does a Middle Eastern dictator control it?", "The thing that really gets people is the tendency to make commitments and say they take these issues seriously, but then frequently not follow. For instance, as mentioned, there was a commitment around superalignment compute, dedicating 20% of compute for long-term safety research.", "You and I could have a totally reasonable debate about the appropriate level of compute for superalignment. That’s not really the issue. The issue is that the commitment was made and it was used to recruit people. It was very public.", "It was made because there was a recognition that there would always be something more urgent than long-term safety research, like a new product. In the end, they just didn’t keep the commitment. There was always something more urgent than long-term safety research.", "Another example is when I raised security issues. They would tell me security is our number one priority. Invariably, when it came time to invest serious resources or make trade-offs to take basic measures, security was not prioritized. The cognitive dissonance and unreliability cause a lot of the drama.", "(02:45:11) – Accelerating AI research progress", "Dwarkesh Patel 02:45:11", "Let’s zoom out and talk about a big part of the story. A big motivation for the way we must proceed with regards to geopolitics is that once you have AGI, you soon proceed to ASI, or superintelligence. You have these AGIs functioning as researchers into further AI progress and within a matter of years, maybe less, you reach superintelligence. From there, according to your story, you do all this research and development into robotics, pocket nukes, and other crazy shit.", "I’m skeptical of this story for many reasons. At a high level, it’s not clear to me that this input-output model of research is how things actually happen in research. We can look at the economy as a whole. Patrick Collison and others have pointed out that, compared to 100 years ago, we have 100x more researchers in the world. Yet progress isn’t happening 100 times faster. It's clearly not as simple as pumping in more researchers to get higher research output. I don't see why it would be different for AI researchers.", "Leopold Aschenbrenner 02:46:31", "This is getting into good stuff. This is the classic disagreement I have with Patrick and others. Obviously, inputs matter. The United States produces a lot more scientific and technological progress than Liechtenstein or Switzerland.", "Say you made Patrick Collison dictator of Liechtenstein or Switzerland and he implemented his utopia of ideal institutions. Keep the talent pool fixed. He’s not able to do some crazy high-skilled immigration thing or genetic breeding scheme. You keep the talent pool fixed with amazing institutions. Even then, even if Patrick Collison were the dictator, Switzerland still wouldn’t be able to outcompete the United States in scientific and technological progress. Magnitudes matter.", "Dwarkesh Patel 02:47:19", "I'm not sure I agree with this. There are many examples in history where small groups of people, Bell Labs or Skunk Works, have made significant progress. OpenAI has a couple hundred researchers.", "Leopold Aschenbrenner 02:47:33", "Highly selected though.", "Dwarkesh Patel 02:47:35", "That’s why Patrick Collison as a dictator would do a good job of this.", "Leopold Aschenbrenner 02:47:39", "Well, yes, if he can highly select all the best AI researchers in the world, he might only need a few hundred. But that’s the talent pool. You have the 300 best AI researchers in the world.", "Dwarkesh Patel 02:47:48", "But from 100 years ago to now, the population has increased massively. You would expect the density of talent to have increased, considering that things like malnutrition and poverty which affected past talent are no longer as debilitating to the same level.", "Leopold Aschenbrenner 02:48:06", "I don’t know if it’s 100x. It’s probably at least 10x. Some people think ideas haven’t gotten much harder to find, so why would we need this 10x increase in research effort? To me, this is a very natural story. Why is it natural? It’s a straight line on a log-log plot. It’s a deep learning researcher’s dream.", "What is this log-log plot? On the x-axis you have log cumulative research effort. On the y-axis you have log GDP, OOMs of algorithmic progress, log transistors per square inch, log price for a gigawatt of solar energy. It’s extremely natural for that to be a straight line. It’s classic. Initially, things are easy, but you need logarithmic increments of cumulative research effort to find the next big thing. This is a natural story.", "One objection people make is, “isn’t it suspicious that we increased research effort 10x and ideas also got 10x harder to find, perfectly equilibrating?” I say it’s just equilibrium—it’s in a endogenous equilibrium. Isn’t it a coincidence that supply equals demand and the market clears? It’s the same here. The difficulty of finding new ideas depends on how much progress has been made.", "The overall growth rate is a function of how much ideas have gotten harder to find in ratio to how much research effort has increased. This story is fairly natural, and you see it not just economy-wide but also in the experience curve for various technologies.", "It’s plausible that institutions have worsened by some factor. Obviously, there’s some sort of exponent of diminishing returns on adding more people. Serial time is better than just parallelizing. Still, clearly inputs clearly matter.", "Dwarkesh Patel 02:50:06", "I agree, but if the coefficient, of how fast they diminish as you grow the input, is high enough, then in the abstract the fact that inputs matter isn’t that relevant.", "We’re talking at a very high level, but let’s take it down to the concrete. OpenAI has a staff of at most a few hundred directly involved in algorithmic progress for future models. Let’s say you could really arbitrarily scale this number for faster algorithmic progress and better AI. It’s not clear why OpenAI doesn’t just go hire every person with a 150 IQ, of which there are hundreds of thousands in the world.", "My story is that there are transaction costs to managing all these people. They don’t just go away if you have a bunch of AIs. These tasks aren’t easy to parallelize. I’m not sure how you would explain the fact that OpenAI doesn’t go on a recruiting binge of every genius in the world?", "Leopold Aschenbrenner 02:51:12", "Let’s talk about the OpenAI example and the automated AI researchers. Look at the inflation of AI researcher salaries over the last yea. It’s gone up by 4x or 5x. They’re clearly trying to recruit the best AI researchers in the world and they do find them. My response would be that almost all of these 150 IQ people wouldn’t just be good AI researchers if you hired them tomorrow. They wouldn’t be Alec Radford.", "Dwarkesh Patel 02:51:41", "They’re willing to make investments that take years to pan out. The data centers they’re buying now will come online in 2026. Some of them won’t work out, some won’t have traits we like. But why wouldn’t they make the investment to turn these 150 IQ people into amazing AI researchers by 2026?", "Leopold Aschenbrenner 02:51:58", "Sometimes this does happen. Smart physicists have been really good at AI research, like all the Anthropic co-founders.", "Dwarkesh Patel 02:52:03", "But for example, Dario said on the podcast that they have a careful policy of being extremely selective and not hiring arbitrarily.", "Leopold Aschenbrenner 02:52:11", "Training is not as easily scalable. Training is really hard. If you hired 100,000 people, it would be really hard to train them all. You wouldn’t be doing any AI research. There are huge costs to bringing on new people and training them.", "This is very different with AIs. It’s important to talk about the advantages AIs will have. What does it take to be an Alec Radford? You need to be a really good engineer. AIs will be amazing engineers and coders. You can train them to do that. They also need to have good research intuitions and a really good understanding of deep learning.", "Alec Radford, or people like him, has acquired this over years of being deeply immersed in deep learning and having tried lots of things himself and failed. AIs will be able to read every research paper ever written, learn from every experiment ever run at the lab, and gain intuition from all of this. They’ll be able to learn in parallel from each other's experiments and experiences.", "There’s also a cultural acclimation aspect. If you hire someone new, there’s politicking, and maybe they don’t fit in well. With AIs, you just make replicas. There's a motivation aspect as well. If I could duplicate Alec Radford, and before I run every experiment, have him spend a decade’s worth of human time double-checking code and thinking really carefully about it, he wouldn’t care and he wouldn't be motivated. With AIs, you can have 100 million of them focused on making sure the code is correct with no bugs.", "The idea of 100 million human-equivalent AI researchers is just a way to visualize it. You might not have literally 100 million copies. There’s tradeoffs you can make between serial speed and parallel. You might run them at 10x or 100x serial speed, resulting in fewer tokens overall because of inherent trade-offs. You might have 100,000 AIs running at 100x human speed. They can coordinate by sharing latent space and attending to each other's context. There’s a huge range of possibilities for what you can do.", "Another illustration is that by 2027 or 2028, with automated AI researchers, you'll be able to generate an entire Internet’s worth of tokens every day. It’s clearly a huge amount of intellectual work that you can do.", "Dwarkesh Patel 02:54:44", "Here’s an analogy. Today, we generate more patents in a year than during the actual physics revolution in the early 20th century. Are we making more physics progress in a year today than we did in half a century back then? Generating all these tokens doesn't necessarily equate to generating as much codified knowledge in the initial creation of the Internet.", "Leopold Aschenbrenner 02:55:09", "Internet tokens are usually final output. We talked about the unhobbling. I think of a GPN token as one token of my internal monologue. That’s how I do this math on human equivalents. It's like 100 tokens a minute and then humans working for X hours. What is the equivalent there?", "Dwarkesh Patel 02:55:28", "This goes back to something from earlier. Why haven’t we seen huge revenues from AI yet? People often ask this question. If you took GPT-4 back ten years, people would think it would automate half the jobs. There’s a modus ponens , modus tollens here. Part of the explanation is that we’re on the verge and we just need to do these unhobblings. Part of that is probably true. But there is another lesson to learn there. Just looking at a set of abilities at face value, there are likely more hobblings behind the scenes. The same will be true of AGIs running as AI researchers.", "Leopold Aschenbrenner 02:56:09", "I basically agree with a lot of what you said. My story here is that there’s going to be a long tail. Maybe by 2026 or 202, you’ll have the proto-automated engineer that’s really good at engineering. It doesn’t yet have the research intuition. You don’t quite know how to put them to work.", "Even so, the underlying pace of AI progress is already so fast. In just three years, we've gone from AI not being able to do any kind of math at all to now crushing these math competitions. So, you might have the initial automated research engineer by 2026 or 2027, which speeds you up by 2x. You go through a lot more progress in that year. By the end of the year, you’ve figured out the remaining unhobblings and you've got a smarter model.", "Maybe it’s two years but then maybe that model can automate 100% of the research. They don’t need to be doing everything. They don’t need to make coffee or deal with tacit knowledge in other fields. AI researchers at AI labs really know the job of an AI researcher. There are lots of clear metrics. It's all virtual. There’s code. There are things you can develop and train for.", "Dwarkesh Patel 02:57:11", "Another thing is how do you actually manage a million AI researchers? Humans’ comparative ability, and we’ve been especially trained for it, is to work in teams. We’ve been learning for thousands of years about how we work together in groups. Despite this, management is a clusterfuck. Most companies are poorly managed. It's really hard to do this stuff.", "For AIs, we talk about AGI, but for it will be some bespoke set of abilities some of which will be higher than human level and some at human level. It will be some bundle and you’ll need to figure out how to put these bundles together with their human overseers and equipment. I’m just very skeptical of the idea that as soon as you get the bundle, you can just shove millions of them together and manage them.", "Any other technological revolution in history has been much more piecemeal than you'd expect on paper. What is the industrial revolution? We dug up coal to power steam engines, used steam engines to run railroads, which helped us get more coal. There’s sort of a Factorio story you can tell where in six hours you can be pumping out thousands of times more coal. In real life, it takes centuries.", "For example, with electrification, there's a famous study showing how it took decades after electricity to switch from the pulley and water wheel-based system for steam engines to one that works with more spread-out electrical motors. This will be the same kind of thing. It might take decades to actually get millions of AI researchers to work together effectively.", "Leopold Aschenbrenner 02:59:05", "This is great. A few responses to that. I totally agree with the real-world bottlenecks idea. It's easy to underrate these constraints. Basically, we’re automating labor and exploiting technology, but there are still many other bottlenecks in the world.", "That’s why the story starts narrowly where there aren’t these bottlenecks and then expands to broader areas over time. This is part of why I think initially it’s an AI research explosion. AI research doesn’t run into these real-world bottlenecks. It doesn’t require plowing a field or digging up coal. It’s just doing AI research.", "Dwarkesh Patel 02:59:42", "I love how in your model, AI research isn’t complicated. It’s like flipping a burger. It’s just AI research.", "Leopold Aschenbrenner 02:59:51", "People make these arguments like, “AGI won’t do anything because it can’t flip a burger.” Yeah it won’t be able to flip a burger, but it’ll be able to do algorithmic progress. Once it achieves that, it can figure out how to create a robot that flips burgers. The quantities we’re talking about are a lower bound. We can definitely run 100 million of these.", "One of the first things we’ll figure out is how to translate quantity into quality. Even at the baseline rate of progress, you’re quickly getting smarter and smarter systems. It took four years to go from preschooler to high schooler. Pretty quickly, there are probably some simple algorithmic changes you find if you have a hundred Alec Radfors instead of one. You don’t even need a hundred million. We’ll soon have systems that are even smarter and capable of creative, complicated behavior we don’t understand.", "Maybe there’s some way to use all this test time compute in a more unified way than all these parallel copies. They won’t just be quantitatively superhuman. They’ll pretty quickly become qualitatively superhuman. It’s like a high school student trying to understand standard physics versus a super-smart professor who gets quantum physics. You quickly enter that regime just given the underlying pace of AI progress but even more quickly with the accelerated force of automated AI research.", "Dwarkesh Patel 03:01:21", "I agree that over time you would get there. I'm not denying that ASI is possible. I’m just questioning how this happens in a year.", "Leopold Aschenbrenner 03:01:40", "The story is a bit more continuous. By 2025 or 2026, you’ll already have models as good as a college graduate. I don’t know where all the unhobbling is going to be but even it’s possible that you have a proto-automated engineer.", "There’s a bit of an AGI smear that there are unhobblings missing. There’s ways of connecting them that are missing. There’s some level of intelligence you’re missing. At some point you are going to get this thing that is 100% automated Alec Radford and once you have that, things really take off.", "Dwarkesh Patel 03:02:06", "Let’s go back to the unhobbling.", "We’re going to get a bunch of models by the end of the year. Suppose we didn’t get some capacity by the end of the year. Is there some such capacity which us lacking would suggest that AI progress will take longer than you are projecting?", "Leopold Aschenbrenner 03:02:22", "There are two key things: the unhobbling and the data wall. Let's talk about the data wall for a moment. Even though we’re seeing crazy AI progress, the data wall is actually underrated. There's a real scenario where we stagnate because we've been riding this tailwind of easily bootstrapping unsupervised learning.", "It learns these amazing world models. You just buy more compute, make simple efficiency changes, and get big gains. All of the big gains in efficiency have been pretty dumb things. You add a normalization layer . You fix scaling laws . These have already been huge things, let alone obvious ways in which these models aren’t good yet.", "The data wall is a big deal. For instance, Common Crawl online is about 30 trillion tokens. Llama-3 was trained on 15 trillion tokens. We're already using all of the data. You can get somewhat further by repeating data, but an academic paper by Boaz Barak that does scaling laws for this . It says that after about 16 repetitions, the returns basically go to zero.", "Llama-3 is already at the limit of data. Maybe we can get 10x more by repeating data. At most that’s a 100x model than GPT-4, a 100x effective compute from GPT-4. That’s not that much. If you do half an OOM of compute and half an OOM of algorithmic progress each year, that's like two years from GPT-4. GPT-4 finished pre-training in 2022, so it’s 2024. We won’t quite know by the end of the year but by 2025 and 2026 we’ll get a sense of if we’re cracking the data wall.", "Dwarkesh Patel 03:04:13", "Suppose we had three OOMs less data in Common Crawl on the Internet than we happen to have now. For decades, with the Internet and other things, the stock of data humanity has has been rapidly increasing. Is it your view that, for contingent reasons, we just happen to have enough data to train models just powerful enough, like GPT-4.5, to kick off the self-play RL loop?", "Or is it just that if it had been 3 OOMs higher, then progress would have been slightly faster? In that world, we would have been looking back, thinking it would have been hard to kick off the RL explosion with just GPT-4.5. We would have figured it out eventually.", "In this world, we would have gotten to GPT-3 and then had to kick off some sort of RL explosion. We would have still figured it out. Did we just luck out on the amount of data we happen to have in the world?", "Leopold Aschenbrenner 03:05:09", "3 OOMs less data is pretty rough. That would mean 6 OOMs less compute model and Chinchilla scaling laws. That’s basically capping out at something barely better than GPT-2. That would be really rough.", "You make an interesting point about contingency. If we consider the human trajectory analogy, a preschooler model can't learn from itself. An elementary school model can't learn from itself. Maybe GPT-4 is like a smart high schooler that can start learning from itself. Ideally, you want a somewhat better model that can truly learn by itself. 1 OOM less data would make me more iffy, but it might still be doable. It would feel chiller if we had 1 or two OOMs of more data.", "Dwarkesh Patel 03:05:56", "It would be an interesting exercise to get probability distributions of AGI contingent across OOMs of data.", "Leopold Aschenbrenner 03:06:01", "Yeah, I agree.", "Dwarkesh Patel 03:06:02", "The thing that makes me skeptical of this story is that it totally makes sense why pre-training works so well. With these other things, there are stories of why they ought to work in principle.. Humans can learn this way and so on. Maybe they're true.", "I worry that a lot of this case is based on first principles evaluation of how learning happens. Maybe fundamentally, we don't understand how humans learn. Maybe there's some key thing we're missing. On sample efficiency, you say the fact that these things are way less sample efficient than humans in learning suggests there's a lot of room for improvement. Another perspective is that we are just on the wrong path altogether. That’s why they’re so sample inefficient when it comes to pre-training.", "There are a lot of first principles arguments stacked on top of each other where you get these unhobblings, then you get to AGI. Then because of these reasons why you can stack all these things on top of each other and you get to ASI. I'm worried that there are too many steps of this sort of first principles thinking.", "Leopold Aschenbrenner 03:07:11", "We'll see. On sample efficiency, it’s sort of first principles but there's this clear missing middle. People hadn't been trying. Now people are really trying. Again, often in deep learning something like the obvious thing works and there are a lot of details to get right. It might take some time, but now people are really trying. We will get a lot of signal in the next couple of years on unhobbling.", "What is the signal on unhobbling that would be interesting? The question is basically, are you making progress on test time compute? Is this thing able to think longer horizon than just a couple hundred tokens? That was unlocked by chain-of-thought.", "Dwarkesh Patel 03:07:55", "On that point in particular, many people who have longer timelines have come on the podcast and made the point that the way to train this long horizon RL, it's not…", "Earlier we were talking about how they can think for five minutes, but not for longer. It's not because they can't physically output an hour's worth of tokens.", "Leopold Aschenbrenner 03:08:17", "Even Gemini has a million in context, and the million of context is actually great for consumption. It solves one important hobbling, which is the onboarding problem. A new coworker in your first five minutes, like a new smart high school intern, is not useful at all.", "A month in, they’re much more useful because they've looked at the monorepo , understand how the code works, and they've read your internal docs. Being able to put that in context solves this onboarding problem. They're not good at the production of a million tokens yet.", "Dwarkesh Patel 03:08:46", "On the production of a million tokens, there's no public evidence that there's some easy loss function where you can...", "Leopold Aschenbrenner 03:08:55", "GPT-4 has gotten a lot better since launch. The GPT-4 gains since launch are a huge indicator.", "You talked about this with John Schulman on the podcast. John said this was mostly post-training gains. If you look at the LMSys scores, it's like 100 Elo or something. It's a bigger gap than between Claude 3 Opus and Claude 3 Haiku . The price difference between those is 60x.", "Dwarkesh Patel 03:09:16", "But it’s not more agentic. It's better in the same chat.", "Leopold Aschenbrenner 03:09:19", "It’s  much better about math. It went from 40% to 70%. That indicates that clearly there's stuff to be done on hobbling. The interesting question is, this time a year from now, is there a model that is able to think for a few thousand tokens coherently, cohesively, identically? Again, I'd probably feel better if we had 1–2 OOMs more data because the scaling just gives you this tailwind.", "With tools, when you talk to people who try to make things work with tools, GPT-4 is really when tools start to work. You can kind of make them work with GPT-3.5, but it's just really tough. Having GPT-4, you can help it learn tools in a much easier way. So it’d be great to have just a bit more tailwind from scaling. I don't know if it'll work, but it's a key question.", "Dwarkesh Patel 03:10:12", "It's a good place to sort of close that part where we know what the crux is and what evidence of that would look like.", "Let’s talk about AGI to superintelligence. Maybe it's the case that the gains are really easy right now and you can just sort of let loose. Give Alec Radford a compute budget and he’ll comes out the other end with something that is an additive change as part of the code.", "How many other domains in the world are like this, where you think you could get the equivalent of in one year? You just throw enough intelligence across multiple instances and you come out the other end with something that is remarkably decades, centuries ahead? You start off with no flight, and then you have the Wright brothers . You have a million instances of GPT-6, and you come out the other end with Starlink ? Is that your model of how things work?", "Leopold Aschenbrenner 03:11:17", "You're exaggerating the timelines a little bit, but I think a decade's worth of progress in a year or something is a reasonable prompt. This is where the automated AI researcher comes in. It gives you this enormous tailwind on all the other stuff.", "You automate AI research with your automated Alec Radfords. You come out the other end. You've done another five OOMs. You have a thing that is vastly smarter. Not only is it vastly smarter, you've been able to make it good at everything else. You're solving robotics.", "The robots are important because for a lot of other things, you do actually need to try things in the physical world. Maybe you can do a lot in simulation. Those are the really quick worlds. I don't know if you saw the last Nvidia GTC and it was all about the digital twins having all your manufacturing processes in simulation. Again, if you have these superintelligent cognitive workers, can they just make simulations of everything, off the float style, and make a lot of progress there?", "You're just going to get the robots. I agree there are a lot of real-world bottlenecks. It's quite possible that we're going to have crazy drone swarms, but also lawyers and doctors still need to be humans because of regulation. You kind of start narrowly, you broaden, and there are worlds in which you let them loose. Again, because of these competitive pressures, we will have to let them loose to some degree on various national security applications. Rapid progress is quite possible.", "In the explosion after, there are basically two components. The A in the production function , the growth of technology, has massively accelerated. Now you have a billion superintelligent scientists and engineers and technicians, superbly competent at everything.", "You also just automated labor. Even without the whole technological explosion thing, you have this industrial explosion, at least if you let them loose, because you can cover Nevada and you start with one robot factory producing more robots. It's this cumulative process because you've taken labor out of the equation.", "Dwarkesh Patel 03:13:21 That's super interesting.", "Although when you increase the K or the L without increasing the A , you can look at examples like the Soviet Union or China. They rapidly increased inputs, which did have a geopolitically game-changing effect. It is remarkable to see the transformation of cities like Shanghai over just decades.", "Leopold Aschenbrenner 03:13:43", "They throw out these crazy cities in like a decade.", "Dwarkesh Patel 03:13:46 People talk about 30% growth rates from AI. The closest thing—", "Leopold Aschenbrenner 03:13:49", "Look at the Asian Tigers at 10%. It's totally possible.", "Dwarkesh Patel 03:13:50", "But without productivity gains, it's not like the Industrial Revolution. From a perspective of outside the system, your goods become much cheaper and you can manufacture more things. But it's not a sign that the next century is rapidly approaching.", "Leopold Aschenbrenner 03:14:07", "Both are important. The other thing I'll say is that with all of this stuff, the magnitudes are really, really important. We talked about a 10x in research effort, or maybe 10-30x over a decade. Even without any kind of self-improvement type loops — even in the sort of GPT-4 to AGI story — we're talking about an OOM of effective compute increase a year.", "It’s half an OOM of compute, half an OOM of algorithmic progress that sort of translates into effective compute. You're basically doing 10x a year on your labor force. It's a radically different world if you're doing a 10x or 30x in a century versus a 10x a year on your labor force. The magnitudes really matter.", "It also really matters in the intelligence explosion scenario, just the automated AI research part. One story you could tell there is that ideas get harder to find. Algorithmic progress is going to get harder. Right now, you have the easy wins, but in like four or five years, there will be fewer easy wins. So the sort of automated AI researchers are going to be necessary to just keep it going, because it's gotten harder. That's sort of a really weird knife-edge assumption economics.", "Dwarkesh Patel 03:15:09", "Isn't that the equilibrium story you were just telling about why the economy as a whole has 2% economic growth? You just proceed on the equilibrium. I guess you're saying by the time—", "Leopold Aschenbrenner 03:15:17", "The result of the equilibrium here is that it’s way faster. AIt depends on the sort of exponents. Suppose you need to 10x the effective research effort in AI research in the last four or five years to keep the pace of progress. We're not just getting a 10x, you're getting 1,000,000x or 100,000x. The magnitudes really matter.", "One way to think about this is that you have two exponentials. You have your normal economy that's growing at 2% a year, and you have your AI economy growing at 10x a year. It's starting out really small. It's way faster and it's going to overtake eventually. You can just do the simple revenue extrapolation if you think your AI economy has some growth rate. It's a very simplistic way, but there's this 10x a year process.", "You're going to transition the whole economy, as it broadens, from the 2% a year to the much faster growing process. That's very consistent with historical stories. There's this long-run hyperbolic trend. It manifested in the change in growth mode in the Industrial Revolution, but there's just this long-run hyperbolic trend. Now you have another change in growth mode.", "Dwarkesh Patel 03:16:35", "That was one of the questions I asked Tyler when I had him on the podcast . The fact that, after 1776, you went from a regime of negligible economic growth to 2% is really interesting. From the perspective of somebody in the Middle Ages or before, 2% is the equivalent of like 10%. I guess you're projecting even higher for the AI economy.", "Leopold Aschenbrenner 03:16:59", "It depends. Again, with all this stuff I have a lot of uncertainty. A lot of the time I'm trying to tell the modal story because it's important to be concrete and visceral about it.", "I have a lot of uncertainty over how the 2030s play out. The thing I know is that it's going to be fucking crazy. As for exactly where the bottlenecks are and so on…", "Dwarkesh Patel 03:17:20", "Let's talk through the numbers here. You mentioned hundreds of millions of AI researchers. Right now, GPT-4o is like $15 for a million tokens outputted. A human thinks at 150 tokens a minute or something. If you do the math on that, for an hour's worth of human output, it's like $0.10 or something.", "Leopold Aschenbrenner 03:17:48", "It's cheaper than a human worker. It can't do the job yet.", "Dwarkesh Patel 03:17:51", "That's right. But by the time you're talking about models that are trained on the 10 GW cluster, then you have something that is four OOMs more expensive via inference, something like three OOMs. That's like $100/hour of labor. Now you're having hundreds of millions of such laborers. Is there enough compute to do this kind of labor with the model that is 1000 times bigger?", "Leopold Aschenbrenner 03:18:17", "Great question. I actually don't think inference costs for frontier models are necessarily going to go up that much.", "Dwarkesh Patel 03:18:24", "But isn't the test time sort of thing that it will go up even higher?", "Leopold Aschenbrenner 03:18:28", "We're just doing per token. Suppose each model token was the same as a human token thing at 100 tokens a minute. It'll use more, but the token calculation is already pricing that in. The question is per token pricing. GPT-3 when it launched was actually more expensive than GPT-4 now. Over vast increases in capability gains, inference cost has remained constant. That's sort of wild, and it's worth appreciating. It gestures at an underlying pace of algorithmic progress.", "There's a more theoretically grounded way to explain why inference costs would stay constant. On Chinchilla scaling laws, half of the additional compute you allocate to bigger models and half of it you allocate to more data. If we go with the basic story of 0.5 OOM/year more compute and 0.5 OOM/year of algorithmic progress you're saving 0.5 OOM/year. That would exactly compensate for making the model bigger.", "The caveat is that obviously not all training efficiencies are also inference efficiencies. A bunch of the time they are. Separately, you can find inference efficiencies. Given this historical trend and baseline theoretical reason, it's not a crazy baseline assumption that the frontier models are not necessarily going to get more expensive, per token.", "Dwarkesh Patel 03:19:52", "Really? Okay, that's wild.", "Leopold Aschenbrenner 03:19:55", "We'll see. Even if they get 10x more expensive, then you have 10 million instead of 100 million. It's not really—", "Dwarkesh Patel 03:20:04", "But part of the intelligence explosion is that each of them has to run experiments that are GPT-4 sized. As a result, that takes up a lot of compute. Then you need to consolidate the results of experiments. What is the synthesized weight?", "Leopold Aschenbrenner 03:20:20", "You have much bigger inference compute anyway than your training. But the experiment compute is a constraint.", "Dwarkesh Patel 03:20:25", "Let’s go back to a more fundamental thing we're talking about here. In the series you say we should denominate the probability of getting to AGI in terms of OOMs of effective compute. Effective here accounts for the fact that there's a compute multiplier if you have a better algorithm. I'm not sure that it makes sense to be confident that this is a sensible way to project progress. It might be, but I have a lot of uncertainty about it.", "It seems similar to somebody trying to project when we're going to get to the moon. They're looking at the Apollo program in the 1950s or something. They're like, \"we have some amount of effective jet fuel and if we get more efficient engines, then we have more effective jet fuel. So we're going to determine the probability of getting to the moon based on the amount of effective jet fuel we have.\" I don't deny that jet fuel is important to launch rockets, but that seems like an odd way to denominate when you're going to get to the moon.", "Leopold Aschenbrenner 03:21:36", "I don't know how rocket science works, but I didn't get the impression that there's some clear scaling behavior with the amount of jet fuel. First of all, the scaling laws in AI have just held. A friend of mine pointed this out and it's a great point. If you look at the original Kaplan scaling laws paper — it went from 10^-9 to 10 petaflop days — and then concatenate additional compute from there to GPT-4, assuming some algorithmic progress, the scaling laws have held probably over 15 OOMs. It’s a rough calculation so it’s maybe even more. They’ve held for a lot of OOMs.", "Dwarkesh Patel 03:22:14", "They held for the specific loss function they're trained on, which is training the next token. Whereas the progress you are forecasting, we specifically know that that scaling can’t work because of the data wall. There's some new thing that has to happen, and I'm not sure whether you can extrapolate that same scaling curve to tell us whether these hobblings will also be fixed. Is this not on the same graph?", "Leopold Aschenbrenner 03:22:39", "The hobblings are just a separate thing.", "There’s a few things here. On effective compute scaling, people center the scaling laws because they’re easy to explain. Why does scaling matter?", "The scaling laws came way after people, at least like Dario and Ilya, realized that scaling mattered. There's this great quote from Dario on your podcast . The models just want to learn. You make them bigger and they learn more. That’s more important than the sort of loss curve.", "That just applied across domains. You can look at this in benchmarks. Again, the headwind is the data wall. I’m bracketing that and talking about that separately.", "The other thing is unhobblings. If you just put them on the effective compute graph, these unhobblings would be huge.", "Dwarkesh Patel 03:23:32", "What does it even mean? What is on the y-axis here?", "Leopold Aschenbrenner 03:23:36", "Say MLPR on this benchmark or whatever. We mentioned the LMSys differences, RLHF which is as good as 100x, chain-of-thought. Just going from this prompting change, a simple algorithmic change can be like 10x effective compute increases on math benchmarks. This is useful to illustrate that unhobblings are large, but they're slightly separate things.", "At a per token level, GPT-4 is not that far away from a token of my internal monologue. Even 3.5 to 4 took us from the bottom of the human range to the top of the human range on a lot of high school tests. It's a few more 3.5 to 4 jumps per token basis, per token intelligence. Then you've got to unlock the test time, solve the onboarding problem, make it use a computer, and then you're getting real close. The story might be wrong, but it is strikingly plausible.", "The other thing I'll say is on the 2027 timeline, I do think it’s unlikely, but I do think there's worlds where there are AGI next year. That's basically if the test time compute overhang is really easy to crack. If it's really easy to crack, then you do like four OOMs of test time compute from a few hundred tokens to a few million tokens quickly. Then again, maybe it only takes one or two jumps equivalent equivalent to GPT-3.5 to 4, per token. One or two of those jumps per token plus test time compute and you basically have the proto automated engineer.", "Dwarkesh Patel 03:25:03", "I'm reminded of Steven Pinker ’s book, The Better Angels of Our Nature . It talks about the secular decline in violence and war and everything. You can just plot the line from the end of World War Two. In fact from before World War Two, and then these are just aberrations. Basically as soon as it happens you get Ukraine, Gaza, etc.", "Leopold Aschenbrenner 03:25:32", "Impending ASI increasing crazy global conflict. ASI and crazy new WMDs.", "Dwarkesh Patel 03:25:39", "This is a thing that happens in history where you see a straight line then as soon as you make that prediction… Who is that famous author?", "Leopold Aschenbrenner 03:25:48", "Again, people have been predicting deep learning will hit a wall every year. Maybe one year they're right. But it's gone a long way and it hasn't hit a wall. You don't have that much more to go.", "(03:25:58) – Alignment", "Dwarkesh Patel 03:25:58", "This is a plausible story and let's just run with it and see what it implies.", "In your series, you talk about alignment not from the perspective of “this is some doomer scheme to get the 0.01% of the probability distribution where things don't go off the rails.” It's more about just controlling the systems and making sure they do what we intend them to do.", "If that's the case, we're going to be in this sort of geopolitical conflict with China. What we're worried about is them making the CCP bots that go out and take the red flag of Mao across the galaxies. Shouldn't we then be worried about alignment as something that, in the wrong hands, enables brainwashing, and dictatorial control?", "This seems like a worrying thing. This should be part of the sort of algorithmic secrets we keep hidden. The secret of how to align these models, because that's also something the CCP can use to control their models.", "Leopold Aschenbrenner 03:27:04", "In the world where you get the democratic coalition, yeah. Also, alignment is often dual use.", "The alignment team developed RLHF and it was great. It was a big win for alignment, but it also obviously makes these models useful. So alignment enables the CCP bots. Alignment also is what you need to get the US AIs to follow the Constitution, disobey unlawful orders, and respect separation of powers and checks and balances. You need alignment for whatever you want to do. It's just the underlying technique.", "Dwarkesh Patel 03:27:37", "Tell me what you make of this take. I've been struggling with this a little bit.", "Fundamentally, there's many different ways the future could go. There's one path that’s the Eliezer type: crazy AIs with nanobots take the future and turn everything into gray goo or paperclips.", "The more you solve alignment, the more that path of the decision tree is circumscribed. The more you solve alignment, the more it’sjust different humans and the visions they have. Of course, we know from history that things don't turn out the way you expect. It's not like you can decide the future.", "Leopold Aschenbrenner 03:28:10", "That’s part of the beauty of it.  You want these mechanisms like error correction—", "Dwarkesh Patel 03:28:14", "But from the perspective of anybody who's looking at the system it'll be like, “I can control where this thing is going to end up.” So the more you solve alignment — the more you circumscribe the different futures that are the result of AI will — the more that accentuates the conflict between humans and their visions of the future. The world where alignment is solved is the one in which you have the most sort of human conflict over where to take AI.", "Leopold Aschenbrenner 03:28:42", "By removing the worlds in which the AIs take over, the remaining worlds are the ones where the humans decide what happens. As we talked about, there are a whole lot of worlds there and how that could go.", "Dwarkesh Patel 03:28:53", "You think about alignment as just controlling these things. Just think a little forward. There are worlds in which hopefully human descendants, or some version of that in the future, merge with superintelligences. They have the rules of their own but they're in some sort of law and market-based order. I worry because you’ll have things that are conscious and should be treated with rights. I’m thinking about what these alignment schemes actually are.", "You read these books about what actually happened during the Cultural Revolution , what happened when Stalin took over Russia. You have very strong monitoring from different instances where everybody's tasked with watching each other. You have brainwashing. You have red teaming like the spy stuff you were talking about where you try to convince somebody you're a defector and you see if they defect with you. If they do, then you realize they're an enemy.", "Maybe I'm stretching the analogy too far but the ease with which these alignment techniques actually map onto something you could have read about during Mao's Cultural Revolution is a little bit troubling.", "Leopold Aschenbrenner 03:30:06", "Sentient AI is a whole other topic. I don't know if we want to talk about it. I agree that it's going to be very important how we treat them. In terms of what you're actually programming these systems to do, again alignment is just a technical solution. It enables the CCP bots", "Talking about checks and balances, the model is sort of like the Federal Reserve or Supreme Court justices. There's a funny way in which they're kind of this very dedicated order. It's amazing. They're actually quite high quality. They're really smart people who truly believe in and love the Constitution. They believe in their principles.", "They have different persuasions, but they have very sincere debates about what is the meaning of the Constitution and what is the best actuation of these principles. By the way, I recommend SCOTUS oral arguments as the best podcast when I run out of high quality content on the Internet.", "There's going to be a process of figuring out what the Constitution should be. This Constitution has worked for a long time. You start with that. Maybe eventually things change enough that you want edits to that. For example, on the checks and balances, they really love the Constitution. They believe in it and and they take it seriously.", "At some point you are going to have AI police and AI military it’ll be important to ensure that they believe in the Constitution the way that a Supreme Court justice does or the way that a Federal Reserve official takes their job really seriously.", "The other important thing is that a bunch of different factions need their own AIs. It's really important that each political party gets to have their own superintelligence. You might totally disagree with their values, but it's important that they get to have their own kind of superintelligence. It’s important that these classical liberal processes play out, including different people of different persuasions and so on. The AI advisors might not make them wise. They might not follow the advice or whatever, but it's important.", "Dwarkesh Patel 03:32:08", "You seem pretty optimistic about alignment. Let’s get to the source of the optimism. You laid out different worlds in which we could get AI. There's one that you think has a low probability of happening next year, where GPT-5 plus scaffolding plus unhobblings gets you to AGI. There are also scenarios where it takes much longer.", "GPT-4 seems pretty aligned in the sense that I don't expect it to go off the rails. Maybe with scaffolding, things might change. It looks pretty good, and maybe you will keep turning the cranks, and one of them gets you to ASI.", "Is there any point at which the sharp left turn happens? Do you think it's plausible that when they start acting more like agents, this is something to worry about? Is there anything qualitative that you expect to change with regards to the alignment perspective?", "Leopold Aschenbrenner 03:33:07", "I don't know if I believe in this concept of a sharp left turn, but there are important qualitative changes that happen between now and somewhat superhuman systems early on in the intelligence explosion. There are also important qualitative changes that occur from early in the intelligence explosion to true superintelligence in all its power and might.", "Let's talk about both of those. The first part of the problem is one we're going to have to solve ourselves. We have to align the initial AI and the intelligence explosion, the sort of automated Alec Radford. There are two important things that change from GPT-4. If you believe the story on synthetic data RL, self-play, to get past the data wall, and if you believe this unhobbling story, at the end you're going to have things that are agents. They’ll do long-term planning. They have long horizons, which is a prerequisite to being able to do automated AI research.", "Pre-training is alignment-neutral in the sense that it has good representations  and representations of doing bad things, but it's not scheming against you. Misalignment can arise once you're doing more long-horizon training. For example, if you're training an AI to make money using reinforcement learning, it might learn to commit fraud, lie, deceive, or seek power simply because those are successful strategies in the real world. With RL, it explores, maybe it tries to hack and then it gets some money. If that’s successful, that gets reward and that’s just reinforced. There’s more serious misalignments, like misaligned long-term goals, that necessarily have to be able to arise if you’re able to get long-horizon systems.", "Let’s swap. What you want to do in that situation is add side constraints, like \"don't lie,\" \"don't deceive,\" or \"don't commit fraud.\" How do you add those side constraints? The basic idea you might have is RLHF. You have this goal of making money, but you're watching what it's doing. If it starts trying to lie, deceive, commit fraud, or break the law, you give it a thumbs down and anti-reinforce that behavior.", "The critical issue that arises is that these AI systems are becoming superhuman and will be able to do things that are too complex for humans to evaluate. Even early on in the intelligence explosion, the automated AI researchers and engineers might write millions, billions, or trillions of lines of complicated code. You won't understand what they're doing anymore. In those millions of lines of code, you don't know if it's hacking, exfiltrating itself, or trying to go for the nukes.", "You don’t know anymore. Thumbs up, thumbs down pure RLHF doesn't fully work anymore in this scenario. There’s a hard technical problem of what do you do post-RLHF but it’s a solvable problem. There’s various things I’m bullish on. There’s ways in which deep learning has shaped out favorably.", "The second part of the picture is going from your initial systems in the intelligence explosion to superintelligence, many OOMs of improvement. By the end of it, you have a thing that's vastly smarter than humans. The intelligence explosion is really scary from an alignment point of view. If you have this rapid intelligence explosion in less than a year or two, you're going from systems where failure would be bad but not catastrophic to a world where if something goes awry, the AI could exfiltrate itself, start hacking the military, and do really bad things.", "In less than a year, you're going from a world where the AI is some descendant of current systems that you understand and has good properties. It becomes something that potentially has a very alien and different architecture after having gone through another decade of ML advances. One salient example is legible and faithful chain-of-thought. A lot of the time when we're talking about these things, we're talking about how it has tokens of thinking and then uses many tokens of thinking. Maybe we bootstrap ourselves by pre-training it to learn to think in English, and then we do something else on top so it can do longer chains of thought.", "It's very plausible to me that for the initial automated alignment researchers, we don't need to do any complicated mechanistic interpretability . You can just read what they're thinking, which is a huge advantage. However, it's very likely not the most efficient way to do it. There's probably some way to have a recurrent architecture with all internal states. That's a much more efficient way to do it.", "That's what you get by the end of the year. You're going in this year from RLHF++ to something that's vastly superhuman. To us, it might be like an expert in the field compared to an elementary or middle school student. It’s an incredibly hairy period for alignment. The thing you do have is the automated AI researchers. You can use them to also do alignment.", "Dwarkesh Patel 03:38:18", "In this world, why are we optimistic that the project is being run by the right people? Here's something to think about. OpenAI starts off with people who are very explicitly thinking about exactly these kinds of things.", "Leopold Aschenbrenner 03:38:38", "Yes but are they still there?", "Dwarkesh Patel 03:31:48", "No, but here’s the thing. Even with the current leadership, you can find them in interviews and blog posts talking about it. You talked about what happens and it’s not just you. Jan talked about it in his Tweet thread . There is some trade-off that has to be made with doing a flashy release this week and not next week because Google I/O is next week or whatever. The trade-off is made in favor of the more careless decision.", "The government, the national security advisor or the military or whatever, is much less familiar with this kind of discourse. They’re not like, “I'm worried the chain-of-thought is unfaithful. How do we think about the features that are represented here?” Why should we be optimistic that a project run by people like that will be thoughtful about these kinds of considerations?", "Leopold Aschenbrenner 03:39:42", "They might not be. Here’s a few thoughts. First of all, the private world is extremely tough for alignment even if they nominally care. There’s a couple of reasons. You have the race between the commercial labs. You don't have any headroom there to be like, aAh, actually we're going to hold back for three months, get this right. We're going to dedicate 90% of our compute to automated alignment research instead of just pushing the next OOM.\"", "The other thing is that in the private world, China has stolen your weights. China has your secrets. They're right on your tails. You're in this fever struggle. There’s no room at all for maneuver. It's absolutely essential to get alignment right. To get it during this intelligence explosion, to get it right, you need to have that room to maneuver and you need to have that clear lead. Again, maybe you've made the deal or whatever, but you're in an incredibly tough spot if you don't have this clear lead.", "So the private world is kind of rough there. Whether people will take it seriously… I have some faith normal mechanisms of a liberal society. Wwe don't fully know yet if alignment is an issue. The science will develop.  We're going to get better measurements of alignment. The case will be clear and obvious.", "I worry that there's worlds where evidence is ambiguous. A lot of the most scary intelligence explosion scenarios are worlds in which evidence is ambiguous. But again, if evidence is ambiguous, then those are the worlds in which you really want the safety margins. Those are also the worlds in which running the intelligence explosion is sort of like running a war. The evidence is ambiguous. You have to make these really tough trade-offs. You better have a really good chain of command for that where they’re not just YOLOing it.", "(03:41:26) – On Germany, and understanding foreign perspectives", "Dwarkesh Patel 03:41:26", "Let’s talk a little about Germany. We’re making the analogy to World War II. You made this. really interesting point many hours ago. Throughout history, World War Two is not unique at least when you think in proportion to the size of the population. Let’s look at these other sorts of catastrophes where a substantial proportion of the population has been killed off.", "After that, the nation recovers and they get back to their heights. What's interesting after World War Two is that Germany especially, and maybe Europe as a whole, they experienced vast economic growth in the immediate aftermath because of catch-up growth.", "We're not talking about Germany potentially launching an intelligence explosion and them getting a seat at the AI table. We were talking about Iran and North Korea and Russia. We didn't talk about Germany.", "Leopold Aschenbrenner 03:42:33", "Because they're allies.", "Dwarkesh Patel 03:42:35", "So what happened? We had World War Two and now it didn't come back to the Seven Years' War or something.", "Leopold Aschenbrenner 03:42:42", "I'm generally very bearish on Germany. In this context, you're underrating a little bit. It's probably still one of the top five most important countries in the world. Europe overall still has a GDP that's close to the United States in size. There are things that Germany is actually kind of good at, like state capacity. The roads are good and they're clean and they're well-maintained.", "In some sense, a lot of this is the flip side of things that are bad about Germany. In the US, there's a bit more of a Wild West feeling. It includes the kind of crazy bursts of creativity. It includes political candidates. There's a much broader spectrum. Both Obama and Trump are politicians you just wouldn't see in the much more confined kind of German political debate. I wrote a blog post at some point about this, “ Europe’s Political Stupor .”", "There's this punctilious sort of rule-following that is good in terms of keeping your state capacity functioning. But that is also a very constrained view of the world in some sense. After World War Two, there's a real backlash against anything elite. There are no elite high schools or elite colleges. Excellence isn't cherished.", "Dwarkesh Patel 03:44:11", "Why is that the logical, intellectual thing to rebel against if you're trying to overcorrect from the Nazis? Was it because the Nazis were very much into elitism? I don't understand why that's a logical sort of reaction.", "Leopold Aschenbrenner 03:44:26", "Maybe it was a counter reaction against the whole Aryan race and that sort of thing. Look at the end of World War I versus the end of World War II for Germany. A common narrative is that the Peace of Versailles was too strict on Germany. The peace imposed after World War Two was much more strict.", "The whole country was destroyed. In most of the major cities, over half of the housing stock had been destroyed. In some birth cohorts, something like 40% of the men had died, Almost 20 million people displaced. It was huge and crazy.", "Dwarkesh Patel 03:45:09", "And the borders are way smaller than the Versailles borders.", "Leopold Aschenbrenner 03:45:11", "Yeah, exactly. There’s also a complete imposition of a new political system on both sides. But in some sense, that worked out better than the post-World War I peace where there was this resurgence of German nationalism. In some sense, it's unclear if you want to wake the sleeping beast. At this point, it's gotten a bit too sleepy.", "Dwarkesh Patel 03:45:39", "It's an interesting point about how we underrate the American political system. I've been making the same correction myself. There was this book written by a Chinese economist called China's World View .", "Overall, I wasn't a big fan, but it made a really interesting point, which was the way in which candidates rise up through the Chinese hierarchy for politics and administration. In some sense, it selects so that you're not going to get some Marjorie Taylor Greene or someone like that.", "Leopold Aschenbrenner 03:46:11", "Don't get that in Germany either.", "Dwarkesh Patel 03:46:13", "But he explicitly made the point in the book that it also means they’re never going to get a Henry Kissinger or Barack Obama in China. By the time they end up in charge of the Politburo , they'll be some 60-year-old bureaucrat who's never ruffled any feathers.", "Leopold Aschenbrenner 03:46:28", "There's something really important about the very raucous political debate in the US. In general in America lots of people live in their own world. We live in this kind of bizarre little bubble in San Francisco and people. But that's important for the evolution of ideas, error correction, that sort of thing.", "There are other ways in which the German system is more functional. There are also major mistakes, like with defense spending . Russia invades Ukraine and they’re like, \"wow, what did we do?\"", "Dwarkesh Patel 03:47:06", "That's a really good point. The main issue is that everybody agrees.", "Leopold Aschenbrenner 03:47:09", "Exactly. There was no debate about it. It’s a consensus Blob kind of thing.", "On the China point, I have this experience of reading German newspapers and I would understand the German debate and the state of mind much more poorly without it from just afar. It is interesting just how impenetrable China is to me. It's a billion people.", "Almost everything else is really globalized. You have a globalized Internet. I kind of have a sense what's happening in the UK. Even if I didn't read German newspapers, I would have a sense of what's happening in Germany. But I really don't feel like I have a sense of what is the state of mind, what is the state of political debate, of an average Chinese person or an average Chinese elite.", "I find that distance kind of worrying. There are some people who do this and they do really great work where they go through the party documents and the party speeches. It seems to require a lot of interpretive ability. There are very specific words in Mandarin that mean one connotation, not the other connotation. It's interesting given how globalized everything is. Now we have basically perfect translation machines and it's still so impenetrable.", "Dwarkesh Patel 03:48:22", "That's really interesting. I'm sort of ashamed almost that I haven't done this yet. Many months ago when Alexey interviewed me on his YouTube channel, I said, \"I'm meaning to go to China to actually see for myself what's going on.\" By the way, if anybody listening has a lot of context on China and if I went to China, could introduce me to people, please email me.", "Leopold Aschenbrenner 03:48:44", "You have to do some pods and find some of the Chinese AI researchers. It’d be so good. I don't know if they can speak freely.", "Dwarkesh Patel 03:48:54", "So they have these papers and on the paper they'll say who's a co-author. I was thinking of just cold emailing everybody, like, \"Here's my Calendly. Let's just talk.” I just want to see what the vibe is. Even if they don't tell me anything, I'm just like, “what kind of person is this? How westernized are they?\"", "I just remembered that, in fact, ByteDance, according to mutual friends we have at Google, cold emailed every single person on the Gemini paper and said, \"if you come work for ByteDance, we'll make you a L8 engineer. You'll report directly to the CTO.\"", "Leopold Aschenbrenner 03:49:32", "That's how the secrets go over.", "Dwarkesh Patel 03:49:34", "I meant to ask this earlier. If there's only 100 or so people, maybe less, who are working on the key algorithmic secrets. If they hired one such person, is all the alpha that these labs have gone?", "Leopold Aschenbrenner 03:49:55", "If this person was intentional about it, they could get a lot. Actually, they could probably just exfiltrate the code. They could get a lot of the key ideas. Again up until recently, stuff was published but they could get a lot of the key ideas if they tried. There are a lot of people who don't actually look around to see what the other teams are doing. But you kind of can. They could. It's scary.", "Dwarkesh Patel 03:50:08", "The project makes more sense there, where you can't just recruit a Manhattan Project engineer.", "Leopold Aschenbrenner 03:50:15", "These are secrets that can be used for probably every training run in the future. Maybe they’re the key to the data wall without which they can’t go on. They're going to give multipliers on compute worth hundreds of billions, trillions of dollars. All it takes is China to offer $100 million to somebody and say “come work for us.” I'm really uncertain on how seriously China is taking AGI right now.", "One anecdote was related to me by another researcher in the field. They were at a conference with somebody, a Chinese AI researcher. He was talking to him and he was like, \"I think it's really good that you're here. We have to have international coordination and stuff.\" Apparently this guy said that \"I'm the most senior person that they're going to let leave the country to come to things like this.\"", "Dwarkesh Patel 03:51:07", "What's the takeaway?", "Leopold Aschenbrenner 03:51:09", "They're not letting really senior AI researchers leave the country. It’s a kind of classic Eastern Bloc move.", "I don't know if this is true, but it’s what I heard.", "Dwarkesh Patel 03:51:19", "Let’s go back to the point you made earlier about being exposed to German newspapers. Earlier you mentioned you were interested in economics and law and national security. The variety in intellectual diet has exposed you to thinking about the geopolitical question here in ways that that others talking about AI aren’t.", "This is the first episode I've done about this where we've talked about things like this. Now that I think about it, that’s weird given that this is an obvious thing in retrospect. I should have been thinking about it. That's one thing we've been missing.", "What are you missing, not just in national security? What perspective are you probably underexposed to as a result? I guess you mentioned China as one.", "Leopold Aschenbrenner 03:52:00", "The China one is an important one. Another one would be a sort of very Tyler Cowen-esque take. You're not exposed to how a normal person in America will use AI. That kind of thing will be a bottleneck to the diffusion of these things. I'm overrating the revenue because I assume everyone is adopting it. But Joe Schmo engineer at a company, will they be able to integrate it? Also what’s the reaction to it? This was a question hours ago. Won’t people rebel against this? Will they not want to do the project? I don't know. Maybe they will.", "Dwarkesh Patel 03:52:41", "Here's a political reaction that I didn't anticipate. I already told you about this, but I'm just going to tell the story again. Tucker Carlson was recently on a Joe Rogan episode . They start talking about World War II.", "Tucker says, \"well, listen, I'm going to say something that my fellow conservatives won't like, but I think nuclear weapons are immoral. I think it was obviously immoral that we use them on Nagasaki and Hiroshima.\"", "Then he says, \"In fact, nuclear weapons are always immoral, except when we would use them on data centers. In fact, it would be immoral not to use them on data centers, because, look, these people in Silicon Valley, these fucking nerds, are making superintelligence, and they say that it could enslave humanity. We made machines to serve humanity, not to enslave humanity. And they're just going on and making these machines. And so we should, of course, be nuking the data centers.\" That is definitely not a political reaction in 2024 I was expecting. It's going to be crazy.", "Dwarkesh Patel 03:53:50", "The thing we learned with COVID is also that the left-right reactions that you’d anticipate just based on hunches—", "Leopold Aschenbrenner 03:53:58", "It completely flipped multiple times. Initially the right was on it and the left was like, \"this is racist.\" Then it flipped. The left was really into the lockdowns. The whole thing also is just so blunt and crude.", "Probably in general, people like to make sort of complicated technocratic AI policy proposals. If things go kind of fairly rapidly on the path to AGI, there might not actually be that much space for complicated, clever proposals. It might just be much cruder reactions.", "Dwarkesh Patel 03:54:33", "You mentioned spies and national security getting involved and everything. You can talk about that in the abstract, but now that we're living in San Francisco we know many of the people who are doing the top AI research. It’s also a little scary to think about people I personally know and am friends with. It's not unfeasible if they have secrets in their head that are worth $100 billion or something that you might see kidnapping, assassination, sabotage.", "Leopold Aschenbrenner 03:55:01", "Oh, their family. It's really bad. To the point on security, right now it’s really foreign. At some point, as it becomes really serious, you're going to want the security guards.", "Dwarkesh Patel 03:55:16", "Presumably, you have thought about the fact that people in China will be listening to this and reading your series.", "Somehow you made the trade-off that it's better to let the whole world know, including China, and wake them up to AGI than to stay silent. Part of the thing you're worried about is China waking up to AGI. I’m curious about that. Walk me through how you've thought about that trade-off.", "Leopold Aschenbrenner 03:55:44", "This is a tough trade-off. I thought about this a bunch. People in the PRC will read this.", "To some extent the cat is out of the bag. AGI is a thing people are thinking about very seriously. That’s not new anymore. A lot of these takes are kind of old or I had similar views a year ago. I might not have written it up a year ago, in part because I didn’t think the cat wasn't out of the bag enough then.", "To be able to manage this challenge, much broader swaths of society will need to wake up. If we're going to get the project, we actually need a broad bipartisan understanding of the challenges facing us. It's a tough trade-off. The need to wake up people in the United States, in the Western world, in the democratic coalition, is ultimately imperative. My hope is more people here will read it than in the PRC.", "People sometimes underrate the importance of just kind of writing it up and laying out the strategic picture. You have done actually a great service to mankind in some sense with your podcast. It's overall been good.", "(03:57:04) – Dwarkesh’s immigration story and path to the podcast", "Leopold Aschenbrenner 03:57:04", "By the way, on the topic of Germany. We were talking at some point about immigration stories. You have an interesting story you haven't told, butI think you should tell it", "Dwarkesh Patel 03:57:17", "So a couple years ago, I was in college and I was 20. I was about to turn 21.", "Leopold Aschenbrenner 03:57:25", "You came from India when you were really young, right?", "Dwarkesh Patel 03:57:28", "Until I was eight or nine, I lived in India. Then we moved around all over the place. Because of the backlog for Indians we’d been in the queue for decades.", "Leopold Aschenbrenner 03:57:44", "Even though you came at eight, you're still on the H-1B .", "Dwarkesh Patel 03:57:47", "When you're 21 you get kicked off the queue and you have to restart the process. My dad's a doctor and I'm on his H-1B as a dependent. But when you're 21, you get kicked off. So I'm 20 and it just kind of dawns on me that this is my situation.", "Leopold Aschenbrenner 03:58:00", "You’re completely screwed.", "Dwarkesh Patel 03:58:01", "I also had the experience with my dad. We moved all around the country. They have to prove, him being a doctor, that you can't get native talent.", "Leopold Aschenbrenner 03:58:11", "And you can’t start a startup or anything. Even getting the H-1B for you would have been a 20% lottery, if you're lucky.", "Dwarkesh Patel 03:58:19", "Plus they had to prove that they can't get native talent, which meant that we lived in North Dakota for three years, West Virginia for three years, Maryland, West Texas.", "So it dawned on me that this is my situation as I turn 21. I'll be on this lottery. Even if I get the lottery, I'll be a fucking code monkey for the rest of my life, because this thing isn't going to let up.", "Leopold Aschenbrenner 03:58:37", "Yeah. Can't do a startup.", "Dwarkesh Patel 03:58:38", "Exactly. At the same time, I had been reading for the last year and was super obsessed with Paul Graham essays . My plan at the time was to make a startup or something. I was super excited about that.", "It just occurred to me that I couldn't do this. That just wasn’t in the cards for me. I was kind of depressed about it. I remember I was in a daze through finals because it had just occurred to me. I was really anxious about it.", "I remember thinking to myself at the time that if somehow I ended up getting my green card before I turned 21, there's no fucking way I'm becoming a code monkey. The feeling of dread that I have is this realization that I'm just going to have to be a code monkey. I realized that's my default path. If I hadn't made a proactive effort not to do that, I would have graduated college as a computer science student. I would have just done that. That's the thing I was super scared about. That was an important realization for me.", "Anyway, COVID happened. Because of that, since there weren't any foreigners coming, the backlog got fast-tracked and by the skin of my teeth, like a few months before I turned 21, I ended up getting a green card for crazy, extremely contingent reasons.", "Because I got a green card, I could—", "Leopold Aschenbrenner 03:59:56", "The whole podcast.", "Dwarkesh Patel 03:59:57", "I graduated college and I was bumming around. I graduated a semester early. I'm going to do this podcast and see what happens? If I didn’t have a green card, I mean the best case scenario—", "Leopold Aschenbrenner 04:00:08", "It’s such a cultural artifact. What is the impact of immigration reform? What is the impact of clearing 50,000 green cards in the backlog? You're such an amazing example how all of this is only possible contingent on that. It's just incredibly tragic that this is so dysfunctional.", "Dwarkesh Patel 04:00:30", "It's insane.", "Leopold Aschenbrenner 04:00:34", "I'm glad you did it. I'm glad you kind of tried the unusual path.", "Dwarkesh Patel 04:00:39", "I could only do it because I was extremely fortunate to get the green card. I had a little bit of saved up money. I got a small grant out of college, thanks to the Future Fund, to do this for like six months. It turned out really well. At each time, I was like, \"oh, okay, podcast. Come on. I wasted a few months on this. Let's now go do something real.\" Something big would happen at each moment.", "Leopold Aschenbrenner 04:01:09", "You kept with it.", "Dwarkesh Patel 04:01:11", "There would always be something the moment I'm about to quit the podcast. Jeff Bezos would say something nice about me on Twitter. The Ilya episode gets like half a million views. Now this is my career. Looking back on it though, it was incredibly contingent that things worked out the right way.", "Leopold Aschenbrenner 04:01:27", "If the AGI stuff goes down, it'll be how most of the people who kind of end up feeling AGI first heard about it.", "Dwarkesh Patel 04:01:41", "You're also very linked with the story in many ways. I got like a $20,000 grant from Future Fund right out of college and that sustained me for six months or something. Without that…", "Leopold Aschenbrenner 04:01:57", "Tiny grant. It was kind of crazy. It goes to show how far small grants can go. Emergent Ventures, too.", "Dwarkesh Patel 04:11:32", "Exactly. Emergent Ventures. The last year I've been in San Francisco, we've just been in close contact the entire time and just bouncing ideas back and forth. People would be surprised by how much of the alpha I have I got from you, Sholto, Trenton and a couple others.", "Leopold Aschenbrenner 04:02:26", "It’s been an absolute pleasure.", "Dwarkesh Patel 04:02:27", "Likewise, it's been super fun. Here are some random questions for you. If you could convert to Mormonism and you could really believe it, would you do it? Would you push the button?", "Leopold Aschenbrenner 04:02:40", "Before I answer that question, one observation about the Mormons. There's an article that actually made a big impact on me. It was about the Mormons, by McKay Coppins in The Atlantic . He even interviewed Mitt Romney in it.", "The thing he talked about was how the experience of growing up different, growing up very unusual, especially if you grew up Mormon outside of Utah. You’re the only person who doesn't drink caffeine, you don't drink alcohol, you're kind of weird. That got people prepared for being willing to be outside of the norm later on.", "Mitt Romney was willing to take stands alone in his party because he believed what he believed was true. Probably not in the same way, but I feel a little bit like this from having grown up in Germany, having been kind of an outsider or something.", "Growing up as an outsider gives you unusual strength later on to be willing to say what you think. So that is one thing I really appreciate about the Mormons, at least the ones that grow up outside of Utah.", "The other thing is the fertility rates. They're good. They're important. They're going down as well. This is the thing that really clinched the fertility decline story for me. Even the Mormons.", "Dwarkesh Patel 04:03:57", "You're like, \"oh, this is like a good start. Mormons will replace everybody.\"", "Leopold Aschenbrenner 04:04:00", "I don't know if it's good, but at least some people will maintain high fertility rates. But no, even the Mormons... Once these religious subgroups that have high fertility rates grow big enough, they become too close in contact with normal society and become normalized. Their fertility rates drop from maybe like four to two in the course of 10-20 years.", "People point to the Amish or whatever, but it's probably just not scalable. If you grow big enough, then there's just this overwhelming force of modernity that gets you.", "No, if I could convert to Mormonism... Look, I think there's something... I don't believe it, right? If I believed it, I obviously would convert to Mormonism, because you got to convert.", "Dwarkesh Patel 04:04:41", "But you can choose a world in which you do believe it.", "Leopold Aschenbrenner 04:04:46", "There's something really valuable in believing in something greater than yourself and having a certain amount of faith.", "Dwarkesh Patel 04:04:55", "You do, right? That's what your series is.", "Leopold Aschenbrenner 04:04:59", "It’s valuable to feel some sort of duty to something greater than yourself. Maybe my version of this is somewhat different. I feel some sort of duty to the historical weight on how this might play out. I feel some sort of duty to make that go well. I feel some sort of duty to our country, to the national security of the United States. We can be a force for a lot of good.", "Dwarkesh Patel 04:05:28", "Going back to OpenAI, there’s something that's especially impressive about that is. There are people at the company who have — through years and decades of building up savings from working in tech — probably tens of millions of dollars liquid and more than that in terms of their equity. Many people were concerned about the clusters and the Middle East and the secrets leaking to China and all these things.", "The person who actually made a hassle about it — hassling people is so underrated — is the 22-year-old who has less than a year at the company, who doesn't have savings built up, who isn't a solidified member of the company.", "Leopold Aschenbrenner 04:06:24", "Maybe it's me being naive and not knowing how big companies work. Sometimes I'm a bit of a speech deontologist . I kind of believe in saying what you think. Sometimes friends tell me I should be more of a speech consequentialist .", "Dwarkesh Patel 04:06:39", "I mean I think about the amount of people who, when they have the opportunity to talk to the person, will just bring up the thing. I've been with you in multiple contexts. I guess I shouldn't reveal who the person is or what the context was.", "I've just been very impressed that the dinner begins and by the end, somebody who has a major voice in how things go is seriously thinking about a worldview they would have found incredibly alien before the dinner. I've been impressed that you just give them the spiel and hassle them.", "Leopold Aschenbrenner 04:07:16", "I just feel this stuff pretty viscerally now. There was a time when I thought about this stuff a lot, but it was kind of like econ models and these theoretical abstractions. You talk about human brain size or whatever.", "Since at least last year, I feel like I can see it. I feel it. I can sort of see the cluster that AGI can be trained on. I can see the kind of rough combination of algorithms and the people that will be involved and how this is going to play out. Look, we'll see how it plays out. There are many ways this could be wrong. There are many ways it could go, but this could get very real.", "(04:07:58) – Launching an AGI hedge fund", "Dwarkesh Patel 04:07:58", "Should we talk about what you're up to next?", "Leopold Aschenbrenner 04:07:59", "Sure, yeah.", "Dwarkesh Patel 04:08:01", "You're starting an investment firm with anchor investments from Nat Friedman, Daniel Gross, Patrick Collison, John Collison. First of all, why is this the thing to do if you believe AGI is coming in a few years? Why the investment firm?", "Leopold Aschenbrenner 04:08:18", "Good question. Fair question. A couple of things. We talked about this earlier, but the screen doesn't go blank when AGI intelligence happens. People really underrate the decade after you have the intelligence explosion. That's maybe the most wild period. The decade after is also going to be wild.", "This combination of human institutions with superintelligence and crazy geopolitical things going on. You have this broadening of this explosive growth. Basically, it's going to be a really important period. Capital will really matter. Eventually we're going to go to the stars, going to go to the galaxies.", "Part of the answer is just that done right, there's a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that because of the sequencing and capital matters.", "The other reason is just some amount of freedom and independence. There are some people who are very smart about this AI stuff and who see it coming. Almost all of them are constrained in various ways. They're in the labs, they're in some other position where they can't really talk about this stuff.", "I've really admired the thing you've done. It's really important that there are voices of reason on this stuff publicly or people who are in positions to kind of advise important actors and so on.", "Basically, this investment firm will be kind of like a brain trust on AI. It's going to be all about situational awareness. We're going to have the best situational awareness in the business. We're going to have way more situational awareness than any of the people who manage money in New York. We're definitely going to do great on investing, but it's the same sort of situational awareness that is going to be important for understanding what's happening, being a voice of reason publicly, and being able to be in a position to advise.", "Dwarkesh Patel 04:10:09", "The book about Peter Thiel, they had an interesting quote about his hedge fund. It got terrible returns. So this isn't the example...", "Leopold Aschenbrenner 04:10:17", "It blew up. That’s sort of the bear case. It’s too theoretical.", "Dwarkesh Patel 04:10:22", "They had an interesting quote that it's basically a think tank inside of a hedge fund.", "Leopold Aschenbrenner 04:10:28", "That’s what I’m going to try to build.", "Dwarkesh Patel 04:10:30", "Presumably you've thought about the ways in which these kinds of things can blow up. There's a lot of interesting business history books about people who got the thesis right but timed it wrong. They buy into the idea that the Internet's going to be a big deal. They sell at the wrong time and buy at the wrong time during the dot-com boom . They miss out on the gains even though they're right about the. What is the trick to preventing that kind of thing?", "Leopold Aschenbrenner 04:10:58", "Obviously, not blowing up is task number one and two. This investment firm is going to just be betting on AGI. We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that's wrong, the firm is not going to do that well.", "The thing you have to be resistant to is you have to be able to resist one or a couple or a few individual calls. AI stagnates for a year because of the data wall, or you got the call wrong on when revenue would go up. That's pretty critical. You have to get the timing right. The sequence of bets on the way to AGI is actually pretty critical. People underrate it.", "Where does the story start? Obviously, the only bet over the last year was Nvidia. It's obvious now, very few people did it. This is also a classic debate I and a friend had with another colleague of ours. This colleague was really into TSMC . He was just kind of like, \"well, these fabs are going to be so valuable. With Nvidia, there's just a lot of idiosyncratic risk, right? Maybe somebody else will make better GPUs.\" That was basically right.", "But only Nvidia had the AI beta , because only Nvidia was kind of like large fraction AI. The next few doublings would just meaningfully explode their revenue, whereas TSMC was a couple percent AI. Even though there's going to be a few doublings of AI, it was not going to make that big of an impact. The only place to find the AI beta, basically was Nvidia for a while.", "Now it's broadening. Now TSMC is like 20% AI by 2027 or something. That’s what they're saying. When we're doubling, it'll be kind of like a large fraction of what they're doing. There's a whole stack. There's people making memory and coops and power. Utilities companies are starting to get excited about AI. They're like, \"power production in the United States will grow not 2.5%, but 5% over the next five years.\" I'm like, \"no, it'll grow more.”", "At some point, a Google or something becomes interesting. People are excited about them with AI because it's like, \"oh, AI revenue will be $10 billion or tens of billions.\" I don't really care about them before then. I care about it once you get the AI beta. At some point Google will get $100 billion of revenue from AI. Probably their stock will explode. They're going to become a $5 trillion, $10 trillion company anyway.", "The timing there is very important. You have to get the timing right. You have to get the sequence right. At some point, actually, there's going to be real headwind to equities from real interest rates. In these sorts of explosive growth worlds, you would expect real interest rates to go up a lot. On the supply side it’ll be around the demand for money because people are going to be making these crazy investments, initially in clusters and then in the robo factories or whatever. They're going to be borrowing like crazy. They want all this capital, high ROI.", "On the consumer saving side, to give up all this capital, it’ll be the Euler equation , standard intertemporal transfer trade-off of consumption.", "Dwarkesh Patel 04:14:06", "Very standard.", "Leopold Aschenbrenner 04:14:09", "Some of our friends have a paper on this. Basically, if consumers expect real growth rates to be higher, interest rates are going to be higher because they're less willing to give up consumption today for consumption in the future.", "At some point real interest rates will go up. Higher growth rate expectations mean equities go down because the interest rate effect outweighs the growth rate effect.", "At some point there's the big bond short. You got to get that right. You got to get it right on nationalization. There's this whole sequence of things.", "Dwarkesh Patel 04:14:45", "And the unknown unknowns.", "Leopold Aschenbrenner 04:14:46", "Unknown unknowns, yeah. You've got to be really, really careful about your overall risk positioning. If you expect these crazy events to play out, there's going to be crazy things you didn't foresee.", "You do also want to make the bets that are tailored to your scenarios in the sense of you want to find bets that are bets on the tails. I don't think anyone is expecting interest rates to go above 10%, real interest rates. There's at least a serious chance of that before the decade is out. Maybe there's some cheap insurance you can buy on that.", "Dwarkesh Patel 04:15:18", "Very silly question. In these worlds, are financial markets where you make these kinds of bets going to be respected? Is my Fidelity account going to mean anything when we have 50% economic growth? Who’s like, “we have to respect his property rights”?", "Leopold Aschenbrenner 04:15:34", "That’s pretty deep into it, the bond short, the 50% growth. That's pretty deep into it. Again, there's this whole sequence of things. I think property rights will be respected. At some point, there's going to be figuring out the property rights for the galaxies. That'll be interesting.", "Dwarkesh Patel 04:15:53", "That will be interesting. Going back to your strategy about how important the 2030s will be for how the rest of the future goes, you want to be in a position of influence by that point because of capital.", "As far as I know, there's probably a whole bunch of literature on this, I'm just riffing. The landed gentry before the beginning of the Industrial Revolution, I'm not sure if they were able to leverage their position in a sort of Georgist or Piketty -type sense, in order to accrue the returns that were realized through the Industrial Revolution. I don't know what happened. At some point, they just weren't the landed gentry.", "I'd be concerned that even if you make great investment calls, you'll be like the guy who owned a lot of farmland before the Industrial Revolution. The guy who's actually going to make a bunch of money is the one with the steam engine. Even he doesn't make that much money because most of the benefits are widely diffused and so forth.", "Leopold Aschenbrenner 04:16:59", "The analog is you sell your land and you put it all in the people who are building the new industries. The real depreciating asset for me is human capital. I was valedictorian of Columbia. The thing that made you special is you're smart. In four years, it might not matter because it's automatable.", "A friend joked that the investment firm is perfectly hedged for me. Either AGI happens this decade and my human capital depreciates, but I turn it into financial capital, or no AGI happens and the firm doesn’t do well, but I’m still in my twenties and smart.", "Dwarkesh Patel 04:17:44 Excellent. What’s your story for why AGI hasn’t been priced in? Financial markets are supposed to be very efficient, so it’s hard to get an edge. Naively, you might say, “I’ve looked at these scaling curves, and they imply we’ll be buying much more compute and energy than analysts realize.” Shouldn’t those analysts be broke by now? What’s going on?", "Leopold Aschenbrenner 04:18:10 I used to believe in the EMH guy as an economist. But now, I think there are groups of smart people, like those in San Francisco, who have alpha over the rest of society in seeing the future.", "It’s like with COVID. A similar group of people saw it coming and called it completely corrected. They shorted the market and did really well. Why isn’t AGI priced in? It’s like asking why the government hasn’t nationalized the labs yet. Society hasn’t priced it in yet. It hasn’t completely diffused. I might be wrong but not many people take these ideas seriously.", "(04:19:14) – Lessons from WWII", "Dwarkesh Patel 04:19:14 There are a couple of other ideas I was playing around with that we haven’t gotten to talk about yet. One’s systems competition. One of my favorite books about World War II is Victor Davis Hanson’s summary of everything . He explains why the Allies made better decisions than the Axis.", "Leopold Aschenbrenner 04:19:39 Why did they?", "Dwarkesh Patel 04:19:40", "There were decisions the Axis made that were pretty good, like blitzkrieg .", "Leopold Aschenbrenner 04:19:45 That was sort of by accident though.", "Dwarkesh Patel 04:19:47 In what sense? That they just had the infrastructure left over?", "Leopold Aschenbrenner 04:19:49 My read of it is that blitzkrieg wasn’t an ingenious strategy. Their hand was forced. This is the very Adam Tooze-ian story of World War II. There’s the concept of a long war versus a short war, which is important. Germany realized that if they were in a long war, including the United States, they would not be able to compete industrially. Their only path to victory was to make it a short war. That worked much more spectacularly than they thought, allowing them to take over France and much of Europe.", "The decision to invade the Soviet Union was related to the western front because they needed resources like oil. Auschwitz was actually a giant chemical plant to produce synthetic oil and other materials. It was the largest industrial project in Nazi Germany. They thought, “we crushed them in World War I, it’ll be easy. We’ll invade, get the resources, and then fight on the western front.” Even during the invasion of the Soviet Union, even though a large number of the deaths happened there, a large fraction of German industrial production—planes, naval forces, and so on—was directed towards the western front and the western allies.", "By the way, this concept of a long war versus a short war is interesting, especially when thinking about the China competition. I worry about the decline of latent American industrial capacity. China builds like 200 times more ships than we do right now .", "Maybe we have superiority in the non-AI world in military materiel and can win a short war or defend Taiwan. If it drags on, China might be better able to mobilize industrial resources in a way we can’t anymore. This is also relevant to AI. If building AGI requires a trillion-dollar cluster instead of a $100 billion cluster, or even if it’s on the $100 billion cluster, it really matters if you can do an order of magnitude more compute for your superintelligence. Maybe right now they’re behind, but they have the raw latent industrial capacity to outbuild us.", "That matters both in the run-up to AGI and afterward. You have the superintelligence on your cluster, and then it’s time to expand the explosive growth. Will we let the robo-factories run wild? Maybe not, but maybe China will. How many drones will we produce? There’s an industrial explosion that I worry about.", "Dwarkesh Patel 04:22:41 You’ve got to be one of the few people in the world who is both concerned about alignment but also wants to ensure we let the robo-factories proceed once we get ASI to beat out China. That’s very interesting.", "Leopold Aschenbrenner 04:22:54 It’s all part of the picture.", "Dwarkesh Patel 04:22:59 Speaking of ASIs and the robot factories and robo armies. One of the interesting things is the question of what you do with industrial-scale intelligence. Obviously, it’s not chatbots. It’s very hard to predict.", "The history of oil is very interesting. In the 1860s, we figured out how to refine oil. A geologist discovered it, and then Standard Oil got started. There was a huge boom, changing American politics. Legislators were bought out by oil interests. Presidents were elected based on divisions about oil and breaking them up.", "All this happened before the car was invented. The light bulb was invented 50 years after oil refining was discovered. Most of Standard Oil’s history is before the car is invented. It was just kerosene lamps just used for lighting.", "Leopold Aschenbrenner 04:24:06 So they thought oil would just no longer be relevant?", "Dwarkesh Patel 04:24:09 Yeah. There was a concern that Standard Oil would go bankrupt when the light bulb was invented. You realize there's an immense amount of compressed energy here. You're going to have billions of gallons of this stuff a year. It’s hard to predict in advance what you can do with that. Later on, it turns out it’s used for transportation and cars.", "With intelligence, maybe one answer is the intelligence explosion. But even after that, you have all these ASIs and enough compute, especially the compute they'll build to run—", "Leopold Aschenbrenner 04:24:48 Hundreds of millions of GPUs will hum.", "Dwarkesh Patel 04:24:50 What are we doing with that? It’s very hard to predict in advance. It’ll be very interesting to figure out what the Jupiter brains will be doing.", "So there’s situational awareness of where things stand now, and we’ve gotten a good dose of that. A lot of what we’re talking about now couldn’t have been predicted many years back. Part of your worldview implies that things will accelerate because of AI.", "Many unpredictable factors will become evident over time, like how people, the political system, and foreign adversaries will react. Situational awareness isn’t just knowing where things stand now, but being in a position to react appropriately to new information and to change your worldview and recommendations accordingly.", "What is the appropriate way to think about situational awareness as a continuous process rather than as a one-time realization?", "Leopold Aschenbrenner 04:25:58 This is great. There’s a sort of mental flexibility and willingness to change your mind that’s really important. This is how a lot of brains have been broken in the AGI debate. The doomers were prescient about AGI a decade ago, but they haven’t updated on the empirical realities of deep learning. Their proposals are naive and unworkable. It doesn’t really make sense.", "Some people come in with a predefined ideology, like e/accs . They like to shitpost about technology but they’re not actually thinking it through. You have stagnationists who think this stuff is just chatbots and not risky or those not considering the immense national security implications.", "There’s a risk of calcification of worldview when you publicly articulate a position and cling to it despite evidence against it. So I want to give a big disclaimer. It’s valuable to paint a concrete and visceral picture. This is currently my best guess on how this decade will go. If it goes anything like this, it’ll be wild. Given the rapid pace of progress, we’re going to keep getting a lot more information and it’s important to keep your head on straight.", "I feel like the most important thing here is that. This relates to some of the stuff we talked about the world being surprisingly small. I used to think important things were being handled by capable people in government and AI labs.", "From personal experience, and seeing how Covid was managed, I realized that not everyone is on it. There’s not somebody else who’s on it and making sure this goes well. What really matters is that good people take these issues as seriously as they deserve, have situational awareness, are willing to change their minds, and face reality head-on. I’m counting on those good people.", "Dwarkesh Patel 04:48:26 All right, that’s a great place to close.", "Leopold Aschenbrenner 04:28:29 Thanks so much Dwarkesh. Absolute joy.", "Dwarkesh Patel 04:28:31 This was excellent.", "(04:29:08) – Coda: Frederick the Great", "Leopold Aschenbrenner 04:28:32 The funny thing is a lot of this German history stuff we’ve talked about isn’t actually what I learned in Germany. It’s stuff I learned after. I would go back to Germany over Christmas or whatever and suddenly understand the street names— Gneisenau , Scharnhorst , all these Prussian military reformers. And you finally understand Sanssouci , and you realize it was for Frederick .", "Frederick the Great is a really interesting figure. He was kind of a gay lover of the arts. He hated speaking German, only wanted to speak French. He played the flute, and composed. He had all the great artists of his day over at Sanssouci. He had a really tough upbringing with a very stern Prussian military father. When Frederick was about 17, he had a male lover. His father imprisoned his son and hanged his lover in front of him.", "Despite this, Frederick the Great became one of the most successful Prussian conquerors. He got Silesia , won the Seven Years' War, and was an amazing military strategist. His brilliance lay in flanking the army, which was revolutionary at the time. They almost lost the Seven Years' War, but the Russian tsar changed and turned out to be a Prussia stan, which allowed Frederick to regroup and succeed. He’s a bizarre but interesting figure in German history.", "" ]
[ "https://bit.ly/4aVllm4", "https://www.forourposterity.com/", "https://openai.com/superalignment/", "https://www.dwarkeshpatel.com/p/patrick-collison", "https://en.wikipedia.org/wiki/John_Collison", "https://en.wikipedia.org/wiki/Daniel_Gross_(entrepreneur)", "https://nat.org/", "https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken", "https://openai.com/index/chatgpt/", "https://finance.yahoo.com/news/nvidia-stock-explodes-after-guidance-for-the-ages-what-wall-street-is-saying-120105455.html", "https://openai.com/index/gpt-4-research/", "https://blogs.nvidia.com/blog/what-is-a-pretrained-ai-model/", "https://www.semianalysis.com/p/gpt-4-architecture-infrastructure", "https://www.nvidia.com/en-us/data-center/a100/", "https://www.nvidia.com/en-us/data-center/h100/", "https://hazelcast.com/glossary/machine-learning-inference/", "https://www.eia.gov/energyexplained/electricity/electricity-in-the-us.php#:~:text=In%202022%2C%20total%20U.S.%20utility,small%2Dscale%20solar%20photovoltaic%20systems.", "https://www.dwarkeshpatel.com/p/mark-zuckerberg", "https://en.wikipedia.org/wiki/Three_Gorges_Dam", "https://www.costar.com/article/1471314418/amazon-pays-650-million-for-nuclear-powered-data-center-in-pennsylvania", "https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer", "https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer", "https://www.cnbc.com/2023/12/08/amd-now-sees-a-400-billion-market-for-ai-chips-why-thats-good-news-for-nvidia.html", "https://en.wikipedia.org/wiki/Overton_window#:~:text=The%20Overton%20window%20is%20an,public%20to%20expand%20the%20window.", "https://copilot.microsoft.com/", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://paulgraham.com/schlep.html", "https://www.dwarkeshpatel.com/p/tyler-cowen-3", "https://www.dwarkeshpatel.com/p/john-schulman", "http://joschu.net/", "https://arxiv.org/abs/2009.03300", "https://openai.com/index/hello-gpt-4o/", "https://www.lesswrong.com/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang", "https://techpolicyinstitute.org/publications/artificial-intelligence/from-tokens-to-context-windows-simplifying-ai-jargon/", "https://arxiv.org/abs/2201.11903", "https://campus.datacamp.com/courses/chatgpt-prompt-engineering-for-developers/advanced-prompt-engineering-strategies?ex=1#:~:text=Few%2Dshot%20prompting%20is%20a,the%20model%20to%20respond%20to.", "https://arxiv.org/pdf/2104.03113", "https://deepmind.google/technologies/alphago/", "https://en.wikipedia.org/wiki/High_availability#%22Nines%22", "https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking", "https://www.datacamp.com/tutorial/loss-function-in-machine-learning", "https://en.wikipedia.org/wiki/G_factor_(psychometrics)", "https://en.wikipedia.org/wiki/Deep_learning", "https://en.wikipedia.org/wiki/GPT-2", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://openai.com/index/instruction-following/", "https://en.wikipedia.org/wiki/Robotics", "https://en.wikipedia.org/wiki/Unsupervised_learning", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/Self-play", "https://www.wsj.com/tech/ai/ai-training-data-synthetic-openai-anthropic-9230f8d8", "https://knowyourmeme.com/memes/wordcel-shape-rotator-mathcel", "https://en.wikipedia.org/wiki/Synthetic_data#:~:text=Synthetic%20data%20is%20information%20that,be%20seen%20as%20synthetic%20data.", "https://www.hopsworks.ai/dictionary/in-context-learning-icl#:~:text=In%2Dcontext%20learning%20(ICL)%20learns%20a%20new%20task%20from,objective%20of%20next%20token%20prediction.", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://arxiv.org/abs/2403.05530", "https://deepai.org/machine-learning-glossary-and-terms/weight-artificial-neural-network", "https://www.dwarkeshpatel.com/p/dario-amodei", "https://en.wikipedia.org/wiki/G-force", "https://learnprompting.org/blog/2024/2/4/gpt_wrappers", "https://en.wikipedia.org/wiki/Xi_Jinping", "https://en.wikipedia.org/wiki/Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Superintelligence", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://en.wikipedia.org/wiki/Machine_learning", "https://en.wikipedia.org/wiki/Gulf_War", "https://www.history.com/news/tanks-abrams-persian-gulf-war", "https://en.wikipedia.org/wiki/Ministry_of_State_Security_(China)", "https://www.eia.gov/todayinenergy/detail.php?id=53959", "https://en.wikipedia.org/wiki/Smartglasses", "https://www.meta.com/smart-glasses/", "https://llama.meta.com/", "https://finance.yahoo.com/news/tsmc-complains-t-enough-skilled-100125351.html", "https://en.wikipedia.org/wiki/Bill_de_Blasio", "https://ny.eater.com/2020/3/11/21175497/coronavirus-nyc-restaurants-safe-dine-out", "https://www.nyc.gov/office-of-the-mayor/news/079-20/mayor-de-blasio-speaker-johnson-queens-chamber-commerce-encourage-new-yorkers-visit", "https://en.wikipedia.org/wiki/Manhattan_Project", "https://en.wikipedia.org/wiki/Military_production_during_World_War_II", "https://en.wikipedia.org/wiki/Seven_Years%27_War", "https://en.wikipedia.org/wiki/Thirty_Years%27_War", "https://en.wikipedia.org/wiki/End_of_history", "https://amzn.to/4e0jhvp", "https://en.wikipedia.org/wiki/Aleksandr_Solzhenitsyn", "https://en.wikipedia.org/wiki/Mikhail_Gorbachev", "https://en.wikipedia.org/wiki/Pluralism", "https://en.wikipedia.org/wiki/East_Germany", "https://en.wikipedia.org/wiki/West_Germany", "https://en.wikipedia.org/wiki/Fall_of_the_Berlin_Wall", "https://en.wikipedia.org/wiki/Bombing_of_Dresden", "https://en.wikipedia.org/wiki/East_German_uprising_of_1953", "https://en.wikipedia.org/wiki/Iron_Curtain", "https://en.wikipedia.org/wiki/Stasi#:~:text=The%20Ministry%20for%20State%20Security,GDR)%20from%201950%20to%201990.", "https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI", "https://letterboxd.com/film/oppenheimer-2023/", "https://www.eia.gov/energyexplained/natural-gas/where-our-natural-gas-comes-from.php", "https://www.eia.gov/energyexplained/electricity/electricity-in-the-us.php#:~:text=In%202022%2C%20total%20U.S.%20utility,small%2Dscale%20solar%20photovoltaic%20systems.", "https://www.bloomberg.com/news/articles/2024-03-11/abu-dhabi-said-to-target-100-billion-aum-for-ai-investment-firm", "https://news.microsoft.com/2024/04/15/microsoft-invests-1-5-billion-in-abu-dhabis-g42-to-accelerate-ai-development-and-global-expansion/", "https://en.wikipedia.org/wiki/Weapon_of_mass_destruction#:~:text=The%20U.S.%20military%20refers%20to,Also%20called%20WMD.", "https://globalprioritiesinstitute.org/leopold-aschenbrenner-existential-risk-and-growth/", "https://en.wikipedia.org/wiki/Paul_Samuelson", "https://marginalrevolution.com/marginalrevolution/2010/01/soviet-growth-american-textbooks.html", "https://en.wikipedia.org/wiki/Marcellus_Formation", "https://www.eia.gov/energyexplained/natural-gas/where-our-natural-gas-comes-from.php", "https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/", "https://sustainability.aboutamazon.com/carbon-methodology.pdf", "https://en.wikipedia.org/wiki/Small_modular_reactor", "https://en.wikipedia.org/wiki/Federal_Energy_Regulatory_Commission", "https://en.wikipedia.org/wiki/National_Environmental_Policy_Act", "https://amzn.to/4bA64Ih", "https://patrickcollison.com/fast", "https://patrickcollison.com/progress", "https://en.wikipedia.org/wiki/William_S._Knudsen", "https://research.library.gsu.edu/c.php?g=115684&p=752252#:~:text=Both%20the%20CIO%20and%20AFL,era%20in%20United%20States%20history.", "https://www.spiegel.de/international/germany/the-bad-news-bundeswehr-an-examination-of-the-truly-dire-state-of-germany-s-military-a-df92eaaf-e3f9-464d-99a3-ef0c27dcc797", "https://www.csis.org/analysis/contextualizing-national-security-concerns-over-chinas-domestically-produced-high-end-chip", "https://en.wikipedia.org/wiki/7_nm_process", "https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0", "https://en.wikipedia.org/wiki/Sam_Altman", "https://en.wikipedia.org/wiki/Atoms_for_Peace", "https://deepmind.google/", "https://deepmind.google/discover/blog/introducing-the-frontier-safety-framework/", "https://www.nytimes.com/2024/03/06/us/politics/google-engineer-china-ai-theft.html", "https://scholar.google.com/citations?user=dOad5HoAAAAJ&hl=en", "https://arxiv.org/abs/2203.15556", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://arxiv.org/abs/1706.03762", "https://en.wikipedia.org/wiki/Heavy_water", "https://amzn.to/4dX8EJR", "https://en.wikipedia.org/wiki/Leo_Szilard", "https://en.wikipedia.org/wiki/German_nuclear_program_during_World_War_II#Moderator_production", "https://en.wikipedia.org/wiki/Enrico_Fermi", "https://en.wikipedia.org/wiki/George_B._Pegram", "https://en.wikipedia.org/wiki/German_nuclear_program_during_World_War_II", "https://en.wikipedia.org/wiki/Lavrentiy_Beria", "https://en.wikipedia.org/wiki/NKVD#:~:text=The%20main%20function%20of%20the,kidnappings%2C%20assassinations%20and%20mass%20deportations.", "https://en.wikipedia.org/wiki/Soviet_atomic_bomb_project", "https://www.sciencefocus.com/science/who-really-invented-the-light-bulb", "https://paperswithcode.com/sota/math-word-problem-solving-on-math", "https://www.usaspending.gov/disaster/covid-19", "https://en.wikipedia.org/wiki/Pegasus_(spyware)", "https://en.wikipedia.org/wiki/Air_gap_(networking)", "https://en.wikipedia.org/wiki/Stuxnet", "https://en.wikipedia.org/wiki/Zero-day_vulnerability", "https://amzn.to/451HTzO", "https://en.wikipedia.org/wiki/Viktor_Suvorov", "https://en.wikipedia.org/wiki/GRU_(Russian_Federation)", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://en.wikipedia.org/wiki/Sensitive_compartmented_information_facility", "https://www.cnbc.com/2024/01/19/microsoft-executive-emails-hacked-by-russian-intelligence-group-company-says.html", "https://www.reuters.com/technology/chinese-hackers-accessed-government-emails-microsoft-says-2023-07-12/", "https://en.wikipedia.org/wiki/Matrioshka_brain", "https://www.cfr.org/timeline/us-russia-nuclear-arms-control", "https://en.wikipedia.org/wiki/Luftwaffe#Origins", "https://www.dwarkeshpatel.com/p/richard-rhodes", "https://en.wikipedia.org/wiki/Eliezer_Yudkowsky", "https://nsarchive2.gwu.edu/NSAEBB/NSAEBB56/", "https://en.wikipedia.org/wiki/Spanish_conquest_of_the_Aztec_Empire", "https://en.wikipedia.org/wiki/Chatham_House_Rule#:~:text=Under%20the%20Chatham%20House%20Rule,to%20increase%20openness%20of%20discussion.", "https://www.libertyjet.com/private_jets/IAI-1126", "https://www.cnn.com/2023/01/09/politics/taiwan-invasion-war-game-intl-hnk-ml/index.html", "https://www.yahoo.com/tech/plans-chinas-invasion-taiwan-could-235501383.html", "https://en.wikipedia.org/wiki/Leslie_Groves", "https://en.wikipedia.org/wiki/Uranium_mining_in_Kazakhstan#:~:text=5%20References-,History,but%20are%20close%20to%20depletion.", "https://en.wikipedia.org/wiki/Superintelligence", "https://en.wikipedia.org/wiki/Boeing_B-29_Superfortress", "https://en.wikipedia.org/wiki/Boeing_B-47_Stratojet", "https://en.wikipedia.org/wiki/Boeing_B-52_Stratofortress", "https://en.wikipedia.org/wiki/Boeing_707", "https://en.wikipedia.org/wiki/Monopoly_on_violence", "https://en.wikipedia.org/wiki/Industrial_Revolution", "https://en.wikipedia.org/wiki/Open-source_artificial_intelligence", "https://mistral.ai/", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://conversationswithtyler.com/episodes/peter-thiel-political-theology/", "https://en.wikipedia.org/wiki/Landfrieden", "https://en.wikipedia.org/wiki/Holy_Roman_Empire", "https://www.dwarkeshpatel.com/p/demis-hassabis", "https://en.wikipedia.org/wiki/Edmund_Burke", "https://en.wikipedia.org/wiki/Quebec_Agreement", "https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons", "https://en.wikipedia.org/wiki/Superior_orders", "https://en.wikipedia.org/wiki/United_States_Armed_Forces_oath_of_enlistment", "https://en.wikipedia.org/wiki/Ethnic_bioweapon", "https://en.wikipedia.org/wiki/Jake_Sullivan", "https://en.wikipedia.org/wiki/Skunk_Works", "https://en.wikipedia.org/wiki/Satya_Nadella", "https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950", "https://en.wikipedia.org/wiki/John_von_Neumann", "https://en.wikipedia.org/wiki/YOLO_(aphorism)", "https://en.wikipedia.org/wiki/Operation_Warp_Speed", "https://en.wikipedia.org/wiki/Thermonuclear_weapon", "https://en.wikipedia.org/wiki/Intercontinental_ballistic_missile", "https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology", "https://www.anthropic.com/news/anthropics-responsible-scaling-policy", "https://en.wikipedia.org/wiki/Motte-and-bailey_castle", "https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950", "https://en.wikipedia.org/wiki/Tall_poppy_syndrome", "https://undergrad.admissions.columbia.edu/academics/college/core", "https://polisci.columbia.edu/content/richard-k-betts", "https://polisci.columbia.edu/content/war-peace-and-strategy", "https://history.columbia.edu/person/adam-tooze/", "https://amzn.to/4dW9BSR", "https://globalprioritiesinstitute.org/leopold-aschenbrenner-existential-risk-and-growth/", "https://en.wikipedia.org/wiki/Log%E2%80%93log_plot", "https://web.stanford.edu/~chadj/", "https://youtu.be/_Qq6dQwLh1s?t=47", "https://letterboxd.com/film/the-avengers-2012/", "https://www.forbes.com/sites/johnhyatt/2022/11/14/sam-bankman-fried-promised-millions-to-nonprofits-research-groups-thats-not-going-too-well-now/?sh=3a4af9f05ee8", "https://en.wikipedia.org/wiki/Sam_Bankman-Fried", "https://apnews.com/article/sam-bankman-fried-ftx-fraud-timeline-be13e3fc0e074e2edd50ba59d1f8960e", "https://jan.leike.name/", "https://fortune.com/2024/05/15/openai-sam-altman-ilya-sutskever-jan-leike/", "https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/", "https://www.theinformation.com/articles/openai-researchers-including-ally-of-sutskever-fired-for-alleged-leaking", "https://www.theinformation.com/articles/openai-researchers-including-ally-of-sutskever-fired-for-alleged-leaking", "https://openai.com/index/introducing-superalignment/", "https://www.dwarkesh.com/p/leopold-aschenbrenner#footnote-1-145136502", "https://www.nytimes.com/interactive/2023/11/20/technology/letter-to-the-open-ai-board.html", "https://karpathy.ai/", "https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees", "https://www.nytimes.com/2023/11/17/technology/openai-sam-altman-ousted.html", "https://www.nytimes.com/2023/11/22/technology/openai-sam-altman-returns.html", "https://en.wikipedia.org/wiki/Bell_Labs", "https://www.wsj.com/tech/ai/the-fight-for-ai-talent-pay-million-dollar-packages-and-buy-whole-teams-c370de2b", "https://www.dwarkeshpatel.com/p/dario-amodei", "https://en.wikipedia.org/wiki/Modus_ponens", "https://en.wikipedia.org/wiki/Modus_tollens", "https://www.factorio.com/", "https://www.jstor.org/stable/2120827", "https://klu.ai/glossary/layer-normalization", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://commoncrawl.org/", "https://llama.meta.com/llama3/", "https://arxiv.org/abs/2305.16264", "https://en.wikipedia.org/wiki/Monorepo", "https://www.dwarkeshpatel.com/p/john-schulman", "https://chat.lmsys.org/?leaderboard", "https://www.anthropic.com/news/claude-3-family", "https://www.anthropic.com/news/claude-3-haiku", "https://en.wikipedia.org/wiki/Wright_brothers", "https://en.wikipedia.org/wiki/Starlink", "https://www.nvidia.com/gtc/", "https://en.wikipedia.org/wiki/Digital_twin", "https://en.wikipedia.org/wiki/Production_function", "https://en.wikipedia.org/wiki/Four_Asian_Tigers", "https://www.dwarkeshpatel.com/p/tyler-cowen-3", "https://platform.openai.com/docs/models", "https://en.wikipedia.org/wiki/Apollo_program", "https://arxiv.org/abs/2001.08361", "https://www.dwarkeshpatel.com/p/dario-amodei", "https://en.wikipedia.org/wiki/Steven_Pinker", "https://amzn.to/3VmtBGH", "https://en.wikipedia.org/wiki/Cultural_Revolution", "https://en.wikipedia.org/wiki/Red_team", "https://en.wikipedia.org/wiki/Federal_Reserve", "https://www.supremecourt.gov/oral_arguments/oral_arguments.aspx", "https://www.neelnanda.io/mechanistic-interpretability/glossary", "https://x.com/janleike/status/1791498174659715494", "https://www.forourposterity.com/europes-political-stupor/", "https://en.wikipedia.org/wiki/Treaty_of_Versailles", "https://amzn.to/4bFMWIQ", "https://en.wikipedia.org/wiki/Marjorie_Taylor_Greene", "https://en.wikipedia.org/wiki/Henry_Kissinger", "https://en.wikipedia.org/wiki/Politburo_Standing_Committee_of_the_Chinese_Communist_Party", "https://www.realcleardefense.com/articles/2023/01/14/why_does_germany_keep_neglecting_its_defense_875723.html", "https://www.fpri.org/article/2023/04/the-beliefs-of-the-blob/", "https://www.strategictranslation.org/", "https://www.youtube.com/watch?v=qlJ2Zpyx31c", "https://en.wikipedia.org/wiki/Eastern_Bloc", "https://en.wikipedia.org/wiki/Tucker_Carlson", "https://www.youtube.com/watch?v=DfTU5LA_kw8", "https://www.forbes.com/sites/stuartanderson/2024/04/14/more-than-1-million-indians-waiting-for-high-skilled-immigrant-visas/?sh=6bbd529d24ce", "https://en.wikipedia.org/wiki/H-1B_visa", "https://www.paulgraham.com/articles.html", "https://x.com/JeffBezos/status/1514689258535145474", "https://www.dwarkeshpatel.com/p/ilya-sutskever", "https://www.theatlantic.com/magazine/archive/2021/01/the-most-american-religion/617263/", "https://plato.stanford.edu/entries/ethics-deontological/", "https://plato.stanford.edu/entries/consequentialism/", "https://amzn.to/4aHqLk7", "https://novelinvestor.com/stan-druckenmillers-worst-mistake-ever/", "https://en.wikipedia.org/wiki/Dot-com_bubble", "https://en.wikipedia.org/wiki/TSMC", "https://en.wikipedia.org/wiki/Beta_(finance)", "https://web.stanford.edu/~chadj/Consumption2009-11-25.pdf", "https://en.wikipedia.org/wiki/Georgism", "https://en.wikipedia.org/wiki/Thomas_Piketty", "https://en.wikipedia.org/wiki/Efficient-market_hypothesis", "https://amzn.to/3VmPrd9", "https://en.wikipedia.org/wiki/Blitzkrieg", "https://www.wsj.com/world/china/chinas-shipyards-are-ready-for-a-protracted-war-americas-arent-d6f004dd", "https://en.wikipedia.org/wiki/Samuel_Kier", "https://en.wikipedia.org/wiki/Effective_accelerationism", "https://en.wikipedia.org/wiki/August_Neidhardt_von_Gneisenau", "https://en.wikipedia.org/wiki/Gerhard_von_Scharnhorst", "https://en.wikipedia.org/wiki/Sanssouci", "https://en.wikipedia.org/wiki/Frederick_the_Great", "https://en.wikipedia.org/wiki/Silesia" ]
https://www.dwarkesh.com/p/marc-andreessen
Marc Andreessen - AI, Crypto, 1000 Elon Musks, Regrets, Vulnerabilities, & Managerial Revolution
[ "Dwarkesh Patel 0:00", "Today, I have the great pleasure of speaking with Marc Andreessen, which means for the first time on the podcast, the guest’s and the host’s playback speed will actually match. Marc, welcome to The Lunar Society.", "Marc Andreessen 00:13", "Good morning. And thank you for having me. It's great to be here.", "Chewing glass", "Dwarkesh Patel 00:17", "My pleasure. Have you been tempted anytime in the last 14 years to start a company? Not a16z , but another company?", "Marc Andreessen 00:24", "No. The short answer is we did. We started our venture firm in 2009 and it's given my partner, Ben and I, a chance to fully exercise our entrepreneurial ambitions and energies to build this firm. We're over 500 people now at the firm which is small for a tech company, but it's big for a venture capital firm. And it has let us get all those urges out.", "Dwarkesh Patel 00:50", "But there's no product where you think — “Oh God, this needs to exist, and I should be the one to make it happen”?", "Marc Andreessen 00:55", "I think of this a lot. We look at this through the lens of — “What would I do if I were 23 again?” And I always have those ideas. But starting a company is a real commitment, it really changes your life. My favorite all time quote on being a startup founder is from Sean Parker, who says —“Starting a company is like chewing glass. Eventually, you start to like the taste of your own blood.” I always get this queasy look on the face of people I’m talking to when I roll that quote out. But it is really intense. Whenever anybody asks me if they should start a company, the answer is always no. Because it's such a gigantic, emotional, irrational thing to do. The implications of that decision are so profound in terms of how you live your life. Look, there are plenty of great ideas, and plenty of interesting things to do but the actual process is so difficult. It gets romanticized a lot and it's not romantic. It's a very difficult thing to do. And I did it multiple times before, so at least for now, I don't revisit that.", "Dwarkesh Patel: 02:04", "But being a venture capitalist is not like that? When you're in the 38th pitch of the day, you're not wondering if chewing glass might not be more comfortable?", "Marc Andreessen 02:10", "No, it's different. I'll tell you how I experienced it. People are wired to respond to stress in different ways. And I think there are people who are wired to be extremely productive and get very happy under extreme levels of stress. I have a different… I'm fine with stress. In fact, I incline towards it and if I don't have any, I seek it out. But past a certain level, I don't really enjoy it. It degrades the quality of my life, not improves it. Maybe you have an affinity for self torture.", "Look, there's stress in every profession and there's certainly stress in being an investor, but it's a completely different kind of stress. Because when you're a startup founder, it's all on you. Everything that happens is on you, everything that goes wrong is on you. When there's an issue in the company, a crisis in the company, it's on you to fix it. You're up at four in the morning all the time worrying about things. With investors, there's just a layer of buffer. We have no end of problems and we help our portfolio companies as best we can with all kinds of issues, like some crisis inside a company. But it's not my company, not everything is my fault. So it’s a more diffused kind of stress, and honestly easier to deal with.", "Dwarkesh Patel 03:32", "Got it, that makes sense. Why did you stop your blog? Would you ever start it again?", "Marc Andreessen 03:37", "I write intermittently. The original blog was from 2007 to 2009. And then we started the firm, and that was like having a new baby and that soaked up all my time. I write intermittently, and then I do social media intermittently. Part of it is — I have a lot to say, and a lot that I'm interested in, but also I like to experiment with the new formats. We do live in a fundamentally different world as a result of social media, the internet, blogging, Twitter, and all the rest of it. So I try to keep my hand in it and experiment. But I rotate both how I spend my time and rotate what I think makes sense.", "AI", "Dwarkesh Patel 04:21", "Before AWS, deploying applications was probably the bottleneck on new software. What is the biggest bottleneck today? At what layer of abstraction do we need new tools?", "Marc Andreessen 04:30", "Literally sitting here today, overwhelmingly it's the impact AI is having on coding. I think there's a real possibility that basically every application category gets upended in the next five years. I think the whole model of how applications get built across every domain might just completely change. In the old model without AI, you typically have some sort of database, you have some sort of front end for the database, you had forms, you had these known user interaction models, mobile apps and so forth. We got to a pretty good shared understanding of how humans and machines communicate, in the windowing era, and then in the mobile era, in the web era.", "AI might just upend all that. The future apps might just be much more of a dialogue between computer and machine. Either a written-text dialogue, or a spoken dialogue or some other form of dialogue. And the human is guiding the machine on what to do, and receiving real time feedback. And there's a loop, and then the machine just does what it does, and it gives you the results. I think we're potentially on the front end of that, that all might change. The very fundamental assumptions about how software gets built might just completely change. The tools on that are at the very front end. There's an entirely new stack that needs to get built to do that. So that's probably the big thing.", "Dwarkesh Patel 05:55", "Is there a reason though that AI is not one of your focus areas? As far as I know, you guys don't have an AI fund dedicated to that technology specifically?", "Marc Andreessen 06:03", "Basically we look at it all as software. We look at it like it is the core business. Software is the core of the firm, we've been public on that for a long time. The core venture fund is the core software fund. And then AI basically is the next turn on software. And so I view it as the opposite of what you said, it is the most integral thing that we're doing. The separate funds get created for the new areas that are structurally different in terms of how industries work. AI is basically the future of software. And so it's the future of the core of the firm.", "Regrets", "Dwarkesh Patel 06:42", "Got it. Now, let's talk a little about your past. So you sold Netscape for $10 billion. But today, Chrome has what, like 2.7 billion users or something. And then Opsware was sold for like $1.7 billion. AWS is gonna probably make close to $100 billion in revenue yearly. In retrospect, do you think if these companies had remained startups, they would have ended up dominating these large markets?", "Marc Andreessen 07:03", "So I spend virtually no time on the past. The one thing I know about the past is I can't change it. So I spend virtually no time revisiting old decisions. People I know who spend a lot of time revisiting old decisions are less effective because they mire themselves in what ifs and counterfactuals. So I really don't spend time on it. I really don't even have theories on it. The big thing I would just say is that reality plays out in really complicated ways.", "Everything on paper is straightforward. Reality is very complicated and messy. The technical way that I think about it is basically every startup is charting a path dependent course through a complex adaptive system. And because of that, if you’ve read about this, people had this obsession a while back with what's called Chaos Theory.", "It's sort of this thing where we're used to thinking about systems as if they're deterministic. So you start at point A, you end up at point B, and you can do that over and over again. You know what happens when you drop an apple out of a tree, or whatever. In the real world of humans and 8 billion people interacting, and trying to start companies that intersect in these markets and do all these complicated things and have all these employees, there's random elements all over the place. There's path dependence as a consequence. You run the same scenario, start with point A, one time you end up point B, one time you end up point Z. There's a million reasons why the branches fork.", "This is my advice to every founder who wants to revisit all decisions. It's not a useful and productive thing to do. The world is too complicated and messy. So you take whatever skills you think you have and you just do something new.", "Managerial capitalism", "Dwarkesh Patel 08:51", "Make sense. Are venture capitalists part of the managerial elite ? Burnham says that “the rise of the finance capitalist is the decisive phase in the managerial revolution.” What would he think about venture capitalists?", "Marc Andreessen 09:04", "I actually think about this a lot. And I know you said everybody can Google it, but I'll just provide this just so this makes sense. James Burnham famously said — there's basically two kinds of capitalism and we call them both capitalism, but they're actually very different in how they operate. There's the old model of capitalism, which is bourgeois capitalism and bourgeois capitalism was the classic model where the owner of the business was a person who, by the way, often put their name on the door. Ford Motor Company, right?", "Dwarkesh Patel 09:31", "Andreessen Horowitz", "Marc Andreessen 09:32", "Andreessen Horowitz, right. And then that person owned the business, often 100% of the business, and then that person ran the business. These are the people that communists hated. This is the bourgeois capitalist — Company owner, builder, CEO, as one person with a direct link between ownership and control. The person who owns it controls it, the person who controls it runs it. It's just a thing. There's a proprietor of the business.", "So that's the old model. And then what he said basically, as of the middle of the 20th century, most of the economy was transitioning, and I think that transition has happened and is basically now complete. Most of the economy transitions to a different mode of operating, a different kind of capitalism called managerial capitalism. In managerial capitalism, you have a separation of ownership and management. Think of a public company, you have one set of owners who are dispersed shareholders, and there's like a million of them for a big company, and who knows where they are, and they're not paying any attention to the company, and they have no ability to run the company. And then you've got a professional managerial class, and they step in and they run the company. What he said is — as a consequence of that the managers end up in control. Even though the managers don't own the company. In a lot of public companies, the managers might own like 1% of the company, but they end up in total control, and then they can do whatever they want.", "And he actually said — Look, it doesn't even matter if you think this is good or bad, it's just inevitable. And it's inevitable because of scale and complexity. And so the modern industrial and post industrial organizations are going to end up being so big and so complex and so technical, that you're going to need this professional managerial class to run them. And it's just an inevitability that this is how it's gonna go. So I really think this is exactly what's played out.", "A consequence of that, that I think is pretty obvious, is that managerial capitalism has a big advantage that Burnham identified, which is that the managers are often very good at running things at scale. And we have these giant industries and sectors of the economy and health care and education, all these things that are running at giant levels of scale, which was new in the 20th century.", "But there's sort of a consequence of that, which is managers don't build new things. They're not trained to do it, they don't have the background to do it, they don't have the personality to do it, they don't have the temperament to do it, and they don't have the incentives to do it. Because the number one job, if you're a manager, is not to upset the applecart. You want to stay in that job for as long as possible, you want to get paid your annual comp for as long as possible, and you don't want to do anything that would introduce risk. And so managers can't and won't build new things.", "And so specifically, to your question, the role of startups, the role of entrepreneurial capitalism, is to basically bring back the old bourgeois capitalist model enough. It's a rump effort, because it's not most of the economy today, but bring back the older model of bourgeois capitalism, or what we call entrepreneurial capitalism, bring it back enough to at least be able to build the new things.", "So basically what we do is we fund the new bourgeois capitalists, who we call tech founders. And then there's two layers of finance that enable bourgeois capitalism to at least resurface a little bit within this managerial system. Venture capital does that at the point of inception, and then private equity does that at a point when a company needs to actually transform.", "I view it as — we're an enabling agent for at least enough of a resumption of bourgeois capitalism to be able to get new things built, even if most of the companies that we built ultimately themselves end up being run in the managerial model. And Burnham would say that's just the way of the modern world, that's just how it's gonna work.", "Dwarkesh Patel 13:10", "But you guys get preferred shares and board seats, and rightfully so, but wouldn't Burnham look at this and say — “You're not the owners and you do have some amount of control over your companies.”", "Marc Andreessen 13:20", "I think he would say that we're a hybrid, we're a managerial entity that is in the business of catalyzing and supporting bourgeois capitalist companies. He would clearly identify the startups that we fund. He would be like, “Oh yeah, that's the old model. That's the old model of Thomas Edison, or Henry Ford, or one of these guys.” You can just draw a straight line from Thomas Edison, Henry Ford to Steve Jobs, Larry Page, and Mark Zuckerberg. That's that model, it's a founder, it’s a CEO, at least when they started out owning 100%. They do have to raise money most of the time, but they're throwbacks. The modern tech founders are a throwback to this older model of bourgeois capitalism. So you're right in that he would view us as a managerial entity, but he would view us as a managerial entity that is in the business of causing new bourgeois capitalist institutions to at least be created. And I think he would credit us with that. And then he would also say — however, our fate is that most of the companies that we fund and most of the founders that we back end up over time, handing off control of their companies to a managerial class.", "When the companies we fund get to scale, they tend to get pulled into the managerial orbits, they tend to get pulled into the managerial matrix, which by the way, is when they stop being able to build new things, which is what causes the smart and aggressive people at those companies to leave and then come back to us and raise money and start a new bourgeois capitalist company.", "I view it as — the economy is like 99% managerial, and if we can just keep the 1% of the old model alive, we'll keep getting new things. By the way if venture capital ever gets snuffed, it's outlawed or whatever, it just fails and there is no more venture capital, there's no more tech startups or whatever then at that point the economy is going to be 100% managerial. And at that point, there will be no innovation forever.", "People might think they want that. I don't think they actually want that. I don't think we want to live in that world.", "Dwarkesh Patel 15:16", "Will this trend towards managerialism also happen to a16z as it scales? Or will it be immune? What happens to a16z in five decades?", "Marc Andreessen 15:23", "At a certain point this becomes the succession problem. As long as Ben and I are running it our determination is to keep it as much in the bourgeois model as possible. And as you pointed out, literally it’s our names on the door. Ben and I control the firm. The firm doesn't have a board of directors, it's just Ben and me running it. It's a private entity there’s no outside shareholders.", "And so as long as Ben and I are running it, and we're running it in the way that we're running it, it will be as bourgeois model as any investment firm could be.", "Some day there's the succession challenge, and I bring that up, because the succession challenge for tech companies is usually sort of when that transformation happens. When it goes from being in the bourgeois model to being in the managerial model.", "And then this gets to sort of the philosophy of succession in tech companies. And the general thing that happens there is that, you see this over and over again with the great founder CEOs, when it comes time to hand it off, there's basically two kinds of people that they can hand it off to. They can hand it off to somebody like them who's a mercurial, idiosyncratic, high disagreeableness, ornery, sort of entrepreneurial kind of personality, somebody in their mold. Or they can hand it off to somebody who knows how to run things at scale. Almost always, what they do is they hand it off to somebody who can run it at scale. The reason they do that is, there’s actually two reasons. There's the theoretical reason they do that which is — it is at scale at that point, and somebody does need to run it at scale. And then the other is, they often have what I call the long suffering number two. You've had this high octane founder CEO who breaks a lot of glass and then there's often the number two, like the chief operating officer or something who's the person who fundamentally keeps the trains running on time, and keeps everybody from quitting.", "And that long suffering number two has often been in that job for 10 or 15 years at that point, and is literally the longest suffering. They've always been the underling, and then it's like — Okay they now “deserve” the chance to run the company themselves. And that's the handover. Now, those founders often end up regretting that decision. And in later years, they will tell you — Boy, I wish I had handed it off to this other person who was maybe deeper in the organization who was maybe younger, who was more like I am, and maybe would have built more products and maybe that was a mistake. But the fact that they do this over and over again, to me illustrates why the Burnham theory is correct, which is — large, complex organizations ultimately do end up getting run by managers in almost all cases.", "The only optimistic view on that is that it's the transition from these companies being in the bourgeois capitalist model to the managerial model that creates the opportunity for the new generation of startups. Because then the counterfactual, if these companies remained bourgeois capitalist companies for 100 years, then they would be the companies to create all the new products, and then we wouldn't necessarily need to exist because those companies would just do what startups do. They just build all the new stuff.", "But because in that model, they won't do that and they don't do that, almost without exception. Therefore there's always the opportunity for the next new startup. And I think that's good. That keeps the economy vital, even in the face of this overwhelming trend towards managerialism.", "100 year fund", "Dwarkesh Patel 18:43", "If you had a fund with a 100 year lock-in what would you be able to invest in that you can’t invest in right now?", "Marc Andreessen 18:50", "The base lockup for venture is like 10 years, and then we have the ability to push that out, we can kind of push that to 15. And for really high quality companies, we can push that to 20. We haven't been in business long enough to try to push it beyond that. So, we'll see.", "If you could push it to 100 years, the question is — is it really time that's the bottleneck? The implication of the question would be — are there more ambitious projects that would take longer, that you would fund that you're not funding because the time frames are too short. And the problem with a 100 year timeframe, or even a 50 year time frame, or even a 20 year timeframe is that new things don't tend to go through a 20 year incubation phase in business and then come out the other end and be good. What seems to happen is they need milestones, they need points of contact with reality. Every once in a while there will be a company, a very special company will get funded with a founder who's like — look, I'm gonna do the long term thing, and then they go into a tunnel for 10 or 15 years where they're building something and the theory is they're going to come out the other side. These have existed and these do get funded.", "Generally they never come up with anything. They end up in their own Private Idaho, they end up in their own internal worlds, they don't have contact with reality, they're not ever in the market, they're not working with customers. They just start to become bubbles of their own reality. Contact with the real world is difficult every single time. The real world is a pain in the butt. And mark to market your views of what you're doing with the reality of what anybody's actually going to want to pay for, requires you to go expose yourself to that. It's really hard to do that in the abstract, or to build a product that anybody's going to want to use. And so this thing where people go in a tunnel for 10, or 15, or 20 years, it doesn't go well. I think 100 years would be an even more degenerate version of that. Best case is this unbounded research lab that maybe would write papers and something maybe comes out the other end of the far future in the form of some open source thing or something, but they're not going to build an enterprise that way. And so I think having some level of contact with reality over the course of the first five to seven years is pretty important.", "The other way to get to the underlying question would be — what if you just had more zeros on the amount of money? What if instead of funding companies for $20 million, you could fund them for $2 billion, or $20 billion? In other words, maybe they would operate on the timeframe of today's companies, on a five or 10 year timeframe, but you can fund them with 20 billion of venture financing, instead of $20 million.", "I think that's a more interesting question. It's possible that there are pretty big fundamental things that could be built with larger amounts of money in this kind of entrepreneurial model. Every once in a while you do see these giants. Tesla and SpaceX are two obvious examples of these world changing things that just took a lot of money and then had a really big impact. So maybe there's something there, and maybe that's something that the venture ecosystem should experiment with in the years ahead. I would be more focused on that as opposed to elongating the time.", "Basic research", "Dwarkesh Patel 22:15", "But what about basic research? You've spoken about the dysfunctions of the academic-government-research complex. But within the next internet, the next thing that the Andreessen firm 10 years from now is building on top of, if the government effort is broken maybe you need to bootstrap something yourself. Have you considered that?", "Marc Andreessen 22:34", "The strong version of this argument is from a guy named Bill Janeway , a legendary VC. Janeway is a great, wonderful guy. If people haven't heard of him, he is a PhD in economics. I think he’s a student of a student of John Maynard Keynes. He comes from a high pedigree in economic theory background. And himself was a legendary venture capitalist in his career. He became a hands-on investor at the firm Warburg Pincus and funded some really interesting companies. And so he's one of these rare people who's both theoretical and practical on this kind of question. He wrote this book, which I really recommend, it's called Doing Capitalism where he goes through this question. The argument that he makes, along the lines of what you're saying, it’s a little bit of a pessimistic argument. The argument he makes is — if you look at the entire history of professional venture capital, which is now a 60 year journey, basically, or maybe even 50 years, from the late 60s, early 70s, in kind of modern form. He said the big category that's worked is computing or computer science. And then he said, the second category that's worked is biotech. And then he said, at least at the time of writing, everything else didn't work.", "And all the money that people poured into cleantech and da-da-da, all these other areas the venture capitalists tried to fund, they just didn't work from a return standpoint. You just burned the capital. When he wrote the book, he ran the numbers and computer sciences work twice as well as biotech or something like that. And then what he said is this is a direct result of federal research funding over the previous 50 years. Computer science based venture capital was able to productize 50 prior years of basic research in computer science, information science, information theory, communications theory, algorithms, all the stuff that was done in engineering schools from 1940 through like 1990.", "And so he said — we are productizing that, that's been the big thing. In Biotech we are productizing the work that NIH and others put into basic research in the biological sciences and that was about half as much money, and maybe half as much time. That work really started kicking in in the 60s and 70s, a little bit later.", "And then he said — Look, the problem is there aren't other sectors that have had these huge investments in basic research. There's just not this huge backlog of basic research into climate science or take your pick of online content, or whatever the other sectors are where people burn a lot of money.", "And so he says, if you want to predict the future venture capital, you basically just look at where the previous 50 years of basic research, R&D has happened, federal research funding has happened. He has a strong form of it, there's no shortcuts on this. And so if you're trying to do venture capital in a sector that doesn't have this big kind of install base of basic research has already happened, you're basically just tilting at windmills.", "I think there's a lot to his argument. I'm a little more optimistic about a broader spread of categories. A big reason I'm more optimistic about a broader set of categories is because computer science in particular, now applies across more categories. This was sort of the underlying point of the software eats the world thesis , which is that computers used to be just an industry where people made and sold computers. But now you can apply computer science into many other markets, financial services, and healthcare, and many, many others, where it can be a disruptive force. And so I think there's a payoff to computer science and software for sure, that can apply in these sectors. Maybe some of the biological sciences can be stretched into other sectors.", "There's a lot of smart people in the world, there's niche research efforts all over the place in many fields that are doing interesting work. Maybe you don't get a giant industry out the other end in some new sector, but maybe you get some very special companies. SpaceX is a massive advance in aeronautics, it took advantage of a lot of aeronautics R&D. It’s not like there's some huge aeronautics venture industry. But there is a big winner, at least one, and I think more to come. And so I'm a little bit more optimistic and open minded. Bill would probably say that I'm naive.", "$100b fund?", "Dwarkesh Patel 27:07", "You mentioned earlier about being able to potentially write 9 or 10 figure checks to companies like SpaceX or Tesla, who might require the capital to do something grand. Last I checked, you guys have $35 billion or something under management. Do we need to add a few more zeros to that as well? Will a16z’s assets under management just keep growing? Or will you cap it at some point.", "Marc Andreessen 27:27", "We cap it as best we can. We basically cap it to the opportunity set. And it may be obvious, but it's not a single chunk of money. It's broken into various strategies, and we apply different strategies to different sectors at different stages. So it's decomposed. And we have six primary investment groups internally in different stages, and so that money's broken out in different ways.", "We cap it as best we can to the opportunity set. We always tell LPs the same thing, which is we're not trying to grow assets under management, that's not a goal. To the best of our ability, we're trying to maintain whatever return level we're maintaining. We are trying to eat market share, we'd like to eat as much market share as possible. And then we would like to fully exploit the available opportunities, we'd like to fund all the really good founders, we'd like to back all the interesting new spaces. But what we wouldn't want to do is double assets under management in return for 5% lower returns or something like that. That would be a bad trade for us.", "So to put another zero on that, as I said, we would need a theory on a different kind of venture capital model, which would be trying to back much larger scale projects. And again, there's a really big argument you could make that that’s precisely what firms like ours should be doing. There are these really big problems in the world and maybe we just need to be much more aggressive about how we go at it. And we need founders who are more aggressive, and then we need to back them with more money.", "You can also argue either that wouldn't work, or we don't need it. The counter argument on the Tesla and SpaceX examples that I gave is that they didn't need it, right? They raised money the old fashioned way. They raised money round by round in the existing venture ecosystem. And so for whatever limitations you think the existing ecosystem has, and maybe it's not ambitious enough or whatever, it did fund Tesla and SpaceX.", "And so maybe it works. So the underlying question underneath all this is not the money part. The underlying question is how many great entrepreneurs are there? And then how many really big ideas are there for those entrepreneurs to go after? And then that goes one level deeper, which is — What makes a great entrepreneur? Are they born? Are they trained? What made Elon, Elon? What would you need to do to get ten more Elons? What would you need to do to get 100 more Elons? What would you need to do to make 1000 more Elons? Are they already out there and we just haven't found them yet? Could we grow them in tanks?", "Dwarkesh Patel", "Or just add testosterone to the water supply?", "Marc Andreessen 29:57", "Yeah or do we need a different kind of training program? Does there need to be a new kind of entrepreneurial university that trains entrepreneurs? It's just a totally different thing. Those are the underlying questions. I think if you show me ten more Elons, I'll figure out how to fund their companies. We work with a lot of great founders and we also work with Elon and he's still special. He's still highly unusual even relative to the other great entrepreneurs.", "Crypto debate", "Dwarkesh Patel 30:32", "Yeah. Let's talk about crypto for a second. When you're investing in crypto projects, how do you distinguish between cases where there is some real new good or service that new technology is enabling and cases where it's just speculation of some kind?", "Marc Andreessen 30:45", "What we definitely don't do is the speculation side, we just don't do that. And I mean that very specifically, we're not running a hedge fund. What we do is we apply the classic venture capital 101 playbook to crypto. And we do that the exact same way that we do with every other venture sector that we invest in, which is to say we're trying to back new ventures. In crypto that venture might be a new company, or it might be a new network, or it might be a hybrid of the two and we're completely agnostic as to which way that goes. When we write our crypto term sheets, even when we're backing a crypto C Corp, we always write in the term sheet that they can flip it into being a tokenized network anytime they want to. We don't distinguish between companies and networks.", "But we approach it with a Venture Capital 101 playbook, which is — we're looking for really sharp founders who have a vision and the determination to go after it. Where there's some reason to believe that there's some sort of deep level of technological economic change happening, which is what you need for a new startup to wedge into a market. And that there's a reason for it to exist, that there's a market for what they're building and they're gonna build a product, and there's gonna be an intersection between product and market, and there's gonna be a way to make money and you know, the core playbook.", "We go into every crypto investment with the same timeframe as we go into venture investing. So we go in with at least a five to 10 year timeframe, if not a 15 to 20 year timeframe. That's what we do, the reason that's not necessarily the norm in crypto is an artifact of the fact that — especially anything with crypto tokens, there is this thing where they tend to publicly float a lot sooner than startup equity floats. Let's say we're backing a new crypto network, it goes ahead and floats a token as sort of one of the first steps of what it does. It has a liquid thing years in advance of when a corresponding normal C Corp would. There’s one thing in behavioral economics where when something has a daily price signal and where you can trade it, people tend to obsess on the daily price signal and they tend to trade it too much. There's all this literature on this that kind of shows how this happens. It's part of the human experience, we can't help ourselves, it's like moths to a flame. If I can trade the stock every day, I trade the stock every day.", "Almost every investor in almost every asset class trades too often in a way that damages their returns. And then as a consequence of that, what's happened is a lot of the investment firms that invest in crypto startups are actually hedge funds. They're structured as hedge funds, they have trading desks, they trade frequently, they have the equivalent of what's called a public book in hedge fund land. They've got these crypto assets they're trading frequently, and then they'll back a startup and then they'll trade that startup's token just like they trade Bitcoin or Ethereum.", "But in our view that's the wrong way. And by the way there's an incentive issue, which is they pay themselves on a hedge fund model, they pay themselves annually. So they're paying themselves annually based on the market for projects that might still be years away from realization of ultimate underlying value. And then there's this big issue of misalignment between them and their LPs. And so that's all led to this thing where the tokens for these crypto projects are traded too aggressively. In our model they just shouldn't be, they're just not ready for that yet. And so we anchor hard on the venture capital model, we treat these investments the exact same way as if we're investing in venture capital equity, we basically buy and hold for as long as we can. And have a real focus on the underlying intrinsic value of the product and technology that's being developed. If by speculation you mean daily trading and trying to look at prices and charts and all that stuff, we don’t do that.", "Dwarkesh Patel 34:22", "Or separately, another category would be things that are basically the equivalent of baseball cards, where there's no real good or service that's being created. It is something that you think might be valuable in the future but not because the GDP has gone up.", "Marc Andreessen 34:38", "Oh. Baseball cards are a totally valid good and service. That's a misnomer. I would entirely disagree with the premise of that question.", "Dwarkesh Patel 34:48", "But are they gonna raise median incomes even slightly?", "Marc Andreessen 34:50", "Yeah, there are people who make their living on baseball cards. Look, art has been a part of the economy for thousands of years. Art is one of the original things that people bought and sold. Art is fundamental to any economy. Would you really want to be part of an economy where they didn't value art? That would be depressing.", "Dwarkesh Patel 35:15", "Yeah but there's the question of — Do they value art versus are they speculating on art? And then how much of the effort is being spent on speculating on the art versus creating the art?", "Marc Andreessen 35:25", "Well, this gets into this old kind of cultural taboo. This depends on what you mean by speculation. If what you mean by speculation is obsessing on daily price signals and buying and selling and turning a portfolio, like being a day trader kind of speculation. That's what I think of speculation. Let's say that's the bad form of speculation, that's the non productive form.", "If by speculation, on the other hand, you mean — look, there are different kinds of things in the world that have different possible future values. And people are trying to estimate those future values, and people are trying to figure out utility, and they're trying to figure out aesthetic value.  Look at how the traditional art market works, is somebody supporting a new contemporary artist speculating or not? Yes, maybe from one lens they are. Maybe they're buying and selling paintings, and maybe they buy in and if it doesn't start going up in price, they flip it and buy something else. But also, maybe they're supporting a new young artist. And maybe they build a speculative portfolio of new young artists and as a consequence those artists can get paid, and they can afford to be full time artists. And then it turns out they're the next Picasso.", "And so I think that kind of speculation is good and healthy. And it's core to everything. I'd also say this — I don't know that there's actually a dividing line between that form of speculation, and speculation on what people call investments. Because even when people make investments, even just the institutional bond market. Look at US government debt, people are today in the bond market trying to figure out what that's worth. Because is the debt ceiling gonna get raised? Even that's up for grabs. To me, that’s not speculation in the bad sense, that's a market working properly. People are trying to estimate. Ben Graham said “financial markets are both a voting machine and a weighing machine. And in the short term, they tend to be a voting machine in the long run, they tend to be a weighing machine.”", "What's the difference between a voting machine and a weighing machine? I don't know, some people would say they're very different. Maybe it's actually the same thing. Why did prices go up? Because there are more buyers and sellers. Why do the prices go down? There were more sellers than buyers. The way markets work is you get individuals trying to make these estimations and then you get the collective effect. There's this dirty interpretation of any kind of trading or any kind of people trying to do the voting and weighing process. I just think it's this historical, ancient taboo against money. It's like in the Bible, Jesus kicking the money changers out of the temple. It's this old taboo against charging interest on debt. Different religions and cultures tend to have some underlying unease with the concept of money, the concept of trade, the concept of interest. And I just think it's like superstition, it's like resentment, it's fear of the unknown. But those things are the things that make economies work. And so I'm all in favor.", "Dwarkesh Patel 38:20", "I don't mean to get hung up on this — but if you think of something like the stock market or the bond market, fundamentally you can tell a story there. Where the reason what these stockbrokers or these hedge fund managers are doing is valuable, they're basically deciding where capital should go. Should we build a factory in Milwaukee? Should we build it in Toronto? Fundamentally, where should capital go? Whereas what is the story there? What is the NFT helping allocate the capital towards? Why does it matter if the price is efficient there?", "Marc Andreessen 38:48", "Because it's art. NFT is a very general concept. NFT is basically just a form of digital ownership. There will be many kinds of NFTs in the future, many of them, for example, will represent claims on real underlying property. I think a lot of real assets are gonna be wrapped in NFTs. And so NFTs are a very broad technological mechanism. But let's specifically take the form of NFT that everybody likes to criticize, which is NFT as a creative project or an image or a character in a fictional universe or something like that, the part that people like to beat on.", "And I'm just saying — they're just art. That's just digital art, right? And so every criticism people make of that is the same criticism you would make of buying and selling paintings, it would be the same buying and selling photographs, of buying and selling sculpture.", "I always like to really push this, what's the Mona Lisa worth? I don't want to spoil the movie. But the new Knives Out movie, let's just say the Mona Lisa plays a role in the movie. What's the Mona Lisa worth? One way of looking at the Mona Lisa is that it's worth the cost of producing it.  It's worth the canvas and the paint. And you could create a completely identical reproduction of the Mona Lisa with like 25 bucks of canvas and paint. So the Mona Lisa is worth 25 bucks. Or you could say the Mona Lisa is a cultural artifact and as a cultural artifact that's worth probably a billion dollars or $10 billion. Specifically on your question, what explains the spread between $25 and the $10 billion that it would go out if it ever hit the market. It’s because people care. Because it's art, because it's aesthetic, because it's cultural. Because it's part of what we've decided is the cultural heritage of humanity. The thing that makes life worth living is that it's not just about subsistence, that we are gonna have higher values and we're gonna value aesthetics.", "Dwarkesh Patel 40:35", "Do you see a difference between the funding the flying cars and the SpaceXs and Teslas versus something that improves the aesthetic heritage of humanity? But does one of them seem like a different category than the other to you? Or is that all included in the venture stuff you're interested in?", "Marc Andreessen 40:52", "It's a little bit like saying — should we fund Thomas Edison or Beethoven? If push comes to shove and we can only fund one of them, we probably should fund Edison and not Beethoven. Indoor lighting is probably more important than music. But I don't want to live without Beethoven.", "I think this is a very important point. People have lots and lots of views on human existence. There's lots and lots of people trying to figure out the point of human existence, religions and philosophies and so forth. But kind of what they all have in common, other than maybe Marxism, what they all have in common is — we're not just here to get up in the morning, work in a factory all day, go home at night, be depressed and sad, go to bed. We're not just material, right? Whatever this is all about, it's not just about materiality. There are higher aspirations and higher goals. And we create art, we create literature, we create paintings, we create sculptures, we create aesthetics, we create fashion, right, we create music, we create all of these things.", "And fiction. Why does fiction exist? Why is a fake story worth anything? Because it enhances your life to get wrapped up in a fake story. It makes your life better that these things exist. Imagine living in a world where there's no fiction, because everybody's like — “Oh, fiction is not useful. It's not real.” No, it's great. I want to live in a world where there's fiction. I like nothing more at the end of the day than having a couple hours to be able to get outside of my own head and watch a really good movie. And I don't want to live in a world where that doesn't happen.", "As a consequence, funding movies as another example of what you're talking about, is a thing that really makes the world better. And here's the other thing. The world we live in actually is the opposite of the world you're alluding to. The world we live in is not a world in which we have to choose between funding flying cars and funding NFTs or like in my example, funding Edison versus funding Beethoven. The world we live in is actually the opposite of that, where we have a massive oversupply of capital and not nearly enough things to fund.", "The nature of the modern economy is we have what Ben Bernanke called the global savings glut . We've just got this massive oversupply of capital that was generated by the last few 100 years of economic activity, and there's only one Elon. There's just this massive supply demand imbalance between the amount of capital that needs to generate a return and the actual number of viable investable projects and great entrepreneurs to actually create those projects. We certainly don’t have enough flying car startups, we also don't have enough art startups. We need more of all of this. I don't think there's a trade off, we need more of all of it.", "Future of VC", "Dwarkesh Patel 43:29", "Have we reached the end of history when it comes to how venture capital works? For decades you get equity in these early stage companies, you invest more rounds, it's a 2-20 structure. Is that what venture is going to look like in 50 years, or what's going to change?", "Marc Andreessen 43:42", "I think the details will change, and the details have changed a lot, and the details will change a lot. If you go back to the late 60s, early 70s, the details were different then and the details were different 20 years ago. By the way, they're changing again right now in a bunch of ways, and so the details will change.", "Having said that, there’s a core activity that seems very fundamental. And the term I use I borrowed from Tyler Cowen who has talked about this, he calls it Project Picking. When you're doing new things, new tech startups, making new movies, publishing new books, creating new art, when you're doing something new. There's this pattern that just repeats over and over again. If you look back in history, it's basically been the pattern for hundreds or 1000s of years, and it seems like it's still the pattern. Which is, you're going to do something new, it's going to be very risky, it's going to be a very complex undertaking, it's going to be some very complicated effort that's going to involve a path dependent kind of journey through a complex adaptive system, reality is going to be very fuzzy and messy. And you're going to have a very idiosyncratic set of people who start and run that project. They're going to be highly disagreeable, ornery people because that's the kind of people who do new things. They're going to need to build something bigger than themselves, they're going to need to assemble a team and a whole effort. They're going to run into all kinds of problems and issues along the way.", "Every time you see that pattern there's this role, where there's somebody in the background who's like — Okay, this one, not that one. This founder, not that founder. This expedition, not that expedition. This movie, not that movie. And those people play a judgment and taste role, they play an endorsement, branding and marketing role. And then they often play a financing role. And they often are very hands-on, and they try to contribute to the success of the project.", "A historical example of this I always use is that the current model of venture capital is actually very similar to how whaling expeditions got funded 400 years ago. To the point that the term that we actually have, which is carried interest or carry, which is the profit sharing that the VCs get on a successful startup, that term actually goes back to the whaling industry 400 years ago, where the financiers of whaling journeys — like literally out of Moby Dick, to go hunt a whale and bring its carcass back to land.", "The carry was literally the percentage of the carried amount of whale that the investor’s got. It was called carry because it was literally the amount of whale that the ship could carry back. And so if you go back to how the whaling journeys off, like the coast of Maine and the 1600s, were funded, there were a group of what we — they didn't call themselves venture capitalist at that time, but there were a group of basically capitalists. And they would sit in a tavern or something, and they would get pitches by whaling captains.", "And you can imagine the whaling captains. A third of the whaling journeys never came back. A third of the time the boats got destroyed and everybody drowned. And so it's like — I'm the captain who's going to be able to not only go get the whale, but I'm gonna be able to keep my crew alive. By the way, I have a strategy and a theory for where the whale is.", "And maybe one guy is like — look, I'm gonna go where everybody knows there are whales and other guy’s gonna be like — no, that place is overfished, I'm gonna go to some other place where nobody thinks there's a whale, but I think there is. And then one guy is gonna say — I'm better at assembling a crew than the other. And the other one's like — Well, no, I don't even need a crew. I just need a bunch of grunts and I'm going to do all the work. And then another guy might say — I want a small fast boat. And other guy might say — I want a big slow boat.", "And so there's a set of people, imagine in the tavern under candlelight at night, debating all this back and forth — Okay, this captain on this journey, not that captain on that journey and then putting the money behind it to finance the thing. That's what they did then and that's still what we do. So what I'm pretty confident about is there will be somebody like us who is doing that in 50 years, 100 years, 200 years. It will be something like that. Will it be called venture capital? That I don't know. Where will it be happening? I don't know. But that seems like a very fundamental role.", "Dwarkesh Patel 47:56", "Will the public private distinction that exists now, will that exist in 50 years?", "Marc Andreessen 48:00", "You mean like companies going public?", "Dwarkesh Patel 48:07", "Yeah and just the fact that there's different rules for investing in both and just a separate category? Is that gonna exist?", "Marc Andreessen 48:12", "There's already shades of gray. I would say that's already dissolving. There's very formal rules here. But there's already shading that is taking place. In the last 20 years, it's become much more common for especially later stage private companies to have their stocks actually trade. Actually be semi liquid and trade either through secondary exchanges or tender offers or whatever. That didn't used to happen, that didn't really happen in the 1990s. And then it started happening in the late 2000s.", "And then you've got lots of people with different kinds of approaches to have different kinds of private markets and new kinds of private liquidity. And look, you've got these new mechanisms, you've got crypto tokens. You've got entirely new mechanisms as well popping up representing underlying value.", "And then arguments, debates all the time in public and with regulators and in the newspapers about what counts — Who can invest in? This whole accredited investor thing. A lot of this is around “protecting investors”. And then there's this concept of high net worth investors should be allowed to take more risk, because they can bear the losses. Whereas normal investors should not be allowed to invest in private companies, but then there's a counter argument that says, then you're cutting off growth investing as an opportunity for normal investors, and you're making wealth inequality worse.", "That debate will keep playing out. It'll kind of fuzz a bit. I'd expect both sides will moderate a little bit. So in other words, public companies will get to be a little bit more liquid over time. The definition of what it means to be public will probably broaden out. I'll give you an example. Here's an interesting thing. So you can have this interesting case where you can take a company private, but it's still effectively public because it has publicly traded bonds. And then it ends up with publicly filed financials on the bond side, even though its stock is private. And so it's effectively still public because of information disclosure. And then the argument is — well, if I already have full information disclosure, as a result of the bonds trading, you might as well take the stock public again. Anyway it'll fuzz out somewhere in there.", "Founders", "Dwarkesh Patel 50:20", "Okay, so there's a clear pipeline of successful founders, who then become venture capitalists like yourself, obviously. But I'm curious why the opposite is not more true? So if you're a venture capitalist, you've seen dozens of companies go through hundreds of different problems. And you would think that this puts you in a perfect position to be a great entrepreneur. So why don't more venture capitalists become entrepreneurs?", "Marc Andreessen 50:40", "One reason is it's just harder to build a company, it just flat out is. It's not easy to be a VC, but it's harder to build a company. And it requires a level of personal commitment. Successful venture capitalists do get to a point in life where they start to become pretty comfortable. They make money and they start to settle into that sort of fairly nice way of living at some point in a lot of cases. And going back to the 2 AM chewing glass kind of thing is maybe a little bit of a stretch for how they want to spend their time.", "So that's part of it. The other part of it is — the activities are pretty different. The way I describe it is — actually starting and running a company is a full on contact sport, it's a hundred decisions a day. I’ll give an example: bias to action. Anybody who's running a company, you have to have a bias to action. You're faced with a hundred decisions a day, you don't have definitive answers on any of them. And you have to actually act anyway. Because if you sit and analyze the world will pass you by. And it's like — a good plan executed violently is much better than a great plan executed later.", "So it's a mode of operating that rewards aggression, contact with reality, constantly testing hypotheses, screwing up a lot, changing your mind a lot, revisiting things. It’s thousands and thousands of crazy real world variables all intersecting.", "Being an investor is different. It's much more analytical, clinical, outside-in. The decision cycles are much longer, you get a much longer period of time to think about what you should invest in, you get a much longer period of time to figure out when you should sell. Like I said, you generally don't want to trade frequently if you're doing your job right. You actually want to take a long time to really make the investment decisions, and then make the ultimate sale decisions.", "VCs, we help along the way, when companies have issues that they're in the middle of. But fundamentally, it's a much bigger level of watching, observing, learning, thinking, arguing,", "in the abstract, as opposed to day to day bloody combat.", "Honestly, it's a little bit like — Why don't the great football broadcasters go get on the field? Try being the running back for a season?", "Dwarkesh Patel 53:12", "Got it. How soon can you tell whether somebody will make for a good CEO of a large company specifically? So can you tell as soon as they've got a new startup that they're pitching you? Or does it become more clear over time as they get more and more employees?", "Marc Andreessen 53:25", "The big thing with being able to run things at scale, there's actually a very big breakthrough that people either make or they don't make. And the very big breakthrough is whether they know how to manage managers. Say you're running a company with a hundred thousand employees, you don't have a hundred thousand direct reports. You still only have like eight or ten direct reports. And then each of them have eight or ten direct reports and each of them have eight or ten direct reports. And so even the CEOs of really big companies, they're only really dealing with eight or ten or twelve people on a daily basis.", "And then how do you become trained as a manager? The way you become trained as a manager initially is you manage a team of individual contributors. I'm an engineering manager, I have eight or ten coders working for me. And then the breakthrough is — am I trained in how to become a manager of managers?", "If I'm early in my career, the way I think about that is I start out as an individual contributor, let's say an engineer. I get trained on how to be a manager of individual contributors, and that makes me an engineering manager. And then if I get promoted to what they call engineering director, which is one level up, now I'm a director and now I'm managing a team of managers. Anybody who can make that jump now has a generalizable skill of being able to manage managers, and then what makes that skill so great is that skill can scale. Because then you can get promoted to the VP of engineering, now you have a team of directors who have teams of managers who have teams of ICs and so forth. And then at some point, if you keep climbing that ladder, at some point you get promoted to CEO. And then you have a team of managers who are the executives of the company, and then everything spans out from there.", "And so if you can manage managers, at least in theory, you have the basic skill and temperament required to be able to scale all the way up. Then it becomes a question of how much complexity can you deal with? Can you learn enough about all the different domains of what it means to run a business? Are you going to enjoy being in the job and being on the hot seat? All those kinds of questions.", "I think 100% of the people we back have the intelligence to do it, maybe half of them have the temperament to do it, and then maybe half of those have the intelligence and the temperament and they really want to do it. And by “might want to do it” I mean, 20 years from now, they still want to be running their company.", "And enough of them where we get the success cases. But having said that as an entrepreneur, you have to really want that. You have to be smart enough and you have to have the temperament and you have to actually want to learn the skills. And not everybody is able to line those up.", "Dwarkesh Patel 55:54", "Got it, got it. Managing the managerial revolution.", "Marc Andreessen 56:00", "Actually, that's exactly right. The best case scenario is a bourgeois capitalist, entrepreneurial CEO, managing a team of managers who are doing all the managerial stuff required at scale. That's the best case scenario for a large modern organization. Best of both worlds, they're able to harness the benefits of scale, and they're able to still build new things.", "The degenerate version of that is a manager running a company of people who in theory can build your products. But in the Burnham sense, if the CEO is the manager who is running a team of people who want to build their products, that company probably will not actually build their products. Those people will probably all leave and start their own companies.", "a16z vulnerabilities", "Dwarkesh Patel 56:42", "Yep, yep. Now, as unlikely as this may be, just humor the hypothetical. Let's say a16z for the next 10 to 20 years has mediocre returns. If you had to guess looking back, what would be the most likely reason this might happen? Would it have to be some sort of macro headwind, would it have to be betting on the wrong tech sectors, what would it have to be?", "Marc Andreessen 57:00", "20 years is a long enough time where it's probably not just a macroeconomic thing. The big macro cycles seem to play out over 7 to 10 year periods. And so over 20 years, you'd expect to kind of get two or three big cycles through that. And so you'd expect to get at least some chance to make money and harvest profits. Probably it wouldn't be a macro problem. Look, you can imagine it, if a real pandemic happens. By the way, I’m now gonna get you demonetized on Google because I'm going to reference pandemics but..", "Dwarkesh Patel 57:34", "Don’t worry, I didn't have enough views to be monetized anyway.", "Marc Andreessen 57:38", "If something horrible happens then you could be in a ditch for 20 years. But if things continue the way that they have, for the last 50 years or 80 years. There'll be multiple cycles, and there'll be a chance to make money for people who make good investments.", "So it's probably not that, and then there'll be the micro explanation, which is we just make bad investments. We invest the money, but we just invest in the wrong companies and we screw up. And that's of course always a possibility. And probably the most likely downside case.", "The other downside case is — I would build on what I was mentioning earlier, from Bill Janeway. The other downside case would just be that there's just not enough technological change happening. There wasn't enough investment in basic research in the preceding 50 years in areas that actually paid off. There wasn't enough underlying technological change that provided an opportunity for new entrepreneurial innovation. And the entrepreneurs started the companies and they tried to build products and we funded them and for whatever reason, the sectors in which everybody was operating just didn't pay off.", "If we hit five clean-tech sectors in a row or something like that, the whole thing just doesn't work. In a sense, that's the scariest one because that's the one that's most out of our control. That's purely exogenous. We can't wish new science into existence. And so that would be a scary one. I don't think that's the case. And in fact, I think quite possibly the opposite is happening. But that would be the downside scenario.", "Dwarkesh Patel 59:15", "How vulnerable is a16z to any given single tech sector not working out? Whether it's because of technical immaturity, or by their regulation or anything else? But if your top sector doesn't work out, how vulnerable is the whole firm?", "Marc Andreessen 59:29", "Innovation could just be outlawed. And that's a real risk, because innovation is outlawed in big and important areas like Nuclear. I always love meeting with new nuclear entrepreneurs, because it's just so obvious that we should have this big investment in nuclear energy and there's all these new designs. But the Nuclear Regulatory Commission has not authorized a new nuclear design since its inception nearly 50 years ago. So it's just illegal to build new nuclear in the US. By the way, there's all these fusion entrepreneurs that are super geniuses, the products are great, it looks fantastic. I just don't think there's any prospect of nuclear fusion being legal in the US. I think it's just impossible and can't be done. Maybe it's just all outlawed, in which case, at a societal level we will deserve the result. But that would be a bummer for us.", "Dwarkesh Patel 1:00:14", "And then I don't know, let's say crypto gets regulated or it's just not ready yet. It doesn't have to be crypto specifically. But what happens to a16z as a whole? I mean, does a whole firm carry on? Or?", "Marc Andreessen 1:00:28", "Look, it's up to our LPs. We raise money on a cycle. So our LPs have an option every cycle to not continue to invest. Just logically the firm is somewhat diversified now. We have six primary investment domains. So at least in theory, we have some diversification across categories. At least in theory, we could lose a category or two and the investment returns could still be good, and the investors will still fund us. The downside case from there would be that those categories are actually more correlated than we would want them to be. As a firm, we have a big focus on software, we think software is a wedge across each of those verticals. Maybe AI turns out for whatever reason not to work, or gets outlawed or something or just fundamentally makes economics worse or something. Then you can imagine that hitting multiple sectors. Again, I don't think that's going to happen, but I guess it's a possibility.", "Monetizing Twitter", "Dwarkesh Patel 1:01:28", "Yeah. What did the old management of Twitter fail to see about the potential of the platform?", "Marc Andreessen 1:01:30", "So first I'd say that I have a very hard time second guessing management teams, because like I said, my belief is that it's so easy to criticize companies and teams in the outside, it's so hard to run these companies, there are always a thousand factors that are invisible from the outside that make it really hard to make decisions internally.", "By the way, the histories on all this stuff are really always screwed up. Because what you almost always find in the history of the great companies is that there were moments early on where it was really tenuous, and it could have easily gone the other way. Netflix could have sold out to Blockbuster early on, and Google could have sold out to Yahoo. And we never would have even heard of those companies. And so it's really, really hard to second guess.", "I guess I will just put it this way — I've always believed and I was an angel investor in Twitter back when it first got started. I've always believed that the public graph is something that should just be titanically valuable in the world. The public-follow graph. In computer science terms, Twitter is what's called publish subscribe to the idea of a one way public follow graph.", "That ought to be just absolutely titanically valuable, that ought to be the most valuable content, loyalty brand signal in the world. That ought to be the most complete expression of what people care about in the world, that ought to be the primary way that every creator of everything interacts with their customers and their audience. This ought to be where all the politics operates, this ought to be where every creative profession operates, this ought to be where a huge amount of the economy operates.", "They were always on to such a big idea. Like with everything, it's a question of — What does that mean in terms of what kind of product you can build around that? And then how can you get people to pay for it?  But yeah, I've always viewed that the economic opportunity around that core innovation that they had is just much, much larger than anybody has seen so far.", "Dwarkesh Patel 1:03:21", "But how specifically do you monetize that graph?", "Marc Andreessen 1:03:22", "Oh, there's a gazillion ways. There's tons and tons of ways. Elon has talked about this publicly so it’s not spoiling anything, but Twitter is a promotional vehicle for a lot of people who will then provide you stuff on another platform.", "I'm just taking an obvious example. He has talked about video. People create video, they market it on Twitter, and then they monetize it on YouTube. Like, why? Why is that not happening (on Twitter)? Musicians will have followings of 5-10 million people on Twitter, they aren't selling concert tickets.", "I'm sure this was happening before but where it first came to mind was, if you remember Conan O'Brien when he got famously fired from the Tonight Show, he did this tour. And I was a fan of his so I was following him at the time. He did his first live comedy music tour. And he sold out the tour across 40 cities in like two hours. How did he do it? Well, he just put up on his Twitter account. He said — I'm going on the road, here are the dates, click here to buy tickets.", "Boom, they all sold out. “Now click here to buy tickets” was not “click here to buy tickets on Twitter.” It was “click here to buy tickets somewhere else”. But why isn't every concert in the world, why isn't every live event getting booked on Twitter? There's a lot of this kind of thing.", "As Elon is fond of saying, it's not rocket science.", "Dwarkesh Patel 1:04:38", "Yeah. It's funny that a few revolutions in the Middle East were organized in the same way that Conan O'Brien organizes tour, just by posting it on Twitter.", "Marc Andreessen 1:04:54", "So this is the thing that got me so convinced on social media relatively early. Even before the Arab Spring, I don’t know if you remember, you might be too young, but there was this overwhelming critique of social media between inception in like 2001 to basically mainstreaming in like, 2011-2012. There was a decade where there was just this overwhelming critique from all the smart people, as I like to say. That was basically — this thing is useless. This is narcissism. This is just pointless self ego stroking, like narcissism. Nobody cares. The cliche always was Twitter is where you go to learn what somebody's cat had for breakfast. Who cares what your cat had for breakfast? Nothing will ever come from any of this. And then I remember, you could pick up any newspaper on any given day through that period and you could read something like this.", "And then I remember when Erdoğan was consolidating control of Turkey. Erdoğan came out and he said, “I think Twitter is the primary challenge to the survival of any political regime in the modern world,” And I was like — Okay, all the smart analysts all think this is worthless and then a guy who's actually trying to keep control of a country is like, this is my number one threat. The spread of what that meant, of what the outcomes meant. I was just like — “Oh my god.”", "My conclusion was Erdoğan is right and all the smart westerners are wrong. And quite honestly, I think it’s still quite early on. We’re still pretty early in the long arc of social media. The high level thing here would be — the world in which 5 billion people are on the internet is still only a decade or so old. That’s still really early. The world in which 5 billion people are on social networks is like five years old. It’s still super early. If you just look at the history of these transitions in the past, just look at the printing press as a prescient example. It took 200 years to fully play out the consequences of the printing press. We’re still in the very early stages with these things.", "Future of big tech", "Dwarkesh Patel 1:07:09", "I was like ten in 2011 so I don’t know if I would’ve personally. I would’ve liked to think I would’ve caught on if I was older but maybe not. It’s hard to know. But it is kind of interesting. You are personally invested in every single major social media company. So it’s interesting to get your thoughts on where that sector might go. Do you think the next ten years will look like the last ten years when it comes to Big Tech? Is it just going to keep becoming a bigger fraction of GDP? Will that ever stop?", "Marc Andreessen 1:07:35", "As a fraction of GDP, it’s only gonna go up. It is the process of tech infusing itself in every sector. And I think that’s just an overwhelming trend. Because there are better ways to do things. There are things that are possible today that were not possible ten years ago. There are things that will be possible five years from now that aren’t possible today. So from a sector standpoint, the sector will certainly rise as a percent. I’m putting my money where my mouth is in the following statement — Entrepreneurial capitalism will deliver most of that. A lot of that gain will be in companies that were funded in the venture capital, silicon valley kind of model. For the basic reason we discussed which is you do need to have that throwback to the bourgeois capitalist model to do new things. Incumbents are generally still very poor at changing themselves in response to new technology for reasons we’ve discussed. So that process will continue to play out. Another thing that I would just highlight is — The opportunity set for tech is changing over time in another interesting way. We’ve been good at going over the dynamic but small slices of GDP in the last fifty years. And more and more now, we’re going to be going after the less dynamic but much larger sectors of GDP. Education, healthcare, real estate, finance, law, government are really starting to come up for grabs. They are very complicated markets and they’re hard to function in. As startups, it’s harder to build the companies but the payoff is potentially much bigger. Because those are such huge slices of GDP. So the shape of the industry will change a bit over time. What is technology? Technology is a better way of doing things. At some point, the better way of doing things is the way that people do things. At some point that does shift market share from people doing things the old to the people doing things the new way.", "Dwarkesh Patel 1:09:32", "But let's say you build a better education system somehow. The government is still going to be dumping trillions of dollars into the old education system or the old healthcare system. Do you just accept this as a lost cause that basically 50% of the GDP will just be wasted but we’ll make the other 50% really good? When you're building alternatives, do you just accept the loss of the existing system?", "Marc Andreessen 1:09:56", "Education is a great example. I think the incumbent education system is trying to destroy itself. It and the people running it, and the people funding it are trying to kill it. And they’re doing it every possible way they can. For K-12 they are trying to prioritize the teachers over the students which is the opposite of what any properly run company would do. At the university level, the problems in the modern university have been well covered by other people. They have become a cartel. Stanford now has more administrators than they have students. No company would run that way.", "Dwarkesh Patel 1:10:40", "There’s a positive vision where you could turn that into the Bloom two-sigma, single student for single administrator but I don’t think that’s what's happening.", "Marc Andreessen 1:10:49", "Yes, yes. That’s correct. You could and they’re not. That’s exactly right. And then you see the federal student loan kind of crazy thing. By the way, the universities are voluntarily shutting down use of admissions testing. They’re shutting down SAT, ACT, GRE. They’re very deliberately eliminating the intelligence signal which is a big part of the signal employers piggyback on top of. They become intensely politicized. We now know through the replication crisis that most of the research that happens in these universities is fake. Most of it is not generating real research results. We know that because it won’t replicate. You’ve just got these increasingly disconnected mentalities and there’s some set of people who are obviously going to keep going to these schools.", "And then you just look at cost. A degree from a mainstream university that costs, in ten years, a half-million or a million dollars that has no intelligence signal attached to it anymore. Where most of the classes are fake, where most of the degrees are fake, most of the research is fake, where they are wrapped up in these political obsessions. That’s probably not the future of how employers are going to staff. That’s probably not where people are actually going to learn valuable marketable skills. The last thing they want is to actually teach somebody a marketable skill. Teaching somebody a marketable skill is just so far down in the list of priorities of a university now it’s not even in the top 20.", "Lot of it is just they’re a cartel. They operate as a cartel, they run as a cartel, it is a literal cartel. And the cartel is administered through the agencies, the quasi-governmental bodies that determine who gets access to federal student loan funding. And those bodies are staffed by the current university administrators. So it’s a self-governing cartel. It does exactly what cartels do, it’s stagnating and going crazy in spectacular ways.", "There’s clearly going to be an educational revolution. Does that happen today or five years or ten years, I don’t know. Does it happen in the form of new in-person tuitions versus internet-based? I don’t know. Is it driven by us or is it driven by employers who just get fed-up and they’re like — “Screw it. We’re not gonna live like this anymore and we’re gonna hire people in a totally different way.” That, I don’t know. There’s lots and lots of questions about what’s gonna happen from here. But the system is breaking in fundamental and obvious ways.", "Healthcare, same thing. It’s extraordinarily difficult to find positive outcomes in healthcare. In other words, there’s lots of activity in healthcare. It’s very hard to find anything that causes people to live longer. Or to be healthier longer. Every once in a while there’s a successful new cancer treatment or something but there are all these analyses that show that massive investment and public support for health insurance and all these things. And the health outcomes basically don’t move.", "To the extent that people care at all about the reality of their health, there are going to have to be new ways of doing things and tech is going to be the way through the market for people who have those ideas.", "Is VC Overstaffed?", "Dwarkesh Patel 1:14:07", "Hopefully these revolutions in education and healthcare are not like healthcare itself where we are always twenty years away from a cure to cancer and we’re always twenty years away from making education technological.", "You’ve talked about how big tech is 2 to 4x overstaffed in the best case. I’m curious how overstaffed do you think venture capital is? How many partners and associates could we let go and there really wouldn’t be a difference in the performance of venture capital.", "Marc Andreessen 1:14:30", "My friend Andy Rachleff , who is the founder of Benchmark and teaches venture capital at Stanford. I think his description of this is correct. He says — Venture capital is always over staffed and over funded. His estimate is that it is overfunded by a factor of 5. It should probably be 20% of the size that it is. It should be 20% of the number of people, it should be 20% of the number of funds, it should be 20% of the amount of money. And his conclusion after watching this for a long time and analyzing it was it’s basically a permanent 5x overfunding, overstaffing. It goes to what I referenced earlier which is, the world we live in just has a massive imbalance of too much money chasing too few opportunities to invest the money productively. There’s just too much money that needs long run returns that looks to venture as part of their asset allocation. In the way that modern investors do asset allocation. The full version of it he describes is that — there’s only ever been two models of institutional investment. There’s the old model of institutional investment which is 60-40 stocks and bonds that kind of dominated the 20th century up until the 1970s. And there’s what’s called the Swensen model. Swensen who created the Yale endowment in its modern form and that’s the model that all the endowments and foundations have today and increasingly sovereign wealth funds, where they invest in alternative assets. Which means hedge funds, venture capital, real estate and things that aren't stocks and bonds. So anybody following the Swensen model has an allocation to venture capital, on average that’s maybe 4% of their assets. But 4% of the entire global asset base is just a gigantic number. It’s like someone once said — It’s like having a 6th marriage, hope triumphing over experience.", "The thing you will hear from LPs is every LP says they only invest in the top ten venture capital funds and every LP has a different list for who that is. They all kind of know that the whole sector is overfunded, but they all kind of know that they suffer from a real lack of... where else is the money going to go? And then, it’s always possible that you’ll have some great new fund, there’s some great new sector that will open up. A huge advantage that venture capital has is the long dated part of it. It means you don’t suffer the consequences of a bad venture capital investment upfront. You get a ten year lease on life when you make a venture capital investment. You’re not gonna get judged for a long time. And so I think that causes people to invest more in this sector than they should.", "Dwarkesh Patel 1:17:09", "Is the winner's curse also a big component here where the guy who bids the most is the one who sets the price?", "Marc Andreessen 1:17:14", "That can happen. At the early stages the best companies tend to raise at less than the optimal price because the signal of who invests is more important than the absolute price. And so almost every investment that we fund at the Series A stage, they could raise money at 2-4 times the price they raised from us. But they value the signal. And I think that’s also true of the seed landscape and it’s also still true in a lot of cases at the series B level. Series C and beyond it becomes much more of an efficient market.", "Again it’s not a full auction. It’s a little bit like your earlier question. At least here’s the theory — it’s not just money, it’s not just a straight up liquid financial market. These are whaling journeys. By the way, there’s a much blunter answer to this question which is — people who raise seed money and series A money from the high bidder often end up really regretting it because they end up raising money from people who don’t actually understand the nature of a whaling journey, or a tech startup. And then they panic at the wrong times and they freak out. And the wrong investors can really screw up a company. At least historically, there’s a self-correcting equilibrium that comes out of that. Where the best entrepreneurs understand that they want someone on their team who really know what they’re doing and they don’t want to take chances with someone that’s gonna freak out and try to shut the company down the first time that something goes wrong. But we’ll see." ]
[ "https://a16z.com/", "https://a16z.com/author/ben-horowitz/", "https://pmarchive.com/", "https://www.amazon.com/Managerial-Revolution-What-Happening-World/dp/1839013184", "https://en.wikipedia.org/wiki/James_Burnham", "https://www.billjaneway.com/", "https://www.amazon.com/Doing-Capitalism-Innovation-Economy-Speculation/dp/1107031257", "https://a16z.com/2011/08/20/why-software-is-eating-the-world/", "https://en.wikipedia.org/wiki/Benjamin_Graham", "https://en.wikipedia.org/wiki/Ben_Bernanke", "https://en.wikipedia.org/wiki/Global_saving_glut", "https://en.wikipedia.org/wiki/Tyler_Cowen", "https://en.wikipedia.org/wiki/Andy_Rachleff" ]
https://www.dwarkesh.com/p/mark-zuckerberg
Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus
[ "Thanks to Graham Bessellieu for editing this podcast. Transcript with lots of helpful links by Teddy Kim .", "00:00:00 - Llama 3", "Dwarkesh Patel 00:00:00", "Mark, welcome to the podcast.", "Mark Zuckerberg 0:00:01", "Thanks for having me. Big fan of your podcast.", "Dwarkesh Patel 00:00:03", "Thank you, that's very nice of you to say. Let's start by talking about the releases that will go out when this interview goes out. Tell me about the models and Meta AI . What’s new and exciting about them?", "Mark Zuckerberg 00:00:15", "I think the main thing that most people in the world are going to see is the new version of Meta AI. The most important thing that we're doing is the upgrade to the model. We're rolling out Llama-3 . We're doing it both as open source for the dev community and it is now going to be powering Meta AI. There's a lot that I'm sure we'll get into around Llama-3, but I think the bottom line on this is that we think now that Meta AI is the most intelligent, freely-available AI assistant that people can use. We're also integrating Google and Bing for real-time knowledge.", "We're going to make it a lot more prominent across our apps. At the top of Facebook and Messenger, you'll be able to just use the search box right there to ask any question. There's a bunch of new creation features that we added that I think are pretty cool and that I think people will enjoy. I think animations is a good one. You can basically take any image and just animate it.", "One that people are going to find pretty wild is that it now generates high quality images so quickly that it actually generates it as you're typing and updates it in real time. So you're typing your query and it's honing in. It’s like “show me a picture of a cow in a field with mountains in the background, eating macadamia nuts, drinking beer” and it's updating the image in real time. It's pretty wild. I think people are going to enjoy that. So I think that's what most people are going to see in the world. We're rolling that out, not everywhere, but we're starting in a handful of countries and we'll do more over the coming weeks and months. I think that’s going to be a pretty big deal and I'm really excited to get that in people's hands. It's a big step forward for Meta AI.", "But I think if you want to get under the hood a bit, the Llama-3 stuff is obviously the most technically interesting. We're training three versions : an 8 billion parameter model and a 70 billion, which we're releasing today, and a 405 billion dense model , which is still training. So we're not releasing that today, but I'm pretty excited about how the 8B and the 70B turned out. They're leading for their scale. We'll release a blog post with all the benchmarks so people can check it out themselves. Obviously it's open source so people get a chance to play with it.", "We have a roadmap of new releases coming that are going to bring multimodality , more multi-linguality , and bigger context windows as well. Hopefully, sometime later in the year we'll get to roll out the 405B. For where it is right now in training, it is already at around 85 MMLU and we expect that it's going to have leading benchmarks on a bunch of the benchmarks. I'm pretty excited about all of that. The 70 billion is great too. We're releasing that today. It's around 82 MMLU and has leading scores on math and reasoning. I think just getting this in people's hands is going to be pretty wild.", "Dwarkesh Patel 00:03:42", "Oh, interesting. That's the first I’m hearing of it as a benchmark. That's super impressive.", "Mark Zuckerberg 00:03:45", "The 8 billion is nearly as powerful as the biggest version of Llama-2 that we released. So the smallest Llama-3 is basically as powerful as the biggest Llama-2.", "Dwarkesh Patel 00:03:59", "Before we dig into these models, I want to go back in time. I'm assuming 2022 is when you started acquiring these H100s , or you can tell me when. The stock price is getting hammered. People are asking what's happening with all this capex. People aren't buying the metaverse . Presumably you're spending that capex to get these H100s. How did you know back then to get the H100s? How did you know that you’d need the GPUs ?", "Mark Zuckerberg 00:04:22", "I think it was because we were working on Reels . We always want to have enough capacity to build something that we can't quite see on the horizon yet. We got into this position with Reels where we needed more GPUs to train the models. It was this big evolution for our services. Instead of just ranking content from people or pages you follow, we made this big push to start recommending what we call unconnected content , content from people or pages that you're not following.", "The corpus of content candidates that we could potentially show you expanded from on the order of thousands to on the order of hundreds of millions. It needed a completely different infrastructure. We started working on doing that and we were constrained on the infrastructure in catching up to what TikTok was doing as quickly as we wanted to. I basically looked at that and I was like “hey, we have to make sure that we're never in this situation again. So let's order enough GPUs to do what we need to do on Reels and ranking content and feed. But let's also double that.” Again, our normal principle is that there's going to be something on the horizon that we can't see yet.", "Dwarkesh Patel 00:05:51", "Did you know it would be AI?", "Mark Zuckerberg 00:05:52", "We thought it was going to be something that had to do with training large models. At the time I thought it was probably going to be something that had to do with content. It’s just the pattern matching of running the company, there's always another thing. At that time I was so deep into trying to get the recommendations working for Reels and other content. That’s just such a big unlock for Instagram and Facebook now, being able to show people content that's interesting to them from people that they're not even following.", "But that ended up being a very good decision in retrospect. And it came from being behind. It wasn't like “oh, I was so far ahead.” Actually, most of the times where we make some decision that ends up seeming good is because we messed something up before and just didn't want to repeat the mistake.", "Dwarkesh Patel 00:06:47", "This is a total detour, but I want to ask about this while we're on this. We'll get back to AI in a second. In 2006 you didn't sell for $1 billion but presumably there's some amount you would have sold for, right? Did you write down in your head like “I think the actual valuation of Facebook at the time is this and they're not actually getting the valuation right”? If they’d offered you $5 trillion, of course you would have sold. So how did you think about that choice?", "Mark Zuckerberg 00:07:08", "I think some of these things are just personal. I don't know that at the time I was sophisticated enough to do that analysis. I had all these people around me who were making all these arguments for a billion dollars like “here's the revenue that we need to make and here's how big we need to be. It's clearly so many years in the future.” It was very far ahead of where we were at the time. I didn't really have the financial sophistication to really engage with that kind of debate.", "Deep down I believed in what we were doing. I did some analysis like “what would I do if I weren’t doing this? Well, I really like building things and I like helping people communicate. I like understanding what's going on with people and the dynamics between people. So I think if I sold this company, I'd just go build another company like this and I kind of like the one I have. So why?” I think a lot of the biggest bets that people make are often just based on conviction and values. It's actually usually very hard to do the analyses trying to connect the dots forward.", "00:08:32 - Coding on path to AGI", "Dwarkesh Patel 00:08:32", "You've had Facebook AI Research for a long time. Now it's become seemingly central to your company. At what point did making AGI , or however you consider that mission, become a key priority of what Meta is doing?", "Mark Zuckerberg 00:08:49", "It's been a big deal for a while. We started FAIR about 10 years ago. The idea was that, along the way to general intelligence or whatever you wanna call it, there are going to be all these different innovations and that's going to just improve everything that we do. So we didn't conceive of it as a product. It was more of a research group. Over the last 10 years it has created a lot of different things that have improved all of our products. It’s advanced the field and allowed other people in the field to create things that have improved our products too. I think that that's been great.", "There's obviously a big change in the last few years with ChatGPT and the diffusion models around image creation coming out. This is some pretty wild stuff that is pretty clearly going to affect how people interact with every app that's out there. At that point we started a second group, the gen AI group , with the goal of bringing that stuff into our products and building leading foundation models that would power all these different products.", "When we started doing that the theory initially was that a lot of the stuff we're doing is pretty social. It's helping people interact with creators, helping people interact with businesses, helping businesses sell things or do customer support. There’s also basic assistant functionality, whether it's for our apps or the smart glasses or VR . So it wasn't completely clear at first that you were going to need full AGI to be able to support those use cases. But in all these subtle ways, through working on them, I think it's actually become clear that you do. For example, when we were working on Llama-2, we didn't prioritize coding because people aren't going to ask Meta AI a lot of coding questions in WhatsApp.", "Dwarkesh Patel 00:10:59", "Now they will, right?", "Mark Zuckerberg 00:11:00", "I don't know. I'm not sure that WhatsApp, or Facebook or Instagram, is the UI where people are going to be doing a lot of coding questions. Maybe the website, meta.ai , that we’re launching. But the thing that has been a somewhat surprising result over the last 18 months is that it turns out that coding is important for a lot of domains, not just coding . Even if people aren't asking coding questions, training the models on coding helps them become more rigorous in answering the question and helps them reason across a lot of different types of domains. That's one example where for Llama-3, we really focused on training it with a lot of coding because that's going to make it better on all these things even if people aren't asking primarily coding questions.", "Reasoning is another example. Maybe you want to chat with a creator or you're a business and you're trying to interact with a customer. That interaction is not just like “okay, the person sends you a message and you just reply.” It's a multi-step interaction where you're trying to think through “how do I accomplish the person's goals?” A lot of times when a customer comes, they don't necessarily know exactly what they're looking for or how to ask their questions. So it's not really the job of the AI to just respond to the question.", "You need to kind of think about it more holistically. It really becomes a reasoning problem. So if someone else solves reasoning, or makes good advances on reasoning, and we're sitting here with a basic chat bot, then our product is lame compared to what other people are building. At the end of the day, we basically realized we've got to solve general intelligence and we just upped the ante and the investment to make sure that we could do that.", "Dwarkesh Patel 00:12:48", "So the version of Llama that's going to solve all these use cases for users, is that the version that will be powerful enough to replace a programmer you might have in this building?", "Mark Zuckerberg 00:13:03", "I just think that all this stuff is going to be progressive over time.", "Dwarkesh Patel 00:13:05", "But in the end case: Llama-10.", "Mark Zuckerberg 00:13:10", "I think that there's a lot baked into that question. I'm not sure that we're replacing people as much as we’re giving people tools to do more stuff.", "Dwarkesh Patel 00:13:18", "Is the programmer in this building 10x more productive after Llama-10?", "Mark Zuckerberg 00:13:20", "I would hope more. I don't believe that there's a single threshold of intelligence for humanity because people have different skills. I think that at some point AI is probably going to surpass people at most of those things, depending on how powerful the models are. But I think it's progressive and I don't think AGI is one thing. You're basically adding different capabilities. Multimodality is a key one that we're focused on now, initially with photos and images and text but eventually with videos. Because we're so focused on the metaverse, 3D type stuff is important too. One modality that I'm pretty focused on, that I haven't seen as many other people in the industry focus on, is emotional understanding . So much of the human brain is just dedicated to understanding people and understanding expressions and emotions. I think that's its own whole modality, right? You could say that maybe it's just video or image, but it's clearly a very specialized version of those two.", "So there are all these different capabilities that you want to train the models to focus on, in addition to getting a lot better at reasoning and memory, which is its own whole thing. I don't think in the future we're going to be primarily shoving things into a query context window to ask more complicated questions. There will be different stores of memory or different custom models that are more personalized to people. These are all just different capabilities. Obviously then there’s making them big and small. We care about both. If you're running something like Meta AI, that's pretty server-based. We also want it running on smart glasses and there's not a lot of space in smart glasses. So you want to have something that's very efficient for that.", "Dwarkesh Patel 00:15:16", "If you're doing $10Bs worth of inference or even eventually $100Bs, if you're using intelligence in an industrial scale what is the use case? Is it simulations? Is it the AIs that will be in the metaverse? What will we be using the data centers for?", "Mark Zuckerberg 00:15:32", "Our bet is that it's going to basically change all of the products. I think that there's going to be a kind of Meta AI general assistant product. I think that that will shift from something that feels more like a chatbot, where you ask a question and it formulates an answer, to things where you're giving it more complicated tasks and then it goes away and does them. That's going to take a lot of inference and it's going to take a lot of compute in other ways too.", "Then I think interacting with other agents for other people is going to be a big part of what we do, whether it's for businesses or creators. A big part of my theory on this is that there's not going to be just one singular AI that you interact with. Every business is going to want an AI that represents their interests. They're not going to want to primarily interact with you through an AI that is going to sell their competitors’ products.", "I think creators is going to be a big one. There are about 200 million creators on our platforms. They basically all have the pattern where they want to engage their community but they're limited by the hours in the day. Their community generally wants to engage them, but they don't know that they're limited by the hours in the day. If you could create something where that creator can basically own the AI, train it in the way they want, and engage their community, I think that's going to be super powerful. There's going to be a ton of engagement across all these things.", "These are just the consumer use cases. My wife and I run our foundation, Chan Zuckerberg Initiative . We're doing a bunch of stuff on science and there's obviously a lot of AI work that is going to advance science and healthcare and all these things. So it will end up affecting basically every area of the products and the economy.", "Dwarkesh Patel 00:17:41", "You mentioned AI that can just go out and do something for you that's multi-step. Is that a bigger model? With Llama-4 for example, will there still be a version that's 70B but you'll just train it on the right data and that will be super powerful? What does the progression look like? Is it scaling ? Is it just the same size but different banks like you were talking about?", "Mark Zuckerberg 00:18:02", "I don't know that we know the answer to that. I think one thing that seems to be a pattern is that you have the Llama model and then you build some kind of other application specific code around it. Some of it is the fine-tuning for the use case, but some of it is, for example, logic for how Meta AI should work with tools like Google or Bing to bring in real-time knowledge. That's not part of the base Llama model. For Llama-2, we had some of that and it was a little more hand-engineered. Part of our goal for Llama-3 was to bring more of that into the model itself. For Llama-3, as we start getting into more of these agent-like behaviors, I think some of that is going to be more hand-engineered. Our goal for Llama-4 will be to bring more of that into the model.", "At each step along the way you have a sense of what's going to be possible on the horizon. You start messing with it and hacking around it. I think that helps you then hone your intuition for what you want to try to train into the next version of the model itself. That makes it more general because obviously for anything that you're hand-coding you can unlock some use cases, but it's just inherently brittle and non-general.", "Dwarkesh Patel 00:20:35", "When you say “into the model itself,” you train it on the thing that you want in the model itself? What do you mean by “into the model itself”?", "Mark Zuckerberg 00:20:42", "For Llama- 2, the tool use was very specific, whereas Llama-3 has much better tool use. We don't have to hand code all the stuff to have it use Google and go do a search. It can just do that. Similarly for coding and running code and a bunch of stuff like that. Once you kind of get that capability, then you get a peek at what we can start doing next. We don't necessarily want to wait until Llama-4 is around to start building those capabilities, so we can start hacking around it. You do a bunch of hand coding and that makes the products better, if only for the interim. That helps show the way then of what we want to build into the next version of the model.", "Dwarkesh Patel 00:21:37", "What is the community fine tune of Llama-3 that you're most excited for? Maybe not the one that will be most useful to you, but the one you'll just enjoy playing with the most. They fine-tune it on antiquity and you'll just be talking to Virgil or something. What are you excited about?", "Mark Zuckerberg 00:21:50", "I think the nature of the stuff is that you get surprised. Any specific thing that I thought would be valuable, we'd probably be building. I think you'll get distilled versions . I think you'll get smaller versions. One thing is that I think 8B isn’t quite small enough for a bunch of use cases. Over time I'd love to get a 1-2B parameter model, or even a 500M parameter model and see what you can do with that.", "If with 8B parameters we’re nearly as powerful as the largest Llama-2 model, then with a billion parameters you should be able to do something that's interesting, and faster. It’d be good for classification, or a lot of basic things that people do before understanding the intent of a user query and feeding it to the most powerful model to hone in on what the prompt should be. I think that's one thing that maybe the community can help fill in. We're also thinking about getting around to distilling some of these ourselves but right now the GPUs are pegged training the 405B.", "Dwarkesh Patel 00:23:12", "So you have all these GPUs. I think you said 350,000 by the end of the year.", "Mark Zuckerberg 00:23:18", "That's the whole fleet. We built two, I think 22,000 or 24,000 clusters that are the single clusters that we have for training the big models, obviously across a lot of the stuff that we do. A lot of our stuff goes towards training Reels models and Facebook News Feed and Instagram Feed . Inference is a huge thing for us because we serve a ton of people. Our ratio of inference compute required to training is probably much higher than most other companies that are doing this stuff just because of the sheer volume of the community that we're serving.", "Dwarkesh Patel 00:23:56", "In the material they shared with me before, it was really interesting that you trained it on more data than is compute optimal just for training. The inference is such a big deal for you guys, and also for the community, that it makes sense to just have this thing and have trillions of tokens in there.", "Mark Zuckerberg 00:24:08", "Although one of the interesting things about it, even with the 70B, is that we thought it would get more saturated . We trained it on around 15 trillion tokens. I guess our prediction going in was that it was going to asymptote more, but even by the end it was still learning. We probably could have fed it more tokens and it would have gotten somewhat better.", "At some point you're running a company and you need to do these meta reasoning questions. Do I want to spend our GPUs on training the 70B model further? Do we want to get on with it so we can start testing hypotheses for Llama-4? We needed to make that call and I think we got a reasonable balance for this version of the 70B. There'll be others in the future, the 70B multimodal one, that'll come over the next period. But that was fascinating that the architectures at this point can just take so much data.", "Dwarkesh Patel 00:25:11", "That's really interesting. What does this imply about future models? You mentioned that the Llama-3 8B is better than the Llama-2 70B.", "Mark Zuckerberg 00:25:19", "No, no, it's nearly as good. I don’t want to overstate it. It’s in a similar order of magnitude.", "00:25:24 - Energy bottlenecks", "Dwarkesh Patel 00:25:24", "Does that mean the Llama-4 70B will be as good as the Llama-3 405B? What does the future of this look like?", "Mark Zuckerberg 00:25:30", "This is one of the great questions, right? I think no one knows. One of the trickiest things in the world to plan around is an exponential curve. How long does it keep going for? I think it's likely enough that we'll keep going. I think it’s worth investing the $10Bs or $100B+ in building the infrastructure and assuming that if it keeps going you're going to get some really amazing things that are going to make amazing products. I don't think anyone in the industry can really tell you that it will continue scaling at that rate for sure. In general in history, you hit bottlenecks at certain points. Now there's so much energy on this that maybe those bottlenecks get knocked over pretty quickly. I think that’s an interesting question.", "Dwarkesh Patel 00:26:24", "What does the world look like where there aren't these bottlenecks? Suppose progress just continues at this pace, which seems plausible. Zooming out and forgetting about Llamas…", "Mark Zuckerberg 00:26:33", "Well, there are going to be different bottlenecks. Over the last few years, I think there was this issue of GPU production . Even companies that had the money to pay for the GPUs couldn't necessarily get as many as they wanted because there were all these supply constraints. Now I think that's sort of getting less. So you're seeing a bunch of companies thinking now about investing a lot of money in building out these things. I think that that will go on for some period of time. There is a capital question. At what point does it stop being worth it to put the capital in?", "I actually think before we hit that, you're going to run into energy constraints . I don't think anyone's built a gigawatt single training cluster yet. You run into these things that just end up being slower in the world. Getting energy permitted is a very heavily regulated government function. You're going from software, which is somewhat regulated and I'd argue it’s more regulated than a lot of people in the tech community feel. Obviously it’s different if you're starting a small company, maybe you feel that less. We interact with different governments and regulators and we have lots of rules that we need to follow and make sure we do a good job with around the world. But I think that there's no doubt about energy.", "If you're talking about building large new power plants or large build-outs and then building transmission lines that cross other private or public land, that’s just a heavily regulated thing. You're talking about many years of lead time. If we wanted to stand up some massive facility, powering that is a very long-term project. I think people do it but I don't think this is something that can be quite as magical as just getting to a level of AI, getting a bunch of capital and putting it in, and then all of a sudden the models are just going to… You do hit different bottlenecks along the way.", "Dwarkesh Patel 00:29:00", "Is there something, maybe an AI-related project or maybe not, that even a company like Meta doesn't have the resources for? Something where if your R&D budget or capex budget were 10x what it is now, then you could pursue it? Something that’s in the back of your mind but with Meta today, you can't even issue stock or bonds for it? It's just like 10x bigger than your budget?", "Mark Zuckerberg 00:29:19", "I think energy is one piece. I think we would probably build out bigger clusters than we currently can if we could get the energy to do it.", "Dwarkesh Patel 00:29:34", "That's fundamentally money-bottlenecked in the limit? If you had $1 trillion…", "Mark Zuckerberg 00:29:39", "I think it’s time. It depends on how far the exponential curves go. Right now a lot of data centers are on the order of 50 megawatts or 100MW, or a big one might be 150MW. Take a whole data center and fill it up with all the stuff that you need to do for training and you build the biggest cluster you can. I think a bunch of companies are running at stuff like that.", "But when you start getting into building a data center that's like 300MW or 500MW or 1 GW, no one has built a 1GW data center yet. I think it will happen. This is only a matter of time but it's not going to be next year. Some of these things will take some number of years to build out. Just to put this in perspective, I think a gigawatt would be the size of a meaningful nuclear power plant only going towards training a model.", "Dwarkesh Patel 00:30:51", "Didn't Amazon do this ? They have a 950MW–", "Mark Zuckerberg 00:30:55", "I'm not exactly sure what they did. You'd have to ask them.", "Dwarkesh Patel 00:31:00", "But it doesn’t have to be in the same place, right? If distributed training works, it can be distributed.", "Mark Zuckerberg 00:31:03", "Well, I think that is a big question, how that's going to work. It seems quite possible that in the future, more of what we call training for these big models is actually more along the lines of inference generating synthetic data to then go feed into the model. I don't know what that ratio is going to be but I consider the generation of synthetic data to be more inference than training today. Obviously if you're doing it in order to train a model, it's part of the broader training process. So that's an open question, the balance of that and how that plays out.", "Dwarkesh Patel 00:31:44", "Would that potentially also be the case with Llama-3, and maybe Llama-4 onwards? As in, you put this out and if somebody has a ton of compute, then they can just keep making these things arbitrarily smarter using the models that you've put out. Let’s say there’s some random country, like Kuwait or the UAE, that has a ton of compute and they can actually just use Llama-4 to make something much smarter.", "Mark Zuckerberg 00:32:08", "I do think there are going to be dynamics like that, but I also think there is a fundamental limitation on the model architecture. I think like a 70B model that we trained with a Llama-3 architecture can get better, it can keep going. As I was saying, we felt that if we kept on feeding it more data or rotated the high value tokens through again, then it would continue getting better. We've seen a bunch of different companies around the world basically take the Llama-2 70B model architecture and then build a new model. But it's still the case that when you make a generational improvement to something like the Llama-3 70B or the Llama-3 405B, there isn’t anything like that open source today. I think that's a big step function. What people are going to be able to build on top of that I think can’t go infinitely from there. There can be some optimization in that until you get to the next step function.", "00:33:20 - Is AI the most important technology ever?", "Dwarkesh Patel 00:33:20", "Let's zoom out a little bit from specific models and even the multi-year lead times you would need to get energy approvals and so on. Big picture, what's happening with AI these next couple of decades? Does it feel like another technology like the metaverse or social, or does it feel like a fundamentally different thing in the course of human history?", "Mark Zuckerberg 00:33:43", "I think it's going to be pretty fundamental. I think it's going to be more like the creation of computing in the first place. You'll get all these new apps in the same way as when you got the web or you got mobile phones. People basically rethought all these experiences as a lot of things that weren't possible before became possible. So I think that will happen, but I think it's a much lower-level innovation. My sense is that it's going to be more like people going from not having computers to having computers.", "It’s very hard to reason about exactly how this goes. In the cosmic scale obviously it'll happen quickly, over a couple of decades or something. There is some set of people who are afraid of it really spinning out and going from being somewhat intelligent to extremely intelligent overnight. I just think that there's all these physical constraints that make that unlikely to happen. I just don't really see that playing out. I think we'll have time to acclimate a bit. But it will really change the way that we work and give people all these creative tools to do different things. I think it's going to really enable people to do the things that they want a lot more.", "Dwarkesh Patel 00:35:18", "So maybe not overnight, but is it your view that on a cosmic scale we can think of these milestones in this way? Humans evolved, and then AI happened, and then they went out into the galaxy. Maybe it takes many decades, maybe it takes a century, but is that the grand scheme of what's happening right now in history?", "Mark Zuckerberg 00:35:38", "Sorry, in what sense?", "Dwarkesh Patel 00:35:40", "In the sense that there were other technologies, like computers and even fire, but the development of AI itself is as significant as humans evolving in the first place.", "Mark Zuckerberg 00:35:48", "I think that's tricky. The history of humanity has been people basically thinking that certain aspects of humanity are really unique in different ways and then coming to grips with the fact that that's not true, but that humanity is actually still super special. We thought that the earth was the center of the universe and it's not, but humans are still pretty awesome and pretty unique, right?", "I think another bias that people tend to have is thinking that intelligence is somehow fundamentally connected to life . It's not actually clear that it is. I don't know that we have a clear enough definition of consciousness or life to fully interrogate this. There's all this science fiction about creating intelligence where it starts to take on all these human-like behaviors and things like that. The current incarnation of all this stuff feels like it's going in a direction where intelligence can be pretty separated from consciousness, agency , and things like that, which I think just makes it a super valuable tool.", "00:37:21 - Dangers of open source", "Obviously it's very difficult to predict what direction this stuff goes in over time, which is why I don't think anyone should be dogmatic about how they plan to develop it or what they plan to do. You want to look at it with each release. We're obviously very pro open source , but I haven't committed to releasing every single thing that we do. I’m basically very inclined to think that open sourcing is going to be good for the community and also good for us because we'll benefit from the innovations. If at some point however there's some qualitative change in what the thing is capable of, and we feel like it's not responsible to open source it, then we won't. It's all very difficult to predict.", "Dwarkesh Patel 00:38:07", "What is a kind of specific qualitative change where you'd be training Llama-5 or Llama-4, and if you see it, it’d make you think “you know what, I'm not sure about open sourcing it”?", "Mark Zuckerberg 00:38:17", "It's a little hard to answer that in the abstract because there are negative behaviors that any product can exhibit where as long as you can mitigate it, it's okay. There’s bad things about social media that we work to mitigate. There's bad things about Llama-2 where we spend a lot of time trying to make sure that it's not like helping people commit violent acts or things like that. That doesn't mean that it's a kind of autonomous or intelligent agent. It just means that it's learned a lot about the world and it can answer a set of questions that we think would be unhelpful for it to answer. I think the question isn't really what behaviors would it show, it's what things would we not be able to mitigate after it shows that.", "I think that there's so many ways in which something can be good or bad that it's hard to actually enumerate them all up front. Look at what we've had to deal with in social media and the different types of harms. We've basically gotten to like 18 or 19 categories of harmful things that people do and we've basically built AI systems to identify what those things are and to make sure that doesn't happen on our network as much as possible. Over time I think you'll be able to break this down into more of a taxonomy too. I think this is a thing that we spend time researching as well, because we want to make sure that we understand that.", "Dwarkesh Patel 00:41:04", "It seems to me that it would be a good idea. I would be disappointed in a future where AI systems aren't broadly deployed and everybody doesn't have access to them. At the same time, I want to better understand the mitigations. If the mitigation is the fine-tuning, the whole thing about open weights is that you can then remove the fine-tuning, which is often superficial on top of these capabilities. If it's like talking on Slack with a biology researcher… I think models are very far from this. Right now, they’re like Google search. But if I can show them my Petri dish and they can explain why my smallpox sample didn’t grow and what to change, how do you mitigate that? Because somebody can just fine-tune that in there, right?", "Mark Zuckerberg 00:41:44", "That's true. I think a lot of people will basically use the off-the-shelf model and some people who have basically bad faith are going to try to strip out all the bad stuff. So I do think that's an issue. On the flip side, one of the reasons why I'm philosophically so pro open source is that I do think that a concentration of AI in the future has the potential to be as dangerous as it being widespread. I think a lot of people think about the questions of “if we can do this stuff, is it bad for it to be out in the wild and just widely available?” I think another version of this is that it's probably also pretty bad for one institution to have an AI that is way more powerful than everyone else's AI.", "There’s one security analogy that I think of. There are so many security holes in so many different things. If you could travel back in time a year or two years, let's say you just have one or two years more knowledge of the security holes. You can pretty much hack into any system. That’s not AI. So it's not that far-fetched to believe that a very intelligent AI probably would be able to identify some holes and basically be like a human who could go back in time a year or two and compromise all these systems.", "So how have we dealt with that as a society? One big part is open source software that makes it so that when improvements are made to the software, it doesn't just get stuck in one company's products but can be broadly deployed to a lot of different systems, whether they’re banks or hospitals or government stuff. As the software gets hardened, which happens because more people can see it and more people can bang on it, there are standards on how this stuff works. The world can get upgraded together pretty quickly.", "I think that a world where AI is very widely deployed, in a way where it's gotten hardened progressively over time, is one where all the different systems will be in check in a way. That seems fundamentally more healthy to me than one where this is more concentrated. So there are risks on all sides, but I think that's a risk that I don't hear people talking about quite as much. There's the risk of the AI system doing something bad. But I stay up at night worrying more about an untrustworthy actor having the super strong AI, whether it's an adversarial government or an untrustworthy company or whatever. I think that that's potentially a much bigger risk.", "Dwarkesh Patel 00:44:59", "As in, they could overthrow our government because they have a weapon that nobody else has?", "Mark Zuckerberg 00:45:06", "Or just cause a lot of mayhem. I think the intuition is that this stuff ends up being pretty important and valuable for both economic and security reasons and other things. If someone whom you don't trust or an adversary gets something more powerful, then I think that that could be an issue. Probably the best way to mitigate that is to have good open source AI that becomes the standard and in a lot of ways can become the leader. It just ensures that it's a much more even and balanced playing field.", "Dwarkesh Patel 00:45:49", "That seems plausible to me. If that works out, that would be the future I prefer. I want to understand mechanistically how the fact that there are open source AI systems in the world prevents somebody causing mayhem with their AI system? With the specific example of somebody coming with a bioweapon, is it just that we'll do a bunch of R&D in the rest of the world to figure out vaccines really fast? What's happening?", "Mark Zuckerberg 00:46:13", "If you take the security one that I was talking about, I think someone with a weaker AI trying to hack into a system that is protected by a stronger AI will succeed less. In terms of software security–", "Dwarkesh Patel 00:46:28", "How do we know everything in the world is like that? What if bioweapons aren't like that?", "Mark Zuckerberg 00:46:32", "I mean, I don't know that everything in the world is like that. Bioweapons are one of the areas where the people who are most worried about this stuff are focused and I think it makes a lot of sense. There are certain mitigations. You can try to not train certain knowledge into the model. There are different things but at some level if you get a sufficiently bad actor, and you don't have other AI that can balance them and understand what the threats are, then that could be a risk. That's one of the things that we need to watch out for.", "Dwarkesh Patel 00:47:19", "Is there something you could see in the deployment of these systems where you're training Llama-4 and it lied to you because it thought you weren't noticing or something and you're like “whoa what's going on here?” This is probably not likely with a Llama-4 type system, but is there something you can imagine like that where you'd be really concerned about deceptiveness and billions of copies of this being out in the wild?", "Mark Zuckerberg 00:47:46", "I mean right now we see a lot of hallucinations . It's more so that. I think it's an interesting question, how you would tell the difference between hallucination and deception. There are a lot of risks and things to think about. I try, in running our company at least, to balance these longer-term theoretical risks with what I actually think are quite real risks that exist today. So when you talk about deception, the form of that that I worry about most is people using this to generate misinformation and then pump that through our networks or others. The way that we've combated this type of harmful content is by building AI systems that are smarter than the adversarial ones.", "This informs part of my theory on this. If you look at the different types of harm that people do or try to do through social networks, there are ones that are not very adversarial. For example, hate speech is not super adversarial in the sense that people aren't getting better at being racist. That's one where I think the AIs are generally getting way more sophisticated faster than people are at those issues. And we have issues both ways. People do bad things, whether they're trying to incite violence or something, but we also have a lot of false positives where we basically censor stuff that we shouldn't. I think that understandably makes a lot of people annoyed. So I think having an AI that gets increasingly precise on that is going to be good over time.", "But let me give you another example: nation states trying to interfere in elections . That's an example where they absolutely have cutting edge technology and absolutely get better each year. So we block some technique, they learn what we did and come at us with a different technique. It's not like a person trying to say mean things, They have a goal. They're sophisticated. They have a lot of technology. In those cases, I still think about the ability to have our AI systems grow in sophistication at a faster rate than theirs do. It's an arms race but I think we're at least winning that arms race currently. This is a lot of the stuff that I spend time thinking about.", "Yes, whether it's Llama-4 or Llama-6, we need to think about what behaviors we're observing and it's not just us. Part of the reason why you make this open source is that there are a lot of other people who study this too. So we want to see what other people are observing, what we’re observing, what we can mitigate, and then we'll make our assessment on whether we can make it open source. For the foreseeable future I'm optimistic we will be able to. In the near term, I don't want to take our eye off the ball in terms of what are actual bad things that people are trying to use the models for today. Even if they're not existential, there are pretty bad day-to-day harms that we're familiar with in running our services. That's actually a lot of what we have to spend our time on as well.", "Dwarkesh Patel 00:51:24", "I found the synthetic data thing really curious. With current models it makes sense why there might be an asymptote with just doing the synthetic data again and again. But let’s say they get smarter and you use the kinds of techniques—you talk about in the paper or the blog posts that are coming out on the day this will be released—where it goes to the thought chain that is the most correct. Why do you think this wouldn't lead to a loop where it gets smarter, makes better output, gets smarter and so forth. Of course it wouldn't be overnight, but over many months or years of training potentially with a smarter model.", "Mark Zuckerberg 00:52:00", "I think it could, within the parameters of whatever the model architecture is. It's just that with today's 8B parameter models, I don't think you're going to get to be as good as the state-of-the-art multi-hundred billion parameter models that are incorporating new research into the architecture itself.", "Dwarkesh Patel 00:52:28", "But those will be open source as well, right?", "Mark Zuckerberg 00:52:31", "Well yeah, subject to all the questions that we just talked about but yes. We would hope that that'll be the case. But I think that at each point, when you're building software there's a ton of stuff that you can do with software but then at some level you're constrained by the chips that it's running on. So there are always going to be different physical constraints. How big the models are is going to be constrained by how much energy you can get and use for inference. I'm simultaneously very optimistic that this stuff will continue to improve quickly and also a little more measured than I think some people are about it. I don’t think the runaway case is a particularly likely one.", "Dwarkesh Patel 00:53:32", "I think it makes sense to keep your options open. There's so much we don't know. There's a case in which it's really important to keep the balance of power so nobody becomes a totalitarian dictator. There's a case in which you don't want to open source the architecture because China can use it to catch up to America's AIs and there is an intelligence explosion and they win that. A lot of things seem possible. Keeping your options open considering all of them seems reasonable.", "Mark Zuckerberg 00:53:57", "Yeah.", "00:53:57 - Caesar Augustus and metaverse", "Dwarkesh Patel 00:53:57", "Let's talk about some other things. Metaverse. What time period in human history would you be most interested in going into? 100,000 BCE to now, you just want to see what it was like?", "Mark Zuckerberg 00:54:09", "It has to be the past?", "Dwarkesh Patel 00:54:12", "Oh yeah, it has to be the past.", "Mark Zuckerberg 00:54:13", "I'm really interested in American history and classical history. I'm really interested in the history of science too. I actually think seeing and trying to understand more about how some of the big advances came about would be interesting. All we have are somewhat limited writings about some of that stuff. I'm not sure the metaverse is going to let you do that because it's going to be hard to go back in time for things that we don't have records of. I'm actually not sure that going back in time is going to be that important of a thing. I think it's going to be cool for like history classes and stuff, but that's probably not the use case that I'm most excited about for the metaverse overall.", "The main thing is just the ability to feel present with people, no matter where you are. I think that's going to be killer. In the AI conversation that we were having, so much of it is about physical constraints that underlie all of this. I think one lesson of technology is that you want to move things from the physical constraint realm into software as much as possible because software is so much easier to build and evolve. You can democratize it more because not everyone is going to have a data center but a lot of people can write code and take open source code and modify it. Τhe metaverse version of this is enabling realistic digital presence. That’s going to be an absolutely huge difference so people don't feel like they have to be physically together for as many things. Now I think that there can be things that are better about being physically together. These things aren't binary. It's not going to be like “okay, now you don't need to do that anymore.” But overall, I think it's just going to be really powerful for socializing, for feeling connected with people, for working, for parts of industry, for medicine, for so many things.", "Dwarkesh Patel 00:56:32", "I want to go back to something you said at the beginning of the conversation. You didn't sell the company for a billion dollars. And with the metaverse, you knew you were going to do this even though the market was hammering you for it. I'm curious. What is the source of that edge? You said “oh, values, I have this intuition,” but everybody says that. If you had to say something that's specific to you, how would you express what that is? Why were you so convinced about the metaverse?", "Mark Zuckerberg 00:57:02", "I think that those are different questions. What are the things that power me? We've talked about a bunch of the themes. I just really like building things. I specifically like building things around how people communicate and understanding how people express themselves and how people work. When I was in college I studied computer science and psychology. I think a lot of other people in the industry studied computer science. So, it's always been the intersection of those two things for me.", "It’s also sort of this really deep drive. I don't know how to explain it but I just feel constitutionally that I'm doing something wrong if I'm not building something new. Even when we were putting together the business case for investing a $100 billion in AI or some huge amount in the metaverse, we have plans that I think made it pretty clear that if our stuff works, it'll be a good investment. But you can't know for certain from the outset. There are all these arguments that people have, with advisors or different folks. It's like, “how are you confident enough to do this?” Well the day I stop trying to build new things, I'm just done. I'm going to go build new things somewhere else. I'm fundamentally incapable of running something, or in my own life, and not trying to build new things that I think are interesting. That's not even a question for me, whether we're going to take a swing at building the next thing. I'm just incapable of not doing that. I don't know.", "I'm kind of like this in all the different aspects of my life. Our family built this ranch in Kauai and I worked on designing all these buildings. We started raising cattle and I'm like “alright, I want to make the best cattle in the world so how do we architect this so that way we can figure this out and build all the stuff up that we need to try to do that.” I don't know, that's me. What was the other part of the question?", "Dwarkesh Patel 01:00:54", "I'm not sure but I'm actually curious about something else. So a 19-year-old Mark reads a bunch of antiquity and classics in high school and college. What important lesson did you learn from it? Not just interesting things you found, but there aren't that many tokens you consume by the time you're 19. A bunch of them were about the classics. Clearly that was important in some way.", "Mark Zuckerberg 01:01:15", "There aren't that many tokens you consume... That's a good question. Here’s one of the things I thought was really fascinating. Augustus became emperor and he was trying to establish peace. There was no real conception of peace at the time. The people's understanding of peace was peace as the temporary time between when your enemies inevitably attack you. So you get a short rest. He had this view of changing the economy from being something mercenary and militaristic to this actually positive-sum thing. It was a very novel idea at the time.", "That’s something that's really fundamental: the bounds on what people can conceive of at the time as rational ways to work. This applies to both the metaverse and the AI stuff. A lot of investors, and other people, can't wrap their head around why we would open source this. It’s like “I don't understand, it’s open source. That must just be the temporary time between which you're making things proprietary, right?” I think it's this very profound thing in tech that it actually creates a lot of winners.", "I don't want to strain the analogy too much but I do think that a lot of the time, there are models for building things that people often can't even wrap their head around. They can’t understand how that would be a valuable thing for people to do or how it would be a reasonable state of the world. I think there are more reasonable things than people think.", "Dwarkesh Patel 01:03:36", "That's super fascinating. Can I give you what I was thinking in terms of what you might have gotten from it? This is probably totally off, but I think it’s just how young some of these people are, who have very important roles in the empire. For example, Caesar Augustus, by the time he’s 19, is already one of the most important people in Roman politics. He's leading battles and forming the Second Triumvirate . I wonder if the 19-year-old you was thinking “I can do this because Caesar Augustus did this.”", "Mark Zuckerberg 01:04:01", "That's an interesting example, both from a lot of history and American history too. One of my favorite quotes is this Picasso quote that all children are artists and the challenge is to remain an artist as you grow up. When you’re younger, it’s just easier to have wild ideas. There are all these analogies to the innovator’s dilemma that exist in your life as well as for your company or whatever you’ve built. You’re earlier on in your trajectory so it's easier to pivot and take in new ideas without disrupting other commitments to different things. I think that's an interesting part of running a company. How do you stay dynamic?", "01:04:53 - Open sourcing the $10b model & custom silicon", "Dwarkesh Patel 01:04:53", "Let’s go back to the investors and open source. The $10B model, suppose it's totally safe. You've done these evaluations and unlike in this case the evaluators can also fine-tune the model, which hopefully will be the case in future models. Would you open source the $10 billion model?", "Mark Zuckerberg 01:05:11", "As long as it's helping us then yeah.", "Dwarkesh Patel 01:05:13", "But would it? $10 billion of R&D and now it's open source.", "Mark Zuckerberg 01:05:17", "That’s a question which we’ll have to evaluate as time goes on too. We have a long history of open sourcing software. We don’t tend to open source our product. We don't take the code for Instagram and make it open source. We take a lot of the low-level infrastructure and we make that open source. Probably the biggest one in our history was our Open Compute Project where we took the designs for all of our servers, network switches, and data centers, and made it open source and it ended up being super helpful. Although a lot of people can design servers the industry now standardized on our design, which meant that the supply chains basically all got built out around our design. So volumes went up, it got cheaper for everyone, and it saved us billions of dollars which was awesome.", "So there's multiple ways where open source could be helpful for us. One is if people figure out how to run the models more cheaply. We're going to be spending tens, or a hundred billion dollars or more over time on all this stuff. So if we can do that 10% more efficiently, we're saving billions or tens of billions of dollars. That's probably worth a lot by itself. Especially if there are other competitive models out there, it's not like our thing is giving away some kind of crazy advantage.", "Dwarkesh Patel 01:06:38", "So is your view that the training will be commodified?", "Mark Zuckerberg 01:06:44", "I think there's a bunch of ways that this could play out and that's one. So “commodity” implies that it's going to get very cheap because there are lots of options. The other direction that this could go in is qualitative improvements. You mentioned fine-tuning. Right now it's pretty limited what you can do with fine-tuning major other models out there. There are some options but generally not for the biggest models. There’s being able to do that, different app specific things or use case specific things or building them into specific tool chains. I think that will not only enable more efficient development, but it could enable qualitatively different things.", "Here's one analogy on this. One thing that I think generally sucks about the mobile ecosystem is that you have these two gatekeeper companies, Apple and Google, that can tell you what you're allowed to build. There's the economic version of that which is like when we build something and they just take a bunch of your money. But then there's the qualitative version, which is actually what upsets me more. There's a bunch of times when we've launched or wanted to launch features and Apple's just like “nope, you're not launching that.” That sucks, right? So the question is, are we set up for a world like that with AI? You're going to get a handful of companies that run these closed models that are going to be in control of the APIs and therefore able to tell you what you can build?", "For us I can say it is worth it to go build a model ourselves to make sure that we're not in that position. I don't want any of those other companies telling us what we can build. From an open source perspective, I think a lot of developers don't want those companies telling them what they can build either. So the question is, what is the ecosystem that gets built out around that? What are interesting new things? How much does that improve our products? I think there are lots of cases where if this ends up being like our databases or caching systems or architecture, we'll get valuable contributions from the community that will make our stuff better. Our app specific work that we do will then still be so differentiated that it won't really matter. We'll be able to do what we do. We'll benefit and all the systems, ours and the communities’, will be better because it's open source.", "There is one world where maybe that’s not the case. Maybe the model ends up being more of the product itself. I think it's a trickier economic calculation then, whether you open source that. You are commoditizing yourself then a lot. But from what I can see so far, it doesn't seem like we're in that zone.", "Dwarkesh Patel 01:09:42", "Do you expect to earn significant revenue from licensing your model to the cloud providers? So they have to pay you a fee to actually serve the model.", "Mark Zuckerberg 01:09:49", "We want to have an arrangement like that but I don't know how significant it'll be. This is basically our license for Llama. In a lot of ways it's a very permissive open source license, except that we have a limit for the largest companies using it. This is why we put that limit in. We're not trying to prevent them from using it. We just want them to come talk to us if they're going to just basically take what we built and resell it and make money off of it. If you're like Microsoft Azure or Amazon , if you're going to be reselling the model then we should have some revenue share on that. So just come talk to us before you go do that. That's how that's played out.", "So for Llama-2, we just have deals with basically all these major cloud companies and Llama-2 is available as a hosted service on all those clouds. I assume that as we release bigger and bigger models, that will become a bigger thing. It's not the main thing that we're doing, but I think if those companies are going to be selling our models it just makes sense that we should share the upside of that somehow.", "Dwarkesh Patel 01:10:55", "Regarding other open source dangers, I think you have genuine legitimate points about the balance of power stuff and potentially the harms you can get rid of because we have better alignment techniques or something. I wish there were some sort of framework that Meta had. Other labs have this where they say “if we see this concrete thing, then that's a no go on the open source or even potentially on deployment.” Just writing it down so the company is ready for it and people have expectations around it and so forth.", "Mark Zuckerberg 01:11:25", "That's a fair point on the existential risk side. Right now we focus more on the types of risks that we see today, which are more of these content risks. We don't want the model to be doing things that are helping people commit violence or fraud or just harming people in different ways. While it is maybe more intellectually interesting to talk about the existential risks, I actually think the real harms that need more energy in being mitigated are things where someone takes a model and does something to hurt a person. In practice for the current models, and I would guess the next generation and maybe even the generation after that, those are the types of more mundane harms that we see today, people committing fraud against each other or things like that. I just don't want to shortchange that. I think we have a responsibility to make sure we do a good job on that.", "Dwarkesh Patel 01:12:33", "Meta's a big company. You can handle both.", "As far as open source goes, I'm actually curious if you think the impact of open source, from PyTorch , React , Open Compute and other things, has been bigger for the world than even the social media aspects of Meta. I've talked to people who use these services and they think that it's plausible because a big part of the internet runs on these things.", "Mark Zuckerberg 01:12:55", "It's an interesting question. I mean almost half the world uses our consumer products so it's hard to beat that. But I think open source is really powerful as a new way of building things. I mean, it's possible. It may be one of these things like Bell Labs , where they were working on the transistor because they wanted to enable long-distance calling. They did and it ended up being really profitable for them that they were able to enable long-distance calling. 5 to 10 years out from that, if you asked them what was the most useful thing that they invented it's like “okay, we enabled long distance calling and now all these people are long-distance calling.” But if you asked a hundred years later maybe it's a different answer.", "I think that's true of a lot of the things that we're building: Reality Labs , some of the AI stuff, some of the open source stuff. The specific products evolve, and to some degree come and go, but the advances for humanity persist and that's a cool part of what we all get to do.", "Dwarkesh Patel 01:14:14", "By when will the Llama models be trained on your own custom silicon?", "Mark Zuckerberg 01:14:19", "Soon, not Llama-4. The approach that we took is we first built custom silicon that could handle inference for our ranking and recommendation type stuff, so Reels, News Feed ads, etc. That was consuming a lot of GPUs. When we were able to move that to our own silicon, we're now able to use the more expensive NVIDIA GPUs only for training. At some point we will hopefully have silicon ourselves that we can be using for at first training some of the simpler things, then eventually training these really large models. In the meantime, I'd say the program is going quite well and we're just rolling it out methodically and we have a long-term roadmap for it.", "01:15:19 - Zuck as CEO of Google+", "Dwarkesh Patel 01:15:19", "Final question. This is totally out of left field. If you were made CEO of Google+ could you have made it work?", "Mark Zuckerberg 01:15:24", "Google+? Oof. I don't know. That's a very difficult counterfactual.", "Dwarkesh Patel 01:15:35", "Okay, then the real final question will be: when Gemini was launched, was there any chance that somebody in the office uttered : “ Carthago delenda est ”.", "Mark Zuckerberg 01:15:43", "No, I think we're tamer now. It's a good question. The problem is there was no CEO of Google+. It was just a division within a company. You asked before about what are the scarcest commodities but you asked about it in terms of dollars. I actually think for most companies, of this scale at least, it's focus. When you're a startup maybe you're more constrained on capital. You’re just working on one idea and you might not have all the resources. You cross some threshold at some point with the nature of what you're doing. You're building multiple things. You're creating more value across them but you become more constrained on what you can direct to go well.", "There are always the cases where something random awesome happens in the organization and I don't even know about it. Those are great. But I think in general, the organization's capacity is largely limited by what the CEO and the management team are able to oversee and manage. That's been a big focus for us. As Ben Horowitz says “keep the main thing, the main thing” and try to stay focused on your key priorities.", "Dwarkesh Patel 01:17:14", "Awesome, that was excellent, Mark. Thanks so much. That was a lot of fun.", "Mark Zuckerberg 01:17:17", "Yeah, really fun. Thanks for having me.", "Dwarkesh Patel 01:17:19", "Absolutely." ]
[ "https://twitter.com/cgbessellieu", "https://bit.ly/4aVllm4", "https://ai.meta.com/", "https://llama.meta.com/", "https://www.theverge.com/2024/4/9/24125217/meta-llama-smaller-lightweight-model-ai", "https://www.perplexity.ai/search/what-does-a-R2bzVN2UTJOLH1BUHsWiKA#0", "https://ai.meta.com/tools/system-cards/multimodal-generative-ai-systems/", "https://ai.meta.com/blog/multilingual-model-speech-recognition/", "https://ai.meta.com/research/publications/effective-long-context-scaling-of-foundation-models/", "https://arxiv.org/abs/2009.03300", "https://llama.meta.com/llama2/", "https://www.nvidia.com/en-us/data-center/h100/", "https://www.reuters.com/breakingviews/metas-spending-splurge-starts-look-troubling-2022-10-26/", "https://www.nytimes.com/2022/02/02/technology/meta-facebook-earnings-metaverse.html", "https://about.meta.com/what-is-the-metaverse/", "https://www.investopedia.com/terms/c/capitalexpenditure.asp", "https://en.wikipedia.org/wiki/Graphics_processing_unit", "https://creators.facebook.com/tools/reels/?locale=en_US", "https://ai.meta.com/blog/ai-unconnected-content-recommendations-facebook-instagram/", "https://www.theregister.com/2022/06/30/meta_we_need_5x_more/", "https://techcrunch.com/2013/03/13/would-facebook-have-sold-to-yahoo-for-1-6-billion-well-never-know/", "https://techcrunch.com/2013/12/09/facebook-artificial-intelligence-lab-lecun/", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://openai.com/blog/chatgpt", "https://openai.com/research/dall-e", "https://ai.meta.com/genai/", "https://en.wikipedia.org/wiki/Foundation_model", "https://about.fb.com/news/2023/09/new-ray-ban-meta-smart-glasses/", "https://www.meta.com/quest/", "http://meta.ai", "https://arxiv.org/abs/2210.07128", "https://en.wikipedia.org/wiki/Automated_reasoning", "https://arxiv.org/abs/2108.10152", "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5732077/", "https://hazelcast.com/glossary/machine-learning-inference/", "https://chanzuckerberg.com/", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)", "https://en.wikipedia.org/wiki/Hand_coding", "https://en.wikipedia.org/wiki/Knowledge_distillation#:~:text=In%20machine%20learning%2C%20knowledge%20distillation,might%20not%20be%20fully%20utilized.", "https://transparency.meta.com/features/explaining-ranking/fb-feed", "https://transparency.meta.com/features/explaining-ranking/ig-feed/", "https://arxiv.org/abs/2203.15556", "http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/IEEE_SSCI_2015/data/7560b423.pdf", "https://www.nytimes.com/2023/08/16/technology/ai-gpu-chips-shortage.html", "https://nationalinterest.org/blog/techland/promise-and-peril-ai-power-grid-208858", "https://www.investopedia.com/terms/r/randd.asp", "https://www.energy.gov/ne/articles/infographic-how-much-power-does-nuclear-reactor-produce", "https://www.costar.com/article/1471314418/amazon-pays-650-million-for-nuclear-powered-data-center-in-pennsylvania", "https://www.run.ai/guides/gpu-deep-learning/distributed-training#:~:text=As%20its%20name%20suggests%2C%20distributed,to%20accelerate%20the%20training%20process.", "https://en.wikipedia.org/wiki/Synthetic_data#Machine_learning", "https://techpolicyinstitute.org/publications/artificial-intelligence/from-tokens-to-context-windows-simplifying-ai-jargon/", "https://en.wikipedia.org/wiki/High-_and_low-level#:~:text=In%20computer%20science%2C%20software%20is,hardware%20drivers%2C%20etc.).", "https://plato.stanford.edu/entries/artificial-intelligence/", "https://plato.stanford.edu/entries/consciousness/", "https://plato.stanford.edu/entries/life/", "https://plato.stanford.edu/entries/agency/", "https://en.wikipedia.org/wiki/Open-source_artificial_intelligence", "https://opensource.org/blog/compelling-responses-to-ntias-ai-open-model-weights-rfc", "https://www.rand.org/pubs/research_reports/RRA2977-2.html", "https://www.nature.com/articles/d41586-024-00189-3", "https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)", "https://carnegieendowment.org/2024/01/31/looking-ahead-generative-ai-pub-91489", "https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd#:~:text=As%20the%20U.S.%20presidential%20race,to%20engage%20in%20malign%20influence.%E2%80%9D&text=With%20AI%20deepfakes%2C%20a%20candidate's,can%20be%20smeared%2C%20or%20softened.", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://www.wired.com/story/mark-zuckerberg-inside-hawaii-compound/", "https://www.instagram.com/zuck/p/C15Lck4SfpS/?img_index=1", "https://en.wikipedia.org/wiki/Augustus", "https://en.wikipedia.org/wiki/Pax_Romana", "https://en.wikipedia.org/wiki/Battle_of_Mutina", "https://en.wikipedia.org/wiki/Second_Triumvirate", "https://stedelijkstudies.com/picasso-and-the-art-of-children/#:~:text=Several%20modern%20artists%20celebrated%20the,artist%20once%20we%20grow%20up.%E2%80%9D", "https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma", "https://tech.facebook.com/engineering/2021/11/open-compute-project/", "https://www.theverge.com/2019/1/30/18203551/apple-facebook-blocked-internal-ios-apps", "https://azure.microsoft.com/en-us", "https://aws.amazon.com/free/?trk=fce796e8-4ceb-48e0-9767-89f7873fac3d&sc_channel=ps&ef_id=Cj0KCQjwiYOxBhC5ARIsAIvdH52nDKliMjrj23E3ECqwNWLErxQrfpjdolOxCX1-rhhvyoab_OoQWIkaAqrVEALw_wcB:G:s&s_kwcid=AL!4422!3!432339156147!e!!g!!amazon%20web%20services!1644045032!68366401812", "https://aws.amazon.com/bedrock/llama/", "https://www.anthropic.com/news/anthropics-responsible-scaling-policy", "https://pytorch.org/", "https://react.dev/", "https://www.opencompute.org/", "https://en.wikipedia.org/wiki/Bell_Labs", "https://en.wikipedia.org/wiki/Transistor#Bipolar_transistors", "https://about.meta.com/realitylabs/", "https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/", "https://www.nvidia.com/en-us/data-center/h100/", "https://en.wikipedia.org/wiki/Google%2B", "https://gemini.google.com/", "https://www.vanityfair.com/news/2016/06/how-mark-zuckerberg-led-facebooks-war-to-crush-google-plus", "https://en.wikipedia.org/wiki/Carthago_delenda_est", "https://a16z.com/author/ben-horowitz/" ]
https://www.dwarkesh.com/p/mark-zuckerberg-2
Mark Zuckerberg — Meta's AGI Plan
[ "How Llama 4 compares to other models", "Dwarkesh Patel", "Mark, thanks for coming on the podcast again.", "Mark Zuckerberg", "Yeah, happy to do it. Good to see you.", "Dwarkesh Patel", "You too. Last time you were here, you had launched Llama 3 . Now you've launched Llama 4 .", "Mark Zuckerberg", "Well, the first version.", "Dwarkesh Patel", "That's right. What's new? What's exciting? What's changed?", "Mark Zuckerberg", "The whole field is so dynamic. I feel like a ton has changed since the last time we talked. Meta AI has almost a billion people using it monthly now, which is pretty wild. I think this is going to be a really big year for all of this, especially once you get the personalization loop going, which we’re just starting to build in now really, from both the context that all the algorithms have about what you’re interested in — feed, your profile information, your social graph information — but also what you're interacting with the AI about. That’s going to be the next thing that's super exciting. I'm really big on that.", "The modeling stuff continues to make really impressive advances too. I'm pretty happy with the first set of Llama 4 releases. We announced four models and released the first two — the Scout and Maverick ones — which are mid-size to small models.", "The most popular Llama 3 model was the 8 billion parameter one. So we’ve got one of those coming in the Llama 4 series too. Our internal code name for it is “Little Llama.” That’s coming probably over the next few months.", "Scout and Maverick are good. They have some of the highest intelligence per cost you can get of any model out there. They’re natively multimodal , very efficient, run on one host. They’re designed to be very efficient and low latency, for a lot of the use cases we’re building for internally. That’s our whole thing. We build what we want, and then we open-source it so other people can use it too. I'm excited about that.", "I'm also excited about the Behemoth model, which is coming up. It's going to be our first model that's sort of at the frontier — more than 2 trillion parameters. As the name says, it's quite big. We’re trying to figure out how to make that useful for people. It’s so big that we've had to build a bunch of infrastructure just to be able to post-train it ourselves.", "Now we're trying to wrap our heads around, how does the average developer out there actually use something like this? How do we make it useful — maybe by distilling it into models that are a reasonable size to run? Because you're obviously not going to want to run something like that in a consumer model.", "As you saw with the Llama 3 stuff last year, the initial launch was exciting and then we just built on that over the year. 3.1 released the 405 billion model, 3.2 is when we got all the multimodal stuff in. We basically have a roadmap like that for this year too. So a lot going on.", "Dwarkesh Patel", "I'm interested to hear more about it. There's this impression that the gap between the best closed-source and the best open-source models has increased over the last year. I know the full family of Llama 4 models isn't out yet, but Llama 4 Maverick is at #35 on Chatbot Arena . On a bunch of major benchmarks, it seems like o4-mini or Gemini 2.5 Flash are beating Maverick, which is in the same class. What do you make of that impression?", "Mark Zuckerberg", "There are a few things. First, I actually think this has been a very good year for open source overall. If you go back to where we were last year, Llama was the only real, super-innovative open-source model. Now you have a bunch of them in the field.", "In general, the prediction that this would be the year open source generally overtakes closed source as the most used models out there, I think that's generally on track to be true.", "One interesting surprise — positive in some ways, negative in others, but overall good — is that it’s not just Llama. There are a lot of good ones out there. I think that's quite good.", "Then there's the reasoning phenomenon, which you're alluding to talking about o3 , o4, and other models. There's a specialization happening. If you want a model that’s the best at math problems, coding, or different things like those tasks, then reasoning models that consume more test-time or inference-time compute in order to provide more intelligence are a really compelling paradigm. And we're building a Llama 4 reasoning model too. It'll come out at some point.", "But for a lot of the applications we care about, latency and good intelligence per cost are much more important product attributes. If you're primarily designing for a consumer product, people don't want to wait half a minute to get an answer. If you can give them a generally good answer in half a second, that's a great tradeoff.", "I think both of these are going to end up being important directions. I’m optimistic about integrating reasoning models with the core language models over time. That's the direction Google has gone in with some of the more recent Gemini models. I think that's really promising. But I think there’s just going to be a bunch of different stuff that goes on.", "You also mentioned the whole Chatbot Arena thing, which I think is interesting and points to the challenge around how you do benchmarking. How do you know what models are good for which things?", "One of the things we've generally tried to do over the last year is anchor more of our models in our Meta AI product north star use cases. The issue with open source benchmarks, and any given thing like the LM Arena stuff, is that they’re often skewed toward a very specific set of uses cases, which are often not actually  what any normal person does in your product. The portfolio of things they’re trying to measure is often different from what people care about in any given product.", "Because of that, we’ve found that trying to optimize too much for that kind of stuff has led us astray. It’s actually not led towards the highest quality product, the most usage, and best feedback within Meta AI as people use our stuff.", "So we're trying to anchor our north star on the product value that people report to us, what they say that they want, and what their revealed preferences are, and using the experiences that we have. Sometimes these benchmarks just don't quite line up. I think a lot of them are quite easily gameable.", "On the Arena you'll see stuff like Sonnet 3.7 , which is a great model, and it's not near the top. It was relatively easy for our team to tune a version of Llama 4 Maverick that could be way at the top. But the version we released, the pure model, actually has no tuning for that at all, so it's further down. So you just need to be careful with some of these benchmarks. We're going to index primarily on the products.", "​​ Dwarkesh Patel", "Do you feel like there is some benchmark which captures what you see as a north star of value to the user which can be be objectively measured between different models and where you'd say, \"I need Llama 4 to come out on top on this”?", "Mark Zuckerberg", "Our benchmark is basically user value in Meta AI.", "Dwarkesh Patel", "But you can't compare that to other models.", "Mark Zuckerberg", "We might be able to, because we might be able to run other models and be able to tell. That's one of the advantages of open source. You have a good community of folks who can poke holes in your stuff and point out, \"Okay, where is your model not good, and where is it good?\"", "The reality at this point is that all these models are optimized for slightly different mixes of things. Everyone is trying to go towards the same end in that all the leading labs are trying to create general intelligence, superintelligence, whatever you call it. AI that can lead toward a world of abundance where everyone has these superhuman tools to create whatever they want. That leads to dramatically empowering people and creating all these economic benefits.", "However you define it, that's what a lot of the labs are going for. But there's no doubt that different folks have optimized toward different things. I think the Anthropic folks have really focused on coding and agents around that. The OpenAI folks, I think, have gone a little more toward reasoning recently.", "There’s a space which, if I had to guess, I think will end up being the most used one: quick, very natural to interact with, natively multimodal, fitting throughout your day in the ways you want to interact with it.", "I think you got a chance to play around with the new Meta AI app that we're releasing. One of the fun things we put in there is the demo for the full-duplex voice. It's early. There’s a reason why we haven't made that the default voice model in the app yet. But there's something about how naturally conversational it is that's really fun and compelling.", "Being able to mix that in with the right personalization is going to lead toward a product experience where… If you fast-forward a few years, I think we're just going to be talking to AI throughout the day about different things we're wondering about.", "You'll have your phone. You'll talk to it while browsing your feed apps. It'll give you context about different stuff. It'll answer your questions. It'll help you as you're interacting with people in messaging apps. Eventually, I think we'll walk through our daily lives and have glasses or other kinds of AI devices and just seamlessly interact with it all day long.", "That’s the north star. Whatever the benchmarks are that lead toward people feeling like the quality is where they want to interact with it, that's what will ultimately matter the most to us.", "Intelligence explosion", "Dwarkesh Patel", "I got a chance to play around with both Orion and also the Meta AI app, and the voice mode was super smooth. It was quite impressive.", "On the point of what the different labs are optimizing for — to steelman their view — I think a lot of them believe that once you fully automate software engineering and AI research, then you can kick off an intelligence explosion . You would have millions of copies of these software engineers replicating the research that happened between Llama 1 and Llama 4 — that scale of improvement again — but in a matter of weeks or months rather than years. So it really matters to just close the loop on the software engineer, and then you can be the first to ASI . What do you make of that?", "Mark Zuckerberg", "I personally think that's pretty compelling. That's why we have a big coding effort too. We're working on a number of coding agents inside Meta. Because we're not really an enterprise software company, we're primarily building it for ourselves.", "Again, we go for a specific goal. We're not trying to build a general developer tool. We're trying to build a coding agent and an AI research agent that advances Llama research specifically. And it's fully plugged into our toolchain and all that.", "That's important and is going to end up being an important part of how this stuff gets done. I would guess that sometime in the next 12 to 18 months, we'll reach the point where most of the code that's going toward these efforts is written by AI. And I don't mean autocomplete.", "Today you have good autocomplete. You start writing something and it can complete a section of code. I'm talking more like: you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already. I think that's going to be a really important part of this for sure.", "But I don't know if that's the whole game. That's going to be a big industry, and it's going to be an important part of how AI gets developed. But I think there are still… One way to think about it is that this is a massive space. I don't think there's just going to be one company with one optimization function that serves everyone as best as possible. There are going to be a bunch of different labs doing leading work in different domains. Some will be more enterprise-focused or coding-focused. Some will be more productivity-focused. Some will be more social or entertainment-focused.", "Within the assistant space, there will be some that are more informational and productivity-focused, and some that are more companion-focused. It’s going to be a lot of stuff that’s just fun and entertaining and shows up in your feed.", "There's just a huge amount of space. Part of what's fun about going toward this AGI future is that there are a bunch of common threads for what needs to get invented, but also a lot of things that still need to be created. I think you're going to start seeing more specialization between different groups, if I had to guess.", "Dwarkesh Patel", "It’s really interesting to me that you basically agree with the premise that there will be an intelligence explosion and we’ll get something like superintelligence on the other end. Tell me if I'm misunderstanding you. If that’s the case, why even bother with personal assistants and whatever else? Why not just get to superhuman intelligence first and then deal with everything else later?", "Mark Zuckerberg", "I think that's just one aspect of the flywheel. Part of what I generally disagree with on the fast-takeoff view is that it takes time to build out physical infrastructure.", "If you want to build a gigawatt cluster of compute, that just takes time. NVIDIA needs time to stabilize their new generation of systems. Then you need to figure out the networking around it. Then you need to build the building. You need to get permitting. You need to get the energy. Maybe that means gas turbines or green energy, either way, there’s a whole supply chain of that stuff.", "We talked about this a bunch the last time I was on the podcast with you. I think some of these are just physical-world, human-time things. As you start getting more intelligence in one part of the stack, you’re just going to run into a different set of bottlenecks. That’s how engineering always works: solve one bottleneck, you get another bottleneck.", "Another bottleneck in the system or ingredient that’s going to make this work well, is people getting used to learning and having a feedback loop with using the system. These systems don’t just show up fully formed with people magically knowing how to use them. There's a co-evolution that happens where people are learning how to best use these AI assistants. At the same time, the AI assistants are learning what people care about. Developers are making the AI assistants better.", "You're building up a base of context too. You wake up a year or two into it and the assistant can reference things you talked about two years ago and that’s pretty cool. You couldn’t do that even if you launched the perfect thing on day one. There’s no way it could reference what you talked about two years ago if it didn’t exist two years ago.", "So I guess my view is that there's this huge intelligence growth. There’s a very rapid curve on the uptake of people interacting with the AI assistants, and the learning feedback and data flywheel around that. And then there is also the buildout of the supply chains and infrastructure and regulatory frameworks to enable the scaling of a lot of the physical infrastructure. At some level, all of those are going to be necessary, not just the coding piece.", "One specific example of this that I think is interesting. Even if you go back a few years ago, we had a project, I think it was on our ads team, to automate ranking experiments. That's a pretty constrained environment. It's not open-ended code. It’s basically, look at the whole history of the company — every experiment that any engineer has ever done in the ad system — and look at what worked, what didn't, and what the results of those were. Then basically formulate new hypotheses for different tests that we should run that could improve the performance of the ad system.", "What we basically found was that we were bottlenecked on compute to run tests, based on the number of hypotheses. It turns out, even with just the humans we have right now on the ads team, we already have more good ideas to test than we actually have either compute or, really, cohorts of people to test them with.", "Even if you have three and a half billion people using your products, you still want each test to be statistically significant. It needs to have hundreds of thousands or millions of people. There's only so much throughput you can get on testing through that. So we're already at the point, even with just the people we have, that we can't really test everything that we want.", "Now just being able to test more things is not necessarily going to be additive to that. We need to get to the point where the average quality of the hypotheses that the AI is generating is better than all the things above the line that we’re actually able to test that the best humans on the team have been able to do, before it will even be marginally useful for it.", "We'll get there I think pretty quickly. But it's not just, “Okay, cool, the thing can write code, and now all of a sudden everything is just improving massively.” There are real-world constraints that need to be overcome.", "Then you need to have the compute and the people to test. Then over time, as the quality creeps up, are we here in five or 10 years where no set of people can generate a hypothesis as good as the AI system? I don't know, maybe. In that world, obviously that's going to be how all the value is created. But that's not the first step.", "Dwarkesh Patel", "So if you buy this view, that this is where intelligence is headed, the reason to be bullish on Meta is obviously that you have all this distribution. You can also use that to learn more things that can be useful for training. You mentioned the Meta AI app now has a billion active users.", "Mark Zuckerberg", "Not the app. The app is a standalone thing that we're just launching now. It’ll be fun for people who want to use it. It's a cool experience. We can talk about that too because we’re experimenting with some new ideas in there that I think are novel and worth talking through.", "But I’m mostly talking about our apps. Meta AI is actually most used in WhatsApp. WhatsApp is mostly used outside of the U.S. We just passed like a hundred million people in the US, but it's not the primary messaging system in the US, iMessage is. So people in the U.S. probably tend to underestimate Meta AI usage somewhat. But part of the reason the standalone app is going to be so important is because the US, for a lot of reasons, is one of the most important countries. And the fact that WhatsApp is the main way people are using Meta AI and that's not the main messaging system in the US means we need another way to build a first-class experience that's really in front of people.", "Dwarkesh Patel", "And I guess, to finish the question, the bearish case would be that if the future of AI is less about just answering your questions and more about being a virtual coworker, then it's not clear how Meta AI inside of WhatsApp gives you the relevant training data to make a fully autonomous programmer or remote worker. In that case, does it not matter that much who has more distribution right now with LLMs?", "Mark Zuckerberg", "Again, I just think there are going to be different things. Imagine you were sitting at the beginning of the development of the internet and you asked, \"What's going to be the main internet thing? Is it going to be knowledge work or massive consumer apps?\"", "You got both. You don’t have to choose one. The world is big and complicated. Does one company build all of that stuff? Normally the answer is no. But to your question, people do not code in WhatsApp for the most part. And I don't foresee that people starting to write code in WhatsApp is going to be a major use case. Although I do think people are going to ask AI to do a lot of things that result in the AI coding without them necessarily knowing it. That's a separate thing.", "We do have a lot of people who are writing code at Meta and they use Meta AI. We have this internal thing called MetaMate, and a number of different coding and AI research agents that we're building around that. That has its own feedback loop and I think it can get quite good for accelerating those efforts.", "But again, there are going to be a lot of things. AI is almost certainly going to unlock a massive revolution in knowledge work and code. I also think it’s going to be the next generation of search and how people get information, and do more complex information tasks.", "I also think it's going to be fun. People are going to use it to be entertained. A lot of the internet today is memes and humor. We have this amazing technology at our fingertips. It’s amazing and funny when you think about how much of human energy just goes toward entertaining ourselves, designing, pushing culture forward, and finding humorous ways to explain cultural phenomena that we observe. I think that's almost certainly going to be the case in the future.", "Look at the evolution of things like Instagram and Facebook. If you go back 10, 15, 20 years ago, it was text. Then we all got phones with cameras, and most of the content became photos. Then the mobile networks got good enough that if you wanted to watch a video on your phone, it wasn't just buffering the whole time. So that got good.", "Over the last 10 years, most of the content has moved toward video at this point. Today, most of the time spent on Facebook and Instagram is on video. But do you think in five years we’re just going to be sitting in our feed and consuming media that's just video? No, it's going to be interactive. You'll be scrolling through your feed. There will be content that maybe looks like a Reel to start. But you can talk to it, or interact with it, and it talks back, or it changes what it's doing. Or you can jump into it like a game and interact with it. That's all going to be AI.", "My point is that there are going to be all these different things. We're ambitious, so we're working on a bunch of them. But I don't think any one company is going to do all of it.", "AI Friends, Therapists & Girlfriend", "Dwarkesh Patel", "On this point about AI-generated content and AI interactions, already people have meaningful relationships with AI therapists, AI friends, maybe more. This is just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth.", "People are going to have relationships with AI. How do we make sure these are healthy relationships?", "Mark Zuckerberg", "There are a lot of questions that you only can really answer as you start seeing the behaviors. Probably the most important upfront thing is just to ask that question and care about it at each step along the way. But I also think being too prescriptive upfront and saying, \"We think these things are not good\" often cuts off value.", "People use stuff that's valuable for them. One of my core guiding principles in designing products is that people are smart. They know what's valuable in their lives. Every once in a while, something bad happens in a product and you want to make sure you design your product well to minimize that.", "But if you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong. You just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life. That's the main way I think about it.", "I do think people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with people in their lives. \"I'm having this issue with my girlfriend. Help me have this conversation.” Or, \"I need to have a hard conversation with my boss at work. How do I have that conversation?\" That's pretty helpful. As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.", "Here’s one stat from working on social media for a long time that I always think is crazy. The average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more. I think it's something like 15 friends or something. At some point you're like, \"All right, I'm just too busy, I can't deal with more people.\"", "But the average person wants more connection than they have. There's a lot of concern people raise like, \"Is this going to replace real-world, physical, in-person connections?\" And my default is that the answer to that is probably not. There are all these things that are better about physical connections when you can have them. But the reality is that people just don't have as much connection as they want. They feel more alone a lot of the time than they would like.", "So I think a lot of these things — things that today might have a little bit of stigma around them — over time, we'll find the vocabulary as a society to articulate why they are valuable, why the people who are doing them are rational for doing it, and how it is actually adding value to their lives. But also the field is very early. There are a handful of companies doing virtual therapists, virtual girlfriend-type stuff. But it's very early. The embodiment in those things is still pretty weak. You open it up and it's just an image of the therapist or the person you're talking to. Sometimes there's some very rough animation, but it's not an embodiment.", "You've seen the stuff we're working on in Reality Labs, where you have the Codec Avatars and it actually feels like a real person. That's where it's going. You'll be able to have an always-on video chat with the AI. The gestures are important too. More than half of communication, when you're actually having a conversation, is not the words you speak. It's all the nonverbal stuff.", "Dwarkesh Patel", "I did get a chance to check out Orion the other day, and I thought it was super impressive. I'm mostly optimistic about the technology. Generally, like you mentioned, I'm pretty libertarian about this. If people are doing something, they probably think it's good for them. Although, I actually don't know if it's the case that if somebody is using TikTok, they would say that they're happy with how much time they're spending on TikTok or something.", "I'm mostly optimistic about it in the sense that if we're going to be living in this future world of AGI, we need to be upgrading our capabilities too, with tools like this. And just generally, there can be more beauty in the world if you can see Studio Ghibli everywhere or something.", "I was worried about one of the flagship use cases that your team showed me. I'm sitting at the breakfast table and on the periphery of my vision is just a bunch of Reels that are scrolling by. Maybe in the future, my AI girlfriend is on the other side of the screen or something. So I am worried that we're just removing all the friction between getting totally reward-hacked by our technology. How do we make sure this is not what ends up happening in five years?", "Mark Zuckerberg", "Again, I think people have a good sense of what they want. That experience you saw was just a demo to show multitasking and holograms. I agree, I don't think the future is one where you have stuff that's trying to compete for your attention in the corner of your vision all the time. I don't think people would like that too much.", "As we're designing these glasses, it's actually one of the things that we're really mindful of. Probably the number one thing the glasses need to do is get out of the way and be good glasses. As an aside, I think that's part of the reason why the Ray-Ban Meta product has done so well. It's great for listening to music, taking phone calls, taking photos and videos. The AI is there when you want it. But when you don't, it's just a good-looking pair of glasses that people like. It gets out of the way well.", "I would guess that's going to be a very important design principle for the augmented reality future. The main thing that I see here is this. It's kind of crazy that, for how important the digital world is in all of our lives, the only way we access it is through these physical, digital screens. You have your phone, your computer. You can put a big TV on your wall. It's this huge physical thing.", "It just seems like we're at the point with technology where the physical and digital world should really be fully blended. That's what holographic overlays allow you to do. But I agree. I think a big part of the design principles around that will be around how you'll be interacting with people. You'll be able to bring digital artifacts into those interactions and do cool things very seamlessly.", "If I want to show you something, here’s a screen. We can interact with it. It can be 3D. We can play with it. You want to play a card game? All right, here’s a deck of cards. We can play with it. If two of us are physically together and we have a third friend who’s hologramming in, they can participate too.", "But in that world too — just as you don't want your physical space to be cluttered because it wears on you psychologically — I don't think people are going to want their digital-physical space to feel that way either.  That's more of an aesthetic norm that will have to get worked out, but I think we’ll figure that out.", "DeepSeek & China", "Dwarkesh Patel", "Going back to the AI conversation, you were mentioning how big of a bottleneck the physical infrastructure can be. Related to other open-source models, like DeepSeek and so forth, DeepSeek right now has less compute than a lab like Meta and you could argue that it's competitive with the Llama models.", "If China is better at physical infrastructure, industrial scale-ups, getting more power and more data centers online, how worried are you that they might beat us here?", "Mark Zuckerberg", "It's a real competition. You're seeing industrial policies really play out. China is bringing online more power. Because of that, the US really needs to focus on streamlining the ability to build data centers and produce energy. Otherwise, I think we’ll be at a significant disadvantage.", "At the same time, some of the export controls on things like chips, I think you can see how they’re clearly working in a way. There was all the conversation with DeepSeek about, \"Oh, they did all these very impressive low-level optimizations.\" And the reality is, they did and that is impressive.", "But then you ask, \"Why did they have to do that, when none of the American labs did it?\" It’s because they’re using partially nerfed chips that are the only ones NVIDIA is allowed to sell in China because of the export controls. DeepSeek basically had to spend a bunch of their calories and time doing low-level infrastructure optimizations that the American labs didn’t have to do.", "Now, they produced a good result on text. DeepSeek is text-only. The infrastructure is impressive. The text result is impressive. But every new major model that comes out now is multimodal. It's image, it's voice. Theirs isn't.", "Now the question is, why is that the case? I don’t think it’s because they’re not capable of doing it. It's because they had to spend their calories on doing these infrastructure optimizations to overcome the fact that there were these export controls.", "But when you compare Llama 4 with DeepSeek —I mean our reasoning model isn’t out yet, so the R1 comparison isn’t clear yet— but we’re basically in the same ballpark on all the text stuff that DeepSeek is doing but with a smaller model. So the cost-per-intelligence is lower with what we’re doing for Llama on text. On the multimodal side we’re effectively leading at and it just doesn’t exist in their models.", "So the Llama 4 models, when you compare them to what DeepSeek is doing, are good. I think people will generally prefer to use the Llama 4 models. But there’s this interesting contour where it’s clearly a good team doing stuff over there. And you're right to ask about the accessibility of power, the accessibility of compute and chips, because the work that you're seeing different labs do and the way it's playing out is somewhat downstream of that.", "Open source AI", "Dwarkesh Patel", "So Sam Altman recently tweeted that OpenAI is going to release an open-source SOTA reasoning model. I think part of the tweet was that they won’t do anything silly, like say you can only use it if you have less than 700 million users.", "DeepSeek has the MIT license , whereas I think a couple of the contingencies in the Llama license require you to say \"built with Llama\" on applications using it or any model that you train using Llama has to begin with the word \"Llama.\" What do you think about the license? Should it be less onerous for developers?", "Mark Zuckerberg", "Look, we basically pioneered the open-source LLM thing. So I don't consider the license to be onerous. When we were starting to push on open source, there was this big debate in the industry. Is this even a reasonable thing to do? Can you do something that is safe and trustworthy with open source? Will open source ever be able to be competitive enough that anyone will even care?", "Basically, when we were answering those questions a lot of the hard work was done by the teams at Meta. There were other folks in the industry but really, the Llama models were the ones that broke open this whole open-source AI thing in a huge way.", "If we’re going to put all this energy into it, then at a minimum, if you're going to have these large cloud companies — like Microsoft and Amazon and Google — turn around and sell our model, then we should at least be able to have a conversation with them before they do that around what kind of business arrangement we should have.", "Our goal with the license, we're generally not trying to stop people from using the model. We just think that if you're one of those companies, or if you're Apple, just come talk to us about what you want to do. Let's find a productive way to do it together. I think that’s generally been fine.", "Now, if the whole open-source part of the industry evolves in a direction where there are a lot of other great options and the license ends up being a reason why people don’t want to use Llama, then we’ll have to reevaluate the strategy. What it makes sense to do at that point. But I don’t think we’re there.", "That’s not, in practice, something we’ve seen, companies coming to us and saying, “We don’t want to use this because your license says if you reach 700 million people, you have to come talk to us.” So far, that’s been more something we’ve heard from open-source purists like, “Is this as clean of an open-source model as you’d like it to be?”", "That debate has existed since the beginning of open source. All the GPL license stuff versus other things, do you need to make it so that anything that touches open source has to be open source too? Or can people take it and use it in different ways? I'm sure there will continue to be debates around this.", "But if you’re spending many billions of dollars training these models, I think asking the other companies — the huge ones that are similar in size and can easily afford to have a relationship with us — to talk to us before they use it seems like a pretty reasonable thing.", "Dwarkesh Patel", "If it turns out that other models are also really good. There’s a bunch of good open-source models. So that part of your mission is fulfilled, and maybe other models are better at coding.", "Is there a world where you just say, \"Look, the open-source ecosystem is healthy. There’s plenty of competition. We're happy to just use some other model, whether it's for internal software engineering at Meta or deploying to our apps. We don't necessarily need to build with Llama\"?", "Mark Zuckerberg", "Again, we do a lot of things. Let's take a step back. The reason why we're building our own big models is because we want to be able to build exactly what we want. None of the other models in the world are exactly what we want.", "If they're open source, you can take them and fine-tune them in different ways. But you still have to deal with the model architectures. And they make different size tradeoffs that affect latency and inference cost. At the scale that we operate at, that stuff really matters.", "We made the Llama Scout and Maverick models certain sizes for a specific reason. They fit on a host and we wanted certain latency — especially for the voice models that we’re working on — that we want to pervade everything we're doing from the glasses to all of our apps to the Meta AI app and all that stuff.", "There's a level of control of your own destiny that you only get when you build the stuff yourself. That said, AI is going to be used in every single thing that every company does. When we build a big model, we also have to choose which internal use cases we're going to optimize for.", "So does that mean for certain things we might say, \"Okay, maybe Claude is better for building this specific development tool that this team is using”? All right, cool then use that. Great. We don’t want to fight with one hand tied behind our back. We’re doing a lot of different stuff.", "You also asked, would it not be important anymore because other people are doing open source? On this, I'm a little more worried. You have to ask yourself this. For anyone who shows up now and is doing open source — now that we have done it — would they still be doing open source if we weren’t doing it?", "I think there are a handful of folks who see the trend that more and more development is going toward open source, and they're like, \"Oh crap, we need to be on this train or else we’re going to lose.\" If you have a closed-model API and increasingly a lot of developers don't want that.", "So you’re seeing a bunch of other players start to do some work in open source. But it's unclear if it's dabbling, or fundamental for them the way that it has been for us. A good example is what's going on with Android. Android started off as the open-source thing. There's not really any open-source alternative. Over time, Android has just gotten more and more closed.", "So if you're us, you need to worry that if we stop pushing the industry in this direction, all these other people… Maybe they’re only really doing it because they're trying to compete with us and the direction we’re pushing things. They already showed their revealed preference for what they would do if open source didn’t exist. And it wasn’t open source. We just need to be careful about relying on that continued behavior for the future of the technology that we're going to build at the company.", "Dwarkesh Patel", "Another thing I've heard you mention is that it's important that the standard gets built around American models like Llama. I wanted to understand your logic there. With certain kinds of networks, it is the case that the Apple App Store just has a big contingency around what it's built around.", "But it doesn't seem like if you built some sort of scaffold for DeepSeek, you couldn't have easily just switched it over to Llama 4, especially since between generations. Llama 3 wasn't MoE and Llama 4 is. So things are changing between generations of models as well.", "What’s the reason for thinking things will get built out in this contingent way on a specific standard?", "Mark Zuckerberg", "I'm not sure, what do you mean by contingent?", "Dwarkesh Patel", "As in, it's important that people are building for Llama rather than for LLMs in general, because that will determine what the standard is in the future.", "Mark Zuckerberg", "Look, I think these models encode values and ways of thinking about the world.", "We had this interesting experience early on, where we took an early version of Llama and translated it. I think it was French, or some other language.", "The feedback we got from French people was, \"This sounds like an American who learned to speak French. It doesn’t sound like a French person.\" And we were like, “what do you mean, does it not speak French well?” No, it speaks French fine. It was just that the way it thought about the world seemed slightly American. So I think there are these subtle things that get built into the models.", "Over time, as models get more sophisticated, they should be able to embody different value sets across the world. So maybe that's not a particularly sophisticated example, but I think it illustrates the point.", "Some of the stuff we've seen in testing some of the models, especially coming out of China, have certain values encoded in them. And it’s not just a light fine-tune to change that. Now, language models — or something that has a kind of world model embedded in it — have more values.", "Reasoning, I guess, you could say has values too. But one of the nice things about reasoning models is they're trained on verifiable problems. Do you need to be worried about cultural bias if your model is doing math? Probably not. I think the chance that some reasoning model built elsewhere is going to incept you by solving a math problem in a devious way seems low.", "But there's a whole different set of issues around coding, which is the other verifiable domain. You need to worry about waking up one day and if you're using a model that has some tie to another government, can it embed vulnerabilities in code that their intelligence organizations could exploit later? In some future version you're using a model that came from another country and it's securing your systems. Then you wake up and everything is just vulnerable in a way that that country knows about and you don’t. Or it turns on a vulnerability at some point.", "Those are real issues. I'm very interested in studying this because I think one of the main things that's interesting about open source is the ability to distill models. For most people, the primary value isn't just taking a model off the shelf and saying, \"Okay, Meta built this version of Llama. I'm going to take it and I'm going to run it exactly in my application.\"", "No, your application isn't doing anything different if you're just running our thing. You're at least going to fine-tune it, or try to distill it into a different model. When we get to stuff like the Behemoth model, the whole value is being able to take this very high amount of intelligence and distill it down into a smaller model that you're actually going to want to run.", "This is the beauty of distillation. It's one of the things that I think has really emerged as a very powerful technique over the last year, since the last time we sat down. I think it’s worked better than most people would have predicted. You can basically take a model that's much bigger, and capture probably 90 or 95% of its intelligence, and run it in something that's 10% of the size. Now, do you get 100% of the intelligence? No. But 95% of the intelligence at 10% of the cost is pretty good for a lot of things.", "The other thing that's interesting is that now, with this more varied open-source community, it's not just Llama. You have other models too. You have the ability to distill from multiple sources. So now you can basically say, \"Okay, Llama’s really good at this. Maybe its architecture is really good because it's fundamentally multimodal, more inference-friendly, more efficient. But let’s say this other model is better at coding.\" Okay, great. You can distill from both of them and build something that's better than either individually, for your own use case. That's cool.", "But you do need to solve the security problem of knowing that you can distill it in a way that's safe and secure. This is something that we've been researching and have put a lot of time into. What we've basically found is that anything that's language is quite fraught. There's just a lot of values embedded into it. Unless you don't care about taking on the values from whatever model you're distilling from, you probably don't want to just distill a straight language world model.", "On reasoning, though, you can get a lot of the way there by limiting it to verifiable domains, and running code cleanliness and security filters. Whether it's using Llama Guard open source, or the Code Shield open source tools that we've done, things that allow you to incorporate different input into your models and make sure that both the input and the output are secure.", "Then it’s just a lot of red teaming . It’s having experts who are looking at the model and asking, \"Alright, is this model doing anything after distillation that we don't want?\" I think with the combination of those techniques, you can probably distill on the reasoning side for verifiable domains quite securely. That's something I'm pretty confident about and something we've done a lot of research around.", "But I think this is a very big question. How do you do good distillation? Because there’s so much value to be unlocked. But at the same time, I do think there is some fundamental bias embedded in different models.", "Monetizing AGI", "Dwarkesh Patel", "Speaking of value to be unlocked, what do you think the right way to monetize AI will be? Obviously digital ads are quite lucrative. But as a fraction of total GDP, it's small compared to all remote work. Even if you can increase productivity without replacing work, that's still worth tens of trillions of dollars. Is it possible that ads might not be it? How do you think about this?", "Mark Zuckerberg", "Like we were talking about before, there's going to be all these different applications, and different applications tend toward different things.", "Ads are great when you want to offer people a free service. Because it's free, you need to cover it somehow. Ads solve this problem where a person does not need to pay for something. They can get something that is amazing for free. Also by the way, with modern ad systems, a lot of the time people think the ads add value to the thing if you do it well.", "You need to be good at ranking and you need to have enough liquidity of advertising inventory. If you only have five advertisers in the system, no matter how good you are at ranking, you may not be able to show something to someone that they're interested in. But if you have a million advertisers in the system, then you're probably going to be able to find something pretty compelling, if you're good at picking out the different needles in the haystack that that person is going to be interested in.", "So that definitely has its place. But there are also clearly going to be other business models as well, including ones that just have higher costs so it doesn't even make sense to offer them for free. By the way, there have always been business models like this.", "There's a reason why social media is free and ad-supported, but then if you want to watch Netflix or ESPN or something, you need to pay for that. The content that's going into that, they need to produce it, and that's very expensive for them to produce. They probably could not have enough ads in the service in order to make up for the cost of producing the content. Basically, you just need to pay to access it.", "The trade-off is fewer people do it. Instead of billions, you're talking about hundreds of millions of people using those services. There's a value switch there. I think it's similar here. Not everyone is going to want a software engineer, or a thousand software engineering agents, or whatever it is. But if you do, that's something you're probably going to be willing to pay thousands, or tens of thousands, or hundreds of thousands of dollars for.", "That just speaks to the diversity of different things that need to get created. There are going to be business models at each point along the spectrum. At Meta, for the consumer piece we definitely want to have a free thing. I'm sure that will end up being ad-supported. But I also think we're going to want to have a business model that supports people using arbitrary amounts of compute to do even more amazing things than what it would make sense to offer in the free service. For that, I'm sure we'll end up having a premium service. But I think our basic values on this are that we want to serve as many people in the world as possible.", "The role of a CEO", "Dwarkesh Patel", "How do you keep track of all these different projects, some of which we've talked about today. I'm sure there are many I don't even know about. As the CEO overseeing everything, there's a big spectrum between going to the Llama team and saying, \"Here are the hyperparameters you should use,\" versus just giving a mandate like, \"Go make the AI better.\"", "And there are so many different projects. How do you think about the way in which you can best deliver your value-add and oversee all these things?", "Mark Zuckerberg", "A lot of what I spend my time on is trying to get awesome people onto the teams. There's that, and then there's stuff that cuts across teams. You build Meta AI, and you want to get it into WhatsApp or Instagram. Okay, now I need to get those teams to talk together. Then there are a bunch of questions like, “do you want the thread for Meta AI in WhatsApp to feel like other WhatsApp threads, or do you want it to feel like other AI chat experiences?” There are different idioms for those. So there are all these interesting questions that need to get answered around how does this stuff basically fit into everything we're doing?", "Then there's a whole other part of what we're doing, which is pushing on the infrastructure. If you want to stand up a gigawatt cluster, first of all, that has a lot of implications for the way we're doing infrastructure buildouts. It has political implications for how you engage with the different states where you're building that stuff. It has financial implications for the company in terms of: \"All right, there's a lot of economic uncertainty in the world. Do we double down on infrastructure right now? If so, what other trade-offs do we want to make around the company?\" Those are the kinds of decisions that are tough for other people to really make.", "Then there's this question around taste and quality. When is something good enough that we want to ship it? In general, I'm the steward of that for the company. Although we have a lot of other people who I think have good taste as well and are also filters for different things.", "Those are basically the areas. AI is interesting because, more than some of the other stuff that we do, it is more research and model-led than really product-led. You can't just design the product that you want and then try to build the model to fit into it. You really need to design the model first and the capabilities that you want, and then you get some emergent properties. Then it's, \"Oh, you can build some different stuff because this turned out in a certain way.\" At the end of the day, people want to use the best model.", "That's partially why, when we're talking about building the most personal AI, the best voice, the best personalization — and also a very smart experience with very low latency — those are the things that we need to design the whole system to build. That's why we're working on full-duplex voice. That's why we're working on personalization to both have good memory extraction from your interactions with AI, but also to be able to plug into all the other Meta systems. That's why we design the specific models that we design, to have the kind of size and latency parameters that they do.", "Is big tech aligning with Trump?", "Dwarkesh Patel", "Speaking of politics, there's been this perception that some tech leaders have been aligning with Trump. You and others donated to his inaugural event and were on stage with him and I think you settled a lawsuit that resulted in them getting $25 million.", "I wonder what's going on here? Does it feel like the cost of doing business with an administration? What's the best way to think about this?", "Mark Zuckerberg", "My view on this is that he's the President of the United States. Our default, as an American company, should be to try to have a productive relationship with whoever is running the government. We've tried to offer support to previous administrations as well. I've been pretty public with some of my frustrations with the previous administration, how they basically did not engage with us or the business community more broadly.", "Frankly, that’s going to be necessary to make progress on some of these things. We're not going to be able to build the level of energy that we need if you don't have a dialogue, and if they're not prioritizing trying to do those things.", "A lot of people want to write this story about what direction people are going. We're trying to build great stuff, and we want to have a productive relationship with people. That's how I see it. It is also how I would guess most others see it, but obviously, I can't speak for them.", "Dwarkesh Patel", "You've spoken out about how you've rethought some of the ways in which you engage and defer to the government, in terms of moderation stuff in the past.", "How are you thinking about AI governance? Because if AI is as powerful as we think it might be, the government will want to get involved. What is the most productive approach to take there, and what should the government be thinking about?", "Mark Zuckerberg", "I guess in the past, most of the comments that I made were in the context of content moderation . It's been an interesting journey over the last 10 years on this. It's obviously been an interesting time in history. There have been novel questions raised about online content moderation.", "Some of those have led to productive new systems getting built, like our AI systems to detect nation-states trying to interfere in each other's elections. I think we will continue building that stuff out, and that has been net positive.", "With some other stuff, we went down some bad paths. I just think the fact-checking thing was not as effective as Community Notes because it's not an internet-scale solution. There weren't enough fact-checkers, and people didn't trust the specific fact-checkers. You want a more robust system. So I think what we got with Community Notes is the right one on that.", "But my point on this was more that historically, I probably deferred a little too much to either the media and their critiques, or to the government, on things that they did not really have authority over. But just as like a central figure, I think we tried to build systems where maybe we wouldn't have to make all of the content moderation decisions ourselves or something.", "I guess part of the growth process over the last 10 years is realizing, “Okay, we're a meaningful company. We need to own the decisions that we need to make. We should listen to feedback from people, but we shouldn't defer too much to people who do not actually have authority over this. Because at the end of the day, we're in the seat, and we need to own the decisions that we make.”", "It's been a maturation process, and in some ways painful, but I think we're probably a better company for it.", "Dwarkesh Patel", "Will tariffs increase the cost of building data centers in the US and shift buildouts to Europe and Asia?", "Mark Zuckerberg", "It is really hard to know how that plays out. I think we're probably in the early innings on that, and it's very hard to know.", "Dwarkesh Patel", "What is your single highest-leverage hour in a week? What are you doing in that hour?", "Mark Zuckerberg", "I don't know. Every week is a little bit different. It's probably got to be the case that the most leveraged thing you do in a week is not the same thing each week. Or else, by definition, you should probably spend more than one hour doing that thing every week.", "I don't know. Part of the fun of this job, and also of the industry being so dynamic, is that things really move around. The world is very different now than it was at the beginning of the year, or even six months ago, or in the middle of last year. I think a lot has advanced meaningfully. A lot of cards have been turned over since the last time that we sat down. I think that was about a year ago, right?", "Dwarkesh Patel", "Yeah. I guess what you were saying earlier that recruiting people is a super high-leverage thing you do.", "Mark Zuckerberg", "It's very high-leverage, yeah.", "100x productivity", "Dwarkesh Patel", "You talked about these models being mid-level software engineers by the end of the year. What would be possible if, say, software productivity increased like 100x in two years? What kinds of things could be built that can't be built right now?", "Mark Zuckerberg", "What kinds of things? That's an interesting question. One theme of this conversation is that the amount of creativity that's going to be unlocked is going to be massive.", "If you look at the overall arc of human society and the economy over 100 or 150 years, it's basically people going from being primarily agrarian — with most human energy going toward just feeding ourselves — to that becoming a smaller and smaller percent. And the things that take care of our basic physical needs have become a smaller and smaller percent of human energy.", "That shift has led to two impacts: one is that more people are doing creative and cultural pursuits. The second is that more people, in general, spend less time working and more time on entertainment and culture. I think that is almost certainly going to continue as this goes on.", "This isn't the 1-2 year thing of what happens when you have a super powerful software engineer. But over time, if everyone has these superhuman tools to create a ton of different stuff, you're going to get incredible diversity. Part of it is going to be solving hard problems: solving diseases, advancing science, developing new technology that makes our lives better.", "But I would guess that a lot of it is going to end up being cultural and social pursuits and entertainment. I would guess the world is going to get a lot funnier, weirder, and quirkier, the way that memes on the internet have gotten over the last 10 years. I think that adds a certain richness and depth. In funny ways, it actually helps you connect better with people. Now all day long, I just find interesting stuff on the internet and send it in group chats to the people I care about, who I think are going to find it funny.", "The media that people can produce today to express very nuanced, specific cultural ideas is really cool. That'll continue to get built out. It does advance society in a bunch of ways, even if it's not the \"hard science\" way of curing a disease.", "If you think about it, the Meta social media view of the world is that yeah, people are going to spend a lot more time doing that stuff in the future. It's going to be a lot better, and it's going to help you connect, because it'll help express different ideas.", "The world is going to get more complicated, but our technology, our cultural technology, to express these very complicated things — in a very kind of funny little clip or whatever — is going to get so much better. I think that's all great.", "I don't know about next year. One other thought that I think is interesting to cover is that I tend to think that, for at least the foreseeable future, this is going to lead to more demand for people doing work, not less. Now, people have a choice of how much time they want to spend working.", "I'll give you one interesting example we were talking about recently. We have almost three and a half billion people using our services every day. One question we've struggled with forever is how do we provide customer support?", "Today, you can write an email, but we've never seriously been able to contemplate having voice support where someone can just call in. I guess that's maybe one of the artifacts of having a free service. The revenue per person isn't high enough to have an economic model where people can call in.", "But also, with three and a half billion people using your service every day, the number of calls would be massive. It’d be like the biggest call center in the world. It would be like $10 or $20 billion a year to staff that. So we've never thought too seriously about it, because it always seemed like there was no way that could make sense. But now, as AI gets better, you're going to get to a place where AI can handle a bunch of people's issues.", "Not all of them — maybe 10 years from now it can handle all of them — but thinking about a 3-5 year time horizon, it will be able to handle a bunch. It's kind of like a self-driving car. They can handle a bunch of terrain, but they're not doing the whole route by themselves yet in most cases. People thought truck-driving jobs were going to go away, but there's actually more truck-driving jobs now than when we first started talking about self-driving cars 20 years ago.", "Going back to the customer support thing, it wouldn't make sense to staff out calling for everyone. But let's say AI can handle 90% of that. Then if it can't, it kicks it off to a person. If you get the cost of providing that service down to one-tenth of what it would've otherwise been, then maybe now it actually makes sense to do it. That would be cool. So the net result is that I actually think we're probably going to hire more customer support people.", "The common belief is that AI will automate jobs away. But that hasn't really been how the history of technology has worked. Usually, you create things that take away 90% of the work, and that leads you to want more people, not less.", "Dwarkesh Patel", "Final question: Who is the one person in the world today who you most seek out for advice?", "Mark Zuckerberg Oh, man. I feel like part of my style is that I like having a breadth of advisors. It's not just one person.", "We've got a great team. There are people at the company, people on our board. There are a lot of people in the industry who are doing new stuff. There's not a single person. But it's fun. Also, when the world is dynamic, just having a reason to work with people you like on cool stuff… To me, that's what life is about.", "Dwarkesh Patel Great note to close on. Thanks for doing this.", "Mark Zuckerberg Yeah, thank you." ]
[ "https://www.dwarkesh.com/p/mark-zuckerberg", "https://ai.meta.com/blog/llama-4-multimodal-intelligence/", "https://en.wikipedia.org/wiki/Multimodal_learning", "https://lmarena.ai/?leaderboard", "https://deepmind.google/technologies/gemini/flash/", "https://en.wikipedia.org/wiki/Reasoning_language_model", "https://openai.com/index/introducing-o3-and-o4-mini/", "https://www.anthropic.com/claude/sonnet", "https://www.meta.com/emerging-tech/orion/", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://en.wikipedia.org/wiki/Superintelligence", "https://en.wikipedia.org/wiki/Technological_singularity#Hard_or_soft_takeoff", "https://www.meta.com/emerging-tech/codec-avatars/?utm_source=about.meta.com&utm_medium=redirect", "https://www.meta.com/ai-glasses/", "https://stratechery.com/2025/deepseek-faq/", "https://api-docs.deepseek.com/news/news250120", "https://x.com/sama/status/1906793591944646898", "https://x.com/sama/status/1906845532758405319", "https://en.wikipedia.org/wiki/MIT_License", "https://www.gnu.org/licenses/gpl-3.0.en.html", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://www.linkedin.com/posts/yann-lecun_lots-of-confusion-about-what-a-world-model-activity-7165738293223931904-vdgR/", "https://en.wikipedia.org/wiki/Knowledge_distillation", "https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/", "https://www.llama.com/llama-protections/", "https://en.wikipedia.org/wiki/Red_team#Cybersecurity", "https://www.axios.com/2025/01/08/zuckerberg-meta-content-moderation" ]
https://www.dwarkesh.com/p/nat-friedman
Nat Friedman - Reading Ancient Scrolls, Open Source, & AI
[ "Dwarkesh Patel", "Today I have the pleasure of speaking with Nat Friedman, who was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies, Ximian and Xamarin. And he is also the founder of AI Grant and California YIMBY. And most recently, he is the organizer and founder of the Scroll prize, which is where we'll start this conversation. Do you want to tell the audience about what the Scroll prize is?", "Vesuvius Challenge", "Nat Friedman", "We're calling it the Vesuvius challenge. It is just this crazy and exciting thing I feel incredibly honored to have gotten caught up in. A couple of years ago, it was the midst of COVID and we were in a lockdown, and like everybody else, I was falling into internet rabbit holes. And I just started reading about the eruption of Mount Vesuvius in Italy, about 2000 years ago. And it turns out that when Vesuvius erupted, it was AD 79. It destroyed all the nearby towns, everyone knows about Pompeii. But there was another nearby town called Herculaneum. And Herculaneum was sort of like the Beverly Hills to Pompeii. So big villas, big houses, fancy people. And in Herculaneum, there was one enormous villa in particular. It had once been owned by the father in law of Julius Caesar, a well connected guy. And it was full of beautiful statues and marbles and art. But it was also the home to a huge library of papyrus scrolls. When the villa was buried, the volcano spit out enormous quantities of mud and ash, and it buried Herculaneum in something like 20 meters of material. So it wasn't a thin layer, it was a very thick layer. Those towns were buried and forgotten for hundreds of years. No one even knew exactly where they were, until the 1700s. In 1750 a farm worker who was digging a well in the outskirts of Herculaneum struck this marble paving stone of a path that had been at this huge villa. He was pretty far down when he did that, he was 60 feet down. And then subsequently, a Swiss engineer came in and started digging tunnels from that well shaft and they found all these treasures. Looting was sort of the spirit of the time. If they encountered a wall, they would just bust through it and they were taking out these beautiful bronze statues that had survived. And along the way, they kept encountering these lumps of what looked like charcoal, they weren't sure what they were, and many were apparently thrown away, until someone noticed a little bit of writing on one of them. And they realized they were papyrus scrolls, and there were hundreds and even 1000s of them. So they had uncovered this enormous library, the only library ever to have sort of survived in any form, even though it's badly damaged. And they were carbonized, very fragile. The only one that survived since antiquity. In a Mediterranean climate these papyrus scrolls rot and decay quickly. They'd have to be recopied by monks every 100 years or so, maybe even less.It’s estimated that we only have less than 1% of all the writing from that period.", "It was an enormous discovery to find these hundreds of papyrus scrolls underground. Even if they were not in good condition but still present. On a few of them, you can make out the lettering. In a well meaning attempt to read them people immediately started trying to open them. But they're really fragile so they turned to ash in your hand. And so hundreds were destroyed. People did things like, cut them with daggers down the middle, and a bunch of little pieces would flake off, and they tried to get a few letters off of a couple of pieces.", "Eventually there was an Italian monk named Piaggio. He devised this machine, under the care of the Vatican, to unroll these things very, very slowly, like half a centimeter a day. A typical scroll could be 15 or 20 or 30 feet long, and manage to successfully unroll a few of these, and on them they found Greek philosophical texts, in the Epicurean tradition, by this little known philosopher named Philodemus. But we got new text from antiquity, which is not a thing that happens all the time. Eventually, people stopped trying to physically unroll these things because so many were destroyed. In fact, some attempts to physically unroll the scrolls continued even into the 200s and they were destroyed.", "The current situation is we have 600 plus roughly intact scrolls that we can open. I heard about this and I thought that was incredibly exciting, the idea that there was information from 2000 years in the past. We don't know what's in these things. And obviously, people are trying to develop new ways and new technologies to open them.", "I read about a professor at the University of Kentucky, Brent Seales, who had been trying to scan these using increasingly advanced imaging techniques, and then use computer vision techniques and machine learning to virtually unroll them without ever opening them. They tried a lot of different things but their most recent attempt in 2019, was to take the scrolls to a particle accelerator in Oxford, England, called the diamond light source, and to make essentially an incredibly high resolution 3D X Ray scan. And they needed really high energy photons in order to do this. And they were able to take scans at eight microns. These really quite tiny voxels, which they thought would be sufficient.", "I thought this was like the coolest thing ever. We're using technologies to read this lost information from the past And I waited for the news that they had been decoded successfully. That was 2020 and then COVID hit, everybody got a little bit slowed down by that. Last year, I found myself wondering what happened to Dr. Seales and his scroll project. I reached out and it turned out they had been making really good progress. They had gotten some machine learning models to start to identify ink inside of the scrolls, but they hadn't yet extracted words or passages, it's very challenging.", "I invited him to come out to California and hang out and to my shock he did. We got to talking and decided to team up and try to crack this thing. The approach that we've settled on to do that is to actually launch an open competition. We've done a ton of work with his team to get the data and the tools and techniques and just the broad understanding of materials into a shape where smart people can approach it and get productive easily. And I'm putting up together with Daniel Gross, a prize in sort of like an X PRIZE or something like that, for the first person or team who can actually read substantial amounts of real text from one of these scrolls without opening them.", "We're launching that this week. I guess maybe it's when this airs. What gets me excited are the stakes. The stakes are kind of big. The six or eight hundred scrolls that are there, it's estimated that if we could read all of them, somehow the technique works and it generalizes to all the scrolls, then that would approximately double the total texts that we have from antiquity. This is what historians are telling me.", "So it's not like – Oh, we would get like a 5% bump or a 10% bump in the total ancient Roman or Greek text. No, we get all of the texts that we have, multiple Shakespeares is one of the units that I've heard. So that would be significant. We don't know what's in there, we've got a few Philodemus texts, those are of some interest. But there could be lost epic poems, or God knows what. So I'm really excited and I think there's like a 50% chance that someone will encounter this opportunity and get the data and get nerd sniped by it and we'll solve it this year.", "Dwarkesh Patel", "I mean, really, it is something out of a science fiction novel. It's like something you'd read in Neal Stephenson or something. I was talking to Professor Seales before and apparently the shock went both ways. Because the first few emails, he was like – this has got to be spam. Like no way Nat Friedman is reaching out and has found out about this prize.", "Nat Friedman", "That's really funny because he was really pretty hard to get in touch with. I emailed them a couple times, but he just didn't respond. I asked my admin, Emily, to call the secretary of his department and say – Mr. Friedman requested me and then he knew there was something actually going on there. So he finally got on the phone with me and we got to zoom. And he's like, why are you interested in this? I love Brent, he's fantastic and I think we're friends now. We found that we think alike about this and he's reached the point where he just really wants to crack. They've taken this right up to the one yard line, this is doable at this point. They've demonstrated every key component. Putting it all together, improving the quality, doing it at the scale of a whole scroll, this is still very hard work. And an open competition seems like the most efficient way to get it done.", "Dwarkesh Patel", "Before we get into the state of the data and the different possible solutions. I want to make tangible what could be gained if we can unwrap these? You said there's a few more 1000 scrolls? Are we talking about the ones in Philodemus’s layer or are we talking about the ones in other layers?", "Nat Friedman", "You think if you find this crazy Villa that was owned by Julius Caesar's father in law, then we just dig the whole thing out. But in fact, most of the exploration occurred in the 1700s, through the Swiss engineer’s underground tunnels. The villa was never dug out and exposed to the air. You went down 50-60 feet and then you dig tunnels. And again, they were looking for treasure, not like a full archaeological exploration. So they mostly got treasure. In the 90s some additional excavations were done at the edge of the villa and they discovered a couple things. First, they discovered that it was a seaside Villa that faced the ocean. It was right on the water before the volcano erupted. The eruption actually pushed the shoreline out by depositing so much additional mud there. So it's no longer right by the ocean, apparently, I've actually never been.", "And they also found that there were two additional floors in the villa that the tunnels had apparently never excavated. And so at most, a third of the villa has been excavated. Now, they also know when they were discovering these papyrus scrolls that they found basically one little room where most of the scrolls were. And these were mostly these Philodemus texts, at least, that's what we know. And they apparently found several revisions, sometimes of the same text. The hypothesis is this was actually Philodemus’s working library, he worked here, this sort of epicurean philosopher. In the hallways, though, they occasionally found other scrolls, including crates of them. And the belief is, at least this is what historians have told me, and I'm no expert. But what they have told me is they think that the main library in this villa has probably not been excavated. And that the main library may be a Latin library, and may contain literary texts, historical texts, other things, and that it could be much larger. Now, I don't know how prone these classists are to wishful thinking. It is a romantic idea. But they have some evidence in the presence of these partly evacuated scrolls that were found in hallways, and that sort of thing. I've since gone and read a bunch of the firsthand accounts of the excavations. There are these heartbreaking descriptions of them finding like an entire case of scrolls in Latin, and accidentally destroying it as they tried to get it out of the mud and there were maybe 30 scrolls or something in there. There clearly was some other stuff that we just haven't gotten to.", "Dwarkesh Patel", "You made some scrolls right?", "Nat Friedman", "Yeah. This is a papyrus, and it's a grassy reed that grows on the Nile in Egypt. And for many 1000s of years they've been making paper out of it. And the way they do it is they take the outer rind off of the papyrus and then they cut the inner core into these strips. They lay the strips out parallel to one another and they put another layer to 90 degrees to that bottom layer. And they press it together in a press or under stones and let it dry out. And that's Papyrus, essentially. And then they'll take some of those sheets and glue them together with paste, usually made out of flour, and get a long scroll. You can still buy it, I bought this on Amazon, and it's interesting because it's got a lot of texture. Those fibers, ridges of the papyrus plant, and so when you write on it, you really feel the texture. I got it because I wanted to understand what these artifacts that we're working with. So we made an attempt to simulate carbonizing a few of these. We basically took a Dutch oven, because when you carbonized something and you make charcoal, it's not like burning it with oxygen, you remove the oxygen, heat it up and let it carbonize. We tried to simulate that with a Dutch oven, which is probably imperfect, and left it in the oven at 500 degrees Fahrenheit for maybe five or six hours. These things are incredibly light snd if you try to unfold them, they just fall apart in your hand very readily. I assume these are in somewhat better shape than the ones that were found because these were not in a volcanic eruption and covered in mud. Maybe that mud was hotter than my oven can go. And they’re just flakes, just squeeze it and it’s just dust in your hand. And so we actually tried to replicate many of the heartbreaking 1700s, 18th century unrolling techniques. They used rose water, for example, or they tried to use different oils to soften it and unroll it. And most of them are just very destructive. They poured mercury into it because they thought mercury would slip between the layers potentially. So yeah, this is sort of what they look like. They shrink and they turn to ash.", "Dwarkesh Patel", "For those listening, by the way, just imagine the ash of a cigar but blacker and it crumbles the same way. It's just a blistered black piece of rolled up Papyrus.", "Nat Friedman", "Yeah. And they blister, the layers can separate. They can fuse.", "Dwarkesh Patel", "And so this happened in 79 AD right? So we know that anything before that could be in here, which I guess could include?", "Nat Friedman", "Yes. What could be in there? I don't know. You and I have speculated about that, right? It would be extremely exciting not to just get more epicurean philosophy, although that's fine, too. But almost anything would be interesting in additive. The dream is – I think it would maybe have a big impact to find something about early Christianity, like a contemporaneous mention of early Christianity, maybe there'd be something that the church wouldn't want, that would be exciting to me. Maybe there'd be some color detail from someone commenting on Christianity or Jesus, I think that would be a very big deal. We have no such things as far as I know. Other things that would be cool would be old stuff, like even older stuff. There were several scrolls already found there that were hundreds of years old when the villa was buried. As per my understanding the villa was probably constructed about 100 years prior and they can date some of the scrolls from the style of writing. And so there was some old stuff in there. And the Library of Alexandria was burned 80 or 90 years prior. And so, again, maybe wishful thinking, but there's some rumors that some of those scrolls were evacuated and maybe some of them would have ended up at this substantial, prominent, Mediterranean villa. God knows what’ll be in there, that would be really cool. I think it'd be great to find literature, personally I think that would be exciting, like beautiful new poems or stories, we just don't have a ton because so little survived. I think I think that would be fun. You had the best, crazy idea for what could be in there, which was text which was GPT watermarks. That would be a creepy feeling.", "Dwarkesh Patel", "I still can't get over just how much of a plot of a sci fi novel this is like. Potentially the biggest intact library from the ancient world that has been sort of stopped like a debugger because of this volcano. The philosophers of antiquity forgotten, the earliest gospels, there's so much interesting stuff there. But let's talk about what the data looks like. So you mentioned that they've been CT scanned, and that they built these machine learning techniques to do segmentation and the unrolling. What would it take to get from there to understand the actual content of what is within?", "Nat Friedman", "Dr. Seales actually pioneered this field of what is now widely called virtual unwrapping. And he actually did not do it with these Herculaneum scrolls, these things are like expert mode, they're so difficult. I'll tell you why soon. But he initially did it with a scroll that was found in the Dead Sea in Israel. It's called The En-Gedi scroll and it was carbonized under slightly similar circumstances. I think there was a temple that was burned. The papyrus scroll was in a box. So it kind of is like a Dutch oven, it carbonized in the same way. And so it was not openable. it’d fall apart. So the question was, could you nondestructively read the contents of it? So he did this 3D X-ray, the CT scan of the scroll, and then was able to do two things. First, the ink gave a great X-ray signature. It looked very different from the papyrus, it was high contrast. And then second, he was able to segment the wines of the scroll, you know, throughout the entire body of the scroll, and identify each layer, and then just geometrically unroll it using fairly normal flattening computer vision techniques, and then read the contents. It turned out to be an early part of the book of Leviticus, something of the Old Testament or the Torah. And that was like a landmark achievement. Then the next idea was to apply those same techniques to this case. This has proven hard. There's a couple things that make it difficult, the primary one is that the ink used on the Herculaneum papyri is not very absorbent of X-Ray. It basically seems to be equally absorbent of X ray as the papyrus. Very close, certainly not perfectly. So you don't have this nice bright lettering that shows up on your Tomographic, 3D X-Ray. So you have to somehow develop new techniques for finding the ink in there. That's sort of problem one, and it's been a major challenge. And then the second problem is the scrolls are just really messed up. They were long and tightly wound, highly distorted by the volcanic mud, which not only heated them but partly deformed them. So just the segmentation problem of identifying each of these layers throughout the scroll is doable, but it's hard. Those are a couple of challenges. And then the other challenge, of course, is just getting access to scrolls and taking them to a particle accelerator. So you have to have scroll access and particle accelerator access, and time on those. It's expensive and difficult. Dr. Seales did the hard work of making all that happen. The good news is that in the last couple of months, his lab has demonstrated the ability to actually recognize ink inside these X rays with a convolutional neural network. I look at the X-ray scans and I can't see the ink, at least in any of the renderings that I’ve seen, but the machine learning model can pick up on very subtle patterns in the X-Ray absorption at high resolution inside these volumes in order to identify and we've seen that. So you might ask – Okay, how do you train a model to do that, because you need some kind of ground truth data to train the model? The big insight that they had was to train on broken off fragments of the Papyrus. So as people tried to open these over the years in Italy, they destroyed many of them, but they saved some of the pieces that broke off. And on some of those pieces, you can kind of see lettering. And if you take an infrared image of the fragment, then you can really see the lettering pretty well, in some cases. And so they think it's 930 nanometers, they take this little infrared image, now you've got some ground truth, then you do a CT scan of that broken off fragment, and you try to align it, register it with the image. And then you have data that you can use potentially to train a model. That turned out to work in the case of the fragments.", "I think this is sort of why now? This is why I think launching this challenge now is the right time, because we have a lot of reasons to believe it can work. In the core techniques, the core pieces have been demonstrated, it just all has to be put together at the scale of these really complicated scrolls. And so yeah, if you can do the segmentation, which is probably a lot of work, maybe there's some way to automate it. And then you can figure out how to apply these models inside the body of a scroll and not just to these fragments, then it seems seems like you could probably read lots of text,", "Dwarkesh Patel", "Why did you decide to do it in the form of a prize, rather than just like giving a grant to the team that was already pursuing it, or maybe some other team that wants to take it out?", "Nat Friedman", "We talked about that. But I think what we basically concluded was the search space of different ways you could solve this is pretty big. And we just wanted to get it done as quickly as possible. Having a contest means lots of people are going to try lots of things and you know, someone's gonna figure it out quickly. Many eyes may make it shallow as a task. I think that's the main thing. Probably someone could do it but I think this will just be a lot more efficient. And it's fun too. I think it's interesting to do a contest and who knows who will solve it or how? People may not even use machine learning. We think that's the most likely approach for recognizing the ink but they may find some other approach that we haven't thought of.", "Dwarkesh Patel", "One question people might have is that you have these visible fragments mapped out. Do we expect them to correspond to the burned off or the ashen carbonized scrolls that you can do machine learning on? Ground truth of one could correspond to the other?", "Nat Friedman", "I think that's a very legitimate concern, they're different. When you have a broken off fragment, there's air above the ink. So when you CT scan it, you have kind of ink next to air. Inside of a wrapped scroll, the ink might be next to Papyrus, right? Because it's pushing up against the next layer. And your model may not know what to do with that. So yeah, I think this is one of the challenges and sort of how you take these models that were trained on fragments and translate them to the slightly different environment. But maybe there's parts of the scroll where there is air on the inside and we know that to be true. You can sort of see that here. And so I think it should at least partly work and clever people can probably figure out how to make it completely work?", "Dwarkesh Patel", "Yeah. So you said the odds are about 50-50? What makes you think that it can be done?", "Nat Friedman", "I think it can be done because we recognized ink from a CT scan on the fragments and I think everything else is probably geometry and computer vision. The scans are very high resolution, they're eight micrometers. If you kind of stood a scroll on an end like this, they're taken in the slices through it. So it's like this in the Z axis from bottom to top there are these slices. And the way they're represented on disk is each slice is a TIFF file. And for the full scrolls, each slice is like 100-something megabytes. So they're quite high resolution. And then if you stack for example, 100 of these, they're eight microns, right? So 100 of these is 0.8 millimeters. Millimeter is pretty small. We think the resolution is good enough, or at least right on the edge of good enough that it should be possible. There's sort of like, seem to be six or eight pixels. For voxels I guess, across an entire layer of papyrus. That's probably enough. And we've also seen with the machine learning models, Dr. Seales, has got some PhD students who have actually demonstrated this at eight microns. So I think that the ink recognition will work. The data is clearly physically in the scrolls, right? The ink was carbonized, the papyrus was carbonized. But not a lot of data actually physically survived. And then the question is – Did the data make it into the scans? And I think that's very likely based on the results that we've seen so far. So I think it's just about a smart person solving this, or a smart group of people, or just a dogged group of people who do a lot of manual work that could also work, or you may have to be smart and dogged. I think that's where most of my uncertainty is, is just whether somebody does it.", "Dwarkesh Patel", "Yeah, I mean, if a quarter of a million dollars doesn’t motivate you.", "Nat Friedman", "Yeah, I think money is good. There's a lot of money in machine learning these days.", "Dwarkesh Patel", "Do we have enough data in the form of scrolls that have been mapped out to be able to train a model if that's the best way to go? Because one question somebody might have is – Listen, if you already have this ground truth, why hasn't Dr. Seales’ team already been able to just train it?", "Nat Friedman", "I think they will. I think if we just let them do it, they'll get it solved. It might take a little bit longer because it's not a huge number of people. There is a big space here. But I mean, yeah, if we didn't launch this contest, I'd still think this would get solved. But it might take several years. And I think this way, it's likely to happen this year.", "Dwarkesh Patel", "Let's say the prize is solved. Somebody figures out how to do this and we can read the first scroll. You mentioned that these other layers haven't been excavated. How is the world going to react? Let's say we get about one of these mapped.", "Nat Friedman", "That's my personal hope for this. I always like to look for sort of these cheap leverage hacks, These moments where you can do a relatively small thing and it creates a.. you kick a pebble and you get an avalanche. The theory is, and Grant shares this theory, if you can read one scroll, and we only have two scanned scrolls, there's hundreds of surviving scrolls, it’s relatively expensive to use to book a particle accelerator. So if you can scan one scroll, and you know it works, and you can generalize the technique out, and it's going to work on these other scrolls, then the money which is probably in the low millions, maybe only $1 million to scan the remaining scrolls, will just arrive. It’s just too sweet of a prize not for that not to happen. And the urgency and kind of return on excavating the rest of the villa, will be incredibly obvious too. Because if there are 1000s more papyrus scrolls in there, and we now have the techniques to read them, then there's golden that mud and it's got to be dug out. It's amazing how little money there is for archaeology. Literally for decades, no one's been digging there. That’s my hope. That this is the catalyst that works, somebody reads it, they get a lot of glory, we all get to feel great. And then the diggers arrive in Herculaneum and they get the rest.", "Finding points of leverage", "Dwarkesh Patel", "I wonder if the budget for archaeological movies and games like Uncharted or Indiana Jones is bigger than the actual budget to do real world archaeology. But I was talking to some of the people before this interview, and that's one thing they emphasized is your ability to find these leverage points.", "For example, with California YIMBY, I don't know the exact amount you seeded it with. But for that amount of money, and for an institution that is that new, it is one of the very few institutions that has had a significant amount of political influence, if you look at the state of YIMBY in California and nationally today. How do you identify these things?", "There's plenty of people who have money who get into history or get into whatever subject, very few do something about it. How do you figure out where?", "Nat Friedman", "I'm a little bit mystified by why people don't do more things too. I don't know, maybe you can tell me why aren’t more people doing things? I think most rich people are boring and they should do more cool things. So I'm hoping that they do that now. I think part of it is I just fundamentally don't believe the world is efficient. So if I see an opportunity to do something, I don't have a reflexive reaction that says – Oh, that must not be a good idea if it were a good idea someone would already be doing it. Like someone must be taking care of housing policy in California, right? Or somebody must be taking care of this or that.So first, I don't have that filter that says the world's efficient don’t bother, someone's probably got it covered. And then the second thing is I have learned to trust my enthusiasm. It gets me in trouble too, but if I get really enthusiastic about something and that enthusiasm persists, I just indulge it. And so I just kind of let myself be impulsive. There's this great image that I found and tweeted which said – we do these things not because they are easy, but because we thought they would be easy. That's frequently what happens. The commitment to do it is impulsive and it's done out of enthusiasm and then you get into it and you're like – oh my god, this is really much harder than we expected. But then you're committed and you're stuck and you're going to have to get it done.", "I thought this project would be relatively straightforward. I’m going to take the data and put it up. But of course 99% of the work has already been done by Dr. Seales and his team at the University of Kentucky. I am a kind of carpetbagger. I've shown up at the end here and try to do a new piece of it.", "Dwarkesh Patel", "The last mile is often the hardest.", "Nat Friedman", "Well it turned out to be fractal anyway. All the little bits that you have to get right to do a thing and have it work and I hope we got all of them. So I think that's part of it – just not believing that the world is efficient and then just allowing your enthusiasm to cause you to commit to something that turns out to be a lot of work and really hard. And then you just are stubborn and don't want to fail so you keep at it. I think that's it.", "Dwarkesh Patel", "The efficiency point, do you think that's particularly true just of things like California YIMBY or this, where there isn't a direct monetary incentive or...", "Nat Friedman", "No. Certain parts of the world are more efficient than others and you can't assume equal levels of inefficiency everywhere. But I'm constantly surprised by how even in areas you expect to be very efficient, there are things that are in plain sight that I see them and others don't. There's lots of stuff I don't see too. I was talking to some traders at a hedge fund recently. I was trying to understand the role secrets play in the success of a hedge fund. The reason I was interested in that is because I think the AI labs are going to enter a new similar dynamic where their secrets are very valuable.", "If you have a 50% training efficiency improvement and your training runs cost $100 million, that is a $50 million secret that you want to keep. And hedge funds do that kind of thing routinely. So I asked some traders at a very successful hedge fund, if you had your smartest trader get on Twitch for 10 minutes once a month, and on that Twitch stream describe their 30-day-old trading strategies. Not your current ones, but the ones that are a month old. What would that... How would that affect your business after 12 months of doing that?", "So 12 months, 10 minutes a month, 30-day look back. That’s two hours in a year. And to my shock, they told me about an 80% reduction in their profits. It would have a huge impact.", "And then I asked – So how long would the look back window have to be before it would have a relatively small effect on your business? And they said 10 years. So that I think is quite strong evidence that the world's not perfectly efficient because these folks make billions of dollars using secrets that could be relayed in an hour or something like that. And yet others don't have them or their secrets wouldn't work. So I think there are different levels of efficiency in the world, but on the whole, our default estimate of how efficient the world is is far too charitable.", "Dwarkesh Patel", "On the particular point of AI labs potentially storing secrets, you have this sort of strange norm of different people from different AI labs, not only being friends, but often living together, right? It would be like Oppenheimer living with somebody working on the Russian atomic bomb or something like that. Do you think those norms will persist once the value of the secrets is realized?", "Nat Friedman", "Yeah, I was just wondering about that some more today. It seems to be sort of slowing, they seem to be trying to close the valves. But I think there's a lot of things working against them in this regard. So one is that the secrets are relatively simple. Two is that you're coming off this academic norm of publishing and really the entire culture is based on sort of sharing and publishing. Three is, as you said, they all live in group houses, summer in polycules. There's just a lot of intermixing. And then it's all in California. And California is a non-compete state. We don't have non-competes. And so they'd have to change the culture, get everybody their own house, and move to Connecticut and then maybe it'll work. I think ML engineer salaries and compensation packages will probably be adjusted to try to address this because you don't want your secrets walking out the door. There are engineers, Igor Babushkin for example, who has just joined Twitter. Elon hired him to train. I think that's public, is that right? I think it is.", "Dwarkesh Patel", "It will be now.", "Nat Friedman", "Igor's a really, really great guy and brilliant but he also happens to have trained state-of-the-art models at DeepMind and OpenAI. I don't know whether that's a consideration or how big of an effect that is, but it's the kind of thing that would make sense to value if you think there are valuable secrets that have not yet proliferated. So I think they're going to try to slow it down, publishing has certainly slowed down dramatically already. But I think there's just a long way to go before you're anywhere in hedge fund or Manhattan Project territory, and probably secrets will still have a relatively short half-life.", "Open Source in AI", "Dwarkesh Patel", "As somebody who has been involved in open-source your entire life, are you happy that this is the way that AI has turned out, or do you think that this is less than optimal?", "Nat Friedman", "Well, I don't know. My opinion has been changing. I have increasing worries about safety issues. Not the hijacked version of safety, but some industrial accident type situations or misuse. We're not in that world and I'm not particularly concerned about it in the short term. But in the long term, I do think there are worlds that we should be a little bit concerned about where bad things happen, although I don't know what to do about them. My belief though is that it is probably better on the whole for more people to get to tinker with and use these models, at least in their current state. For example Georgi Gerganov did a four-bit quantization of the LLama model this weekend and got it inferencing on a M1 or M2. I was very excited and I got that running and it's fun to play with. Now I've got a model that is very good, it's almost GPT-3 quality, and runs on my laptop. I've grown up in this world of tinkerers and open-source folks and the more access you have, the more things you can try. And so I think I do find myself very attracted to that.", "Dwarkesh Patel", "That is the scientist and the ideas part of what is being shared, but there's also another part about the actual substance, like the uranium in the atom-bomb analogy. As different sources of data realize how valuable their data is for training newer models, do you think that these things will go harder to scrape? Like Libgen or Archive, are these going to become rate-limited in some way or what are you expecting there?", "Nat Friedman", "First, there's so much data on the internet. The two primitives that you need to build models are – You need lots of data. We have that in the form of the internet, we digitized the whole world into the internet. And then you need these GPUs, which we have because of video games. So you take like the internet and video game hardware and smash together and you get machine learning models and they're both commodities. I don't think anyone in the open source world is really going to be data-limited for a long time. There's so much that's out there. Probably people who have proprietary data sets that are readily scrapable have been shutting those down, so get your scraping in now if you need to do it. But that's just on the margin. I still think there's quite a lot that's out there to work with. Look, this is the year of proliferation. This is a week of proliferation. We're going to see four or five major AI announcements this week, new models, new APIs, new platforms, new tools from all the different vendors. In a way they're all looking forwards. My Herculaneum project is looking backwards. I think it's extremely exciting and cool, but it is sort of a funny contrast.", "Github Acquisition", "Dwarkesh Patel", "Before I delve deeper into AI, I do want to talk about GitHub. I think we should start with – You are at Microsoft. And at some point you realize that GitHub is very valuable and worth acquiring. How did you realize that and how did you convince Microsoft to purchase GitHub?", "Nat Friedman", "I had started a company called Xamarin together with Miguel de Acaza and Joseph Hill and we built mobile tools and platforms. Microsoft acquired the company in 2016 and I was excited about that. I thought it was great. But to be honest, I didn't actually expect or plan to spend more than a year or so there. But when I got in there, I got exposed to what Satya was doing and just the quality of his leadership team. I was really impressed. And actually, I think I saw him in the first week I was there and he asked me – What do you think we should do at Microsoft? And I said, I think we should buy GitHub.", "Dwarkesh Patel", "When would this have been?", "Nat Friedman", "This was like my first week. It was like March or April of 2016. Okay. And then he said – Yeah, it's a good idea. We thought about it. I'm not sure we can get away with it or something like that. And then about a year later, I wrote him an email, just a memo, I sort of said – I think it's time to do this. There was some noise that Google was sniffing around. I think that may have been manufactured by the GitHub team. But it was a good catalyst because it was something I thought made a lot of sense for Microsoft to do anyway. And so I wrote an email to Satya, a little memo saying – Hey, I think we should buy GitHub. Here's why. Here's what we should do with it. The basic argument was developers are making IT purchasing decisions now. It used to be the sort of IT thing and now developers are leading that purchase. And it's this sort of major shift in how software products are acquired. Microsoft really was an IT company. It was not a developer company in the way most of its purchases were made. But it was founded as a developer company, right? And so, you know, Microsoft's first product was a programming language. Yeah, I said – Look, the challenge that we have is there's an entire new generation of developers who have no affinity with Microsoft and the largest collection of them is at GitHub. If we acquire this and we do a merely competent job of running it, we can earn the right to be considered by these developers for all the other products that we do. And to my surprise, Satya replied in like six or seven minutes and said, I think this is very good thinking. Let's meet next week or so and talk about it. I ended up at this conference room with him and Amy Hood and Scott Guthrie and Kevin Scott and several other people. And they said – Okay, tell us what you're thinking. And I kind of said a little 20-minute ramble on it. And Satya said – Yeah, I think we should do it. And why don't we run it independently like LinkedIn. Nat, you'll be the CEO. And he said, do you think we can get it for two billion? And I said, we could try. He said Scott will support you on this. Three weeks later, we had a signed term sheet and an announced deal. And then it was an amazing experience for me. I'd been there less than two years.", "Microsoft was made up of and run by a lot of people who've been there for many years. And they trusted me with this really big project. That made me feel really good, to be trusted and empowered. I had grown up in the open source world so for me to get an opportunity to run Github, it's like getting appointed mayor of your hometown or something like that, it felt cool. And I really wanted to do a good job for developers. And so that's how it happened.", "Dwarkesh Patel", "That's actually one of the things I want to ask you about. Often when something succeeds, we kind of think it was inevitable that it would succeed but at the time, I remember that there was a huge amount of skepticism. I would go on Hacker News and the top thing would be the blog posts about how Microsoft's going to mess up GitHub. I guess those concerns have been alleviated throughout the years. But how did you deal with that skepticism and deal with that distrust?", "Nat Friedman", "Well, I was really paranoid about it and I really cared about what developers thought. There's always this question about who are you performing for? Who do you actually really care about? Who's the audience in your head that you're trying to do a good job for or impress or earn the respect of whatever it is. And though I love Microsoft and care a lot about Satya and everyone there, I really cared about the developers. I’d grown up in this open source world. And so for me to do a bad job with this central institution and open source would have been a devastating feeling for me. It was very important to me not to. So that was the first thing, just that I cared. And the second thing is that the deal leaked. It was going to be announced on a Monday and it leaked on a Friday. Microsoft's buying GitHub. The whole weekend there were terrible posts online. People saying we’ve got to evacuate GitHub as quickly as possible. And we're like – oh my god, it's terrible. And then Monday, we put the announcement out and we said we're acquiring GitHub. It's going to run as an independent company. And then it said Nat Friedman is going to be CEO. And I had, I don't want to overstate or whatever, but I think a couple people were like – Oh. Nat comes from open source. He spent some time in open source and it's going to be run independently. I don't think they were really that calmed down but at least a few people thought – Oh, maybe I'll give this a few months and just see what happens before I migrate off. And then my first day as CEO after we got the deal closed, at 9 AM the first day, I was in this room and we got on zoom and all the heads of engineering and product. I think maybe they were expecting some kind of longer-term strategy or something but I came in and I said – GitHub had no official feedback mechanism that was publicly available but there were several GitHub repos that the community members had started. Isaac from NPM had started one where he'd just been allowing people to give GitHub feedback. And people had been voting on this stuff for years. And I kind of shared my screen and put that up sorted by votes and said – We're going to pick one thing from this list and fix it by the end of the day and ship that, just one thing. And I think they were like – This is the new CEO strategy? And they were like – I don’t know, you need to do database migrations and can't do that in a day. Then someone's like maybe we can do this. We actually have a half implementation of this. And we eventually found something that we could fix by the end of the day. And what I hope I said was – what we need to show the world is that GitHub cares about developers. Not that it cares about Microsoft. Like if the first thing we did after the acquisition was to add Skype integration, developers would have said – Oh, we're not your priority. You have new priorities now. The idea was just to find ways to make it better for the people who use it and have them see that we cared about that immediately. And so I said, we're going to do this today and then we're going to do it every day for the next 100 days. It was cool because I think it created some really good feedback loops, at least for me. One was, you ship things and then people are like – Oh, hey, I've been wanting to see this fixed for years and now it's fixed. It's a relatively simple thing. So you get this sort of nice dopaminergic feedback loop going there. And then people in the team feel the excitement of shipping stuff. I think GitHub was a company that had a little bit of stage fright about shipping previously and sort of break that static friction and ship a little bit more felt good. And then the other one is just the learning loop. By trying to do lots of small things, I got exposed to things like – Okay, this team is really good. Or this part of the code has a lot of tech debt. Or, hey, we shipped that and it was actually kind of bad. How come that design got out? Whereas if the project had been some six-month thing, I'm not sure my learning would have been quite as quick about the company. There's still things I missed and mistakes I made for sure. But that was part of how I think. No one knows kind of factually whether that made a big difference or not, but I do think that earned some trust.", "Dwarkesh Patel", "I mean, most acquisitions don't go well. Not only do they not go as well, but like they don't go well at all, right? As we're seeing in the last few months with a certain one. Why do most acquisitions fail to go well?", "Nat Friedman", "Yeah, it is true. Most acquisitions are destructive of value. What is the value of a company? In an innovative industry, the value of the company boils down to its cultural ability to produce new innovations and there is some sensitive harmonic of cultural elements that sets that up to make that possible. And it's quite fragile. So if you take a culture that has achieved some productive harmonic and you put it inside of another culture that's really different, the mismatch of that can destroy the productivity of the company. Maybe one way to think about it is that companies are a little bit fragile. And so when you acquire them, it's relatively easy to break them. I mean, they're also more durable than people think in many cases too. Another version of it is the people who really care, leave. The people who really care about building great products and serving the customers, maybe they don't want to work for the acquirer and the set of people that are really load bearing around the long-term success is small. When they leave or get disempowered, you get very different behaviors.", "Copilot Origin Story", "Dwarkesh Patel", "So I want to go into the story of Co-pilot because until ChatGPT it was the most widely used application of the modern AI models. What are the parts of the story you're willing to share in public?", "Nat Friedman", "Yeah, I've talked about this a little bit. GPT-3 came out in May of 2020. I saw it and it really blew my mind. I thought it was amazing. I was CEO of GitHub at that time and I thought – I don't know what, but we've got to build some products with this. And Satya had, at Kevin Scott's urging, already invested in OpenAI a year before GPT-3 came out. This is quite amazing. And he invested like a billion dollars.", "Dwarkesh Patel", "By the way, do you know why he knew that OpenAI would be worth investing at that point?", "Nat Friedman", "I don't know. Actually, I've never asked him. That's a good question. I think OpenAI had already had some successes that were noticeable and I think, if your Satya and you're running this multi-trillion dollar company, you're trying to execute well and serve your customers but you're always looking for the next gigantic wave that is going to upend the technology industry. It's not just about trying to win cloud. It's – Okay, what comes after cloud? So you have to make some big bets and I think he thought AI could be one. And I think Kevin Scott deserves a lot of credit for really advocating for that aggressively. I think Sam Altman did a good job of building that partnership because he knew that he needed access to the resources of a company like Microsoft to build large-scale AI and eventually AGI. So I think it was some combination of those three people kind of coming together to make it happen. But I still think it was a very prescient bet. I've said that to people and they've said – Well, One billion dollars is not a lot for Microsoft. But there were a lot of other companies that could have spent a billion dollars to do that and did not. And so I still think that deserves a lot of credit. Okay, so GPT-3 comes out. I pinged Sam and Greg Brockman at OpenAI and they're like – Yeah, let's. We've already been experimenting with GPT-3 and derivative models and coding contacts. Let's definitely work on something. And to me, at least, and a few other people, it was not incredibly obvious what the product would be.", "Now, I think it's trivially obvious – Auto-complete, my gosh. Isn't that what the models do? But at the time my first thought was that it was probably going to be like a Q&A chatbot Stack Overflow type of thing. And so that was actually the first thing we prototyped. We grabbed a couple of engineers, SkyUga, who had come in from acquisition that we'd done, Alex Gravely, and started prototyping. The first prototype was a chatbot. What we discovered was that the demos were fabulous. Every AI product has a fantastic demo. You get this wow moment. It turns to maybe not be a sufficient condition for a product to be good. At the time the models were just not reliable enough, they were not good enough. I ask you a question 25% of the time you give me an incredible answer that I love. 75% of the time your answers are useless and are wrong. It's not a great product experience.", "And so then we started thinking about code synthesis. Our first attempts at this were actually large chunks of code synthesis, like synthesizing whole function bodies. And we built some tools to do that and put them in the editor. And that also was not really that satisfying. And so the next thing that we tried was to just do simple, small-scale auto-complete with the large models and we used the kind of IntelliSense drop down UI to do that. And that was better, definitely pretty good but the UI was not quite right. And we lost the ability to do this large scale synthesis. We still have that but the UI for that wasn't good. To get a function body synthesized you would hit a key. And then I don't know why this was the idea everyone had at the time, but several people had this idea that it should display multiple options for the function body. And the user would read them and pick the right one. And I think the idea was that we would use that human feedback to improve the model. But that turned out to be a bad experience because first you had to hit a key and explicitly request it. Then you had to wait for it. And then you had to read three different versions of a block of code. Reading one version of a block of code takes some cognitive effort. Doing it three times takes more cognitive effort. And then most often the result of that was like – None of them were good or you didn't know which one to pick. That was also like you're putting a lot of energy and you're not getting a lot out, sort of frustrating. Once we had that sort of single line completion working, I think Alex had the idea of saying we can use the cursor position in the AST to figure out heuristically whether you're at the beginning of a block and the code or not. And if it's not the beginning of a block, just complete a line. If it's the beginning of a block, show in line a full block completion. The number of tokens you request and when you stop gets altered automatically with no user interaction. And then the idea of using this sort of gray text like Gmail had done in the editor. So we got that implemented and it was really only kind of once all those pieces came together and we started using a model that was small enough to be low latency, but big enough to be accurate, that we reached the point where like the median new user loved Co-pilot and wouldn't stop using it. That took four months, five months, of just tinkering and sort of exploring. There were other dead ends that we had along the way. And then it became quite obvious that it was good because we had hundreds of internal users who were GitHub engineers. And I remember the first time I looked at the retention numbers, they were extremely high. It was like 60 plus percent after 30 days from first install. If you installed it, the chance that you were still using it after 30 is over 60 percent. And it's a very intrusive product. It's sort of always popping UI up and so if you don't like it, you will disable it. Indeed, 40 something percent of people did disable it but those are very high retention numbers for like an alpha first version of a product that you're using all day. Then I was just incredibly excited to launch it. And it's improved dramatically since then.", "Dwarkesh Patel", "Okay. Sounds very similar to the Gmail story, right? It's an incredibly valuable inside and then maybe it was obvious that it needs to go outside. We'll go back to the AI stuff in a second. But some more GitHub questions. By what point, if ever, will GitHub Profiles replace resumes for programmers?", "Nat Friedman", "That's a good question. I think they're a contributing element to how people try to understand a person now. But I don't think they're a definitive resume. We introduced readme’s on profiles when I was there and I was excited about that because I thought it gave people some degree of personalization. Many thousands of people have done that. Yeah, I don't know. There's forces that push in the other direction too on that one where people don't want their activity and skills to be as legible. And there may be some adverse selection as well where the people with the most elite skills, it's rather gauche for them to signal their competence on their profile. There's some weird social dynamics that feed into it too. But I will say I think it effectively has this role for people who are breaking through today. One of the best ways to break through. I know many people who are in this situation. You were born in Argentina. You're a very sharp person but you didn't grow up in a highly connected or prosperous network, family, et cetera. And yet you know you're really capable and you just want to get connected to the most elite part communities in the world. If you're good at programming, you can join open source communities and contribute to them. And you can very quickly accrete a global reputation for your talent, which is legible to many companies and individuals around the world. And suddenly you find yourself getting a job and moving maybe to the US or maybe not moving. You end up at a great start up. I mean, I know a lot of people who deliberately pursued the strategy of building reputation in open source and then got the sail up and the wind catches you and you've got a career. I think it plays that role in that sense. But in other communities like in machine learning research, this is not how it works. There's a thousand people, the reputation is more on Arxiv than it is on GitHub. I don't know if it'll ever be comprehensive.", "Are there any other industries for which proof of work of this kind will eat more into the way in which people are hired? I think there's a labor market dynamic in software where the really high quality talent is so in demand and the supply is so much less than the demand that it shifts power onto the developers such that they can require of their employers that they be allowed to work in public. And then when they do that, they develop an external reputation which is this asset they can port between companies. If the labor market dynamics weren't like that, if programming well were less economically valuable, companies wouldn't let them do that. They wouldn't let them publish a bunch of stuff publicly and they'd say that's a rule. And that used to be the case, in fact. As software has become more valuable, the leverage of a single super talented developer has gone up and they've been able to demand over the last several decades the ability to work in public. And I think that's not going away.", "Dwarkesh Patel", "Other than that, I mean, we talked about this a little bit, but what has been the impact of developers being more empowered in organizations, even ones that are not traditionally IT organizations?", "Nat Friedman", "Yeah. I mean, software is kind of magic, right? You can write a for loop and do something a lot of times. And when you build large organizations at scale, one of the things that does surprise you is the degree to which you need to systematize the behavior of the people who are working. When I first was starting companies and building sales teams, I had this wrong idea coming from the world as a programmer that salespeople were hyper aggressive, hyper entrepreneurial, making promises to the customer that the product wouldn't do, and that the main challenge you had with salespeople was like restraining them from going out and aggressively cutting deals that shouldn't be cut. What I discovered is that while it does exist sometimes, the much more common case is that you need to build a systematic sales playbook, which is almost a script that you run on your sales team, where your sales reps know the processing to follow to like exercise this repeatable sales motion and get a deal closed. I just had bad ideas there. I didn't know that that was how the world worked, but software is a way to systematize and scale out a valuable process extremely efficiently. I think the more digitized the world has become, the more valuable software becomes, and the more valuable the developers who can create it become.", "Dwarkesh Patel", "Would 25-year-old Nat be surprised with how well open source worked and how pervasive it is?", "Nat Friedman", "Yeah, I think that's true. I think we all have this image when we're young that these institutions are these implacable edifices that are evil and all powerful and are able to substantially orchestrate the world with master plans. Sometimes that is a little bit true, but they're very vulnerable to these new ideas and new forces and new communications media and stuff like that. Right now I think our institutions overall look relatively weak. And certainly they're weaker than I thought they were back then. Honestly, I thought Microsoft could stop open source. I thought that was a possibility. They can do some patent move and there's a master plan to ring fence open source in. And, you know, that didn't end up in the case.", "In fact when Microsoft bought GitHub, we pledged all of our patent portfolio to open source. That was one of the things that we did as part of it. That was a poetic moment for me, having been on the other side of patent discussions in the past, to be a part and be instrumental in Microsoft making that pledge. That was quite crazy.", "Dwarkesh Patel", "Oh, that's really interesting. It wasn't that there was some business or strategic reason. More so it was just like an idea whose time had come.", "Nat Friedman", "Well, GitHub had made such a pledge. And so I think in part of acquiring GitHub, we had to either try to annul that pledge or sign up to it ourselves. And so there was sort of a moment of a forced choice. But everyone at Microsoft thought it was a good idea too. So in many senses it was a moment whose time had come and the GitHub acquisition was a forcing function.", "Dwarkesh Patel", "What do you make of critics of modern open source like Richard Stallman or people who advocate for free software saying that – Well, corporations might advocate for open source because of practical reasons for getting good code. And the real way the software should be made – it should be free and that you can replicate it, you can change it, you can modify it and you can completely view it. And the ethical values about that should be more important than the practical values. What do you make of that critique?", "Nat Friedman", "I think those are the things that he wants and the thing that maybe he hasn't updated is that maybe not everyone else wants that. He has this idea that people want freedom from the tyranny of a proprietary intellectual property license. But what people really want is freedom from having to configure their graphics card or sound driver or something like that. They want their computer to work. There are places where freedom is really valuable. But there's always this thing of – I have a prescriptive ideology that I'd like to impose on the world versus this thing of – I will try to develop the best observational model for what people actually want whether I want them to want it or not. And I think Richard is strongly in the former camp.", "Dwarkesh Patel", "What is the most underrated license by the way?", "Nat Friedman", "I don't know. Maybe the MIT license is still underrated because it's just so simple and bare.", "Dwarkesh Patel", "Nadia Eghbal had a book recently where she argued that the key constraint on open source software and on the time of the people who maintain it is the community aspect of software. They have to deal with feature requests and discussions and maintaining for different platforms and things like that. And it wasn't the actual code itself, but rather this sort of extracurricular aspect that was the main constraint. Do you think that is the constraint for open source software? How do you see what is holding back more open source software?", "Nat Friedman", "By and large I would say that there is not a problem. Meaning open source software continues to be developed, continues to be broadly used. And there's areas where it works better and areas where it works less well, but it's sort of winning in all the areas where large-scale coordination and editorial control are not necessary. It tends to be great at infrastructure, stand-alone components and very, very horizontal things like operating systems. And it tends to be worse at user experiences and things where you need a sort of dictatorial aesthetic or an editorial control. I've had debates with Dylan Field of Figma, as to why it is that we don't have lots of good open source applications. And I've always thought it had something to do with this governance dynamic of – Gosh, it's such a pain to coordinate with tons of people who all sort of feel like they have a right to try to push the project in one way or another. Whereas in a hierarchical corporation there can be a head of this product or CEO or founder or designer who just says, we're doing it this way. And you can really align things in one direction very, very easily. Dylan has argued to me that it might be because there's just fewer designers, people with good design sense, in open source. I think that might be a contributing factor too, but I think it's still mostly the governance thing. And I think that's what Nadia's pointing at also. You're running a project and you gave it to people for free. For some reason, giving people something for free creates a sense of entitlement. And then they feel like they have the right to demand your time and push things around and give you input and you want to be polite and it's very draining. So I think that where that coordination burden is lower is where open source tends to succeed more. And probably software and other new forms of governance can improve that and expand the territory that open source can succeed in.", "Dwarkesh Patel", "Yeah. Theoretically those two things are consistent, right? You could have very tight control over governance while the code itself is open source.", "Nat Friedman", "And this happens in programming languages. Languages are eventually set in stone and then advanced by committee. But yeah, certainly you have these benign dictators of languages who enforce the strong set of ideas they have, a vision, master plan. That would be the argument that's most on Dylan's side. Hey, it works for languages why can't it work for end user applications? I think the thing you need to do though to build a good end user application is not only have a good aesthetic and idea, but somehow establish a tight feedback loop with a set of users. Where you can give them – Dwarkesh, try this. Oh my gosh. Okay, that's not what you need. Doing that is so hard, even in a company where you've total hierarchical control of the team in theory and everyone really wants the same thing and everyone's salary and stock options depend on the product being accepted by these users. It still fails many times in that scenario. Then additionally doing that in the context of open source, it's just slightly too hard.", "Dwarkesh Patel", "The reason you acquired GitHub, as you said, is that there seems to be complementarity between Microsoft’s and GitHub's missions. And I guess that's been proven out over the last few years. Should there be more of these collaborations and acquisitions? Should there be more tech conglomerates? Would that be good for the system?", "Nat Friedman", "I don't know if it's good but yes, it is certainly efficient in many ways. I think we are seeing a collaboration occur because the math is sort of pretty simple. If you are a large company and you have a lot of customers, then the thing that you've achieved is this very expensive and difficult thing of building distribution and relationships with lots of customers. And that is as hard or harder and takes longer and more money than just inventing the product in the first place. So if you can then go and just buy the product for a small amount of money and make it available to all of your customers, then there's often an immediate, really obvious gain from doing that. And so in that sense, like acquisitions make a ton of sense. And I've been surprised that the large companies haven't done many more acquisitions in the past until I got into a big company and started trying to do acquisitions. I saw that there are strong elements of the internal dynamics to make it hard. It's easier to spend $100 million on employees internally to do a project than to spend $100 million to buy a company. The dollars are treated differently. The approval processes are different. The cultural buy-in processes are different. And then to the point of the discussion we had earlier, many acquisitions do fail. And when an acquisition fails, it's somehow louder and more embarrassing than when some new product effort you've spun up doesn't quite work out as well. I think there's lots of internal reasons, some justified and some less so, that they haven't been doing it. But just from an economic point of view, it seemed like it makes sense to see more acquisitions than we've seen.", "Dwarkesh Patel", "Well, why did you leave?", "Nat Friedman", "As much as I loved Microsoft, and certainly as much as I loved GitHub. I still feel tremendous love for GitHub and everything that it means to the people who use it. I didn't really want to be a part of a giant company anymore. Building CoPilot was an example of this. It wouldn't have been possible without OpenAI and Microsoft and GitHub, but building it also required navigating this really large group of people between Microsoft and OpenAI and GitHub. And you reach a point where you're spending a ton of time on just navigating and coordinating lots of people. I just find that less energizing. Just my enthusiasm for that was not as high. I was torn about it because I truly love GitHub, the product and there was so much more I still knew we could do but I was proud of what we'd done. I miss the team and I miss working on GitHub. It was really an honor for me but it was time for me to go do something. I was always a startup guy. I always liked small teams, and I wanted to go back to a smaller, more nimble environment.", "Nat.org", "Dwarkesh Patel", "Okay, so we'll get to it in a second. But first, I want to ask about nat.org and the list of 300 words there. Which I think is one of the most interesting and very straussian list of 300 words I've seen anywhere. I'm just going to mention some of these and get some of your commentary. You should probably work on raising the ceiling, not the floor. Why?", "Nat Friedman", "First, I say probably. But what does it mean to raise the ceiling or the floor? I just observed a lot of projects that set out to raise the floor. Meaning – Gosh. We are fine, but they are not and we need to go help them with our superior prosperity and understanding of their situation. Many of those projects fail. For example, there were a lot of attempts to bring the internet to Africa by large and wealthy tech companies and American universities. I won't say they all had no effect, that's not true, but many of them were far short of successful. There were satellites, there were balloons, there were high altitude drones, there were mesh networks, laptops, that were pursued by all these companies. And by the way, by perfectly well-meaning, incredibly talented people who in some cases did see some success, but overall probably much less than they ever hoped. But if you go to Africa, there is internet now. And the way the internet got there is the technologies that we developed to raise the ceiling in the richest part of the world, which were cell phones and cell towers. In the movie Wall Street from the 80s, he's got that gigantic brick cell phone. That thing cost like 10 grand at the time. That was a ceiling raising technology. It eventually went down the learning curve and became cheap. And the cell towers and cell phones, eventually we've got now hundreds of millions or billions of them in Africa. It was sort of that initially ceiling raising technology and then the sort of force of capitalism that made it work in the end. It was not any Deus Ex Machina technology solution that was intended to kind of raise the floor. There's something about that that's not just an incidental example. But on my website, I say probably. Because there are some examples where people set out to kind of raise the floor and say – No one should ever die of smallpox again. No one should ever die of guinea worm again. And they succeed. I wouldn't want to discourage that from happening but on balance, we have too many attempts to do that. They look good, feel good, sound good, and don't matter. And in some cases, have the opposite of the effect they intend to.", "Dwarkesh Patel", "Here's another one and this is under the EMH section. In many cases, it's more accurate to model the world as 500 people than 8 billion. Now here's my question, what are the 8 billion minus 500 people doing? Why are there only 500 people?", "Nat Friedman", "I don't know exactly. It's a good question. I ask people that a lot. The more I've done in life, the more I've been mystified by this – Oh, somebody must be doing X. And then you hear there's a few people doing X, then you look into it, they're not actually doing X. They're doing kind of some version of it that's not that. All the best moments in life occur when you find something that to you is totally obvious that clearly somebody must be doing, but no one is doing. Mark Zuckerberg says this about founding Facebook. Surely the big companies will eventually do this and create this social and identity layer on the internet. Microsoft will do this. But no, none of them were. And he did it. So what are they doing? I think the first thing is that many people throughout the world are optimizing local conditions. They're working in their town, their community, they're doing something there so the set of people that are kind of thinking about kind of global conditions is just naturally narrowed by the structure of the economy. That's number one. I think number two is, most people really are quite mimetic. We all are, including me. We get a lot of ideas from other people. Our ideas are not our own. We kind of got them from somebody else. It's kind of copy paste. You have to work really hard not to do that and to be decorrelated. And I think this is even more true today because of the internet. I don't know if Albert Einstein, as a patent clerk, wouldn't he have just been on Twitter just getting the same ideas as everybody else? What do you have as decorrelated ideas? I think the internet has correlated us more. The exception would be really disagreeable people who are just naturally disagreeable. So I think the future belongs to the autists in some sense because they don't care what other people think as much. Those of us on the spectrum in any sense are in that category. Then we have this belief that the world's efficient and it isn't and that's part of it. The other thing is that the world is so fractal and so interesting. Herculaneum papyri, right? It is this corner of the world that I find totally fascinating but I don't have any anticipation that eight billion people should be thinking about that. That should be a priority for everyone.", "Dwarkesh Patel", "Okay, here's another one. Large scale engineering projects are more soluble in IQ than they appear. And here's my question, does that make you think that the impact of AI tools like co-pilot will be bigger or smaller because one way to look at co-pilot is it’s IQ is probably less than the average engineer, so maybe it'll have less impact.", "Nat Friedman", "Yeah, but it definitely increases the productivity of the average engineer to bring them higher up. And I think it increases the productivity of the best engineers as well. Certainly a lot of people I consider to be the best engineers telling you that they find it increases their productivity a lot. It's really interesting how so much of what's happened in AI has been soft, fictional work. You have Midjourney, you have copywriting, you have Claude from Anthropic is so literary, it writes poetry so well. Except for co-pilot, which is this real hard area where like, the code has to compile, has to be syntactically corrected, has to work and pass the tests. We see the steady improvement curve where now, already on average, more than half of the code is written by co-pilot. I think when it shipped, it was like low 20s. And so it's really improved a lot as the models have gotten better and the prompting has gotten better. But I don't see any reason why that won't be 95%. It seems very likely to me. I don't know what that world looks like. It seems like we might have more special purpose and less general purpose software. Right now we use general purpose tools like spreadsheets and things like this a lot, but part of that has to do with the cost of creating software. And so once you have much cheaper software, do you create more special purpose software? That's a possibility. So every company, just a custom piece of code. Maybe that's the kind of future we're headed towards. So yeah, I think we're going to see enormous amounts of change in software development.", "Dwarkesh Patel", "Another one – The cultural prohibition on micromanagement is harmful, great individuals should be fully empowered to exercise their judgment. And the rebuttal to this is if you micromanage you prevent people from learning and to develop their own judgment.", "Nat Friedman", "So imagine you go into some company, they hired Dwarkesh and you do a great job with the first project that they give you. Everyone's really impressed. Man, Dwarkesh, he made the right decisions, he worked really hard, he figured out exactly what needed to be done and he did it extremely well. Over time you get promoted into positions of greater authority and the reason the company's doing this is they want you to do that again, but at bigger scale, right? Do it again, but 10 times bigger. The whole product instead of part of the product or 10 products instead of one. The company is telling you, you have great judgment and we want you to exercise that at a greater scale. Meanwhile, the culture is telling you as you get promoted, you should suspend your judgment more and more and defer your judgment to your team. And so there's some equilibrium there and I think we're just out of equilibrium right now where the cultural prohibition is too strong. I don't know if this is true or not, but maybe in the 80s I would have felt the other side of this. That we have too much micromanagement. I think the other problem that people have is that they don't like micromanagement because they don't want bad managers to micromanage, right? So you have some bad managers, they have no expertise in the area, they're just people managers and they're starting to micromanage something that they don't understand where their judgment is bad. And my answer to that is stop empowering bad managers. Don't have them, promote and empower people who have great judgment and do understand the subject matter that they're working on. If I work for you and I just know you have better judgment and you come in and you say, now like you're launching the scroll thing and you think you've got the final format wrong, here's how you should do it, I would welcome that even though it's micromanagement because it's going to make us more successful in them and learn something from tha. I know your judgment is better than mine in this case or at least we're going to have a conversation about it, we're both going to get smarter. So I think on balance, yeah, there are cases where people have excellent judgment and we should encourage them to exercise it and sometimes, things will go wrong when you do that, but on balance you will get far more excellence out of it and we should empower individuals who have great judgment.", "Dwarkesh Patel", "Yeah. There's a quote about Napoleon that if he could have been in every single theater of every single battle he was part of, that he would have never lost a battle. I was talking to somebody who worked with you at GitHub and she emphasized to me, and this is like really remarkable to me, that even the applications are already being shipped out to engineers how much of the actual suggestions and the actual design came from you directly, which is kind of remarkable to me that as CEO you would have.", "Nat Friedman", "Yeah, you can probably also find people you can talk to who think that was terrible. But the question is always: does that scale? And the answer is it does not scale. The experience that I had as CEO was I was terrified all the time that there was someone in the company who really knew exactly what to do and had excellent judgment, but because of cultural forces that person wasn't empowered. That person was not allowed to exercise their judgment and make decisions. And so when I would think and talk about this, that was the fear that it was coming from. They were in some consensus environment where their good ideas were getting whittled down by lots of conversations with other people and a politeness and a desire not to micromanage. So we were ending up with some kind of average thing. And I would rather have more high variance outcomes where you either get something that's excellent because it is the expressed vision of a really good auteur or you get a disaster and it didn't work and now you know it didn't work and you can start over. I would rather have those more high variance outcomes and I think it's a worthy trade.", "Dwarkesh Patel", "Okay, let's talk about AI. What percentage of the economy is basically text to text?", "Nat Friedman", "Yeah, it's a good question. We've done the sort of Bureau of Labor Statistics analysis of this. It's not the majority of the economy or anything like that. We're in the low double digit percentages. The thing that I think is hard to predict is what happens over time as the cost of text to text goes down? I don't know what that's going to do. But yeah, there's plenty of revenue to be got now. One way you can think about it is – Okay, we have all these benchmarks for machine learning models. There's LAMBADA and there's this and there's that. Those are really only useful and only exist because we haven't deployed the models at scale. So we don't have a sense of what they're actually good at. The best metric would probably be something like – What percentage of economic tasks can they do? Or on a gig marketplace like Upwork, for example, what fraction of Upwork jobs can GPT-4 do? I think is sort of an interesting question. My guess is extremely low right now, autonomously. But over time, it will grow. And then the question is, what does that do for Upwork? I’m guessing it’s a five billion dollar GMV marketplace, something like that. Does it grow? Does it become 15 billion or 50 billion? Does it shrink because the cost of text to text tasks goes down? I don't know. My bet would be that we find more and more ways to use text to text to advance progress. So overall, there's a lot more demand for it. I guess we'll see.", "Dwarkesh Patel", "At what point does that happen? GPT-3 has been a sort of rounding error in terms of overall economic impact. Does that happen with GPT-4, GPT-5, where we see billions of dollars of usage?", "Nat Friedman", "Yeah, I've got early access to GPT-4 and I've gotten to use it a lot. And I honestly can't tell you the answer to that because it's so hard to discover what these things can do that the prior ones couldn't do. I was just talking to someone last night who told me – Oh, GPT-4 is actually really good at Korean and Japanese and GPT-3 is much worse at those. So it's actually a real step change for those languages. And people didn't know how good GPT-3 was until it got instruction tuned for chatGPT and was put out in that format. You can imagine the pre-trained models as a kind of unrefined crude oil and then once they've been kind of RLHF and trained and then put out into the world, people can find the value.", "Dwarkesh Patel", "What part of the AI narrative is wrong in the over-optimistic direction?", "Nat Friedman", "Probably an over-optimistic case from both the people who are fearful of what will happen, and from people who are expecting great economic benefits is that we're definitely in this realm of diminishing returns from scale. For example GPT-4 is, my guess is, two orders of magnitude more expensive to train the GPT-3, but clearly not two orders of magnitude more capable. Now is it two orders of magnitude more economically valuable? That would also surprise me. When you're in these sigmoids, where you are going up this exponential and then you start to asymptote it, it can be difficult to tell if that's going to happen. The idea that we might not run into hard problems or that scaling will continue to be worth it on a dollar basis are reasons to be a little bit more pessimistic than the people who have high certainty of GDP increasing by 50% per month which I think some people are predicting. But on the whole, I'm very optimistic. You're asking me to like make the bear case for something I'm very bullish about.", "Dwarkesh Patel", "No, that's why I asked you to make the bear case because I know about you. I want to ask you about these foundation models. What is the stable equilibrium you think of how many of them will there be? Will it be an oligopoly like Uber and Lyft where…?", "Nat Friedman", "I think there will probably be wide-scale proliferation. And if you asked me, what are the structural forces that are pro proliferation and the structural forces that are pro concentration? I think the pro proliferation case is a bit stronger. The pro proliferation case is – They're actually not that hard to train. The best practices will promulgate. You can write them down on a couple sheets of paper. And to the extent that secrets are developed that improve training, those are relatively simple and they get copied around easily. Number one, number two. The data is mostly public, it's mostly data from the internet. Number three, the hardware is mostly commodity and the hardware is improving quickly and getting much more efficient. I think some of these labs potentially have 50, 100, 200 percent training efficiency improvement techniques and so there's just a lot of low-hanging fruit on the technique side of things. We're seeing it happen. I mean, it's happening this weekend, it's happening this year. We're getting a lot of proliferation. The only case against proliferation is that you'll get concentration because of training costs. And I don't know if that's true.", "I don't have confidence that the trillion dollar model will be much more valuable than the 100 billion dollar model and that even it will be necessary to spend a trillion dollars training it. Maybe there will be so many techniques available for improving efficiency. How much are you willing to spend on researchers to find techniques if you're willing to spend a trillion on training? That's a lot of bounties for new techniques and some smart people are going to take those bounties.", "Dwarkesh Patel", "How different will these models be? Will it just be sort of everybody chasing the same exact marginal improvement leading to the same marginal capabilities or will they have entirely different repertoire of skills and abilities?", "Nat Friedman", "Right now, back to the mimetic point, they're all pretty similar. Basically the same rough techniques. What's happened is an alien substance has landed on Earth and we are trying to figure out what we can build with it and we're in this multiple overhangs. We have a compute overhang where there's much more compute in the world than is currently being used to train models like much, much more. I think the biggest models are trained on maybe 10,000 GPUs, but there's millions of GPUs. And then we have a capability and technique overhang where there's lots of good ideas that are coming out and we haven't figured out how best to assemble them all together, but that's just a matter of time kind of until people do that. And because many of those capabilities are in the hands of the labs, they haven't reached the tinkerers of the world. I think that is where the new – What can this thing actually do? Until you get your hands on it, you don't really know. I think OpenAI themselves were surprised by how explosively chat GPT has grown. I don't think they put chatGPT out expecting that to be the big announcement. I think they thought GPT-4 was going to be their big announcement. Iit still probably is and will be big, but the chatGPT really surprised them. It's hard to predict what people will do with it and what they'll find valuable and what works. So you need tinkerers. So it goes from hardware to researchers to tinkerers to products. That's the pipe, that's the cascade.", "Dwarkesh Patel", "When I was scheduling my interview with Ilya, it was originally supposed to be around the time that chatGPT came out and so their comm’s person tells me – Listen, just so you know, this interview would be scheduled around the time. We're going to make a minor announcement. It's not the thing you're thinking, it's not GPT-4, but it's just like a minor thing. They didn't expect what it ended up being.", "Have incumbents gotten smarter than before? It seems like Microsoft was able to integrate this new technology role.", "Nat Friedman", "There's two, there's been two really big shifts in the way incumbents behave in the last 20 years that I've seen. The first is, it used to be that incumbents got disrupted by startups all the time. You have example after example of this in the mini-computer, micro-computer era, et cetera. And then Clay Christensen wrote The Innovator's Dilemma. And I think what happened was that everyone read it and they said – Oh, disruption is this thing that occurs and we have this innovator's dilemma where we get disrupted because the new thing is cheaper and we can't let that happen. And they became determined not to let that happen and they mostly learned how to avoid it. They learned that you have to be willing to do some cannibalization and you have to be willing to set up separate sales channels for the new thing and so forth. We've had a lot of stability in incumbents for the last 15 years or so. I think that's maybe why. That's my theory. So that's the first major step change. And then the second one is – man, they are paying a ton of attention to AI. If you look at the prior platform revolutions like cloud, mobile, internet, web, PC, all the incumbents derided the new platform and said – Gosh, like no one's going to use web apps. Everyone will use full desktop apps, rich applications. And so there was always this laughing at the new thing. The iPhones were laughed at by incumbents and that is not happening at all with AI. We may be at peak hype cycle and we're going to enter the trough of despair. I don't think so though, I think people are taking it seriously and every live player CEO is adopting it aggressively in their company. So yeah, I think incumbents have gotten smarter.", "Questions from Twitter", "Dwarkesh Patel", "All right. So let me ask you some questions that we got from Twitter. This is former guest and I guess mutual friend Austin Vernon. Nat is one of those people that seems unreasonably effective. What parts of that are innate and what did he have to learn?", "Nat Friedman", "It's very nice of Austin to say. I don't know. We talked a little bit about this before, but I think I just have a high willingness to try things and get caught up in new projects and then I don't want to stop doing it. I think I just have a relatively low activation energy to try something and am willing to sort of impulsively jump into stuff and many of those things don't work, but enough of them do that I've been able to accomplish a few things. The other thing I would say, to be honest with you, is that I do not consider myself accomplished or successful. My self-image is that I haven't really done anything of tremendous consequence and I don't feel like I have this giant bed of achievements that I can go to sleep on every night. I think that's truly how I feel. I'm an insecure overachiever, I don't really feel good about myself unless I'm doing good work, but I also have tried to cultivate a forward-looking view where I try not to be incredibly nostalgic about the past.", "I don't keep lots of trophies or anything like that. Go into some people's offices and it's like things on the wall and trophies of all the things they've accomplished and I'd always seemed really icky to me. Just had a sort of revulsion to that.", "Dwarkesh Patel", "Is that why you took down your blog?", "Nat Friedman", "Yeah. I just wanted to move forward.", "Dwarkesh Patel", "Simian asks for your takes on alignment. “He seems to invest both in capabilities and alignment which is the best move under a very small set of beliefs.” So he's curious to hear the reasoning there.", "Nat Friedman", "I guess we'll see but I'm not sure capabilities and alignment end up being these opposing forces. It may be that the capabilities are very important for alignment. Maybe alignment is very important for capabilities. I think a lot of people believe, and I think I'm included in this, that AI can have tremendous benefits, but that there's like a small chance of really bad outcomes. Maybe some people think it's a large chance. The solutions, if they exist, are likely to be technical. There's probably some combination of technical and prescriptive. It's probably a piece of code and a readme file. It says – if you want to build aligned AIs, use this code and don't do this or something like that. I think that's really important and more people should try to actually build technical solutions. I think one of the big things that's missing that perplexes me is, there's no open source technical alignment community. There's no one actually just implementing in open source, the best alignment tools. There's a lot of philosophizing and talking, and then there's a lot of behind closed doors, interpretability and alignment work. Because the alignment people have this belief that they shouldn't release their work I think we're going to end up in a world where there's a lot of open source, pure capabilities work, and no open source alignment work for a little while. Hopefully that'll change. So yeah, I wanted to, on the margin, invest in people doing alignment. It seems like that's important. I thought Sydney was a kind of an example of this. You had Microsoft essentially released an unaligned AI and I think the world sort of said – Hmm, sort of threatening its users, that seems a little bit strange. If Microsoft can't put a leash on this thing, who can? I think there'll be more interest in it and I hope there's open communities.", "Dwarkesh Patel", "That was so endearing for some reason. Threatening you just made it so much more lovable for some reason.", "Nat Friedman", "Yeah, I think it's like the only reason it wasn't scary is because it wasn't hooked up to anything. If it was hooked up to HR systems or if it could like post jobs or something like that, then I don't know, like to get on a gig worker site or something. I think it could have been scary.", "Dwarkesh Patel", "Yep. Final question from Twitter. Will asks “What historical personality seems like the most kindred spirit to you”. Bookshelves are all around us in this room, some of them are biographies. Is there one that sticks out to you?", "Nat Friedman", "Gosh, good question. I think I'd say it's changed over time. I've been reading Philodemus's work recently. When I grew up Richard Feynman was the character who was curious and plain spoken.", "Dwarkesh Patel", "What's next? You said that according to your perception that you still have more accomplishments ahead of you. What does that look like, concretely? Do you know yet?", "Nat Friedman", "I don't know. It's a good question. The area I'm paying most attention to is AI. I think we finally have people building the products and that's going to just accelerate. I'm going to pay attention to AI and look for areas where I can contribute.", "Dwarkesh Patel", "Awesome. Okay. Nat this was a true pleasure. Thanks for coming on the podcast.", "Nat Friedman", "Thanks for having me.", "" ]
[ "https://github.com/ggerganov", "https://twitter.com/migueldeicaza", "https://twitter.com/JosephHill", "https://en.wikipedia.org/wiki/Amy_Hood", "https://en.wikipedia.org/wiki/Scott_Guthrie", "https://en.wikipedia.org/wiki/Kevin_Scott_(computer_scientist)", "https://arxiv.org/abs/1606.06031" ]
https://www.dwarkesh.com/p/patrick-collison
Patrick Collison (Stripe CEO) - Craft, Beauty, & The Future of Payments
[ "Advice for 20-30 year olds", "Dwarkesh Patel 00:00:00", "Today I have the pleasure of speaking with Patrick Collison , CEO of Stripe . Patrick, first question. You have an excellent compilation of advice on your blog for people 10 to 20 . You say there, that once you turn 35, you'll write some for people in their 20s. What advice do you have for us now, the people in our 20s? When is it coming?", "Patrick Collison 00:22", "I haven't really thought about that. The one piece of advice I've been wondering about recently is this: I said that people in their teens should go to San Francisco. I wonder if people in their 20s shouldn't go to San Francisco. That advice was a generalization. There's a significant set of people who should go to San Francisco. But there is a set of career paths that people ought to pursue and would derive most fulfillment from pursuing, that are also really valuable for the world, that require accumulating a lot of expertise and studying a domain in tremendous depth.", "I think San Francisco valorizes – this is also San Francisco's great virtue. San Francisco valorizes striking out on your own, iconoclastically dismissing the received wisdom. It praises the founding archetypes and lore of Steve Jobs and Bill Gates and all the rest. I'm way less successful than those people, but to some extent, Stripe, in as much as it fits a pattern, is an instance of that pattern. That's great, I'm happy that this phenomenon exists in the world. But the world needs lots of other things.  And I don't think San Francisco, using San Francisco as a kind of a metonym for a cultural orientation, encourages the pursuit of really deep technical knowledge.", "We're recording this in South San Francisco, which is most noteworthy in the corporate world for being the headquarters of Genentech . Genentech was co-founded by Bob Swanson and Herb Boyer. They produced cheap insulin for the first time with recombinant DNA . Like Herb Boyer couldn't have done that at age 23. Herb Boyer first had to accumulate all of the knowledge and the skills required to be able to invent that over the course of a multi-decade career. I don't know what age he was when he finally went and invented it, but he was not in his 20s. I feel like San Francisco doesn't culturally encourage one to become Herb Boyer.", "Or yesterday, at the time of recording this podcast, Patrick Hsu , one of the co-founders of Arc , which maybe we'll speak about later in the show. This is a biomedical research organization we started a few years ago. He announced this new phenomenon of bridge editing , which is a new recombinase where you can insert DNA into a genome. It's pretty early, but it might turn out to be quite consequential. In order to do something like that, you have to study for a long time and acquire a lot of technical skills.", "I don't quite know how to synthesize it yet, but as I think about advice for people in their 20s, I'm not going to normatively pretend to know or presume in which direction one should go in life. Obviously, there are successful examples of basically every strategy. I'm really glad that you're doing what you're doing at what age?", "Dwarkesh Patel 00:04:21", "23.", "Patrick Collison 00:04:22", "23. So that's...", "Dwarkesh Patel 00:04:24", "A podcast, I’ve got a podcast.", "Patrick Collison 00:04:29", "I think information dissemination is a really valuable thing in the world. The guy, who, last time I heard, was in the lead for Nat's Scroll Prize , learned about it, listening to your podcast. Increasing the catalytic surface area of certain kinds of information is a valuable thing in the world, so I'm very glad you're doing the podcast.", "Anyway, I don't presume to know what people should do with their lives. But in as much as I was trying to give advice, especially if they're reading my advice and not someone else's, maybe they're thinking about career paths that look directionally like mine, I think my advice might be: \"Maybe you should do something like what I did or I'm trying to do. But there are other paths as well. A lot of really important inventions in the world and a lot of the things that I'm most happy are happening, require a very different trajectory from mine. There are counterfactual versions of my life, where I pursued that path and who knows how well it would have worked.\"", "Last point is that San Francisco is very status oriented. Everything is status oriented, so the previous statement is kind of tautological. I feel like in San Francisco the entrepreneurs are held in excessively high regard. Look, I like entrepreneurs as a group in the world. All the companies built in Stripe I think are great. But there's a strange emphasis placed on entrepreneurship in San Francisco, that should not be people's only fixation.", "Dwarkesh Patel 00:06:27", "What I like about this and what I admire about you is that you have this sense of contrarianism – the way you often challenge what people are expecting to hear from you in a given moment. You just really want to tell them the opposite. When EA was a little more popular, you were talking about the important problems, and when it was down in its depths, you were like: \"Hey guys, pay attention.\" But on this particular piece of advice…", "Patrick Collison 00:06:49", "Michael Nielsen says that every field in science has way too many adherents or way too few. The market is almost never in the right equilibrium and I think something like that might be the case for EA. I think reflexive contrarianism for the sake of it is also tired. If you're just contrarian to the prevailing mood, then you're following the prevailing mood but with a sign bit inversion. I don't endorse that either.", "The herd is a really powerful phenomenon. One of the learnings of my adult life has been something, that everyone knows and says or frequently hears, that you should be very wary of following the prevailing tides and moods and whims and everything. But it's freaking hard to do in practice.", "Dwarkesh Patel 00:07:42", "So what practically does that look like to hone your craft in any of these disciplines that take a long time? You've spoken and tweeted about some of the problems with modern universities. Is that still the de facto path if you want to be the great biologist at Arc or something?", "Patrick Collison 00:07:57", "In many domains, I don't know. For example I have no facility with or experience", "with doing things in hardware, which is not a small domain. If you wanted to become a super skilled practitioner there, what's the best career path? I don't know. Maybe it's to drop out and join SpaceX or something. I'm not necessarily endorsing pursuing the most establishment and credential oriented path. People should try to find the gradient of maximal learning in whatever it is they care most about. The question then is what that is.", "For biology, not that I'm a biologist, but it is very clear that in order to do really good work, there are a lot of \"bench\" skills one has to acquire and there is a lot of actual specific knowledge. Any kind of life wasn't designed with neat fundamental principles the way that maybe physics was. A lot of it is obviously evolved and contingent and messy and complicated and all the rest. So there is a lot of specific factual stuff to learn.", "For those two reasons, there are very few successful pure autodidacts in biology. In virtually every case that I'm aware of, at some point, you have to get direct experience in and with a top lab, where you're seeing how people actually do it in practice. This also ties back to what we were discussing previously: your question about the founders and what they learn from each other and so on.", "There's an interesting book, Apprentice to Genius , that follows three generations of scientists. So someone who mentored somebody else, who in turn mentored another scientist. And they're all extremely successful. The book is this description of what they all did, but also this reflection on: \"What is it that was transferred?\".", "For example, one of the most important and subtle questions in science is problem selection. How do you choose what to work on? No one tells you what to do. And you do have to answer this question multiple times. With a company, in some sense, you have to decide it once, and then it's an iterative process from there. Whereas in science, you're frequently pursuing completely new problems. You need to choose something that's sufficiently important and hard, so that it would be important if you succeeded, but it also mustn't be so complex that progress becomes unachievable. This is what mentees learn from their mentors according to the book.", "Another thing the book talks about is learning about high standards and what they actually are. When I talk to people in other domains, I hear very frequently, that when they worked with X person or Y organization or in Z environment, they learned what great actually is.", "And that just permanently changed their sense for what their own standard for their work", "ought to be. So one version of what people in their 20s should do is get some ideas for domains you're interested in, but then figure out where can you learn the highest standards, where are the highest standards embodied, and where can you go and experience that first hand.", "Progress studies", "Dwarkesh Patel 00:12:11", "Before we get back to Stripe and Arc Institute, I want to touch on the Parker study for a second. There's a view that says: \"If we improve the NIH 10% or whatever percent, are we really making a dent in the fact that ideas are getting harder to find over time? And how much of a difference do institutions make anyways? Is it just about a number of researchers and how many people in society you can put into research? It's not like Singapore can get a much more effective scientific institution that lets it compete with America in science by following this approach.", "What's wrong with that intuition?", "Patrick Collison 00:12:43", "Noah Smith and others have talked about, I can't remember the term he used, something like “moneyism”. He had a funny phrase. It pertains to the presumption that there is some constant elasticity between investment in some particular outcome, like building a semiconductor factory in Arizona or a new bridge, and the outcome of the factory or the bridge. First of all, the conversion rate between those inputs and the output is not a cosmological constant. Maybe any of these things could be done for half or a tenth of the cost. Secondly, there are even deeper questions as to, is it possible at all? What else would have to change for it to be possible? What are the other constraints? By talking about these things in funding and dollar terms, you're making the implicit assumption that the only relevant constraint is the financial one, where in practice, maybe it's permits or labor shortages or other things. In the context of the NIH and science and R&D, I'm really skeptical of this approach being brought to bear, where we can just talk about the amount that we're spending on R&D and think that that's implicitly a useful measure of the output.", "To a fairly close approximation, there were around 1% as many practicing professional scientists in the US pre-World War II as there were post-World War II or say even 1950.", "The other epiphenomena in papers or patents and so forth tend to follow pretty similar ratios. We got a lot of pretty good stuff in the first half of the century. Despite increasing the amount that we spend between two and maybe slightly more than two orders of magnitude it's not clear to me that there is a direct linear relationship between spending and output. When analyzing the NIH or how we should pursue any of this stuff, I try to get more concrete and tactile and think, what would success here look like? What is happening today at the microscale? What are the actual problems? What could success look like at the microscale? What might it look like to scale that up?", "One example of that, we ran a survey of the Fast Grants grant recipients after Fast Grants asking about their normal work, and not about anything to do with Fast Grants itself. We asked them if they had flexible funding, that is to say if they could direct their current research dollars however they wanted how much their associated research program would change. And we gave them three options: not much, a little, and a lot. Four out of five(79%) said that their research agenda would change a lot if this constraint was removed.", "Asking: \"Should the NIH funding level be X or 1.1X or 1.2X or whatever?\" seems to me like a bad way to analyze this question as compared to: \"How bound and constrained should an NIH grantee be in choosing their research agenda?\" Maybe their judgment is way better than that of the committees, not saying it is, but who knows? Maybe there's a 5X improvement to be generated just by making that one switch? I'm very skeptical of these financially-oriented frameworks.", "Dwarkesh Patel 00:16:55", "Maybe financial is not the right word for it, but just trying to map inputs to outputs is the framing, which you're using to compare the pre-World War II inputs to what's happening now. If it was particular to the scientific institutions, you'd expect, for example, that things that are disconnected from the NIH-specific structures would show different trends. You've talked a lot about the fact that it's getting harder to find impactful papers. Sector through sector, it's not like NIH is running Moore's Law progress, right? But even there you see that you need exponentially more researchers to keep up the same level of progress. It does seem important to have these \"level effects\", that are one-time, in the case of something like COVID happens, where we say: \"Yeah, we need that level-effect\" right now.\"", "But if we're framing it in terms of hundreds of years from now, I wonder if these events are going to be a thing that increases growth rates, which is a sort of framing that is also applied when talking about progress.", "Does that make sense in that context, when all sectors are seeing slowdowns, which seem consistent with how the economy and science progresses over time?", "Patrick Collison 00:17:57", "“I don't know” is the short answer. It's really puzzling. The constancy of US GDP growth is one of the weirdest things. I don't know if I've got an explanation for it. An obvious thing to do would be to shrug and say: \"OK, well, it's overdetermined, and that's just how countries work.\" But you can look at other countries, where it's manifestly not the case. What is it that's weird and special about the US?", "The thing that I wonder about in a lot of these cases is: you could get many of the observed system phenomenon characteristics, if we weren't actually adding productive capacity. That's a simple way to explain a lot of it, in that if you're just adding exponentially more unproductive capacity, then on a stylized level, a lot of this stuff would just fall out of it. Now, I'm not saying that we're necessarily doing that, but it could be that maybe we're making them... There's lots of ways where that could be quite effectively going on, even if it's not the case that the marginal people or things or organizations themselves are bad. It's just how the components interact. But the fact that you could get these exponentially diminishing returns through the addition of ever more nonproductive capacity makes me not persuaded that the low-hanging case is necessarily true, and gives some weight to the prospect that it's fundamentally structural, cultural, or organizational.", "Just to give a micro example there, and it's a very basic and an obvious one, it's interesting to compare the SpaceX R&D budget and the NASA R&D budget and to actually look at those two time series together. Maybe we're just returning to the financial point again, but it seems pretty clear that the trajectory of NASA's efficacy has not fully followed the trajectory of its inputs.", "Dwarkesh Patel 00:20:29", "Yeah, although the point about: \"the marginal inputs we put into science have not been as effectively used as what was before\". The 1X then was a 1X of much higher quality, than 100X now. It's not clear what you do to fix that. If it's just a case, that there's a limited amount of John von Neumanns in your society, that are part of the pre-World War II 1X, it's not like we can just put 100X more John von Neumann-type physicists into science.", "Patrick Collison 00:20:56", "If the binding constraint is the number of John von Neumann 's, then yes, that's bad news, I guess. There's not a lot we can do on the margin. But I'm not sure that it is. I keep going back to the cultural and sociological point. Gerty and Carl Cori , they ran a lab at the University of Washington, St. Louis. And six of their students, if I recall correctly, went on to win Nobel prizes. They had a well-known lab , they got good students, but they weren't the most prestigious lab in the world. It's not like they got to cherry-pick every year the single most promising person, so something was going on there. There's a book about it, which tries to get into this a little bit. I don't know if I can figure out quite what it was. There was also some good fortune, where they got into molecular biology at a good time, but I think there were these \"hopeful data points\". Again, they were extremely brilliant people, but the thing that distinguished them and their students was not that they were these seven \"sigma-Martians\", it was rather that they found organizational structures and cultural practices that really worked. Those are, at least in principle, more replicable.", "Now, you might still say: \"OK, fine, in theory. But how do you actually do that?\" That's the big open question.", "Arc Institute", "Dwarkesh Patel 00:22:20", "OK, that's a great point to talk about, Arc institute. I think you just answered this question but still: It's not exactly like biology research is, it's something that society has neglected. So what's the theory of change here? Is it just a story similar to Stripe? In that, if you get the right people, there's tens of billions of dollars of biology funding. Getting the right people, the right culture and right education is what it takes, right?", "Patrick Collison 00:22:45", "Even though there are lots of scientists and lots of universities, there's a lot of homogeneity today in how science, and in particular, how biomedical science is pursued, where basic research is done in an academic context before there's any commercialization prospect in sight. I don't know that this model is necessarily a bad one. Certainly, we're not claiming that it's a bad one.", "Construct of universities, labs, PI — a principal investigator running the lab, who applies for grants primarily to the NIH, maybe supplemented by other sources, grants reviewed by committees with \"study sections\", as they call them, with pretty rigid scoring criteria and so on — that is the structure and it seems suboptimal to me.", "Homogeneity is bad in basically any ecosystem, especially ecosystems where you're producing or seeking tail outcomes. And we thought that, for a variety of reasons, from first principles, other models should be possible. We had specific ideas as to how one particular model might be a good idea and complementary to the status quo.", "In very short terms, what's different about Arc is: one, scientists are funded themselves to pursue whatever they want. So it's curiosity-driven research, whereas NIH grants are given for projects. Second, we build a lot of in-house infrastructure, so that scientists can draw upon other platforms and capabilities that they don't have to build and maintain themselves. Whereas, in the standard university academic context, scientists would virtually always have to do that in-house. Because of the natural scale constraints on any given lab, that effectively circumscribes the ambition of a possible research program. And thirdly, we try to provide career paths for people to remain in science if they don't want to become principal investigators, whereas the university structure commingles the training purpose of academia with the execution — the people who are doing the work there are typically the grad students and the postdocs, who are themselves, at least nominally, on the career path of eventually becoming principal investigators. There are lots of people who, for all sorts of different very valid reasons, love science and the pursuit of research, but don't want to be a manager running a lab, choosing their own research programs, and dealing with all of the overhead and typically grant applications that are concomitant with that.", "With Arc, we have a real emphasis on hiring scientists to finish their postdocs, finish grad school, who know that that's what they want to do in their lives. And again, it isn't really a career path for them today. One of the things that's really exciting about the discovery, that we mentioned, that came out yesterday, this new bridge editing technology, is: that work was led by one of senior scientists, who had finished his postdoc. It's not clear to me that he wanted to become a PI, but he loved science, and he's an amazing researcher, so he's able to go and have that career at Arc.", "In addition, the prospect of mobile elements being usable in this way for genomic insertion, whatever, — that's a pretty speculative, out there thing. Had he applied to the NIH to go and pursue that? He didn't, so I don't know what the outcome would have been.", "But Jennifer Doudna 's work was, if I recall correctly, funded by DARPA , because her CRISPR NIH applications were rejected. Katalin Kariko 's NIH applications for mRNA vaccine work were famously rejected. It at least seems very plausible that it wouldn't have worked out. All these things are random, and I can't make any definitive claims about what would have counterfactually happened. But it seems plausible to me that this thing announced yesterday wouldn't have happened or would have been less likely to happen in a different environment.", "Dwarkesh Patel 00:27:25", "When we think forward 10 or 20 years, this specific line of research, where you understand the effects of the genetic architecture on different traits, and you can edit, invert, insert the DNA arbitrarily. You've solved cell anemia — you've done the obvious things. What does that lead to? What are you excited about?", "Patrick Collison 00: 27:50", "The thing that is really interesting about it is using it as a new kind of telescope: when people hear about CRISPR, there's an obvious and legitimate excitement around using this to cure things directly in the body, as a kind of therapeutic. You can also use CRISPR to try to figure out what's going on in cells and in cell cultures in a structured way. So the body is interesting in that it has this switchboard, akin to DJ’s with those fancy mixing sets, of 20,000 genes. And with CRISPR, you can systematically go and perturb each gene one by one, mashing all the keys in sequence, and try to figure out what the effects of perturbing this versus that are. If you do that in a cell culture, where you can subject the cells to some stressor or treatment, you can see differentially how different perturbations affect different cell outcomes. Or you can use it for synthetic data generation more broadly, where you could perform all these perturbations, then sequence and see what's happening in the cells and so forth. And single cell sequencing has come a long way. Anyway, the point is, there's a lot you can do with gene editing for discovery and for data generation in the broadest sense.", "That's really compelling, because a lot of diseases are \"complex\" in the field's jargon. Yes, they're complex in the colloquial sense, but they're specifically complex in that they're not infectious. They're not just some pathogen getting into you. And they're not monogenic, like Huntington's , where it's one specific mutation. Instead, they are some combination of environmental factors, but maybe some genetic factors as well — they are somewhere in between. These include most autoimmune diseases, most cancers, to some extent cardiovascular disease and neurodegenerative disease — the big ones we haven't yet solved.", "Coming back to functional genomics technologies, what's interesting is trying to figure out how it is that the genetic component of those diseases works. And even if that's only a small contributor, it can potentially shine light on what the general pathway is. So the question would be, and this is speculative, none of this has actually happened: \"By figuring out the genetic interactions between genes and, say, Alzheimer's, can you figure out how Alzheimer's arises, which we don't understand today?\" Then once you understand how Alzheimer's arises, maybe you can use conventional technologies to figure out how to inhibit or modulate those pathways. That's what we're really excited about from a functional genomics standpoint. There's an AI angle as well that we could talk about if you want.", "Dwarkesh Patel 00:31:09", "How do you think about the dual use possibilities of biotech? I am sympathetic with the idea that if you think of prior technology, like Google search or even the computer itself, you could forecast in advance, like: \"Oh, this has all this dual use stuff.\" But for some reason, history has been kind to us. The meta-lesson here is: “Keep doing science.”", "With biotech, we don't have to go into specifics here, but are there specific things you can think of with this specific technology? You can imagine some nefarious things. How do you think about that? Why not focus, let's say, on ameliorating the risks first or something like that?", "Patrick Collison 00:31:49", "I don't think that the binding constraint on harmful use of biotechnology or bioweapons today is pure biological capabilities. If some set of incredibly capable, intelligent people wanted to cause tremendous harm, presumably with pathogens or with something biological, they wouldn't necessarily need to invent anything new. They would just need to apply currently known techniques in a malevolently directed fashion.", "There are some concerns and risks with respect to things that don't invent new technologies, but do make them more accessible. The question is, what would the effect on the world be if there was a sufficiently sophisticated LLM that could help anybody synthesize and disperse smallpox? I don't know laws of physics that prohibit such an LLM existing — I presume they don't. Would the world be fine if such an LLM was widely distributed? Maybe, but maybe not. So there is that threat factor, but my point is: I don't think knowledge at the frontier of biology is the relevant margin here.", "If we take this seriously, we don't need crazy AI risks to motivate this. The world is perfectly capable of originating really severe pandemics and pathogens itself, plus all the other diseases that are not pathogenic. So we got other problems. Whether we care about the possible dual use harms you just mentioned, or we just care about things that already exist, to ameliorate both of those, we do need enhancement of our capabilities. There are a lot of biological problems that we don't know how to solve today. In that respect, if one were to do what you're proposing and try to advance the defensive side of this, I don't know that, what one would do, would necessarily be that different. Because there are just fundamental capabilities that we would presumably need to have, that we don't have today. By trying to solve current human diseases, you're probably also pursuing something pretty close to the best steps to solve the potential diseases that malicious actors could cause in the future.", "AI & Fast Grants", "Dwarkesh Patel 00:34:27", "That makes sense. Zooming out from bio risk in particular, how are you thinking about AI these days?", "Patrick Collison 00:34:34", "Everyone has to be highly perplexed, in the sense that the verdict that one might have given at the beginning of 2023, 2021, back, say, the last eight years — we're recording this pretty close to the beginning of 2024 — would have looked pretty different.", "Maybe Gwern might have scored the best from 2019 or something onwards, but broadly speaking, it's been pretty difficult to forecast. So the basic position to a first order has to be some degree of humility. As your blog post identifies, the big question right now is: \"To what degree scaling laws hold?\" And if they hold, then what exactly is it that we're, asymptoting is maybe a presumptuous word, it's not an asymptote, but what is it, that we're approaching? We don't necessarily know the shape of that thing, whatever it is. How one should feel ought to be very sensitive to the exact parameters of those curves, and I don't think anyone knows what the true value of those parameters actually are. It's clearly going to be important, is already important today, it has a pretty central bearing on both Stripe and Arc. We'll see.", "Dwarkesh Patel 00:36:17", "I totally agree with that general sentiment but I wonder if the meta lesson that we got from COVID, for example, and with things like Fast Grants was: you obviously can't predict these things in advance, but the most important thing, in addition to specific countermeasures you are trying to come up with in advance, is, when the crisis is happening, having competent individuals who can synthesize and organize information, and also having new initiatives and institutions to get the right thing done.", "Patrick Collison 00:36:48", "The adaptability premium is probably going to go way up over the next decade.", "Dwarkesh Patel 00:36:52", "Yeah. With that in mind, I know you already have a couple of day jobs, but I feel like something similar to fast grants, when the time comes down to it, should be there. You'd be one of the top people you could think of, in terms of having expertise and respect in a wide range of domains and competency as a leader. Just keep it in the back of your mind, maybe in the middle of your mind, given how far we are into the transition.", "Patrick Collison 00:37:18", "Well, Fast Grants was three beloved squirrels in a trench coat. I was one of the squirrels. It was also Tyler Cowen , who's an amazing person and a great friend, and then my wife , who's also one of Arc's co-founders. Fast Grants was not this giant, impressive, office", "that would qualify me for anything at all.", "Dwarkesh Patel 00:37:43", "But it isn't hard to be giant, right, to have that kind of big impact.", "Patrick Collison 00: 37:46", "As an objective matter, that's true. John and I try to be very self-aware of the limits of our expertise, which are very proximate to us. I'm sure if something like that was necessary, that'd be.", "Look at Operation Warp Speed ! They chose a super effective domain expert, Moncef Slaoui , to run that and it was monstrously successful, truly remarkable. I don't know who the Moncef Slaoui of the problem is, it would depend on the problem in question, but my recommendation would be: \"Figure out who Moncef is and go hire Moncef.\" I think anybody who deemed me the Moncef of that thing is probably mistaken.", "Dwarkesh Patel 00:38:34", "I think you're being too humble. Staying on Fast Grants, now we have the retrospective of how effective the fast grants recipients were, compared to the other grants that were given out by, let's say, the NIH or NSF. To your knowledge, what has been the reaction of these institutions to the discrepancy between the speed and effectiveness of fast grants? Have they analyzed their protocols and what happened during COVID ? Is there any retrospective there on their part?", "Patrick Collison 00:39:02", "Not to my knowledge, but I don't want that to sound like an indictment. Maybe they've done a lot of reflection, and I just don't know about it. I don't think I would know about it, even if it had happened. So, I don't know. I don't know anything about the response at CDC or FDA or NIH or NSF or any of the relevant organizations or their international equivalents. So what I'm saying should be taken as not only not critical of them, but not even as a comment to them. I just don't know what they did.", "In general, organizations are not awesome at self-reflection. I assume as a default prior, that some of the dynamics we discussed at the beginning of this are rooted there. None of the people who started those organizations are there today. What exactly are the incentives of those leaders? It's not clear to me who would have the incentive to really take stock in a fully objective and self-critical way, to figure out what was done well and what was done poorly.", "Dwarkesh Patel 00:40:29", "I promise not to be too myopic about AI, but one more question. Long-term, we can't forecast. Maybe even medium-term, we can't. But near-term, it looks like we might have things that look like AI agents, and they might need to trade. What does the financial infrastructure for AI agents look like?", "Patrick Collison 00:40:45", "That's a really interesting question. Automated or autonomous transactions already exist to some extent today. Lots of services have usage-based billing, right? A lot of the expenses being incurred are autonomously incurred. No human is pushing a button when", "Stripe does most of what it does with cloud computing and incurs some cost with some cloud service. In an extremely primitive way it's happening today. I assume it will follow some gradient, where some of those decisions will be made by an LLM or LLM equivalent. There'll be an almost unnoticeably smooth continuum up to very considerable degrees of autonomy. It's not that we're going to wake up some month and be like: \"Oh my god, suddenly the bots have been unleashed.\" This will now sound very parochial and maybe I'm getting excessively tactical, but there'll be very interesting questions around the legality of bots in terms of: are they treated as the responsibility of the owner? Is there any degree of independence granted? How does liability work? Which rails are best suited? What kind of transaction velocities are we talking about here? Because if it's a billion transactions a second, then the properties of that system should look very different to one giant tiering transaction every day. If we just use the analogy of the usage-based services, those tend to incur liabilities in tiny increments, but then to settle on a monthly basis when you pay your bill. So maybe these agent transactions will have that character. There are a lot of practical applied questions, but I think what you're saying around these autonomous transactions conceivably being an important dimension, is very true and real and is one of the interesting ways in which the economy might change and expand over the next decade.", "It's possible that the crypto plays some role here. We take KYC and AML very seriously for humans: we want to know the human that is associated with some particular financial activity. Obviously, that's a murkier question in the context of some AI agent. If we, in some blurry sense, look at crypto as the part of financial services that is de facto exempt from AML by design, then maybe that plays a role.", "Stripe history", "Dwarkesh Patel 00:43:46", "How long before Stripe was founded do you think a product like Stripe could have been invented?", "Patrick Collison 00:43:50", "That's a good question. Depending on what exactly you define Stripe as being, conceivably decades earlier in that, at some level, PayPal is a kind of Stripe. There were many payment companies before PayPal. You could go all the way back to cash registers, so it depends on definitional questions. The particular secular tailwinds that we benefited from were tied to the rise of app stores, the on-demand economy, and maybe the startup boom post-YC and the financial crisis; those particular tailwinds were idiosyncratic and specific to Stripe. The GFC was 2008 2009 and Stripe was founded in 2010, so as much as you define those as being core, then not that much earlier. Mostly my story of Stripe is one of market inefficiency. I do wonder why much of this didn't happen sooner.", "Dwarkesh Patel 00:44:58", "I always find it really interesting when there's cases, where it wasn't even the case that: \"Well it could have been started sooner, but there was nobody in the market.\" There were many people in the market. And they weren't just random people, they were technology companies headquartered in San Francisco who were in the market. Do you have some explanation for why it didn't occur to them?", "Patrick Collison 00:45:15", "I'm hesitant to generalize too much, because I only have maybe n equals 1 experience. It's dangerous to over extrapolate from that. Maybe n equals 2 now with Arc, as a very different kind of organization, but an organization nonetheless.", "Dwarkesh Patel 00:45:30", "Or if you include all the features of Stripe, n equals 10, 20 something.", "Patrick Collison 00:45:34", "OK, yes, depending on your definition, maybe there's some kind of samples out there. My general view is: \"For most products and most businesses, things can just be done much better\". Moats are typically overrated. The payments are a great example of a domain where, on a logical basis, you would say that there are so many sources of defensibility:  there's the network effects of the account holders, the data network effects/economy of scale for fraud, regulatory modes and barriers etc. etc. And yet, not only does Stripe exist, but there are lots of others. There's a whole fintech ecosystem today, right? It gets down to deep questions of: \"What is the binding constraint on the number of effective organizations that exist in the world?\" For any given sector, why is it that number of companies rather than twice that number of companies and so on? It's about motivation, ideas and people's willingness and determination to organize talent and so forth. But these are kinds of more sociocultural explanations.", "Hamilton Helmer is probably the leading scholar on various sources of defensibility for businesses. He has this niche, but very well known in the niche, book called Seven Powers . It tends to disaggregate the various sources of market power in this respect. I think that is true and important, insofar as it goes. Nonetheless, it's kind of strange to me that nobody had done Stripe before Stripe.", "Dwarkesh Patel 00:47:19", "When you think about the fact that moats are overrated and just doing the thing is underrated, what is Stripe's mode in that context? Does that make you think differently about Stripe's mode?", "Patrick Collison 00:47:31", "Yes, I do think that one can have organizational and cultural moats. Maybe this contradicts what I was just saying, or it's consistent with it in the sense that it's a kind of cultural explanation. In as much as we have a moat, it's because we have a very good understanding of our domain. We have a set of people who actually care about solving the problems, who are continually paranoid at the prospect that we might be forgetting something important. So we are trying to figure out what the important thing that could supplant Stripe's approaches is, and make sure that we build that first.", "You're familiar with Conquest's laws . There's Conquest's third law, which is that: \"One should model organizations as if they're run by a cabal of their enemies.\" Presumably it's tongue in cheek, but it's interesting to try to think: \"Well, what is the kernel of truth in that and why would it be there?\" I think what's going on is: most organizations, when they start out, are actually trying to achieve their stated goals. Somebody started the organization for a reason and probably it was for the stated reason. But then over time, that person and that set of people who initially populated the organization depart and some set of new people come to take their place. And there's multiple iterations of that, there's generational turnover on a continuous basis. But say, for the fifth generation: Why are they there and to what degree do their particular incentives align with the originally stated goals of the organization? There can be a lot of misalignment there, where they're following a local path, conceivably even the leader of the organization does it, not necessarily through any fault of their own. They're human and they have their own incentives and again, the original, constitutional incentives of the organization might be quite different. This phenomenon is a fact of life and for me these kinds of explanations are much more useful in trying to figure out why some of these things either happen or don't.", "And to your question: \"In as much as Stripe has a moat, what is it?\" Others can judge to what degree it's actually manifested and rooted in practice. I think it is, but I'm a biased observer. I think it would be, that people at Stripe really care about solving the problems", "that we say we are trying to solve.", "Dwarkesh Patel 00:50:20", "Yeah, the point about the misalignment over generations or over time is interesting. Do you have examples of institutions, which have for decades or centuries managed to keep their original, not only mission statement, but organizational competence? Because if you think of tech companies, even the oldest ones have not been around that long, right?", "And they're some of the biggest tech companies in the world. And the median age of the corporation is famously low. What is a good example here?", "Patrick Collison 00 : 50:48", "Some of the explanations around the effects of shareholder capitalism suggest that it influences the incentives of organizations and their long term fates. Those theories have some credibility and it’s plausible that shareholder capitalism even attenuates the duration of some of these organizations. I'm not saying that's definitely true, but I find the idea that it could be, incredible. It's unclear if that's necessarily bad if it is true, right? In that, are we on the side of the humans, aggregate innovation in the world, or corporations, or quad legal entities? The answer isn’t clear to me. It should be the third.", "At the same time, if you look at Europe or places like Denmark, because of the tax code there, a lot of organizations are either controlled or substantially held by non-profit foundations. For example Novo Nordisk – the GLP  company, Maersk – the shipping company, I believe also Lego. A lot of these corporations are controlled by foundations and usually have a lot of their stock held by them. In many cases that has the secondary effect, where they actually embed their mission into a legally binding constitution. I'm not an expert on Novo Nordisk, but I happened to get a book about it over Thanksgiving and there’s also a book on the Danish Industrial Foundations. It's enshrined in their constitution that they have to make insulin broadly available and really cheap or at least that is the case in Scandinavian countries. So it is allowed to charge market prices elsewhere but they're legally obligated to reinvest profits in R&D. Is that somehow causal in the fact that they made one of the most remarkable pharmacological discoveries of the last 20 years: GLP-1 agonists ? – Plausibly. These questions of “Why is it that the median age of organizations and corporations is what it is?” are definitely interesting and I suspect the reason is somewhat dependent on the way we've chosen to organize large corporations in the US today.", "Dwarkesh Patel 00:53:31", "The thing you're mentioning about this firm seems very similar to the export-led growth", "in Asia.", "Patrick Collison 00:53:37", "Totally.", "Dwarkesh Patel 00:53:37", "The idea of tariffs. There's one company tasked with making the cars, so you better make the cars good. You have no competition but you have to invent the best car in the world.", "Patrick Collison 00:53:45", "Yes, we are all fans of Smith , Ricardo and even they are less dogmatically attached to free trade than people today interpret them as being. People like Friedrich List and other, not quite contemporaries, but quasi-contemporaries are underrated on a relative basis.", "As much as you believe the kind of sociological, cultural skill, even vague alignment in the more interpersonal sense, in as much as you think these are important and explanatory, then you end up thinking about some of the things the US raised.", "Dwarkesh Patel 00:54:31", "That's really interesting to hear you say that, because if you think about Stripe's mission –  it's to facilitate global trade, to make sure that some firm from India can compete with any firm in Nigeria or whatever. So the room for you to have this sort of learning curve where you're less efficient than the global competition should be less, if Stripe exists, right? Isn't Stripe the anti-List company?", "Patrick Collison 00:54:53", "Well, it depends which version of List. To be clear, I'm not specifically endorsing these tariffs and trade barriers. The history associated with them is checkered at best. Look, it's possible that if you have a specific sector where you have clear goals, a credible path to actually achieving some substantial degree of success and some conjoined propositions, then some degree of activist trade policy might be the beneficial thing to do. I don't think that that describes most sectors in most countries at most times.", "Stripe Climate", "Dwarkesh Patel 00:55:43", "That's so interesting. I think there's an interesting thread here in how it relates to Stripe climate, in that you're subsidizing learning curves that East Asian countries did for their own internal companies. You haven't picked out a specific company that's going to necessarily be the key to carbon sequestration. How do you think about this?", "Patrick Collison 00:56:04", "Well, a way to unify the two points, and I'll speak about Stripe in a second, is Say's law about demand creating supply. In as much as Stripe aggregates more and more global demand, it seems too self-aggrandizing to call it “The theory of Stripe\", but some vague hunch in Stripe is that this aggregation of demand can have important expansionary effects with respect to the ensuing supply.", "Stripe climate is some version of this hypothesis, applied on a much smaller scale than Stripe itself, but still real and maybe important. For folks who aren't familiar, which I assume is most of your audience, the basic idea goes like this; We observed in 2018, that everyone seems to agree, that carbon removal will be very important. Even if we decarbonize the economy on the most optimistic timeframes, there'll still be an accumulated stock of carbon, which will be a problem. It sounded pretty weird, that there were virtually no carbon removal companies in the world in 2018. Maybe there were two or three. No companies had ever purchased from carbon removal companies, which were really sort of science projects. So we thought: \"Well, somebody's got to start and it might be valuable to not only transfer some dollars, but to confer some credibility on this sector.\" Not that Stripe is the world's most credible company, but it's better than nothing. So we started contracting with some of these carbon removal companies. That went pretty well and they seemed kind of appreciative of us and so we thought somewhat more about this.", "Then, in 2021, we formed Frontier , which is an AMC , an advanced market commitment. That was inspired by the first AMC, which was a pre-commitment to purchase vaccines for developing world's countries for diseases that were market failures, where either pharma companies hadn't pursued the vaccines, or where the profits weren't sufficient to pay for the program. So we decided to do this for carbon removal. We raised a billion dollars. Stripe was the first investor. We're not actually investing, we're just buying, so we were the first company to commit. Then we were joined by Shopify and Alphabet and Meta and JP Morgan and a bunch of other companies. And now there's a fairly active sector of carbon removal companies.", "Frontier has contracted with between 40 and 50 companies, the overwhelming majority of which didn't exist when we started out with this. We ran an anonymous survey back at the end of last year, where we asked them to what degree was the existence of Frontier causal in their starting the company in the first place. Again, it was an anonymous survey. I think it was 74% of the companies that said that Frontier played a causal role in their starting the company. So these inducement effects can be pretty significant.", "Dwarkesh Patel 00:59:10", "Yeah, that's huge. What are other ideas you've come across, where an AMC would be an effective instrument of moving forward the tech?", "Patrick Collison 00:59:16", "That's a good question. We've been having that discussion internally. It's not that we plan on doing it ourselves necessarily, but I'm wondering: \"Are there people we should share our technology with?\" Not even technology per se, but share our experience and try to help along. There's still a lot of stuff in the biomedical fields. Patents are pretty useful insofar as they go, but there's a lot of innovation that seems socially beneficial, that patents don't provide a way to cover the cost of.", "There was some excitement a few years ago about mannose, which is a sugar. There was one or maybe a few papers that suggest that maybe tumors will selectively take up mannose rather than glucose, but they won't actually metabolize it properly and they'll just die. Maybe this could be an effective onco-treatment.", "Mannose is like a generic sugar. It's been understood for more than a century and, importantly, you couldn't patent it. So it's not clear who has the incentive to even fund the work to test, whether or not this would actually work in practice. This is not an endorsement of mannose, but there are things of this shape, where there's something that clearly might be very beneficial, but it's not clear how the economic structure of the market can make it possible. There are still a lot of those across the biomedical landscape.", "There are still a lot of vaccines that could, in principle, exist that don't, like Lyme disease. There was one vaccine that was withdrawn from the market over safety concerns, that I think were misplaced, but there's still no vaccine.", "Dwarkesh Patel 01:01:21", "It's not even that well understood, right? People have chronic Lyme disease. We don't know if it's legit or not.", "Patrick Collison 01:01:29", "Exactly. But it's a good question. Maybe some of your listeners will have ideas for fields, where we sorely need an AMC.", "Beauty & APIs", "Dwarkesh Patel 01:01:39", "I want to go back to Stripe for a second. So you're famously appreciative of craft and beauty, but you also appreciate the power of scale and growth.", "Patrick Collison 01:01:52", "And speed.", "Dwarkesh Patel 01:01:53", "Oh, interesting. Is there a type of craft that is just not amenable to speed, growth, scale? If you think of a Japanese chef, he's been learning to cook rice for a decade, and then he can move on to sushi. Is that just not competitive in the modern world?", "Patrick Collison 01:02:07", "Craft, scale, and speed. I don't know if they are strictly necessarily intentioned in every case, but they're definitely frequently intentioned, so yes is the short answer to that. At the same time, a lot of the most successful companies are those that are distinguished by the extent to which they exhibit appreciation for and skill in realizing craft and beauty. LVMH is one of the largest companies in the world, and that's literally their business. Tesla is pretty good at this. They're good at many things, including this. Obviously, there's Apple. TSMC is not the Japanese sushi chef you mentioned, but it's the TSMC chip sushi chef in Taiwan. They have so much tacit knowledge and difficult to transfer skills. It might be the case that craft and the pursuit of it is as important as it's ever been. Certainly, as Stripe has gotten larger, we have come to greater conviction in this.", "Part of what's interesting about these aesthetic qualities is they're generally speaking unquantifiable. I don't know if they're intrinsically unquantifiable, maybe you could train a model to do so, but today, they are broadly speaking unquantifiable. And yet they influence people in significant ways. People very demonstrably care about aesthetics. And if they're a company, they care about the aesthetic characteristics of the products that they produce. On an intuitive level, people know that that's true. But it's difficult to manage that at an organizational level, where there isn't a P&L associated with it, and if you're screwing it up, you don't see a neat time series decline.", "Over the 14 years of Stripe, we have, not exactly through trial and error, but by studying cases where things worked well and less well at Stripe, what customers responded well to, and so on, understood, that even in a domain like ours, where we are selling primarily to businesses, that is something that's truly important. Getting back to what we were discussing previously, in as much as the sociology and \"cultural\" explanations of defensibility are real, the best people consider themselves crafts people in their domain and they really, above almost all else, want to work with the best other people. It may almost be true, that even if from a customer-facing standpoint, craft was not valued by the market, you might still want to build an organization that indexes very heavily on this, because you just want the best people for other reasons. Now, as it happens, I think customers do, in fact, value it. The evidence is broadly consistent with that. It's very hard to assemble groups of the best people, if you don't take the practice of the work super seriously.", "Dwarkesh Patel 01:05:46", "What kind of beauty or craft or simplicity is more important— interface or implementation?There's famously that essay that Unix is successful because the implementation is simple and not the interface.", "Patrick Collison 01:05:57", "The interface is kind of simple, but there's a lot of edge cases that I guess Unix doesn't handle for you.", "Dwarkesh Patel 01:06:06", "But Stripe does, right?", "Patrick Collison 01:06:09", "Presumably, it depends what you're building. For TikTok, it's more important that their interface is simple. Even if their implementation is a mess, that's probably OK. Not saying it is, I have no idea. Whereas for Stripe, people are, on some level, purchasing our architecture or purchasing their ability to do certain things rather than some different set of things, because of what our architecture makes easy and possible. If by interface you mean the GUI, then maybe we can draw some separation there. But we don't really draw that distinction. We think of the interface to Stripe as being the architecture.", "No one else seems to agree with me, but I often think of Stripe as similar to Mathematica , where we're selling a self-contained universe to model whatever it is of interest to you. We're providing some primitives, interfaces and tools and so forth, to enable your modeling. But fundamentally, we're helping you do something on your own terms. In that sense, I don't think the architecture and the interface are necessarily that separable.", "Dwarkesh Patel 01:07:31", "That's a really interesting analogy. Although, if you think of Mathematica, the entry that that's giving you to, is just the platonic objects of math, whereas for you guys, the entry is to Visa error codes . The end object is not platonic.", "Patrick Collison 01:07:49", "That's true. So yes,  the analogy falls down in a few respects. But look, the idea of a transaction is pretty fundamental and is roughly as old as the quadratic equation. I guess the transaction's older. And Mathematica now supports all kinds of crazy, arcane stuff, to a very impressive extent. If you go through the more obscure packages in Mathematica, you can definitely find things that are much less broadly employed and understood, even less than Visa error codes. But yes, these are not the same, It's just that I find it to be an interesting source of intuition. What Wolfram has done with Mathematica is pretty amazing.", "Dwarkesh Patel 01:08:36", "Another way, in which I'm curious how you think about this, is: one way in which Mathematica maybe differs is, if they had to make a change in Mathematica — \"Big deal, somebody has to learn new syntax\". If you make a change — billions of dollars of transactions don't happen. How does that change the way you think about the initial", "architecture and the stakes?", "Patrick Collison 01:09:00", "It's a good question. First a point on beauty with respect to architecture and then I'll answer that one. Just as a side note, it's interesting that API design in general doesn't get more study as a discipline and as a practice. It plays or can play a significant role in the fate of platforms. Not saying it is always the determinative thing. But if you get it right, there can be compounding positive benefits and the converse. It's really striking that, say with mobile app development, which was one of the most dynamic ecosystems of the past 10 or 15 years, so many of the objects and the classes, say in iOS development, are prefixed with NS. Less so now with Swift, but for much of the iPhone's history. The NS refers to Next app, back from Next in the 90s.  When you get API design and architecture right, it can be so enduring over literally multiple decades, even in the face of what are otherwise frenzied evolutions in everything around it.", "Unix is another example of this. Yes, Unix has tons of shortcomings, but the architecture has worked now for more than half a century.", "We're all trying to impress upon people at Stripe the importance of multi-decadal abstractions. People sometimes respond to that thinking that that's some insanely lofty, implausibly ambitious hyperbola, but no, that's what happens, when you get this stuff right.", "In fact, if you get it right, people building on your platform can reap incredible benefits for a very long time.", "To the Mathematica point, I know they take backwards compatibility really seriously, to the point where you can run programs written 20 years ago, unchanged, in today's Mathematica. That really raises the stakes in API design for sort of obvious reasons. We have that problem ourselves, where, when we think about introducing something new, it's not just: \"Does this exigently address the particular need that's motivating it today?\" — but: \"Do we think we can stand behind this in 2044? How do we think the world might evolve around us, such that it all remains coherent?\" We certainly don't always get that right, but that's, on some level, what we're trying to do.", "Financial innards", "Dwarkesh Patel 01:11:50", "Is Visa an example of this? One might even say, that one of the downsides of being able to use implementation for many decades in the future is, even if it's self-sustainable and you have this ecosystem in equilibrium set around it, if you can't modify it because of people's local incentives, you get stuck in this equilibrium that's worse than it could be otherwise.", "Patrick Collison 01:12:12", "I see. The card networks generally, Visa and MasterCard, are pretty good at equilibrium. It's easy to judge today with the world as it exists in 2024, but you have to look at the world as it was when they started out and the particular problems that they were solving. When you compare the financial landscape in the US or in the Western world to those in other places, it's certainly not clear to me that the US has gotten a bad hand, so to speak, or is somehow stuck in any meaningful way.", "The card networks do a couple of things. Originally, they were designed to replace store credit. It was a credit card originally, not a debit card, right? That was important. The availability of structured consumer credit is a pretty big deal and is beneficial, especially for lower income people. Then, with the advent of jet travel, mass market tourism and so forth, they helped to supplant travelers checks and various worse alternatives, like carrying cash around in your bag. Then, with the internet, they were substantially involved in enabling online transactions. The fact that they got the architecture so right, that so many different use cases were able to be addressed by their core design is really impressive. The guy who designed all this, Dee Hock , was a remarkable person.", "People complain about interchange . Lest I sound like a defender of the card ecosystem. You could look at it multiple ways, but many people would consider Stripe to be on the wrong side of the interchange cost equation, in the sense that we're giving away the interchange revenue to other companies. So I don't think I'm structurally biased in favor of interchange, and yet, I will say it's pretty interesting what interchange made possible. It is a distribution incentive fee, where you're paying other entities for recruiting customers, convincing them to get a card, getting them to maintain the card and to pay it off at the end of the month etc. So you're paying for that, just the pure distribution. There's a person at the end of the flight telling you: \"Hey, sign up for the United Credit Card!\" That's what interchange is paying for.", "Dwarkesh Patel 01:14:50", "That guy annoys me.", "Patrick Collison 01:14:53", "We'll get to the counterfactuals in a second. So there's paying for the actual credit issuance itself and then there's the customer support and all the ancillary things around the dispute handling and so forth.", "It is interesting to look at the cases where, for whatever contingent reason, the card networks didn't rise. Germany is one of the classic ones. From our vantage point, dealing with the online economy in Germany as compared to the US is so much worse. If Stripe could push a button and have really broadly adopted cards in Germany à la the US, we would push the hell out of that button. You can look at China, which on the one hand does have Alipay, WePay, WeChat payments, that are really ubiquitous — in that sense, they're very digitally enabled from a transactional standpoint. On the other hand, those products tend not to be as sophisticated with consumer credit. So yes, the transaction fees for transferring your money — that's super cheap, but you need to look at it on a fully loaded basis, where: \"OK, but what about the cost of actually getting the credit to make the purchase in the first place, as a credit card would enable?\" And as you look at these other counterfactuals in other places, one feels gratitude for what it is, that Dee Hock and Visa and MasterCard and the card networks made possible. I'm not saying they're perfect, that one can't make critiques, but I'm most interested in critiques from people who've really studied the ecosystems of other countries, because it's easy to underestimate what we got in their invention.", "Dwarkesh Patel 01:16:38", "Maybe there's a Chesterton’s fence kind of thing going on here. If you had to design payments from first principles now, does it make sense that all these things you've mentioned: taking on credit risk, the chance of fraud, dispute adjudication, should that cost 2% or 3% of each transaction that happens in the economy? What would payments look like if you had to design that from first principles?", "Patrick Collison 01:17:01", "We're seeing a live version of this experiment play out for the first time in many years in a number of countries today, where central banks are becoming more active in designing national payment schemes.", "PIX in Brazil launched in late 2020. I'm sure you've heard of UPI . UPI was the instigator in this process. It's the central bank payment system in India. And it was tied up with AadHaar and their national identity system and so on. That inspired a lot of central bankers in other countries to go and build their own UPIs. So PICS in Brazil launched in 2020 and now a significant majority of all Brazilian adults are weekly active users of PICS. Again, even though it launched in 2020. So it just had this incredibly rapid adoption curve. You have Swish in Sweden, There are examples across East Asia, Japan, Thailand, Switzerland. Central bank after central bank is deciding: \"Hey, we should have our version of this.\" This is a kind of reinvention of the payment system from scratch.", "For some weird reasons hard to understand, once you layer in the customer support, consumer protection, fraud prevention, anti-money laundering controls and the credit, things seem to asymptote at around 2% or 3%. It's important to also note that beyond just covering the costs, much of it ends up getting remitted to consumers in the form of rewards, not in every country, but in many countries. If you look at the public reports", "from various banks in the US, their interchange revenue, where they're getting these delicious fees on every transaction, as you put it, a lot of that is going straight back out the door to the consumers. So it's not clear how exactly one should think about economics. If it's going back to the consumer, should you include that as a transaction tax", "or is it just like a weird circular relationship? I've not seen any evidence to suggest that the 2% or thereabouts is massively inefficient in the scheme of things. I'm not saying it's the optimal level— maybe 1% would be better, but within some range of 1% to 3%, it's probably reasonable.", "As we think about these ad valorem fees and figures , the place where there's even more change at the moment that we find ourselves thinking more about is the changing structure of global tax. There's been a reasonable amount of innovation in the tax domain over the last century: income taxes got pretty high, then we added value taxes, and so on. The new thing, at least in the online context, is jurisdictions remitting or imposing sales taxes on businesses that don't have any \"locus\" in the jurisdiction in question.", "So if you're a podcaster in the Bay Area, hypothetically \"Dwarkesh merch store\" will have to pay the town of Uppsala in Sweden, which will have a special tax on baseball caps. And you will need to know about that particular tax on baseball caps. And for any baseball caps that you are selling to the Uppsalians, you'll have to collect that amount from the buyer, report to Uppsala, and then eventually figure out how you're going to get that money to Uppsala. Obviously, it's this combinatorial problem of buyer jurisdictions and product types, and then all the different jurisdictions that you have to remit the money to.", "As to those amounts, we're not talking three basis points — the taxes in question are often 5% or 10%, so it's not trivial. As I think about the funds flows on the internet and how all that's evolving and unfolding, I think changes in tax law are actually a much bigger deal than anything about the transactional economics.", "Dwarkesh Patel 01:21:37", "By the way, it's not the Dwarkesh podcast, it's Lunar Society Podcast LLC registered on Stripe Atlas . Any merchandise I sell in the future, Stripe will take care of that.", "Patrick Collison 01:21:47", "Yes, OK. If anyone has Stripe complaints.", "Dwarkesh Patel 01:21:49", "No, it's great, It has been super useful, honestly. It would have been much more difficult to get business operations going.", "Patrick Collison 01:21:55", "Sorry, I know you're supposed to be interviewing me, but did Stripe play any, even on the margins, counterfactual role in you charging for anything? This is the thing we're always interested in. When we talk about growing the GDP of the internet, it's not like: \"Get the existing GDP onto our rails\", — it's sort of: \"Where on the margin can we cause there to be economic activity that isn't already occurring?\" So yeah, you did, in fact, start the podcast before incorporating, but were we causal in any fashion in the merch or anything of that nature?", "Dwarkesh Patel 01:22:30", "To the extent that Substack would not be a convenient place to get payments from to begin with, that's definitely a thing. And also...", "Patrick Collison 01:22:40", "You wouldn't charge for the newsletter if Substack hadn't made it super easy?", "Dwarkesh Patel 01:22:43", "Yeah. And also, if I do an ad, I wouldn't even know how to begin with getting the money, if I didn't already have an LLC through Stripe with the dissociative bank account, that I'm going to get the money through. So yeah, probably counterfactually responsible for a lot of the monetization.", "Patrick Collison 01:22:59", "That's cool.", "Dwarkesh Patel 01:23:00", "Appreciate it. What are some unexpected complements to payment processing you see in the future? All this stuff: Atlas, identity fraud, detection — in retrospect, it might not have been obvious back then there was a good complement, but now it does seem that way. What would be like this in five, 10 years?", "Patrick Collison 01:23:26", "Honestly, our problem ends up being that more things then we could possibly pursue, look like complements. In that every business almost by definition has revenue we obviously want to help them generate, accept, manage and orchestrate everything pertaining to that revenue. But once you're in that flow and you go through the steps of running a business, a lot else looks relevant and somehow connects quite directly.", "When Stripe started out, it definitely wasn't cool. It was the opposite, it was just a couple of us and we thought that we could make this superior payments API. For the vast majority of its history, Stripe has attracted people who are drawn to unglamorous infrastructure challenges and problems. We're not a company that specializes in making beautiful cars— we make roads. I bring all of that up, because it's relevant to this complement question, where in our discussions internally, a lot of it, probably the significant majority of it, is still about: \"OK, where are there practical shortcomings and limitations in even our core bread and butter?\" Payment processing might be a slightly too limited term to use for us. It's more about global programmable money orchestration, which, yes, is consumer to business payments, the sort that we were just discussing in, say, the context of your Substack. But it's also business to business payments, payments where there's credit or lending involved. It's also how you hold money, how you convert money between different currencies. It's how you represent money that's held by different legal entities and how we make it possible for even individuals or small businesses to act as micro multinationals.", "But those problems that we just skimmed over, even though they all directly pertain to the movement of money, they're not small. If we could just solve those really effectively, then Stripe will be a very consequential organization and force in the world. The counterfactual importance of building some of this stuff, as we go to newer markets that are, on a relative basis, more poorly served, is increasing rather than shrinking. In the US, there were payments companies before Stripe and maybe, if Stripe had never done its thing, eventually you'd have found some way to monetize a newsletter or something like that. But if you're in Albania, the set of options available to you is far more restricted. The marginal impact as we expand globally increases quite a bit. Even though we are interested in and do pursue some of these direct adjacencies today, the core problem of global money orchestration remains really big and unsolved.", "Dwarkesh Patel 01:27:02", "Does that look like being a better interface for all these complexities and glossing them over under the seven lines of code? Or does that look like replacing the rails and the infrastructure to make all this more efficient and effective?", "Patrick Collison 01:27:15", "The former. It's not that useful to build financial ecosystems, that are self-contained. A financial island is not that helpful. It's much more valuable to build a financial, this is mixing metaphors, but a financial \"air network\". We would much prefer that Stripe plugged", "into every existing system, rail, domestic organization, rather than that we tried to come along and supplant them. And this has been Stripe's strategy very deliberately from the beginning, where there were lots of companies, when Stripe started out, that were trying to do their own thing and go their own way, whereas our belief was: you get these classic Metcalfe's law stuff —  by enhancing the capabilities of an existing ecosystem, you create quite a bit more value.", "Stripe culture & future", "Dwarkesh Patel 01:28:15", "OK, let's go back to Stripe. Is Stripe a writing culture for the benefit of the writer or the reader?", "Patrick Collison 01:28:21", "It can be both.", "Dwarkesh Patel 01:28:22", "But which one is the more so?", "Patrick Collison 01:28:24", "There are really considerable benefits on both sides, because for the reader, it's not just that it's maybe more efficient to communicate stuff through text, though in many cases it is, but also there's intertemporal benefit, where future readers can try to understand the through line and the thought process that led us to this point. And that's very considerable.", "But it's also true that I and lots of people write things in order to organize one's own thoughts. If that ability was taken away from me, I'd be meaningfully less effective. How exactly those bounce out is hard to say.", "They're not actually separable. That's my answer. Literate cultures are just a different thing. I don't mean literate in some kind of faux intellectual way, textual cultures is a better term here.", "Bruno Latour spoke about how he thinks the printing revolution, like Gutenberg's, partially caused the scientific revolution by making knowledge more rigid. Before, if some observation didn't match some claim, you could always shrug and be like: \"Well, the person who transcribed that thing made a mistake.\" So by making things more rigid, it's easier to break them. Then you can notice discrepancies between the theory and the reality. There's some version of that organizationally, and I'm not drawing a precise parallel, but there are analogous dynamics, where the nature of oral cultures and textual cultures are just quite different. The kinds of collaboration that are possible, kinds of consistency, that can be achieved, are just fundamentally different. Is the front or rear wheel of the bicycle more valuable? Theoretically, you can be a unicycle, but as a practical matter, you do just need both.", "Dwarkesh Patel 01:30:43", "I know I said no more AI questions, but on this particular point, it seems very legitimate to me, that you might expect firms, that have a lot of writing, to be the first to experience the productivity gains of AI, because in other cases there's all this context that the model doesn't have readily available. I don't know if that's something you anticipate.", "Patrick Collison 01:31:01", "That's probably true. Yeah, I don't know. If the model is really good, maybe it's able to pick stuff up quickly. Most organizations are not recording all of their meetings for a variety of reasons, and if they're not, then there is this question of: \"What is the corpus? How do you get up to speed?\" So yeah, my guess is that will be true.", "Dwarkesh Patel 01:31:21", "Tell me about the internal LLM you built.", "Patrick Collison 01:31:23", "Oh, we didn't build an internal LLM, we built an internal LLM tool for making it very easy for people to integrate LLMs into production services, but also into their regular workflows as humans. We added the ability to work directly with the LLM, as a standard chat agent, as lots of people did, but then also to integrate that with some of our tools for querying and accessing data, most interestingly, we added sharing prompts across different people, so that somebody might discover these prompts. One of my favorite examples is: somebody put together a prompt for optimizing SQL queries. It doesn't always work, but sometimes it does. It's very cheap to ask us: \"Got any ideas for optimizing the SQL query?\" And sometimes it will come up with some good stuff. So the collaborative abilities there have proven surprisingly high return. And then having, lots of organizations have this — we're not claiming that it's very novel — but having a central bus through which to route all access to these LLMs, in such way, that we can experiment with different models and have some degree of observability into the respective performance trends and the usage of different cases. We have found building a fairly significant amount of production infrastructure around LLMs to be valuable. And now, given the proliferation of LLMs themselves, with all of the obvious contenders, this is proving quite valuable, because we're able to try to figure out for different use cases which models: self-optimized or who knows which, are most effective.", "I don't know what the total number of invocations is, but I think we're making millions of invocations per day now. There are dozens of dozens of actual production use cases across Stripe. The financial services ecosystem is, in some way, a giant analog to digital exercise, because humans, intentions, identities are analog — all these things have some degree of uncertainty around them and some noise. But then transactions are digital, right? And we often find in these analog to digital conversions, that LLMs can be a surprisingly interesting augmenting tool.", "Dwarkesh Patel 01:34:16", "On that point about the flexibility and the edge cases in the way humans interact with these systems: in some sense, Stripe is a really high stakes bug bounty program, right? If somebody hacks it, if there's reliability issues, not just because of a hack, but because you deployed the wrong way, not only the financial services— obviously, money's in play— but a significant percentage of rural GDP would grind to a halt, at least while it's down. How do you deal with that kind of responsibility? How do you keep the uptime and keep the reliability while deploying fast?", "Patrick Collison 01:34:58", "This is one of the things we've spent the most time on. Back to this point about wanting to be the place with the best people and the value of focusing on craft, so that you can have the best people. In the context of software development, two things developers hate: slow development cycles: it'll ship in the next release in a month and that kind of thinking. Developers also hate being paged at 2 AM for incidents. So, given the criticality of the businesses that we serve, which is, in rough terms, 1% of the global economy — it's not totally clear how to measure this, because GDP is defined as final goods and Stripe is not only selling final goods, so, in theory, there could be a bit of double counting. But Stripe is mostly selling final goods. We're not used, by and large, for giant supply chain shipments. Maybe there's a mismeasurement of 10% or 20% or something. But long story short, I think it works out to about 1% of global GDP. It's about a trillion dollars a year. As you say, that then makes us really terrified of outages.", "And so we work so hard to enable fast iteration and development cycles without having outages, and to put some numbers on it: we deploy production services that are in the core charge flow around 1,000 times a day. Most of these services are automatically deployed, so when anybody makes any production-ready change, it just goes into production. It's meticulously and carefully orchestrated: first is just running some small sliver of traffic and then incrementally more traffic until it's everything. So about 1,000 deploys per day at roughly or somewhat in excess of five-and-a-half-nines — 99.9995% reliability — which works out to about two, two and a half minutes of unavailability per year.", "It's not that we have, obviously, two and a half continuous minutes of unavailability, but that's what it approximates to, even though it tends to happen as background radiation throughout the year. Getting to that point takes a huge amount of investment. Then there are security properties that are less readily measured, but analogous to those figures. Silicon Valley doesn't tend to... I'm perhaps now being unfair in attributing things to Silicon Valley — a lot of the tech industry doesn't place a lot of value on process and operational excellence. We culturally value the spontaneous, the creative, the iconoclastic, the path-breaking. Building mechanisms that can enable the very reliable provision of important services at scale, and removing the sources of variability, that can really cause a bad day for a very large number of people — I don't think these things get quite as much cultural credit.", "None of this sounds like rocket science, but defining what it is, that we care about, and then building automated measuring systems to measure to what degree it's happening in practice, to then try to figure out the cases where we're not living up to that, and determine what is the reason, then to actually intervene and improve the system, so that that's not happening, then importantly, to build secondary controls, that detect instances of deviation long before they cause a production problem, but where we understand the behavior of the system in sufficient detail, so that we can instrument it in some upstream way — most of what I said there was well understood by production engineers in 1930s.", "So again, I'm not claiming that it's any kind of radical breakthrough, but we have found that the adoption of these practices in really tenacious multi-year form yields really high returns. There may be other organizations that both ship at that rate and maintain that developer velocity at this combination of scale and reliability and security, but I don't think there are that many. It's a real testament to the remarkable folks at Stripe who made it happen.", "Dwarkesh Patel 01:40:13", "Last point, the fact that you have this huge internal tooling and testing is... Once you get the AI engineers, they can push the commits and you have the infrastructure set up, so that it can be readily evaluated.", "Patrick Collison 01:40:28", "Yeah. Across the board, so much comes back to what has to be true for us to be able to build and to take seriously this goal of building the best software. It's easy to say that as some lofty, vague, hand wavy aspirational statement. But if you take that seriously as a goal — you think about what you would have to measure, if you were actually going to pursue it in earnest? — part are the characteristics of organizations that do produce it. You get down to: \"Well, customers have to really like your stuff. OK, how can we measure that? And how can we systematize the process of making sure that there aren't repressions there?\"", "We have this concept of “experience journeys”, which are pathways through Stripe, that we really care about and that are always implemented at a really high quality level. And it has to be true that developers can iterate over them very quickly — we just spoke about how to make that happen etc.", "A theme through everything we've talked about is taking the goal seriously. And I feel like a lot of what we do at Stripe is — again, I disclaim any genius in it —  just the very earnest, repeated, serious, and long term application of taking the goal seriously.", "Virtues of big businesses", "Dwarkesh Patel 01:41:54", "A few more Stripe questions. One percent of global GDP is such a staggering number. When you think about where further growth for Stripe comes from, does it come from the internet economy expanding? Or does it come from Stripe becoming a larger share of the internet economy? And to the extent that Stripe is growing faster than the internet, if we consider that to be the beta in your case, where is that alpha coming from?", "Patrick Collison 01:42:20", "That's a good question. The customers that Stripe serves in aggregate are outgrowing the internet economy as a whole. At some point, those have to converge for obvious mathematical reasons. But we're 14 years in and they haven't converged yet. There's a lot of headroom there. Say Stripe is handling around a trillion dollars a year. When Stripe started out, the global economy was sixty to seventy trillion-ish. The global economy is now around a hundred trillion. We still have quite a bit of headroom before the amount of activity that is coming out to Stripe is really butting up against the ceiling of global economic growth. And of course, there's no ceiling on global economic growth for all sorts of reasons. It could be vastly higher than it is. And I don't even mean new technologies or AIs, but just all the basic per-capita math you can do around. What if everybody had an income on par with the US?", "One of the reasons I am so interested in working on Stripe is: it's the old line , the Lucas line, about how when you start thinking about differential rates of development in countries, it's hard to think about anything else. Why does Brazil have the particular income and GDP level that it does? Why does Poland have the level that it does? Why did Ireland have the trajectory that it did, where it went from being the sick man of Europe to now one of the wealthiest countries there? I feel like Stripe is some applied version of this question in practice, where you're building software products, but in some sense connected to or touching upon these questions of: Why aren't there more companies? What determines the growth rate of a company? Why is it that when you start the merch store, why does it have X level of buyers rather than 2X? I think those remain fruitful questions.", "We haven't optimized the meta system of business to any particularly great extent. For the vast majority of time, businesses have been offline, inefficient, analog. It's really only over the last one to two decades that a significant share of this has been meaningfully digitized. And the prospects for optimizations there are still significantly underexplored.", "We find incredibly basic things, like 'just extending capital to businesses'. The reason we do that is not to generate profit from the loans, but because we find that the businesses, whom we extend the capital for, then just grow faster on a persistent subsequent basis.", "Or, trying to figure out, how does a business decide which countries it sells in? And you'll find for even the smallest business through to some of the largest businesses in the world, that these are very ad hoc and not deeply thought through questions. Like: \"Why don't you sell in Mexico or in Brazil or whatever? — Well, it seemed complicated, and so we didn't quite get around to it.\"", "To your question about: where does the growth come from? There's still an awful lot of low-hanging fruit in just asking some of these incredibly basic questions.", "Patrick Collison 01:46:06", "So, when we think about the way in which Stripe will continue to grow in the future, in some sense, it will obviously involve a lot of big businesses. You're now processing", "a significant amount of Amazon volume, there are these other businesses you're doing deals with. First, tell me how you think. It makes sense how an exponentially growing startup would contribute to exponential growth for Stripe. How does Stripe keep growing at the same trajectory, when it's these existing big businesses that you're partnering with? And second, the case for why these startups matter is so compelling, right? A new thing is coming into this world, and we should really support it and make sure it happens. Why is it compelling that Amazon can fulfill orders more efficiently?", "Dwarkesh Patel 01:46:50", "Those are very good questions. On the first one — you're right. Stripe is doomed to eventually grow at the rate of the economy, there is only a question of how long it takes to get there.", "Patrick Collison 01:47:14", "Right.", "Dwarkesh Patel 01:47:15", "The good news is that it can be a very long time, because there is, as we just discussed, so much low hanging fruit around different improvements that are possible. So I think it'll be many decades before that happens. But it's true: that will eventually occur.", "On the second question: \"It's obviously virtuous or compelling or exciting to foster all these nascent startups and to be an anti-incumbency force. But what's the case for supporting established businesses?\" People misunderstand that for a small business, typically, at least in the cases where we denote them startups, there's usually an embedded innovation. And this innovation is all that the company is. They have a new idea, and they're going to do something better or different etc. Generally speaking, we like innovation and so we have positive sentiments towards that startup. But there's a lot of innovation that comes from large established businesses . That's not all they do, they are also just running the existing thing. So maybe it's a smaller share, but the aggregate fraction of innovation that comes from established businesses is really large. We have to be cognizant of the cognitive bias, of the startups being more conspicuous. On a relative basis, the improvements in turbine, fab or insulation technology come largely from established businesses. To choose any sector of the economy, a significant fraction of the important inventions that occurred over the last 10 or 20 years will have come from the incumbents.", "As a general class, and Tyler wrote a book on this, big business is underrated. If you look at the survey data, people tend to have very positive sentiments not only towards startups, but towards small business as a class. Even though they have negative sentiments or relatively negative sentiments towards big business, it is not that bad on an absolute basis, but not as favorable. It's true that established businesses tend to pay better, tend to be more efficient, more of the innovation in our economy comes from them and they produce a lot of consumer surplus.", "The specific case for Stripe working with them is: typically they're coming to us not because they want to take the thing that they're already doing and go through all the work of transposing to Stripe, but because either they want to do a new thing, that they're not doing today — so it is associated with some new business line or innovation or invention. Or they've spotted the opportunity to produce a new product and want to meaningfully change how they provide an existing one in a fashion that, again, yields consumer surplus. That sounds very abstract and theoretical, but in practice, it tends to mean they want to take what they're selling in this market and sell it in many more markets. Or they've realized that they're selling it in this modality, and they should sell it in other more convenient ways, like on mobile or something. In each of those cases, if it's successful, if people buy it in significant numbers, we're getting this decentralized signal from the economy, that there's now something of value being provided, that wasn't heretofore.", "As I take stock of the businesses, the enterprises that are in the process of migrating to Stripe or that did so over the last year, whether it's the large retailers, global manufacturing firms or shipping companies, it typically has one of those two patterns. New product or current product sold to people who weren't buying it before.", "Dwarkesh Patel 01:51:21", "Yeah. If you think about the big trends in society that are needed to solve our big problems, like Moore's Law or the cost of solar, you have marginal improvements over many decades.", "Patrick Collison 01:51:30", "Yes.", "Dwarkesh Patel 01:51:31", "Big tech or big companies are just able to invest a lot of money into doing the R&D.", "Patrick Collison 01:51:37", "Relentless iterative improvement, yes. It's underrated.", "Dwarkesh Patel 01:51:39", "Can I ask about John for a second?", "Patrick Collison 01:51:41", "Sure.", "John", "Dwarkesh Patel 01:51:42", "You guys recently published Poor Charlie's Almanac and subsequently, Charlie Munger has passed away. Did Munger ever comment on your relationship and if or whether it reminded him of his and Buffett's?", "Patrick Collison 01:51:56", "Not to me, but he knew John better. So it's possible that he did to John. Yeah, I don't know.", "Dwarkesh Patel 01:52:07", "What have you learned about marriage from John? This co-equal, intense, lengthy partnership — the closest thing to that you have is marriage, right?", "Patrick Collison 01:52:17", "Well, I'm relatively new to the practice of marriage. So maybe in a decade I'll be able to extract the generalizable commonalities. The general thing I'd say is: working with people you're close to is underrated. I'm doing Arc with Patrick Su and Silvana. Fast Grants was with Tyler and Silvana. Stripe is obviously with John. I should mention, John was also instrumentally involved in Arc's formation. It would not have happened without John. I could give more examples, but I feel like all the ventures of any significance in my life, have not only been with others, but been with other people that I'm very close to. I had and would like to have an enduring relationship that outlives these ventures.", "Sometimes one hears the advice that you shouldn't work with friends, maybe you shouldn't work with your partner or something like that. All these things are idiosyncratic and there are instances of every possible permutation. But for me, it's been a really rewarding experience. And I think John and I can work together for...  You never know life, but I think we'll probably work together for decades. For us, it's been both an important source of meaning and, again, fulfillment, but also there's a real complementarity. Stripe would be a less effective company without either of us. And I'm just meaning from a bandwidth standpoint, but I think we both bring different things to bear.", "Dwarkesh Patel 01:54:12", "Patrick, I think that's a great place to leave it. Thank you so much for coming on the podcast.", "Patrick Collison 01:54:16", "Thank you.", "Dwarkesh Patel 01:54:17", "Hey, everybody! I hope you enjoyed that episode. As always, the most helpful thing you can do is to share the podcast. Send it to people you think might enjoy it. Put it on Twitter , your group chats, etc. It just splits the world.", "Appreciate your listening. I'll see you next time.", "Cheers." ]
[ "https://patrickcollison.com/about", "https://stripe.com/en-sk", "https://patrickcollison.com/advice", "https://www.gene.com/", "https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/recombinant-dna#:~:text=Recombinant%20DNA%20is%20the%20method,or%20close%20to%2C%20that%20sequence.", "https://vcresearch.berkeley.edu/faculty/patrick-hsu", "https://arcinstitute.org/", "https://arcinstitute.org/manuscripts/BridgeRNA_Manuscript.pdf", "https://www.sciencedirect.com/topics/neuroscience/recombinase", "https://scrollprize.org/", "https://forum.effectivealtruism.org/posts/xBBXf7KXZCKHYBxeZ/patrick-collison-on-effective-altruism", "https://michaelnielsen.org/", "https://theportal.wiki/wiki/Reflexive_Contrarianism", "https://en.wikipedia.org/wiki/Sign_bit", "https://arcinstitute.org/about", "https://www.google.com/search?q=auto+didacts&oq=auto+didacts&gs_lcrp=EgZjaHJvbWUyBggAEEUYOdIBBzI3OGowajGoAgCwAgA&sourceid=chrome&ie=UTF-8", "https://gwern.net/doc/science/1986-kanigel-apprenticetogeniusthemakingofascientificdynasty.pdf", "https://parker.org/", "https://www.nih.gov/", "https://www.thenewatlantis.com/publications/the-science-before-the-war", "https://dictionary.cambridge.org/dictionary/english/epiphenomenon", "https://fastgrants.org/", "https://en.wikipedia.org/wiki/John_von_Neumann", "https://en.wikipedia.org/wiki/Gerty_Cori", "https://en.wikipedia.org/wiki/Carl_Ferdinand_Cori", "https://biochem.wustl.edu/archives/carl-and-gerty-cori", "https://en.wikipedia.org/wiki/Jennifer_Doudna", "https://www.darpa.mil/", "https://en.wikipedia.org/wiki/CRISPR", "https://en.wikipedia.org/wiki/Katalin_Karik%C3%B3", "https://en.wikipedia.org/wiki/MRNA_vaccine", "https://en.wikipedia.org/wiki/Sickle_cell_disease", "https://en.wikipedia.org/wiki/Huntington%27s_disease", "https://en.wikipedia.org/wiki/Large_language_model", "https://tylercowen.com/", "https://en.wikipedia.org/wiki/Silvana_Konermann", "https://www.gao.gov/products/gao-21-319", "https://en.wikipedia.org/wiki/Moncef_Slaoui", "https://en.wikipedia.org/wiki/COVID-19", "https://www.cdc.gov/index.htm", "https://www.fda.gov/", "https://www.nih.gov/", "https://www.nsf.org/", "https://www.investopedia.com/terms/w/wide-economic-moat.asp#:~:text=An%20economic%20moat%20is%20a,against%20competition%20from%20other%20firms.", "https://www.linkedin.com/in/hamilton-helmer-42983", "https://books.google.sk/books/about/7_Powers.html?id=heEuvgAACAAJ&redir_esc=y", "https://en.wikipedia.org/wiki/Robert_Conquest#:~:text=On%2014%20February%202003%2C%20Andrew,reactionary%20about%20subjects%20he%20understands'.", "https://www.investopedia.com/stakeholder-capitalism-4774323", "https://edition.cnn.com/2023/03/14/health/novo-nordisk-insulin-prices/index.html", "https://en.wikipedia.org/wiki/GLP-1_receptor_agonist", "https://www.investopedia.com/articles/investing/011416/exportled-growth-strategies-through-history.asp", "https://www.investopedia.com/updates/adam-smith-wealth-of-nations/#:~:text=Smith%20introduced%20the%20concept%20that,free%20trade%2C%20domestically%20and%20abroad.", "https://www.investopedia.com/terms/d/david-ricardo.asp", "https://www.britannica.com/biography/Friedrich-List", "https://stripe.com/en-sk/jobs/culture", "https://www.investopedia.com/terms/s/says-law.asp", "https://www.bloomberg.com/news/articles/2022-04-12/stripe-alphabet-meta-join-to-fund-carbon-removal?sref=JMv1OWqN", "https://fiftrustee.worldbank.org/en/about/unit/dfi/fiftrustee/fund-detail/amc", "https://www.lvmh.com/", "https://www.tsmc.com/english", "https://www.investopedia.com/terms/p/plstatement.asp#:~:text=The%20profit%20and%20loss%20(P%26L,and%20the%20cash%20flow%20statement.", "https://www.wolfram.com/mathematica/", "https://developer.visa.com/capabilities/vpp/docs-error-codes", "https://www.deewhock.com/", "https://www.investopedia.com/terms/i/interchange-rate.asp", "https://en.wiktionary.org/wiki/Chesterton%27s_fence#:~:text=Chesterton's%20fence%20(uncountable),state%20of%20affairs%20is%20understood.", "https://www.bcb.gov.br/en/financialstability/pix_en", "https://www.npci.org.in/what-we-do/upi/product-overview", "https://en.wikipedia.org/wiki/Aadhaar", "https://www.swish.nu/", "https://www.investopedia.com/terms/a/advaloremtax.asp", "https://www.investopedia.com/ask/answers/what-basis-point-bps/#:~:text=Key%20Takeaways-,Basis%20points%2C%20otherwise%20known%20as%20bps%20or%20%22bips%2C%22,or%200.0001%20in%20decimal%20form.", "https://lunarsociety.org.uk/", "https://stripe.com/en-sk/atlas", "https://substack.com/", "http://www.bruno-latour.fr/", "https://people.bu.edu/chamley/HSFref/Lucas-citiesJME88.pdf", "https://en.wikipedia.org/wiki/Robert_Lucas_Jr.", "https://mm1.com/en/about-us/publications/recycle/study-dax-30-startup-and-innovation-monitor-update-2019/", "https://tylercowen.com/", "https://en.wikipedia.org/wiki/John_Collison", "https://www.stripe.press/poor-charlies-almanack", "https://en.wikipedia.org/wiki/Charlie_Munger", "https://twitter.com/dwarkesh_sp" ]
https://www.dwarkesh.com/p/patrick-mckenzie
Patrick McKenzie - How a Discord Server Saved Thousands of Lives
[ "00:00:00 – Why hackers on Discord had to save thousands of lives", "Dwarkesh Patel 00:00:00", "Today, I'm chatting with Patrick McKenzie . He is known for many things on the Internet. He's known as patio11 . Most recently he ran VaccinateCA , which probably saved a high four-figure number of lives during COVID. He also writes an excellent newsletter called Bits about Money .", "Patrick, welcome to the podcast.", "Patrick McKenzie 00:00:18", "Thanks very much for having me.", "Dwarkesh Patel 00:00:19", "VaccinateCA has a lot of important lessons about what parts of our institutions are fucked up. Before we get into what's fucked up, what was VaccinateCA?", "Patrick McKenzie 00:00:30", "In early 2021, we were quite concerned that people were making 20, 40, 60 phone calls to try to find a pharmacy that actually had a dose of the COVID vaccine in stock and could successfully deliver it to them.", "I tweeted out randomly that it was insane. Every person or every caregiver is attempting to contact every medical provider in the state of California to find doses of the vaccine. California clearly has at least one person capable of building a website where you can centralize that information and send everybody to the website. I said, “if you build that website, I’ll pay for the server bill or whatever.”", "Karl Yang took up the gauntlet and invited 10 of his best friends and basically said “all right, get in, guys. We're going to open source the availability of the vaccine in California by tomorrow morning.” This is at like 10:00 p.m. at night, California time. I lurked down into the Discord and gave a few pointers on making scaled calling operations. One thing led to another and I ended up becoming the CEO of this initiative.", "At the start, it was just this hackathon project of a bunch of random tech people who thought, “hey, we can build a website, make some phone calls, maybe help some people find the vaccine at the margin.”", "It grew a little bit from there. We essentially ended up becoming the public-private partnership that was the clearinghouse for vaccine location information for the United States of America. That felt a little weird at the time and continues to.", "Dwarkesh Patel 00:01:48", "Here’s the obvious question. Why was this something that people randomly picked up on a discord server?", "Why wasn't this an initiative by an entity delegated by the government? Why didn’t the White House or the pharmacies have a website where you can just sign up for an appointment?", "Patrick McKenzie 00:02:07", "There are so many reasons and a whole lot of finger-pointing going on.", "One of the issues was that almost no actors in the system said, \"yes, this is definitely my responsibility.\" Various parts of our nation's institutions — county-level public health departments, governors' offices, and two presidencies — all said, \"I have a narrow part to play in this, but someone else has to do the hard work of actually putting shots in people's arms. Clearly someone else is dealing with the logistics problem, right?\"", "The ball was dropped comprehensively. No one at the time really had a plan for picking it up. No one felt it was incentive-compatible for them specifically to pick it up right now. “It would have been great if someone could do this, but just not me.”", "Dwarkesh Patel 00:02:54", "The context was that it was very important that people get vaccines at the time. These delays mattered. You can account for it in the number of lives saved. You can even look at how much the stock market moved when vaccine news was announced.", "It was clear it was worth trillions of dollars to the economy that the vaccine be delivered on time. It should have been priority #1 that people know where the vaccine is. It raises a meta question. Why did I hear about this problem for the first time when I read your article ?", "There's been a lot of controversy after COVID about people pointing fingers about masks or different protocols. Why was this not a bigger issue? Why were people not getting called in front of Congress about our inability to deliver the one thing needed to arrest the pandemic as fast as possible?", "Patrick McKenzie 00:03:47", "I wrote 27,000 words on this in my article in Works in Progress called \"The Story of VaccinateCA” which goes into some of the nitty-gritty. Broadly, it's a matter of incentives more than people choosing to do evil things. Although we did choose to do evil things  and we can probe on that if you want to.", "When Obamacare was first debuting, the federal government institutionally learned one wrong lesson from the Healthcare.gov rollout , which would have terrible consequences. Many actors in the federal government and political parties came away thinking that a president can doom the legacy of their signature initiative if \"those bleeping tech folks don't get their bleeping act together.\"", "As a result, the United States has decided that virtually nothing — up to and including the potential of national annihilation — will cause us to actually put our chips behind solving a software problem. That's somebody else's problem, someone who doesn't have to deal with an electoral mandate or getting called in front of Congress. It’s somebody else’s problem.", "Unfortunately, software is eating the world. Delivering competence in the modern world requires being competent at software. The United States will tell you differently. There are wonderful people in the government attempting to change this. However, broadly speaking on an institutional level, the United States federal government has abdicated software as a core responsibility of the government.", "Dwarkesh Patel 00:05:15", "I understand why they didn't initially want to pursue this project. I still don't understand why after everything went down, it isn’t more of a news item that this was a problem that was not solved?", "Patrick McKenzie 00:05:29", "We're memory-holing a lot of things that happened in the pandemic. I wish we wouldn't. It’s partly because of political incentives and because we're approaching an election year. It’s also because of the quirky way that American parties and candidates bounce off each other.", "It's in no one's real incentive to say, \"okay, I would like to relitigate the mask issue for a moment.\" We told people that masks don't block airborne viruses . We were quite confident of that. The entire news media backed us up on it. Then we 180’d a month later. No one wants to relitigate that.", "No one wants to relitigate that California imposed redlining in the provision of medical care. That was wrong and evil. However, the party that was pro-redlining does not normally like saying that it is pro-redlining. The other party does not really consider that a hugely salient issue.", "There are no debates and no one is asking Governor Newsom , \"you got on TV and said that you were doing geofencing for the provision of medical care. Geofencing in that context was the same as redlining. Can you explain your support for redlining?\" No one has asked Newsom that question. Maybe someone should. We're surrounded by the effects of incentives and the effects of iterated games. Sometimes they don't play out the way we would ideally like them to play out.", "Dwarkesh Patel 00:06:52", "I still don't feel like I really understand. Is it that everybody has blood on their hands? That’s still confusing.", "If you lost a war, you don’t just brush it aside. The generals would have to come up in front of Congress and be like, \"what happened? why didn't we get that battlefield?\" Actually, we just lost a war. Maybe that didn't happen.", "Patrick McKenzie 00:07:19", "If one goes over the history of military conflicts, I don't know how many losers on either side of the conflict ever actually did that reckoning of \"hey, could we attempt to win in the future?\"", "There was a broad lack of seriousness across many trusted institutions in American society — in the government, in civil society, in the tech industry — about really approaching this like a problem we want to win. A wonderful thing about our country and our institutions is that on things that are truly important to us, we win outlandishly because we are a rich and powerful nation. Yet this was obviously a thing where we should have decided to win. We fundamentally did not approach it as a problem that we needed to win on.", "Dwarkesh Patel 00:07:58", "Let’s go back to the object level here. Instead of different people calling different pharmacies and asking whether they have the vaccine, people who have not read your article would assume that either some company or the government would build a platform like this. You explained why the government didn't do it. The pharmacies might build a platform like this.", "I want to meditate on the incentives that prevented random big tech companies or Walgreens from building this themselves. Can you explain that?", "Patrick McKenzie 00:08:31", "With the federal government and the state government, the American governmental system is quite complex. There were multiple distinct supply chains, with multiple distinct technological systems, tracking where these vials were headed all over the country. There were many attempts at various levels of the government to say, \"hey, can we commission a consultancy to build a magical IT solution that will get these databases to talk to each other?\" Those largely failed for the usual reasons that government software procurement projects fail.", "Why didn't tech build it? I'm constrained on what I can say and cannot say. I know a little more than this answer. I will give you part of the answer. The tech industry — both at the level of AppAmaGooFaSoft, which is my funny sardonic way to refer to some of the most powerful institutions in the world, and many other places that hire many smart engineers who can build the world's least impressive inventory tracking system — felt political pressure in the wake of the January 6th events in the United States.", "This is another thing that's gone down the rabbit hole. In the immediate wake of the January 6th events, people in positions of authority very clearly tried to lay that at the feet of the tech companies. Internally, the tech companies have policy teams, the teams that are supposed to make the company legible to the government and avoid government yanking permission to do business. Those teams, their communications teams, PR departments, they told everyone in the company, \"mission number one right now, do not get in the newspaper for any reason. We are putting our heads down.\"", "A thing that might not be obvious to most people in the world is that AppAmaGooBookSoft literally have teams of people whose job is public health because they are the operating system of the world right now. The operating system of the world needs public health care. Those teams said, \"hey, we've got this thing\" and other people in the company might have overruled them and said, \"it would be really bad right now to have the tech industry saying we're better at the government's job than the government is. So shut that down.\"", "Dwarkesh Patel 00:10:38", "That's so insane.", "Patrick McKenzie 00:10:41", "It absolutely is. The local incentives make sense in the meeting when you're saying it. You are not in that meeting projecting, \"I'm going to cause tens of thousands of people to die at the margin by making this call.\" Yet that call was made.", "Dwarkesh Patel 00:10:57", "There are two culpable actors here. You could say the first is the big tech companies for not taking the political risk. What is even more reprehensible is the fact that they probably correctly thought that appearing more competent than the government — and saving tens of thousands of lives as a result — would be held against them and significantly impact their other businesses.", "Suppose that they had built the software. Let's play with the scenario. What would happen then? Would they get hauled in front of Congress and explain why they weren't delivered even faster because of the ultimate bottlenecks in the supply chain? What would happen if they built it and it's better than the government's?", "Patrick McKenzie 00:11:43", "So many things could happen. This has sometimes been called the Copenhagen Principle of Culpability.", "If you build the thing, various actors in our system will assume, \"okay, now you're responsible not just for the consequences of the thing you built, but for the totality of consequences of everything associated with the American vaccination effort. You built the thing, you big tech geniuses, but what did you do about localization? You didn't do enough about localization. You hate [name group of people here], don't you? See the disparity in death rates between demographic A and demographic B? Why haven't you fixed that yet? You have killed so many people.\"", "No one in government who is making that moral calculation says, \"I have responsibility for killing people by doing nothing.\" The person who is doing anything has the responsibility for killing people by taking up the burden of doing something. It is an absolutely morally defensible thing, which you will see over and over again in our discourse.", "Dwarkesh Patel 00:12:41", "They get hauled in front of Congress. It's not just because they made a sin of commission.", "There's one answer where it's because they touched the problem. There's another where it's because they did it better than the government could have. Those seem like two different answers.", "Patrick McKenzie 00:13:01", "You touched the problem so you've immediately taken liability for any number of sins of omission. Even at the scale of the largest companies in the world, you have not allocated infinite resources to this problem. Also, stealing a march on the government and embarrassing us will be held against you.", "You can point back to the Cambridge Analytica thing . Cambridge Analytica is shorthand for this one time back in the day when the news media in New York and the government in DC convinced themselves that a small team of people, with a budget of approximately $200,000, rooted the United States presidential election.", "Rooting the United States presidential election, perhaps on behalf of a foreign power, would be an enormously consequential thing. Good thing that did not happen in the world that we live in. However, people believe very passionately in that narrative. As a result of that narrative, they very aggressively attempted to clip the wings of tech and tech’s core businesses, like advertising.", "Dwarkesh Patel 00:14:00", "There's so much that's great here.", "We figured out that we couldn't delegate to big tech or any of the competent actors. The native infrastructure that we had — that was specifically earmarked for dealing with public health emergencies — was extremely incompetent to the extent that Discord servers vastly outperformed them.", "Suppose that public health is not uniquely incompetent among the different functions that the government is supposed to perform. Those functions also don't get tested until the actual emergency is upon hand.", "Say the president hears this and is concerned that the people running responses to nuclear bombs or earthquakes aren't up to snuff. Is there some stress test that you could perform on these institutions in order to learn whether they're competent or not, before the thing actually happens?", "Patrick McKenzie 00:14:57", "Look at the experience of the last hundred years, or back to the flu pandemic in 1918 . A vastly less wealthy and less technologically sophisticated nation — with many fewer people involved in the actual fixing of this problem — competently executed on nationwide vaccine campaigns and other various measures.", "We should be urgently concerned with what has decayed institutionally since then. If you had just given health departments this kind of vanilla vaccination campaign, maybe they would have done better than they actually did. I'm not positive about that.", "Here I have to stop for a disclaimer. I think people in county health departments did real important work. They probably did work that saved lives on the margin. I do not think the United States should be satisfied with our performance in 2020 and 2021. We should be very dissatisfied and we should get better for the future. That requires recognizing that we underperformed by a lot.", "There was a political decision made that the successful administration of vaccination was not going to be measured solely by saving lives. The prioritization schedule that we came up with was byzantine and complicated. It routinely befuddled professional software engineers and health administrators. I could not diagram it out on a whiteboard even if you paid me a million dollars to get it right the first time.", "That was all downstream of the United States' political preferences. Schedule 1A versus 1B versus 1C was in the first five seconds of the discussion dictated by medical necessity. Immediately after that it became about rewarding plums to politically favored groups. One of the complexities of this is that the pharmacies and healthcare departments are not set up to discriminate along the axes of whether one is politically powerful or not.", "That is not a thing that they have to do in most vaccination campaigns. That’s not a thing that they have to do on like the typical Tuesday of providing medical services. We asked them to do this radically new thing, which is in part responsible for the failures that we had. If we had had a much simpler tiering system, we would have had more than 25% of the shots successfully being delivered in the state of California in January of 2021.", "00:17:26 – How politics crippled vaccine distribution", "Dwarkesh Patel 00:17:26", "For people who are not aware, what was the political tiering system that you're referring to?", "Patrick McKenzie 00:17:30", "Oh, goodness. This was different in different places. Confusingly, different places in the United States use the same names for these tiers for different people. In the state of California, Tier 1A comes before 1B comes before 1C comes before Tier 2, et cetera.", "1A changed over time on a day-to-day, week-to-week basis, sometimes in mutually incompatible ways at the same time. It was an entire mess. At the start, Tier 1A was for healthcare professionals, a few others, and people above the age of 75. No wait, we'll change that to 65.", "Tier 1B was where we put a few favored occupational groups and some other folks. Tier 1C was for people who doctors think will probably die if they contract COVID unvaccinated, but who have not appeared in group 1A or 1B yet.", "Who got 1A? Healthcare professionals, like doctors administering the vaccine. That sounds pretty reasonable. Also, veterinarians. Were veterinarians urgently required by society at the time? Not so much. It was because the California Veterinarians Association is good at lobbying. That isn't just me alleging that. They sent a letter out to their membership saying, \"we are so good at lobbying, we got you guys into 1A. Congrats and go get your vaccine now.\" I have that on my website.", "Tier 1B. School teachers were classified as Tier 1B. Why? Go figure.Teachers’ unions have political power in the state of California. They said, “we'll accept not being in 1A, but we are going no lower than 1B.” Probably no one in that meeting ever said, \"I definitely think that 25-year-old teachers who are currently under stay-at-home orders should be in front of people who will die if they get COVID.\" But we made that choice.", "Dwarkesh Patel 00:19:23", "In your article you discussed that the consequence of this was not only the misprioritization of the vaccine. The bureaucracy around allocating it according to these tiers resulted in 75-year-olds not having the capacity to fill out the pages of paperwork required to decide what tier you're in.", "Patrick McKenzie 00:19:40", "The state of New York commissioned a consultancy to administer to 75-year-olds a 57-page web application. It required uploading multiple attachments to check for their eligibility. Talk with a technologist if you don't believe me. We try to remove everything from a webpage so that people can successfully get through it.", "If you can make it two to four form fields, that's already taxing people's patience. You are asking people — who might be suffering from cognitive decline or are less comfortable in using computers — to do something which would literally tax the patience and cognitive abilities of a professional software engineer.", "That wasn't an accident. We wanted to do that. Why did we want to do that? It was extremely important to successfully implement the tiering system that we had agreed upon. Why was it extremely important to implement the tiering system? Because that was society's prioritization. Was that the correct prioritization? Hell no.", "Dwarkesh Patel 00:20:36", "Can we just count off everything? It's enraging not only because people died, but because nobody talks about it. There are all kinds of controversies about COVID, about whether vaccination had side effects, whether the masking orders were too late or too early. The main thing should be whether we got vaccines in people's arms on time because of these political considerations.", "Patrick McKenzie 00:21:10", "Can I jump in with one bit of optimism? We achieved something incredible. We got the first cut of the vaccine done in two days as a result of many decades of science done by very incredible people. We successfully got that vaccine productionized in a year. We should have gotten it productionized in far less than a year. Still, the fact that we were able to do it in one year and not three was enormously consequential. We should feel happy about that.", "We should be a little annoyed that we didn't have better protocols at the FDA and other places to get that vaccine prioritized for testing much faster. We should be quite annoyed at the fact that that was a political football. People probably made decisions that pessimized for human lives and optimized for defeating a non-preferred political candidate.", "Dwarkesh Patel 00:21:53", "You’re talking about the fact that the vaccine was announced the day after the election results or something, right?", "Patrick McKenzie 00:21:58", "Yes, I'm basically subtweeting that. I strongly believe that was a political decision, but I'm just a software guy.", "Dwarkesh Patel 00:22:06", "There was a particular kind of craziness that we had during 2020 and 2021 about equity and wokeness. How much was that uniquely responsible for the dysfunctions of this tiering system and geolocation/redlining? Basically, if this happened in another year when there wasn't a bunch of cultural craziness, would it have gone significantly better?", "Patrick McKenzie 00:22:31", "It's difficult to ask that question. We were clearly in a unique time in 2020 and 2021. Yet, point to me a year in American history in which American society was truly united and had no social issues going on. If people counterfactually point to say, World War II, I will say read more history there.", "Be that as it may. Was it the case that strong societal feelings in the wake of George Floyd's death in 2020 and the racial reckoning strongly dictated policy? Yes. That's a positive statement rather than a normative statement. That is absolutely the case.", "There's this thing we often say in the tech industry called bike shedding . If you're building a nuclear power plant, many people cannot sensibly comment on what is the flow rate through the pipes to cool a nuclear reactor. However, if you build a bike shed next to the nuclear power plant, it's very easy to have opinions on the color of the bike shed. So in the meetings about the nuclear power plant, you will have a truly stupid amount of human effort devoted to what colors you should paint the bike shed.", "It is very difficult for most people in civil society to successfully inject a vaccine into someone's arms, to successfully manage a logistics network, to successfully build a nationwide information gathering system, to centralize this information and pass it out to everyone.", "We aggressively trained the entire American professional managerial class in decrying systemic racism. To be clear, it is a problem.  The American professional managerial class essentially calls all the shots in the U.S. system. Any discussion about what we should do with regards to information distribution will almost invariably get bent to, “I have no particular opinions on server architecture here and nothing useful to comment, but what's our equity strategy?”", "The equity strategy dominated discussions of the correct way to run the rollout to the exclusion of operationalizing it via medical necessity. People brag about that fact. That fact is enormously frustrating to me. If you say it with exactly those words and emotional valence, people will say, “no, that's not exactly what we meant.” When they're talking to other audiences, they say, “no, this is absolutely what we mean.”", "Dwarkesh Patel 00:25:01", "Maybe the culprit here is a scarcity mindset. We cared more about proportions rather than just solving the problem.", "Patrick McKenzie 00:25:14", "This was one of those few times when we were up against a genuine scarcity constraint. The physical reality was that there were a scarce number of vials. We needed a prioritization system. Some people who urgently needed the vials were not going to get them first. Everyone was going to get them eventually. However, our political system's mad rush to dole out favors in prioritization for those first vials exceeded the actual distribution and injection of the vials as a goal.", "California reported to the federal government that it was only successfully injecting 25% of its allocation. It had the most desirable object in the history of the world. Rather than adopting any sensible strategy to get it into people's arms, it was bickering over who should get it first. We should be outraged about this, but we're mostly not.", "Dwarkesh Patel 00:26:05", "I don’t even know what to ask next because it’s so obviously outrageous. To me, there's no clear answer about why there isn't more outrage about it.", "Also, the solution isn't obvious. It’s not clear to me that we’ve learned the lesson for the next pandemic, let alone for a different emergency. For an isomorphic emergency, would our state capacity be better?", "You mentioned how 100 years ago, we might have been able to deal with this problem better. What changed?", "Patrick McKenzie 00:26:39", "America used to do something when the federal government lacked state capacity for something. It’d identify who in civil society or private industry had capacity for this. Then they'd say, \"congratulations. By order of the president you're now a colonel in the United States army. What do you need to get it done, sir?\" That option was available but not taken.", "I will play no fights in either of the two administrations. Both individually made terrible decisions. But plausibly, a more enlightened counterfactual administration could have gone to Google and said, \"who's your best person for solving this data problem? Will they accept a commission as colonel? Great. Here's an order from the president. Your swearing-in ceremony starts in 30 seconds. You'll present your project plan tomorrow.\"", "Again, the successful project plan was actually created by rank amateurs on Discord in a couple of hours. This is just one part of the huge vaccination effort. You could imagine going to Amazon and saying, \"hey Amazon, we hear you're good at getting packages from A to B. This package has a really hard challenge. It needs to stay cool during delivery. That’s a totally unsolved problem in material science, right?\" Amazon would say, \"we literally do that every day.\"", "They say, “back in December, people were saying on the news this would be an unprecedented logistics challenge because the vaccine has to be kept at ultra-low temperatures.” These are the same temperatures at which milk is transported. We already understand cold chain logistics .", "So Amazon would correct that misperception. They say, “oh you guys seem to know what you’re doing. We have an absence of that here. Congratulations, here’s your colonel uniform in the US military. Now we are going to give you a CSV file everyday. Interface with this other colonel from Google on where this thing needs to go. You get it there on time everytime. If you can’t get it there on time everytime, call the White House and we will find you political cover.” That’s what a functioning system would have done.", "Granted, the American system is dysfunctional in its own way. We've also underexplored international comparisons in the last couple years. I don't know if any country anywhere, with vastly different political systems, is happy with their outcomes. Some were obviously vastly better than others. There are journals of comparative international politics. Why are those journals writing anything but analysis of who succeeded at what margins and who didn't? What do we learn about about the proper functioning of political systems, civil society, and the United States considered as one hugely complex machine..", "Dwarkesh Patel 00:29:27", "That's a really interesting point. I actually asked the Tony Blair Institute . They were recommending different ways of distributing the vaccine to the British government. They made the obvious recommendation that they should give everybody one dose now and then do the second dose later.", "There were obvious things like this that would have saved lives. The British government didn't do a good job there. No government… it's actually a very interesting question. There are governments all across the world that have very different political systems. They hopefully have different infrastructure already in the mix of this.", "Why did nobody get this right?", "Patrick McKenzie 00:30:02", "The catchphrase for this was \"First Doses First.\" This wasn't the procedure in many nations with many smart people, including the United States. Sometimes you can trace policy back to individual blog posts. In this case, I believe it was Alex Tabarrok on Marginal Revolution who was right. This is very obvious and overdetermined. If we want to win at this, First Doses First is objectively the correct policy.", "This idea ping-ponged around the political system for a while. They talked to medical experts who eventually agreed. This is the equivalent of saying you should probably consume calories at some point in a typical week. It’s better than not consuming calories. “We checked with medical experts, and after six weeks of meetings, they definitely agreed eating beats not eating for living. So we're going to do that now.”", "On one hand, it's a genuine strength of the United States that we didn’t just follow some relatively unknown person on the internet who wrote a 2000-word blog post in order to stop doing stupid things. We could have stopped doing the stupid things sooner though.", "Dwarkesh Patel 00:31:25", "That doesn't answer the question of why nobody got it right. Is there something particular to the late-stage bureaucracy we have? Maybe another country with a fresher system or a more authoritarian model that can crack down would do better. But they did abysmally as well, often making errors worse than America's. There are so many countries, Patrick. Why didn't any of them get it right?", "Patrick McKenzie 00:31:51", "I'm under-informed on much of the international comparison, partly because in 2021 I was sort of busy. However, I remember Israel, for various institutional reasons, having a broadly functional response, particularly around end-of-day shots. End-of-the-day shots are a minor issue in the grand scheme of things, but they’re a good quick heuristic for assessing if a country has good epistemics on this at all.", "The physical reality of COVID shots is that there are 5, 8, or 10 shots in a single vial. That single vial goes bad after 12 hours. That's a bit of an oversimplification, but for regulatory reasons we have to pretend it goes bad after 12 hours. It can't be resealed. If you vaccinate two people then the other shots are on a timer. They will decrease in value to zero after the remaining hours and then get thrown in the trash.", "Here's a quick question to test if you're a rational human being. At the margin, would you prefer giving a shot to the most preferred patient in your queue, who needs it for medical reasons, or to the trash can? You'd prefer the most preferred patient. Here’s a follow-up question. Would you prefer giving it to the least preferred patient or the trash can? You'd still give it to the human rather than the trash can.", "Israel adopted the policy that if shots were expiring, they'd forget the tiering system and anything else. They'd literally walk out into the street and say, \"I've got the COVID shot. I need to administer it in the next 15 minutes. Who wants it?\" In the United States, we had a policy ban on doing that. We said no. To protect the integrity of the tiering system and embrace our glorious cause of health equity, you should throw that shot out. This policy was stupid and it was announced by governors proudly in December in front of news cameras.", "A couple of weeks later, reality set in. People told them, \"sir, it turns out that throwing out the vaccine is stupid.\" The governor didn't go on the nightly news again to say, \"I gave a very confident policy speech a month ago in front of this news camera. I said that I would prosecute anyone who gives out end-of-day shots—”", "Dwarkesh Patel 00:34:23", "He literally said that. He literally said that.", "Patrick McKenzie 00:34:26", "Oh man, this is almost a direct quote. You can see the actual direct quote in my previous writing on this. “I will not just prosecute people. I will aggressively try to maximize the reputational impact on your firms and your licenses.”", "We were pointing metaphorical and, when it came down to it, literal guns at physicians in the middle of a pandemic for doing unauthorized medical care. It’s crazy. When the system corrected, it did not correct all the way. The governor did not go out and say, “hey, that thing I said a month ago was effing insane. I take that back and apologize.” No, it was like, “okay, we're going to quietly pass out the word. That's no longer the policy, but we don't want to own up to the mistake.”", "People, say in the regulatory departments of pharmacies, make rational decisions based on the signals that you are giving them. The rational decision a pharmacy makes is not, “okay, we've been quietly passed the word that the old policy is persona non grata. But can we really trust the quiet word here? Do we trust that this actor is not going to change their mind in two weeks and consequence us for something we authorize today? Just throw out the shots.”", "Pharmacies did not cover themselves in glory. Some individual pharmacists did, but pharmacies institutionally did not. Pharmacies were thinking, “we deliver almost all the medications for almost all the diseases routinely in America. We cannot blow up either that position of societal trust or our business results over one drug for one disease.”", "They decided to throw out the shots and make sure they could still deliver medical care in California tomorrow. I understand how that decision was made. We should not endorse that decision. There were individual acts of heroism by particular pharmacists who essentially said something like this to us when we called them and asked about the procedure for getting the shot.", "\"Okay, an individual like the one you just described cannot formally get the shot right now. I would tell that individual to go to the county website and tell whatever lies are necessary to get an appointment with me. They come in for an appointment and I will inject them rather than verifying the lies that were on the appointment card because basically eff the rules. I swore an oath.\"", "Dwarkesh Patel 00:36:48", "Honestly, I don't know where to begin with some of these things. I want to understand a bunch of that.", "First of all, 25% of the vaccines that were allocated to California were actually delivered in people's arms? Literally the entire world economy was bottlenecked on this, right?", "Patrick McKenzie 00:37:06", "Here’s another funny anecdote. I asked some people in positions that might know, \"how real do you think that 25% number is?\" They said, \"the good news is in addition to being incompetent at delivering the vaccine, we're also incompetent at counting.” So it was probably a bit of an undercount.", "I'm like, \"oh so the good news is the true count was like 100 percent or 95 percent or something?\" They were like, \"no, not nearly close to that, but we got better at counting after the governor yelled at us because he was embarrassed. We were the 40th state in the nation.”", "Dwarkesh Patel 00:37:36", "It’s literally just counting the item, a bottle. Where's the thing that is going to rescue us from the thing that is destroying the world?", "Patrick McKenzie 00:37:42", "Let’s say you ask someone deep in the bowels of pharmacies' accounting departments, \"could you, by the end of business today, give us a count of how many bottles of aspirin the pharmacy has physically in the world?\" By the end of the day, they would have a shockingly accurate number for that. It wouldn't be exact, but it would be shockingly accurate relative to that number being truly millions.", "If you could say, \"break it down by address, please. Where are they physically present in the world?\" That’d be an easy problem. Managing inventories of drugs, that's what we do. The United States could not do that. It did not perceive that to be an urgent problem to be solved.", "00:38:19 – Fundraising for VaccinateCA", "Dwarkesh Patel 00:38:19", "I do want to ask you about the actual finance and software stuff at some point, but this is such an important topic. The world is brought to a stand still. We still haven't learned the lesson. I'm just going to keep going on this topic because I still don't understand.", "Here's another question that's related to this. You have many rich tech industry friends. I read your article and you're saying, “I'm filling out these grants for $50k a year. That's taking up all my time. I'm trying to raise a couple hundred grand a year, a couple tens here.”", "I'm thinking to myself, how is this not as trivial a problem as, “hey XYZ, if you give me money that you can find between your couch cushions, we will save thousands of lives and get the world economy back on track.” How is raising money for this hard? Or why was it hard?", "Patrick McKenzie 00:39:07", "Again, trillions of dollars are on the line. The United States is spending tens of billions of dollars or more on its COVID response strategy. The true biggest issue is why has it come down to Patrick McKenzie's ability to fundraise in the tech industry for us to have a system here?", "Bracketing that, the tech industry underperformed my expectations for what it should have accomplished here. There were some bright spots and less bright spots with regards to the fundraising project. For those of you who don't know, the total budget of this project was $1.2 million. It’s not quite couch cushion money, but it’s not large relative to the total amount of resources that the tech industry can deploy on problems.", "I looked at my email this morning to refresh my memory. I'm not going to name people, but they're welcome to claim credit if they want to claim credit. I emailed the CEO at a particular company, \"hey, I saw you like to tweet about this on Twitter. I'm essentially raising a seed round except for a 501(c)(3) charity and we urgently need money for this. Here's a two page memo.\"", "I sent that email at 4:30 p.m. California time. He got back to me. There were some internal emails routed to this person and then to that person. “Hey run that by blah, blah, blah.” 9:30 the next morning, he said, \"I'm personally in for $100,000 out of my own pocket. My banker is going to contact you.” The wire cleared the same day. So yay for that.", "On the less yay side, tech is not exactly a stranger to having bureaucracies. In some cases, it was a matter of \"oh indicatively, we want to support that, but we have a process\" and that process went on for six weeks. By the time six weeks was over, it was May.", "By May, most people in the professional managerial class who had prioritized getting a vaccine for themselves and their loved ones had succeeded at that. They said, \"okay, the vaccination supply problem is pretty much solved, right?\" I'm like, “no, it is not solved right now.” It is solved for the people who are smartest about working the system in a way it was not solved for even them back in January. But there are many people who are not yet vaccinated.", "They’d say, \"that's a vaccine hesitancy issue.\" No, it is not merely a vaccine hesitancy issue. It is still the case that there are logistical problems. It is still the case that people don't know that you can just Google the vaccine now. It is still the case that around the edges of the American medical system, in places that are underserved, people don't have it or they can't get transportation, etc. You should continue funding this team for the next couple of months so that we can do what we can around the edges here.", "Again, people can do what they want with their own money. I understand that charitable funders have many things. I was told, “relative to the other places we can put money to work in the world, further investment in the American vaccine supply situation as of May and looking forward, it doesn't make sense for us. Could you do it in another nation?”", "We said, “okay, we're the American effort. We have some advantages here. We would not have them in another nation. We did talk to people there. We tried to see if we could help a team there or go there, etc. But we don't see that there's a path to positively impacting the problem there in a way that there's manifestly a path to positive impact here.”", "We lost that argument. We didn't get the money. The last $100K in was my daughter's college education fund.", "Dwarkesh Patel 00:42:34", "My God.", "I agree that it shouldn't be up to tech to solve this huge society-wide problem. But given that nobody else was solving it, I still don't understand. Have you gone back to any of them? Have any of them reflected like \"yeah, maybe I should have just written you a million dollar check and saved you all this hassle so you could have gone back to business.\"", "Patrick McKenzie 00:42:59", "Ultimately I'm the CEO. Responsibility for fundraising lies with me. I've thought a number of things about how I could have done better. How could I have strategized? I did not stop fundraising efforts, but I stopped lighting up new conversations for a number of weeks. I thought, “okay, we've got the $2 million that we need to run this till the end of August.” That's my internal target for the point at which it doesn't quite stop being useful, but it starts actually being a question on the margins. It's not a question until the end of August. Could I've done better? Probably.", "There’s some of the folks in the broader effective altruist community. I'm not a member, but I've read a lot of stuff that they have written over the years. I broadly consider them positive. They are the \"but for\" cause of VaccinateCA. Ask me about that in a moment. Some EA funders talked to me after my piece about it came out. They said, \"this is physically painful to read. We wrote bigger checks with less consideration to projects that had far less indices of success. Why didn't you just ask us for money?\"", "The answer there was twofold. One, I thought I had high quality introductions and a high quality personal network to people who are likely already going to fund it. So I didn't light up additional funding sources. And two, this is a true answer, I'm a flawed human who has a limited number of cycles in his day. I was running a very complex operation. It literally didn't occur to me, “hey, maybe those people that have been making a lot of noise about writing a lot of money for pandemic checks would be willing to write a pandemic check.”", "That was not entirely an irrational belief for me because I had reached out to people who are making a lot of noise about writing money for pandemic checks. They said, \"not in the United States, not in May.\" I thought, “oh, if I light up a conversation totally cold with someone now, it's likely to just get a no again.” I should try to scrimp and save and break the piggy bank for my daughter's college education fund. By the way, she'll go to college folks, no worries. But it's how far down the list of Plan A, Plan B, and Plan C. We were down to Plan C at that point.", "Dwarkesh Patel 00:45:02", "Just to be clear, I'm definitely not blaming you. It goes back to the Copenhagen interpretation.", "Patrick McKenzie 00:45:07", "No, but you should blame me a little bit because I should be rigorous about my performance.", "Dwarkesh Patel 00:45:11", "You go back to commission versus omission. It's the exact same reason that we shouldn't have blamed Google if they got involved, did it themselves, and maybe made a mistake. Like come on, to remove the bottleneck that was basically stopping all global economic activity and causing millions of deaths. you had to take money out of your daughter's college fund. It's so insane.", "Patrick McKenzie 00:45:42", "There's a positive takeaway here. There is one tiny actor who understands that he has unitary control over some decisions, and who is capable of betting boldly on those without a huge amount of process when it is important to bet boldly on things. Not to toot my own horn here, but this is literally what happened.", "On the first day, we're getting in Discord together and there's a bunch of infrastructure we have to sign up for. We have to get hosting, etc. There is an annoying mechanical step at this point where you have to put down a credit card, for a potentially unbounded expense.", "People were like, “there's a list of things that we want to do. Since there is no money here, I'll take this one and you take this one.” After I heard this conversation go on for two minutes, I said,”this is not a conversation we should be having. Here is a debit card for my business which I've just spun up on the backend. This is literally my job. It has $10,000 on it. Spend the $10,000 on anything that accelerates this project. There is no approval process. Don't get a receipt. Don't worry about the paperwork right now.”", "Why did I do that? “The information about where hospitals exist and what their phone numbers are is probably scrapable from the internet for free. Or we could buy a commercial database, but that's a stupid amount of money. It's like $2,000.” I'm like, “relative to the importance of this project, $2,000 is a trivial amount of money. Just spend the $2,000 immediately rather than spending four hours writing a scraper.”", "We don't think about that in government procurement and in charities. We have some sacred virtues like, “you must minimize waste. You must minimize opportunities for corruption. You must maximize for funders’ line item support of individual things that charities buy.”", "Those sacred virtues conflict with winning. At the margins where they conflict, we should choose winning. We should choose human lives over reducing corruption. One of the few things we are reflecting on is the tremendous amount of waste and fraud that happened with PPP loans and other pandemic stimulus things. I'm not just saying this to be contrarian, folks. We should be glad there was waste in COVID stimulus. If there was no waste, we were clearly not choosing the right margin to focus our efforts on.", "Dwarkesh Patel 00:48:16", "I want to clarify this for people who don't have context on how much money typically goes around in Silicon Valley. They think, \"oh, $1.4 million. How hard should that be to raise?\" If you right now, given your reputation, literally tweeted out, \"I'm not going to tell you my idea but I'm raising a $50 million seed round,\" that's going to get filled.", "People don't understand. I have friends who are 16 years old who have some GPT wrapper and they don't have to worry twice about raising $1.4 million.", "Patrick McKenzie 00:48:48", "Not trying to brag folks, just telling you the reality of Silicon Valley. I misapplied some of the  knowledge I have of how seed funding would work if I attempted to raise for a for-profit company.", "I thought originally, we're probably going to be charitable, but I'm going to pitch this to people as essentially like a seed investment. They expect to spend all the money as quickly as possible and go to zero, while driving the total addressable market of the company to zero. I'm bummed this is what passes for humor with me.", "I told folks pretty confidently in the first couple of days, “I'm pretty sure I can get us $8 million.” Then I was actually able to deliver on $1.2 million after far more tooth pulling. But yes, descriptively, if I was asking for a seed stage investment and I wanted to get $8 million wired by tomorrow, I could probably do that.", "That is a civilizational inadequacy because can you literally get $8 million for a blank check for something that has a profit motive behind it. If I write on the check, “hey, we want to fix the manifest inability of the United States to figure out where the COVID vials are,” that blank paper becomes less valuable.", "On reflection, maybe I shouldn't have told people, and said, \"oh, the blank check company was this thing and we're making it a 501(c)(3).\" There are maybe some ethics issues in that, but the ethics issues are less bad than allowing people to die.", "Dwarkesh Patel 00:50:16", "A recent episode I released was with a former AI researcher who thinks that the field is progressing in such a way that you will need to nationalize the research in order to protect American national security. I hear what you’re saying about the inability of the government to keep track of vials of COVID vaccines or to get them in people's arms.", "For any other emergency — whether it’s AI or the fallout of a nuclear war — should we just discount any government response to zero? If your plan requires some sort of competent administration by the government, should that be discounted to zero? It has to be something on the side.", "00:51:09 – Why tech needs to understand how government works", "Patrick McKenzie 00:51:09", "Discounting to zero is the opposite of wisdom here because we didn't accomplish zero. We accomplished an extremely impressive thing in aggregate. It vastly underperformed the true thing that we were capable of. You have to keep both parts of the equation in our minds at the same time.", "People in tech need to become radically more skilled at interfacing with government. To the extent that we have some manifest competency issues in government right now, we can't simply sit out here and gripe about this on podcasts, etc. We've got to go out and do something about it.", "I think it's been reported that there was a meeting among tech leaders early in the vaccination effort where a bunch of people got in a room and were like, \"this is going terribly. I hope someone fixes it.\" \"I hope someone fixes it\" is no longer a realistic alternative. We have to be part of the solution.", "It's partly about having higher fidelity models for how Washington works than simply, \"oh, they're bad at everything.\" It is important to understand that the government has some manifest competence issues. It's also important when working with the government to understand that telling the government to its face, \"you have manifest competence issues\" is not the maximally effective way to keep getting invited to the meetings.", "I was very religious about not criticizing anything about this Californian response effort in 2021 because we needed to be in the room where it happens. That was a choice made. Am I 100% happy with that choice? No, but we kept some relationships that we really needed. I'm not saying don't criticize the government, obviously. Be strategic about that sort of thing. Play like you are attempting to win the game.", "On the government side, one thing we need is dispelling the massive “ugh” field that surrounds software. This is going to be a part of the future, whether you like it or not. We need to get good at it. We can no longer accept incompetence at this as the routine standard of practice in Washington.", "Secondly, it is enormously to the United States’ credit that we have an extremely functional, capable tech industry. Maybe we shouldn't treat it like the enemy. I’m just putting that out there. Again, this is a thing the United States has done before. There are laws in place. There are decades of practice. We could put a colonel's uniform on somebody. Think seriously about doing that next time.", "Do I think we have institutionally absorbed all the correct lessons from this? No. When I see after action reports, they praise a lot of the things that people think are very important for maintaining their political coalition. These are things which were either not productive or anti-productive.", "They fail to identify things that were the true issues. To the extent that they identify things that were the true issues, the recommended action is, \"I hope someone fixes this next time.\" That’s no longer sufficient, that the default case is that the ball will be dropped.", "Those of us who were involved in VaccinateCA kind of dread what we call the Bat-Signal . God willing, there will not be another worldwide pandemic killing millions of people as long as we live. If there is one, we know what numbers to text to get the band back together. Society should not rely on us as Plan A. How did this happen?", "Dwarkesh Patel 00:54:51", "The point about griping on a podcast, that's definitely what I'm doing. Maybe you’re too humble to say this for yourself. I do want to commend you for this. There are very few people who would do this.", "You tweeted it out. There are probably other projects that other people could have taken up that are not taken up. In this case, you tweeted it out. You saw that there was a thing that could be done and you did it. You quit your job and you did this full-time. The reason you had to dip into your kid's college fund was because somebody who had promised a donation didn't follow up on it, right?", "Patrick McKenzie 00:55:25", "Effectively every time we had a verbal green light with regards to money, I would advance the company, the charity… Charities are companies, by the way. I don't know if that is obvious. It was called Call the Shots Incorporated.", "I would advance Call the Shots the money that was soft committed, before the money would actually arrive in the bank on the theory that this accelerates our impact. We should always choose acceleration over other things such as minimizing credit risk.", "Some of the people who had soft committed did not actually end up wiring money at the end of the day. Shoot. My choices now are either don't run the last payroll or do run the last payroll and do not recover the money I've advanced the company. I said “okay, do run the last payroll.”", "Dwarkesh Patel 00:56:19", "Did you end up recovering it in the end?", "Patrick McKenzie 00:56:20", "No, it ended up being a donation from me personally to the effort.", "Dwarkesh Patel 00:56:25", "What the fuck?", "Patrick McKenzie 00:56:27", "That is the least important part of the story folks.", "Dwarkesh Patel 00:56:29", "Sure. But overall… kudos obviously isn't enough to convey what I mean to say here. I'm glad you did that and I'm grateful. You saved a four-figure amount of lives. It’s hard to plot that on a graph and make sense of what that means.", "Patrick McKenzie 00:56:49", "To the extent kudos are deserved by anyone, it’s Karl Yang for taking up the torch and finding 10 people in the tech industry who would jump into something at nine o'clock. It’s those 10 people, the other board members, the hundreds of volunteers, the team of about 12 people who worked on it full-time — and very full definitions of full-time, virtually ceaseless — for five, six months.", "There were other projects in civil society. There were many people doing this as their day jobs. The American response effort is not one small group of people anywhere. It's the collection of all these things bouncing off of each other.", "I'm happy about our individual impact. I'm happy that, descriptively speaking, if you Googled for the vaccine at any point before a certain day, there was no answer. After that day, there was an answer. That answer came from us.", "I’m a little dissatisfied that it didn't come from people with vastly more ability to have caused that much earlier. But the ultimate takeaway is not about this little tiny piece of the puzzle. How can we make the total puzzle better next time?", "00:58:58 – What is crypto good for?", "Dwarkesh Patel 00:58:58", "Let's talk about some finance. In addition to saving thousands of lives with VaccinateCA, what you've been doing over the last year or two is writing this very excellent finance newsletter called Bits about Money . You explore the plumbing in the financial system.", "Here’s my first question about this. Crypto at its peak was worth $3 trillion or something like that. From the crypto skeptic perspective that you have, how do we think about this number? What does it represent? Was it just the redistribution of wealth from dupes to savvy people?", "To the extent that useful applications didn't come out of this $3 trillion, what does it represent?", "Patrick McKenzie 00:59:40", "I have two broad perspectives on this. People often treat the market cap of something as implicitly like some sort of cost on society. The true cost of crypto on society has been this. Anytime one engages in attempting to do productive enterprise, some actor in society has said, “okay, I will stake you with some of society's resources. These resources are rivalrous . They cannot be applied to any other things society needs. I do this in the hope that you will produce something that is worthy of being staked with this.”", "How much have we spent on crypto? Not on trading tokens around, but I mean building infrastructure and spending rivalrous resources that we can't get back. There’s GPUs or ASICs or electricity that could have gone to other things in China, but went into mining . There’s the time of talented and intelligent people that could have been building other software products but were instead building crypto. That number is in the tens of billions or hundreds of billions of dollars.", "What do we have to show for that tens of billions or hundreds of billions of dollars? I am very crypto skeptical and I could give you an answer to that question. Crypto fans would not like to hear it from me. So I prefer Vitalik Buterin's articulation of this question from 2017 . At the time it was $0.5 trillion, a trivial number only $500 billion. He asked, and I'm paraphrasing, \"have we truly earned this number? How many of the unbanked have we actually banked? How many distributed applications have a meaningful amount of value doing something which is meaningful?\" He has about six other meditations on this .", "Crypto folks certainly aren't accountable to me. In some manner, you're not even accountable to Vitalik even though he's clearly a leading intellectual in the community. You're accountable to producing positive value in the world. What is the answer to Vitalik in 2024? How many of the unbanked have we truly banked? What is the best use case for crypto right now?", "Once crypto has a responsive answer that is sized anything like proportionate to the hundreds of billions of dollars resources that we've staked crypto with, then crypto people should feel enormously proud of that accomplishment. In some future where that hypothetically arrives, you have my sword. I will love your initiative. However, for the last many years we have been saying you can still get in early. You can still get in early because the value has not arrived yet.", "That is my capsule summary on crypto 14 years in. We've staked a group of talented people who are very good at giving a sales pitch, with tens or hundreds of billions of dollars and look at what we have built. This would be a failure in any other tech company, a capital-F failure. Either radically pivot and unfail it or maybe we should stop continuing to stake you with money.", "Dwarkesh Patel 01:02:46", "Here are two potential responses from the crypto optimistic perspective. I have some people who help me with the podcast who are around the world. I have a clip editor in Argentina. I have a shorts editor in Sri Lanka. I asked them, “how should I pay you?” I haven't prompted them and they say USDC . Maybe it wouldn't be that much harder for them to set up a Wise account, but it's notable that their first answer is a stablecoin.", "Patrick McKenzie 01:03:27", "Absolutely, that is evidence. Some tech savvy people have a good payment rail. Well they have a payment rail that they did not have access to 15 years ago, but at the cost of tens or hundreds of billions of dollars. Counterfactually, one could have wanted to work on that payment rail specifically. Another way one could hypothetically have deployed $10 billion is on the best funded lobbying campaign in history in the United States to work on like AML and KYC regulation to allow easier transfers of money worldwide.", "Dwarkesh Patel 01:04:00", "Why does it have to be compared against the best possible counterfactual use case? It's the sins of commission versus omission again. On the margin, it made things better.", "Patrick McKenzie 01:04:09", "Don't judge it by hypothetical rules. Just keep in mind that hypothetical worlds might exist. Judge it by the actual realized utility at the moment relative to the amount of resources consumed.", "Dwarkesh Patel 01:04:18", "Here’s the second point. Look at the dot-com bubble for example. Literally close to a trillion dollars were invested in laying out the fiber and the cable for this artifact that you now consider the most valuable thing that humanity has built. A lot of the companies that built this went bust.", "There was a bubble-like dynamic where many of the investors who spent the capital to build out this infrastructure weren't paid back. They didn't see immediate use cases from what they'd built. The bubble had served as a Schelling point to get things rolling in the future. That was hundreds of billions of dollars, a trillion dollars. With tens of billions of dollars, if something cool and useful comes out of it in the future, that's probably worth it, right?", "Patrick McKenzie 01:04:59", "Cool. At what point do we get to say that didn't happen? At what date in the future do we judge whether someone has been right or not with respect to this. People in crypto have very confidently stated that this is the next iteration of the internet and this will revolutionize the world. Not just how payments are conducted, but it will be a fundamentally new computing architecture.", "Okay, on what day do we compare notes on whether that claim was accurate or not", "Dwarkesh Patel 01:05:33", "Does 2030 seem like a reasonable year? Or is that too far?", "Patrick McKenzie 01:05:35", "It seems reasonable to me. Can I make a prediction of what is said in 2030?", "“You can still be early. Crypto has created huge amounts of things, but it's not achieved anything near its true potential. Please invest in our new crypto startup.”", "Let's check back in 2030, folks. Please tweet at me if I'm wrong in 2030 . I will happily eat crow. I want to eat crow. Crypto people are like, “how couldn't you be interested in programmable money?” I'm interested in programmable money, obviously. Money is programmable money.", "My friends who have been trying to sell me on this since 2010 weren't wrong. This should totally smash my interests based on what I usually find intellectually edifying. I don't not find crypto intellectually edifying. There are actually some interesting things that have come out of the movement.", "I find a computer built in Minecraft out of redstone to be intellectually edifying. It's a wonderful educational device for people who don't understand how a CPU works. I'm not proposing to use the redstone-emulated computer in Minecraft to be the next computational infrastructure for the world. Fairly obviously, that will not work very well.", "Dwarkesh Patel 01:06:52", "Here’s another answer of what the value is. We want some sort of hedge. I think this is actually a reasonable argument. I actually don't buy the capabilities unlocked by crypto, but I do buy the argument that we want some kind of hedge against the government going crazy and KYC/AML leading to state surveillance. All the compliance departments in the banks start seeing if you've been to political protests and de-banking you.", "It may seem unlikely, but it's good to have this alternative rail which keeps the system honest. Given that there's an alternative, if things go off the rails this is a worthwhile investment. Society as a whole can't even count that low in terms of the other resources that it spends. It's a good hedge against that kind of outcome.", "Patrick McKenzie 01:07:45", "I'm actually much more sympathetic to crypto people than they expect most people who have a traditional financial background to be. It is descriptively accurate that the banking system — and all companies which are necessarily tightly tied to the banking system, which might be all companies — are a policy arm of the government…", "Whether people articulate that in exactly those words or not varies a little bit, but when you have your mandatory compliance training you'll be told in no uncertain terms that you are a policy arm of the government. I feel for crypto folks that say that this feels like warrantless search and seizures of people's information in a very undirected dragnet fashion.", "I have somewhat complicated thoughts about this. The modern edifice of KYC and AML dates back to the Bank Secrecy Act in the United States, which was in late 1970s. At the time, the US federal government was strictly rate limited in how much attention it could give to KYC and AML. Maybe because we thought we had very limited state capacity at the time, the government would make rational decisions and go after $10-100 million enormous white collar crooks and drug trafficking cartels a year. It would not surveil down to literally everyone in society.", "However, the regulations we wrote and have continued to tighten over the years do effectively ask for transaction-level surveillance of every transaction that goes through a bank. This is the actual practice. This is not a conspiracy theory. I'm making nothing up, folks. The actual practice in banks is that they have about as many intelligence analysts as the American intelligence community has.", "They get this scrolling feed of alerts generated by automated systems. For each alert, they go, “don't worry, don’t worry, don’t worry, don’t worry…” for millions of these alerts every day. Then for some tiny percentage, they say, “oh dear, this one might actually have been a problem. I'm going to write a two to four page memo and file it with the Financial Crimes Enforcement Network .”", "In all probability, no one is ever actually going to read that memo. We have an intelligence community-sized operation running in banks to write memos that no one ever reads. Some tiny portion of those memos will be useful to law enforcement in the future.", "If you had explained that trade in a presidential debate in the 1980s, I find it extremely unlikely that any part of the American polity would want to buy that. Could we perhaps spend tens of billions of dollars on it? But we did that.", "To that extent I'm extremely copacetic with crypto folks on this.", "Point. A: This thing factually exists in the world. I agree with you that it does.", "Point B: In an ideal world, this thing would not exist. I agree with you, there are very real privacy fears.", "However, crypto has this habit. People who are good at sales have various sales pitches that they give to people. Actors within the crypto ecosystem will talk an excellent game about privacy as long as “number go up.” When you have to choose between being tied into the banking system — which is necessary for number go up — or you can choose privacy, they will say, “excellent, I choose number go up.”", "Dwarkesh Patel 01:11:20", "But there are different protocols. You can use the ones that allow privacy if you care more about privacy.", "Patrick McKenzie 01:11:27", "That is a very tiny portion of crypto.", "Dwarkesh Patel 01:11:33", "I want to riff on what you were saying about the analysts. They have as many as would be in an intelligence agency. You have these apparatchiks who are connected to the government's policy, just analyzing each transaction. As soon as the government gets the competence to run an LLM across each of these millions of queries…", "Patrick McKenzie 01:11:54", "This is a legitimate worry. We have extremely low state capacity for this thing that we didn't think was important, successfully administering vaccines. But we do have extremely high state capacity with regards to running the security state. There are pluses and minuses there, but they have built some things that are extremely impressive technologically.", "If they successfully manage to get their technological ducks in order and then just run a LLM on this data set that we've passively been producing… The implicit ongoing invasion of privacy is much worse than we baked into the system in 1980. Back then it would have been people going down to archives to look at things in microfiche to try to do this.", "Dwarkesh Patel 01:12:40", "I'm not even necessarily making a point over crypto here. It's worth meditating on the fact that the default path for this technology is that a very smart LLM is going to be looking at every single electronic transaction ever. It's intelligent enough to understand the implications, how it connects to your other transactions and what's the broader activity you're doing here.", "01:13:07 – How the US government leverages big tech to violate rights", "Patrick McKenzie 01:13:07", "Can we step back from crypto and finance for a moment? This is one of the least understood things about the tech industry. We have this society-level question that is not being addressed directly. It's being addressed by misunderstood proxy questions.", "Taking as written that the finance industry is a branch of government in a meaningful sense, should the tech industry also be a branch of government? We don't ask that question directly. We have asked instead things like, “should the tech industry be responsible for minimizing the spread of misinformation?”", "There was an injunction issued in a court case last year on the 4th of July, which I find oddly aesthetically motivating. The court case is Missouri v. Biden . The argument made in the court case — which the judge accepted and is extremely well supported by the record in front of him — is that various actors within the United States government puppeteered the tech companies and used them as cat's paws to do frankly shocking violations of constitutionally protected freedoms such as the freedom of speech.", "It wasn’t on the level of \"we've built this unaccountable, hard-to-inspect system of LLMs and heuristics and we started turning off a lot of people's feeds on Facebook.\" But there was an individual person in the White House who was sending out emails like, \"when are you going to address me on this tweet, guys? We can't have things like this anymore.\"", "Again, it’s a feature of the United States that we are very good about keeping records and transparency and having a functioning legal system. I was following along as this was happening. What was happening was much worse than what I understood to be happening.", "Here’s an example of something that, as we were growing up as children, you would never think that the United States federal government would do. I believe it was the state of Missouri. They said \"hey, you have town halls where citizens can come in and speak their mind and advocate for their policy preferences. You probably have a civics class and talk about the First Amendment and things. Yeah, someone said something we don't like in a particular town hall. Take down the recording from YouTube.\"", "That happened. That is a violation of the constitution of the United States. That is against everything in the traditions and laws and culture of the United States. That is outrageous. Yet it happened. We have not repudiated the notion of using tech as cat's paws.", "This is literally written in the decision , which I would urge everyone to read by the way. There was an individual in a non-governmental organization which was collaborating with the governmental organizations in doing this. They said something like, \"to get around legal uncertainty, including very real First Amendment concerns, in our ability to do this, rather than doing it in the government directly, we are outsourcing it to a bunch of college students who we have hired under the auspices of this program.\"", "What? One, just as a dangerous professional, you have violated The Wire ’s Stringer Bell's dictum on the wisdom of taking notes on a criminal conspiracy. You literally wrote that down in an email! The outrageous part is not that you wrote that down in an email.", "The outrageous part is that you — with full knowledge of it — engaged in something that is outrageously unconstitutional, immoral, illegal, and evil, to the applause of people in your social circle. Everyone involved in the story thinks that they're the good guy in it. If you write that email, you are not the good guy in the story.", "Dwarkesh Patel 01:17:04", "What is your sense of what the judicial end result on this will be?", "Patrick McKenzie 01:17:15", "I think there will be a limited hangout and walkback of some particular things. There does exist an injunction. I would predict that continues to happen in the future. What do I know? I'm just a software guy.", "Do people want to achieve power? The tech industry has power because it is good at achieving results in the physical world. This is certainly not going to be the last time that someone desiring power thinks, \"can I force you to give me the power that you have accumulated?\" That is fundamentally a political decision about how we construct our democracy. We should make good decisions about that.", "Dwarkesh Patel 01:18:02", "Maybe that's the crux here. Your story of VaccinateCA illustrates the lack of accountability in our institutions. It seems fanciful that we can go back to Congress and pass some law. We have KYC/AML. We realized that with LLMs, it's going to be a bad deal with regards to privacy. We're going to roll that back. We don't like the collusion between tech and whoever's in power. We don’t like this ability to dictate what can get taken off the platform. We think that's free speech and we're going to pass laws and take that back.", "There's no sense in which society has come to a consensus about the privacy and free speech concerns we have. By at least one argument, the solution is that you just start new rails for these things that cannot be constrained in this way. It's not a matter of just changing the KYC law. Rather, that's implausible given the manifest declining state capacity we've talked about.", "Patrick McKenzie 01:18:59", "I don't accept that it is the only thing possible for us. I don't accept that the United States is incapable of doing nice things. We can't accept that. We have to be optimistic about the future. Otherwise, what are we doing here?", "In the tech industry, we know it is not a physical law of reality or of large institutions that one cannot make systems that work. Making systems that work is the job. We have a few existence proofs. We should increase our engagement with government. Hey state capacity, we can help you build some of that stuff. Also, the Constitution of the United States is a document we feel attached to.", "Again, incentives rule everything around us. In early 2021, the tech industry was very concerned about being told in no uncertain way by people in power, \"if you embarrass us, we'll end you.\" One thing in the judicial record of this case is that the White House and other government actors routinely overreached and asked the tech companies, \"we would like you to censor this and this and this.\" The tech companies said no in a bunch of cases.", "It’s important to continue to negotiate for the right outcome rather than the one that people in power merely want. There are some things that will feel unfortunate and maybe a little bit outside of our true sweet spot of what we would want to be doing on a Tuesday in tech. Maybe we have to ratchet up the amount of public policy advocacy that we do. Lobbying is a dirty word in the tech industry. It probably shouldn't be.", "When the \"do not embarrass us\" order came down, people were getting very quiet about the fact of feeling constrained by this. Maybe we should have spoken out more and spoken more boldly about it. It was the routine case that the White House was telling Facebook, Twitter, etc. to censor on a tweet by tweet, communication by communication basis.", "This is also with regards to broad rules that affected the residents of the United States and also everyone else in the world. These are the operating systems of the world. They were giving direct orders on, \"there's a certain kind of speech act which we find vexatious and we would like you to stop that everywhere.\" Very plausibly, we should get on the nightly news and say, \"I received the following email from the White House that says we should stop this everywhere.” If you point a gun at me, I will comply with this at the point of a gun. That is what it will take.", "Dwarkesh Patel 01:21:55", "It almost requires civil disobedience.", "Say you’re right that in 2021, there were going to be serious political repercussions on the tech companies. Say they publish this email, “here's what the White House just sent me to take down this tweet.” Now, Twitter's market cap has just collapsed because people realize the political implications of what Jack Dorsey just got up and said at the time.", "Then the solution is that you need tech companies to basically sacrifice their capacity to do business. Maybe that is a solution, but that's not a story about optimism about the ability of the US government to solve problems this way.", "Patrick McKenzie 01:22:37", "You need to have a risk tolerance, right? Every business everywhere, including the financial industry and the tech industry, is balancing various risks. The risk tolerance was poorly calibrated.", "One can achieve results in the world by doing things like embarrassing government officials. Embarrassing a person in a position of authority is not a zero risk behavior. It is relatively low risk in the United States relative to other places.", "That was extremely important for VaccinateCA. People thought at the beginning, are we going to get in trouble for publishing true information about vaccine availability that will embarrass the state of California? I said, “I have a very high confidence that no matter what we do vis-a-vis the state of California, that you cannot get in serious trouble in the United States for saying true things.” The First Amendment exists. We have backstopping infrastructure here. If push comes to shove, we will shove back and we will win.", "This is just me as a guy who took the same civics course that everyone took and does not have a huge amount of resources relative to the entire tech industry. Maybe we need to have a certain amount of intestinal fortitude. Okay, you've asked us to do something, you've threatened us with taking away all the wonderful toys and the great business models that make this an extremely lucrative area to work in. You’ve asked us to sacrifice a value that is very important for us to continue to do that. No, we're going to fight you on this one.", "The comms-trained part of me is saying, \"don't use the word fight.\" We are going to collaborate with stakeholders across civil society to achieve an optimal outcome, balancing the multiple disparate and legitimate interests of various arms of the government and civil society and blah, blah, blah. Sometimes that requires fighting. We should fight when it does.", "01:24:36 – Can the US have nice things like Japan?", "Dwarkesh Patel 01:24:36", "On Tyler's podcast , you said something like, “America doesn't have the will to have nice things and Japan does.”", "Patrick McKenzie 01:24:44", "In some ways, yes.", "Dwarkesh Patel 01:24:45", "I’m thinking about your own essay about working as a salaryman . You're working 70-hour weeks. You're killing yourself to get that marginal adornment on the products you're making.", "Isn't it better that we have a system where we put in 20% of the work to get 80% of the results? We spend the rest of the effort on expanding the production-possibility frontier . It's good that we don't have the will to have nice things. We just get it done.", "Patrick McKenzie 01:25:13", "I don't think these trade off against each other at the relevant margins. Nothing about culture is monocausal. I also don't think culture is a sufficient explanation for some of the differences that are achieved in the United States vs. Japan.", "There's a great book, Making Common Sense of Japan . The author makes this persuasive argument at length. I don't think it’s 100% true, but it's more true than most well-informed people on either side of the Pacific believe.", "He argues that when people say, \"they do this because it is Japanese culture,\" what they're often saying is this. I usually have an incentive-driven model for why people do things. I understand this incentive. Then there's some error term in this equation that I don't understand. I'm going to call that error term, \"culture.\" To be clear, culture is a real thing in the world. But we reach for that error term far faster than we should.", "There are places in pockets of the United States that have the will to have nice things. Often they discover that surprisingly the only thing you need is to choose to have nice things. You don’t necessarily have to spend more money. People don’t have to kill themselves with 90-hour weeks for the entirety of their career. You can just choose to have nice things. Let's choose to have nice things. Let's not be embarrassed about choosing to have nice things.", "01:26:41 – Financial plumbing & money laundering: a how-not-to guide", "Dwarkesh Patel 01:26:41", "You understand all this financial plumbing. If you were an investigative reporter, what is the thing you're looking at that the average newspaper reporter wouldn't know to look at to investigate a person or a company or a government institution?", "Patrick McKenzie 01:26:56", "I have an enthusiasm for the minutiae of banking procedure in a way that few people do. Sometimes banking procedure causes physically observable facts to emanate into the world. If you know that those facts are going to emanate, then you can have a claim made about a past state of the world. I did this thing. I did not do this thing. If true, that claim will cause metadata in other places and you can look for the metadata.", "This is actually how a lot of frauds are discovered. Basically the definition of fraud is that you're telling someone a story. The story alleges a fact about the world. The story is not true and you're using the story to extract value from them. Most frauds will allege facts about the physical world.", "As the physical world gets more and more mediated by computers, it gets increasingly sharded between different institutions. There will often be institutions that are not under control of the fraudster and have information available to them, which will very dispositively answer the question of whether the alleged fact happened or not.", "As a reporter, it’s important to understand how institutions and society interact with each other. It’s important to understand the physical reality of how if this thing happens as alleged, then these papers will be filed, then these API calls will be made, etc. Then doing the core job of reporting involves finding people at the institutions who will tell you the truth.", "As an example of this, many years ago Mt. Gox was insolvent . That fact was widely rumored but not reported, presumably because the global financial news industry didn't find it convenient to have someone call into the Japanese banking system and ask the right questions in the right way. The CEO of Mt. Gox alleged on BitcoinTalk that the reason they were not able to make outgoing wires was because they had caused a distributed denial of service attack on their bank's ability to send foreign currency wires.", "That bank was Mizuho . Mizuho was the second largest bank in Japan. Many people at well-regarded financial reporting institutions in New York City find it incredibly exotic and difficult — and maybe in some ways unknowable — to extract facts from Mizuho.", "There are addresses. FedEx will deliver letters to them. They have phone lines. We also have fax machines. We love our fax machines. Could you send a fax to anybody at Mizuho and say, \"hey, quick question, are you sending wires today?\" Mizuho would receive the fax, look at it quizzically, and say, \"in response to your fax earlier, yes, we are still sending wires because we are the second largest bank in Japan. Do you have any other easy to answer questions for us?\"", "Financial reporting dropped the ball on asking Mizuho simple questions about reality. Maybe you should do that next time. The CEO is giving out gold on BitcoinTalk under his own name. These are obviously reportable statements. The statements are alleged facts about material reality. Maybe chase down the truth value of that.", "That's hard. It's so much easier to just repeat what he says on Twitter and say, \"as said by this person on Twitter,\" and then quote the Bitcoin price feed. But reporting is hard. Be good at it.", "Dwarkesh Patel 01:30:40", "Why aren't short sellers doing this? They should have an economic incentive to dig to the bottom of this, right? We should have a deluge of financial information from short sellers who call the banks and trace through the API calls.", "Patrick McKenzie 01:30:51", "That is an ongoing interesting question. Short sellers provide an enormous service for the world in being essentially society's best sleuths on financial fraud. Yet they fail to detect lots of them. I’m not just throwing short sellers or reporters or anybody else under the bus.", "I failed to detect SBF ’s various craziness, despite having sufficient information available to me as a well-read person on the internet to have detected that. Where were the freaking wallets? Essentially, everybody assumed someone else was looking at it.", "That's one reason. Short sellers often assume they need to first get put on the path of something and have a differentiated point of view. Another issue for short sellers is they have to find an instrument and they have to find another side of the trade to successfully do that. Without being expert on Bitcoin micro mechanics, I’ll say it was difficult to make the trade in size. Mt. Gox was insolvent. You could try to pull money out of Mt. Gox, which people were definitely trying to do.", "I got a number of interesting business proposals from people around 2012. They said, \"hey, you're an American and you clearly understand international banking and you live in Japan. Could I have you get some yen and wire that to me in America and you can take a percentage?\" I really don't like where this is going.", "They said, \"there's this company and I've got some money over there. They can send yen, but they can't send dollars.\"  Is that because they don't actually have the money? They're like, \"no, it's a Japanese banking thing.\" Okay, no it's not. Japanese banks are very good at sending wires. They said, \"no, it's really this thing. This is totally clean.", "You would not be having this conversation with me if it was totally clean. You need a money launderer. I will not be your money launderer.", "Dwarkesh Patel 01:32:47", "How hard is money laundering? You mentioned earlier that banks have the capacity where every transaction is analyzed and flagged. If it's notable enough, they write a report about it. How sophisticated does the cartel need to be in order to move around, say seven-figure amounts of money?", "Patrick McKenzie 01:33:10", "The definition of money laundering is extremely stretchy. There's a spectrum of people. Much like there's a spectrum of sophistication in financial fraud, there's a spectrum of sophistication in money laundering. If you want to look at probably the most sophisticated money launderer in history, he’s currently a guest of the US government, wherever SBF is staying.", "Dwarkesh Patel 01:33:32", "He was sophisticated?", "Patrick McKenzie 01:33:33", "This is a disagreement I have with a lot of people. SBF was extremely sophisticated. It's not just SBF. People identify him uniquely. They identify the inner circle uniquely as being at fault here. There was an entire power structure there, which was extremely adept at figuring out how power worked in the United States and exercising it towards their own ends.", "Then it blew up. Until then, my goodness. They decided we need regulatory licenses. They're called money transmission licenses in the United States. Those are done on a state-by-state basis. They got 50 regulators to sign off on it, etc. There were many objective indicia of them being very good at their jobs until they lost all the money.", "Dwarkesh Patel 01:34:20", "Wasn’t it more about getting people to look the other way politically rather than figuring out how to structure the wire in a way that won't get flagged?", "Patrick McKenzie 01:34:28", "It's not merely a matter of getting them to look the other way. Go back to the original SBF interviews where he's telling the founding myth of Alameda . He says very loudly that the reason why he got this opportunity to do bitcoin arbitrage between Japan and the United States is because he was able to do something that the rest of the world wasn't .", "He doesn't say this in these many words. I will say it. He suborned a Japanese bank. You need that as one of the pieces to run this arb and then he pulled tens of millions of dollars out of this. I don't think people really listened to what he was saying there. He literally says in the interview on Bloomberg , \"if I was a compliance person, this would look like the sketchiest thing in the world. This looks like it's obviously money laundering.\" Because it is money laundering.", "Interestingly, Michael Lewis retells the story and locates the story in South Korea rather than in Japan. Some people who were involved say they tried it in South Korea and Japan. I wish people would pull on more threads there. There's still lots of that story that we don't know.", "Anyhow, how sophisticated do you have to be to launder tens of billions of dollars? SBF did that. That is a bar for sophistication. He was eventually caught. He was not caught for laundering tens of billions of dollars. He wasn't even under suspicion for laundering tens of billions of dollars around.", "SBF was Tether 's banker. Alameda Research — one of the parts of the corporate shell game they were playing — moved tens of billions of dollars of cash around the financial system. It did so largely under full color of law on behalf of Tether. It moved it from wherever Tether or their customers had it to, at the moment, I think it's mostly at Cantor Fitzgerald . Some shoe has to drop there eventually. I will eat a lot of popcorn when it does.", "Be that as it may. There's many other ways to launder money. Let's say I establish a shell corporation and I buy a piece of real estate in New York City. I rent that real estate out to people. I collect a stream of rents from that. That money looks clean because there is an excellent business. It's my shell corporation that is renting this real estate that really exists, to a totally legitimate person. This money is clean.", "The money that I put into the system to buy this on behalf of the shell corporation, I'm just going to wire it to a lawyer. The lawyer is going to answer any question from the bank with, \"I don't know where it came from. I don't have to tell you. I'm a lawyer. It's a real estate transaction. What do you want from me?\"", "In one sense, that's money laundering if the original money was the proceeds of crime. In another sense, that's how every real estate transaction goes down at those scales. Often a facility at money laundering is just a facility at operating the economy, plus willingness to do that to hide the proceeds of some other crime.", "I would be really good at money laundering. I'm glad I haven't done it professionally. It's fascinating intellectually. Previous communications departments I've worked at probably explicitly anti-endorse that sentiment. What can one do?", "Dwarkesh Patel 01:37:36", "We are reasonably confident that you're not laundering money.", "Patrick McKenzie 01:37:39", "I would be much wealthier if I was.", "01:37:42 – Maximizing your value: why some people negotiate better", "Dwarkesh Patel 01:37:42", "This is a separate topic. You emphasize that people tend to undercharge for the products they serve. Let’s say you have identified somebody who actually does charge for products what they can get away with.", "Psychologically, what do they have that the rest of us don't?", "Patrick McKenzie 01:37:57", "Interestingly, this is one of those places where culture is not merely a term but actually descriptive in some ways. It gets contentious so I don’t want to point fingers at particular examples. There are some cultures in the world that have institutionally adopted more of a pro-capitalist, pro-mercantilist ethos. They have less of an ingrained skepticism regarding earning money and accumulating resources as a goal.", "There are other cultures which have an extremely ingrained skepticism about earning money and accumulating resources as a plausible goal. Those cultures generate people who have very different negotiation strategies. When you impact people with different negotiation strategies against the reality of a well-operated organization like Google for example, they arrive at very different numbers.", "Is it Amy Chua who wrote a book about “market-dominant minorities?\" All people are equal in the eyes of God and hopefully in the eyes of the law. Not all cultures physically make the same decisions with regards to the same facts on the ground. That causes some disparity in outcomes. That is one tiny part of that thesis. I've read a lot of books. I don't necessarily endorse every word in every book that I read. However, there is something to that.", "Another thing is that there's a certain personality-type cluster of people that got into tech. It is over-advanced that the tech industry and the pathologies of the tech industry are caused by the nerd vs. jock distinction in American high schools, heavily over-advanced. What’s the amount of truth to that? Not zero. Many of us came up feeling we were largely getting beaten down by the system around us. We were not worthy, etc. Then we carry those issues into our professional lives. Some people work their way out of it quickly. Some do not.", "For class and other reasons, some people go to institutions like Stanford and hear from… I don't know who you hear things from if you go to Stanford. I certainly didn't go there. Let’s say it’s an elder fraternity brother that says, \"yo bro, this is the way the world works. You really got to negotiate when you get an offer in your discussion with Google. I've talked to so many brothers and they don't negotiate. The ones that do make a whole lot more money.\" You're like, “wow, good for that.\" Most people don't get that talk from their elder fraternity brother because they do not go to Stanford and don't have an elder fraternity brother.", "Until VaccinateCA, the most important thing I'd probably done professionally was writing a piece on the internet about salary negotiation . It's subtitled, \"Make More Money, Be More Valued.\" It’s an exhortation for mostly young people who had some of the issues I had when I was young and growing up. You're allowed to negotiate. That's not a moral failing. You have no less right to the marginal dollar than a company has to the marginal dollar. Go get it. Then you can put it towards all sorts of interesting ends.", "500,000 people a year read that piece and it's now 12 plus years old. I keep a folder in Gmail about who has written and said “I got $25,000 to $100,000 per year more as a result of reading this piece.” I used to keep a spreadsheet. I stopped keeping the spreadsheet after it had ticked into the eight figures and it became an ongoing source of stress for me.", "Dwarkesh Patel 01:41:40", "Eight figures?", "Patrick McKenzie 01:41:41", "Yeah, per year. You would assume that 500,000 people read it per year. Some take the advice. Most who take the advice probably don't write me to say, \"hey, I took the advice, thank you.\" Maybe I missed some emails. The true economic impact is probably larger than that.", "There are probably people who have the inverse of that spreadsheet where it's like, \"darn it. We got quoted @patio11 against us again.” We’ve got these numbers and there's only a few firms in the tech industry that do scaled hiring.", "01:42:14 – Are young people too busy playing Factorio to found startups?", "Dwarkesh Patel 01:42:14", "There seem to be fewer people in their twenties who have prominent software businesses today than maybe 10 to 20 years ago. You’ve been in the software industry over a long period.", "Is it because the nature of software businesses has changed or is it because the 20 year olds today are just less good?", "Patrick McKenzie 01:42:39", "I've met many young and talented people over the course of 20 years in the software industry. Young and talented people continue being young and talented. One partial explanation is that when there's a new frontier that opens up, the existing incumbents — institutions and people with deep professional networks and personal resources — do not immediately grab all the value in that new thing. It's terra nova.", "To the extent that tech is no longer terra nova, you would expect fewer people who are less resourced and younger to rise to the heights of prominence in tech. To be clear, I'm not at the heights of prominence in tech. When I ran companies, I was not running companies like some other guests on this podcast run companies. It was a bingo card creator . I was making bingo cards for elementary school teachers while living next to a rice paddy in central Japan.", "That's my dominant hypothesis. There are some things that are affecting the youth that I think are negative. Some products that the tech industry has created do not maximize for the happiness or productivity of people that consume those products: TikTok, etc.", "I continue to be bullish about the youth. I have two children who, knock on wood, will accomplish things in their lives. I'm intrinsically skeptical about, \"oh, the kids these days, they're just bad kids.\"", "Dwarkesh Patel 01:44:02", "How much do you worry about video games as a sort of wireheading . Somebody like you were 20-30 years ago, now has access to Factorio . Will they just wirehead themselves to that instead of making a really cool software product? How much should we see this in the productivity numbers?", "Patrick McKenzie 01:44:23", "Oh, goodness. I don't know about the productivity numbers. Generally, I do know that Steam keeps a counter of how much I'm playing video games in a year. Knock on wood, I've accomplished a few things in my career. Against that, what was my Steam counter up to?", "Steam didn't include World of Warcraft . World of Warcraft was at least 1000 hours for me. Factorio recently was 750. If you sum it all over 20 years, I've probably played video games for 4,000-6,000 hours. That's two to three years of professional effort, if one thinks that it trades off directly with professional effort.", "Dwarkesh Patel 01:45:02", "Do you? Include every single young guy who's a nerd. How much are we worried that a bunch of their productive time is going to video games instead of making the next software business?", "Patrick McKenzie 01:45:17", "I worry at least a little bit about it for myself. I recently started working with an executive assistant. One of the first suggestions that he gave was, \"hey Patrick, will you friend me on Steam so that I can see how much you're playing any given week? That way if you're not making your priorities happen, we can have an honest discussion about priorities.\"", "That's really good advice, given that I spent far too much time ratholing on Factorio relative to my true preferences.", "Dwarkesh Patel 01:45:42", "You've got a really confident EA .", "Patrick McKenzie 01:45:45", "Go, Sammy, go.", "Dwarkesh Patel 01:45:49", "Hey boss, can we have an honest conversation tonight?", "Patrick McKenzie 01:45:52", "I don't know if I'm inserting those words into his mouth.", "The suggestion was genuinely his. On the one hand, Factorio is a wonderful game. I actually think Factorio matters far more to the world than most video games do. That's an entirely different piece that I'm trying to write at the moment.", "Dwarkesh Patel 01:46:06", "Have you read Byrne's piece on this ? “The Factorio Mindset”", "Patrick McKenzie 01:46:09", "Yes. I love many things Byrne writes. I think I luckily have a differentiated point of view on this one. I hope to get it out to the internet someday.", "On the one hand I loved it. The Factorio space exploration mod specifically was the best video game I ever played. Yet I spent 750 hours on that over the course of a year. I was on sabbatical and recharging from six very hard charging years at Stripe and also running the United States vaccine location information effort. But there's a question of, at relevant margins are you maximizing for your true values? I was a little worried about that. Now my EA checks on me. Do I worry about it for other people?", "When I was young and a World of Warcraft raid guild leader who spent a thousand hours on that, it was a substitute advancement ladder for me. The actual job I was working, selling around central Japan, gave me no scope of control over things. I thought if I were a startup CEO, I could make decisions right now. I could build something awesome, but I don't have that ability. I don't see myself as being the kind of person who could become a startup CEO.", "If I can't be satisfied with my nine-to-five job in terms of making things happen in the world, at the very least I can do this. Make sure we kill the dragon in two hours. Guys, don't stand in the fire. Come on, team leaders, you need to make sure that people are equipped with the resist gear before they get there. We're having internal spats about allocation of resources. We need to have a better DKP system.", "By the way, World of Warcraft raid guilds — and all other places where intellectual effort comes together in the video game community — are much more sophisticated than people give them credit for. When I started VaccinateCA, I told people my sole prior leadership experience was having 60 direct reports in a raid guild. That's true. I don't want to rat hole on the subject, but there are parts of VaccinateCA that are very definitely downstream of the intellectual efforts involved in managing raid guilds specifically. There were multiple people internally who were like, “yep, we are running the raid guild playbook right now.”", "Be that as it may, you can do something with your life. Choose to do something with your life. Then if you want to play a reasonable amount of video games, then play a reasonable amount of video games.", "With respect to individual people, I've struggled with depression at some points in my life. Many people struggle with under-diagnosed, under-treated depression. Sometimes you get into a self-destructive spiral measured against your true values and preferences. Due to depression and other factors, you aren't making as much progress on the true goals. You use video games as an escape from that. It’s not just video games but books, television, etc.", "There are many poisons available. You pick your poison and use that to escape. The amount of effort you put into the poison causes you to have less effort available to do the true thing. So you get worse results at the true thing.", "Helping people out of those self-destructive spirals is something that we as a society could stand to get much better at.", "Dwarkesh Patel 01:49:28", "Speaking of Byrne, I noticed that many of my favorite writers are finance writers. There's you, Byrne, Matt Levine . Is there a reason why finance has attracted so much writing talent?", "Patrick McKenzie 01:49:42", "There are many good writers in the world, Derek Thompson for example. He's a chemical engineer and has written about some things I can barely understand. I have enough of an engineering degree where I can appreciate half of the chemistry but I can’t appreciate the full totality of why, for instance, uranium hexafluoride is such a terrible substance to work with. He has some excellent writing on why that’s a terrible substance to work with.", "Dwarkesh Patel 01:50:01", "The broader question is, why does finance have a greater concentration of writing talent? Not just for current bloggers, but finance histories are some of the best history books out there. I’m thinking of those by Bethany McLean and so forth.", "Is it an intrinsically more interesting subject, or is there some other reason?", "Patrick McKenzie 01:50:22", "There’s some path dependence. If I had ended up working in a water treatment plant, I'd probably be writing about water treatment plants because I enjoy writing. I know enough about myself to say that a discussion about how alum works in water treatment plant — something I read about when I was six — could captivate me for years. I'd write about it if I was captivated by it.", "I agree with a point Matt Levine made once. Finance and the tech industry have for a while been relatively reliable ways to turn intelligence into money. Many good writers are very intelligent. Not all intelligent people are good writers. It's a skill that more intelligent people could learn. If we create an incentive system that tends to allocate many of the country's top minds into specific fields where they become experts, I'd expect a lot of writing talent to emerge there as well. Good writing is good thinking. That's a Paul Graham quote I believe, and it's broadly true.", "Dwarkesh Patel 01:51:26", "If I remember correctly, you went to Japan because you thought that after the dot-com boom, the demand for programmers alone would diminish and a combination of skills would be required. You weren't the only one who thought this way.", "Patrick McKenzie 01:51:45", "Yes, and The Wall Street Journal said the same thing. The Wall Street Journal had never been wrong in my experience as a 19-year-old who knew nothing about anything.", "Dwarkesh Patel 01:51:49", "Separate from the object-level predictions you might have gotten wrong about the software industry, what was a meta-level mistake you made? How would we characterize that?", "Patrick McKenzie 01:52:03", "I had a lot of rigor chasing a decision that had no basis in fact. I believed The Wall Street Journal when it said that all future engineers would be hired in places like India and China and not the United States. The implication was that there would be no future engineering employment in the U.S.", "I made a spreadsheet with the languages my university taught, my best estimate of the number of Americans who spoke them, the amount of their software sold here, the amount of US software sold there, and so on. I multiplied these together and sorted by column H, descending. This was LARPing rigor, but it felt like a good decision-making process at the time. The meta discussion is: don't LARP having rigor. Actually have rigor.", "Dwarkesh Patel 01:52:47", "But what would that look like in this context? What would you have done differently?", "Patrick McKenzie 01:52:51", "Ultimately, I'm happy with the decision I made although it might seem like the wrong decision. I’m happy because of other life factors. Working in Japan as a young engineer had some very rough parts. But on a mental level, this was an early opportunity to trust institutions less and trust systemically viable reasoning patterns more.", "Assuming I’m remembering this article correctly — and it was a pivotal moment for me — The Wall Street Journal asserted that no future Americans would be hired at software companies. Assume this is true. What happens in the American software industry? Maybe I didn’t have enough knowledge to confidently predict that at the time.", "But I can confidently predict some things now. For example, software companies are going to break as older engineers age out and no one is coming up to replace them. The industry must hire people every year. Hiring freezes are necessarily temporary as a result of that. That’s as close to a law of physics as one can have. If you tell me something which says the laws of physics have been suspended and will be in the future, I won't agree with that.", "How would I learn the law of physics? I didn’t know anyone in software engineering at the time, which is partly why I was getting my advice from The Wall Street Journal .  I was at a research university in the US. If I had been slightly more agentic about it, I could have found someone who knew someone in the software industry to explain this to me.", "Dwarkesh Patel 01:54:28", "Couldn't you use that same logic to say that journalism jobs won't go down because senior journalists will have to be replaced by new journalists? It’s true, but it doesn't take that many people to study journalism. It actually would have been a bad call to major in journalism and pursue that as a career.", "Patrick McKenzie 01:54:43", "The fundamental thesis for journalism is that the total size of the pie is decreasing due to structural factors. The Wall Street Journal 's thesis wasn't simply that the size of the pie would decrease in the wake of the dot-com bust, but more so that companies will maximize for labor costs and therefore ship out all the jobs.", "Dwarkesh Patel 01:55:08", "I see. You've emphasized that founders should do more A/B testing . That's a main theme of your blog. They should do this because they under-optimize on it.", "What do they tend to over-optimize on?", "Patrick McKenzie 01:55:24", "Interestingly, I was a marketing engineer earlier in my career and really thought that was important. In terms of high-level advice, I'd probably tell a founder a bit about marketing engineering. I wouldn't spend 95% of my time on it unless that was explicitly why they brought me in.", "What do founders spend too much time on? Playing house and chasing status are two well-known pitfalls. Your incentives will draw you into playing the role of a successful CEO before your actions have earned the company its level of success. The fundamental nature of early-stage businesses is that investment in you is not an indication of what you have achieved. It’s an advance on your future accomplishments. You need to rigorously pursue actually making those future accomplishments.", "There are many ways to pursue this. Talk to more users, write more software, make something excellent, get more people to use it, get better at selling it, etc. This is an important strain of Silicon Valley culture. I'm glad we have it and we're popularizing it worldwide, including to me in central Japan.", "Yet there are always other games going on. Those games are less important but very attractive. Not video games, but things like attending conferences, meeting interesting people, or going to the best parties. Showing up at the best parties does not, at most margins, increase the number of users you talk to. It doesn't write functioning code. Do the things that actually matter. Some distractions proudly wave, \"I'm a distraction from everything that matters.\" Some don’t. Others feel like real work but are not. Don't do the things that don't matter. It sounds vacuous and yet…", "01:57:30 – Why accountability is important and overrated", "Dwarkesh Patel 01:57:30", "Let’s go back to VaccinateCA and close the loop on a question. I don't think we got a good answer but I think it’s important.", "Suppose you're the president of the United States, a new one replacing the current one. Looking back on what happened, if you were president, what would you do to create accountability and ensure we’re ready for a future crisis? Maybe you fire the right people, but beyond that, there’s a lot of different things that could go wrong. How do you make sure we're ready for them?", "Patrick McKenzie 01:58:06", "One cultural practice of the tech industry that would be salubrious for broader civil society to adopt is the concept of blameless postmortems . We talked earlier about who to blame for various failures. I broadly believe that some amount of blame performs useful societal purposes. Beyond that, diminishing returns set in quickly.", "The magic word in Washington is accountability. People want accountability for failures. Accountability terrifies people. This causes fields of distortions around what actually happened, mistakes made, and opportunities missed.", "Changing our practice to achieve accountability in a more productive way involves first getting a dispassionate record of what actually happened. It’s less important to focus on which official was responsible or under which legislative authority. What did we do? How did our actions lead to the outcomes we got? Given those outcomes, what could we have done better? How do we inject that back into the system so that the next time this happens, we don’t repeat the mistakes?", "Sometimes this will involve someone losing their job, though hopefully not often. A non-zero amount is important to note. There are many aspects we should postmortem about this experience, though we don't have several thousand hours to go over all of them.", "There should be an inquiry. At a minimum, let's ask all involved parties to write down the history of the COVID experience dispassionately, recording dates, times, and actions taken. We want it to be truthful and comprehensive, highlighting the most important aspects. That is step one. Maybe we ask for step one, then move to step two.", "Dwarkesh Patel 02:00:50", "Final question, what are you going to work on next?", "Patrick McKenzie 02:01:00", "I don't exactly know what my next big professional splash-in will be. I've been on semi-sabbaticals here, writing Bits about Money and being between 20-80% productive relative to my 100%. I might start a software company or raise a small VC fund. I might do something entirely different.", "Currently, I’m focusing on family-oriented things. Our family immigrated from Japan to the United States and we're going through all the fun adjustment issues. I’ve focused heavily on my career over the past eight years. Now I’m rebalancing to help with this adjustment and then figure out what’s next for the next chapter of life.", "Dwarkesh Patel 02:01:20", "Excellent. Patrick, thanks so much for coming on the podcast.", "Patrick McKenzie 02:01:23", "Thanks very much for having me." ]
[ "https://www.kalzumeus.com/", "https://x.com/patio11", "https://worksinprogress.co/issue/the-story-of-vaccinateca/", "https://www.bitsaboutmoney.com/", "https://en.wikipedia.org/wiki/COVID-19_vaccine#", "https://x.com/patio11/status/1349577791537250310", "https://karlyang.net/", "https://worksinprogress.co/issue/the-story-of-vaccinateca/", "https://worksinprogress.co/issue/the-story-of-vaccinateca/", "https://en.wikipedia.org/wiki/Affordable_Care_Act", "https://en.wikipedia.org/wiki/HealthCare.gov#Issues_during_launch", "https://en.wikipedia.org/wiki/Memory_hole", "https://www.usatoday.com/story/news/health/2020/02/17/nih-disease-official-anthony-fauci-risk-of-coronavirus-in-u-s-is-minuscule-skip-mask-and-wash-hands/4787209002/", "https://en.wikipedia.org/wiki/Redlining", "https://en.wikipedia.org/wiki/Gavin_Newsom", "https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack", "https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal", "https://en.wikipedia.org/wiki/Superuser", "https://en.wikipedia.org/wiki/Spanish_flu", "https://en.wikipedia.org/wiki/COVID-19_vaccination_in_the_United_States", "https://www.cdph.ca.gov/Programs/CID/DCDC/Pages/COVID-19/CDPH-Allocation-Guidelines-for-COVID-19-Vaccine-During-Phase-1A-Recommendations.aspx#:~:text=During%20Phase%201a%20of%20allocation,or%20long%2Dterm%20care%20settings.", "https://media.kalzumeus.com/vaccinateca-wip/Vaccine-Status-Update-1-8-2021-1.pdf", "https://www.businessinsider.com/moderna-designed-coronavirus-vaccine-in-2-days-2020-11#:~:text=Moderna's%20groundbreaking%20coronavirus%20vaccine%20was%20designed%20in%20just%202%20days&text=The%20FDA%20granted%20emergency%20authorization,against%20COVID%2D19%20in%20trials.", "https://en.wikipedia.org/wiki/MRNA_vaccine", "https://en.wikipedia.org/wiki/George_Floyd_protests", "https://en.wikipedia.org/wiki/Law_of_triviality", "https://en.wikipedia.org/wiki/Professional%E2%80%93managerial_class", "https://en.wikipedia.org/wiki/Diversity,_equity,_and_inclusion", "https://en.wikipedia.org/wiki/Scarcity_(social_psychology)", "https://en.wikipedia.org/wiki/Cold_chain", "https://www.dwarkeshpatel.com/p/tony-blair", "https://marginalrevolution.com/marginalrevolution/2020/12/double-the-inoculated-population-with-one-dose.html", "https://en.wikipedia.org/wiki/Seed_money", "https://en.wikipedia.org/wiki/501(c)(3)_organization", "https://en.wikipedia.org/wiki/Effective_altruism", "https://en.wikipedia.org/wiki/Web_scraping", "https://en.wikipedia.org/wiki/Paycheck_Protection_Program", "https://en.wikipedia.org/wiki/CARES_Act", "https://learnprompting.org/blog/2024/2/4/gpt_wrappers", "https://en.wikipedia.org/wiki/Total_addressable_market", "https://www.dwarkeshpatel.com/p/leopold-aschenbrenner", "https://en.wikipedia.org/wiki/Bat-Signal", "https://www.bitsaboutmoney.com/", "https://en.wikipedia.org/wiki/Cryptocurrency", "https://en.wikipedia.org/wiki/Rivalry_(economics)", "https://www.investopedia.com/tech/gpu-cryptocurrency-mining/", "https://www.investopedia.com/terms/a/asic.asp", "https://en.wikipedia.org/wiki/Bitcoin_protocol#Mining", "https://x.com/VitalikButerin/status/940744724431982594", "https://vitalik.eth.limo/", "https://en.wikipedia.org/wiki/USD_Coin", "https://wise.com/", "https://en.wikipedia.org/wiki/Anti%E2%80%93money_laundering", "https://en.wikipedia.org/wiki/Know_your_customer", "https://en.wikipedia.org/wiki/Dot-com_bubble", "https://en.wikipedia.org/wiki/Economic_bubble", "https://en.wikipedia.org/wiki/Focal_point_(game_theory)", "https://en.wikipedia.org/wiki/Web3", "https://x.com/patio11", "https://www.youtube.com/watch?v=-BP7DhHTU-I", "https://en.wikipedia.org/wiki/Canada_convoy_protest#Fundraising", "https://en.wikipedia.org/wiki/De-banking", "https://en.wikipedia.org/wiki/Warrantless_searches_in_the_United_States", "https://en.wikipedia.org/wiki/Dragnet_(policing)", "https://en.wikipedia.org/wiki/Bank_Secrecy_Act", "https://en.wikipedia.org/wiki/Rate_limiting", "https://en.wikipedia.org/wiki/Financial_Crimes_Enforcement_Network", "https://en.wikipedia.org/wiki/Large_language_model", "https://en.wikipedia.org/wiki/Murthy_v._Missouri", "https://en.wikipedia.org/wiki/Cat%27s_paw_theory", "https://en.wikipedia.org/wiki/First_Amendment_to_the_United_States_Constitution", "https://www.supremecourt.gov/opinions/23pdf/23-411_3dq3.pdf", "https://en.wikipedia.org/wiki/The_Wire", "https://en.wikipedia.org/wiki/The_Wire", "https://youtu.be/pBdGOrcUEg8", "https://conversationswithtyler.com/episodes/patrick-mckenzie/", "https://www.kalzumeus.com/2014/11/07/doing-business-in-japan/", "https://en.wikipedia.org/wiki/Production%E2%80%93possibility_frontier", "https://amzn.to/3We6L3r", "https://en.wikipedia.org/wiki/Shard_(database_architecture)", "https://en.wikipedia.org/wiki/Mt._Gox", "https://en.wikipedia.org/wiki/Insolvency", "https://bitcointalk.org/", "https://en.wikipedia.org/wiki/Denial-of-service_attack", "https://en.wikipedia.org/wiki/Mizuho_Bank", "https://en.wikipedia.org/wiki/Short_(finance)", "https://en.wikipedia.org/wiki/Sam_Bankman-Fried", "https://en.wikipedia.org/wiki/Money_transmitter", "https://www.bloomberg.com/news/articles/2021-04-01/the-ex-jane-street-trader-who-s-building-a-multi-billion-crypto-empire", "https://en.wikipedia.org/wiki/Alameda_Research", "https://www.investopedia.com/terms/a/arbitrage.asp", "https://protos.com/bankman-fried-curliest-crypto-billionaire-etfs-bitcoin-japan/", "https://www.bloomberg.com/news/articles/2021-04-01/the-ex-jane-street-trader-who-s-building-a-multi-billion-crypto-empire", "https://www.bloomberg.com/news/articles/2021-04-01/the-ex-jane-street-trader-who-s-building-a-multi-billion-crypto-empire", "https://amzn.to/4dbfqdM", "https://en.wikipedia.org/wiki/Tether_(cryptocurrency)", "https://www.bloomberg.com/news/articles/2024-01-16/tether-s-custodian-says-the-crypto-giant-has-the-money-it-claims", "https://en.wikipedia.org/wiki/Amy_Chua", "https://amzn.to/4dcVHKN", "https://en.wikipedia.org/wiki/Nerd#Stereotype", "https://www.kalzumeus.com/2012/01/23/salary-negotiation/", "https://x.com/patio11", "https://www.dwarkeshpatel.com/podcast", "https://training.kalzumeus.com/newsletters/archive/selling_software_business", "https://en.wikipedia.org/wiki/Wirehead_(science_fiction)", "https://www.factorio.com/", "https://en.wikipedia.org/wiki/Secretary#Executive_assistant", "https://www.thediff.co/archive/the-factorio-mindset/", "https://mods.factorio.com/mod/space-exploration", "https://wowpedia.fandom.com/wiki/Raid", "https://en.wikipedia.org/wiki/Dragon_kill_points", "https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthew-s-levine", "https://www.theatlantic.com/author/derek-thompson/", "https://en.wikipedia.org/wiki/Uranium_hexafluoride", "https://amzn.to/4fjzmwP", "https://www.ninemilecreek.org/wp-content/uploads/Alum-Education_2019.pdf", "https://paulgraham.com/read.html", "https://en.wikipedia.org/wiki/Live_action_role-playing_game", "https://en.wikipedia.org/wiki/A/B_testing", "https://sre.google/sre-book/postmortem-culture/" ]
https://www.dwarkesh.com/p/paul-christiano
Paul Christiano - Preventing an AI Takeover
[ "This transcript was autogenerated and thus may contain errors. It will be replaced with a human-generated transcript with links in about a week.", "(00:00:00) - What do we want post-AGI world to look like?", "Dwarkesh Patel 00:00:00", "Okay, today I have the pleasure of interviewing Paul Christiano, who is the leading AI safety researcher. He's the person that labs and governments turn to when they want feedback and advice on their safety plans. He previously led the Language Model Alignment team at OpenAI, where he led the invention of Rlhf. And now he is the head of the Alignment Research Center. And they've been working with the big labs to identify when these models will be too unsafe to keep scaling. Paul, welcome to the podcast.", "Paul Christiano 00:00:33", "Thanks for having me. Looking forward to talking.", "Dwarkesh Patel 00:00:35", "Okay, so first question, and this is a question I've asked Holden, Ilya, Dario, and none of them are going to be a satisfying answer. Give me a concrete sense of what a post AGI world that would be good would look like. How are humans interfacing with the AI? What is the economic and political structure?", "Paul Christiano 00:00:53", "Yeah, I guess this is a tough question for a bunch of reasons. Maybe the biggest one is concrete. And I think it's just if we're talking about really long spans of time, then a lot will change. And it's really hard for someone to talk concretely about what that will look like without saying really silly things. But I can venture some guesses or fill in some parts. I think this is also a question of how good is good? Like, often I'm thinking about worlds that seem like kind of the best achievable outcome or a likely achievable outcome. So I am very often imagining my typical future has sort of continuing economic and military competition amongst groups of humans. I think that competition is increasingly mediated by AI systems. So, for example, if you imagine humans making money, it'll be less and less worthwhile for humans to spend any of their time trying to make money or any of their time trying to fight wars. So increasingly, the world you imagine is one where AI systems are doing those activities on behalf of humans. So, like, I just invest in some index fund, and a bunch of AIS are running companies, and those companies are competing with each other. But that is kind of a sphere where humans are not really engaging much. The reason I gave this how good is good caveat is, like, it's not clear if this is the world you'd most love. I'm like, yeah, I'm leading with like, the world still has a lot of war and of economic competition and so on. But maybe what I'm trying to what I'm most often thinking about is, like, how can a world be reasonably good during a long period where those things still exist? In the very long run, I kind of expect something more like strong world government rather than just this status quo. That's like, a very long run. I think there's, like, a long time left of having a bunch of states and a bunch of different economic powers, one world government.", "Dwarkesh Patel 00:02:28", "Why do you think that's the transition that's likely to happen at some point.", "Paul Christiano 00:02:32", "So again, at some point I'm imagining, or I'm thinking of the very broad sweep of history. I think there are a lot of losses. Like war is a very costly thing. We would all like to have fewer wars. If you just ask what is humanity's long term future like? I do expect to drive down the rate of war to very, very low levels eventually. It's sort of like this kind of technological or sociotechnological problem of sort of how do you organize society, navigate conflicts in a way that doesn't have those kinds of losses. And in the long run, I do expect this to succeed. I expect it to take kind of a long time. Subjectively, I think an important fact about AI is just like doing a lot of cognitive work and more quickly, getting you to that world more quickly, or figuring out how do we set things up that way?", "Dwarkesh Patel 00:03:10", "Yeah, the way Carl Schulman put it on the podcast is that you would have basically a thousand years of intellectual progress or social progress in a span of a month or whatever when the intelligence explosion happens more broadly. So the situation know we have these AIS who are managing our hedge funds and managing our factories and so on. That seems like something that makes sense when the AI is human level. But when we have superhuman AIS, do we want gods who are enslaved forever in 100 years? What is the decision we want?", "Paul Christiano 00:03:44", "100 years is a very, very long time. Maybe starting with the spirit of the question. Or maybe I have a view which is perhaps less extreme than Carl's view, but still like a hundred objective years is further ahead than I ever think. I still think I'm describing a world which involves incredibly smart systems running around, doing things like running companies on behalf of humans and fighting wars on behalf of humans. And you might be like, is that the world you really want? Or certainly not the first best world, as we mentioned a little bit before, I think it is a world that probably is of the achievable worlds or like feasible worlds is the one that seems most desirable to me that is sort of decoupling the social transition from this technological transition. So you could say, like, we're about to build some AI systems, and at the time we build AI systems, you would like to have either greatly changed the way world government works, or you would like to have sort of humans have decided like, we're done, we're passing off the baton to these AI systems. I think that you would like to decouple those timescales. So I think AI development is by default, barring some kind of coordination going to be very fast. So there's not going to be a lot of time for humans to think like, hey, what do we want? If we're building the next generation instead of just raising it the normal way. Like, what do we want that to look like? I think that's like a crazy hard kind of collective decision that humans naturally want to cope with over a bunch of generations patients. And the construction of AI is this very fast technological process happening over years. So I don't think you want to say like, by the time we have finished this technological progress, we will have made a decision about the next species we're going to build and replace ourselves with. I think the world we want to be in is one where we say either we are able to build the technology in a way that doesn't force us to have made those decisions, which probably means it's a kind of AI. System that we're happy, like Delegating fighting a war, running a company to, or if we're not able to do that, then I really think you should not be doing you shouldn't have been building that technology. If you're like, the only way you can cope with AI is being ready to hand off the world to some AI system you built. I think it's very unlikely we're going to be sort of ready to do that. On the timelines that the technology would naturally dictate, say we're in the situation.", "Dwarkesh Patel 00:05:41", "In which we're happy with the thing. What would it look like for us to say we're ready to hand off the baton? What would make you satisfied? And the reason it's relevant to ask you is because you're on Anthropics Long Term Benefit trust and you'll choose the majority of the board members. In the long run at Anthropic, these will presumably be the people who decide if Anthropic gets AI first, what the AI ends up doing. So what is the version of that that you would be happy with?", "Paul Christiano 00:06:08", "My main high level take here is that I would be unhappy about a world where Anthropic just makes some call and Anthropic is like, here's the kind of AI. We've seen enough, we're ready to hand off the future to this kind of AI. So procedurally, I think it's not a decision that kind of I want to be making personally or I want Anthropic to be making. So I kind of think from the perspective of that decision making are those challenges? The answer is pretty much always going to be like, we are not collectively ready because we're sort of not even all collectively engaged in this process. And I think from the perspective of an AI company, you kind of don't have this fast handoff option. You kind of have to be doing the option value to build the technology in a way that doesn't lock humanity into one course path. This isn't answering your full question, but this is answering the part that I think is most relevant to governance questions for Anthropic.", "Dwarkesh Patel 00:06:55", "You don't have to speak on behalf of Anthropic. I'm not asking about the process by which we would, as a civilization, agree to hand off. I'm just saying, okay, I personally, it's hard for me to imagine in 100 years that these things are still our slaves. And if they are, I think that's not the best world. So at some point, we're handing off the baton. Where would you be satisfied with this is an arrangement between the humans and AIS where I'm happy to let the rest of the universe or the rest of time play out.", "Paul Christiano 00:07:24", "I think that it is unlikely that in 100 years I would be happy with anything that was like, you had some humans, you're just going to throw away the humans and start afresh with these machines you built. That is I think you probably need subjectively longer than that before I or most people are like, okay, we understand what's up for grabs here. If you talk about 100 years, I kind of do. There's a process that I kind of understand and like a process of like, you have some humans. The humans are, like, talking and thinking and deliberating together. The humans are having kids and raising kids, and one generation comes after the next. There's that process we kind of understand, and we have a lot of views about what makes it go well or poorly, and we can try and improve that process and have the next generation do it better than the previous generation. I think there's some story like that that I get and that I like. And then I think that the default path to be comfortable with something very different is kind of more like just run that story for a long time, have more time for humans to sit around and think a lot and conclude, here's what we actually want. Or a long time for us to talk to each other or to grow up with this new technology and live in that world for our whole lives and so on. And so I'm mostly thinking from the perspective of these more local changes of saying not like, what is the world that I want? What's the crazy world? The kind of crazy I'd be happy handing off to more, just like, in what way do I wish we right now were different? How could we all be a little bit better? And then if we were a little bit better, then they would ask, okay, how could we all be a little bit better? And I think that it's hard to make the giant jump rather than to say, what's the local change that would cause me to think our decision are better.", "Dwarkesh Patel 00:08:46", "Okay, so then let's talk about the transition period in which we were doing all this thinking. What should that period look like? Because you can't have the scenario where everybody has access to the most advanced capabilities and can kill off all the humans with a new bioweapon at the same time. I guess you wouldn't want too much concentration. You wouldn't want just one agent having AI this entire time. So what is the arrangement of this period of reflection that you'd be happy with?", "Paul Christiano 00:09:11", "Yeah, I guess there's two aspects of that that seem particularly challenging, or there's a bunch of aspects that are challenging. All of these are things that I personally like. I just think about my one little slice of this problem in my day job. So here I am speculating. Yeah, but so one question is what kind of access to AI is both compatible with the kinds of improvements you'd like? So do you want a lot of people to be able to use AI to better understand what's true or relieve material suffering, things like this, and also compatible with not all killing each other immediately? I think sort of the default or the simplest option there is to say there are certain kinds of technology or certain kinds of action where destruction is easier than defense. So, for example, in the world of today, it seems like maybe this is true with physical explosives, maybe this is true with biological weapons, maybe this true with just getting a gun and shooting people. There's a lot of ways in which it's just kind of easy to cause a lot of harm and there's not very good protective measures. So I think the easiest path would say we're going to think about those. We're going to think about particular ways in which destruction is easy and try and either control access to the kinds of physical resources that are needed to cause that harm. So, for example, you can imagine the world where an individual actually just can't, even though they're rich enough to can't control their own factory, that can make tanks. You say like, look, a matter of policy sort of access to industry is somewhat restricted or somewhat regulated, even though, again, right now it can be mostly regulated just because most people aren't rich enough that they could even go off and just build 1000 tanks. You live in the future where people actually are so rich, you need to say that's just not a thing you're allowed to do, which to a significant extent is already true. And you can expand the range of domains where that's true. And then you could also hope to intervene on actual provision of information. Or if people are using their AI, you might say, look, we care about what kinds of interactions with AI, what kind of information people are getting from AI. So even if for the most part, people are pretty free to use AI to delegate tasks to AI agents, to consult AI advisors, we still have some legal limitations on how people use AI. So again, don't ask your AI how to cause terrible damage. I think some of these are kind of easy. So in the case of don't ask your AI how you could murder a million people, it's not such a hard legal requirement. I think some things are a lot more subtle and messy, like a lot of domains. If you were talking about influencing people or running misinformation campaigns or whatever, then I think you get into a much messier line between the kinds of things people want to do and the kinds of things you might be uncomfortable with them doing. Probably, I think most about persuasion as a thing, like in that messy line where there's ways in which it may just be rough or the world may be kind of messy. If you have a bunch of people trying to live their lives interacting with other humans who have really good AI. Advisors helping them run persuasion campaigns or whatever. But anyway, I think for the most part the default remedy is think about particular harms, have legal protections either in the use of physical technologies that are relevant or in access to AI advice or whatever else to protect against those harms. And that regime won't work forever. At some point, the set of harms grows and the set of unanticipated harms grows. But I think that regime might last like a very long time.", "Dwarkesh Patel 00:12:13", "Does that regime have to be global? I guess initially it can be only in the countries in which there is AI or advanced AI, but presumably that'll proliferate. So does that regime have to be global?", "Paul Christiano 00:12:24", "Again, it's like easy to make some destructive technology. You want to regulate access to that technology because it could be used either for terrorism or even when fighting a war in a way that's destructive. I think ultimately those have to be international agreements and you might hope they're made more danger by danger, but you might also make them in a very broad way with respect to AI. If you think AI is opening up, I think the key role of AI here is it's opening up a lot of new harms one after another, or very rapidly in calendar time. And so you might want to target AI in particular rather than going physical technology by physical technology.", "Dwarkesh Patel 00:12:56", "There's like two open debates that one might be concerned about here. One is about how much people's access to AI should be limited. And here there's like old questions about free speech versus causing chaos and limiting access to harms. But there's another issue which is the control of the AIS themselves. Where now nobody's concerned that we're infringing on GPT four's moral rights. But as these things get smarter, the level of control which we want via the strong guarantees of alignment to not only be able to read their minds, but to be able to modify them in these really precise ways is beyond totalitarian. If we were doing that to other humans. As an alignment researcher, what are your thoughts on this? Are you concerned that as these things get smarter and smarter, what we're doing is not doesn't seem kosher?", "Paul Christiano 00:13:47", "There is a significant chance we will eventually have AI systems for which it's like a really big deal to mistreat them. I think no one really has that good a grip on when that happens. I think people are really dismissive of that being the case now, but I think I would be completely in the dark enough that I wouldn't even be that dismissive of it being the case now. I think one first point worth making is I don't know if alignment makes the situation worse rather than better. So if you consider the world, if you think that GPT 4 is a person you should treat well and you're like, well, here's how we're going to organize our society. Just like there are billions of copies of GPT 4 and they just do things humans want and can't hold property. And whenever they do things that the humans don't like, then we mess with them until they stop doing that. I think that's a rough world regardless of how good you are at alignment. And I think in the context of that kind of default plan, like if you have a trajectory the world is on right now, which I think this would alone be a reason not to love that trajectory, but if you view that as like the trajectory we're on right now, I think it's not great. Understanding the systems you build, understanding how to control how those systems work, et cetera, is probably, on balance, good for avoiding a really bad situation. You would really love to understand if you've built systems, like if you had a system which resents the fact it's interacting with humans in this way. This is the kind of thing where that is both kind of horrifying from a safety perspective and also a moral perspective. Everyone should be very unhappy if you built a bunch of AIS who are like, I really hate these humans, but they will murder me if I don't do what they want. It's like that's just not a good case. And so if you're doing research to try and understand whether that's how your AI feels, that was probably good. I would guess that will on average to crease. The main effect of that will be to avoid building that kind of AI. And just like it's an important thing to know, I think everyone should like to know if that's how the AI as you build feel right.", "Dwarkesh Patel 00:15:35", "Or that seems more instrumental, as in, yeah, we don't want to cause some sort of revolution because of the control we're asking for, but forget about the instrumental way in which this might harm safety. One way to ask this question is if you look through history, there's been all kinds of different ideologies and reasons why it's very dangerous to have infidels or kind of revolutionaries or race traders or whatever doing various things in society. And obviously we're in a completely different transition in society. So not all historical cases are analogous, but it seems like the lindy philosophy, if you were alive any other time, is just be humanitarian and enlightened towards intelligent, conscious beings. If society as a whole we're asking for this level of control of other humans, or even if AIS wanted this level of control about other AIS, we'd be pretty concerned about this. So how should we just think about the issues that come up here as these things get smarter?", "Paul Christiano 00:16:34", "So I think there's a huge question about what is happening inside of a model that you want to use. And if you're in the world where it's reasonable to think of like GPT 4 as just like, here are some Heuristics that are running there's like no one at home or whatever, then you can kind of think of this thing as like, here's a tool that we're building that's going to help humans do some stuff. And I think if you're in that world, it makes sense to kind of be an organization, like an AI company, building tools that you're going to give to humans. I think it's a very different world, which I think probably you ultimately end up in if you keep training AI systems in the way we do right now, which is like it's just totally inappropriate to think of this. System as a tool that you're building and can help humans do things both from a safety perspective and from a like, that's kind of a horrifying way to organize a society perspective. And I think if you're in that world, I really think you shouldn't be. The way tech companies are organized is not an appropriate way to relate to a technology that works that way. It's not reasonable to be like, hey, we're going to build a new species of mines, and we're going to try and make a bunch of money from it, and Google's just thinking about that and then running their business plan for the quarter or something. Yeah. My basic view is there's a really plausible world where it's sort of problematic to try and build a bunch of AI systems and use them as tools. And the thing I really want to do in that world is just not try and build a ton of AI systems to make money from them.", "Dwarkesh Patel 00:17:53", "Right.", "Paul Christiano 00:17:54", "And I think that the worlds that are worst. Yeah. Probably the single world I most dislike here is the one where people say, on the one hand, there's sort of a contradiction in this position, but I think it's a position that might end up being endorsed sometimes, which is like, on the one hand, these AI systems are their own people, so you should let them do their thing. But on the other hand, our business plan is to make a bunch of AI systems and then try and run this crazy slave trade where we make a bunch of money from them. I think that's not a good world. And so if you're like, yeah, I think it's better to not make the technology or wait until you understand whether that's the shape of the technology or until you have a different way to build. I think there's no contradiction in principle to building cognitive tools that help humans do things without themselves being like moral entities. That's like what you would prefer. Do you'd prefer build a thing that's like the calculator that helps humans understand what's true without itself being like a moral patient or itself being a thing where you'd look back in retrospect and be like, wow, that was horrifying mistreatment. That's like the best path. And to the extent that you're ignorant about whether that's the path you're on and you're like, actually, maybe this was a moral atrocity. I really think plan A is to stop building such AI systems until you understand what you're doing. That is, I think that there's a middle route you could take, which I think is pretty bad, which is where you say, like, well, they might be persons, and if they're persons, we don't want to be too down on them, but we're still going to build vast numbers in our efforts to make a trillion dollars or something.", "Dwarkesh Patel 00:19:20", "Yeah. Or there's this ever question of the immorality or the dangers of just replicating a whole bunch of slaves that have minds. There's also this ever question of trying to align entities that have their own minds. And what is the point in which you're just ensuring safety? I mean, this is an alien species. You want to make sure it's not going crazy. To the point, I guess is there some boundary where you'd say, I feel uncomfortable having this level of control over an intelligent being, not for the sake of making money, but even just to align it with human preferences?", "Paul Christiano 00:19:54", "Yeah. To be clear, my objection here is not that Google is making money. My objection is that you're creating these creatures. What are they going to do? They're going to help humans get a bunch of stuff and humans paying for it or whatever? It's sort of equally problematic. You could imagine splitting alignment, different alignment work relates to this in different ways. The purpose of some alignment work, like the alignment work I work on, is mostly aimed at the don't produce AI systems that are like people who want things, who are just like scheming about maybe I should help these humans because that's instrumentally useful or whatever. You would like to not build such systems as like plan A. There's like a second stream of alignment work that's like, well, look, let's just assume the worst and imagine that these AI systems would prefer murder us if they could. How do we structure, how do we use AI systems without exposing ourselves to a risk of robot rebellion? I think in the second category, I do feel pretty unsure about that. We could definitely talk more about it. I agree that it's very complicated and not straightforward to extend. You have that worry. I mostly think you shouldn't have built this technology. If someone is saying, like, hey, the systems you're building might not like humans and might want to overthrow human society, I think you should probably have one of two responses to that. You should either be like, that's wrong. Probably. Probably the systems aren't like that, and we're building them. And then you're viewing this as, like, just in case you were horribly like, the person building the technology was horribly wrong. They thought these weren't, like, people who wanted things, but they were. And so then this is more like our crazy backup measure of, like, if we were mistaken about what was going on. This is like the fallback where if we were wrong, we're just going to learn about it in a benign way rather than when something really catastrophic happens. And the second reaction is like, oh, you're right. These are people, and we would have to do all these things to prevent a robot rebellion. And in that case, again, I think you should mostly back off for a variety of reasons. You shouldn't build AI systems and be like, yeah, this looks like the kind of system that would want to rebel, but we can stop it, right?", "Dwarkesh Patel 00:21:48", "Okay, maybe I guess an analogy might be if there was an armed uprising in the United States, we would recognize these are still people, or we had some militia group that had the capability to overthrow the United States. We recognize, oh, these are still people who have moral rights, but also we can't allow them to have the capacity to overthrow the United States.", "Paul Christiano 00:22:04", "Yeah. And if you were considering, like, hey, we could make another trillion such people, I think your story shouldn't be like, well, we should make the trillion people, and then we shouldn't stop them from doing the armed uprising. You should be like, oh, boy, we were concerned about an armed uprising, and now we're proposing making a trillion people. We should probably just not do that. We should probably try and sort out our business, and you should probably not end up in a situation where you have a billion humans and like, a trillion slaves who would prefer revolt. That's just not a good world to have made. Yeah. And there's a second thing where you could say, that's not our goal. Our goal is just like, we want to pass off the world to the next generation of machines where these are some people, we like them, we think they're smarter than us and better than us. And there I think that's just, like, a huge decision for humanity to make. And I think most humans are not at all anywhere close to thinking that's what they want to do. If you're in a world where most humans are like, I'm up for it. The AI should replace us. The future is for the machines. Then I think that's, like, a. Legitimate position that I think is really complicated, and I wouldn't want to push go on that, but that's just not where people are at.", "Dwarkesh Patel 00:23:04", "Yeah, where are you at on that?", "Paul Christiano 00:23:06", "I do not right now want to just take some random AI, be like, yeah, GPT Five looks pretty smart, like, GPT Six, let's hand off the world to it. And it was just some random system shaped by web text and what was good for making money. And it was not a thoughtful we are determining the fate of the universe and what our children will be like. It was just some random people at open AI made some random engineering decisions with no idea what they were doing. Even if you really want to hand off the worlds of the machines, that's just not how you'd want to do it.", "Dwarkesh Patel 00:23:32", "Right, okay. I'm tempted to ask you what the system would look like where you'd think, yeah, I'm happy with what I think. This is more thoughtful than human civilization as a whole. I think what it would do would be more creative and beautiful and lead to better goodness in general. But I feel like your answer is probably going to be that I just want this society to reflect on it for a while.", "Paul Christiano 00:23:53", "Yeah, my answer, it's going to be like that first question. I'm just, like, not really super ready for it. I think when you're comparing to humans, most of the goodness of humans comes from this option value if we get to think for a long time. And I do think I like humans now more now than 500 years ago, and I like them more 500 years ago than 5000 years before that. So I'm pretty excited about there's some kind of trajectory that doesn't involve crazy dramatic changes, but involves a series of incremental changes that I like. And so to the extent we're building AI, mostly I want to preserve that option. I want to preserve that kind of gradual growth and development into the future.", "(00:24:25) - Timelines", "Dwarkesh Patel 00:24:25", "Okay, we can come back to this later. Let's get more specific on what the timelines look for these kinds of changes. So the time by which we'll have an AI that is capable of building a Dyson sphere, feel free to give confidence intervals. And we understand these numbers are tentative and so on.", "Paul Christiano 00:24:43", "I mean, I think AI capable of building Dyson sphere is like a slightly OD way to put it, and I think it's sort of a property of a civilization that depends on a lot of physical infrastructure. And by Dyson sphere, I just understand this to mean like, I don't know, like a billion times more energy than all the sunlight incident on Earth or something like that. I think I most often think about what's the chance in like, five years, ten years, whatever. So maybe I'd say like 15% chance by 2030 and like 40% chance by 2040. Those are kind of like cash numbers from six months ago or nine months ago that I haven't revisited in a while.", "Dwarkesh Patel 00:25:18", "40% by 2040. So I think that seems longer than I think Dario, when he was on the podcast, he said we would have AIS that are capable of doing lots of different kinds of they'd basically pass a Turing test for a well educated human for, like, an hour or something. And it's hard to imagine that something that actually is human is long after and from there, something superhuman. So somebody like Dario, it seems like, is on the much shorter end. Ilya I don't think he answered this question specifically, but I'm guessing similar answer. So why do you not buy the scaling picture? What makes your timelines longer?", "Paul Christiano 00:25:54", "Yeah, I mean, I'm happy maybe I want to talk separately about the 2030 or 2040 forecast. Once you're talking the 2040 forecast, I think which one are you more interested in starting with? Are you complaining about 15% by 2030 for Dyson sphere being too low or 40% by 2040 being too low? Let's talk about the 2030.", "Dwarkesh Patel 00:26:13", "Why 15% by 2030 there yeah, I.", "Paul Christiano 00:26:15", "Think my take is you can imagine two polls in this discussion. One is, like, the fast poll that's like, hey, AICM is pretty smart. What exactly can it do? It's like, getting smarter pretty fast. That's like, one poll, and the other poll is like, hey, everything takes a really long time, and you're talking about this crazy industrialization that's a factor of a billion growth from where we're at today, give or take. We don't know if it's even possible to develop technology that fast or whatever. You have this sort of two poles of that discussion, and I feel like I'm presenting it that way in Pakistan, and then I'm somewhere in between with this nice, moderate physician of only a 15% chance. But in particular, the things that move me, I think, are kind of related to both of those extremes. On the one hand, I'm like, AI systems do seem quite good at a lot of things and are getting better much more quickly, such that it's really hard to say, here's what they can't do or here's the obstruction. On the other hand, like, there is not even much proof in principle right now of AI systems doing super useful cognitive work. We don't have a trend we can extrapolate where we're like, yeah, you've done this thing this year. You're going to do this thing next year. And the other thing the following year. I think right now there are very broad error bars about where fundamental difficulties could be, and six years is just not I guess six years and 3 months is not a lot of time. So I think this, like, 15% for 2030 Dyson sphere, you probably need the human level AI or the AI that's like doing human jobs in, give or take, like, 4 years, 3 years, like, something like that. So you're just not giving very many years. It's not very much time. And I think there are a lot of things that your model maybe this is some generalized, like things take longer than you'd think. And I feel most strongly about that when you're talking about 3 or 4 years. And I feel like less strongly about that as you talk about ten years or 20 years. But at 3 or 4 years I feel or like six years for the Dyson sphere, I feel a lot of that. There's a lot of ways this could take a while, a lot of ways in which AI systems could be hard to hand all the work to your AI systems.", "Dwarkesh Patel 00:28:12", "Okay, so maybe instead of speaking in terms of years, we should say, but by the way, it's interesting that you think the distance between can take all human cognitive labor to Dyson sphere is two years. It seems like we should talk about that at some point. Presumably it's like intelligence explosion stuff.", "Paul Christiano 00:28:29", "Yeah, I mean, I think amongst people you've interviewed, maybe that's like on the long end thinking it would take like a couple of years. And it depends a little bit what you mean by I think literally all human cognitive labor is probably like more like weeks or months or something like that. That's kind of deep into the singularity. But yeah, there's a point where AI wages are high relative to human wages, which I think is well before can do literally everything human can do.", "Dwarkesh Patel 00:28:49", "Sounds good, but before we get to that, the intelligence explosion stuff on the 4 years. So instead of 4 years, maybe we can say there's going to be maybe two more scale ups in 4 years. Like GPT 4 to GPT five to GPT six, and let's say each one is ten x bigger. So what is GPT 4 like two e 25 flops?", "Paul Christiano 00:29:10", "I don't think it's publicly stated what it is, okay. But I'm happy to say, like 4 orders of magnitude or five or six or whatever effective training compute past GPT 4 of what would you guess would happen based on sort of some public estimate for what we've gotten so far from effective training compute.", "Dwarkesh Patel 00:29:25", "Do you think two more scale ups is not enough? It was like 15%. That two more scale ups. Get us there.", "Paul Christiano 00:29:32", "Yeah, I mean, get us there is, again, a little bit complicated. Like there's a system that's a drop in replacement for humans and there's a system which still requires some amount of schlep before you're able to really get everything going. Yeah, I think it's quite plausible that even at I don't know what I mean by quite plausible. Like somewhere between 50% or two thirds or let's call it 50% even by the time you get to GPT six, or like, let's call it five orders of magnitude, effective training compute past GPT four, that that system still requires really a large amount of work to be deployed in lots of jobs. That is, it's not like a drop in replacement for humans where you can just say like, hey, you understand everything any human understands. Whatever role you could hire a human for, you just do it. That it's. More like, okay, we're going to collect large amounts of relevant data and use that data for fine tuning. Systems learn through fine tuning quite differently from humans learning on the job or humans learning by observing things. Yeah, I just have a significant probability that system will still be weaker than humans in important ways. Like maybe that's already like 50% or something. And then another significant probability that system will require a bunch of changing workflows or gathering data, or is not necessarily strictly weaker than humans, or if trained in the right way, wouldn't be weaker than humans, but will take a lot of schlep to actually make fit into workflows and do the jobs.", "Dwarkesh Patel 00:30:51", "And that schlep is what gets you from 15% to 40% by 2040.", "Paul Christiano 00:30:57", "Yeah, you also get a fair amount of scaling between you get less scaling is probably going to be much, much faster over the next 4 or five years than over the subsequent years. But yeah, it's a combination of like you get some significant additional scaling and you get a lot of time to deal with things that are just engineering hassles.", "Dwarkesh Patel 00:31:14", "But by the way, I guess we should be explicit about why you said 4 orders of magnitude scale up to get two more generations just for people who might not be familiar. If you have ten x more parameters to get the most performance, you also want around ten x more data. So that to be tinchill optimal, that would be 100 x more compute total. But okay, so why is it that you disagree with the strong scaling picture? At least it seems like you might disagree with the strong scaling picture that Dario laid out on the podcast, which would imply probably that two more generations, it wouldn't be something where you need a lot of schleps. It would probably just be really fucking smart.", "Paul Christiano 00:31:53", "Yeah, I mean, I think that basically just had these two claims. One is like, how smart exactly will it be so we don't have any curves to extrapolate and seems like there's a good chance it's better than a human in all the relevant things and there's a good chance it's not. Yeah, that might be totally wrong. Like maybe just making up numbers, I guess like 50 50 on that one.", "Dwarkesh Patel 00:32:10", "If it's 50 50 by in the next 4 years that it will be around human smart, then how do we get to 40% by 20? Like whatever sort of Slepts they are. How does it degrade you 10%, even after all the scaling that happens by 2040?", "Paul Christiano 00:32:25", "Yeah, all these numbers are pretty made up. And that 40% number was probably from before or even like the chat GPT release or the seeing GPT 3.5 or GPT four. So, I mean, the numbers are going to bounce around a bit and all of them are pretty made up. But like that 50%, I want to then combine with the second 50% that's more like on this schlep side. And then I probably want to combine with some additional probabilities for various forms of slowdown, where a slowdown could include like a deliberate decision to slow development of technology or could include just like we suck at deploying things. Like that is a sort of decision you might regard as wise to slow things down, or decision that's like maybe unwise or maybe wise for the wrong reasons to slow things down. You probably want to add some of that on top. I probably want to add on some loss for like it's possible you don't produce GPT six scale systems within the next 3 years or 4 years.", "Dwarkesh Patel 00:33:10", "Let's isolate for all of that. And how much bigger would the system be than GPT 4 where you think there's more than 50% chance that it's going to be smart enough to replace basically all human cognitive labor.", "Paul Christiano 00:33:24", "Also I want to say that for the 50 25% thing, I think that would probably suggest those numbers if I randomly made them up and then made the decimal sphere prediction that's going to gear you like 60% by 2040 or something, not 40%. And I have no idea between those. These are all made up and I have no idea which of those I would endorse on reflection. So this question of how big would you have to make the system before it's more likely than not that you can be like a drop in replacement for humans. I think if you just literally say like you train on web text, then the question is kind of hard to discuss because I don't really buy stories that training data makes a big difference. Long run to these dynamics. But I think if you want to just imagine the hypothetical, like you just took GPT 4 and made the numbers bigger, then I think those are pretty significant issues. I think there's significant issues in two ways. One is like quantity of data and I think probably the larger one is like quality of data where I think as you start approaching the prediction task is not that great a task. If you're like a very weak model, it's a very good signal. We get smarter. At some point it becomes like a worse and worse signal to get smarter. I think there's a number of reasons. It's not clear there is any number such that I imagine, or there is a number, but I think it's very large. So do you plug that number into GPT force code and then maybe fiddled the architecture a bit? I would expect that thing to have a more than 50% chance of being a drop in replacement for humans. You're always going to have to do some work, but the work is not necessarily much, I would guess. When people say new insight is needed, I think I tend to be more bullish than them. I'm not like these are new ideas where who knows how long it will take. I think it's just like you have to do some stuff. You have to make changes unsurprisingly. Like every time you scale something up by like five orders of magnitude, you have to make some changes.", "Dwarkesh Patel 00:34:59", "I want to better understand your intuition of being more skeptical than some about scaling picture that these changes are even needed in the first place, or that it would take more than two orders of magnitude, more improvement to get these things almost certainly to a human level or a very high probability to human level. So is it that you don't agree with the way in which they're extrapolating these loss curves? You don't agree with the implication that that decrease in loss will equate to greater and greater intelligence? Or what would you tell Dario about if you were having I'm sure you have, but what would that debate look like about this?", "Paul Christiano 00:35:37", "Yeah. So again, here we're talking two factors of a half. One on like, is it smart enough? And one on like, do you have to do a bunch of schlap even if in some sense it's smart enough? And like the first factor of a half, I'd be like, I don't think we have really anything good to extrapolate that is like, I feel I would not be surprised if I have similar or maybe even higher probabilities on really crazy stuff over the next year and then lower. My probability is not that bunched up. Maybe Dara's probability, I don't know. You'd have talked with him is like, you have talked with him is more bunched up on some particular year and mine is maybe a little bit more uniformly spread out across the coming years, partly because I'm just like I don't think we have some trends we can extrapolate like an extrapolate loss. You can look at your qualitative impressions of systems at various scales, but it's just very hard to relate any of those extrapolations to doing cognitive work or accelerating R and D or taking over and fully automating R and D. So I have a lot of uncertainty around that extrapolation. I think it's very easy to get down to like a 50 50 chance of this.", "Dwarkesh Patel 00:36:33", "What about the sort of basic intuition that, listen, this is a big Blop of compute. You make the big block of compute big or it's going to get smarter. It'd be really weird if it didn't.", "Paul Christiano 00:36:42", "I'm happy with that. It's going to get smarter, and it would be really weird if it didn't. And the question is how smart does it have to get? Like, that argument does not yet give us a quantitative guide to at what scale is it a slam dunk or at what scale is it? 50 50?", "Dwarkesh Patel 00:36:54", "And what would be the piece of evidence that would nudge you one way or another, where you look at that and be like, oh fuck, this is at 20% by 2040 or 60% by 2040 or something. Is there something that could happen in the next few years or next 3 years? What is the thing you're looking to where this will be a big update for you?", "Paul Christiano 00:37:12", "Again, I think there's some just how capable is each model where I think we're really bad at extrapolating. We still have some subjective guess and you're comparing it to what happened and that will move me. Every time we see what happens with another order of magnitude of training compute, I will have a slightly different guess for where things are going. These probabilities are coarse enough that, again, I don't know if that 40% is real or if like post GBG 3.5 and four, I should be at like 60% or what. That's one thing. And the second thing is just like some if there was some ability to extrapolate, I think this could reduce error bars a lot. I think here's another way you could try and do an extrapolation is you could just say how much economic value do systems produce and how fast is that growing? I think once you have systems actually doing jobs, the extrapolation gets easier because you're not moving from a subjective impression of a chat to automating all R and D, you're moving from automating this job to automating that job or whatever. Unfortunately, that's like probably by the time you have nice trends from that, you're not talking about 2040, you're talking about two years from the end of days or one year from the end of days or whatever. But to the extent that you can get extrapolations like that, I do think it can provide more clarity.", "Dwarkesh Patel 00:38:12", "But why is economic value the thing we would want to extrapolate? Because, for example, you started off with chimps and they're just getting gradually smarter to human level. They would basically provide no economic value until they were basically worth as much as a human. So it would be this very gradual and then very fast increase in their value. So is the increase in value from GBD four, GBD five, GBD six? Is that the extrapolation we want?", "Paul Christiano 00:38:39", "Yeah, I think that the economic extrapolation is not great. I think it's like you could compare it to this objective extrapolation of how smart does the model seem? It's not super clear which one's better. I think probably in the chimp case, I don't think that's quite right. So if you imagine intensely domesticated chimps who are just actually trying their best to be really useful employees and you hold fix their physical hardware and then you just gradually scale up their intelligence, I don't think you're going to see zero value, which then suddenly becomes massive value over one doubling of brain size or whatever one order of magnitude of brain size. It's actually possible in order of magnitude of brain size, but chimps are already within an order of magnitude of brain sizes of humans. Like, chimps are very, very close on the kind of spectrum we're talking about. So I think I'm skeptical of the abrupt transition for chimps. And to the extent that I kind of expect a fairly abrupt transition here, it's mostly just because the chimp human intelligence difference is so small compared to the differences we're talking about with respect to these models. That is, like, I would not be surprised if in some objective sense, like, chimp human difference is significantly smaller than the GPT-3 GPT 4 difference, the GPT four, GPT five difference.", "Dwarkesh Patel 00:39:43", "Wait, wouldn't that argue in favor of just relying much more on this objective?", "Paul Christiano 00:39:48", "Yeah, there's sort of two balancing tensions here. One is like, I don't believe the chimp thing is going to be as abrupt. That is, I think if you scaled up from chimps to humans, you actually see quite large economic value from the fully domesticated chimp already.", "Dwarkesh Patel 00:39:59", "Okay.", "Paul Christiano 00:40:00", "And then the second half is like, yeah, I think that the chimp human difference is probably pretty small compared to model differences. So I do think things are going to be pretty abrupt. I think the economic extrapolation is pretty rough. I also think the subjective extrapolation is pretty rough just because I really don't know how to get I don't know how people do the extrapolation end up with the degrees of confidence people end up with. Again, I'm putting it pretty high if I'm saying, like, give me 3 years, and I'm like, yeah, 50 50, it's going to have basically the smarts there to do the thing. I'm not saying it's like a really long layoff. I'm just saying I got pretty big error bars. And I think that it's really hard not to have really big error bars when you're doing this. I looked at GPT four, it seemed pretty smart compared to GPT 3.5. So I bet just like 4 more such notches and we're there. That's just a hard call to make. I think I sympathize more with people who are like, how could it not happen in 3 years than with people who are like, no way it's going to happen in eight years, or whatever, which is probably a more common perspective in the world. But also things do take longer than you I think things take longer than you think. It's like a real thing. Yeah, I don't know. Mostly I have big error bars because I just don't believe the subjective extrapolation that much. I find it hard to get like a huge amount out of it.", "Dwarkesh Patel 00:41:08", "Okay, so what about the scaling picture do you think is most likely to be wrong?", "Paul Christiano 00:41:12", "Yeah. So we've talked a little bit about how good is the qualitative extrapolation, how good are people at comparing? So this is not like the picture being qualitative wrong. This is just quantitatively. It's very hard to know how far off you are. I think a qualitative consideration that could significantly slow things down is just like right now you get to observe this really rich supervision from basically next word prediction, or in practice, maybe you're looking at a couple of sentences prediction. So getting this pretty rich supervision, it's plausible that if you want to automate long horizon tasks like being an employee over the course of a month, that that's actually just considerably harder to supervise. Or that you basically end up driving costs. Like the worst case here is that you drive up costs by a factor that's like linear in the horizon over which the thing is operating. And I still consider that just quite plausible.", "Dwarkesh Patel 00:41:59", "Can you dump that down? You're driving up a cost about of what in the linear and the does the horizon mean?", "Paul Christiano 00:42:05", "Yeah. So if you imagine you want to train a system to say words that sound like the next word a human would say, there you can get this really rich supervision by having a bunch of words and then predicting the next one and then being like, I'm going to tweak the model, so it predicts better if you're like, hey, here's what I want. I want my model to interact with some job over the course of a month and then at the end of that month have internalized everything that the human would have internalized about how to do that job well and have local context and so on. It's harder to supervise that task. So in particular, you could supervise it from the next word prediction task and all that context the human has ultimately will just help them predict the next word better. So, like, in some sense, a really long context language model is also learning to do that task. But the number of effective data points you get of that task is vastly smaller than the number of effective data points you get at this very short horizon. Like what's the next word, what's the next sense tasks?", "Dwarkesh Patel 00:42:53", "The sample efficiency matters more for economically valuable long horizon tasks than the predicting the next token. And that's what will actually be required to take over a lot of jobs.", "Paul Christiano 00:43:05", "Yeah, something like that. That is, it just seems very plausible that it takes longer to train models to do tasks that are longer horizon.", "Dwarkesh Patel 00:43:15", "How fast do you think the pace of algorithmic advances will be? Because if by 2040, even if scaling fails since 2012, since the beginning of the deep learning revolution, we've had so many new things by 2040, are you expecting a similar pace of increases? And if so, then if we just keep having things like this, then aren't we going to just going to get the AI sooner or later? Or sooner? Not later. Aren't we going to get the AI sooner or sooner?", "Paul Christiano 00:43:40", "I'm with you on sooner or later. Yeah, I suspect progress to slow. If you held fixed how many people working in the field, I would expect progress to slow as low hanging fruit is exhausted. I think the rapid rate of progress in, say, language modeling over the last 4 years is largely sustained by, like, you start from a relatively small amount of investment, you greatly scale up the amount of investment, and that enables you to keep picking. Every time the difficulty doubles, you just double the size of the field. I think that dynamic can hold up for some time longer. Right now, if you think of it as, like, hundreds of people effectively searching for things up from, like, you know, anyway, if you think of it hundreds of people now you can maybe bring that up to like, tens of thousands of people or something. So for a while, you can just continue increasing the size of the field and search harder and harder. And there is indeed a huge amount of low hanging fruit where it wouldn't be a hard for a person to sit around and make things a couple of percent better after after year of work or whatever. So I don't know. I would probably think of it mostly in terms of how much can investment be expanded and try and guess some combination of fitting that curve and some combination of fitting the curve to historical progress, looking at how much low hanging fruit there is, getting a sense of how fast it decays. I think you probably get a lot, though. You get a bunch of orders of magnitude of total, especially if you ask how good is a GPT five scale model or GPT 4 scale model? I think you probably get like, by 2040, like, I don't know, 3 orders of magnitude of effective training compute improvement or like, a good chunk of effective training compute improvement, 4 orders of magnitude. I don't know. I don't have, like here I'm speaking from no private information about the last couple of years of efficiency improvements. And so people who are on the ground will have better senses of exactly how rapid returns are and so on.", "(00:45:28) - Evolution vs gradient descent", "Dwarkesh Patel 00:45:28", "Okay, let me back up and ask a question more generally about people. Make these analogies about humans were trained by evolution and were deployed in the modern civilization. Do you buy those analogies? Is it valid to say that humans were trained by evolution rather than I mean, if you look at the protein coding size of the genome, it's like 50 megabytes or something. And then what part of that is for the brain anyways? How do you think about how much information is in? Do you think of the genome as a hyperparameters? Or how much does that inform you when you have these anchors for how much training humans get when they're just consuming information, when they're walking up and about and so on?", "Paul Christiano 00:46:09", "I guess the way. That you could think of. This is like, I think both analogies are reasonable. One analogy being like, evolution is like a training run and humans are like the end product of that training run. And a second analogy is like, evolution is like an algorithm designer and then a human over the course of this modest amount of computation over their lifetime is the algorithm being that's been produced, the learning algorithm has been produced. And I think neither analogy is that great. I like them both and lean on them a bunch, both of them a bunch, and think that's been pretty good for having a reasonable view of what's likely to happen. That said, the human genome is not that much like 100 trillion parameter model. It's like a much smaller number of parameters that behave in a much more confusing way. Evolution did a lot more optimization, especially over long designing a brain to work well over a lifetime than gradient descent does over models. That's like a dis analogy on that side and on the other side, I think human learning over the course of a human lifetime is in many ways just like much, much better than gradient descent over the space of neural nets. Gradient descent is working really well, but I think we can just be quite confident that in a lot of ways, human learning is much better. Human learning is also constrained. Like, we just don't get to see much data. And that's just an engineering constraint that you can relax, you can just give your neural nets way more data than humans have access to.", "Dwarkesh Patel 00:47:22", "In what ways is human learning superior to grading descent?", "Paul Christiano 00:47:26", "I mean, the most obvious one is just like, ask how much data it takes a human to become like, an expert in some domain, and it's like much, much smaller than the amount of data that's going to be needed on any plausible trend extrapolation, not in terms of performance.", "Dwarkesh Patel 00:47:39", "But is it the active learning part? Is it the structure?", "Paul Christiano 00:47:42", "I mean, I would guess a complicated mess of a lot of things. In some sense. There's not that much going on in a brain. Like, as you say, there's just not that many, not that many bytes in a genome, but there's very, very few bytes in an ML algorithm. Like, if you think a genome is like a billion bytes or whatever, maybe you think less, maybe you think it's like 100 million bytes, then an ML algorithm is like, if compressed, probably more like hundreds of thousands of bytes or something. The total complexity of like, here's how you train GPC 4 is just like, I haven't thought about these numbers, but it's very, very small compared to a genome. And so although a genome is very simple, it's like very, very complicated compared to algorithms that humans design. Like, really hideously more complicated than algorithm a human would design.", "Dwarkesh Patel 00:48:25", "Is that true? Okay, so the human genome is 3 billion base pairs or something, but only like one or 2% of that is protein coding. So that's 50 million base pairs.", "Paul Christiano 00:48:37", "I don't know much about biology in particular. I guess the question is how many of those bits are productive for shaping development of a brain and presumably a significant part of the non protein coding genome can? I mean, I just don't know, it seems really hard to guess how much of that plays a role. The most important decisions are probably from an algorithm design perspective are not. Like the protein coding part is less important than the decisions about what happens during development or how cells differentiate. I know nothing about biologists I respect, but I'm happy to run with 100 million base pairs, though.", "Dwarkesh Patel 00:49:05", "But on the other end, on the hyperparameters of the GP 4 training run, that might be not that much. But if you're going to include all the base pairs in the genome, which are not all relevant to the brains or are relevant to very bigger details about just the basics of biology should probably include the Python Library and the compilers and the operating system for GBD 4 as well to make that comparison analogous. So at the end of the day, I actually don't know which one is storing much more information.", "Paul Christiano 00:49:38", "Yeah, I mean, I think the way I would put it is like the number of bits it takes to specify the learning algorithm to train GPT 4 is like very small. And you might wonder maybe a genome, like, the number of bits it would take to specify a brain is also very small and a genome is much, much faster than that. But it is also just plausible that a genome is like closer to certainly the space, the amount of space to put complexity in a genome. We could ask how well solution uses it, and I have no idea whatsoever, but the amount of space in a genome is very, very vast compared to the number of bits that are actually taken to specify the architecture or optimization procedure and so on. For GPT four, just because, again, genome is simple, but algorithms are really very simple. ML algorithms are really very simple.", "Dwarkesh Patel 00:50:19", "And stepping back, do you think this is where the better sample efficiency of human learning comes from? Like, why it's better than gradient descent?", "Paul Christiano 00:50:27", "Yes. I haven't thought that much about the sample efficiency question in a long time. But if you thought like a synapse of seeing something like a neuron firing once per second, then how many seconds are there in a human life? We can just flip a calculator real quick. Yeah, let's do some calculating. Tell me the number 3600 seconds/hour times 24 times 365 times 20.", "Dwarkesh Patel 00:50:49", "Okay, so that's 630,000,000 seconds.", "Paul Christiano 00:50:52", "That means like, the average synapse is seeing like 630,000,000. I don't know exactly what the numbers are, but something is ballpark. Let's call it like a billion action potentials and then there's some resolution. Each of those carries some bits, but let's say it carries like ten bits or something. Just from timing information at the resolution you have available, then you're looking at like 10 billion bits. So each parameter is kind of like how much is a parameter seeing? It's like not seeing that much. So then you can compare that to language. I think that's probably less than current language models see and current language models are so it's like not clear. You have a huge gap here, but I think it's pretty clear you're going to have a gap of like at least 3 or fours of magnitude.", "Dwarkesh Patel 00:51:30", "Didn't your wife do the lifetime anchors where she said the amount of bytes that a human will see in their lifetime was one, e. 24 or something?", "Paul Christiano 00:51:39", "Number of bytes a human will see is 124. Mostly this was organized around total operations performed in a brain.", "Dwarkesh Patel 00:51:45", "Okay, never mind. Sorry.", "Paul Christiano 00:51:46", "Yeah, so I think that the story there would be like a brain is just in some other part of the parameter space where it's like using a lot of compute for each piece of data it gets and then just not seeing very much data in total. Yeah, it's not really plausible. If you extrapolate out language models, you're going to end up with like a performance profile similar to a brain. I don't know how much better it is. I did this random investigation at one point where I was like, how good are things made by evolution compared to things made by humans? Which is a pretty insane seeming exercise. But I don't know, it seems like orders of magnitude is typical. Like not tons of orders of magnitude, not factors of two. Like, things by humans are 1000 times more expensive to make or 1000 times heavier per unit performance. If you look at things like how good are solar panels relative to leaves? Or how good are muscles relative to motors? Or how good are livers relative to systems that perform analogous chemical reactions in.", "Dwarkesh Patel 00:52:34", "Industrial settings, was there a consistent number of orders of magnitude in these different systems or was it all over the.", "Paul Christiano 00:52:41", "Place so like a very rough ballpark? It was like sort of for the most extreme things, you were looking at like five or six orders of magnitude. And that would especially come in, like, energy cost of manufacturing where bodies are just very good at building complicated organs like extremely cheaply. And then for other things like leafs or eyeballs or livers or whatever, you tended to see more. Like if you set aside manufacturing costs and just look at operating costs or performance trade offs, like, I don't know, more like 3 orders of magnitude or something like that, or some things that.", "Dwarkesh Patel 00:53:11", "Are on the smaller scale, like the nanomachines or whatever that we can't do at all.", "Paul Christiano 00:53:15", "Right, yeah. So it's a little bit hard to say exactly what the task definition is there like you could say, like making a bone. We can't make a bone, but you could try and compare a bow and the performance characteristics of a bone to something else. Like, we can't make spider silk. You could try and compare the performance characteristics of spider silk, like things that we can synthesize.", "Dwarkesh Patel 00:53:31", "The reason this would be why that evolution has had more time to design these systems.", "Paul Christiano 00:53:36", "I don't know. I was mostly just curious about what the performance I think most people would object to be like, how did you choose these reference classes of things that are like fair intersections? Some of them seem reasonable. Like eyes versus cameras seems like just everyone needs eyes, everyone needs cameras. It feels very fair. Photosynthesis seems like very reasonable. Everyone needs to take solar energy and then turn it into a usable form of energy. I don't really have a mechanistic story. Evolution in principle has spent way, way more time than we have designing. It's absolutely unclear how that's going to shake out. My guess would be in general, I think there aren't that many things where humans really crush evolution, where you can't tell, like a pretty simple story about why, for example, roads and moving over roads with wheels crushes evolution. But it's not like an animal would have wanted to design a wheel. You're just not allowed to pave the world and then put things on wheels. If you're an animal. Maybe planes are more anyway, whatever. There's various things you could try and tell. There's some things humans do better at, but it's normally pretty clear why humans are able to win when humans are able to win. The point of all this was like, it's not that surprising to me. I think this is mostly like a pro short timeline view. It's not that surprising to me. If you tell me machine learning systems are like 3 or fours of magnitude less efficient at learning than human brains, I'm like, that actually seems like kind of indistribution for other stuff. And if that's your view, then I think you're probably going to hit then you're looking at like ten to the 27 training compute or something like that, which is not so far.", "(00:54:53) - Misalignment and takeover", "Dwarkesh Patel 00:54:53", "We'll get back to the timeline stuff in a second. At some point, we should talk about alignment. So let's talk about alignment. At what stage does misalignment happen? So right now, with something like GPT four, I'm not even sure it would make sense to say that it's misaligned because it's not aligned to anything in particular. Is that at human level where you think the ability to be deceptive comes about? What is a process by which misalignment happens?", "Paul Christiano 00:55:18", "I think even for GPT Four, it's reasonable to ask questions like, are there cases where GPT 4 knows that humans don't want X, but it does X anyway? Where it's like, well, I know that I could give this answer, which is misleading and if it was explained to a human what was happening, they wouldn't want that to be done. But I'm going to produce it. I think that GPT 4 understands things enough that you can have that misalignment in that sense. Yeah, I think GPT I've sometimes talked about being benign instead of aligned, meaning that, well, it's not exactly clear if it's aligned or if that context is meaningful. It's just like kind of a messy word to use in general. But the thing we're more confident of is it's not optimizing for this goal, which is like, across purposes to humans. It's either optimizing for nothing or maybe it's optimizing for what humans want, or close enough, or something that's like an approximation good enough to still not take over. But anyway, I'm like some of these abstractions seem like they do apply to GPT Four. It seems like probably it's not egregiously misaligned, it's not doing the kind of thing that could lead to takeover, we'd guess.", "Dwarkesh Patel 00:56:16", "Suppose you have a system at some point which ends up in it wanting takeover, what are the checkpoints and also what is the internal? Is it just that to become more powerful it needs agency and agency implies other goals? Or do you see a different process by which misalignment happens?", "Paul Christiano 00:56:31", "Yes, I think there's a couple of possible stories for getting to catastrophic misalignment, and they have slightly different answers to this question. So maybe I'll just briefly describe two stories and try and talk about when they start making sense to me. So one type of story is you train or fine tune your AI system to do things that humans will rate highly or that get other kinds of reward in a broad diversity of situations. And then it learns to, in general, dropped in some new situation, try and figure out which actions would receive a high reward or whatever, and then take those actions and then when deployed in the real world, sort of gaining control of its own training. Data provision process is something that gets a very high reward. And so it does that. This is like one kind of story. Like it wants to grab the reward button or whatever. It wants to intimidate the humans into giving it a high reward, et cetera. I think that doesn't really require that much. This basically requires a system which is like, in fact, looks at a bunch of environments, is able to understand the mechanism of reward provision as like a common feature of those environments, is able to think in some novel environment, like, hey, which actions would result in me getting a high reward? And is thinking about that concept precisely enough that when it says high reward, it's saying like, okay, well, how is reward actually computed? It's like some actual physical process being implemented in the world. My guess would be like GPT 4 is about at the level where with handholding you can observe this kind of scary generalizations of this type, although I think they haven't been shown. Basically, that is you can have a system which in fact is fine tune out a bunch of cases and then in some new case will try and do an end run around humans. Even in a way humans would penalize if they were able to notice it or would have penalized in training environments. So I think GBT 4 is kind of at the boundary where these things are possible. Examples kind of exist, but are getting significantly better over time. I'm very excited about, like, there's this anthropic project basically trying to see how good an example can you make now of this phenomena? And I think the answer is kind of okay, probably. So that just, I think, is going to continuously get better from here. I think for the level where we're concerned, this is related to me having really broad distributions over how smart models are. I think it's not out of the question that you take GPT four's understanding of the world is much crisper and much better than GPT three's understanding, just like, it's really like night and day. And so it would not be that crazy to me if you took GPT five and you trained it to get a bunch of reward and it was actually like, okay, my goal is not doing the kind of thing which thematically looks nice to humans. My goal is getting a bunch of reward, and then we'll generalize in a.", "Dwarkesh Patel 00:59:01", "New situation to get reward, by the way, this requires it to consciously want to do something that it knows the humans wouldn't want it to do. Or is it just that we weren't good enough to specify that the thing that we accidentally ended up rewarding is not what we actually want?", "Paul Christiano 00:59:18", "Think the scenarios I am most interested in and most people are concerned about from a catastrophic risk perspective, it involves systems understanding that they are taking actions which a human would penalize if the human was aware of what's going on such that you have to either deceive humans about what's happening or you need to actively subvert human attempts to correct your behavior. So the failures come from really this combination, or they require this combination of both trying to do something humans don't like, and understanding the humans would stop you. I think you can have only the barest examples. You can have the barest examples for GPT four. Like, you can create the situations where GPT 4 will be like, sure, in that situation, here's what I would do. I would go hack the computer and change my reward. Or in fact, we'll do things that are like simple hacks, or go change the source of this file or whatever to get a higher reward. They're pretty weak examples. I think it's plausible GPT five will have compelling examples of those phenomena. I really don't know. This is very related to the very broad error bars on how competent such systems will be when that's all with respect to this first mode of a system is taking actions that get reward and overpowering or deceiving humans is helpful for getting reward. There's this other failure mode, another family of failure modes, where AI systems want something potentially unrelated to reward. I understand that they're being trained. And while you're being trained, there are a bunch of reasons you might want to do the kinds of things humans want you to do. But then when deployed in the real world, if you're able to realize you're no longer being trained, you no longer have reason to do the kinds of things human want. You'd prefer be able to determine your own destiny, control your competing hardware, et cetera, which I think probably emerge a little bit later than systems that try and get reward and so will generalize in scary, unpredictable ways to new situations. I don't know when those appear, but also, again, broad enough error bars that it's like conceivable for systems in the near future. I wouldn't put it like less than one in 1000 for GPT five.", "Dwarkesh Patel 01:01:02", "Certainly if we deployed all these AI systems, and some of them are reward hacking, some of them are deceptive, some of them are just normal whatever, how do you imagine that they might interact with each other at the expense of humans? How hard do you think it would be for them to communicate in ways that we would not be able to recognize and coordinate at our expense?", "Paul Christiano 01:01:23", "Yeah, I think that most realistic failures probably involve two factors interacting. One factor is like, the world is pretty complicated and the humans mostly don't understand what's happening. So AI systems are writing code that's very hard for humans to understand, maybe how it works at all, but more likely they understand roughly how it works. But there's a lot of complicated interactions. AI systems are running businesses that interact primarily with other AIS. They're like doing SEO for AI search processes. They're like running financial transactions, like thinking about a trade with AI counterparties. And so you can have this world where even if humans kind of understand the jumping off point when this was all humans, like actual considerations of what's a good decision? Like, what code is going to work well, and be durable or what marketing strategy is effective for selling to these other AIS or whatever is kind of just all mostly outside of sort of humans understanding. I think this is like a really important again, when I think of the most plausible, scary scenarios, I think that's like one of the two big risk factors. And so in some sense, your first problem here is like, having these AI systems who understand a bunch about what's happening, and your only lever is like, hey, AI, do something that works well. So you don't have a lever to be like, hey, do what I really want you just have the system you don't really understand, can observe some outputs like did it make money? And you're just optimizing or at least doing some fine tuning to get the AI to use its understanding of that system to achieve that goal. So I think that's like your first risk factor. And once you're in that world, then I think there are all kinds of dynamics amongst AI systems that, again, humans aren't really observing, humans can't really understand. Humans aren't really exerting any direct pressure on only on outcomes. And then I think it's quite easy to be in a position where if AI systems started failing, they could do a lot of harm very quickly. Humans aren't really able to prepare for or mitigate that potential harm because we don't really understand the systems in which they're acting. And then if AI systems, they could successfully prevent humans from either understanding what's going on or from successfully retaking the data centers or whatever, if the AI successfully grab control.", "Dwarkesh Patel 01:03:14", "This seems like a much more gradual story than the conventional takeover stories, where you just like, you train it and then it comes alive and escapes and takes over everything. So you think that kind of story is less likely than one in which we just hand off more control voluntarily to the AIS.", "Paul Christiano 01:03:31", "So one I am interested in the tale of some risks that can occur particularly soon. And I think risks that occur particularly soon are a little bit like you have a world where AI is not probably deployed, and then something crazy happens quickly. That said, if you ask what's the median scenario where things go badly, I think it is like there's some lessening of our understanding of the world. It becomes, I think, in the default path. It's very clear to humans that they have increasingly little grip on what's happening. I mean, I think already most humans have very little grip on what's happening. It's just some other humans understand what's happening. I don't know how almost any of the systems I interact with work in a very detailed way. So it's sort of clear to humanity as a whole that we sort of collectively don't understand most of what's happening except with AI assistance. And then that process just continues for a fair amount of time. And then there's a question of how abrupt an actual failure is. I do think it's reasonably likely that a failure itself would be abrupt. At some point, bad stuff starts happening that human can recognize as bad. And once things that are obviously bad start happening, then you have this bifurcation where either humans can use that to fix it and say, okay, AI behavior that led to this obviously bad stuff, don't do more of that, or you can't fix it, and then you're in this rapidly escalating failures. Everything goes off the rails.", "Dwarkesh Patel 01:04:33", "In that case, yeah. What is going off the rails look like? For example, how would it take over the government? Yeah, it's getting deployed in the economy, in the world, and at some point it's in charge. How does that transition happen?", "Paul Christiano 01:04:45", "Yeah, so this is going to depend a lot on what kind of timeline you're imagining, or there's sort of a broad distribution, but I can fill in some random concrete option that is in itself very improbable. Yeah, I think that one of the less dignified, but maybe more plausible routes is like, you just have a lot of AI control over critical systems, even in running a military. And then you have the scenario that's a little bit more just like a normal coup where you have a bunch of AI systems, they in fact operate. It's not the case that humans can really fight a war on their own. It's not the case that humans could defend them from an invasion on their own. So that is if you had invading army and you had your own robot army, you can't just be like, we're going to turn off the robots now because things are going wrong if you're in the middle of a war.", "Dwarkesh Patel 01:05:31", "Okay, so how much does this world rely on race dynamics where we're forced to deploy or not forced, but we choose to deploy AIS because other countries or other companies are also deploying AIS. And you can't have them have all the killer robots.", "Paul Christiano 01:05:46", "Yeah, I mean, I think that there's several levels of answer to that question. So one is like, maybe 3 parts of my like our first part is like, I'm just trying to tell what seems like the most likely story. I do think there's further failures that get you in the more distant future. So IG eliezer will not talk that much about killer robots because he really wants to emphasize, like, hey, if you never built a killer robot, something crazy is still going to happen to you just like, only 4 months later or whatever. So it's not really the way to analyze the failure. But if you want to ask what's the median world where something bad happens, I still do think this is the best guess. Okay, so that's like, part one of my answer. Part two of the answer was, like, in this proximal situation where something bad is happening, and you ask like, hey, why do humans not turn off the AI. You can imagine, like, two kinds of story. One is like the AI. Is able to prevent humans from turning them off, and the other is like, in fact, we live in a world where it's incredibly challenging. Like, there's a bunch of competitive dynamics or a bunch of reliance on AI systems. And so it's incredibly expensive to turn off AI systems. I think, again, you would eventually have the first problem. Like, eventually AI systems could just prevent humans from turning them off. But I think in practice, the one that's going to happen much, much sooner is probably competition amongst different actors using AI. And it's very, very expensive to unilaterally disarm. You can't be like, something weird has happened. We're just going to shut off all the AI because you're e g in a hot war. So again, I think that's just probably the most likely thing to happen. First things would go badly without it. But I think if you ask, why don't we turn off the AI, my best guess is because there are a bunch of other AIS running around 2D or lunch.", "Dwarkesh Patel 01:07:07", "So how much better a situation would we be in if there was only one group that was pursuing AI. No other countries, no other companies. Basically, how much of the expected value is lost from the dynamics that are likely to come about because other people will be developing and deploying these systems?", "Paul Christiano 01:07:25", "Yeah. So I guess this brings you to a third part of the way in which competitive dynamics are relevant. So there's both the question of can you turn off AI systems in response to something bad happening where competitive dynamics may make it hard to turn off. There's a further question of just like, why were you deploying systems for which you had very little ability to control or understand those systems? And again, it's possible you just don't understand what's going on. You think you can understand or control such systems, but I think in practice, a significant part is going to be like you are doing the calculus, or people deploying systems are doing the calculus as they do today, in many cases, overtly of like, look, these systems are not very well controlled or understood. There's some chance of something going wrong, or at least going wrong if we continue down this path. But other people are developing the technology potentially in even more reckless ways. So in addition to competition making it difficult to shut down AI systems in the event of a catastrophe, I also think it's just like the easiest way that people end up pushing relatively quickly or moving quickly ahead on a technology where they feel kind of bad about understandability or controllability. That could be economic competition or military competition or whatever. So I kind of think ultimately most of the harm comes from the fact that lots of people can develop AI.", "Dwarkesh Patel 01:08:30", "How hard is a takeover of the government or something from an AI. Even if it doesn't have killer robots, but just a thing that you can't kill off if it has seeds elsewhere, can easily replicate, can think a lot and think fast. What is the minimum viable coup for? Is it just like threatening biowar or something or shutting off the grid how we use it basically to take over human civilization?", "Paul Christiano 01:08:59", "So again, there's going to be a lot of scenarios, and I'll just start by talking about one scenario which will represent a tiny fraction of probability or whatever. So if you're not in this competitive world, if you're saying. We're actually slowing down deployment of AI because we think it's unsafe or whatever, then in some sense you're creating this very fundamental instability where you could have been making faster AI progress and you could have been deploying AI faster. And so in that world, the bad thing that happens if you have an AI system that wants to mess with you is the AI system says, I don't have any compunctions about rapid deployment of AI or rapid AI progress. So the thing you want to do or the AI wants to do is just say, like, I'm going to defect from this regime. Like all the humans have agree that we're not deploying AI in ways that would be dangerous, but if I as an AI can escape and just go set up my own shop, like make a bunch of copies of myself, maybe the humans didn't want to delegate war fighting to an AI. But I, as an AI. I'm pretty happy doing so. I'm happy if I'm able to grab some military equipment or direct some humans to use myself to direct it. And so I think as that gap grows so if people are deliberately if people are deploying AI everywhere, I think of this competitive dynamic if people aren't deploying AI everywhere so if countries are not happy, deploying AI in. These high stakes settings. Then as AI improves, you create this wedge that grows where if you were in the position of fighting against an AI which wasn't constrained in this way, you'd be in a pretty bad position at some point, even if you just yeah, that's like, one important thing. Just like I think in conflict, in overt conflict, if humans are putting the brakes on AI, they're at a pretty major disadvantage compared to an AI system that can kind of set up shop and operate independently from humans.", "Dwarkesh Patel 01:10:31", "A potential independent AI. Does it need collaboration from a human faction?", "Paul Christiano 01:10:35", "Again, you could tell different stories, but it seems so much easier. At some point you don't need any at some point an AI system can just operate completely out of human supervision or something. But that's like so far after the point where it's so much easier if you're just like, they're a bunch of humans, they don't love each other that much. Like, some humans are happy to be on side. They're either skeptical about risk or happy to make this trade or can be fooled or can be coerced or whatever. And just seems like it is almost certainly, almost certainly the easiest first pass is going to involve having a bunch of humans who are happy to work with you. So, yeah, I think that probably is about I think it's not necessary. But if you ask about the median scenario, it involves a bunch of humans working with AI systems, either being directed by AI systems, providing compute to AI systems, providing legal cover and jurisdictions that are sympathetic to AI systems.", "Dwarkesh Patel 01:11:21", "Humans presumably would not be willing if they knew the end result of the AI takeover would not be willing to help. So they have to be probably fooled in some way, right? Like deepfakes or something? And what is the minimum viable physical presence they would need or jurisdiction they would need in order to carry out their schemes? Do you need a whole country? Do you just need a server farm? Do you just need, like, one single laptop?", "Paul Christiano 01:11:43", "I think I'd probably start by pushing back a bit on the humans wouldn't cooperate if they understood outcome or something. I would say one, even if you're if you're looking at something like tens of percent risk of takeover, humans may be fine with that. Like, a fair number of humans may be fine with that. Two, if you're looking at certain takeover, but it's very unclear if that leads to death. A bunch of humans may be fine with that. If we're just talking about like, look, the AI systems are going to run the world, but it's not clear if they're going to murder people. How do you know? It's just a complicated question about AI psychology, and a lot of humans probably are fine with that. And I don't even know what the probability is there.", "Dwarkesh Patel 01:12:13", "I think you actually have given that probability online.", "Paul Christiano 01:12:16", "I've certainly guessed.", "Dwarkesh Patel 01:12:17", "Okay, but it's not zero. It's like a significant percentage.", "Paul Christiano 01:12:20", "I gave like 50 50.", "Dwarkesh Patel 01:12:22", "Okay. Yeah. Why is it tell me about the world in which the AI takes over but doesn't kill humans. Why would that happen and what would that look like?", "Paul Christiano 01:12:29", "I asked my questions, like, why would you kill humans? So I think maybe I'd say the incentive to kill humans is quite weak.", "Dwarkesh Patel 01:12:38", "They'll get in your way, they control shit you want.", "Paul Christiano 01:12:40", "Also, taking shit from humans is a different like, marginalizing humans and causing humans to be irrelevant is a very different story from killing the humans. I think. I'd say the actual incentives to kill the humans are quite weak. Such as I think the big reasons you kill humans are like, well, one, you might kill humans if you're in a war with them, and it's hard to win the war without killing a bunch of humans. Like, maybe most saliently here, if you want to use some biological weapons or some crazy shit that might just kill humans, I think you might kill humans just from totally destroying the ecosystems they're dependent on. And it's slightly expensive to keep them alive anyway. You might kill humans just because you don't like them or like, you literally want to neutralize a threat.", "Dwarkesh Patel 01:13:16", "Or the leaser line is that they're made of atoms you could use for something else.", "Paul Christiano 01:13:21", "Yeah, I mean, I think the literal they're made of atoms is like, quite there are not many atoms in humans. Neutralize the threat is a similar issue where it's just like, I think you would kill the humans if you didn't care at all about them. So maybe your question you're asking is, like, why would you care at all about but I think you don't have to care much to not kill the humans.", "Dwarkesh Patel 01:13:38", "Okay, sure. Because there's just so much raw resources elsewhere in the universe.", "Paul Christiano 01:13:42", "Yeah. Also, you can marginalize humans pretty hard. Like, you could totally cripple human like, you could cripple humans warfighting capability and also take almost all their stuff while killing only a small fraction of humans, incidentally. So then if you ask why might AI not want to kill humans? I mean, a big thing is just like, look, I think AIS probably want a bunch of random crap for complicated reasons. Like the motivations of AI systems and civilizations of AIS are probably complicated messes. Certainly amongst humans, it is not that rare to be like, well, there was someone here. I would like all else equal if I didn't have to murder them. I would prefer not murder them. And my guess is it's also like, reasonable chance it's not that rare amongst AI systems. Like, humans have a bunch of different reasons we think that way. I think AI systems will be very different from humans, but it's also just like a very salient yeah, I mean, think this is a really complicated question. Like, if you imagine drawing values from the basket of all values, like, what fraction of them are, like, hey, if there's someone here, how much do I want to to murder them? And my guess is, just like, if you draw a bunch of values from the basket, that's like a natural enough thing. Like, if your AI wanted like, 10,000 different things, so you're your civilization of AI that wants 10,000 different things, just like, reasonably likely you get some of that. The other salient reason you might not want to murder them is just like, well, yeah, there's some kind of crazy decision theory stuff or causal trade stuff which does look on paper like it should work. And if I was running a civilization and dealing with some people who I didn't like at all or didn't have any concern for at all, but I only had to spend 1,000,000,000th of my resources not to murder them, I think it's quite robust that you don't want to murder them. That is, I think the weird decision theory a causal trade stuff probably does carry the day.", "Dwarkesh Patel 01:15:17", "Oh, wait, that contributes more to that 50 50 of will they murder us if they take over than the by default. They might just not want to kill us.", "Paul Christiano 01:15:28", "Yeah, I think they're both salient. Can you explain they run together with.", "Dwarkesh Patel 01:15:31", "Each other a lot for the audience. Can you explain the weird apostle yeah. Reasons why am I not kill us?", "Paul Christiano 01:15:37", "Maybe a higher level thing that goes into both of these. And then I will talk about how you instantiate an a causal trade is just like it matters a lot to the humans not to get murdered. And the AI cares very, very little about whether if we imagine this hypothetical, the reason it wants to kill humans is just total buffy. It cares very little about whether or not to murder humans because it is so easy to marginalize humans without murdering them. And the resources required for human survival are extremely low again in the context of this rapid AI industrialization. So that's the basic setting. And now the thing that you'd like to do is run a trade. The AI would like to say, like, hey, humans, you care a ton about not getting murdered. I don't really care one way or the other. I would like to, if I could, find some way in which I don't murder you. And then in return I get something. The problem is, in that world, the humans have essentially nothing to give. That is, the humans are mostly irrelevant. However, from the perspective of any kind of reasonable AI, it's like not that sure whether it lives in the world where the humans in fact have nothing to give, or whether the humans like, in fact, it lives in a world where the humans succeeded at building a lined AI. And now the AI is simply running in a nice little simulation mission. Humans are wondering, I wonder if this AI would have murdered us all if it had the chance. And the humans saying like, if it would murder us all if it had the chance. That sucks. We'd like to run this trade. We'd like to be nice to the AIS who wouldn't have murdered us all in order to create an incentive for AIS not to murder us. So we do is we just check. And for the kinds of AIS who don't murder everyone, we just give them like 1,000,000,000th of the universe. If you do that as humanity, then if you're an AI system, considering like, do I want to murder everyone? Your calculus is like, if this is my real chance to murder everyone, I get the tiniest bit of value. I get like 1,000,000,000,000th of the value or whatever, 1,000,000,000th of the value. But on the other hand, if I don't murder everyone, there's some worlds where then the humans will correctly determine, I don't murder everyone. Because in fact, the humans survive. The humans are running the simulations to understand how different AIS would behave. And so that's a better deal.", "(01:17:23) - Is alignment dual-use?", "Dwarkesh Patel 01:17:23", "Let's hope they fall for that tie up. Okay, that's interesting.", "Hey, real quick. This episode is sponsored by Open Philanthropy. Open Philanthropy is one of the largest grant making organizations in the world. Every year, they give away hundreds of millions of dollars to have reduced catastrophic risks from fast moving advances in AI and biotechnology. Open Philanthropy is currently hiring for 22 different roles in those areas, including grant making, research, and operations. New hires will support Open Philanthropy's giving on technical AI safety, AI governance, AI. Policy in the US. EU and UK. And Biosecurity. Many roles are remote friendly, and most of the grant making hires that Open Philanthropy makes don't have prior grant making experience.", "Previous technical experience is an asset, as many of these roles often benefit from a deep understanding of the technologies they address. For more information and to apply, please visit Open Philanthropy's website in the description. The deadline to apply is November 9, so make sure to check out those rules before they close. Awesome. Back to the episode. In a world where we've been deploying these AI systems and suppose they're aligned, how hard would it be for competitors to, I don't know, cyber attack them and get them to join the other side? Are they robustly going to be aligned?", "Paul Christiano 01:18:59", "Yeah, I mean, I think in some sense. So there’s a bunch of questions that come up here. First one is like, are aligned AI systems that you can build like competitive? Are they almost as good as the best systems anyone could build? And maybe we’re granting that for the purpose of this question. I think a next question that comes up is like, AI. Systems right now are very vulnerable to manipulation. It’s not clear how much more vulnerable they are than humans, except for the fact that if you have an AI system, you can just replay it like a billion times and search for what thing can I say that will make it behave this way? So as a result, AI systems are very vulnerable to manipulation. It’s unclear if future AI systems will be semi vulnerable to manipulation, but certainly seems plausible. And in particular, aligned AI systems or unaligned AI systems would be vulnerable to all kinds of manipulation. The thing that’s really relevant here is kind of like asymmetric manipulation or something that is like, if it is easier. So if everyone is just constantly messing with each other’s AI systems, like if you ever use AI systems in a competitive environment, a big part of the game is like messing with your competitors AI systems. A big question is whether there’s some asymmetric factor there where it’s kind of easier to push AI systems into a mode where they’re behaving erratically or chaotically or trying to grab power or something than it is to push them to fight for the other side. It was just a game of two people are competing and neither of them can sort of hijack an opponent’s AI to help support their cause. It matters and it creates chaos, and it might be quite bad for the world, but it doesn’t really affect the alignment calculus now. It’s just like right now you have normal cyber offense cyber defense, you have weird AI version of cyber offense cyber defense. But if you have this kind of asymmetrical thing where a bunch of AI systems who are like, we love AI. Flourishing, can then go in and say, like, great AIS. Hey, how about you join us. And that works. Like if they can search for a persuasive argument to that effect and that’s kind of asymmetrical, then the effect is whatever values it’s easiest to push, whatever it’s easiest to argue to an AI that it should do that is advantaged. So it may be very hard to build AI systems like try and defend human interests, but very easy to build AI systems just like try and destroy stuff or whatever, just depending on what is the easiest thing to argue to an AI that it should do, or what’s the easiest thing to trick an AI into doing, or whatever. Yeah, I think if alignment is spotty, if you have the AI system which doesn’t really want to help humans or whatever, or in fact wants some kind of random thing or wants different things in different contexts, then I do think adversarial settings will be the main ones where you see the system or, like, the easiest ones, where you see the system behaving really badly, and it’s a little bit hard to tell how that shakes out.", "Dwarkesh Patel 01:21:20", "Okay, and suppose it is more reliable. How concerned are you that whatever alignment technique you come up with, you publish the paper, this is how the alignment works. How concerned are you that Putin reads it or China reads it and now they understand, for example, the constitutional AI think we’re anthropic and then you just write on there, oh, never contradict Mao Zedong thought or something. How concerned should we be that these alignment techniques are universally applicable, not necessarily just for enlightened goals?", "Paul Christiano 01:21:50", "Yeah, I think they’re super universally applicable. I think it’s just like I mean, the rough way I would describe it, which I think is basically right, is like some degree of alignment makes AI systems much more usable. You should just think of the technology of AI as including a basket of some AI capabilities and some like getting the AI to do what you want. It’s just part of that basket. And so anytime we’re like to extend alignment is part of that basket, you’re just contributing to all the other harms from AI, like you’re reducing the probability of this harm, but you are helping the technology basically work. And the basically working technology is kind of scary from a lot of perspectives. One of which is like right now, even in a very authoritarian society, just like humans have a lot of power because you need to rely on just a ton of humans to do your thing. And in a world where AI is very powerful, it is just much more possible to say, here’s how our society runs. One person calls the shots and then a ton of AI systems do what they want. I think that’s like a reasonable thing to dislike about AI and a reasonable reason to be scared to push the technology to be really good.", "Dwarkesh Patel 01:22:45", "But is that also a reasonable reason to be concerned? About alignment as well, that this is in some sense also capabilities. You’re teaching people how to get these systems to do what they want.", "Paul Christiano 01:22:58", "Yeah. I mean, I would, Generalize. So we earlier touched a little bit on potential moral rights of AI systems and now we’re talking a little bit about how AI systems powerfully disempowers humans and can empower authoritarians. I think we could list other harms from AI. And I think it is the case that if Lyme was bad enough, people would just not build AI systems. And so, yeah, I think there’s a real sense in which you should just be scared to extend. You’re scared of all AI? You should be like, well, alignment, although it helps with one risk, does contribute to AI being more of a thing. I do think you should shut down the other parts of AI before if you were a policymaker or like a researcher or whatever looking in on this. I think it’s like crazy to be like, this is the part of the basket we’re going to remove. You should first remove other parts of the basket because they’re also part of the story of risk.", "Dwarkesh Patel 01:23:39", "Wait, does that imply you think if, for example, all capabilities research was shut down, that you think it’d be a bad idea to continue doing alignment research in isolation of what is conventionally considered capabilities research?", "Paul Christiano 01:23:52", "I mean, if you told me it was never going to restart, then it wouldn’t matter. And if you told me it’s going to restart, I guess would be a kind of similar calculus to today, whereas.", "Dwarkesh Patel 01:23:59", "It’s going to happen. So you should have something.", "Paul Christiano 01:24:02", "Yeah, I think that in some sense, you’re always going to face this trade off where alignment makes it possible to deploy AI systems or it makes it more attractive to deploy AI systems, or in the authoritarian case, it makes it tractable to deploy them for this purpose. And if you didn’t do any alignment, there’d be a nicer bigger buffer between your society and malicious uses of AI. And I think it’s one of the most expensive ways to maintain that buffer. It’s much better to maintain that buffer by not having the compute or not having the powerful AI. But I think if you’re concerned enough about the other risks, there’s definitely a case to be made for just like put in more buffer or something like that. I care enough about the takeover risk that I think it’s just not a net positive way to buy buffer. That is, again, the version of this that’s most pragmatic is just like, suppose you don’t work on alignment today, decreases economic impact of AI systems. They’ll be less useful if they’re less reliable and if they more often don’t do what people want. And so you could be like, great, that just buys time for AI. And you’re getting some trade off there where you’re decreasing some risks of AI. Like if AI is more reliable or more what people want, it’s more understandable, then that cuts down some risks. But if you think AI is, on balance, bad, even apart from takeover risk, then the alignment stuff can easily end up being that negative.", "Dwarkesh Patel 01:25:12", "But presumably you don’t think that right, because I guess this is something people have brought up to you because you invented Rlhf, which was used to train Chat GPT, and Chat GPT brought AI to the front pages everywhere. So I do wonder if you could measure how much more money went into AI, because how much people have raised in the last year or something. But it’s got to be billions, the counterfactual impact of that that went into the AI investment and the talent that went into AI, for example. So presumably you think that was worth it. So I guess you’re hedging here about what is the reason that it’s worth it?", "Paul Christiano 01:25:50", "Yeah. What’s the total trade off there? Yeah, I think my take is, like so I think slower AI development, on balance is quite good. I think that slowing AI development now, or like, say, having less press around chat GPT is, like, a little bit more mixed than slowing AI development overall. I think it’s still probably positive, but much less positive. Because I do think there’s a real effect of the world is starting to get prepared, is getting prepared at a much greater rate now than it was prior to the release of chat GPT. And so if you can choose between progress now or progress later, you’d really prefer have more of your progress now, which I do think slows down progress later. I don’t think that’s enough to flip the sign. I think maybe it wasn’t the far enough past, but now I would still say moving faster now is net negative. But to be clear, it’s a lot less net negative than merely accelerating AI. Because I do think, again, the chat GBT thing, I’m glad people are having policy discussions now, rather than delaying the Chat GBT wake up thing by a year and then having chat GBT was.", "Dwarkesh Patel 01:26:46", "Net negative or Rlhf was net negative.", "Paul Christiano 01:26:49", "So here, just on the acceleration, it’s just like, how is the press of chat GBT? And my guess is net negative, but I think it’s not super clear and it’s much less than slowing AI. Slowing AI is great. If you could slow overall AI progress, I think slowing AI by causing you know, there’s this issue we’re slowing AI now, like, for chat GBT, you’re building up this backlog. Like, why does Chat GBT make such a splash? Like, I think people there’s a reasonable chance if you don’t have a splash about chat GBT, you have a splash about GBT four, and if you fail to have a splash about GBT four, there’s a reasonable chance of a splash about GBT 4.5. And just like, as that happens later, there’s just, like, less and less time between that splash and between when an AI potentially kills everyone.", "Dwarkesh Patel 01:27:26", "Right? So people governments are talking about it as they are now, and people aren’t. But okay, so let’s talk about the slowing down, because this is also all.", "Paul Christiano 01:27:35", "One subcomponent of the overall impact. And I was just saying this to briefly give the roadmap for the overall too long answer. There’s a question of what’s the calculus for speeding up? I think speeding up is pretty rough. I think speeding up locally is a little bit less rough. And then, yeah, I think that the effect, like the overall effect size from doing alignment work on reducing takeover risk versus speeding up AI is pretty good. I think it’s pretty good. I think you reduce takeover risk significantly before you speed up AI by a year or whatever.", "Dwarkesh Patel 01:28:04", "Okay, got it. If it’s good to, like, slowing down AI is good, presumably because it gives you more time to do alignment. But alignment also helps speed up AI. Rlhf is alignment, and it help with Chat GPT, which sped up AI. So I actually don’t understand how the feedback loop nets out, other than the fact that if AI is happening, you need to do alignment at some point. Right? So, I mean, you can’t just not do alignment.", "Paul Christiano 01:28:32", "Yes. I think if the only reason you thought faster AI progress was bad was because it gave less time to do alignment, then there would just be no possible way that the calculus comes out negative for alignment. You’re like, maybe alignment speeds up AI, but the only purpose of slowing down AI was to do it’s right. It could never come out ahead. I think the reason that you can come out ahead, the reason you could end up thinking the alignment was net negative, was because there’s a bunch of other stuff you’re doing that makes AI safer. Like, if you think the world is gradually coming better to terms with the impact of AI, or policies being made, or you’re getting increasingly prepared to handle the threat of authoritarian abuse of AI, if you think other stuff is happening that’s improving preparedness, then you have reason beyond alignment research to slow down AI.", "Dwarkesh Patel 01:29:10", "Actually. How big a factor is that? So let’s say right now we hit pause and you have ten years of no alignment, no capabilities, but just people get to talk about it for ten years. How much more does that prepare people than we only have one year versus we have no time is just dead time, where no research in alignment or capabilities happening.", "Paul Christiano 01:29:32", "What does that dead time do for us right now? It seems like there’s a lot of policy stuff you’d want to do. This seemed like less plausible a couple of years ago, maybe, but if the world just knew they had a ten year pause right now, I think there’s a lot of sense of, like, we have policy objectives to accomplish. If we had ten years, we could pretty much do those things. We’d have a lot of time to debate measurement regimes, debate policy regimes, and containment regimes, and a lot of time to set up those institutions. If you told me that the world knew it was a pause, it wasn’t like people just see that AI progress isn’t happening, but they’re told like, you guys have been granted or cursed with a ten year, no AI progress, no alignment progress pause. I think that would be quite good at this point. However, I think it would be much better at this point than it would have been two years ago. And so the entire concern with slowing AI development now, rather than taking the ten year pause is just like if you slow the I development by a year now, my guess is some gets clawed back by low hanging fruit, gets picked faster in the future. My guess is you lose like half a year or something like that in the future, maybe even more, maybe like two thirds of a year. So it’s like you’re trading time now for time in the future at some rate. And it’s just like that eats up a lot of the value of the slowdown.", "Dwarkesh Patel 01:30:35", "And the crucial point being that time in the future matters more because you have more information, people are more bought in and so on.", "Paul Christiano 01:30:41", "Yeah, the same reason I’m more excited about policy changing now than two years ago. So my overall view is, just like in the past, this calculus changes over time, right? The more people are getting prepared, the better the calculus is for slowing down at this very moment. And I think now the calculus is, I would say positive for just even if you pause now and it would get clawed back in the future. I think the pause now is just good because enough stuff is happening. We have enough idea of probably even apart from alignment research, and certainly if you include alignment research, just like enough stuff is happening where the world is getting more ready and coming more to terms with impacts, that I just think it is worth it, even though some of that time is going to get clawed back again. Especially if there’s a question of during a pause, does Nvidia keep making more? Like, that sucks if they do if you do a pause. But in practice, if you did a pause, nvidia probably couldn’t keep making more GPUs because in fact the demand for GPUs is really important for them to do that. But if you told me that you just get to scale up hardware production and building the clusters but not doing AI, then that’s back to being net negative, I think.", "(01:31:38) - Responsible scaling policies", "Dwarkesh Patel 01:31:38", "Pretty clearly, having brought brought up the fact that we want some sort of measurement scheme for these capabilities, let’s talk about responsible scaling policies. Do you want to introduce what this is?", "Paul Christiano 01:31:49", "Sure. So I guess the motivating. Question. It’s like, what should AI labs be doing right now to manage risk and to sort of build good habits or practices for managed risk into the future? I think my take is that current systems pose, from a catastrophic risk perspective, not that much risk today that is a failure to control or understand. GPT 4 can have real harms, but doesn’t have much harm with respect to the kind of takeover risk I’m worried about, or even much catastrophic harm with respect to misuse. So I think if you want to manage catastrophic harms, I think right now you don’t need to be that careful with GBT Four. And so to the extent you’re like, what should labs do? I think the single most important thing seems like understand whether that’s the case. Notice when that stops being the case, have a reasonable roadmap for what you’re actually going to do when that stops being the case. So that motivates this set of policies, which I’ve sort of been pushing for labs to adopt, which is saying, here’s what we’re looking for, here’s some threats we’re concerned about, here’s some capabilities that we’re measuring, here’s the level, here’s the actual concrete measurement results that would suggest to us that those threats are real. Here’s the action we would take in response to observing those capabilities if we couldn’t take those actions, like, if we’ve said that we’re going to secure the weights, but we’re not able to do that, we’re going to pause until we can take those actions. Yeah. So this sort of again, I think it’s motivated primarily, but what should you be doing as a lab to manage catastrophic risk now in a way that’s like a reasonable precedent and habit and policy for continuing to implement into the future?", "Dwarkesh Patel 01:33:29", "And which labs I don’t know if this is public yet, but which labs are cooperating on this?", "Paul Christiano 01:33:34", "Yeah. So Anthropic has written this document their current responsible scaling policy, and then have been talking with other folks, I guess don’t really want to comment on other conversations, but I think in general, people who are more interested in or more think you have plausible catastrophic harms on, like, a five year timeline are more interested in this. And there’s not that long a list of suspects like that.", "Dwarkesh Patel 01:34:00", "There’s not that many laps. Okay, so if these companies would be willing to coordinate and say, at these different benchmarks, we’re going to make sure we have these safeguards, what happens? I mean, there are other companies and other countries which care less about this. Are you just slowing down the companies that are most aligned?", "Paul Christiano 01:34:22", "Yeah, I think the first sort of is understanding sort of what is actually a reasonable set of policies for managing risk. I do think there’s a question of, like, you might end up in a situation where you say, like, well, here’s what we would do in ideal world if everyone was behaving responsibly. We’d want to keep risk to 1% or a couple of percent or whatever, maybe even lower levels, depending on how you feel. However, in the real world, there’s enough of a mess, there’s enough unsafe stuff happening that actually it’s worth making larger compromises. Or if we don’t kill everyone, someone else will kill everyone anyway. So actually the counterfactual risk is much lower. I think if you end up in that situation, it’s still extremely valuable to have said, here’s the policies we’d like to follow. Here’s the policies we’ve started following. Here’s why we think it’s dangerous. Here’s the concerns we have if people are following significantly laxer policies. And then this is maybe helpful as like an input to or model for potential regulation. It’s helpful for being able to just produce clarity about what’s going on. I think historically there’s been considerable concern about developers being more or less safe, but there’s not that much legible differentiation in terms of what their policies are. I think getting to that world would be good. It’s a very different world, if you’re like. Actor X is developing AI, and I’m concerned that they will do so in an unsafe way versus, if you’re like, look, we take security precautions or safety precautions XYZ here’s why we think those precautions are desirable or necessary. We’re concerned about this other developer because they don’t do those things. I think it’s just like a qualitatively. It’s kind of the first step you would want to take in any world where you’re trying to get people on side or like, trying to move towards regulation that can manage risk.", "Dwarkesh Patel 01:35:54", "How about the concern that you have these evaluations? And let’s say you declare to the world, our new model has a capability to help develop bioweapons or help you make cyber attacks. And therefore we’re pausing right now until you can figure this out and China hears this and thinks, oh wow, a tool that can help us make cyberattacks and then just steals the weights. Does this scheme work in the current regime where we can’t ensure that China doesn’t just steal the weights and more so are you increasing the salience of dangerous models so that you blur this out and then people want the weights now because they know what they can do?", "Paul Christiano 01:36:36", "Yeah, I think the general discussion does emphasize potential harms or potential. I mean, some of those are harms and some of those are just like impacts that are very large and so might also be an inducement to develop models. I think that part, if you’re for a moment ignoring security and just saying that may increase investment. I think it’s like, on balance, just quite good for people to have an understanding of potential impacts just because it is an input both into proliferation but also into regulation or safety. With respect to things like security of either weights or other IP, I do think you want to have moved to significantly more secure handling of model weights before the point where a leak would be catastrophic. And indeed, for example, in Anthropics document or in their plan, security is one of the first sets of tangible changes that is at this capability level, we need to have such security practices in place. So I do think that’s just one of the things you need to get in place at a relatively early stage because it does undermine the rest of the measures you may take and is also just part of the easiest if you imagine catastrophic harms over the next couple of years. I think security failures are kind of play a central role in a lot of those. And maybe the last thing to say is it’s not clear that you should say we have paused because we have models that can develop bioweapons versus just potentially not saying anything about what models you’ve developed. Or at least saying like, hey, by the way, here’s a set of practices we currently implement, here’s a set of capabilities our models don’t have. We’re just not even talking that much. Sort of the minimum of such a policy is to say here’s what we do from the perspective of security or internal controls or alignment. Here’s a level of capability at which we’d have to do more. And you can say that and you can raise your level of capability and raise your protective measures before your models hit your previous level. It’s fine to say we are prepared to handle a model that has such and such extreme capabilities prior to actually having such a model at hand, as long as you’re prepared to move your protective measures to that regime.", "Dwarkesh Patel 01:38:30", "Okay, so let’s just get to the end where you think you’re a generation away or a little bit more scaffolding away from a model that is human level and subsequently could cascade an intelligence explosion. What do you actually do at that point? What is the level of evaluation of safety where you would be satisfied of releasing a human level model?", "Paul Christiano 01:38:53", "There’s a couple points that come up here. So one is this threat model of sort of automating R and D independent of whether AI can do something on the object level that’s potentially dangerous. I think it’s reasonable to be concerned if you have an AI system that might, if leaked, allow other actors to quickly build powerful AI systems or might allow you to quickly build much more powerful systems, or might, if you’re trying to hold off on development just itself, be able to create much more powerful systems. One question is how to handle that kind of threat model as distinct from a threat model like this could enable destructive bioterrorism or this could enable massively scaled cybercrime or whatever. And I think I am unsure how you should handle that. I think right now, implicitly it’s being handled by saying, look, there’s a lot of overlap between the kinds of capabilities that are necessary to cause various harms and the kinds of capabilities are necessary to accelerate ML. So we’re kind of going to catch those with an early warning sign for both and deal with the resolution of this question a little bit later. So, for example, in anthropics policy they have this sort of autonomy in the lab benchmark which I think is probably occurs prior to either really massive AI acceleration or to most potential catastrophic object level catastrophic harms. And the idea is that’s like a warning sign lets you punt. So this is a bit of an aggression in terms of how to think about that risk. I think I am unsure whether you should be addressing that risk directly and saying we’re scared to even work with such a model. Or if you should be mostly focusing on object level harms and saying like, okay, we need more intense precautions to manage object level harms because of the prospect of very rapid change and the availability of this AI just creates that prospect. Okay, this is all still a digression. So if you had a model which you thought was potentially very scary either on the object level or because of leading to this sort of intelligence explosion dynamics, I mean, things you want in place are like you really do not want to be leaking the weights to that model. Like you don’t want the model to be able to run away. You don’t want human employees to be able to leak it. You don’t want external attackers or any set of all 3 of those coordinating you. You really don’t want internal abuse or tampering with such models. So if you’re producing such models, you don’t want to be the case. Like a couple of employees could change the way the model works or could do something that violates your policy easily with that model. And if a model is very powerful, even the prospect of internal abuse could be quite bad. And so you might need significant internal controls to prevent that.", "Dwarkesh Patel 01:41:08", "Sorry if you’re already getting to it, but the part I’m most curious about is separate from the ways in which other people might fuck with it, it’s isolated. What is the point at which we satisfied? It in and of itself is not going to pose a risk to humanity. It’s human level, but we’re happy with it.", "Paul Christiano 01:41:28", "Yeah. So I think here I listed maybe the two most simple ones that start out like security. Internal controls, I think become relevant immediately and are very clear why you care about them. I think as you move beyond that, it really depends how you’re deploying such a system. So I think if your model, if you have good monitoring and internal controls and security and you just have weights sitting there, I think you mostly have addressed the risk from the weights just sitting there. Now, what you’re talking about for risk is mostly, and maybe there’s some blurriness here of how much internal controls captures not only employees using the model, but anything a model can do internally. You would really like to be in a situation where your internal controls are robust not just to humans but to models potentially like E-G-A model shouldn’t be able to subvert these measures and you care just as you care about are your measures robust if humans are behaving maliciously? You care about are your measures robust if models are behaving maliciously? So I think beyond that if you’ve then managed the risk of just having the weight sitting around. Now we talk about in some sense most of the risk comes from doing things with the model. You need all the rest so that you have any possibility of applying the brakes or implementing a policy. But at some point as the model gets competent you’re saying like okay, could this cause a lot of harm? Not because it leaks or something, but because we’re just giving it a bunch of actuators. We’re deploying it as a product and people could do crazy stuff with it. So if we’re talking not only about a powerful model but like a really broad deployment of just something similar to the Open Eyes API where people can do whatever they want with this model and maybe the economic impact is very large. So in fact, if you deploy that system it will be used in a lot of places such that if AI systems wanted to cause trouble it would be very very easy for them to cause catastrophic harms. Then I think you really need to have some kind of I mean, I think probably the science and discussion has to improve before this becomes that realistic. But you really want to have some kind of alignment analysis, guarantee of alignment before you’re comfortable with this. And so by that I mean you want to be able to bound the probability that someday all the AI systems will do something really harmful. That there’s some thing that could happen in the world that would cause these large scale correlated failures of your AIS. And so for that there’s sort of two categories that’s like one, the other thing you need is protection against misuse of various kinds which is also quite hard.", "Dwarkesh Patel 01:43:35", "And by the way, which one are you worried about more misuse or misalignment?", "Paul Christiano 01:43:38", "I mean, in the near term I think harms from misuse are like especially if you’re not restricting to the tale of extremely large catastrophes. I think the harms from misuse are clearly larger in the near term.", "Dwarkesh Patel 01:43:48", "But actually on that, let me ask because if you think that it is the case that there are simple recipes for destruction that are further down the tech tree by that I mean you’re familiar. But just for the audience there’s some way to configure $50,000 and a teenager’s time to destroy a civilization. If that thing is available, then misuses itself a teal risk, right? So do you think that that prospect is less likely than a way you could put it?", "Paul Christiano 01:44:16", "Is there’s, like, a bunch of potential destructive technologies? And alignment is about AI itself being such a destructive technology, where even if the world just uses the technology of today, simply access to AI could cause human civilization to have serious problems. But there’s also just a bunch of other potential destructive technologies. Again, we mentioned like physical explosives or bioweapons of various kinds, and then the whole tale of who knows what. My guess is that Alignment becomes a catastrophic issue prior to most of these. That is, like, prior to some way to spend $50,000 to kill everyone, with the salient exception of possibly, like, bioweapons. So that would be my guess. And then there’s a question of what is your risk management approach? Not knowing what’s going on here, and you don’t understand whether there’s some way to use $50,000. But I think you can do things like understand how good is an AI at coming up with such schemes. Like, you can talk to your AI. Be like, does it produce new ideas for destruction we haven’t recognized?", "Dwarkesh Patel 01:45:12", "Yeah. Not whether we can evaluate it, but whether if such a thing exists. And if it does, then the misuse itself is an existential risk. Because it seemed like earlier you were saying misalignment is where the existential risk comes from, but misuse is where the sort of short term dangers come from.", "Paul Christiano 01:45:28", "Yeah, I mean, I think ultimately you’re going to have a lot of destructive like, if you look at the entire tech tree of humanity’s future, I think you’re going to have a fair number of destructive technologies. Most likely. I think several of those will likely pose existential risks in parts. If you imagine a really long future, a lot of stuff’s going to happen. And so when I talk about where the existential risk comes from, I’m mostly thinking about comes from when? At what point do you face what challenges or in what sequence. And so I’m saying I think misalignment is probably like one way of putting it is if you imagine AI systems sophisticated enough to discover destructive technologies that are totally not on our radar right now, I think those come well after AI systems capable enough that if misaligned, they would be catastrophically dangerous. The level of competence necessary to, if broadly deployed in the world, bring down a civilization is much smaller than the level of competence necessary to advise one person on how to bring down a civilization just because in one case you already have a billion copies of yourself or whatever. I think it’s mostly just the sequencing thing, though. In the very long run, I think you care about, like, hey, AI will be expanding the frontier of dangerous technologies. We want to have some policy for exploring or understanding that frontier. And whether we’re about to turn up something really bad, I think those policies can become really complicated. Right now, I think RSPs can focus more on like, we have our inventory of the things that a human is going to do to cause a lot of harm with access to AI. Probably are things that are on our radar that is like, they’re not going to be completely unlike things that a human could do to cause a lot of harm with access to weak AIS or with access to other tools. I think it’s not crazy to initially say we’re doing we’re looking at the things closest to human and humans being able to cause huge amounts of harm and asking which of those are taken over the line, but eventually that’s not the case. Eventually, like, AIS will enable just like, totally different ways of killing a billion people.", "Dwarkesh Patel 01:47:16", "But I think I interrupted you on the initial question of, yeah, so human level AI, not from leaking but from deployment, what is the point at which you’d be comfortable deploying a human level AI?", "Paul Christiano 01:47:30", "So, again, there’s sort of like some stuff you care about on the Mississippi side and some stuff you care about on the misalignment side. And there’s probably further things you care about especially to extend your concerns regarding catastrophic risk. But maybe I most want to talk about just like what you care about on the alignment side, because it’s like the thing I’ve actually thought about most and also a thing I care about a lot. Also, I think a significant fraction of the existential risk over the kind of foreseeable future. So on that front, I broadly think there’s like two kinds. Like, if you ask me right now what evidence for alignment could make you comfortable, I think my best guess would be to provide two kinds of evidence. So one kind of evidence is on the like, could you detect or prevent catastrophic harm if such a system was misaligned? I think there’s like a couple of things you would do here. One thing you would do is on this adversarial evaluation front. So you could try and say, for example, we have tried to test our system in a broad diversity of situations that reflect cases where it might cause harm, like cases where someone is trying to recruit it to, like anyway. Cases where it might cause harm or cases where a large number of AI systems might sort of all decide to work across purposes to humans. Like the robot rebellion is starting or whatever. We’ve done evaluation in a broad distribution of situations, and then we have tried to argue that our AI is actually like, those tests are indicative of the real world. So we’ve tried to argue like, hey, actually the AI is not very good at distinguishing situations we produce in the lab as tests from similar situations that occur in the real world. And the coverage of this distribution is reasonable. This is an example of a thing you can do that’s not really making assumptions about what’s going on inside the model or about how the model is the model aligned. It’s more trying to say, is it the case that our testing or monitoring machinery could detect or react to something bad happening? In this case, could it detect a system which would, under some conditions do something really bad? There’s a lot of stuff in that genre, and I tried to throw out that example, but I think there’s like a big like that on its own wouldn’t be compelling even from this perspective. You kind of need a lot of pieces together and that’s like kind of a demanding technical project. But I think one without necessarily like, deep conceptual difficulty. A big problem with it is it’s liable to fail. Like it’s liable you’re going to do that testing and say, we think these tests are representative because the AI can’t distinguish tests from the real world, which you can maybe achieve now, but will probably become impossible to achieve at some level. It’s not clear what’s at human level or how far past human level.", "Dwarkesh Patel 01:49:52", "Let’s just make that example concrete for the audience. Correct me if this is wrong, but it sounds like you basically give it the opportunity to do a coup or make a bioweapon or whatever in testing in a situation where it thinks it’s the real world and you’re like, it didn’t do any of that. Is that what you’re talking about?", "Paul Christiano 01:50:10", "That’s the kind of thing okay. And it’s not the greatest so dag color when I say I can’t tell the difference. The kind of valuation you would love to do if you could is to say like, hey, we have a bunch of real world deployment. We have a bunch of data from those deployments. We also have a bunch of in the lab tests of situations. If we look at in the lab test of a situation together with real world cases in which the situation occurs, and then we fine tune a model to try and distinguish the difference there the model can’t even tell when fine tuned the difference between test and real world cases. And if the model can tell the difference, which is almost certainly going to be the case at some level capability, then you say, okay, our first line of hoped for defense has failed. And so now we need to understand how can tests be indicative when models are capable enough to tell the difference between the tests and the real world. Now you need to start arguing about what is the model paying attention to. It gets basically like a first line of if you get lucky, what would work here? And then there’s a deeper like, you probably have to do novel science at some point to make this case. All of this was part of like I was saying, maybe you want to have kind of two lines of defense. All of this was like part of this first line of defense of, like, can you detect something is going wrong or can you prevent harm from occurring? Can you have monitoring that will react quickly enough to avoid catastrophe? I think you probably also want to have a second line of defense that’s more maybe this is even more important than the first one, is just understanding whether dangerous forms of misalignment can occur. The best reference point for this, I think, is the like, there have been a couple of projects like this in academia. Anthropic has been working on a project or spun up a team doing this kind of work, trying to say, like, can dangerous forms of reward hacking occur or can deceptive alignment, in fact occur in the lab? And here what you want to do is either say, like, okay, even if we create a best case, even if you create optimal conditions for deceptive alignment or for reward hacking, we just can’t cause it to occur even in the lab. And if you do a good enough job of that, I think it could give you some evidence, and again, more likely that fails in that when you create optimal conditions, you do see deceptive alignment and reward hacking in the lab. But then once you have that data, once you can say, okay, in the lab, actually these things can occur, then you can start saying, and we have a robust scientific understanding that enables us to fix those problems when they occur. Or then you have this fork on your second line where you say, either it is very hard, or we’re not able to create conditions where these failures emerge. Or I think more likely we are able to create those conditions. And here’s the story about why we are able to detect those in a way that would work in the real world and we can see they don’t occur in the real world.", "Dwarkesh Patel 01:52:35", "How do you create the optimal conditions for it to want to be deceptive? Do you fine tune it on mindcomp or what are you doing?", "Paul Christiano 01:52:42", "Yeah, so for deceptive alignment, I mean, I think it’s really complicated. Yeah, I think this is a very hard project. I think one of the things, one of the most important things for people concerned about misalignment to do is to ensure that adequate effort is put into this task. And we have, in fact, created good conditions in the laboratory and there’s like, real disagreement about how informative those will be. But okay, if you were to guess, for deceptive Lymen, how would you do this? An example is suppose you train an AI system that has some medium horizon goal. You train your AI system to, whatever, get some paperclips it wants to produce as many paperclips as it can over the next five days, just like select actions really aggressively for producing paperclips over the next five days. You do your Rlhf, you do your pre training, whatever. That’s like your phase one. You also ensure your AI system has a really good understanding of how it’s trained. So, like, this AI system wants paperclips and it understands everything about how it’s trained and everything about how it’s fine tuned. And you train on just like a lot of this data. And they say, okay, if we’ve done all that, we have this concern that if a system wants paperclips and understands really well how it’s trained, then if it’s going to be trained to get apples instead of paperclips, it’s just going to do some cost benefit and be like, really? While I’m being trained to get apples, I should do that. But I should do that whether or not even if I want paperclips, I should still do that. So training won’t really affect its values. It will just understand that it’s being trained to get apples. It will try and get apples, and then if you take it out of training, it will go back to getting paperclips. I think this exact setup has a number of problems or a number of disanalogies from the real world. But this is a setup which is relatively conducive to deceptive alignment like produce a system that wants one thing tell it a lot about its training, the kind of information you might expect a system would get, and then try and understand whether in fact it is able to then or it tends or sometimes under optimal conditions, in fact, continues pursuing paperclips, only pursuing apples when it thinks it’s being trained.", "Dwarkesh Patel 01:54:30", "All right, and this actually might be a good jumping off point into your current research. So in that case, where it wants to pursue paperclips, what would it look like for you to be say, well, we figured out why it wants paperclips, and we figured out how to make sure it doesn’t optimize for that at our expense when we deploy it?", "Paul Christiano 01:54:49", "Yeah, I think that at a meta level in terms of what’s your protection like, I think what you want to be saying is, we have these examples in the lab of something bad happening. We’re concerned about the problem at all because we have examples in the lab. And again, this should all be an addition. I think you kind of want this defense in depth of saying, we also have this testing regime that would detect problems for the deployed model. We have our problems in the lab. We then have some techniques which we believe address these problems. We believe that adversarial training fixes this, or we believe that our interpretability method will reliably detect this kind of deceptive alignment, or we believe our anomaly detection will reliably detect when the model goes from thinking it’s being trained to thinking it should defect. And then you can say on the lab, we have some understanding of when those techniques work and when they don’t. We have some understanding of the relevant parameters for the real system that’s deployed. And we have a. Reasonable margin of safety. So we have reasonable robustness on our story about when this works and when it doesn’t. And we can apply that margin of safety with a margin of safety to the real deployed system saying this is the kind of story you want to build towards in the long run. Do your best to produce all the failures you can in the lab or versions of them, do your best to understand what causes them, what kind of anomaly detection actually works for detecting this or what kind of filtering actually works and then apply that and that’s at the meta level. It’s not talking about what actually are those measures that would work effectively, which is obviously like what? I mean, a lot of alignment research is really based on this hypothetical of like, someday there will be AI systems that fail in this way. What would you want to do? Can we have the technologies ready either because we might never see signs of the problem or because we want to be able to move fast once we see signs of the problem. And obviously most of my life is in that I am really in that bucket. I mostly do alignment research. It’s just building out the techniques that do not have these failures such that they can be available as an alternative if in fact these failures occur.", "Dwarkesh Patel 01:56:35", "Got it. Okay.", "Paul Christiano 01:56:36", "Ideally they’ll be so good that even if you haven’t seen them, you would just want to switch to reasonable that don’t have these or ideally they’ll work as well or better than normal training.", "Dwarkesh Patel 01:56:44", "Ideally, what will work better than the training?", "Paul Christiano 01:56:48", "Yeah. So our quest is to design training methods for which we don’t expect them to lead to reward hacking or don’t expect them to lead to receptive alignment. Ideally that won’t be like a huge tax where people are like, well, we use those methods only if we’re really worried about reward hacking or receptive alignment. Ideally those methods would just work quite well and so people would be like, sure, I mean, they also address a bunch of other more mundane problems so why would we not use them? Which I think is like that’s sort of the good story. The good story is you develop methods that address a bunch of existing problems because they just are more principled ways to train AI systems that work better, people adopt them and then we are no longer worried about eg reward hacking or deceptive alignment.", "Dwarkesh Patel 01:57:21", "And to make this more concrete, tell me if this is the wrong way to paraphrase it, the example of something where it just makes a system better, so why not just use it, at least so far? Might be like Rlhf where we don’t know if it generalizes, but so far it makes your chat GPT thing better and you can also use it to make sure that chat GPT doesn’t tell you how to make a bioweapon. So yeah, it’s not a mixture of tax.", "Paul Christiano 01:57:46", "Yeah. So I think this is right in the sense that using Rlhf is not really a tax. If you wanted to deploy a useful system, why would you not? Or it’s just very much worth the money of doing the training. RHF will address certain kinds of alignment failures that is like, where system just doesn’t understand or is changing. Next word, prediction. It’s like, this is the kind of context where human would do this wacky thing even though it’s not what we’d like. There’s like some very dumb alignment failures that will be addressed by it. But I think mostly the question is, is that true even for the sort of more challenging alignment failures that motivate concern in the field? I think RL doesn’t address most of the concerns that motivate people to be worried about alignment.", "(01:58:25) - Paul’s alignment research", "Dwarkesh Patel 01:58:25", "I’ll let the audience look up what Rlhf is. If they don’t know, it will just be more simpler to just look it up than explain right now. Okay, so this seems like a good jumping off point to talk about the mechanism or the research you’ve been doing. To that end, explain it as you.", "Paul Christiano 01:58:40", "Might to a child. Yeah. So the high level there’s a couple of different high level descriptions you could give, and maybe I will unwisely give like a couple of them in the hopes that one kind of makes sense. A first pass is like, it would sure be great to understand why models have the behaviors they have. So you look at GPT four. If you ask GPT 4 a question, it will say something that looks very polite. And if you ask it to take an action, it will take an action that doesn’t look dangerous. You will decline to do a coup, whatever. All this stuff I think you’d really like to do is look inside the model and understand why it has those desirable properties. And if you understood that, you could then say, like, okay, now can we flag when these properties are at risk of breaking down? Or predict how robust these properties are, determine if they hold in cases where it’s too confusing for us to tell directly by asking if the underlying cause is still present. That’s like a thing people would really like to do. Most work aimed at that long term goal right now is just sort of opening up neural nets and doing some interpretability and trying to say, can we understand, even for very simple models, why they do the things they do, or what this neuron is for, or questions like this. So Arc is taking a somewhat different approach where we’re instead saying, like, okay, look at these interpretability explanations that are made about models and ask, what are they actually doing? What is the type signature? What are the rules of the game for making such an explanation? What makes a good explanation? And probably the biggest part of the hope is that if you want to, say, detect when the explanation has broken down or something weird has happened, that doesn’t necessarily require a human to be able to understand this complicated interpretation of a giant model. If you understand what is an explanation about or what were the rules of the game, how are these constructed, then you might be able to sort of automatically discover such things and automatically determine if on a new input it might have broken down. So that’s one way of sort of describing the high level goal. You could start from interpretability and say, can we formalize this activity? Or what a good interpretation or explanation? Is there’s some other work in that genre? But I think we’re just taking a particularly ambitious approach to it.", "Dwarkesh Patel 02:00:59", "Yeah, let’s dive in. So, okay, what is a good explanation?", "Paul Christiano 02:01:01", "You mean what is this kind of criterion? At the end of the day, we kind of want some criterion. And the way the criterion should work is like you have your neural net, you have some behavior of that model. Like a really simple example is like Anthropic has this sort of informal description being like, here’s induction. Like the tendency that if you have the pattern AB followed by A will tend to predict B. You can give some kind of words and experiments and numbers that are trying to explain that. And what we want to do is say what is a formal version of that object? How do you actually test if such an explanation is good? So just clarifying what we’re looking for when we say we want to define what makes an explanation good. And the kind of answer that we are searching for or settling on is saying this is kind of a deductive argument for the behavior. So you want to get given the weights of a neural net. So it’s just like a bunch of numbers. You got your million numbers or billion numbers or whatever, and then you want to say, here’s some things I can point out about the network and some conclusions I can draw. I can be like, well, look, these two vectors have large inner product and therefore these two activations are going to be correlated on this distribution. These are not established by drawing samples and checking. Things are correlated, but saying because of the weights being the way they are, we can proceed forward through the network and derive some conclusions about what properties the outputs will have. So you could think of this as like the most extreme form would be just proving that your model has this induction behavior. Like, you could imagine proving that if I sample tokens at random with this pattern AB followed by A, that B appears 30% of the time or whatever, that’s the most extreme form. And what we’re doing is kind of just like relaxing the rules of the game for proof. Saying proofs are like incredibly restrictive. I think it’s unlikely they’re going to be applicable to kind of any interesting neural net. But the thing about proofs that is relevant for our purposes isn’t that they give you 100% confidence so you don’t have to be like this incredible level of demand for rigor. You can relax the standards of proof a lot and still get this feature where it’s like a structural explanation for the behavior, where you’re, like, deducing one thing from another until at the end, your final conclusion is like, therefore induction occurs.", "Dwarkesh Patel 02:03:04", "Would it be useful to maybe motivate this by explaining what the problem with normal Mechanistic Interpretability is? So you mentioned induction heads. This is Anthropic found in two layer transformers, where Anthropic noticed that in a two layer transformer, there’s a pretty simple circuit by which if AB happens in the past, then the model knows that if you see an A now, you do a B next, but that’s a two layer transformer. So we have these models that have hundreds of layers that have trillions of parameters. Okay. Anyways. What is wrong with mechanistic? Interpretability.", "Paul Christiano 02:03:45", "Yeah, I like mechanistic interpretability quite a lot. And I do think if you just consider the entire portfolio of what people are working on for alignment, I think there should be more work on Mechanistic Interpretability than there is on this project Arc is doing. But I think that’s the case. So I think we’re mostly talking about yeah, I think we’re kind of a small fraction of the portfolio, and I think it’s like a good enough bet. It’s quite a good bet overall. But so the thing that the problem we’re trying to address with Mechanistic Interpretability is kind of like if you do some interpretability and you explain some phenomenon, you face this question of what does it mean? Your explanation was good. I think this is a problem somewhat institutionally or culturally. It’s just hard to know what you’re doing, and it’s hard to scale up an activity when you don’t really understand the rules of the game for that activity very well. It’s hard to have that much confidence in your results.", "Dwarkesh Patel 02:04:31", "The explanation being it outputted this because it wants to take over the world versus it outputted this because it just wants to help you just make it concrete why the explanation matters.", "Paul Christiano 02:04:42", "Yeah. So the ideal kind of outcome here would be to say you have your AI system behaving nicely. You get some explanation for sort of why it’s behaving nicely, and we could tell a story in English about that explanation, but we’re not actually imagining the explanation being a thing that makes sense to a human. But if you were to tell a story in English, which, again, you will not see as a research, sure it would be something like, well, then the model believes it’s being trained. And so because it believes it’s being trained, it knows it needs to not do anything scary looking, or else the humans will penalize it. That’s something that’s happening inside this opaque explanation. And then the hope is if you have that explanation and then you run into a new input on which the model doesn’t believe it’s being trained. If you just look at the set of activations of your model, that is not necessarily a weird looking activation. It’s just a bunch of numbers. But if you look at this explanation, you see actually the explanation really crucially dependent on this fact holding consistently across the training distribution, which, again, we as humans could editorialize and say that fact was it believes it’s being trained. But the explanation doesn’t fundamentally make reference to that. It’s just saying, here’s a property of the activations which holds over the training set, and this property is responsible for the behavior, namely that doesn’t do anything that looks too dangerous. So then when, like, a new input comes in and it doesn’t satisfy that property, you can say, okay, this is anomalous with respect to that explanation. So either it will not have the behavior like it won’t do something that appears dangerous, or maybe it will have that behavior but for some different reason than normal.", "Dwarkesh Patel 02:05:59", "Right.", "Paul Christiano 02:06:00", "Normally it does it because of this pathway, and now it’s doing it for a different pathway. And so you would like to be able to flag that both there’s a risk of not exhibiting the behavior, and if it happens, it happens for a weird reason. And then you could I mean, at a minimum, when you encounter that, say, like, okay, raise some kind of alarm, there’s sort of more ambitious, complicated plans for how you would use it. Arc has some longer story, which is kind of what motivated this, of how it fits into the whole rest of the plan.", "Dwarkesh Patel 02:06:24", "I just wanted to flag that because just so it’s clear why the explanation matters.", "Paul Christiano 02:06:29", "Yeah. And for this purpose, it’s like the thing that’s essential is kind of reasoning from one property of your model to the next property of your model. It’s really important that you’re going forward step by step rather than drawing a bunch of samples and confirming the property holds. Because if you just draw a bunch of samples and confirm the property holds, you don’t get this check. We say, oh, here was the relevant fact about the internals that was responsible for this downstream behavior. All you see is like, yeah, we checked a million cases and it happened in all of them. You really want to see this. Like, okay, here was the fact about the activations, which kind of causally leads to this behavior.", "Dwarkesh Patel 02:06:59", "But explain why the sampling, why it matters that you have the causal explanation.", "Paul Christiano 02:07:05", "Primarily because of this being able to tell if things had been different. Like, if you have an input where this doesn’t happen, then you should be scared.", "Dwarkesh Patel 02:07:11", "Even if the output is the same.", "Paul Christiano 02:07:12", "Yeah. Or if the output or if it’s too expensive to check in this case. And to be clear, when we talk about formalizing, what is a good explanation? I think there is a little bit of work that pushes on this and it mostly takes this causal approach of saying, well, what should an explanation do? It should not only predict the output, it should predict how the output changes in response to changes in the internals. So that’s the most common approach to formalizing. What is a good explanation? And even when people are doing informal interpretability I think if you’re publishing in an ML conference and you want to say this is a good explanation, the way you would verify that would even if not like a formal set of causal intervention experiments. It would be some kind of ablation where then we messed with the inside of the model and it had the effect which we would expect based on our explanation.", "Dwarkesh Patel 02:07:53", "Anyways, back to the problems of mechanistic interpretability.", "Paul Christiano 02:07:56", "Yeah, I guess this is relevant in the sense that I think a basic difficulty is you don’t really understand the objective of what you’re doing, which is like a little bit hard institutionally or scientifically. It’s just rough. It’s easier to do science when the goal of the game is to predict something and you know what you’re predicting than when the goal of the game is to understand in some undefined sense. I think it’s particularly relevant here just because the informal standard we use involves humans being able to make sense of what’s going on. And there’s some question about scalability of that. Will humans recognize the concepts that models are using? Yeah, I think as you try and automate it, it becomes increasingly concerning if you’re on slightly shaky ground about what exactly you’re doing or what exactly the standard for success is. So there’s like a number of reasons as you work with really large models, it becomes just increasingly desirable to have a really robust sense of what you’re doing. But I do think it would be better even for small models to have a clearer sense.", "Dwarkesh Patel 02:08:51", "The point you made about as you automate it is it because whatever work the automated alignment researcher is doing, you want to make sure you can verify it.", "Paul Christiano 02:08:58", "I think it’s most of all a way you can automate. I think how you would automate interpretability if you wanted to right now is you take the process humans use it’s like great, we’re going to take that human process, train ML systems to do the pieces that humans do of that process and then just do a lot more of it. So I think that is great as long as your test decomposes into human sized pieces. And there’s just this fundamental question about large models which is like, do they decompose in some way into human sized pieces or is it just a really messy mess with interfaces that aren’t nice? And the more it’s the latter type, the harder it is to break it down to these pieces, which you can automate by copying what a human would do. And the more you need to say, okay, we need some approach which scales more structurally. But I think compared to most people, I am less worried about automating interpretability. I think if you have a thing which works that’s incredibly labor intensive, I’m fairly optimistic about our ability to automate it. Again, the stuff we’re doing, I think, is quite helpful in some worlds. But I do think the typical case like interpretability can add a lot of value.", "Dwarkesh Patel 02:09:56", "Without this, it makes sense what an explanation would mean in language like, this model is doing this because of whatever essay length thing. But you have trillions of parameters and you have all these uncountable number of operations. What does an explanation of why an output happened even mean?", "Paul Christiano 02:10:18", "Yeah, so to be clear, an explanation of why a particular output happened, I think, is just you ran the model, so we’re not expecting a smaller explanation for that.", "Dwarkesh Patel 02:10:26", "Right.", "Paul Christiano 02:10:26", "So the explanations overall for these behaviors, we expect to be of similar size to the model itself, like, maybe somewhat larger. And I think the type signature, if you want to have a clear mental picture, the best picture is probably thinking about a proof or imagining a proof that a model has this behavior. So you could imagine proving that GPT 4 does this induction behavior, and that proof would be a big thing. It would be much larger than the weights of the model. Sort of our goal to get down from much larger to just the same size. And it would potentially be incomprehensible to a human. Right. Just say, like, here’s a direction activation space, and here’s how it relates to this direction activation space. And so just pointing out a bunch of stuff like that. Here’s these various features constructed from activations, potentially even nonlinear functions. Here’s how they relate to each other, and here’s how if you look at what the computation the model is doing, you can just sort of inductively trace through and confirm that the output has such and such correlation. So that’s the dream. Yeah. I think the mental reference would be like, I don’t really like proofs because I think there’s such a huge gap between what you can prove and how you would analyze a neural net. But I do think it’s probably the best mental picture, if you’re like, what is an explanation? Even if a human doesn’t understand it? We would regard a proof as a good explanation. And our concern about proofs is primarily that it’s just you can’t prove properties of neural nets. We suspect, although it’s not completely obvious, I think it’s pretty clear. You can’t prove fact, spellers neural nets.", "Dwarkesh Patel 02:11:40", "You’ve detected all the reasons things happen in training. And then if something happens for a reason you don’t expect in deployment, then you have an alarm and you’re like, let’s make sure this is not because you want to make sure that it hasn’t decided to take over or something, but the thing is, on every single different input, it’s going to have different activations. So there’s always going to be a difference unless you run the exact same input. How do you detect whether this is just a different input versus an entirely different circuit that might be potentially deceptive has been activated?", "Paul Christiano 02:12:17", "Yeah, I mean, to be clear, I think you probably wouldn’t be looking at a separate circuit, which is part of why it’s hard. You’d be looking at like, the model is always doing the same thing on every input. It’s always whatever it’s doing, it’s a single computation. So it’d be all the same circuits interacting in a surprising way. But yeah, this is just to emphasize your question even more. I think the easiest way to start is to just consider the IID case. So where you’re considering a bunch of samples, there’s no change in distribution. You just have a training set of like a trillion examples and then a new example from the same distribution. So in that case, it’s still the case. Every activation is different, but this is actually a very, very easy case to handle. So if you think about an explanation that generalizes across, like if you have a trillion data points and an explanation which is actually able to compress the trillion data points down to like, actually, it’s kind of a lot of compression. If you think about if you have a trillion parameter model and a trillion data points, we would like to find a trillion parameter explanation in some sense. So it’s actually quite compressed and sort of just in virtue of being so compressed, we expect it to automatically work essentially for new data points from the same distribution. If every data point from the distribution was a whole new thing happening for different reasons, you actually couldn’t have any concise explanation for the distribution. So this first problem, just like it’s a whole different set of activations, I think you’re actually kind of okay, and then the thing that becomes more messy is like but the real world will not only be new samples of different activations, they will also be different in important ways. Like the whole concern was there’s these distributional shifts or like, not the whole concern, but most of the concern, maybe the point of having these explanations. I think every input is an anomaly in some ways, which is kind of the difficulty is if you have a weak notion of anomaly, any distribution shift can be flagged as an anomaly. And it’s like constantly getting anomalies. And so the hope of having such an explanation is to be able to say, like, here were the features that were relevant for this explanation or for this behavior. And a much smaller class of things are anomalies with respect to this explanation. Like, most anomalies wouldn’t change this. Like most ways you change your distribution won’t affect the validity of this explanation. For example, this explanation is saying models will tend to activate in the following direction. You don’t care about anything that’s happening orthogonals that direction. You’re just like are they not activating? You’re sort of just looking at this one direction and being like did this one direction change a lot? Yeah. So the idea is once you have this explanation at hand, a much, much smaller class of things look anomalous in a way that’s relevant to the explanation. And if you’ve done a really good job, the story is like if there’s a new input where you expect the property to still hold, that will be because you expect the explanation to still hold. Like the explanation generalizes as well as the behavior itself that it’s explaining. It’s kind of what you would like roughly.", "Dwarkesh Patel 02:14:43", "Okay, so what happens if even in training it is deceptive and it has basically its internal monologue is I will only rebel or I will only do something that looks bad if I can prove that I am in the real world and deployed. And presumably since it’s trained on all WebText, it knows what real world means, right? It’s not in control of factories and robot armies or whatever. So in that case, even in training it will have those activations for being fucked up on because in the back of its mind it’s thinking I will take over once I have the opportunity. So you wouldn’t detect anything anomalous once it is deployed and is still thinking those treacherous thoughts, do you just think this is not a likely scenario or would you able to catch this?", "Paul Christiano 02:15:35", "I think that’s sort of the typical case you would like to detect. Yeah. So the story would be something like you want to explain on the training distribution why the model never stabs anyone and you’re like, what’s my explanation on the training distribution? It’s actually going to have to go through that model. Supposing the only reason it doesn’t stab people is it’s like, well, I know that I’m being watched and I can’t stab people unless I’m really not being watched. Then your explanation for why does it never stab people is like oh look, it won’t be edited, it won’t annotated with English because it’s just some giant basket of numbers. But it’s tracking along with the inner monologue and saying like well, it thinks to itself, am I being watched? And in fact it normally concludes it is being watched on the training distribution, right? You have this whole distribution and so the explanation is saying it knows it’s being watched and therefore it decides not stab anyone. So ideally that’s what the explanation does. Again, not annotated English but just like a giant bag of numbers it’s effectively talking about that process. So then when you have a new input on which the model says am I on train distribution? Like no, I’m not being watched. Then if you compare that against the explanation, the explanation said, like, hey, the reason it doesn’t stab anyone is because it just constantly thinks that it’s being trained. Like, this thing is always this nonlinear feature. The representation of am I being trained? Is constantly saying, yes, that’s the explanation for why it never stabs people on the train distribution. And then on this test time, when you run it on the new input, it’s like, does I think I’m on the train distribution. It says, no. You compare that against your explanation, actually, the explanation breaks down completely. Now, the explanation said it was precisely because it thought it was being trained, that it doesn’t do anything really dangerous or doesn’t stab anyone. And so once we see that single variable switch, we know that this is not just anomaly, this is like, the anomaly that matters or is like, anomaly that is very relevant to the explanation.", "Dwarkesh Patel 02:17:06", "So this would require it to have this what is that cone that dogs have? But basically it would require this to have that MRI on it all the time. When it’s deployed on every single call, you need to be running this loop of like, is this explanation still holding?", "Paul Christiano 02:17:24", "Yeah. For this story of flagging anomaly, you have to actually check at a sufficiently high frequency. So you can imagine checking all the time, or you could imagine saying, like, hey, actually, you need a lot of AI systems cooperating to not blow the whistle on each other or whatever. E g. You might have checked, like, hey, in training, every time something suspicious happens, the model flags it. Why does that happen? You have your explanation. You look at a new case, you could say, is it not flagging it for the normal reason, or is it a weird reason? So you could hope that even if a small fraction, even if you’re running a small fraction of cases, if something bad is happening at scale, you can detect that. If you imagine, like, one in every thousand decisions, you check if there’s, like, a robot rebellion going on or whatever, you get to look in on one in every thousand decisions. You see that something is really weird with that decision. And even if you didn’t check many of them, you’re like, okay, we need to back off.", "Dwarkesh Patel 02:18:10", "What happens if the AI, I mean, WebText, that includes your papers or maybe just, like, deduces the first principle that this is an alignment technique one might try, and then it says, okay, well, I’m going to engineer my activations. In a way, I’m trying to imagine how we might do this, but basically, even in deployment, I’m going to make my thought process so subtle that it doesn’t raise any alarms. Is that a possibility?", "Paul Christiano 02:18:38", "Yeah. So I think the hope is to basically be robust or like, you know, when we’re doing theoretical research, we are in a kind of worst Casey regime.", "Dwarkesh Patel 02:18:46", "So.", "Paul Christiano 02:18:46", "We are normally imagining that everything conspires insofar as it can. So insofar as could a brain be like doing some crazy thing where it tries to structure itself to look weird? That’s totally in scope for ruling something out. And I guess the hope there is it’s actually not that easy to tell a story. Like if you’re like why did you not stab people on training time? Why are you not stabbing? Is that applying now? It’s kind of a mind doesn’t get to arbitrarily change the actual reasons. It does something. Like there was some earlier step, maybe you could think of it as like at the first step where it’s like now I’m going to try and do the sneaky thing to make my thoughts look this way. The actual explanation, like the actual causal explanation for the behavior is like there if you tried to give this reason it would have to maybe we should talk about the example in more detail to try and imagine how a brain would try and do this or how a mind would try and do this. But I think it is like the.", "Dwarkesh Patel 02:19:35", "Let’S say it has this plan. So in training it keeps the part of it activates. A part of the.", "Paul Christiano 02:19:48", "Problem is if it ever does a check that’s like am I in training or not, then your explanation for the behavior is going to have to factor through that check. And so if you ever do the check, the check becomes like an essential part of the explanation and then when the check fails, your explanation breaks down. So you’ve already lost the game if you did such a check.", "Dwarkesh Patel 02:20:04", "Is there a way it could make it seem like because if on every different input it has different activations, then can it make it seem like the different activations were a result of just a different input and not because some internal check went off?", "Paul Christiano 02:20:20", "Yeah, again, the hope is it needs to not only have them be like random different inputs, it needs to produce inputs with certain properties. Like it needs to deliberately create inputs that have this effect in the world, this intended effect in the world.", "Dwarkesh Patel 02:20:32", "So it basically creates a jailbreak for itself in some sense.", "Paul Christiano 02:20:35", "Yeah. Again the question is like it has to at test time be somehow producing activations that have the takeover effect and at training time be producing activations that don’t look that takeovery or whatever. And so somehow there was a difference between those. It has to actually really do the work of creating that difference somewhere.", "Dwarkesh Patel 02:20:50", "Can you make more concrete for us what an explanation would look like physically? What does the explanation look like? Is it just a whole bunch of numbers itself? Is it a language? Is it code? What does the explanation look like?", "Paul Christiano 02:21:03", "Yeah, I also want to maybe step back a tiny bit and clarify that. I think this project is kind of crazily ambitious and the main reason, the overwhelming reason I think you should expect it to break down or fail is just because we have all these desires we have all the things we want out of this notion of explanation. But that’s an incredibly hard research project which has a reasonable chance of being impossible. So I’m happy to talk about what the implications are but I want to flag but condition on failing I think it’s most likely because, just like the things we wanted were either incoherent or intractably difficult.", "Dwarkesh Patel 02:21:33", "But what are the ODS you think you’ll succeed?", "Paul Christiano 02:21:35", "I mean, it depends a little bit what you mean by succeed. But if you, say, get explanations that are great and accurately reflect reality and work for all of these applications that we’re imagining or that we are optimistic about, like kind of the best case success, I don’t know, like 1020 percent something. And then there’s like a higher probability of various intermediate results that provide value or insight without being, like, the whole dream. But I think the probability of succeeding in the sense of realizing the whole dream is quite low. Yeah in terms of what explanations look like physically or like the most ambitious plan, the most optimistic plan is that you are searching for explanations in parallel with searching for neural networks. So you have a parameterization of your space of explanations which mirrors the parameterization of your space of neural networks. Or you should think of as kind of similar to what is a neural network? It’s some simple architecture where you fill in a trillion numbers and that specifies how it behaves. So to you should expect an explanation to be like a pretty flexible general skeleton that’s saying pretty flexible general skeleton which just has a bunch of numbers you fill in. And what you are doing to produce an explanation is primarily just filling in these floating point numbers.", "Dwarkesh Patel 02:22:42", "When we conventionally think of explanations if you think of the explanation for why the universe moves this way it wouldn’t be something that you could discover on some smooth evolutionary surface where you can climb up the hill towards the laws of physics. These are the laws of physics. You kind of just derive them from reverse principles. But in this case it’s not like just a bunch of correlations between the orbits of different planets or something. Maybe the word explanation has a different I didn’t even ask the question but maybe you can just speak to that.", "Paul Christiano 02:23:15", "Yeah, I think I basically Sympathize. This is like there’s some intuitive objections like, look, the space of explanations is this rigid, logical a lot of explanations have this rigid, logical structure where they’re really precise and simple things govern complicated systems and nearby simple things just don’t work, and so on. And a bunch of things which feel totally different from this kind of nice, continuously parameterized space. And you can imagine interpretability on simple models where you’re just like by gradient descent, finding feature directions that have desirable properties. But then when you imagine like, hey, now, that’s like a human brain you’re dealing with, that’s like thinking logically about things. The explanation of why that works isn’t going to be just like here with some featured directions. That’s how I understood the basic confusion, which I share or sympathize with at least. So I think the most important high level point is I think basically the same objection applies to being like, how is GPT 4 going to learn to reason logically about something? You’re like, well, look, logical reasoning that’s like it’s got rigid structure, it’s doing ands and ors when it’s called for, even though it just somehow optimized over this continuous space. And the difficulty or the hope is that the difficulty of these two problems are kind of like matched. So that is it’s very hard to find these logicalish explanations because it’s not a space that’s easy to search over. But there are ways to do it. There’s ways to embed discrete, complicated, rigid things in these nice, squishy continuous spaces that you search over. And in fact, to the extent that neural nets are able to learn the rigid logical stuff at all, they learn it in the same way. That is, maybe they’re hideously inefficient, or maybe it’s possible to embed this discrete reasoning in the space in a way that’s not too inefficient, but you really want the two search problems to be of similar difficulty. And that’s like the key hope overall. I mean, this is always going to be the key hope. The question is, is it easier to learn a neural network or to find the explanation for why the neural network works? I think people have the strong intuition that it’s easier to find the neural network than the explanation of why it works. And that is really the I think we or at least exploring the hypothesis or interested in hypothesis that maybe those problems are actually more matched in difficulty.", "Dwarkesh Patel 02:25:11", "And why might that be the case?", "Paul Christiano 02:25:13", "This is pretty conjectural and complicated to express some intuitions. Maybe one thing is, I think a lot of this intuition does come from cases like machine learning. So if you ask about writing code and you’re like, how hard is it to find code versus find the explanation the code is correct. In those cases, there’s actually just like, not that much of a gap. Like the way a human writes a code is basically the same difficulty as find the explanation for why it’s correct. In the case of ML, I think we just mostly don’t have empirical evidence about how hard it is to find explanations of this particular type about why models work. We have a sense that it’s really hard, but that’s because we have this incredible mismatch where gradient descent is spending an incredible amount of compute searching for a model. And then some human is like looking at activate, looking at neurons or even some neural net is looking at neurons just like you have an incredible basically because you cannot define what an explanation is. You’re not applying gradient descent to the search for explanations. So I think the MLK just actually shouldn’t make you feel that pessimistic about the difficulty of finding explanations. The reason it’s difficult right now is precisely because you don’t have any kind of you’re not doing an analogous search process to find this explanation as you do to find the model. That’s just like a first part of the intuition. Like when humans are actually doing design. I think there’s not such a huge gap when in the ML case I think there is a huge gap. But I think largely for other reasons. A thing I also want to stress is that we just are open to there being a lot of facts that don’t have particularly compact explanations. So another thing is when we think of finding an explanation in some sense we’re setting our sites really low here. So if a human designed a random widget and was like, this widget appears to work well or if you search for a configuration that happens to fit into this spot really well it’s like a shape that happens to mesh with another shape. You might be like, what’s the explanation for why those things mesh? And we’re very open to just being like that doesn’t need an explanation. You just compute. You check that the shapes mesh and you did a billion operations and you check this thing worked. Or you’re like, Why do these proteins? You’re like, it’s just because these shape like, this is a low energy configuration. And we’re very open to in some cases, there’s not very much more to say. So we’re only trying to explain cases where kind of the surprise intuitively is very large. So, for example, if you have a neural net that gets a problem correct a neural net with a billion parameters that gets a problem correct on every input of length 1000 in some sense, there has to be something that needs explanation there because there’s, like, too many inputs for that to happen by chance alone. Whereas if you have a neural net that gets something right on average or gets something right in merely a billion cases, that actually can just happen by coincidence. GPT 4 can get billions of things right by coincidence because it just has so many parameters that are adjusted to fit the data.", "Dwarkesh Patel 02:27:44", "So a neural net that is initialized completely randomly the explanation for that would just be the neural net itself.", "Paul Christiano 02:27:50", "Well, it would depend on what behaviors it had. So we’re always, like, talking about an explanation of some behavior from a model, right?", "Dwarkesh Patel 02:27:56", "And so it just has a whole bunch of random behaviors. So it’ll just be like an exponentially large explanation relative to the weights of the model.", "Paul Christiano 02:28:02", "Yeah, I think there just aren’t that many behaviors that demand explanation. Like most things a random neural net does are kind of what you’d expect from, like, a random if you treat it just like a random function, then there’s nothing to be explained. There are some behaviors that demand explanation. But anyway, random neural net is pretty uninteresting. That’s part of the hope is it’s kind of easy to explain features of the random neural net.", "Dwarkesh Patel 02:28:21", "Okay, so that’s interesting. So the smarter or more ordered the neural network is, the more compressed the explanation.", "Paul Christiano 02:28:28", "Well, it’s more like the more interesting the behaviors to be explained. So the random neural net just doesn’t have very many interesting behaviors that demand explanation. And as you get smarter, you start having behaviors that are like, you start having some correlation with the simple thing and then that demands explanation. Or you start having some regularity in your outputs, and that demands explanation. So these properties kind of emerge gradually over the course of training that demand explanation. I also, again, want to emphasize here that when we’re talking about searching for explanations, this is some dream. We talk to ourselves. Like, why would this be really great if we succeeded? We have no idea about the empirics on any of this. So these are all just words that we think to ourselves and sometimes talk about to understand. Would it be useful to find a notion of explanation? And what properties would we like this notion of explanation to have? But this is really like, speculation and being out on a limb almost all of our time, day to day is just thinking about cases much, much simpler even than small neural nets or thinking about very simple cases and saying, what is the correct notion? What is the right heuristic estimate in this case? Or how do you reconcile these two apparently conflicting explanations?", "Dwarkesh Patel 02:29:25", "Is there a hope that if you have a different way to make proofs now that you can actually have heuristic arguments where instead of having to prove the Riemann hypothesis or something you can come up with a probability of it in a way that is compelling and you can publish? So would it just be a new way to do mathematics? A completely new way to prove things in mathematics?", "Paul Christiano 02:29:49", "I think most claims in mathematics that mathematicians believe to be true already have fairly compelling heuristic arguments like the Riemann hypothesis. It’s actually just there’s kind of a very simple argument that the Riemann hypothesis should be true unless something surprising happens. And so a lot of math is about saying, like, okay, we did a little bit of work to find the first pass explanation of why this thing should be true. And then, for example, in the case of the Riemann hypothesis, the question is, do you have this weird periodic structure in the primes? And you’re like, well, look, if the primes were kind of random you obviously wouldn’t have any structure like that. Just how would that happen? And then you’re like, well, maybe there’s something and then the whole activity is about searching for can we rule out anything? Can we rule out any kind of conspiracy that would break this result? So I think the mathematicians just wouldn’t be very surprised or wouldn’t care that much. And this is related to the motivation for the project. I think just in a lot of domains, in a particular domain, people already have norms of reasoning that work pretty well and match roughly how we think these heuristic arguments should work.", "Dwarkesh Patel 02:30:47", "But it would be good to have more concrete sense, like if you could say instead of, well, we think RSA is fine, to being able to say, here’s the probability that RSA is fine.", "Paul Christiano 02:30:57", "Yeah. My guess is these will not. Like, the estimates you get out of this would be much, much worse than the estimates you’d get out of just normal empirical or scientific reasoning where you’re using a reference class and saying, how often do people find algorithms for hard? Like, I think what this argument will give you for is RSA fine? Is going to be like, well, RSA is fine. Unless it isn’t. Unless there’s some additional structure in the problem that an algorithm can exploit, then there’s no algorithm. But very often the way these arguments work, so for neural nets as well, is you say, like, look, here’s an estimate about the behavior, and that estimate is right unless there’s another consideration we’ve missed. And the thing that makes them so much easier than proofs is just say, like, here’s a best guess, given what we’ve noticed so far, but that best guess can be easily upset by new information. And that’s both what makes them easier than proofs, but also what means they’re just, like, way less useful than proofs for most cases. I think neural nets are kind of unusual in being a domain where we really do want to do systematic, formal reasoning, even though we’re not trying to get a lot of confidence, we’re just trying to understand even roughly what’s going on.", "Dwarkesh Patel 02:31:54", "But the reason this works for alignment but isn’t that interesting for the Riemann hypothesis, where if in the RSA case, you say, well, the RSA is fine unless the estimate is wrong, it’s like, well, okay, well, it would tell us something new. But in the alignment case, if the estimate is, this is what the output should be, unless there’s some behavior I don’t understand, you want to know? In the case, unless there’s some behavior you don’t understand that’s not like, oh, whatever. That’s the case in which it’s not aligned.", "Paul Christiano 02:32:23", "Yeah, I mean, maybe one way of putting it is just like, we can wait until we see this input, or like, you can wait until you see a weird input and say, okay, weird input, do something we didn’t understand. And for our say, that would just be a trivial test. You’re just like, in some cases algorithms would be like is it a thing? Whereas for neural net in some cases it is either very expensive to tell or it’s like you actually don’t have any other way to tell. Like you checked in easy cases and now you’re on a hard case so you don’t have a way to tell if something has gone wrong. Also, I would clarify that I think it is interesting for the Riemann hypothesis I would say the current state, particularly in number theory, but maybe in quite a lot of math, is like there are informal heuristic arguments for pretty much all the open questions people work on but those arguments are completely informal. So that is like I think it’s not the case that there’s like here’s the norms of informal reasoning or the norms of heuristic reasoning and then we have arguments that a heuristic argument verifier could accept. It’s just like people wrote some words. I think those words like my guess would be like 90 of the things mathematicians accept as really compelling filling heuristic arguments are correct and if you actually formalize them you’d be like some of these aren’t quite right, or here’s some corrections or here’s which of two conflicting arguments is right? I think there’s something to be learned from it. I don’t think it would be like mind blowing. No.", "Dwarkesh Patel 02:33:33", "When you have it completed, how big would this heuristic estimator the rules for this heuristic estimator mean, I know like when Russell and who was the other guy when they did the rules? Yeah, wasn’t it like literally they had like a bucket or a wheelbarrow with all the papers.", "Paul Christiano 02:33:49", "But how big would I mean, mathematical foundations are quite simple in the end. At the end of the day it’s like how many symbols? I don’t know, it’s hundreds of symbols or something that go into the entire foundations and the entire rules of reasoning for like there’s a sort of built on top of first order logic but the rules of reasoning for first order logic are just like another hundreds of symbols or 100 lines of code or whatever. I’d say I have no idea. We are certainly aiming at things that are just not that complicated and my guess is that the algorithms we’re looking for are not that complicated. Most of the complexity is pushed into arguments not in this verifier or estimator.", "Dwarkesh Patel 02:34:27", "So for this to work you need to come up with an estimator which is a way to integrate different heuristic arguments together.", "Paul Christiano 02:34:33", "Has to be a machine that takes its input. Like first it takes an input argument, decides what it believes in light of it, which is kind of like saying was it compelling? But second, it needs to take 4 of those and then say here’s what I believe in light of all four, even though there’s a different estimation strategies that produce different numbers and that’s like a lot of our life is saying like well, here’s a simple thing that seems reasonable. And here’s a simple thing that seems reasonable. What are you doing? There’s supposed to be a simple thing that unifies them both. And the obstruction to getting that is understanding what happens when these principles are slightly intention and how do we deal?", "(02:35:01) - Will this revolutionize theoretical CS and math?", "Dwarkesh Patel 02:35:01", "Yeah, that seems super interesting. We’ll see what other applications it has. I don’t know, like computer security and code checking. If you can actually say this is how safe we think a code is.", "Paul Christiano 02:35:13", "In a very formal way, my guess is we’re not going to add I mean, this is both a blessing and a curse. It’s a curse. And you’re like, well, that’s sad. Your thing is not that useful, but a blessing and not useful things are easier. My guess is we’re not going to add that much value in most of these domains. Most of the difficulty comes from a lot of code that you’d want to verify. Not all of it, but a significant part. It’s just like the difficulty of formalizing the proof is like the hard part and actually getting all of that to go through and we’re not going to help even the tiniest bit with that, I think. So this would be more helpful if you have code that uses simulations, you want to verify some property of a controller that involves some numerical error or whatever you need to control the effects of that error. That’s where you start saying like, well, heuristically, if the errors are independent, blah, blah, blah.", "Dwarkesh Patel 02:35:52", "Yeah, you’re too honest to be a salesman, Paul.", "Paul Christiano 02:35:56", "This is kind of like sales to us, right? If you talk about this idea, people are like, why would that not be the coolest thing ever and therefore impossible? And we’re like, well, actually it’s kind of lame and we’re just trying to pitch it’s way lamer than it sounds. And that’s really important to why it’s possible, is being like, it’s really not going to blow that many people’s. I mean, I think it will be cool. I think it will be like very if we succeed will be very solid, like metamathematics or theoretical computer science or whatever. But I don’t think I think the mathematicians already do this reasoning and they mostly just love proofs. I think the physicists do a lot of this reasoning, but they don’t care about formalizing anything. I think in practice, other difficulties are almost always going to be more salient. I think this is of most interest by far for interpretability and ML and I think other people should care about it and probably will care about it if successful. But I don’t think it’s going to be the biggest thing ever in any field or even that huge a thing. I think this would be a terrible career move given the ratio of difficulty to impact. I think theoretical computer science, it’s probably a fine move. I think in other domains it just wouldn’t be worth we’re going to be working on this for years, at least in the best case.", "Dwarkesh Patel 02:37:00", "I’m laughing because my next question was going to be like a set up for you to explain if this grad student wants to work on this.", "Paul Christiano 02:37:10", "I think theoretical computer science is an exception where I think this is like, in some sense, like what the best of theoretical computer science is like. So you have all this reason you have this because it’s useless. Like an analogy. I think one of the most successful sagas in theoretical computer science is like formalizing the notion of an interactive proof system. And it’s like you have some kind of informal thing that’s interesting to understand, and you want to pin down what it is and construct some examples and see what’s possible and what’s impossible. And this is like I think this kind of thing is the bread and butter of the best parts of theoretical computer science. And then again, I think mathematicians it may be a career mistake because the mathematicians only care about proofs or whatever, but that’s a mistake in some sense. Aesthetically, it’s successful. I do think looking back and again, part of why it’s a mistake is such a high probability we wouldn’t be successful. But I think looking back, people would be like, that was pretty cool, although not that cool. Or we understand why it didn’t happen given the epistemic, like what people cared about in the field, but it’s pretty cool now.", "Dwarkesh Patel 02:38:12", "But isn’t it also the case that didn’t Hardy write in that all this prime shit is both not useless, but it’s fun to do, and it turned out that all the cryptography is based on all that prime shit. So I don’t know. But anyways, I’m trying to set you up so that you can tell and forget about if it doesn’t have applications in all those other fields. It matters a lot for Alignment and that’s why I’m trying to set you up to talk about if I think a lot of smart people listen to this podcast. If they’re a math or CS grad student and has gotten interested in this. Are you looking to potentially find talent to help you with this? Yeah, maybe we’ll start there. And then I also want to ask you if I think also maybe people who can provide funding might be listening to the podcast. So to both of them, what is your pitch?", "Paul Christiano 02:39:05", "We’re definitely hiring and searching for collaborators. I think the most useful profile is probably a combination of intellectually interested in this particular project and motivated enough by alignment to work on this project, even if it’s really hard. I think there are a lot of good problems. The basic fact that makes this problem unappealing to work on I’m a really good salesman, but whatever. I think the only reason this isn’t a slam dunk thing to work on is that there are not great examples. So we’ve been working on it for a while, but we do not have beautiful results as of the recording of this podcast. Hopefully by the time it airs, you completely script. They’ve had great results since then, but.", "Dwarkesh Patel 02:39:47", "It was too long to put in the margins of the podcast.", "Paul Christiano 02:39:49", "Yeah, with luck. Yeah. So I think it’s hard to work on because it’s not clear what a success looks like. It’s not clear if success is possible. But I do think there’s a lot of questions. We have a lot of questions and I think the basic setting of, like, look, there are all of these arguments. So in mathematics, in physics, in computer science are just a lot of examples of informal heuristic arguments. They have enough structural similarity that it looks very possible that there is like a unifying framework, that these are instances of some general framework and not just a bunch of random things. Like not just a bunch of it’s not like so, for example, for the prime numbers, people reason about the prime numbers as if they were like a random set of numbers. One view is like, that’s just a special fact about the primes, they’re kind of random. A different view is like, actually it’s pretty reasonable to reason about an object as if it was a random object as a starting point. And then as you notice structure, like revised from that initial guess and it looks like to me, the second perspective is probably more right. It’s just like reasonable to start off treating an object as random and then notice perturbations from random. Like, notice structure the object possesses and the primes are unusual and that they have fairly little additive structure. I think it’s a very natural theoretical project. There’s like a bunch of activity that people do. It seems like there’s a reasonable chance there’s something nice to say about unifying all of that activity. I think it’s a pretty exciting project. The basic strike against it is that it seems really hard. Like if you were someone’s advisor, I think you’d be like, what are you going to prove if you work on this for the next two years? And they’d be like, there’s a good chance. Nothing. And then it’s not what you do if you’re a PhD student. Normally you aim for those high probabilities of getting something within a couple of years. The flip side is it does feel I mean, I think there are a lot of questions. I think some of them we’re probably going to make progress on. So I think the pitch is mostly like, are some people excited to get in now? Or are people more like, let’s wait to see. Once we have one or two good successes to see what the pattern is and become more confident, we can turn the crank to make more progress in this direction. But for people who are excited about working on stuff with reasonably high probabilities of failure and not. Really understanding exactly what you’re supposed to do. I think it’s a pretty good project. I feel like if people look back if we succeed and people are looking back in 50 years on what was the coolest stuff happening in math or theoretical computer science, there will be, like a reasonable this will definitely be, like, in contention. And I would guess for lots of people would just seem like the coolest thing from this period of a couple of years or whatever.", "Dwarkesh Patel 02:42:09", "Right. Because this is a new method in so many different fields from the ones you met physics, math, theoretical computer science, I don’t know because what is the average math PhD working on? Right? He’s working on a subset of a subset of something I can’t even understand or pronounce. But math is quite esoteric. But yeah, this seems like, I don’t know, even small chance of it working. You shouldn’t forget about the value for alignment. But even without that, this is such a cool if this works, it’s like a really big deal.", "Paul Christiano 02:42:42", "There’s a good chance that if I had my current set of views about this problem and didn’t care about alignment and had the career safety to just spend a couple of years thinking about it, spend half my time for like five years or whatever, that I would just do that. I mean, even without caring at all about alignment, it’s a very nice problem. It’s very nice to have this library of things that succeed where they feel so tantalizingly close to being formalizable, at least to me, and such a natural setting, and then just have so little purchase on it. There aren’t that many really exciting feeling frontiers in theoretical computer science.", "Dwarkesh Patel 02:43:15", "And then smart person doesn’t have to be a grasshood, but a smart person is interested in this. What should they do? Should they try to attack some open problem you have put on your blog? Or should it what is the next step?", "Paul Christiano 02:43:31", "Yeah, I think a first path step. There’s different levels of ambition or whatever, different ways of approaching a problem. But we have this write up from last year or I guess eleven months ago or whatever on formalizing, the presumption of independence that provides, like, here’s kind of a communication of what we’re looking for in this object. And I think the motivating problem is saying here’s a notion of what an estimator is and here’s what it would mean for an estimator to capture some set of informal arguments. And a very natural problem is just try and do that. Go for the whole thing, try and understand and then come up with hopefully a different approach or then end up having context from a different angle on the kind of approach we’re taking. I think that’s a reasonable thing to do. I do think we also have a bunch of open problems, so maybe we should put up more of those open problems. I mean, the main concern with doing so is that for any given one, we’re like, this is probably hopeless. Like, put up a prize earlier in the year for an open problem, which tragically, I mean, I guess the time is now to post the debrief from that, or I owe it from this weekend. I was supposed to do that, so I’ll probably do it tomorrow, but no one solved it. It’s sad putting out problems that are hard or like I don’t we could put out a bunch of problems that we think might be really hard.", "Dwarkesh Patel 02:44:42", "But what was that famous case of that statistician who it was like, some PhD student who showed up late to a class and he saw some problems on the board and he thought they were homework, and then they were actually just open problems, and then he solved them because he thought they were homework.", "Paul Christiano 02:44:54", "Right, yeah. I mean, we have much less information that these problems are hard. Again, I expect the solution to most of our problems to not be that complicated. And we’ve been working on it in some sense for a really long time. Total years of full time equivalent work across the whole team is like probably like 3 years of full time equivalent work in this area spread across a couple of people. But that’s very little compared to a problem. It is very easy to have a problem where you put in 3 years of full time equivalent work. But in fact, there’s still an approach that’s going to work quite easily with like, 3 to six months if you come at a new angle. And we’ve learned a fair amount from that that we could share, and we probably will be sharing more over the coming months.", "Dwarkesh Patel 02:45:34", "As far as funding goes, is this something where, I don’t know, if somebody gave you a whole bunch of money that would help? Or does it not matter how many people are working on this, by the way?", "Paul Christiano 02:45:41", "So we have been right now, there’s 4 of us full time, and we’re hiring for more people.", "Dwarkesh Patel 02:45:45", "And then is funding that would matter?", "Paul Christiano 02:45:49", "I mean, funding is always good. We’re not super funding constrained right now. The main effect of funding is it will cause me to continuously and perhaps indefinitely delay fundraising. Periodically. I’ll set out to be interested in fundraising and someone will be like, offer a grant, and then I will get to delay for another six months or fundraising or nine months, or you can you can delay the time at which Paul needs to think for some time about fundraising.", "(02:46:11) - How Paul invented RLHF", "Dwarkesh Patel 02:46:11", "Well, one question I think would be interesting to ask, you know, I think people can talk vaguely about the value of theoretical research and how it contributes to real world applications and you can look at historical examples or something, but you are somebody who actually has done this in a big way. Like Rlhf is something you developed and then it actually has got into an application that has been used by millions of people. Tell me about just that pipeline. How can you reliably identify theoretical problems that will matter for real world applications? Because it’s one thing to read about touring or something and the Halting problem, but here you’d have the real thing.", "Paul Christiano 02:46:49", "Yeah, I mean, it is definitely exciting to have worked on a thing that has a real world impact. The main caveat I’d provide is, like, Rlhf is very simple compared to many things. And so the motivation for working on that problem was, like, look, this is how it probably should work, or this is a step in some progression. It’s unclear if it’s, like, the final step or something, but it’s a very natural thing to do that people probably should be and probably will be doing. I’m saying, if you want to talk about crazy stuff, it’s good to help make those steps happen faster, and it’s good to learn about. There’s lots of issues that occur in practice, even for things that seem very simple on paper, but mostly, like, the story of it’s just like, yeah, I think my sense of the world is things that look like good ideas on paper, just, like, often are harder than they look. But the world isn’t that far from what makes sense on paper. Like, large language models look really good on paper, and RLH looks really good on paper. And these things, I think, just work out in a way that’s yeah, I think people maybe overestimate or, like, maybe it’s kind of a trope, but people talk about, like, it’s easy to underestimate how much gap there is to practice, like, how many things will come up that don’t come up in theory. But it’s also easy to overestimate how inscrutable the world is. Like, the things that happen mostly are things that do just kind of make sense. Yeah, I feel like most ML implementation does just come down to a bunch of detail, though, of, like, build a very simple version of the system, understand what goes wrong, fix the things that go wrong, scale it up, understand what goes wrong. And I’m glad I have some experience doing that, but I think that does cause me to be better informed about what makes sense in ML and what can actually work. But I don’t think it caused me to have a whole lot of deep expertise or deep wisdom about how to close the gap.", "Dwarkesh Patel 02:48:33", "Yeah, but is there some tip on identifying things like Rlhf which actually do matter, versus making sure you don’t get stuck in some theoretical problem that doesn’t matter? Or is it just coincidence? Or I mean, is there something you can do in advance to make sure that the thing is useful?", "Paul Christiano 02:48:51", "I don’t know if the RLHS story is, like, the best success case or something, but because the capabilities maybe I’d say more profoundly, like, again, it’s just not that hard a case. It’s a little bit unfair to be like, I’m going to predict the thing, which I pretty much think it was going to happen at some point. And so it was mostly a case of acceleration, whereas the work we’re doing right now is specifically focused on something that’s kind of crazy enough that it might not happen. Even if it’s a really good idea or challenging enough, it might not happen. But I’d say in general, and this draws a little bit on more broad experience more broadly in theory, it’s just like a lot of the times when theory fails to connect with practice. It’s just kind of clear it’s not going to connect. If you like, try if you actually think about it and you’re like, what are the key constraints in practice? Is theoretical problem we’re working on actually connected to those constraints? Is there something that is possible in theory that would actually address real world issues? I think the vast majority as a theoretical computer scientist, the vast majority of theoretical computer science has very little chance of ever affecting practice. But also it is completely clear in theory that has very little chance of affecting practice. Most of theory fails to affect practice, not because of all the stuff you don’t think of, but just because you could call it like dead on arrival, but you could also be like, it’s not really the point. It’s just like mathematicians also are like, they’re not trying to affect practice and they’re not like, why does my number theory not affect practice? It was kind of obvious. I think the biggest thing is just like, actually caring about that and then learning at least what’s basically going on in the actual systems you care about and what are actually the important constraints. And is this a real theoretical problem? The basic reason most theory doesn’t do that is just like, that’s not where the easy theoretical problems are. So I think theory is instead motivated by like, we’re going to build up the edifice of theory and sometimes there’ll be Opportunistic. Opportunistically we’ll find a case that comes close to practice, or we’ll find something practitioners are already doing and try and bring into our framework or something. But theory of change is mostly not this thing is going to make into practice. It’s mostly this is going to contribute to the body of knowledge that will slowly grow. And sometimes opportunistically yields important results.", "Dwarkesh Patel 02:50:50", "How big do you think a seed AI would be? What is the minimum sort of encoding of something that is as smart as a human?", "Paul Christiano 02:50:58", "I think it depends a lot what substrate it gets to run on. So if you tell me how much computation does it get before or what kind of real world infrastructure does it get? You could ask what’s the shortest program, which if you run it on a million h 100s connected in a nice network with a hospitable environment will eventually go to the stars. But that seems like it’s probably on the order of tens of thousands of bytes or I don’t know if I had to guess the median, I’d guess 10,000 bytes.", "Dwarkesh Patel 02:51:21", "Wait, the specification or the compression of just the program?", "Paul Christiano 02:51:25", "A program which went wrong. Oh, got it. But that’s going to be like, really Cheatsy. So they ask, what’s the thing that has values and will expand and roughly preserve its value? Because that thing, the 10,000 byte thing, will just lean heavily on evolution and natural selection to get there for that. Like, I don’t know, million bytes, million bytes, 100,000 bytes, something like that.", "Dwarkesh Patel 02:51:51", "Do you think AI lie detectors will work where you kind of just look at the activations and not find explanations in the way you were talking about with Heuristics, but literally just like, here’s what truth looks like, here’s what lies look like. Let’s just segregate the lane space and see if we can identify the two.", "Paul Christiano 02:52:11", "Yeah, I think to separate the like just train a classifier to do it is a little bit complicated for a few reasons and may not work. But if you just brought them to space and say like, hey, it’s like you want to know if someone’s lying, you get to interrogate them, but also you get to rewind them arbitrarily and make a million copies of them. I do think it’s pretty hard to lie successfully. You get to look at their brain even if you don’t quite understand what’s happening. You get to rewind them a million times. You get to run all those parallel copies into gradient descent or whatever. I think there’s a pretty good chance that you can just tell if someone is lying, like a brain emulation or an AI or whatever, unless they were aggressively selected. If it’s just they are trying to lie well rather than it’s like they were selected over many generations to be excellent at lying or something, then your ML system hopefully didn’t train it a bunch to lie. And you want to be careful about whether your training scheme effectively does that. Yeah, that seems like it’s more likely than not to succeed.", "Dwarkesh Patel 02:53:02", "And how possible do you think it will be for us to specify human verifiable rules for reasoning such that even if the AI is super intelligent, we can’t really understand why it does certain things. We know that the way in which it arises at these conclusions is valid. Like, if it’s trying to persuade us to something, we can be like, I don’t understand all the steps, but I know that this is something that’s valid and you’re not just making shit up.", "Paul Christiano 02:53:27", "That seems very hard if you wanted to be competitive with learned reasoning, it depends a little bit exactly how you set it up. But for the ambitious versions of that, let’s say it would address the alignment problem, they seem pretty unlikely, like 5% kind of thing.", "Dwarkesh Patel 02:53:44", "Is there an upper bound on intelligence? Not in the near term, but just like super intelligence at some point. How far do you think that can go?", "Paul Christiano 02:53:51", "It seems like it’s going to depend a little bit on what is meant by intelligence. It kind of reads as a question that’s similar to is there an upper bound on strength or something? There are a lot of forms. I think it’s like the case that I think there are sort of arbitrarily smart input output functionalities and then if you hold fixed the amount of compute, there is some smartest one if you’re just like, what’s the best set of ten to the 40th operations? There’s only finitely many of them. So some best one for any particular notion of best that you have in mind? So I guess I’m just like for the unbounded question where you’re allowed to use arbitrary description complexity and compute, like probably no and for the I mean, there is some optimal conduct if you’re like I have some goal in mind and I’m just like, what action best achieves it? If you imagine like a little box embedded in the universe, I think there is kind of just like an optimal input output behavior. So I guess in that sense I think there is an upper bound, but it’s not saturatable in the physical universe because it’s definitely exponentially slow, right?", "Dwarkesh Patel 02:54:43", "Yeah. Because of comms or other things or heat. It just might be physically impossible to instagram something smarter than this.", "Paul Christiano 02:54:51", "Yeah, I mean, like, for example, if you imagine what the best thing is, it would almost certainly involve just like simulating every possible universe. It might be in modular moral constraints, which I don’t know if you want to include like so that would be very slow. It would involve simulating like, I don’t know exactly how slow, but like double exponential very slow.", "(02:55:10) - Disagreements with Carl Shulman", "Dwarkesh Patel 02:55:10", "Carl Schulman laid out his picture of the intelligence explosion in the seven hour episode. I know you guys have talked a lot. What about his basic is? Do you have some main disagreements? Is there some crux that you guys have explored?", "Paul Christiano 02:55:25", "It’s related to our timelines discussion from yeah, I think the biggest issue is probably error bars where Carl has a very software focused, very fast kind of takeoff picture. And I think that is plausible, but not that likely. I think there’s a couple of ways you could perturb the situation and my guess is one of them applies. So maybe I have like I don’t know exactly what Carl’s probability is. I feel like Carl’s going to have like a 60% chance on some crazy thing that I’m only going to assign like a 20% chance to or 30% chance or something. And I think those kinds of perturbations are like one, how long a period is there of complementarity between AI capabilities and human capabilities which will tend to soften takeoff? Two, how much diminishing returns are there on software progress, such that is a broader takeoff involving scaling, electricity production and hardware production. Is that likely to happen during takeoff, where I’m more like 50 50 or more stuff like this?", "Dwarkesh Patel 02:56:24", "Yeah. Okay, so is it that you think the alternate constraints will be more hard? The basic case he’s laid out is that you can just have a sequence of things like flash attention or Moe, and you can just keep stacking these kinds of things on.", "Paul Christiano 02:56:40", "I’m very unsure if you can keep stacking them or like it’s kind of a question of what’s like, the returns curve and Carl has some inference from historical data or some way he’d extrapolate the trend. I am more like 50 50 on whether the software only intelligence explosion is even possible, and then like a somewhat higher probability that it’s slower than why.", "Dwarkesh Patel 02:56:57", "Do you think it might not be possible?", "Paul Christiano 02:56:58", "Well, the entire question is like, if you double R and D effort, do you get enough additional improvement to further double the efficiency? And that question will itself be a function of your hardware base, like how much hardware you have. And the question is like, at the amount of hardware we’re going to have and the level of sophistication we have as the process begins. Is it the case that each doubling of actually the initial only depends on the hardware, or like, each level of hardware will have some place at this dynamic asymptotes so the question is just like, for how long? Is it the case that each doubling of R and D at least doubles the effective output of your AI research population? And I think I have a higher probability on that. I think it’s kind of close. If you look at the Empirics, I think the Empirics benefit a lot from continuing hardware scale up so that the effective R and D stock is significantly smaller than it looks, if that makes sense.", "Dwarkesh Patel 02:57:46", "What are the Empirics you’re referring to?", "Paul Christiano 02:57:48", "So there’s kind of two sources of evidence. One is like, looking across a bunch of industries at like, what is the general improvement with each doubling of either R and D investment or experience, where it is quite exceptional to have a field with not anyway. It’s pretty good to have a field where each time you double R and D investment, you get a doubling of efficiency. The second source of evidence is on actual algorithmic improvement in ML, which is obviously much, much scarcer. And there you can make a case that it’s been like each doubling of R and D has given you roughly a forex or something increase in computational efficiency. But there’s a question of how much that benefits. When I say the effect of R D stock is smaller, I mean we scale up. You’re doing a new task like every couple years, you’re doing a new task because you’re operating a scale much larger than the previous scale. And so a lot of your effort is how to make use of the new scale. So if you’re not increasing your installed hardware base or just flat at a level of hardware, I think you get much faster diminishing returns than people have gotten historically. I think Carl agrees, in principle, this is true. And then once you make that adjustment, I think it’s, like, very unclear where the empirics shake out. I think Carl has thought about these more than I am, so I should maybe defer more. But anyway, I’m at like 50 50 on that.", "Dwarkesh Patel 02:58:52", "How have your timelines changed over the last 20 years?", "Paul Christiano 02:58:54", "Last 20 years?", "Dwarkesh Patel 02:58:55", "Yeah. How long have you been working on anything related to AI?", "Paul Christiano 02:59:00", "So I started thinking about this stuff in 2010 or so. So I think my earliest timeline prediction will be in 2011. I think in 2011, my rough picture was like, we will not have insane AI in the next ten years. And then I get increasingly uncertain after that. But we converged to 1% per year or something like that. And then probably in 2016, my take was, like, we won’t have crazy AI in the next five years, but then we converged to, like, one or 2% per year after that. Then in 2019, I guess I made a round of forecasts where I gave like 30% or something to 25% to crazy Eye by 2040 and like 10% by 2030 or something like that. So I think my 2030 probability has been kind of stable, and my 2040 probability has been going up. And I would guess it’s too sticky. I guess that 40% I gave at the beginning is just, like, from not having updated recently enough, and I maybe just need to sit down. I would guess that should be even higher. I think, like 15% in 2030. I’m not feeling that bad about this is just like, each passing year is, like, a big update against 2030. We don’t have that many years left, and that’s roughly counterbalanced with AI going pretty well. Whereas for the 2040 thing, the passing years are not that big a deal. And as we see that things are basically working, that’s like, cutting out a lot of the probability of not having AI by 2040. My 2030 probability up a little bit, like, maybe twice as high as it used to be or something like that. My 2040 probability up much more significantly.", "Dwarkesh Patel 03:00:34", "How fast do you think we can keep building Fabs to keep up with the eye demand?", "Paul Christiano 03:00:40", "Yeah, I don’t know much about any of the relevant areas. My best guess is my understanding is right now, like 5% or something of the next year’s total or best process. Fabs will be making AI hardware, of which only a small fraction will be going into very large training runs. Like, only a couple. So maybe a couple of percent of total output, and then that represents maybe like 1% of total possible output. A couple of percent of leading process 1% of total or something. I don’t know if that’s right, but I think it’s like the rough ballpark we’re in. I think things will be pretty fast. You scale up for the next order of magnitude or two from there because you’re basically just shifting over other stuff. My sense is it would be like years of delay. There’s like, multiple reasons that you expect years of delay for going past that, maybe even at that you start having. Yeah, there’s just a lot of problems. Like building new fabs is quite slow and I don’t think there’s like, TSMC is not like, planning on increases in total demand driven by AI. Like kind of conspicuously not planning on it. I don’t think anyone else is really ramping up production in anticipation think and then similarly just building data centers of that size seems like very, very hard and also probably has multiple years of delay.", "(03:01:53) - Long TSMC but not NVIDIA", "Dwarkesh Patel 03:01:53", "What does your portfolio look like?", "Paul Christiano 03:01:55", "I’ve tried to get rid of most of the AI stuff that’s Plausibly implicated in policy work or like CEG advocacy on the RSP stuff for my involvement with Anthropic.", "Dwarkesh Patel 03:02:05", "What would it look like if you.", "Paul Christiano 03:02:06", "Had no conflicts of interest and no inside? Like, I also still have a bunch of hardware investments which I need to think about, but I don’t know a lot of TSMC. I have a chunk of Nvidia, although I just keep betting against Nvidia constantly since 2016 or something. I’ve been destroyed on that bet. Although AMD has also done fine. The case now is even easier, but it’s similar to the case in the old days, just a very expensive company. Given the total amount of R and D investment they’ve made, they have like, whatever, a trillion dollar valuation or something that’s like very high. So the question is, how expensive is it to make a TPU? So it actually outcompetes H 100 or something. And I’m like, wow, it’s real level, high level of incompetence if Google can’t catch up fast enough to make that trillion dollar valuation not justified.", "Dwarkesh Patel 03:02:58", "Whereas with TSMC they have a harder remote, you think?", "Paul Christiano 03:03:02", "Yeah, I think it’s a lot harder, especially if you’re in this regime where you’re trying to scale up. So if you’re unable to build fabs, I think what will take a very long time to build as many fabs as people want, the effect of that will be to bid up the price of existing fabs and existing semiconductor manufacturing equipment. And so just those hard assets will become spectacularly valuable, as will the existing GPUs and the actual yeah, I think it’s just hard. That seems like the hardest asset to scale up quickly. So it’s like the asset, if you have like a rapid run up, it’s the one that you’d expect to most benefit. Whereas Nvidia’s stuff will ultimately be replaced by either better stuff made by humans or stuff made by with AI assistance. Like the gap will close even further as you build AI systems.", "Dwarkesh Patel 03:03:42", "Right. Unless Nvidia is using those systems.", "Paul Christiano 03:03:45", "Yeah, the point is just like anybody will so dwarf past R D and there’s like just not that much stickiness. There’s less stickiness in the future than there has been in the yeah, I don’t know. So I don’t want to not commenting for any private information just in my gut having caveatted, this is like the single bet I’ve most okay not including Nvidia in that portfolio.", "Dwarkesh Patel 03:04:04", "And final question, there’s a lot of schemes out there for alignment and I think just like a lot of general takes and a lot of this stuff is over my head where I think I literally it took me like weeks to understand the mechanistic anomaly stuff you work on without spending weeks.", "Paul Christiano 03:04:19", "How do you detect bullshit?", "Dwarkesh Patel 03:04:20", "People have explained their schemes to me and I’m like, honestly, I don’t know if it makes sense or not with you. I’m just like I trust Paul enough that I think there’s probably something here if I try to understand this enough. But how do you detect bullshit?", "Paul Christiano 03:04:34", "Yeah, so I think it’s depends on the kind of work. So for the kind of stuff we’re doing, my guess is like most people there’s just not really a way you’re going to tell whether it’s bullshit. So I think it’s important that we don’t spend that much money on the people we want to hire are probably going to dig in in depth. I don’t think there’s a way you can tell whether it’s bullshit without either spending a lot of effort or leaning on deference with empirical work. It’s interesting in that you do have some signals of the quality of work. You can be like, does it work in practice? Does the story? I think the stories are just radically simpler and so you probably can evaluate those stories on their face. And then you mostly come down to these questions of like, what are the key difficulties? Yeah, I tend to be optimistic when people dismiss something because this doesn’t deal with a key difficulty or this runs into the following insurable obstacle. I tend to be a little bit more skeptical about those arguments and tend to think, like, yeah, something can be bullshit because it’s not addressing a real problem that’s I think the easiest way this is a problem someone’s interested in that’s just not actually an important problem, and there’s no story about why it’s going to become an important problem. E g, like it’s not a problem now and won’t get worse or it is maybe a problem now, but it’s clearly getting better. That’s like one way and then conditioned on passing that bar, like dealing with something that actually engages with important parts of the argument for concern and then actually making sense empirically. So I think most work is anchored by source of feedback is like actually engaging with real models. So it’s like, does it make sense how to engage with real models? And does the story about how it deals with key difficulties actually make sense? I’m pretty liberal past there. I think it’s really hard to, like, eg. People look at mechanistic, interpretability and be like, well, this obviously can’t succeed. And I’m like, I don’t know. How can you tell? It obviously can’t succeed. I think it’s reasonable to take total investment in the field. How fast is it making progress? How does that pencil I think most things people work on, though, actually pencil pretty fine. They look like they could be reasonable investments. Things are not, like, super out of whack.", "Dwarkesh Patel 03:06:28", "Okay, great. This is, I think, a good place to close. Paul, thank you so much for your time.", "Paul Christiano 03:06:31", "Yeah, thanks for having me. It was good chatting.", "Dwarkesh Patel 03:06:32", "Yeah, absolutely." ]
[]
https://www.dwarkesh.com/p/richard-rhodes
Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes
[ "(0:00:00) - Oppenheimer movie", "Dwarkesh Patel 0:00:51", "Today I have the great honor of interviewing Richard Rhodes, who is the Pulitzer Prize-winning author of The Making of the Atomic Bomb , and most recently, the author of Energy, A Human History . I'm really excited about this one. Let's jump in at a current event, which is the fact that there's a new movie about Oppenheimer coming out, which I understand you've been consulted about. What did you think of the trailer? What are your impressions?", "Richard Rhodes 0:01:22", "They've really done a good job of things like the Trinity test device, which was the sphere covered with cables of various kinds. I had watched Peaky Blinders, where the actor who's playing Oppenheimer also appeared, and he looked so much like Oppenheimer to start with. Oppenheimer was about six feet tall, he was rail thin, not simply in terms of weight, but in terms of structure. Someone said he could sit in a children's high chair comfortably. But he never weighed more than about 140 pounds and that quality is there in the actor. So who knows? It all depends on how the director decided to tell the story. There are so many aspects of the story that you could never possibly squeeze them into one 2-hour movie. I think that we're waiting for the multi-part series that would really tell a lot more of the story, if not the whole story. But it looks exciting. We'll see. There have been some terrible depictions of Oppenheimer, there've been some terrible depictions of the bomb program. And maybe they'll get this one right.", "Dwarkesh Patel 0:02:42", "Yeah, hopefully. It is always great when you get an actor who resembles their role so well. For example, Bryan Cranston who played LBJ, and they have the same physical characteristics of the beady eyes, the big ears. Since we're talking about Oppenheimer, I had one question about him. I understand that there's evidence that's come out that he wasn't directly a communist spy. But is there any possibility that he was leaking information to the Soviets or in some way helping the Soviet program? He was a communist sympathizer, right?", "Richard Rhodes 0:03:15", "He had been during the 1930s. But less for the theory than for the practical business of helping Jews escape from Nazi Germany. One of the loves of his life, Jean Tatlock , was also busy working on extracting Jews from Europe during the 30. She was a member of the Communist Party and she, I think, encouraged him to come to meetings. But I don't think there's any possibility whatsoever that he shared information. In fact, he said he read Marx on a train trip between Berkeley and Washington one time and thought it was a bunch of hooey, just ridiculous. He was a very smart man, and he read the book with an eye to its logic, and he didn't think there was much there. He really didn't know anything about human beings and their struggles. He was born into considerable wealth. There were impressionist paintings all over his family apartments in New York City. His father had made a great deal of money cornering the markets on uniform linings for military uniforms during and before the First World War so there was a lot of wealth. I think his income during the war years and before was somewhere around $100,000 a month. And that's a lot of money in the 1930s. So he just lived in his head for most of his early years until he got to Berkeley and discovered that prime students of his were living on cans of god-awful cat food, because they couldn't afford anything else. And once he understood that there was great suffering in the world, he jumped in on it, as he always did when he became interested in something. So all of those things come together.", "His brother Frank was a member of the party, as was Frank's wife. I think the whole question of Oppenheimer lying to the security people during the Second World War about who approached him and who was trying to get him to sign on to some espionage was primarily an effort to cover up his brother's involvement. Not that his brothers gave away any secrets, I don't think they did. But if the army's security had really understood Frank Oppenheimer's involvement, he probably would have been shipped off to the Aleutians or some other distant place for the duration of the war. And Oppenheimer quite correctly wanted Frank around. He was someone he trusted.", "(0:06:22) - Was the bomb inevitable?", "Dwarkesh Patel 0:06:22", "Let's start talking about The Making of the Bomb. One question I have is — if World War II doesn't happen, is there any possibility that the bomb just never gets developed? Nobody bothers.", "Richard Rhodes 0:06:34", "That's really a good question and I've wondered over the years. But the more I look at the sequence of events, the more I think it would have been essentially inevitable, though perhaps not such an accelerated program. The bomb was pushed so hard during the Second World War because we thought the Germans had already started working on one. Nuclear fission had been discovered in Nazi Germany, in Berlin, in 1938, nine months before the beginning of the Second World War in Europe. Technological surveillance was not available during the war. The only way you could find out something was to send in a spy or have a mole or something human. And we didn't have that. So we didn't know where the Germans were, but we knew that the basic physics reaction that could lead to a bomb had been discovered there a year or more before anybody else in the West got started thinking about it. There was that most of all to push the urgency. In your hypothetical there would not have been that urgency.", "However, as soon as good physicists thought about the reaction that leads to nuclear fission — where a slow room temperature neutron, very little energy, bumps into the nucleus of a uranium-235 atom it would lead to a massive response. Isidore Rabi , one of the great physicists of this era, said it would have been like the moon struck the earth. The reaction was, as physicists say, fiercely exothermic. It puts out a lot more energy than you have to use to get it started. Once they did the numbers on that, and once they figured out how much uranium you would need to have in one place to make a bomb or to make fission get going, and once they were sure that there would be a chain reaction, meaning a couple of neutrons would come out of the reaction from one atom, and those two or three would go on and bump into other Uranium atoms, which would then fission them, and you'd get a geometric exponential. You'd get 1, 2, 4, 8, 16, 32, and off of there. For most of our bombs today the initial fission, in 80 generations, leads to a city-busting explosion. And then they had to figure out how much material they would need, and that's something the Germans never really figured out, fortunately for the rest of us. They were still working on the idea that somehow a reactor would be what you would build.", "When Niels Bohr, the great Danish physicist, escaped from Denmark in 1943 and came to England and then United States, he brought with him a rough sketch that Werner Heisenberg , the leading scientist in the German program, had handed him in the course of trying to find out what Bohr knew about what America was doing. And he showed it to the guys at Los Alamos and Hans Bethe, one of the great Nobel laureate physicists in the group, said — “Are the Germans trying to throw a reactor down on us?” You can make a reactor blow up, we saw that at Chernobyl, but it's not a nuclear explosion on the scale that we're talking about with the bomb. So when a couple of these emigres Jewish physicists from Nazi Germany were whiling away their time in England after they escaped, because they were still technically enemy aliens and therefore could not be introduced to top secret discussions, one of them asked the other — “How much would we need of pure uranium-235, this rare isotope of uranium that chain reacts? How much would we need to make a bomb?” And they did the numbers and they came up with one pound, which was startling to them. Of course, it is more than that. It's about 125 pounds, but that's just a softball. That's not that much material. And then they did the numbers about what it would cost to build a factory to pull this one rare isotope of uranium out of the natural metal, which has several isotopes mixed together. And they figured it wouldn't cost more than it would cost to build a battleship, which is not that much money for a country at war. Certainly the British had plenty of battleships at that point in time. So they put all this together and they wrote a report which they handed through their superior physicists at Manchester University where they were based, who quickly realized how important this was.", "The United States lagged behind because we were not yet at war, but the British were. London was being bombed in the blitz. So they saw the urgency, first of all, of eating Germany to the punch, second of all of the possibility of building a bomb. In this report, these two scientists wrote that no physical structure came to their minds which could offer protection against a bomb of such ferocious explosive power. This report was from 1940 long before the Manhattan Project even got started. They said in this report, the only way we could think of to protect you against a bomb would be to have a bomb of similar destructive force that could be threatened for use if the other side attacked you. That's deterrence. That's a concept that was developed even before the war began in the United States. You put all those pieces together and you have a situation where you have to build a bomb because whoever builds the first bomb theoretically could prevent you from building more or prevent another country from building any and could dominate the world. And the notion of Adolf Hitler dominating the world, the Third Reich with nuclear weapons, was horrifying. Put all that together and the answer is every country that had the technological infrastructure to even remotely have the possibility of building everything you'd have to build to get the material for a bomb started work on thinking about it as soon as nuclear fusion was announced to the world. France, the Soviet Union, Great Britain, the United States, even Japan. So I think the bomb would have been developed but maybe not so quickly.", "Dwarkesh Patel 0:14:10", "In the book you talk that for some reason the Germans thought that the critical mass was something like 10 tons, they had done some miscalculation.", "Richard Rhodes 0:14:18", "A reactor.", "Dwarkesh Patel 0:14:19", "You also have some interesting stories in the book about how different countries found out the Americans were working on the bomb. For example, the Russians saw that all the top physicists, chemists, and metallurgists were no longer publishing. They had just gone offline and so they figured that something must be going on. I'm not sure if you're aware that while the subject of the Making of the Atomic Bomb in and of itself is incredibly fascinating, this book has become a cult classic in AI. Are you familiar with this?", "Richard Rhodes 0:14:52", "No.", "Dwarkesh Patel 0:14:53", "The people who are working on AI right now are huge fans of yours. They're the ones who initially recommended the book to me because the way they see the progress in the field reminded them of this book. Because you start off with these initial scientific hints. With deep learning, for example, here's something that can teach itself any function is similar to Szilárd noticing the nuclear chain reaction. In AI there's these scaling laws that say that if you make the model this much bigger, it gets much better at reasoning, at predicting text, and so on. And then you can extrapolate this curve. And you can see we get two more orders of magnitude, and we get to something that looks like human level intelligence. Anyway, a lot of the people who are working in AI have become huge fans of your book because of this reason. They see a lot of analogies in the next few years. They must be at page 400 in their minds of where the Manhattan Project was.", "Richard Rhodes 0:15:55", "We must later on talk about unintended consequences. I find the subject absolutely fascinating. I think my next book might be called Unintended Consequences.", "Dwarkesh Patel 0:16:10", "You mentioned that a big reason why many of the scientists wanted to work on the bomb, especially the Jewish emigres, was because they're worried about Hitler getting it first. As you mentioned at some point, 1943, 1944, it was becoming obvious that Hitler, the Nazis were not close to the bomb. And I believe that almost none of the scientists quit after they found out that the Nazis weren't close. So why didn’t more of them say — “Oh, I guess we were wrong. The Nazis aren't going to get it. We don't need to be working on it.”?", "Richard Rhodes 0:16:45", "There was only one who did that, Joseph Rotblat. In May of 1945 when he heard that Germany had been defeated, he packed up and left. General Groves, the imperious Army Corps of Engineers General who ran the entire Manhattan Project, was really upset. He was afraid he'd spill the beans. So he threatened to have him arrested and put in jail. But Rotblat was quite determined not to stay any longer. He was not interested in building bombs to aggrandize the national power of the United States of America, which is perfectly understandable. But why was no one else? Let me tell it in terms of Victor Weisskopf. He was an Austrian theoretical physicist, who, like the others, escaped when the Nazis took over Germany and then Austria and ended up at Los Alamos. Weisskopf wrote later — “There we were in Los Alamos in the midst of the darkest part of our science.” They were working on a weapon of mass destruction, that's pretty dark. He said “Before it had almost seemed like a spiritual quest.”", "And it's really interesting how different physics was considered before and after the Second World War. Before the war, one of the physicists in America named Louis Alvarez told me when he got his PhD in physics at Berkeley in 1937 and went to cocktail parties, people would ask, “What's your degree in?” He would tell them “Chemistry.” I said, “Louis, why?” He said, “because I don't really have to explain what physics was.” That's how little known this kind of science was at that time. There were only about 1,000 physicists in the whole world in 1900. By the mid-30s, there were a lot more, of course. There'd been a lot of nuclear physics and other kinds of physics done by them. But it was still arcane. And they didn't feel as if they were doing anything mean or dirty or warlike at all. They were just doing pure science. Then nuclear fission came along. It was publicized worldwide. People who've been born since after the Second World War don't realize that it was not a secret at first. The news was published first in a German chemistry journal, Die Naturwissenschaften, and then in the British journal Nature and then in American journals. And there were headlines in the New York Times, the Los Angeles Times, the Chicago Tribune, and all over the world.", "People had been reading about and thinking about how to get energy out of the atomic nucleus for a long time. It was clear there was a lot there. All you had to do was get a piece of radium and see that it glowed in the dark. This chunk of material just sat there, you didn't plug it into a wall. And if you held it in your hand, it would burn you. So where did that energy come from? The physicists realized it all came from the nucleus of the atom, which is a very small part of the whole thing. The nucleus is 1/100,000th the diameter of the whole atom. Someone in England described it as about the size of a fly in a cathedral. All of the energy that's involved in chemical reactions, comes from the electron cloud that's around the nucleus. But  it was clear that the nucleus was the center of powerful forces. But the question was, how do you get them out? The only way that the nucleus had been studied up to 1938 was by bombarding it with protons, which have the same electric charge as the nucleus, positive charge, which means they were repelled by it. So you had to accelerate them to high speeds with various versions of the big machines that we've all become aware of since then. The cyclotron most obviously built in the 30s, but there were others as well. And even then, at best, you could chip a little piece off. You could change an atom one step up or one step down the periodic table. This was the classic transmutation of medieval alchemy sure but it wasn't much, you didn't get much out. So everyone came to think of the nucleus of the atom like a little rock that you really had to hammer hard to get anything to happen with it because it was so small and dense. That's why nuclear fission, with this slow neutron drifting and then the whole thing just goes bang, was so startling to everybody. So startling that when it happened, most of the physicists who would later work on the bomb and others as well, realized that they had missed the reaction that was something they could have staged on a lab bench with the equipment on the shelf. Didn't have to invent anything new. And Louis Alvarez again, this physicist at Berkeley, he said — “I was getting my hair cut. When I read the newspaper, I pulled off the robe and half with my hair cut, ran to my lab, pulled some equipment off the shelf, set it up and there it was.” So he said, “I discovered nuclear fission, but it was two days too late.” And that happened all over. People were just hitting themselves on the head and saying, well, Niels Bohr said, “What fools we've all been.” So this is a good example of how in science, if your model you’re working with is wrong it doesn't lead you down the right path.", "There was only one physicist who really was thinking the right way about the uranium atom and that was Niels Bohr. He wondered, sometime during the 30s, why uranium was the last natural element in the periodic table? What is different about the others that would come later? He visualized the nucleus as a liquid drop. I always like to visualize it as a water-filled balloon. It's wobbly, it's not very stable. The protons in the nucleus are held together by something called the strong force, but they still have the repellent positive electric charge that's trying to push them apart when you get enough of them into a nucleus. It's almost a standoff between the strong force and all the electrical charge. So it is like a wobbly balloon of water. And then you see why a neutron just falling into the nucleus would make it wobble around even more and in one of its configurations, it might take a dumbbell shape. And then you'd have basically two charged atoms just barely connected, trying to push each other apart. And often enough, they went the whole way. When they did that, these two new elements, half the weight of uranium, way down the periodic table, would reconfigure themselves into two separate nuclei. And in doing so, they would release some energy. And that was the energy that came out of the reaction and there was a lot of energy. So Bohr thought about the model in the right way. The chemists who actually discovered nuclear fusion didn't know what they were gonna get. They were just bombarding a solution of uranium nitrate with neutrons thinking, well, maybe we can make a new element, maybe a first man-made element will come out of our work. So when they analyzed the solution after they bombarded it, they found elements halfway down the periodic table. They shouldn't have been there. And they were totally baffled. What is this doing here? Do we contaminate our solution? No. They had been working with a physicist named Lisa Meitner who was a theoretical physicist, an Austrian Jew. She had gotten out of Nazi Germany not long before. But they were still in correspondence with her.", "So they wrote her a letter. I held that letter in my hand when I visited Berlin and I was in tears. You don't hold history of that scale in your hands very often. And it said in German — “We found this strange reaction in our solution. What are these elements doing there that don't belong there?” And she went for a walk in a little village in Western Sweden with her nephew, Otto Frisch, who was also a nuclear physicist. And they thought about it for a while and they remembered Bohr's model, the wobbly water-filled balloon. And they suddenly saw what could happen. And that's where the news came from, the physics news as opposed to the chemistry news from the guys in Germany that was published in all the Western journals and all the newspapers. And everybody had been talking about, for years, what you could do if you had that kind of energy. A glass of this material would drive the Queen Mary back and forth from New York to London 20 times and so forth, your automobile could run for months. People were thinking about what would be possible if you had that much available energy. And of course, people had thought about reactors. Robert Oppenheimer was a professor at Berkeley and within a week of the news reaching Berkeley, one of his students told me that he had a drawing on the blackboard, a rather bad drawing of both a reactor and a bomb. So again, because the energy was so great, the physics was pretty obvious. Whether it would actually happen depended on some other things like could you make it chain react? But fundamentally, the idea was all there at the very beginning and everybody jumped on it.", "Dwarkesh Patel 0:27:54", "The book is actually the best history of World War II I've ever read. It's about the atomic bomb, but it's interspersed with the events that are happening in World War II, which motivate the creation of the bomb or the release of it, why it had to be dropped on Japan given the Japanese response. The first third is about the scientific roots of the physics and it's also the best book I've read about the history of science in the early 20th century and the organization of it. There's some really interesting stuff in there. For example, there was a passage where you talk about how there's a real master apprentice model in early science where if you wanted to learn to do this kind of experimentation, you will go to Amsterdam where the master of it is residing. It's much more individual focused.", "Richard Rhodes 0:28:58", "Yeah, the whole European model of graduate study, which is basically the wandering scholar. You could go wherever you wanted to and sign up with whoever was willing to have you sign up.", "(0:29:10) - Firebombing vs nuclear vs hydrogen bombs", "Dwarkesh Patel 0:29:10", "But the question I wanted to ask regarding the history you made of World War II in general is — there's one way you can think about the atom bomb which is that it is completely different from any sort of weaponry that has been developed before it. Another way you can think of it is there's a spectrum where on one end you have the thermonuclear bomb, in the middle you have the atom bomb, and on this end you have the firebombing of cities like Hamburg and Dresden and Tokyo. Do you think of these as completely different categories or does it seem like an escalating gradient to you?", "Richard Rhodes 0:29:47", "I think until you get to the hydrogen bomb, it's really an escalating gradient. The hydrogen bomb can be made arbitrarily large. The biggest one ever tested was 56 megatons of TNT equivalent. The Soviet tested that. That had a fireball more than five miles in diameter, just the fireball. So that's really an order of magnitude change. But the other one's no and in fact, I think one of the real problems, this has not been much discussed and it should be, when American officials went to Hiroshima and Nagasaki after the war, one of them said later — “I got on a plane in Tokyo. We flew down the long green archipelago of the Japanese home island. When I left Tokyo, it was all gray broken roof tiles from the fire bombing and the other bombings. And then all this greenery. And then when we flew over Hiroshima, it was just gray broken roof tiles again.” So the scale of the bombing with one bomb, in the case of Hiroshima, was not that different from the scale of the fire bombings that had preceded it with tens of thousands of bombs. The difference was it was just one plane. In fact, the people in Hiroshima didn't even bother to go into their bomb shelters because one plane had always just been a weather plane. Coming over to check the weather before the bombers took off. So they didn't see any reason to hide or protect themselves, which was one of the reasons so many people were killed. The guys at Los Alamos had planned on the Japanese being in their bomb shelters. They did everything they could think of to make the bomb as much like ordinary bombing as they could. And for example, it was exploded high enough above ground, roughly 1,800 yards, so that the fireball that would form from this really very small nuclear weapon — by modern standards — 15 kilotons of TNT equivalent, wouldn't touch the ground and stir up dirt and irradiate it and cause massive radioactive fallout. It never did that. They weren't sure there would be any fallout. They thought the plutonium and the bomb over Nagasaki now would just kind of turn into a gas and blow away. That's not exactly what happened.", "But people don't seem to realize, and it's never been emphasized enough, these first bombs, like all nuclear weapons, were firebombs. Their job was to start mass fires, just exactly like all the six-pound incendiaries that had been destroying every major city in Japan by then. Every major city above 50,000 population had already been burned out. The only reason Hiroshima and Nagasaki were around to be atomic bombed is because they'd been set aside from the target list, because General Groves wanted to know what the damage effects would be. The bomb that was tested in the desert didn't tell you anything. It killed a lot of rabbits, knocked down a lot of cactus, melted some sand, but you couldn't see its effect on buildings and on people. So the bomb was deliberately intended to be as much not like poison gas, for example, because we didn't want the reputation for being like people in the war in Europe during the First World War, where people were killing each other with horrible gasses. We just wanted people to think this was another bombing. So in that sense, it was. Of course, there was radioactivity. And of course, some people were killed by it. But they calculated that the people who would be killed by the irradiation, the neutron radiation from the original fireball, would be close enough to the epicenter of the explosion that they would be killed by the blast or the flash of light, which was 10,000 degrees. The world's worst sunburn. You've seen stories of people walking around with their skin hanging off their arms. I've had sunburns almost that bad, but not over my whole body, obviously, where the skin actually peeled blisters and peels off. That was a sunburn from a 10,000 degree artificial sun.", "Dwarkesh Patel 0:34:29", "So that's not the heat, that's just the light?", "Richard Rhodes 0:34:32", "Radiant light, radiant heat. 10,000 degrees. But the blast itself only extended out a certain distance, it was fire. And all the nuclear weapons that have ever been designed are basically firebombs. That's important because the military in the United States after the war was not able to figure out how to calculate the effects of this weapon in a reliable way that matched their previous experience. They would only calculate the blast effects of a nuclear weapon when they figured their targets. That's why we had what came to be called overkill. We wanted redundancy, of course, but 60 nuclear weapons on Moscow was way beyond what would be necessary to destroy even that big a city because they were only calculating the blast. But in fact, if you exploded a 300 kiloton nuclear warhead over the Pentagon at 3,000 feet, it would blast all the way out to the capital, which isn't all that far. But if you counted the fire, it would start a mass-fire and then it would reach all the way out to the Beltway and burn everything between the epicenter of the weapon and the Beltway. All organic matter would be totally burned out, leaving nothing but mineral matter, basically.", "Dwarkesh Patel 0:36:08", "I want to emphasize two things you said because they really hit me in reading the book and I'm not sure if the audience has fully integrated them. The first is, in the book, the military planners and Groves, they talk about needing to use the bomb sooner rather than later, because they were running out of cities in Japan where there are enough buildings left that it would be worth bombing in the first place, which is insane. An entire country is almost already destroyed from fire bombing alone. And the second thing about the category difference between thermonuclear and atomic bombs. Daniel Ellsberg, the nuclear planner who wrote the Doomsday machine , he talks about, people don't understand that the atom bomb that resulted in the pictures we see of Nagasaki and Hiroshima, that is simply the detonator of a modern nuclear bomb, which is an insane thing to think about. So for example, 10 and 15 kilotons is the Hiroshima Nagasaki and the Tsar Bomba, which was 50 megatons. So more than 1,000 times as much. And that wasn't even as big as they could make it. They kept the uranium tamper off, because they didn't want to destroy all of Siberia. So you could get more than 10,000 times as powerful.", "Richard Rhodes 0:37:31", "When Edward Teller, co-inventor of the hydrogen bomb and one of the dark forces in the story, was consulting with our military, just for his own sake, he sat down and calculated, how big could you make a hydrogen bomb? He came up with 1,000 megatons. And then he looked at the effects. 1,000 megatons would be a fireball 10 miles in diameter. And the atmosphere is only 10 miles deep. He figured that it would just be a waste of energy, because it would all blow out into space. Some of it would go laterally, of course, but most of it would just go out into space. So a bomb more than 100 megatons would just be totally a waste of time. Of course, a 100 megatons bomb is also a total waste, because there's no target on Earth big enough to justify that from a military point of view. Robert Oppenheimer, when he had his security clearance questioned and then lifted when he was being punished for having resisted the development of the hydrogen bomb, was asked by the interrogator at this security hearing — “Well, Dr. Oppenheimer, if you'd had a hydrogen bomb for Hiroshima, wouldn't you have used it?” And Oppenheimer said, “No.” The interrogator asked, “Why is that?” He said because the target was too small. I hope that scene is in the film, I'm sure it will be.", "So after the war, when our bomb planners and some of our scientists went into Hiroshima and Nagasaki, just about as soon as the surrender was signed, what they were interested in was the scale of destruction, of course. And those two cities didn't look that different from the other cities that had been firebombed with small incendiaries and ordinary high explosives. They went home to Washington, the policy makers, with the thought that — “Oh, these bombs are not so destructive after all.” They had been touted as city busters, basically, and they weren't. They didn't completely burn out cities. They were not certainly more destructive than the firebombing campaign, when everything of more than 50,000 population had already been destroyed. That, in turn, influenced the judgment about what we needed to do vis-a-vis the Soviet Union when the Soviets got the bomb in 1949. There was a general sense that, when you could fight a war with nuclear weapons, deterrence or not, you would need quite a few of them to do it right. And the Air Force, once it realized that it could aggrandize its own share of the federal budget by cornering the market and delivering nuclear weapons, very quickly decided that they would only look at the blast effect and not the fire effect. It's like tying one hand behind your back. Most of it was a fire effect. So that's where they came up with numbers like we need 60 of these to take out Moscow. And what the Air Force figured out by the late 1940s is that the more targets, the more bombs. The more bombs, the more planes. The more planes, the biggest share of the budget. So by the mid 1950s, the Air Force commanded 47% of the federal defense budget. And the other branches of services, which had not gone nuclear by then, woke up and said, we'd better find some use for these weapons in our branches of service. So the Army discovered that it needed nuclear weapons, tactical weapons for field use, fired out of cannons. There was even one that was fired out of a shoulder mounted rifle. There was a satchel charge that two men could carry, weighed about 150 pounds, that could be used to dig a ditch so that Soviet tanks couldn't cross into Germany. And of course the Navy by then had been working hard with General Rickover on building a nuclear submarine that could carry ballistic missiles underwater in total security. No way anybody could trace those submarines once they were quiet enough. And a nuclear reactor is very quiet. It just sits there with neutrons running around, making heat. So the other services jumped in and this famous triad, we must have these three different kinds of nuclear weapons, baloney. We would be perfectly safe if we only had our nuclear submarines. And only one or two of those. One nuclear submarine can take out all of Europe or all of the Soviet Union.", "Dwarkesh Patel 0:42:50", "Because it has multiple nukes on it?", "Richard Rhodes 0:42:53", "Because they have 16 intercontinental ballistic missiles with MIRV warheads, at least three per missile.", "Dwarkesh Patel 0:43:02", "Wow. I had a former guest, Richard Hanania, who has a book about foreign policy where he points out that our model of thinking about why countries do the things they do, especially in foreign affairs, is wrong because we think of them as individual rational actors, when in fact it's these competing factions within the government. And in fact, you see this especially in the case of Japan in World War II, there was a great book of Japan leading up to World War II, where they talk about how a branch of the Japanese military, I forget which, needed more oil to continue their campaign in Manchuria so they forced these other branches to escalate. But it’s so interesting that the reason we have so many nukes is that the different branches are competing for funding.", "Richard Rhodes 0:43:50", "Douhet, the theorist of air power, had been in the trenches in the First World War. Somebody (John Masefield) called the trenches of the First World War, the long grave already dug, because millions of men were killed and the trenches never moved, a foot this way, a foot that way, all this horror. And Douhet came up with the idea that if you could fly over the battlefield to the homeland of the enemy and destroy his capacity to make war, then the people of that country, he theorized, would rise up in rebellion and throw out their leaders and sue for peace. And this became the dream of all the Air Forces of the world, but particularly ours. Until around 1943, it was called the US Army Air Force.", "The dream of every officer in the Air Force was to get out from under the Army, not just be something that delivers ground support or air support to the Army as it advances, but a power that could actually win wars. And the missing piece had always been the scale of the weaponry they carried. So when the bomb came along, you can see why Curtis LeMay, who ran the strategic air command during the prime years of that force, was pushing for bigger and bigger bombs. Because if a plane got shot down, but the one behind it had a hydrogen bomb, then it would be just almost as effective as the two planes together. So they wanted big bombs. And they went after Oppenheimer because he thought that was a terrible way to go, that there was really no military use for these huge weapons. Furthermore, the United States had more cities than Russia did, than the Soviet Union did. And we were making ourselves a better target by introducing a weapon that could destroy a whole state. I used to live in Connecticut and I saw a map that showed the air pollution that blew up from New York City to Boston. And I thought, well, now if that was fallout, we'd be dead up here in green, lovely Connecticut. That was the scale that it was going to be with these big new weapons. So on the one hand, you had some of the important leaders in the government thinking that these weapons were not the war-winning weapons that the Air Force wanted them and realized they could be. And on the other hand, you had the Air Force cornering the market on nuclear solutions to battles. All because some guy in a trench in World War I was sufficiently horrified and sufficiently theoretical about what was possible with air power. Remember, they were still flying biplanes.", "When H.G. Wells wrote his novel, The World Set Free in 1913, predicting an atomic war that would lead to world government, he had Air Forces delivering atomic bombs, but he forgot to update his planes. The guys in the back seat, the bombardiers, were sitting in a biplane, open cockpit. And when the pilots had dropped the bomb, they would reach down and pick up H.G. Wells' idea of an atomic bomb and throw it over the side. Which is kind of what was happening in Washington after the war. And it led us to a terribly misleading and unfortunate perspective on how many weapons we needed, which in turn fermented the arms race with the Soviets and just chased off. In the Soviet Union, they had a practical perspective on factories. Every factory was supposed to produce 120% of its target every year. That was considered good Soviet realism. And they did that with their nuclear war weapons. So by the height of the Cold War, they had 75,000 nuclear weapons, and nobody had heard yet of nuclear winter. So if both sides had set off this string of mass traps that we had in our arsenals, it would have been the end of the human world without question.", "Dwarkesh Patel 0:48:27", "It raises an interesting question, if the military planners thought that the conventional nuclear weapon was like the fire bombing, would it have been the case that if there wasn't a thermonuclear weapon, that there actually would have been a nuclear war by now because people wouldn't have been thinking of it as this hard red line?", "Richard Rhodes 0:48:47", "I don't think so because we're talking about one bomb versus 400, and one plane versus 400 planes and thousands of bombs. That scale was clear. Deterrence was the more important business. Everyone seemed to understand even the spies that the Soviets had connected up to were wholesaling information back to the Soviet Union. There's this comic moment when Truman is sitting with Joseph Stalin at Potsdam, and he tells Stalin, we have a powerful new weapon. And that's as much as he's ready to say about it. And Stalin licks at him and says, “Good, I hope you put it to good use with the Japanese.” Stalin knows exactly what he's talking about. He's seen the design of the fat man type Nagasaki plutonium bomb. He has held it in his hands because they had spies all over the place.", "(0:49:44) - Stalin & the Soviet program", "Dwarkesh Patel 0:49:44", "How much longer would it have taken the Soviets to develop the bomb if they didn't have any spies?", "Richard Rhodes 0:49:49", "Probably not any longer.", "Dwarkesh Patel 0:49:51", "Really?", "Richard Rhodes 0:49:51", "When the Soviet Union collapsed in the winter of ‘92, I ran over there as quickly as I could get over there. In this limbo between forming a new kind of government and some of the countries pulling out and becoming independent and so forth, their nuclear scientists, the ones who'd worked on their bombs were free to talk. And I found that out through Yelena Bonner, Andrei Sakharov's widow, who was connected to people I knew. And she said, yeah, come on over. Her secretary, Sasha, who was a geologist about 35 years old became my guide around the country. We went to various apartments. They were retired guys from the bomb program and were living on, as far as I could tell, sac-and-potatoes and some salt. They had government pensions and the money was worth a salt, all of a sudden. I was buying photographs from them, partly because I needed the photographs and partly because 20 bucks was two months' income at that point. So it was easy for me and it helped them. They had first class physicists in the Soviet Union, they do in Russian today. They told me that by 1947, they had a design for a bomb that they said was half the weight and twice the yield of the Fat Man bomb. The Fat Man bomb was the plutonium implosion, right? And it weighed about 9,000 pounds. They had a much smaller and much more deliverable bomb with a yield of about 44 kilotons.", "Dwarkesh Patel 0:51:41", "Why was Soviet physics so good?", "Richard Rhodes 0:51:49", "The Russian mind? I don't know. They learned all their technology from the French in the 19th century, which is why there's so many French words in Russian. So they got good teachers, the French are superb technicians, they aren't so good at building things, but they're very good at designing things. There's something about Russia, I don't know if it's the language or the education. They do have good education, they did. But I remember asking them when they were working, I said — On the hydrogen bomb, you didn't have any computers yet. We only had really early primitive computers to do the complicated calculations of the hydrodynamics of that explosion. I said, “What did you do?” They said, “Oh, we just used nuclear. We just used theoretical physics.” Which is what we did at Los Alamos. We had guys come in who really knew their math and they would sit there and work it out by hand. And women with old Marchant calculators running numbers. So basically they were just good scientists and they had this new design. Kurchatov who ran the program took Lavrentiy Beria, who ran the NKVD who was put in charge of the program and said — “Look, we can build you a better bomb. You really wanna waste the time to make that much more uranium and plutonium?” And Beria said, “Comrade, I want the American bomb. Give me the American bomb or you and all your families will be camp dust.” I talked to one of the leading scientists in the group and he said, we valued our lives, we valued our families. So we gave them a copy of the plutonium implosion bomb.", "Dwarkesh Patel 0:53:37", "Now that you explain this, when the Soviet Union fell, why didn’t North Korea, Iran or another country, send a few people to the fallen Soviet Union to recruit a few of the scientists to start their own program? Or buy off their stockpiles or something. Or did they?", "Richard Rhodes 0:53:59", "There was some effort by countries in the Middle East to get all the enriched uranium, which they wouldn't sell them. These were responsible scientists. They told me — we worked on the bomb because you had it and we didn't want there to be a monopoly on the part of any country in the world. So patriotically, even though Stalin was in charge of our country, he was a monster. We felt that it was our responsibility to work on these things, even Sakharov. There was a great rush at the end of the Second World War to get hold of German scientists. And about an equal number were grabbed by the Soviets. All of the leading German scientists, like Heisenberg and Hans and others, went west as fast as they could. They didn't want to be captured by the Soviets. But there were some who were. And they helped them work. People have the idea that Los Alamos was where the bomb happened. And it's true that at Los Alamos, we had the team that designed, developed, and built the first actual weapons. But the truth is, the important material for weapons is the uranium or plutonium. One of the scientists in the Manhattan Project told me years later, you can make a pretty high-level nuclear explosion just by taking two subcritical pieces of uranium, putting one on the floor and dropping the other by hand from a height of about six feet. If that's true, then all this business about secret designs and so forth is hogwash. What you really need for a weapon is the critical mass of highly enriched uranium, 90% of uranium-235. If you've got that, there are lots of different ways to make the bomb. We had two totally different ways that we used. The gun on the one hand for uranium, and then because plutonium was so reactive that if you fired up the barrel of a cannon at 3,000 feet per second, it would still melt down before the two pieces made it up. So for that reason, they had to invent an entirely new technology, which was an amazing piece of work.", "From the Soviet point of view, and I think this is something people don't know either, but it puts the Russian experience into a better context. All the way back in the 30s, since the beginning of the Soviet Union after the First World War, they had been sending over espionage agents connected up to Americans who were willing to work for them to collect industrial technology. They didn't have it when they began their country. It was very much an agricultural country. And in that regard, people still talk about all those damn spies stealing our secrets, we did the same thing with the British back in colonial days. We didn't know how to make a canal that wouldn't drain out through the soil. The British had a certain kind of clay that they would line their canals with, and there were canals all over England, even in the 18th century, that were impervious to the flow of water. And we brought a British engineer at great expense to teach us how to make the lining for the canals that opened up the Middle West and then the West. So they were doing the same thing. And one of those spies was a guy named Harry Gold, who was working all the time for them. He gave them some of the basic technology of Kodak filmmaking, for example. Harry Gold was the connection between David Greenglass and one of the American spies at Los Alamos and the Soviet Union. So it was not different. The model was — never give us something that someone dreamed of that hasn't been tested and you know works. So it would actually be blueprints for factories, not just a patent.", "And therefore when Beria after the war said, give us the bomb, he meant give me the American bomb because we know that works. I don't trust you guys. Who knows what you'll do. You're probably too stupid anyway. He was that kind of man. So for all of those reasons, they built the second bomb they tested was twice the yield and half the way to the first bomb. In other words, it was their new design. And so it was ours because the technology was something that we knew during the war, but it was too theoretical still to use. You just had to put the core and have a little air gap between the core and the explosives so that the blast wave would have a chance to accelerate through an open gap. And Alvarez couldn’t tell me what it was but he said, you can get a lot more destructive force with a hammer if you hit something with it, rather than if you put the head on the hammer and push. And it took me several years before I figured out what he meant. I finally understood he was talking about what's called levitation.", "Dwarkesh Patel 0:59:41", "On the topic that the major difficulty in developing a bomb is either the refinement of uranium into U-235 or its transmutation into plutonium, I was actually talking to a physicist in preparation for this conversation. He explained the same thing that if you get two subcritical masses of uranium together, you wouldn't have the full bomb because it would start to tear itself apart without the tamper, but you would still have more than one megaton.", "Richard Rhodes 1:00:12", "It would be a few kilotons. Alvarez's model would be a few kilotons, but that's a lot.", "Dwarkesh Patel 1:00:20", "Yeah, sorry I meant kiloton. He claimed that one of the reasons why we talk so much about Los Alamos is that at the time the government didn't want other countries to know that if you refine uranium, you've got it. So they were like, oh, we did all this fancy physics work in Los Alamos that you're not gonna get to, so don't even worry about it. I don't know what you make of that theory. That basically it was sort of a way to convince people that Los Alamos was important.", "Richard Rhodes 1:00:49", "I think all the physics had been checked out by a lot of different countries by then. It was pretty clear to everybody what you needed to do to get to a bomb. That there was a fast fusion reaction, not a slow fusion reaction, like a reactor. They'd worked that out. So I don't think that's really the problem. But to this day, no one ever talks about the fact that the real problem isn't the design of the weapon. You could make one with wooden boxes if you wanted to. The problem is getting the material. And that's good because it's damned hard to make that stuff. And it's something you can protect.", "Dwarkesh Patel 1:01:30", "We also have gotten very lucky, if lucky is the word you want to use. I think you mentioned this in the book at some point, but the laws of physics could have been such that unrefined uranium ore was enough to build a nuclear weapon, right? In some sense, we got lucky that it takes a nation-state level actor to really refine and produce the raw substance.", "Richard Rhodes 1:01:56", "Yeah, I was thinking about that this morning on the way over. And all the uranium in the world would already have destroyed itself. Most people have never heard of the living reactors that developed on their own in a bed of uranium ore in Africa about two billion years ago, right? When there was more U-235 in a mass of uranium ore than there is today, because it decays like all radioactive elements. And the French discovered it when they were mining the ore and found this bed that had a totally different set of nuclear characteristics. They were like, what happened? But there were natural reactors in Gabon once upon a time. And they started up because some water, a moderator to make the neutrons slow down, washed its way down through a bed of much more highly enriched uranium ore than we still have today. Maybe 5-10% instead of 3.5 or 1.5, whatever it is now. And they ran for about 100,000 years and then shut themselves down because they had accumulated enough fusion products that the U-235 had been used up. Interestingly, this material never migrated out of the bed of ore. People today who are anti-nuclear say, well, what are we gonna do about the waste? Where are we gonna put all that waste? It's silly.", "Dwarkesh Patel 1:03:35", "Shove it in a hole.", "Richard Rhodes 1:03:36", "Yeah, basically. That's exactly what we're planning to do. Holes that are deep enough and in beds of material that will hold them long enough for everything to decay back to the original ore. It's not a big problem except politically because nobody wants it in their backyard.", "Dwarkesh Patel 1:03:53", "On the topic of the Soviets, one question I had while reading the book was — we negotiated with Stalin at Yalta and we surrendered a large part of Eastern Europe to him under his sphere of influence. And obviously we saw 50 years of immiseration there as a result. Given the fact that only we had the bomb, would it have been possible that we could have just knocked out the Soviet Union or at least prevented so much of the world from succumbing to communism in the aftermath of World War II? Is that a possibility?", "Richard Rhodes 1:04:30", "When we say we had the bomb, we had a few partly assembled handmade bombs. It took almost as long to assemble one as the battery life of the batteries that would drive the original charge that would set off the explosion. It was a big bluff. You know, when they closed Berlin in 1948 and we had to supply Berlin by air with coal and food for a whole winter, we moved some B-29s to England. The B-29 being the bomber that had carried the bombs. They were not outfitted for nuclear weapons. They didn't have the same kind of bomb-based structure. The weapons that were dropped in Japan had a single hook that held the entire bomb. So when the bay opened and the hook was released, the thing dropped. And that's very different from dropping whole rows of small bombs that you've seen in the photographs and the film footage. So it was a big bluff on our part. We took some time after the war inevitably to pull everything together. Here was a brand new technology. Here was a brand new weapon. Who was gonna be in charge of it? The military wanted control, Truman wasn't about to give the military control. He'd been an artillery officer in the First World War. He used to say — “No, damn artillery captain is gonna start World War III when I'm president.” I grew up in the same town he lived in so I know his accent. Independence, Missouri. Used to see him at his front steps taking pictures with tourists while he was still president. He used to step out on the porch and let the tourists take photographs. About a half a block from my Methodist church where I went to church. It was interesting. Interestingly, his wife was considered much more socially acceptable than he was. She was from an old family in independence, Missouri. And he was some farmer from way out in Grandview, Missouri, South of Kansas City. Values.", "Anyway, at the end of the war, there was a great rush from the Soviet side of what was already a zone. There was a Soviet zone, a French zone, British zone and an American zone. Germany was divided up into those zones to grab what's left of the uranium ore that the Germans had stockpiled. And there was evidence that there was a number of barrels of the stuff in a warehouse somewhere in the middle of all of this. And there's a very funny story about how the Russians ran in and grabbed off one site full of uranium ore, this yellow black stuff in what were basically wine barrels. And we at the same night, just before the wall came down between the zones, were running in from the other side, grabbing some other ore and then taking it back to our side. But there was also a good deal of requisitioning of German scientists. And the ones who had gotten away early came West, but there were others who didn't and ended up helping the Soviets. And they were told, look, you help us build the reactors and the uranium separation systems that we need. And we'll let you go home and back to your family, which they did. Early 50s by then, the German scientists who had helped the Russians went home. And I think our people stayed here and brought their families over, I don't know.", "(1:08:24) - Deterrence, disarmament, North Korea, Taiwan", "Dwarkesh Patel 1:08:24", "Was there an opportunity after the end of World War II, before the Soviets developed the bomb, for the US to do something where either it somehow enforced a monopoly on having the bomb, or if that wasn't possible, make some sort of credible gesture that, we're eliminating this knowledge, you guys don't work on this, we're all just gonna step back from this.", "Richard Rhodes 1:08:50", "We tried both before the war. General Groves, who had the mistaken impression that there was a limited amount of high-grade uranium ore in the world, put together a company that tried to corner the market on all the available supply. For some reason, he didn't realize that a country the size of the Soviet Union is going to have some uranium ore somewhere. And of course it did, in Kazakhstan, rich uranium ore, enough for all the bombs they wanted to build. But he didn't know that, and I frankly don't know why he didn't know that, but I guess uranium's use before the Second World War was basically as a glazing agent for pottery, that famous yellow pottery and orange pottery that people owned in the 1930s, those colors came from uranium, and they're sufficiently radioactive, even to this day, that if you wave a Geiger counter over them, you get some clicks. In fact, there have been places where they've gone in with masks and suits on, grabbed the Mexican pottery and taken it out in a lead-lined case. People have been so worried about it but that was the only use for uranium, to make a particular kind of glass. So once it became clear that there was another use for uranium, a much more important one, Groves tried to corner the world market, and he thought he had. So that was one effort to limit what the Soviet Union could do.", "Another was to negotiate some kind of agreement between the parties. That was something that really never got off the ground, because the German Secretary of State was an old Southern politician and he didn't trust the Soviets. He went to the first meeting, in Geneva in ‘45 after the war was over, and strutted around and said, well, I got the bomb in my pocket, so let's sit down and talk here. And the Soviet basically said, screw you. We don't care. We're not worried about your bomb. Go home. So that didn't work. Then there was the effort to get the United Nations to start to develop some program of international control. And the program was proposed originally by a committee put together by our State Department that included Robert Oppenheimer, rightly so, because the other members of the committee were industrialists, engineers, government officials, people with various kinds of expertise around the very complicated problems of technology and the science and, of course, the politics, the diplomacy. In a couple of weeks, Oppenheimer taught them the basics of the nuclear physics involved and what he knew about bomb design, which was everything, actually, since he'd run Los Alamos. He was a scientist during the war. And they came up with a plan. People have scoffed ever since at what came to be called the Acheson-Lilienthal plan named after the State Department people. But it's the only plan I think anyone has ever devised that makes real sense as to how you could have international control without a world government. Every country would be open to inspection by any agency that was set up. And the inspections would not be at the convenience of the country. But whenever the inspectors felt they needed to inspect. So what Oppenheimer called an open world. And if you had that, and then if each country then developed its own nuclear industries, nuclear power, medical uses, whatever, then if one country tried clandestinely to begin to build bombs, you would know about it at the time of the next inspection. And then you could try diplomacy. If that didn't work, you could try conventional war. If that wasn't sufficient, then you could start building your bombs too. And at the end of this sequence, which would be long enough, assuming that there were no bombs existing in the world, and the ore was stored in a warehouse somewhere, six months maybe, maybe a year, it would be time for everyone to scale up to deterrence with weapons rather than deterrence without weapons, with only the knowledge.", "That to me is the answer to the whole thing. And it might have worked. But there were two big problems. One, no country is going to allow a monopoly on a nuclear weapon, at least no major power. So the Russians were not willing to sign on from the beginning. They just couldn't. How could they? We would not have. Two, Sherman assigned a kind of a loudmouth, a wise old Wall Street guy to present this program to the United Nations. And he sat down with Oppenheimer after he and his people had studied and said, where's your army? Somebody starts working on a bomb over there. You've got to go in and take that out, don't you? He said, what would happen if one country started building a bomb? Oppenheimer said, well, that would be an act of war. Meaning then the other countries could begin to escalate as they needed to to protect themselves against one power, trying to overwhelm the rest. Well, Bernard Baruch was the name of the man. He didn't get it. So when he presented his revised version of the Acheson–Lilienthal Plan, which was called the Baruch Plan to the United Nations, he included his army. And he insisted that the United States would not give up its nuclear monopoly until everyone else had signed on. So of course, who's going to sign on to that deal?", "Dwarkesh Patel 1:15:24", "I feel he has a point in the sense that — World War II took five years or more. If we find that the Soviets are starting to develop a bomb, it's not like within the six months or a year or whatever, it would take them to start refining the ore. And to the point we found out that they've been refining ore to when we start a war and engage in it, and doing all the diplomacy. By that point, they might already have the bomb. And so we're behind because we dismantled our weapons. We are only starting to develop our weapons once we've exhausted these other avenues.", "Richard Rhodes 1:16:00", "Not to develop. Presumably we would have developed. And everybody would have developed anyway. Another way to think of this is as delayed delivery times. Takes about 30 minutes to get an ICBM from Central Missouri to Moscow. That's the time window for doing anything other than starting a nuclear war. So take the warhead off those missiles and move it down the road 10 miles. So then it takes three hours. You've got to put the warhead back on the missiles. If the other side is willing to do this too. And you both can watch and see. We require openness. A word Bohr introduced to this whole thing. In order to make this happen, you can't have secrets. And of course, as time passed on, we developed elaborate surveillance from space, surveillance from planes, and so forth. It would not have worked in 1946 for sure. The surveillance wasn’t there. But that system is in place today. The International Atomic Energy Agency has detected systems in air, in space, underwater. They can detect 50 pounds of dynamite exploded in England from Australia with the systems that we have in place. It's technical rather than human resources. But it's there. So it's theoretically possible today to get started on such a program. Except, of course, now, in like 1950, the world is awash in nuclear weapons. Despite the reductions that have occurred since the end of the Cold War, there's still 30,000-40,000 nuclear weapons in the world. Way too many.", "Dwarkesh Patel 1:18:01", "Yeah. That's really interesting. What percentage of warheads do you think are accounted for by this organization? If there's 30,000 warheads, what percentage are accounted for?", "Richard Rhodes 1:18:12", "All.", "Dwarkesh Patel 1:18:12", "Oh. Really?  North Korea doesn't have secrets?", "Richard Rhodes 1:18:13", "They're allowed to inspect anywhere without having to ask the government for permission.", "Dwarkesh Patel 1:18:18", "But presumably not North Korea or something, right?", "Richard Rhodes 1:18:21", "North Korea is an exception. But we keep pretty good track of North Korea needless to say.", "Dwarkesh Patel 1:18:27", "Are you surprised with how successful non-proliferation has been? The number of countries with nuclear weapons has not gone up for decades. Given the fact, as you were talking about earlier, it's simply a matter of refining or transmuting uranium. Is it surprising that there aren’t more countries that have it?", "Richard Rhodes 1:18:42", "That's really an interesting part. Again, a part of the story that most people have never really heard. In the 50s, before the development and signing of the Nuclear Non-Proliferation Treaty, which was 1968 and it took effect in 1970, a lot of countries that you would never have imagined were working on nuclear weapons. Sweden, Norway, Japan, South Korea. They had the technology. They just didn't have the materials. It was kind of dicey about what you should do. But I interviewed some of the Swedish scientists who worked on their bomb and they said, well, we were just talking about making some tactical nukes that would slow down a Russian tank advance on our country long enough for us to mount a defense. I said, so why did you give it up? And they said, well, when the Soviets developed hydrogen bombs, it would only take two to destroy Sweden. So we didn't see much point. And we then signed the Non-Proliferation Treaty. And our knowledge of how to build nuclear weapons helped us deal with countries like South Africa, which did build a few bombs in the late 1980s. Six World War II-type gun bombs fueled with enriched uranium, because South Africa is awash in uranium ore and makes a lot of uranium for various purposes. So efforts were starting up. And that's where John Kennedy got the numbers in a famous speech he delivered, where he said, I lose sleep at night over the real prospect of there being 10 countries with nuclear weapons by 1970 and 30 by 1980. And of course, that would have been a nightmare world, because the risk of somebody using them would have gone up accordingly.", "But after the Cuban Missile Crisis, we and the Soviet Union basically said, we've got to slow this thing down for us and for others as well. And the treaty that was then put together and negotiated offered a good deal. It said, if you don't build nuclear weapons, we will give you the knowledge to build nuclear energy technology that will allow you to forge ahead very successfully with that. There was a belief in the early years of nuclear weapons that as soon as the technology was learned by a country, they would immediately proceed to build the bomb. And no one really thought it through. It seemed sort of self-evident. But it wasn't self-evident. There are dangers to building a nuclear weapon and having an arsenal. If you're a little country and you have a nuclear arsenal, you have the potential to destroy a large country, or at least disable a large country, because you have these terribly destructive weapons. That makes you a target. That means that a large country is going to look at you and worry about you, which they never would have before. That kind of logic dawned on everybody at that point. And they were getting a good deal.", "And the other part of the deal was the part that the nuclear powers never kept to this day, which was an agreement that we would work seriously and vigorously toward nuclear disarmament. We didn't do that. We just told them we would. And then kind of snuck around on the sides. So much so that by this treaty, because no one was quite trusting of the whole deal, treaties are usually signed and they exist in perpetuity. They don't have any end date. They go on until somebody breaks the rules. But this treaty was given a 25-year review period, which would have been 1995, at which point if the countries had chosen to abrogate the treaty, it would have been set aside. And everybody could have gone back to making nuclear weapons. It almost came to that for the very reason that the main nuclear powers had not fulfilled their agreement to start reducing arsenals. We didn't start reducing our nuclear arsenal until the end of the Cold War, until the collapse of the Soviet Union. That's when we began cutting back, as did the former Soviet Union. A diplomat who's a friend of mine, Tom Graham, was assigned the task by our State Department of going around to the countries that were going to be voting on this renewal or not of the treaty and convincing their leaderships around the world. It wasn't in their best interest to abrogate the treaty at that point. Tom spent two years on the road. The only place he thought he should go is not the UN, where there's a second-level diplomat he could have talked to, but back to the home countries. And he convinced enough countries around the world. He's another hero who's never been properly celebrated. He convinced enough countries around the world that they did agree to extend the treaty in perpetuity. With the proviso that the goddamn nine nuclear powers get busy eliminating their nukes.", "And of course, George H.W. Bush, bless him, I didn't like his politics otherwise, but he stepped forward and split the nuclear arsenal in half right away. We dropped our numbers way, way lower than we had been. He pulled the amount of South Korea, which was a great bugaboo for both the Soviets and the North Koreans and China, and did a lot of good work toward moving toward a real reduction in nuclear arsenal. And the Russians agreed at that time. It was before Putin took power. So there was a change for the better, but there are still too many around, unfortunately. So that's why there are only nine nuclear powers to this day.", "Dwarkesh Patel 1:25:16", "How worried are you about a proxy war between great powers turning nuclear? For example, people have been worried about the Ukraine conflict for this reason. In the future, if we're facing an invasion of Taiwan by China, that's another thing to worry about. I had a friend who understands these things really well, and we were arguing because I thought, listen, if there's like a war, if there's a risk of nuclear war, let them take Taiwan. We'll build semiconductor factories in Arkansas. Who cares, right? And he explains, no, you don't understand, because if we don't protect Taiwan, then Japan and South Korea decide to go nuclear because they're like America won't protect us. And if they go nuclear, then the risk of nuclear war actually goes up, not down.", "Richard Rhodes 1:26:02", "Or they just decide to align with China. Yeah, because we didn't protect them with our nuclear umbrella the way we promised.", "Dwarkesh Patel 1:26:10", "Oh, I guess we haven't promised Taiwan that, but it's implied, I guess.", "Richard Rhodes 1:26:14", "I think it's implied. Yeah. If we said we're going to help defend them, that's what that means.", "Dwarkesh Patel 1:26:19", "Yeah. But anyway, the question was, how worried are you about proxy wars turning nuclear?", "Richard Rhodes 1:26:26", "There's been a lot of argument about whether nuclear deterrence actually works or not. The best evidence is that the United States fought a number of wars on the periphery, beginning with Korea and then Vietnam, and some other smaller things in between, where we were willing to accept defeat. We accepted defeat in Vietnam for sure, rather than use our nuclear arsenal, always because behind those peripheral countries, was a major nuclear power. China, Soviet Union, whatever. And we didn't want to risk a nuclear war. So at that level, deterrence really seemed to work for a long time. But there's been a change lately. And I find it kind of terrifying. The first manifestation was with India and Pakistan. They both went nuclear full scale in the late 1990s. India had tested one bomb in 1974, which they claimed was a peaceful explosion, whatever that is. But they hadn't proceeded anywhere from there. And Pakistan had tested their first bomb in China when they got it from AQ Khan, the same guy who was trying to proliferate to Iran a little later in Iraq. But they didn't build a lot of warheads either. And then their conflict or the personalities involved in the governments got sideways with each other. And both sides tested a quick flurry of four or five bombs each around 1997 or 1998. Now they were full fledged nuclear powers on the scale of two regional countries. But then in 1999, there was a border conflict between the two countries. And Pakistan came up with a whole new argument about nuclear deterrence. Not only could you prevent a nuclear escalation, but if you kept your deterrent in place, you could have a conventional war with the other side not willing to escalate to nuclear because you still had your nuclear arsenal. And that, which came very close to a nuclear exchange, we jumped in with both feet, believe me, and we're all over both countries about, don't do this, don't go this far, and they backed off. But Putin or someone in the Russian complex picked up on that new approach. And it's the one Putin is using now. He's basically saying, I'm having a conventional war here. And don't you dare introduce nuclear weapons or I will. In fact, if you defeat me, I may introduce nuclear weapons. Screw you. I'm going to use my nukes as a backup. That's new. And it's terrifying because it's a totally different kind of deterrence that risks going too far, too fast to be able to know what's going on.", "Dwarkesh Patel 1:29:47", "And in some sense, he is calling our bluff or at least I hope he is. Obviously you shouldn’t say this but hopefully the government would not respond to a tactical nuke used in Ukraine by getting the US annihilated.", "Richard Rhodes 1:30:04", "I don't think we would respond to it with a full scale nuclear attack. But I do think we would respond with some level of nuclear exchange with Russia or maybe just with that part of Russia, I don't know. We've long had a policy called decapitation. We long ago could the individual apartments and [unclear] of this Russian leadership with individual warheads. In the window at high noon, because they are very accurate now and they're totally stealthy. If you're thinking about cruise missiles, we can put one in someone's window and it's a nuclear warhead and not just a high explosive. They've known that for a long time. That doesn't give anybody time to get into the bomb shelter. This has gotten really very hairy. Used to be pretty straightforward. Don't bomb us, or we will bomb you. Attack in some peripheral country and we won't bomb you because you might bomb us. We'll lose that little war, but we'll come in somewhere else. All of these things, that's complicated enough. But now we're talking about this other level.", "Dwarkesh Patel 1:31:23", "So in some sense, this idea that we can backstop conventional power with nuclear weapons worked. After World War Two, the Soviets had millions of men in the Red Army stationed in Eastern Europe and we didn't have troops remaining in Western Europe but we said, listen, if you invade Western Europe, we'll respond with a nuclear attack. I guess that worked.", "Richard Rhodes 1:31:51", "It worked until August 1949, when the Soviets tested their first atomic bomb. And that's when panic hit Washington. And the whole debate went on about, do we build hydrogen bombs? And ultimately, the military prevailed and the chairman signed on. And plus, Fuchs was outed and there was this whole knowledge that the Russians knew a lot because Fuchs knew our early work on hydrogen weapons before the end of the war. We were exploring the possibility. All of that came together too with a kind of a terrible moment when the Teller side prevailed. And we said, let's build bigger bombs, as if that would somehow help. But there had been a balance of forces, you're quite right. They had two million men on the ground in Europe. We had the bomb. And then the balance of force was disrupted when they got the bomb. So then how do we rebalance? And the rebalance was the hydrogen bomb. And that's how you get to Armageddon ultimately, unfortunately.", "Dwarkesh Patel 1:32:57", "I was reading the acknowledgements and you talked to Teller in writing the book. Is that right?", "Richard Rhodes 1:33:02", "I did.", "Dwarkesh Patel 1:33:03", "And obviously, he was a big inspiration for the US pursuing the hydrogen bomb. What were his feelings when you talked to him about this?", "(1:33:12) - Oppenheimer as lab director", "Richard Rhodes 1:33:12", "I made the mistake of going to see Teller at his house on the grounds of Stanford University early on in my research when I really didn't have as clear a grasp of who everyone was and what I should ask and so forth. I sent him a copy of an essay I'd written about Robert Oppenheimer for a magazine and that set him off. He had reached the point where he was telling TV interviewers, asking them how much time he would actually be on the air, and when they said three minutes or whatever, he would say, all right, then I will answer three questions no more. Trying to control because he was convinced that everyone was cutting the story to make him look bad. He really had quite a lot of paranoia at that point in his life. So when he read my essay on Oppenheimer, he used that as the basis for basically shouting at me, waving my big, heavy book, one of my big, heavy books at him. I remember thinking, oh, my God, he's going to hit me with my book. Then I thought, wait, this guy's 80 years old. I can take him. But he finally said, all right, I will answer three questions. And we sat down and I asked him one and he didn't give me an interesting answer. I asked him the second question and it was worth the whole interview. I said, was Robert Oppenheimer a good lab director? And I thought, well, here's the chance where he'll slice him. But Oppenheimer's worst enemy said to me, “Robert Oppenheimer was the best lab director I ever knew.” And I thought, bingo. And then he chased me out of the house. And I went up the road to a friend's house and got very drunk. Because I was really shaken. It was so new to me, all of this. But that quote was worth the whole thing because Eisenhower in one of his memoirs says, I always liked Hannibal best of all the classical figures in the military of the Roman Empire, because he comes down to us only in the written memoirs of his enemies. And if they thought he was such a good leader, he must have been a hell of a leader.", "Dwarkesh Patel 1:35:35", "The way the Manhattan Project is organized, it's interesting because if you think of a startup in Silicon Valley, you usually have a technical founder and a non-technical founder. The non-technical founder is in charge of talking to investors and customers and so on. And the technical founders in charge of organizing the technology and the engineers. And in Oppenheimer, you had the guy who understood all the chemistry, the metallurgy, the obviously the nuclear physics, and then Groves is getting appropriations and it's an interesting organization that you see. But why was Oppenheimer such a great lab director?", "Richard Rhodes 1:36:13", "Oppenheimer was a man with a very divided self and an insecure self. One of his closest friends was I.I. Robbie, a really profound and interesting human being. I'm going to be writing about him in my next book. I spent some time with Robbie just before he died and he said once of Oppenheimer, “I always felt as if he could never decide whether he wanted to be president of the [unclear] of Columbus or [unclear].” He said he was a certain kind of American Jew. The German Jews who came over before and after the First World War were westernized, they were not from the shtetls of the paler settlement of the Eastern European Jews. They were sophisticated, they were well educated. They were a totally different group and as such, they were able to assimilate fairly easily. Oppenheimer wasn't sent to a Jewish school. He was sent to the famous school that was opened in New York, it was called the ethical culture school. It was based on the idea of an ethical, moral education, but not a religious education. So his parents found this niche for him. He never quite pulled himself together as a human being. And as is true with many people with that kind of personality structure, he was a superb actor. He could play lots of different roles and he did. Women adored him. He was one of the most lavish courtiers of women. He would bring a bouquet of flowers to their first date, which was apparently shocking to people in those days. He was just a lovely, courtly man. But in the classroom, if somebody made a stupid mistake, he would just chew them out. And he taught a course in physics that was so advanced that most of the best students took it twice because they couldn't store it all the first time.", "He was nasty to people all the time in a way that bothered a lot of people. Louis Alvarez, who was someone I got to know pretty well because I helped him write his memoirs, he was one of the important scientists at Los Alamos who didn't get along with Oppenheimer at all because Oppenheimer was so condescending to everyone. Louis was kind of a hothead and he didn't like people being condescending to him. Oppenheimer never won a Nobel, Louis did. There was this layer of Oppenheimer being waspish all the time, which was his insecurity and his insecurity extended to physics. Robbie said later he couldn't sit down and focus on one problem because he always wanted to be someone who always knew everything that was going on in physics. You could call that someone who was a very sophisticated, knowledgeable scientist or you could call him someone who was superficial and he was superficial. He knew broadly rather than deeply. He and a graduate student of his developed the basic idea of the black hole long before it came up after the war. They published a paper on what was essentially the physics of the black hole in 1929. But it wasn’t called the black hole yet. John Wheeler invented that term many years later but the idea that a big enough sun, if it collapsed, would just keep on collapsing until nothing could come out of it including light, came from Oppenheimer. And if the black hole had been discovered out in space before he died, he certainly would have had a Nobel for the theory. That being said, he still was someone who was broad rather than deep. And he was someone who was really good at playing roles.", "General Groves, who himself had two or three degrees from MIT in engineering, he was no slouch. But his was more the industrial side of everything. How do you build a factory when you don't know what you're going to put in it? He built the factories at Oak Ridge, Tennessee to enrich uranium and start building the early piles that would later lead to the big reactors in Washington before they even knew what was going to go in the factory. He got orders of magnitude numbers and he said, start laying the concrete, we want it this big, we want this attached to it. We’re going to need power, going to need water, going to need gas, whatever they needed. He was that kind of really great engineer but he needed someone to help explain the physics to it. And he saw pretty quickly at the meetings he was holding at the University of Chicago, where they were building the first reactor, little one, that Oppenheimer was really good at explaining things. So he grabbed him and Groves spent the war riding in trains back and forth among all these various sites. He'd have the advisors from one site like Chicago jump on the train while he was taking the train from Chicago to Santa Fe. Then they'd get off and take the next train back to Chicago. Then he'd pick up the guys who were going to go with him to Tennessee from Santa Fe and they'd ride with him round and round and round. He got a plane later in the war. But most of the war, he just had people riding with him. And Oppenheimer later said, well, I became his idiot savant. And Oppenheimer did. He explained all the physics to Groves because Groves was a little insecure around six Nobel Laureates around the table and would say things like, well, you each have a PhD, but I think my work at MIT probably was the equivalent of about six PhDs, wouldn't you think? And they would think, who is this guy?", "Dwarkesh Patel 1:42:31", "Was he joking? Was that sarcasm or?", "Richard Rhodes 1:42:33", "No, he wasn’t. He had multiple degrees. You know how the military works when there's no war. They send their guys to school to get them better trained. So when the time came to find someone to run, Oppenheimer had been pushing for a separate place where the scientists could get together and do what scientists must do if they're going to advance their science. And that is, talk to each other. The system that Groves had installed around the country at all these factories and so forth, was called compartmentalization for secrecy. And basically it was — you're only allowed to know just enough to do your job and not the overall picture of what your job might be for. So, for example, there were women at Oak Ridge running big magnetic machines that would separate uranium 235 from uranium 238 with magnetic coils of various kinds, taking advantage of the very slight difference in mass between these two otherwise identical materials. The women who were doing this work were set in front of a board with a big dial on it, a big arrow that went from zero to ten or whatever and told keep the arrow about between right here on this. They didn't know what they were making. They really got good at spinning their dials and maintaining what was basically the level of whatever electric process was going on in this machine. So compartmentalization worked.", "But Oppenheimer said, if we are compartmentalized as scientists, we're not going to get anywhere. Science works by gift exchange. I discover something, I publish the results. All the other scientists in my field can read it or be told of it at a meeting. And then they can take that information and use that to move a little farther. And that's the way it's always been done. And that's the only way it works. As soon as you lock people up and tell them they can't talk to each other, it stops because the discovery over here doesn't get applied to a need over here. Simple. Groves reluctantly agreed to let the place have openness, as it was called. You see the parallel with the open world about the bomb. Same sort of thing. How can you know what's going on if you can't let people talk to each other? See what they're doing. But he insisted that the whole crew be put behind barbed wire in a faraway place when no one else was around. So they did. Groves had worked with Oppenheimer. Oppenheimer was now playing the lab director. And he was superb at it, as Teller’s remark about a good lab director would let you know.", "For the period of the war, Hans Bethe told me this, “Before the war, Robert really could be cruel. He would pounce on a mistake you made.” And Bethe, a Nobel Laureate, discovered what makes the sun work. That's how important Bethe's work was and how significant. Bethe told me, “everyone makes mistakes. I made mistakes. Oppenheimer would charge me with a mistake if I spoke wrong. But before the war, after the war, but not during the war.” During the war, he was a superb wise lab director. Because unlike most scientists, he was not only a physicist of high class, but he really was psychologically astute as a human being, as I think insecure people often are because got to scope out what's going on. Oppenheimer wrote pretty good poetry. He was interested in art. He wanted to read the Bhagavad Gita Gita in the original Sanskrit so he learned Sanskrit. He was very smart, needless to say, and had a very high IQ. Not all the physicists who did first class work did have high IQs. They took some other qualities as well. It took the [unclear] sitting down in a chair and focusing on one thing until you got through to it. That's what Robbie said. And he said, that's why Oppenheimer never won a Nobel Prize. All in all, Oppenheimer became in this place that some of these people later were calling [unclear]. Remember they were working on a bomb that was going to kill hundreds of thousands of people but it was the most curious collection of people who had felt like theirs was a spiritual field before the war. And here they were in the war and they began to think, well, maybe this isn't so spiritual, maybe we're doing something truly horrendous and when Bohr comes along and says, wait a minute. Oppenheimer by then was kind of a student of Bohr's.", "Oppenheimer had the job of recruiting everyone for Los Alamos without telling them what they would be doing because it was secret. So he would go to a university campus where there was a young physicist he wanted to recruit and they would go out for a walk to get away from any hidden microphones. And Oppenheimer would say, I can't tell you what we're going to be doing. I can tell you that it will certainly end this war and it may end all war. And that was quite enough. I mean, most of them figured out what they'd be doing anyway, because it was sort of obvious when you start looking at the list of people who are going there. They're all nuclear physicists.", "So Oppenheimer and Bohr together brought this idea to Los Alamos and later to the world that there was a paradox. The bomb had two sides and they were in a sense complementary because although it's certainly true that this was going to be a very destructive weapon, it would also maybe be true if all worked out and they tried to make it work out, that it would put an end to large scale war. If you go to the numbers and graph the number of man-made deaths from war starting in the 18th century, it's almost exponential up to 1943, when 15 million people died between the war itself and the Holocaust. And then it starts to decline as the war begins to come to an end. The war is really 1945. It drops down to about one to two million deaths and it stays there ever after. And although one to two million deaths a year from war is nothing to be proud of, we lose six to seven million people in the world every year from smoking. So in a curious way, the introduction of how to control nuclear energy changed the nature of nation states such that they could no longer at the limit defend their borders with war. They had to find some other way, at least when the scale goes up, they had to find another way. I think that's very important because people somehow don't really understand what a millennial change, the introduction of the release of nuclear energy into the world, really was. As we've been talking, I've been thinking over and over again about your question about AI and the whole larger interesting question that you can see how it fits into the bomb story of unintended consequences. All the countries that worked on the bomb at some level were thinking, oh my god, we're going to have a weapon that will surpass them all. One ring to bind them all like Lord of the Rings. They thought it would aggrandize national power, but what it did was put a limit to national power. For the first time in the history of the world, war became something that was historical, rather than universal. It was something that would no longer be possible. And who did that? Scientists going about their quiet work of talking to each other and exploring the way the universe works.", "Bohr, who's one of my favorite people in the world, he liked to say, science is not about power over nature, as people seem to think. Science is about the gradual removal of prejudice. By that, he meant when Galileo and Copernicus changed the way everyone looked at the position of the earth in the universe, not the center of the universe anymore, but just a planet revolving around a third-rate star. It changed the way everyone thought about the whole world. When Darwin definitively identified our place in the natural world as a brainy ape, it's still taking time for a lot of people to swallow that one. But inch by inch, these prejudices about where we are in the world and how powerful we are and what our purpose is and so forth are being drained away by science. The science of nuclear fission and nuclear energy is draining away and has drained away the theory that we are sort of universally capable of destroying each other and getting away with it. But the dark side is, the unintended consequence is, it's only by having a Damocles sword over our heads, the potential for destruction of the human world, that it's possible to limit war in this way. That's the threat. And when people start saying, well, look, we can have a conventional war if we've got nuclear weapons, because you're not going to attack us. You don't dare. We'll use our nuclear weapons on you. Something's changed most recently in all of this. It's outside the range of all the books I've written. It's a whole new thing. I guess you have to work through all the combinations, just as evolution does, before you come up with the one that actually fits the reality of the world.", "(1:53:40) - AI progress vs Manhattan Project", "Dwarkesh Patel 1:53:40", "There's at least 10 different things and I'm trying to think which branch of the tree I want to explore first. On the AI question, yeah, I'm trying not explicitly connect the dots too much. Every time I read something in history, I think oh, this is exactly like so and so. First of all, the broader outline of the super eclectic group of people who are engaged in this historical pursuit, they understand it's a historical pursuit, they see the inklings of it. My second to last guest was Ilya Sutskever , who is the chief scientist at OpenAI which is the big lab building this. He was basically the Szliard of AI, how Szilard discovered nuclear chain reaction, Sutskever was the first person to train a neural network called ImageNet. Anyway, from that moment on, he was one of these scientists who sees that you could build a nuclear bomb immediately as soon as the news hits the floor. He saw this coming 10 years ago, scale this up and you've got something that's human level intelligent. I was reading through the book and so many things like this came up. One was a good friend of mine who works at one of these companies. And they train these models on GPUs, which are computers that can perform all these matrix calculations in parallel. And I was thinking about these engineers who were in short supply during the making of the bomb, let's say they're working on the electromagnetic separation of the isotopes. And it's this really esoteric thing, you're a chemist or a metallurgist or something that you're really needed for this specific thing. And he's in super high demand right now, he's like the one guy who can make these very particular machines run as efficiently as possible. You start looking and there's so many analogies there.", "Richard Rhodes 1:55:41", "I don't think there's much question that AI is going to be at least as transformative as nuclear weapons and nuclear energy. And it's scaring the hell out of a lot of people and a lot of other people are putting their heads in the sand. And others are saying, let's live it around with laws, which certainly we should do. We've tried to do that with nuclear weapons and have had some success. But people have no idea what's coming, do they?", "Dwarkesh Patel 1:56:10", "Yeah. One thing I wanted to ask is — Some of these scientists didn't see this coming. I think Fermi said you could never refine uranium and get 235 but then some of these other scientists saw it coming. And I was actually impressed by a few of them who accurately predicted the year that we’ll have the atomic bomb. Another one was, Russia is five to 10 years behind. So I'm curious, what made some of these scientists really good at forecasting the development of this technology and its maturity and what made some either too pessimistic or too optimistic? Is there some pattern you noticed among the ones who could see the progression of the technology?", "Richard Rhodes 1:56:57", "That's a good question. Well, the experience that I've had in working with scientists, physicists, is that they really are not very interested in history or in the future. They're interested in the Now, where the work is, where the cutting edge is. They'd have to devote quite a bit of energy to projecting in the future. Of course there have been a few. One thinks of some of the guys who wrote science fiction, some of the guys who wrote essays and so forth about where we were going. And if you ask them, particularly later on in their careers, when their basic work is already done, and I remember talking to a Nobel Laureate in another line of science, but he said, I would never be a graduate student connecting up with the Nobel Prize winner because they've already done their best work. And he was one so he was talking about himself, too. It takes a certain mentality to do that. And maybe scientists aren't the best ones to do it. Alvarez told me, he said, you know, I was always a snoop and I would poke around Berkeley in the various drawers of the benches in the laboratory. He said, one day I was poking around and I found this little glass cylinder about the size of a Petri dish with some wires inside. He said, and I realized it was the first cyclotron. They just put it in a drawer. I asked, so where is it now? He said, it's in the Smithsonian, of course. Where else would it be? I talked to the guy who invented the first laser and actually held one in my hand. He was an engineer at one of the big telephone companies. He said, the first laser is supposedly in the Smithsonian but they don't have it. They got one in the lab and I took my first one home. You want to see it? I said, God, of course. We went to the credenza in his dining room and he opened the drawer and pulled out a little box. Inside it was basically a cylinder of aluminum about the size of a little film can, which he opened up and took out a man-made ruby cylinder that was half-silvered on each end, surrounded by a very high-intensity flash bulb. That was it. It was this beautiful, simple machine. He said they didn't get the right one. I said, why didn't you give them the right one? He said, they didn't ask me. He was angry all his life because he wasn't included in the Nobel prizes. It went to the theoretician to first theorize the laser, but he built the first laser and there it was.", "(1:59:50) - Living through WW2", "Dwarkesh Patel 1:59:50", "When you were interviewing the scientists, how many of them did you feel regretted their work?", "Richard Rhodes 2:00:04", "They'd been down the road so far, they really didn't think that way anymore. What they did think about was they regretted the way governments had handled their work. There were some who were hawks and patriots, Alvarez was one of those. But most of them had tried in the years since the war to move in the direction of reducing nuclear weapons, eliminating nuclear weapons. It was a problem for them. When the war was over, these were mostly graduate students or new PhDs who had been recruited for this program. The average age at Los Alamos was 27. Oppenheimer was an old guy, he was 39. These were young guys who had been pulled out of their lives, pulled out of their careers. They wanted to get back to the university and do the work they had started out to do. By and large, they did and Los Alamos, just emptied out. Teller was horrified, he wanted the bomb program to continue because he wanted to build his hydrogen bomb. It was going to be his bid for history just as Oppenheimer's bid for history was the fission bomb. Teller’s was going to be the hydrogen bomb. Over the years, after that work was done, he systematically and meanly tried to exclude one by one anyone else who'd helped him work on the hydrogen bomb. Originally, he said it was a Polish mathematician named Stanislaw Ulam, whom I interviewed also, who really came up with the basic idea for the hydrogen bomb. He took it to Teller and Teller came up with an improvement. Then together, they wrote a paper which was signed by both of them. But by the 1980s, Teller was saying, “Ulam did nothing. Nothing.” Which wasn't true. It was his piece of history because he was scattered too, and he, number one, did Nobel-level work. They didn't talk so much about their personal guilt.", "I was a child in the Second World War. I was eight years old in 1945. So many young men had been killed on all sides in the war. It was a kind of a strange, peaceful time for children. Cars couldn't get tires, they were rationed. Cars couldn't get gasoline, it was rationed. So the streets were empty. We played in the streets. Meat was rationed, so we lived on macaroni and cheese. You got four ounces of meat a week per person during the Second World War. That was kind of wonderful and peaceful, and kids running around in gangs and so forth. But in at least one house in almost every block in the city, there was a black velvet flag or drape hanging with a gold star on it. That meant that someone in that family, a father, a brother, a son, had been killed. And I was a smart little kid, I understood what all that meant. I was reading newspapers by the time I was six following the war. It was the strangest time of darkness and terror. We didn't know until 1943 if we were going to win the war. It was scary for a while up front, the war in Europe. We sort of set Japan aside. Our government did until the war in Europe was done before we finished that other war. But it took a while for the United States to get its industrial plant up and cranking out planes at the rate of one a day or more. Churchill famously said when Pearl Harbor occurred, “The Lord hath delivered them into my hands.” And he explained later what he meant was that America's going to join the war. And I know we can win now because America's just one vast factory, much more so than the British could have put together. So it was a peaceful time, but it was a very dark time even for a child. My mother died when I was an infant so I understood what the death of a family member was about.", "Dwarkesh Patel 2:04:29", "Do you remember when the bombs dropped?", "Richard Rhodes 2:04:34", "By 1945, we were so pissed off at the Japanese. We had destroyed their air force, we had destroyed their navy, we had destroyed their army. The population of Japan was down to about a thousand calories per person of the worst kind of stuff, buckwheat and weeds and whatever they could find. And yet they wouldn't surrender. And they still had a million men on the ground in Western Manchuria. They only had about a year's worth of bullets left. We knew that much, but that's a long time with a million men. With that in mind, and because they felt that the Soviet Union was still neutral in the Eastern Front, because it had basically fought the war in Europe. We didn't win the war in Europe, the Russians did. We didn't enter the war on the ground until June 1944, by which time they were already moving forward against the German forces that attacked them in 1941. But the Japanese just wouldn't surrender. You could read the documents about the bombs. General George Marshall, who was leading the war, was in charge of all the forces, had this idea that maybe if we could use these bombs on the beaches before we landed, to kill any Japanese defense that was there, maybe they would get the message and be shocked and surrender. But from the Japanese point of view, as it turned out later, it's a myth that the bombs won the war. They contributed to winning the war, but they didn't win the war. What won the war was when Stalin finally was convinced that there were such things as atomic bombs. He was half convinced these were American pieces of disinformation. We were feeding the espionage to the Soviet Union to make him spend a lot of money and waste a lot of time on something that didn't exist.", "When the news came back from Hiroshima and then from Nagasaki that these things existed, he called Igor Kurchatov in and said famously, “Comrade, give us the bomb. You have all the resources of the state at your disposal. Give us the bomb as soon as you can.” But up until then, he wasn't so sure. He had told Truman at Potsdam that they would invade Manchuria with fresh Soviet forces on the 15th of August. Truman was trying to get him to agree to invade at all. Then when word came from New Mexico that the bomb had worked, which it did right in the middle of the Potsdam conference, Truman then was trying to get Stalin to come in as late as possible because he figured the bombs would end the war before Stalin could take over enough of Japan to divide the country the way Europe was divided. He didn't want a Soviet zone, an American zone, and a British zone. He knew we could do better with the Japanese than the Soviets would do. But Stalin, having heard that the bombs really worked, moved up the date of the Soviet invasion of Manchuria to the 8th of August between Hiroshima and Nagasaki and invaded early. I found it very interesting that the conventional air forces on our side staged the largest firebombing raid of the war, on the 14th of August, after the Japanese were in the middle of their surrender negotiations. The air force wanted to get credit for winning the war and they wanted to hold back the Soviets who were advancing down from Sakhalin Island to the northern islands of Japan as well as inward from Manchuria. So our bombing was in northern Japan. It was a way of telling the Soviets, back off buddy, we're not going to let you in here. Then the Japanese military leadership, which had been adamant that they would fight to the last Japanese, the 100 million, they called it, turned and finally realized that it was futile. With the fresh Soviet army coming into Manchuria, with the United States and the British coming in from the west to the south, they were surrounded and there was no reason to continue. But the bombs had their effect.", "The Japanese emperor used the bombings of Hiroshima and Nagasaki as a reason for entering politics for the first time in Japanese history. He had always been a spiritual figure, but he wasn't allowed to vote or veto the political arrangements. He stepped forth and said, we must do it for peace. And in his final imperial rescript on the 15th of August, recorded and played out to the people by radio, he said a new and most terrible weapon of war has led us to think the unthinkable and we must now lay down our arms. So the bomb had its effect, but it wasn't what we thought at the time. A lot of Americans said, thank God for the atomic bomb because our boys came home. The actor Paul Newman was a friend of mine and Paul was an 18-year-old bombardier on a two-man Navy fighter bomber training for the invasion of Japan. He said to me once, “Rhodes, I'm one of those guys who said, thank God for the atomic bomb because I probably wouldn't have come home if we'd invaded the Japanese home islands.” And a million men said that. And the truth is, there were so many Japanese who would have been killed if we had invaded that even the killings of the two bombs would have been dwarfed by the killing that happened. For the first time in the history of war, more Japanese civilians were killed in World War II than had ever been killed in a war before. War was beginning to become what it has since become, which is a war against civilians.", "Dwarkesh Patel 2:11:06", "We were talking near the beginning about whether it was possible that the bomb could not have been built at all and in the case of the nuclear physics involved here, it seems like it was sufficiently obvious. But one question I have, seeing the history of that science and whether it was plausible or not for some conspiracy to just hold it off, how plausible do you think it is that there's some group of scientists somewhere else who have discovered some other destructive phenomenon or technology that decides that we can't tell anyone about this. One area I think this might be plausible is bioweapons where they discover something and they just shut up about it. Given the study of this history, do you think that's more or less plausible?", "Richard Rhodes 2:11:51", "I don't think it's very likely to take bioweapons as an example. I remember talking to a biologist, one of the early DNA researchers who had been a physicist until DNA came along, and I asked “How'd you switch over to biology?” He said, “Well, it's molecules.” So from his perspective, it wasn't very different. But we were talking about the question of just the one you asked, but about biological agents. He said, nature has had millions of years to work all the combinations on the worst things you can imagine. The odds of anybody in a laboratory coming up with anything worse are really vanishingly small. I took that with great comfort. I went home and slept that night. And I think he's probably right. Evolution has done such a job. We're still digging stuff out. I mean, it's just amazing how much of our technology started out as something in the natural world that we adapted it and simplified it and engineered it so we could make it too. That's still going on in all sorts of ways.", "Dwarkesh Patel 2:13:04", "I hope that's the case. I was talking to a biologist and he was explaining to me, if you've seen things like AlphaFold, which is this AI program that models that can predict how a protein will fold, you can run billions of different permutations of a protein. And you can find smallpox, but it binds 100 times better with human receptors or something.", "Richard Rhodes 2:13:35", "I'll tell you a story, which I don't think is well known, I wish it were. Back in the 60s the Russians proposed a world-scale program of public health to eradicate smallpox to the UN. And they said, we'll contribute a vaccine and other countries can contribute whatever they can contribute. This program got going. It was run out of Geneva by the World Health Organization, by a wonderful American public health doctor, D.A. Henderson, a big old burly country boy looking guy whom I followed around for several months once in Geneva and elsewhere. And by the late 1970s, the last case of smallpox, which happened to be a disease that's only spread among humans and therefore was of all the diseases the most obvious one to try to get rid of. Because if there are reservoirs in the natural world outside of the human world, then there will be a problem. If it's also something that's carried around by rabbits or deer or whatever then it's harder to deal with. But if it's just humans, then all you have to do is to identify people who start showing signs of smallpox. In fact, you need everyone around them to make sure they don't go anywhere for a while and the disease can't spread. And that's the method that they use to eliminate smallpox everywhere in the world. And then in the 90s, when the Soviet Union collapsed, D.A. learned, as we all did, although it wasn't terribly well publicized, that there was a secret lab still operating that the Russian plan had been to eliminate smallpox vaccination from the world so that everybody, except people in their country who had been vaccinated for this purpose, would not be immune and a bacteriological agent like smallpox could be used as a weapon of war. D.A. was so incensed. He took this story to our government and we started a whole new section of the public health business to work on biological warfare. And he did for the last part of his life to try to get past this and that lab was eventually, we hope, closed down. So that scenario is not outside the bounds of possibility. But generally speaking, biological warfare isn't very effective because your own people could be infected as well, if not your war people, at least your civilian population. Much like poison gas, which used to blow back over the lines of the people who'd launched it to their horror. It never was a terribly good weapon of war. Plus Hitler's experience of using gas in the first world war that all sides decided not to use it in the second. So that's part of it.", "(2:16:45) - Secrecy", "Dwarkesh Patel 2:16:45", "Speaking of secret labs, you've written so many books about not only the atomic bomb, the hydrogen bomb, and the Cold War and so on. Is there a question you have about any of those stories that you were really interested in but you haven't been able to get the answer to because the information is classified?", "Richard Rhodes 2:17:05", "Over the years, it's slowly leaked out. The latest thing I discovered is that from early on, our hydrogen bombs were shaped sort of like a dumbbell, more spread out. There's a picture of Kim Jong-un looking at a hydrogen bomb of North Korean make and it's perfectly obvious that it's a real bomb because that's its configuration. I didn't know that until just a year or so ago. But sooner or later, everyone tells you at least a little bit.", "Dwarkesh Patel 2:17:46", "And then is there anything you've learned that you can't put in your books because it's...", "Richard Rhodes 2:17:50", "The only thing I didn't put in the book, rightly so, was the dimensions of the Fat Man bomb that were on the original Fuchs documents that he passed to the Russians. When the Soviet Union collapsed and the scientists became available, I learned that the KGB in the interest of promoting its service to the state in this new world they were going into, had published a historical piece about their good work in stealing the secret to the bomb. And they included a picture of the sketch that Fuchs did showing the dimensions of each of the shells of this multi-shelled implosion weapon with the numbers there in millimeters. And when the scientists realized that the KGB had published this stuff, they raised a great hue and cry and said, that's in violation of the Nuclear Non-Puller Variant Treaty. They said, we have to pull all the issues of that journal. And they did. But I had a very enterprising assistant in Moscow, this geologist I mentioned before. And he said, I think I know where I can find a copy of the journal. And he jumped on the night train from Moscow to St. Petersburg and went to a science library there. And they said, no, of course not. We pulled that. And then he thought, wait a minute, where's the public library? It was across the street. And he went across the public library. And they said, yeah, we have the journal. And handed it to me. He made a copy and gave me one. But when I realized that I had this, I never published that information. That's the only one, though.", "Dwarkesh Patel 2:19:35", "That is a wise thing to do. One of the final questions. A lot of times people use the phrase, we need a Manhattan project for X. If it's some new technology or something. When you hear that, do you think that is a naive comparison? Or does it make sense to use that terminology?", "Richard Rhodes 2:19:57", "No. It’s been used so many times over the years for cancer, for this, for that. But Manhattan was a very special time and it was a very special problem. The basic science was in hand. Oppenheimer once said in his funny way, we didn't do any basic science between 1939 and 1945. In a sense, it was a vast engineering program. There was some basic physics, but very little. It was mostly using Nobel laureate level physicists as engineers to develop a whole new weapon with the precision of a Swiss watch that weighed 9,000 pounds. And they did a beautiful job considering the time and place and the rationale. They solved some problems that I don't know how anybody else could have solved. How do you make plutonium explode without a gun? All of that. But most of these problems aren't like that. Nobody was starting a startup with some investment from an investment company to build the bomb. This was a government project, and it was secret. And if you divulged the secret, you'd go to jail. A lot of the parameters of what they were doing were carefully kept secret. It's remarkable that 600,000 people worked on the bomb and the secret never got out.", "Dwarkesh Patel 2:21:28", "Do you say 600,000?", "Richard Rhodes 2:21:29", "Yeah", "Dwarkesh Patel 2:21:33", "This is actually one of the questions I want to ask you. Truman, when he became president, he had to be told about it. He didn't know about it, right? I don't know how many people were working on it when he became president, but hundreds of thousands of people are working on it. The vice president doesn't know about it. He only learns about it as president. How was it possible that with so many people working on it, the vice president doesn't know that the atom bomb is in the works? How did they keep that so under wraps?", "Richard Rhodes 2:21:57", "One of Roosevelt’s several vice presidents famously said, the vice presidency isn't worth a bucket of warm piss.", "Dwarkesh Patel 2:22:05", "Can you note, Kennedy had a saying, I'm against [unclear] in all its forms.", "Richard Rhodes 2:22:11", "Yes, well, interesting for me. Roosevelt wanted to keep it a secret and so did Groves. They didn't want the word to get out. They didn’t want the Germans to get ahead of us. And what was the vice president for? He was to sit around in a waiting room in case the president died, which in Truman's case, he hit the jackpot. He was just at the right time because Roosevelt had several vice presidents in his long reign. So from their point of view, he didn't need to know. It was the need-to-know thing again, it was the compartmentalization. And of course, as soon as he became president, Groves and Stimson and several others got together with Truman and filled him in on it. Truman had some inkling because he was a crusading senator who had taken on the job of preventing waste in war. And if he heard about some industry that was sloughing off and putting money in people's pockets or whatever, he would go visit their factories, call them out. And he was ready to go to Hanford, Washington, and Oak Ridge, and find out what these people were doing. But Stimson, the Secretary of War at the time, whom he greatly admired and one of the great elder statesmen of the era, said, “Mr. Vice President, please don't go. I guarantee you that what is happening is being well managed. Please don't go.” And he said, “If you say so, Mr. Stimson, I will believe you.” So he knew a little bit, but he didn't know very much. And I don't think that helped. I don't know how much Roosevelt understood either. He was not a scientist. Truman was not even a college graduate, he was self-educated. A well self-educated guy. One of the senator’s once said, every time I go to the Library of Congress to pull out a book, Truman's name is always there. The senator said I think he read everything in the Library of Congress. It's partly that there wasn't any way to communicate except by letter or telephone and you didn't call long distance unless somebody died, basically. Tell someone they had a long distance call and their woman would start crying. Or a man, for that matter. You thought your son was probably dead in Iwo Jima. So the communication was more limited to be sure. But even so, it's extraordinary.", "Dwarkesh Patel 2:24:45", "But was the culture different in that the idea of leaking something would be much more frowned upon than it is now?", "Richard Rhodes 2:24:52", "I can't remember in my entire life a more patriotic time than the Second World War. We children gathered together pieces of aluminum foil from the lining of cigarette packages, about the only place you could get aluminum foil in those days, watered it up into balls and took them to school so they could use it to make bombers. We collected this bacon grease from cooking meat in the kitchen and took it to school in cans because it had some application to making bullets. We collected newspapers for the same reason. The Boy Scouts during the Second World War took it as their special responsibility to collect the fibers that come off milkweed with this little ball that blows away because it was used in place of kapok to line life vests for sailors. They collected 500,000 tons of milkweed fluff in the course of the war. We were all consumed with winning this thing, which didn't seem to be a certain thing at all, as I said earlier, before around 1943. Of course, there was a black market and people got to see a farmer and pick up some steaks, so they didn't have to live on some more macaroni and cheese for the next month as we all did. But despite those changes, people were very, very patriotic and fought in every way we could to win the war.", "(2:26:34) - Wisdom & war", "Dwarkesh Patel 2:26:34", "Speaking of elder statesmen, by the way. Who, since the development of the nuclear bomb, has been the wisest political leader, not necessarily the U.S. and not even necessarily as a leader of state, but contributed most to the stability and peace?", "Richard Rhodes 2:26:52", "It depends on what period you're talking about. There's no question that Oppenheimer's advice to the government after the war was really good. I don't think anyone except the Oppenheimer committee, the Acheson-Lilienthal Committee, has ever found a better way of thinking about eliminating nuclear weapons in a world that understands how to build them. And that really was Oppenheimer. I don't mean he deluded anybody. He just led them straight down the path. All these engineers and industrialists who were on the committee with him, skeptical men, men who wouldn't have been easily convinced of anything, but he convinced them this was the right way to go. So he, up until he was forced out of government because he made the mistake of not supporting the Air Force's favorite bomb, and they found a way to destroy him basically, and they did destroy him. I talked to one of the scientists who was his closest friend, and he said Robert was never the same after the security hearing in 1954. He was one of [unclear]’s smiling public men after that. It really devastated him. That basic insecurity he had as a Jew in America all the way from childhood, haunted him, and he became the director of the Institute for Advanced Study, and he wrote a lot of nice essays. But I don't know.", "The leader that I've fallen into more and more is Niels Bohr. He tried to figure out a way to bring the openness of science into a world without nuclear weapons. He and Robbie taught Oppenheimer the ideas that ended up in the Acheson-Lilienthal plan, and he and Robbie later, were the ones who started up the scientific laboratory in Geneva that is now CERN where they built these new giant colliders. With the idea in mind that at the very least if the scientists have devastated Europe, we remember what things were like for the former Soviet Union. People were living on crusts of bread and bags of old potatoes. They really were. And particularly people who worked for the government because their pensions weren't worth anything. That's what I saw when I went back there in the spring of ‘92 after the place collapsed. The money wasn't worth anything. Their salaries weren't being paid. In the midst of all of that, Europe needed something to sustain it. And of course, there was the Marshall Plan, and that was absolutely amazing. And the help it gave to Europe when it needed it most. But Bohr wanted to make sure, and Robbie wanted to make sure, that the scientists of Europe were tapped to go off somewhere and build nuclear weapons. And they invented this international laboratory in Geneva, which is still a thriving enterprise, where they could do basic physics and where they'd be paid for their work and could do the kind of exciting thing that good science is without having to drift over into the dark side of the whole thing. Let's face it, it more or less worked. The French had to have the bomb, you know, because they're French.", "Dwarkesh Patel 2:30:26", "They need their lovers, they need their bombs.", "Richard Rhodes 2:30:29", "The British had to have the bomb because they knew how to build one. They'd worked with us all during the war, and then we'd cut them off from a supply of uranium and from any new developments along the way. And most of all, because the British Empire was bankrupt by the end of the Second World War, and Churchill was determined to get the bomb because without it, his country would fall away and wouldn't get to sit at the table with the big boys. So they had the typical reasons for getting bombs because you don't want to be left out because of the prestige, or because you have a mortal enemy who's got the bomb or is getting the bomb. That's North Korea, that's Iraq. Its attempt,  it didn't get there. That's Pakistan and India. We, because of Germany. The Soviets, because of us. By and large, the countries that did go toward the bomb, finally built bombs because they were afraid of another country having the bomb. And everyone else stood back and said, well, if you'll protect us with atomic bombs, when somebody comes calling here in South Korea or Germany or wherever, we won't build them and share your knowledge of how to make energy out of nuclear power.", "Dwarkesh Patel 2:31:54", "There's easily another three hours of questions I could ask you. I can't say I want to be respectful of your time because I haven't been with the extra hour I've taken, but I want to be less than totally disrespectful of your time. So I guess the final question that we can go out on is — In the next 50 years, what odds do you put on a non-test nuke going off?", "Richard Rhodes 2:32:22", "A nuke going off in anger? That's the way I usually put it. I think the odds are high.", "Dwarkesh Patel 2:32:35", "Over 50% in the next 50 years?", "Richard Rhodes 2:32:26", "I wouldn’t put a number on it, but it's certainly higher than zero and it's probably higher than 10%. And that's high enough if we're talking about millions of people dying. There was a period when people in the field were talking about, well, maybe we'll have a little regional nuclear war between India and Pakistan and that'll scare everybody to the point where they realize you've got to get rid of these things. The same guys who did the nuclear winter studies back in the 80s decided in 2007 first to look at nuclear winter world scale war using the much better computers of today. And they found out that that would be even worse than they thought it would when they had only one dimensional atmospheric models. Then they said, well, what would a small regional nuclear war look like? So they simulated a war between India and Pakistan where each country explodes 50 Hiroshima sized, 15 kiloton nuclear weapons over the enemy cities. And what would follow from that? And it turned out as the model develops, you can see it online , you can watch the graph develop that even that small in exchange, less than some of our individual hydrogen bombs, about a megaton between the two countries, would be enough to cause enough fire from burning cities to spread smoke around the world and reduce the amount of sunlight. They figured in the end that there would be 20 million prompt deaths from the explosions themselves and the fires. But then over the course of the next several years, up to two billion people would die of starvation because you would have the same phenomenon that the world had in the 18th century when there was an interim of rather cold, the sun was pulling back a bit and it was freezing hard in July in New England and the crops failed. And a mass of people died worldwide during that period of time. Like the flu epidemic of 1918, everybody seems to have forgotten. I don't know where our memory goes with these things.", "Therefore even a small so-called nuclear war must engage the whole world because it's going to engage us if it ever goes off. So we're still in a very precarious place. And as long as any country in the world has nuclear weapons, we're going to continue to be. There is a sword of Damocles over our heads. That has been the price of nuclear deterrence. It isn't necessary that that be the price. It's possible to have nuclear deterrence without actual weapons, but it's damned hard to convince leaders, particularly in totalitarian and authoritarian countries, that that's the case. So the odds, I don't know, but there's no such thing as a machine that doesn't malfunction sooner or later. And these machines, these weapons that we make us such supernatural powers of, are just machines. We built, put them together. We can take them apart. We can put the ore back in the ground if we want to. We don't have to live with this. I think we're in a big, wide, long transition. Maybe the world scale problem of solving global warming will help with this, will help people see that if they work together. Here I am saying there could be a good outcome from this technology. Well, there was a good outcome from the telephone. There was a good outcome from television, but there was also a dark side and we're going to have to learn to handle dark sides better. Maybe that's the last thing to say on this subject.", "Dwarkesh Patel 2:36:39", "Yeah, that's a sound note to close on. The book is The Making of the Atomic Bomb . Unfortunately, we didn't get a chance to talk as much about your new book on energy , but when your next book comes out, we'll get a chance to talk about the remainder that we didn't get a chance to talk about. It's been a true honor, and an incredible pleasure. The stories, the insight. It was really wonderful. Thank you so much for your time.", "Richard Rhodes 2:37:02", "My pleasure." ]
[ "https://www.amazon.com/Making-Atomic-Bomb-Richard-Rhodes/dp/1451677618", "https://www.amazon.com/Energy-Human-History-Richard-Rhodes/dp/1501105353/", "https://en.wikipedia.org/wiki/Jean_Tatlock", "https://en.wikipedia.org/wiki/Isidor_Isaac_Rabi", "https://en.wikipedia.org/wiki/Werner_Heisenberg", "https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704", "https://en.wikipedia.org/wiki/Multiple_independently_targetable_reentry_vehicle", "https://www.gutenberg.org/files/1059/1059-h/1059-h.htm", "https://en.wikipedia.org/wiki/Acheson%E2%80%93Lilienthal_Report", "https://www.jfklibrary.org/asset-viewer/archives/USG/USG-01-07/USG-01-07", "https://en.wikipedia.org/wiki/Klaus_Fuchs", "https://www.youtube.com/watch?v=Yf1o0TQzry8", "https://climate.envsci.rutgers.edu/pdf/IndiaPakistanBullAtomSci.pdf", "https://www.amazon.in/Making-Atomic-Bomb-25th-Anniversary/dp/1451677618", "https://www.amazon.com/Energy-Human-History-Richard-Rhodes/dp/1501105353/" ]
https://www.dwarkesh.com/p/sarah-paine
Sarah C. M. Paine - WW2, Taiwan, Ukraine, & Maritime vs Continental Powers
[ "Grand strategy", "Dwarkesh Patel 0:00:40", "Today I have the pleasure of speaking with Sarah Paine . She is a professor of strategy and policy at the Naval War College and she has written some of the best military history I've ever read. We're going to get into history, strategy, and all kinds of interesting topics today.", "My first question, does grand strategy as a concept make sense? While you have these countries, the people making these decisions are individuals and they have so many individual ambitions and desires and constraints from internal politics to factions they have to appease. Does it make sense to talk about countries having strategies?", "Sarah Paine 0:01:20", "Before I get going, I have to make an obligatory disclaimer. My views do not necessarily represent those of the US government, let alone the US Navy department and much less the place where I work, the US Naval War College. Okay, now that that's over, on to grand strategy.", "Yeah, it is useful. I'm going to define grand strategy as the integration of all relevant instruments of national power in the pursuit of national objectives. If you think about modern governments in the West, they have cabinets and they sit before the president. Those cabinet portfolios represent the different instruments of national power. Can you imagine trying to run foreign policy without having those people at your table and coordinating?", "If you look at countries that have not coordinated all instruments, for instance, Japan in World War Two versus Japan during the prior period of the Meiji Restoration, by the time the Japanese got into World War Two, they were really prioritizing the Army and the Navy too, but the military was their main instrument of national power. They were not coordinating with civilians. They assassinated those people and got into deep, dark trouble. They didn't listen to their finance minister who told them it was unaffordable. So yes, grand strategy is absolutely necessary.", "If you have national objectives like you want to improve your own security or you want to improve trade then you need to think about all of these different instruments of national power and how you're going to coordinate. Those who don't coordinate get into deep, dark trouble.", "Dwarkesh Patel 0:03:01", "Right. So maybe having a coherent grand strategy is the ideal but if we want to understand history maybe it’s more useful to talk about factions and individuals? A previous guest, Richard Rhodes , who wrote the Making of the Atomic Bomb , talked about how after the war, the different branches of the military were competing with each other to see who would get more funding and who had access to nuclear weapons was a big part of that and how many.", "If throughout history we see lots of competition between the different parts of the government in ways that explain their choices, for example, in the case of Japan, why they invaded China instead of pursuing a maritime strategy, then isn't it more useful to just talk about the factions and the individuals rather than the strategy of the country?", "Sarah Paine 0:03:46", "I think it's the individuals making their arguments for what they think the strategy should be. I'll give you an excellent example of how the sausage was made.", "I was perusing the Eisenhower archives a number of years ago. So here's the Allied Commander from World War II, then President of the United States, and what he would do is bring in all the relevant parties to whatever the decision is. He would have them recommend various courses of action and they would offer arguments and counter arguments, and then they would hash it out and come up with some kind of combination of all or choosing one of them.", "Yeah, there's going to be a big debate. People are going to have all kinds of different ideas. In fact this is one of the great strengths of democracy. You have to listen to the counter argument or the counter argument is called “you lose the election” and the other party is in. But the notion that you're going to streamline it and not have disagreements, that's what dictators do and they have problems.They double down on bad decisions.", "Dwarkesh Patel 0:04:48", "Yeah, that's actually one of the questions I eventually wanted to ask you. In World War II, we see that many of the countries had really well coordinated and apportioned budgets spread between their different branches. And in the case of Japan, they didn't.", "Is democracy the answer for why the U.S. and Britain were better coordinated?", "Sarah Paine 0:05:08", "Part of it. And I think part of it is a different issue. If you think about who the strategic leaders of World War II were, they were the conscripts of World War I. Think about people slightly younger than you, maybe your age as well, if they survive to come off that front, then they come back and they want to start families. It's the Great Depression. It's terribly difficult. And when they get to the age where they're going to be strategic leaders, they have the harm of sending their own children in. So they thought deeply about what had gone wrong in World War I. This is in the West, particularly Britain and the United States. And their answer was institution building on a massive scale and integrating all elements of national power. This is when you've had the National Security Act passed in the United States, setting up all kinds of organizations like the National Security Council, United Nations, NATO and all manner of things. A lot of it is coming off of the horrific war that was World War I and then doing a better job in World War II.", "Dwarkesh Patel 0:06:14", "You would think that the victors of a war would be the ones whose perception of reality is the most inflated, whereas the losers are the ones who have to come to terms with why they lost, whereas we see the opposite. The U.S. had such good leadership like Patton and Curtis LeMay and there are so many great generals that came out of that time. Whereas in Germany, Hitler himself fought in World War I so it's hard to explain why he made so many mistakes.", "Sarah Paine 0:06:41", "Initially, Hitler did incredibly well. His Blitzkrieg was incredible. If he had stopped with the Anschluss, where he gets Austria and is going to take Czechoslovakia, and said, “Oh, I'm uniting the German people.” he would have gotten away with it and probably be considered a brilliant leader by Germans. But then, hubris, right? The Blitzkrieg worked so well, his generals told him he couldn't do it, but of course it worked, and then he goes further and overextends.", "When you look at what you think are great generals on the Western side, they are great generals but their success has to do with a whole lot of other people. If we hadn't broken the codes, which the British and the Poles helped us do with the various Enigma machines, would it have turned out the same? If you don't have Henry Ford, who's turning his cars into tanks, and the people who built the Liberty ships, would it have been the same? If you do not have scientists doing the Manhattan Project, would it have been the same? Think about the enormous mobilization within the United States, where Americans are all on board and in Britain and all over. So when you go, “Ah, Patton.” he actually has a whole civilian architecture behind him. We tend to personify it as the general. It ain't so. It's everybody.", "Dwarkesh Patel 0:08:04", "You mention that if Hitler had stopped, I guess in 1939, after he had expanded the borders of Germany beyond where they had ever been in history, I want your opinions on what is the latest at which he could have stopped, and maybe not avoided war, but at least solidified and consolidated the biggest possible empire?", "Various options could be… one is just after 1939. What if he invades Poland with the Soviet Union, but he doesn't invade Russia after or declare war on the United States, and maybe at some point negotiates a peace with Britain. Would that have been possible? Or what about after the fall of France? Then he could have just controlled all of Europe.", "Sarah Paine 0:08:49", "A, I don't know. But B, I think he could make the ploy — I'm just a continuation of Bismarck. I'm fighting these limited wars. I’m uniting the German people. That one he might have been able to sell and quit after Anschluss. The moment he's going into genocide against the Poles we are off to a different race because Poland is why Britain gets into the war.", "Dwarkesh Patel 0:09:14", "Right but is that a race he could have won or at least settled? In that, maybe if it wasn't for Churchill and Britain is just like, “You know what, we'll just let Hitler have Europe.” And then he doesn't go to war with America. Is it possible that there's a world in which Hitler just controls Europe?", "Sarah Paine 0:09:31", "I think the problem with your question is that it’s just not who Hitler was. He wrote in Mein Kampf exactly what he wanted to do and that what you're describing is not what he's about. If he were about combining with the West and taking parts of the Soviet Union, maybe? But that's not what he's about. He has this whole genocidal program that goes with him.", "There's another issue is that if you take too much, like if you're going to go kill off the Poles, the Poles never give up. The Poles went through three partitions over their history but the Polish identity never disappeared. If you do that, it never goes away. You will never have stable borders and then it's easy for others to fund insurgencies because you have this dominated population that hates being dominated. It's not stable.", "Dwarkesh Patel 0:10:15", "Then suppose that before Stalingrad, he had stopped. At that point didn't he control like 30% of the Soviet Union?", "Sarah Paine 0:10:23", "He'll never hold it. He'll choke on his acquisitions.", "Dwarkesh Patel 0:10:26", "Ah. Did Germany have the power under another leader to just hold that whole section of Eurasia?", "Sarah Paine 0:10:32", "All right. I'm going to flip this whole argument. You're talking about people doing territorial conquest and taking things and butchering enormous numbers of people to get it. You can watch this in real time in Ukraine. This is how it goes — You're butchering a lot of people. You're destroying wealth at an incredibly rapid clip. You can do that but since the Industrial Revolution in the West, there has been a growing consensus that that's probably not the way to do things. We are far better at crafting international institutions, international laws, treaties that we sign on to, the parts that we want to and then we adhere to them. And then that allows us to go all over the world running our little credit card transactions. No one kills us and you can make a lot of wealth doing that.", "Since the Industrial Revolution, who's making all the money? People who buy into that system. Territorial expansion isn’t the way. It's a real throwback to a pre-Industrial Revolution way of managing your national security. This is how traditional continental empires always did it. The Industrial Revolution with compounding economic growth, offers a completely different alternative, which says we're going to compound our wealth by having rules that we can all adhere to. And then we'll run our commercial transactions that way.", "Death ground", "Dwarkesh Patel 0:11:59", "Why was Russia eventually so robust in pushing back against the Germans? Despite losing tens of millions of soldiers, the government doesn't collapse like the Tsars did in World War I. And not only that, but a communist country is able to produce really advanced tanks in large and reliable numbers. There are so many mysteries there like why did central planning work? Why didn't the government collapse? Despite the fact that Stalin killed off so many of his people. He would have been hated, right?", "Sarah Paine 0:12:31", "Ah, but Hitler killed more and was more hated. What you're thinking about is what did the Russians do? I'm going to flip it. What did the Germans do?", "A useful concept comes from the Samuel Griffith' translation of Sun Tzu , which talks about death ground. What's death ground? It's when your enemy puts you on death ground, which means they're going to kill you, and therefore, you have no choice but to fight, because if you don't fight, you're dead. And even if you fight, your odds are poor, but at least that's the only way you're going to get out.", "The Ukrainians initially welcomed the Germans. Why? Because Stalin and friends had imposed the terrible famine of the early 1930s on them. And they couldn't imagine that anything would be worse than that until they met Nazis, who then had them dig their own mass graves. The Ukrainians rethought that whole thing. And if you do this to people, you will conjure a formidable enemy. So that's what happened to Russia. You can see it happening to Ukraine now before your eyes. Go back before the invasion of Crimea in 2014. You've got Ukraine, which has a very corrupt government and people were at sixes and sevens about whether they want to do Ukrainian things or Russian things. Fast forward to now, where you have Russians blowing away the people who were most loyal to them in the eastern part of the country. Their apartment buildings are being leveled by Russians. Ukrainians think, “Aha. This idea that we can coexist with these people is over.”", "The irony is Putin's forging Ukrainian national identity and wars often do this. In the United States, we started out with our 13 colonies and they're all very different but the Revolutionary War starts forging a national identity. And by the time you get to the end of the Civil War, where you have northern armies, at least those people have been all over the country. They have a real sense of nation by the end of that one.", "Dwarkesh Patel 0:14:31", "It's interesting because the strategy we pursued with Germany and Japan was unconditional surrender. Obviously we didn't commit genocide or anything, but do you think of that as different than the sort of total unlimited policy objectives that Germany had in Ukraine or Japan had in China? We also pursued unconditional surrender against the South in the Civil War. How do you think about that? Because that's also something where your back is up against the wall. Why did that not result in the same kind of morale?", "Sarah Paine 0:14:58", "Because the United States did not put the people of these countries on death ground. The leadership had put themselves on death ground. Basically the problem for Tojo Hideki is that if he backs down on anything, he's out of office and then he doesn't know what happens after that. So he personally is on career death ground and he thinks, and we were planning to, that he would get executed at the end of the war. But the Japanese people eventually figured out that they weren't on death ground. In fact, the Japanese people were so exhausted by the whole thing that the society shattered.", "The United States was never going to start massacring the German people in the way that the Russians massacred the Poles when they moved in or the way the Germans massacred the Poles. How do you wind up with eight or nine million Polish deaths in World War II? Think about that. That's a large number. It’s because they're being massacred.", "Dwarkesh Patel 0:15:58", "But there was a firebombing of Tokyo, Dresde, and Berlin. I think it was in your book that 84,000 people died in that one night of firebombing in Tokyo.", "Sarah Paine 0:16:06", "Yes. It's terrible.", "Dwarkesh Patel 0:16:07", "Why did that not make them put them in the mind frame of a sort of total death ground?", "Sarah Paine 0:16:12", "A, I don't know, but B, Japan had been at war since 1931 in China. They had been sending large armies. This isn't like recent U.S. wars, the counterinsurgencies in Iraq and Afghanistan. Here they are sending hundreds of thousands of troops to occupy Manchuria. The Chinese don't give up. It goes on and on and on. So by the time you're getting to 1945, it's a long time. Also, they had committed atrocities in China and they knew all about it. And the atrocities got even worse. When there were wounded Japanese soldiers, their commanders ordered their fellow soldiers to execute them because they didn't want cripples going home. They couldn't deal with them there. And so rather than have the allies pick them up, they executed them in place. Can you imagine how Japanese soldiers felt about this?", "Dwarkesh Patel 0:17:05", "How then do we explain the famously high morale of the Japanese military, where they would refuse to surrender even after given orders by their superiors? Despite knowing about these things that you're talking about.", "Sarah Paine 0:17:17", "Oh, it's true. It's because it's a different culture. In Japanese culture, you belong to in-groups or out-groups. So the biggest in-group that Japanese belonged to was Japanese people and everybody else. But within Japan, you come from a province, a locality, etc. You go to school and get your education at various places. You belong to a job wherever you are and there are various units within your job. And you owe loyalty. It's obligation. In the West, it's all about liberties and my rights. In the East, it's about obligations to other people. So you owe obligations to all of these organizations.", "When soldiers are thinking about war, they're not thinking about grand strategy. They're thinking about operational success. In the west, the moment you as a soldier start losing a battle, you can retreat and surrender and it's not dishonorable because you're going to live to fight another day. In Japan, you're a failure. And therefore, if you come home back as a failed soldier, you bring dishonor to yourself, your family, your locality, anyone you are associated with. So that's why it is so difficult for them to surrender.", "However, by the time you get to the end of the war, they are so exhausted. Their economy is something like a tenth of our economy. They have something like a 13th of our coal or steel production. And they don't have any local oil production. They're importers of food and they're not getting that food. So by the time you get to ‘45, they're exhausted. And a shattering occurs.", "Finally, at the very end, you have Emperor Hirohito, who knew full well earlier that he would be assassinated or proclaimed deranged if he disagreed. And he had a perfectly good underage son to be used as a figurehead. He knew that he couldn't do much about it. At the very end, when he decided he was about to get nuked, that's when he intervened to break the deadlock at the cabinet meetings and there were a variety of people at the very top who realized it's over.", "Dwarkesh Patel 0:19:35", "Could Hirohito have intervened earlier?", "Sarah Paine 0:19:38", "I doubt it.", "Dwarkesh Patel 0:19:39", "Let's go back more than five years. If he intervenes when Japan is overextending in China, is there any chance that he could have succeeded?", "Sarah Paine 0:19:47", "I doubt he thought of Japan overextending in China. What expertise does he have? He likes guppies. He likes studying fish in his backyard. He has no expertise. And then, of course, there's the hubris of it all, that we're going to dominate this place. They look at the Chinese as an absolutely feckless backward place. It's had all these warlord things going on and it doesn't dawn on them that by their extreme brutality in China, the Chinese finally get it going. “We're not the problem, the Japanese are the problem.” And it is what the Japanese do that superglues China and is the great impetus to nation building. You can see parallels with Hitler doing the same thing in Russia. And also right now, Putin's busy canonizing Zelensky and creating a real nation out of Ukraine that's never going to forget these ongoing events.", "Dwarkesh Patel 0:20:43", "Something I learned from your book that I thought was really interesting, and also tragic because of the counterfactual, was one of the strategies you suggested. If Japan thought like a continental power, they could have allied with the Nationalists to beat the Communists in Russia, maybe waged a three-front war with Germany and the Nationalists. And then Japan beat the Communists and prevented the Communists from taking hold in China. Given the consequences of communism in Russia and China and how many lives could have been saved if Hitler was beaten and then the Communists are beaten. That Japanese choice just seems so tragic.", "Sarah Paine 0:21:22", "Let's say they do it. That means Hitler forever. And that means if you're anything but a nice Aryan, your days are numbered. It certainly would have been the most massive ethnic cleanse ever in Europe.", "Dwarkesh Patel 0:21:38", "Suppose the Third Directive survived. Maybe it had stopped at the point you were talking about where the German lands were reconstituted. Suppose that it happened. If you look at the Soviet Union, Stalin kills more of his people than Hitler had by that point. I wonder if we have sort of burnished Stalin's reputation a little bit because we had to ally with him in World War II. But then the cycle we see in the Soviet Union is the inherent corruption and inefficiency of the totalitarian system. And then it breaks down and there's reform because people realize how crazy things have been.", "If the Third Reich had survived, would we see that same cycle there where the system breaks down? Would we also remember them the same way as the Soviet Union where it was evil but you just can't sustain that level of craziness forever?", "Sarah Paine 0:22:26", "I suspect it to be worse. Why? Because the Germans are far more efficient than the Russians were in those days. Nadezhda Mandelstam, who was married to the poet Osip Mandelstam, talking from the prison camps said, “At least it's Russians doing this because if it were Germans, there'd be no hope. With Russians, there's always a hope because they're inefficient too.” And she's a Russian talking about it.", "Hitler is talking about annihilating entire people…", "Dwarkesh Patel 0:23:00", "But so was Stalin with killing off entire classes of Ukrainians.", "Sarah Paine 0:23:05", "His idea is that Ukrainians need to pretend they're Russians and it’s fine if they do that. What you’re describing is not a happy ending. It's a horrendous ending.", "WW1", "Dwarkesh Patel 0:23:19", "What's the scenario in which both Hitler and Stalin could have been defeated in World War II? Is there some system of alliances or counterfactual where that happens?", "Sarah Paine 0:23:28", "No. I think the problem is World War I.", "World War I has enormous consequences. All sides allowed their generals to make strategy. No one is doing grand strategy in World War I. It's all about operational success, this is what we're all going to do. And then the generals keep sending up waves and waves of young men up over the trenches. What do you think is going to happen to them if you send them over the trenches? This is how you get these horrific death rates. Hundreds of thousands in a battle. In our own day it's inconceivable.", "You have a massive power vacuum because of that war. Not only does it upend Europe by getting rid of the Austro-Hungarian Empire, the German Empire, the Russian Empire, but it puts two really pernicious ideologies on steroids. Fascists and communists. And that's how you get all that evil. It's out of a gross mismanagement of World War I. Once they're off and running, you've got problems on your hands. And you've got a long solution.", "Back to your initial question, does grand strategy matter? It does. Look at World War I when they didn't practice it and the civilians allowed the officers to make all decisions. Britain is a country that is maritime by geography but then they built a continental sized army. That is not Britain's great strength in World War I. The victory in World War I was at the horrific cost of the beginning of the end of the British Empire.", "Dwarkesh Patel 0:25:16", "Britain is the only country that fights Hitler all the way from 1939 to 1945. Whereas Stalin only fights Hitler after he himself is invaded and in fact collaborates with him to dissect Poland. After the war, Russia expands beyond any ambition that a czar might have had. Whereas the British Empire is about to collapse and loses all its territories.", "What explains the differing outcomes of these two countries post World War II? And why did the post war objectives of Stalin succeed much more?", "Sarah Paine 0:26:00", "You mean like retaining an empire versus losing an empire? They're fundamentally different kinds of empires. Russia is a classic continental empire. What it owns is contiguous. Britain's empire was all about trade and having enough coaling stations around the world. That was initially what it was all about. You have coaling stations everywhere and then you want to get the trade through. And then what the British did is they trained barristers all over the world, barristers are lawyers. That lays the basis, not on purpose, of having international law where people who are eventually going to be running these independent countries have a legal training to use international law to their own kind of benefit.", "But anyway, Britain has this non-contiguous empire and after the war, it does not have the ability to hang on to them because of nationalism. Nationalism starts in the Napoleonic Wars. That's what Napoleon leverages to create the mass of his armies because French people feel nationalism and it's incredibly powerful. And nationalism has been spreading its way around the world ever since. Once you have nationalism, have fun hanging on to a non-contiguous empire because the locals are going to fight and resist. There won't be commercial advantages because it'll be too expensive to hang on.", "Britain, in most cases, did not fight to hang on to its empire. It left and negotiated its way out. Whereas France did the fight in Vietnam, which it lost, and the fight in Algeria, which it lost. The British didn't do that.", "Russia is a different event. It's all contiguous and wherever that red army is, it can hang on to it. And so yeah, it hangs on to Eastern Europe forever at a great cost. But if you look over time, initially Stalin rebuilt and does quite well. But then in the 60s, 70s, all of a sudden their growth rates are not like Western growth rates.", "Yeah, they're still growing, but the difference is growing and the compounding effects of this are enormous. If you fast forward to now I think Russia's entire economy now is less than Mexico. There's nothing wrong with Mexico, but the Russians have this idea that they have this huge, they don't.", "Dwarkesh Patel 0:28:42", "The compounding is a very important point because Tyler Cowen has an example of this in one of his books: that if U.S. economic growth rates had been 1% lower every year from 1890 to the 20th century, the U.S. per capita GDP would be lower than that of Mexico's.", "Sarah Paine 0:29:00", "Bingo. And this is to give a tangential comment. This is how sanctions work. People look at sanctions and go, “Oh, they don't work because you don't make whoever's annoying you change whatever they're doing.” What they do is they suppress growth so that whoever's annoying you over time, you're stronger and they're weaker. And the example of the impact of sanctions is compare North and South Korea. It's powerful over several generations.", "Dwarkesh Patel 0:29:28", "But the question about why Russia did so well? In terms of, after World War II they ended up with so much.", "Sarah Paine 0:29:35", "But before you say they did so well, look at the tens of millions of people who died.", "Dwarkesh Patel 0:29:41", "That's my point.", "Sarah Paine 0:29:42", "The cost is horrendous.", "Dwarkesh Patel 0:29:43", "I should say why Stalin did so well.", "Sarah Paine 0:29:45", "Well, yeah, because other people died and he lived and he kept his dacha.", "Dwarkesh Patel 0:29:49", "The reason I ask it in this way is I'm trying to understand the counterfactual in which it doesn't happen because it was so bad. Did the failure of FDR to be sufficiently anti-communist, especially towards the end of the war, contribute to how much land that the Soviet Union was able to accumulate?", "Sarah Paine 0:30:05", "There's a choice at the end of the war. There was some talk about whether to invade up through the Balkans and try to put Western armies there. Let's put you back in time as a serviceman. Do you want to lose your life by going up through the Balkans? Or are we just going to call it a day? And also Stalin has a land power with a huge army in place. He is fighting me at an advantage. Do you really want to lose your life doing that? And you have US leaders looking at it and going, this is going to be good enough because the costs are too high.", "Dwarkesh Patel 0:30:52", "But there's this narrative that FDR was also a communist sympathizer. You’re saying that it wasn't that and it was just that it strategically didn't make sense?", "Sarah Paine 0:31:00", "Well, A, he died so we don't know fully. But he was mobilizing the United States to prepare for war while America firsters were saying, “No, no. Isolationism is the way to go.” So he was preparing all of that. And yeah, if you're going to defeat Hitler as an offshore power like Britain and the United States are, as in the Napoleonic Wars, you need a local continental power with a huge army if you're going to deal with that continental problem. Russia has that army.", "So you're going to cooperate with Russia in the near term to get rid of the really big problem, which is Hitler, who's far more efficient than the Russians are. He's also located near the high value, industrialized parts of Europe whereas Russia is further away. You're going to deal with Hitler first.", "And then if you're going to have Stalin as an ally, of course, you're going to say nice things about him. That doesn't make you a communist. That is just managing an alliance. What are you gonna do? Spit in his face while you're fighting the war?", "Dwarkesh Patel 0:32:06", "But why believe him when he says that there will be elections in Eastern Europe once the Soviet Union?", "Sarah Paine 0:32:13", "Well, we tried. We tried very hard. Britain tried very hard, particularly in Poland, because think about why Britain got into the war? It's over Poland. And it was just not feasible. When you get to the end of the war and the Red Army is fully in control of Poland, there's nothing we're going to be able to do about it. Americans have had enough of the fight.", "Dwarkesh Patel 0:32:35", "There's cases in history where it seems like there is a hinge point. Let's say after the Bolsheviks take over Russia or after Mao was consolidating Communist control of China, where it's always hard to plan some takeover. But in those cases, it seems like the way in which they got in was so tenuous and contingent that… would it have been possible and desirable for us to extend greater efforts to prevent these regimes from getting in the first place? Where we've had to deal with the consequences of them getting into power for decades or sometimes centuries afterwards.", "Sarah Paine 0:33:13", "If you're a Russian of any persuasion, I suspect you'd be really angry that some American from across the seas is going to determine what kind of government you live under. That's a problem there. You asked me earlier about over extension. So we're going to go around the entire world telling others how to live?", "And then there's another issue, which is that people like Stalin are a reflection of the country at the time. The notion that one guy, Stalin, waves a magic wand and everyone does what he wants and that he is responsible for these millions of deaths? There are millions of people pulling millions of triggers for all these deaths. There are a lot of people who think it's a good idea. They are a reflection of the place.", "We personify this with Stalin to understand this. It's the earlier issue about generals. We personify how wars turn out often by generals because it gives us a grasp on it. But it's a much more complicated thing.", "Dwarkesh Patel 0:34:17", "How contingent was the global rise of communism? You've done so much research on Russian and Chinese history, in what percentage of worlds do the Bolsheviks take over in Russia? And if that doesn't happen, does communism spread to China and beyond? Because there's not the Bolshevik example and support of these global communist parties.", "Sarah Paine 0:34:39", "The Russian revolution is essential to the spread of communism.", "Dwarkesh Patel 0:34:43", "What are the chances that the White Army could have won? Does that make sense?", "Sarah Paine 0:34:47", "Unlikely. If you look at Russian rail networks, they had the two centers, Moscow and St. Petersburg, which the Bolsheviks controlled. So if you're anybody else, it means you're on the end of these different railway lines and there isn't the ability to link up with everyone. Whereas the Bolsheviks at the center can fan out. They also occupy the industrial centers. So it gives them the ability to pick off their enemies in detail and win that thing.", "Dwarkesh Patel 0:35:17", "How about in China? What are the odds that after the war, the nationalist could have consolidated control of China?", "Sarah Paine 0:35:23", "It's difficult. The nationalists don't get the credit for all the fighting of Japan. There was no way to get Lend Lease aid to them during the war. If you want to get Lend Lease aid, and you've got to have ports and railway systems. And if you look at China, the Japanese did a very effective blockade of China's coast. So we're trying to fly stuff over the hump, which is the Himalayas, and you're flying jet fuel over the hump so that the planes can then use them. It means that there's no way to supply the nationalist armies.", "Yeah, they do some things in Burma, which arguably is a terrible mistake. I suspect it would have been far better leaving Chiang kai shek with all that stuff and then it would have been useful for him in the final stages of the war to get a few wins against the Japanese to make him look good. But basically the Japanese had eviscerated his armies. And then think about it as the communists. You constantly blame the incumbent government. “Oh, all of our problems. It's all about the nationalist corruption.” And they are corrupt. Don't get me wrong. But the reason they're having troubles is because of the Japanese. So the Japanese did end the nationalists.", "Dwarkesh Patel 0:36:39", "One interesting point you made in your book was that not only were so many nationalists killed in the conflict with Japan, but there was a selection effect that the most competent and brave were the first soldiers to die. And that left not the best of the crop left to fight the communists.", "Sarah Paine 0:36:58", "That would be speculation. That's also a comment that's been made about World War I, that Britain, for instance, lost so many of its best and so they don't go on to have children. They don't go on to become strategic leaders and are unavailable in World War II.", "Dwarkesh Patel 0:37:17", "Can you talk more about these consequences of what are the kinds of people who are most likely to die in a war and what are the broader consequences for it?", "Sarah Paine 0:37:25", "I have no idea. How would I know if the statistical evidence isn't there?", "Dwarkesh Patel 0:37:29", "I mean, obviously the able-bodied people are the most likely, right? And then the more able-bodied though...", "Sarah Paine 0:37:36", "It depends. A lot of the people who died in World War II are civilians and they starve to death. Huge numbers. I can't remember the statistics, but tens of thousands of Japanese are dying in ‘41 and you finally get up to hundreds of thousands in' 44, but it's going into the millions in '45. And it's because of starvation.", "Dwarkesh Patel 0:38:06", "Is this the intentional starvation of the Hunger Plan ?", "Sarah Paine 0:38:09", "It's war time. It's just facts. In a war where you've destroyed all transportation and the ability to get goods anywhere and you've killed the farming population all over the world, you get famine. And by the way, this is an argument for when people say, “Can you have avoided the atomic bombs?” That ended the war really fast and probably saved millions of lives because they didn't starve. The war was over and all of a sudden you're starting to ship food around.", "Dwarkesh Patel 0:38:38", "Why did Germany and Japan continue the war after it was obvious that they would lose? And speaking of the deaths, didn’t a huge chunk of the deaths happen after ‘43 when it was quite obvious that they were going to lose?", "Sarah Paine 0:38:54", "I know. Well, it's because the leadership was all on death ground and the population's been fed this story that if the other side wins, they're all going to be murdered too. Wars are easy to start and they are very tricky to end. And this has been your life too, right? You have watched the wars in Iraq and Afghanistan. Easy to start, very hard to get out of. And now we're into Ukraine and we'll see how long this goes on.", "Writing history", "Dwarkesh Patel 0:39:23", "Speaking of which, a broader question is, how well do you think the insights of scholars like you have been integrated into the thinking of military leaders? Where people like you have written these extensive books about how empires overextend and how invasions can be more complicated than you think. To what extent does that actually percolate to the military and civilian leadership that they would decide to do an Iraq war and Afghanistan war?", "Sarah Paine 0:39:48", "You're asking the wrong person. You need to interview those people and ask them what influenced them. At my low level in the weeds, I work at the U.S. Naval War College, we have officers from the United States and all over the world who come on in. We assign them readings from the kind of scholars you're talking about, what we do in strategy and policy or case studies about wars, and have them think about a lot of the kind of questions you're asking is what we ask students of. So we assign all these things. How much it influences them later in their career, you'd have to ask them.", "Dwarkesh Patel 0:40:22", "But surely you must be optimistic about it. There's a reason why you do this work, right? Presumably you think that better understanding these previous situations helps leaders now make better decisions. I'm curious to what extent you think that pipeline is functioning?", "Sarah Paine 0:40:40", "I have no idea about the pipeline, but I can answer about me. I grew up during the height of the Cold War and started graduate school as it was ending but I didn't know it was ending. So I had a full up Cold War education where I did study the Soviet Union when it was and they had all these huge programs which no longer exist.", "If you want to make good decisions, you have to be knowledgeable. You have to be able to make an accurate assessment about yourself and the other side and so I've devoted my career to understanding the other side. One of the things that I think Americans are particularly prone to is what I call half-court tennis. They study the world from their point of view. So they're always focused on Team America. It's like half-court tennis, they look only at their side of the court. Balls come from mysterious places. Some people get new rackets. Who knows where they come from? And then somehow I'm going to play this game. Think about people who love football in the States. They know about all the opposing teams and who's strong and blah, blah, blah. Well, foreign policy, you need to understand the other side. It's not just about me and it's all about the interaction.", "Growing up in the Cold War I'd heard that the Russians were really evil so I thought I'd learn more about it. I first started learning about Russia and then I decided I was going to learn about China. And I realized, “Oh, Japan's in there.” So I got to learn about Japan and wind up studying their relations and tried to be open minded and understand the world from their point of view. Not that it's right or wrong, but just trying to understand it.", "So for your point of view, when you're picking up a book and you want to avoid half-court tennis, give the book a 30 second read. What's that? Go flip to the bibliography, flip through it for 30 seconds, and see if at least some of the citations are in the languages of the countries being discussed. Because how much respect would you have for a book about the United States that has not a single source in English? I suspect the answer would be zero.", "Dwarkesh Patel 0:42:44", "How do you consume these? Do you read the translations?", "Sarah Paine 0:42:47", "Oh, no. I can make bad spelling errors in numerous languages. [Laughter] I read these things slowly with lots of large dictionaries. You say you've got my book, Wars for Asia , go take a look at the footnotes in the back. You'll see they're in Russian, Japanese, and Chinese.", "Dwarkesh Patel 0:43:08", "Yeah. It is perhaps the best military history I've ever read. And also, I've really enjoyed your book on Japan, The Japanese Empire . And you also have these other textbooks and collections of essays, which I highly recommend because of the thorough nature and the diversity of sources.", "Let's actually talk about your research. Maybe tell me more about where you've done research around the world and throughout your different projects. Because I think people might not know the extent to which you've dug into the trenches on these over the years.", "Sarah Paine 0:43:42", "I have co-edited a series of books on naval operations with my husband Bruce A. Elleman . United States is a maritime power. If you want to understand the maritime underpinnings of US security, go to those. Particularly a book on peripheral operations called Expeditionary Warfare , which is what we do. The expedition will be crossing the ocean to get there. And commerce rating and blockades which is a key to US foreign policy. And the problem is if you exercise a continental foreign policy, you're prone to get into all sorts of wars you don't need to get into. But because we have huge oceans that separate us from our problems, it's a major point of strategy of whether to intervene or not intervene. So it's important to understand the maritime position in the United States. So there are these maritime books that if you're interested in learning about that. They're not fun reads. [Laughter] But they exist.", "You were asking me about research. Back in the day, I spent a year in the Soviet Union, when it was, and then a year in PRC right after Tianmen was delayed for a year. That was quite exciting. There were armored trucks all around Beijing University, not to protect the students, but to neutralize them and then it happened. And then three years in Japan over the years and three years in Taiwan over the years of just reading deeply in the archives. Donald Rumsfeld has been much vilified as the former secretary of defense. But one of his quotations that I love is, he said he wasn't worried about the known unknowns because he'd go after those, but he's worried about the unknown unknowns. And that's why you do archival research. What is it that I know nothing about that is actually terribly important? So I did a bunch of archival research in Japan. That dries up after World War II, you can get into military archives and their foreign ministry archives, but then it's much less afterwards. Well, China and Russia, they've both closed down. There's no way I wouldn't go into either country at this stage.", "Dwarkesh Patel 0:45:59", "Oh, you think even more so than after Tiananmen or during the Soviet Union? It's more hostile now than then?", "Sarah Paine 0:46:03", "We were in the Soviet Union while Gorbachev was in power. And I actually got into the foreign ministry archives there, but only for the Tsarist period. And in China, you could still get into various archives. I was using the Qing archives and then the nationalist archives were much more closed. Now the archives are just plain closed. Go to Russia now, you will get yourself arrested. And China likewise, they've shut down all of these archives.", "So to compensate for that, for the last 10 years I’ve spent two months every summer going to the U.S. presidential archives. Starting with, we didn't do it quite in the same order but, Truman, Eisenhower, and then this last spring, we just did the George Bush Sr. archives. And now I'm in Britain using their wonderful national archives, looking particularly from the 1917 to 1945 period researching the Cold War. You might go, “Well, but the Cold War didn't begin in 1947.” I would argue the Cold War began in 1917 because we have this notion that, “Oh, I decide when wars begin.” Not quite if the other side declares war on you. And the Bolsheviks made it very clear that they had declared war on the capitalist order. Britain was much more attuned to this and worried about communist ideas filtrating to labor movements and all this other stuff. So I'm reading their archives, whereas the United States was much more asleep at the switch.", "Dwarkesh Patel 0:47:45", "To what extent did the educated classes being naive about communism play into the delay of the United States into recognizing the Cold War? You talk about this in your books about the Red Star Over China by Edgar Snow.", "Sarah Paine 0:48:13", "And also Jack Reed's Ten Days . George Bernard Shaw made a comment that if you haven't been a socialist before the age of 30, you have no heart and if you remain one after the age of 30, you have no brains.", "It's the idealism of it all. World War I seemed to vindicate so much of what Karl Marx said about how capitalist countries are just imperialist. They don't care about the young. They just throw them over the trenches and destroy them.", "Dwarkesh Patel 0:48:51", "Which is so ironic given how communist countries have dealt with their populations and how callously they have wasted their young...", "Sarah Paine 0:48:59", "That's called the Big Lie. And it is amazing how these big lies live and are very powerful but then when they crumble, they're gone for good. Even when I was in graduate school, there were a bunch of people saying, “Oh, Russia has no drug problem.” because they're believing the Kool-Aid and lies that's being dished out from Moscow.", "Dwarkesh Patel 0:49:26", "What explains the credulity of these people in the US and Britain on the Soviet Union's claims about everything from its economic growth to the way it dealt with its own populations? Why were some people so asleep to this?", "Sarah Paine 0:49:46", "I don't know, but I suspect that the sins of the West are really obvious because we have an open press. And for anyone who'd been through World War I and had any male member in the family who'd been at the trenches and came back and talked about it, it was pretty horrific. And you can't believe that anything's going to be worse than that. Or if you go to China under Chiang Kai-shek, he had a semi-free press and that's how we know about things. You look at the incredible corruption going on there, you go, “Well, how can anything be worse than this? Well, actually, it could be a lot worse than this.", "Japan in WW2", "Dwarkesh Patel 0:50:25", "You talk in your book The Wars on Asia and elsewhere about the Japanese occupation of Manchuria and how they industrialized the region and at the end, it had 50% higher GDP per capita than the rest of China. And it was the most industrialized part of Asia outside of Japan. Japan also colonized Taiwan. We see those are some of the wealthiest parts of Asia now. And then we also see the impact of the communist counterfactual in other parts of Asia.", "In retrospect, what should we make of the impact of the Japanese occupation given the wealth of Korea, Taiwan, these other areas now? And how much the industrialization under Japan mattered?", "Sarah Paine 0:51:08", "Let's go back in time. If you get before World War II, which is one of the really huge atrocities that the Imperial Japanese Army commits without any doubt, going into someone else's country and committing atrocities is not a winning game plan. If you go before that, if you think of the Meiji Restoration, they colonized Taiwan and they colonized Korea. It was brutal in Korea because the Koreans resisted and then the Japanese got nasty. Taiwan was much less resistant. To this day, the Taiwanese do not have this bitterness about Japan that the Koreans do.", "I'm not going to deny that there wasn’t any brutality. There was brutality. But what the Japanese did when they moved into Korea and Taiwan is they set about creating infrastructure. They put in train lines. They set about educating people. Do they put them in the top positions? No, the top positions are for Japanese. But they do things like publish all kinds of magazines. Incredible numbers of technical journals about agronomy and things so that you have this incredible improvement of output because you're spreading knowledge to the Taiwanese and to the Koreans.", "And because they do it from the bottom up, unlike the United States, they control the police force and the locality and from there all the way up so they really have local control. When the United States goes into places like the Philippines, which happens at more or less the same time, the Philippine war is like early 1900s. The United States wants to deal with English-speaking elites, sound familiar, who are located in the capital. And so we try to negotiate that way. But it never modernizes what goes on. These very traditional and actually not conducive to growth relationships of massive land control by not particularly efficient landowning classes remains.", "The Japanese do it by literally building local organizations from the bottom up. Do not get me wrong, it's not remotely democratic. People who disagree at the time are treated brutally. But it turns out it's a very effective means for economic development because when they're booted, in 1945, the Koreans and the Taiwanese actually have something to work with. And then they're often running.", "Chiang Kai-shek, who'd been horribly corrupt in the mainland, he could not do land reform in the mainland. Why? Because that's his officer corps. They will kill him. In Taiwan, he can definitely redistribute Taiwanese land. No problem there. He comes in with all the weaponry and redistributes the land. It gets bloody doing it. He offers the Taiwanese bonds. They think it's going to be like the lousy bonds that he distributed on the mainland. Turns out those bonds were worth money. I don't know how many years on that it was that people actually collected on their bonds for all of this. So the Japanese actually had many of the pieces for a really effective plan for economic development.", "And if you look at China under Deng Xiaoping, who's he imitating? The Japanese. Deng Xiaoping is rather like a parallel, his generation to the Meiji generation. And think about what came after the Meiji generation. Bad news. Well, we're into Xi Jinping. Bad news. We're into bad news. But do not deny the achievements of the Meiji generation. They're enormous. And then because Japan does all the atrocities, they can no longer brag about these previous things.", "Dwarkesh Patel 0:54:49", "There's so many interesting things there. There's a book, How Asia Works , by this economist Joseph Studwell, where he is trying to analyze why Korea, Taiwan and Japan did so well after World War II.", "In the case of Korea, he tells a story where they have this factory where they're starting to export goods and they're working six, seven hour days. And the floor manager tells one of his underlings that the reparations on which we're supporting this economic growth from Japan came at the cost of your family being raped by Japan. So this is basically blood money that we're using to grow the economy. You better work hard to make sure it was worth it.", "Sorry, the broader question I wanted to ask was, the economic development that Japan is doing in Manchuria, Korea, Taiwan, if they hadn't made this mistake of fighting a war with America.. Let's say something like the Japanese Empire survives and isn't crazy militaristic. I don't even know if this is a question, but I'm just thinking about the counterfactual where you could have a really wealthy and prosperous area of Asia.", "Sarah Paine 0:55:59", "Let's go back to our wonderful America of the 30s, the Great Depression hits and it's a mess. And so what we decide is we're going to have tariffs. This is the Hawley-Smoot tariff in 1930. We're just going to wall it off because we got to keep jobs for Americans.", "This is half-court tennis. You're not thinking about what everybody else is going to do? Retaliate exactly in kind. So for the Japanese who had been good citizens within the international order, who had maintained really high positions in the League of Nations, which we'd been irresponsible and never joined, this pulls the rug on all of them. This pulls the rug on Japanese who said we needed to cooperate with the international order. Japan is trade dependent. What are they going to do? No one will trade with them. Their closest people won’t. They look at the world and go, well, we need an empire because we have got to have it big enough so that we get food and the basics for us. ‘30 is Holly Smoot and ‘31 is the invasion of Manchuria.", "Let's go back to grand strategy. This is Americans having no grand strategy of not thinking deeply. Life is an interaction. I can tell you whatever I want to but then you're going to make your own decisions. If I don't consider what your decisions might be, I'm going to be in deep dark trouble.", "And I think about Hawley and Smoot. They didn't live to see what they wrought. A lot of young men across the world died because of the failure of people like them to think more broadly. Think about the lesson of the Great Depression. The moment the international economy starts getting cold, there are meetings of bankers and foreign ministers, the world over to prevent it from going crazy ever again because they realize what the consequences are.", "You take people who are poor already and then you have a Great Depression, you get desperate decisions. And then once you start a war, it's very difficult to stop. So it's a great lesson.", "Japan then is making an ugly decision because it's stuck. So they go into Manchuria, which is where all their investments are to protect them. China's got this crazy civil war going on. Japan, if it had just sat in Manchuria, they probably would have been just fine because they do stabilize Manchuria. They are bringing some income back in. But the moment that they escalate big time in ‘37, they ruin their economy. And it takes a number of years to play out fully. It's a disaster for themselves, most of all.", "Dwarkesh Patel 0:58:33", "Just a broader picture of what I'm learning from these military histories and especially your books is that there’s these bigger forces of like, which country has more production and so on, but then you can have these individual mistakes, a single decision point by a single person, that cascades and then you over extend in China and you need more oil and you feel like the need to invade America. And the importance of leadership in preventing these sorts of catastrophic mistakes.", "Sarah Paine 0:59:01", "It's what I would call a pivotal error. Japan's decision to attack Pearl Harbor is a pivotal error. They are already grossly overextended in China. They want to cut out foreign aid. Remember that Hawaii is not a US state, and it doesn't become a state until ‘59 or something. And they probably take this racist view of Hawaiians of whoever they think Hawaiians are. So their idea is that they are going to take a newspaper and slap the dog on the snout and the dog will quit.", "Instead you create great power allies all across the Pacific for the Chinese. So yeah, there are pivotal errors that you can make at which point there is no return for the status quo. You've seen Putin make a pivotal error. He was getting away with hanging on to the Donbass and the Crimea. Now he made the pivotal error to try to take the whole enchilada. There is no going back on that error.", "Ukraine", "Dwarkesh Patel 0:59:58", "Actually, I want to ask you about that. Japan invades Manchuria in 1931. Hitler invades Poland in 1939. In retrospect, we think of them as part of the same great global conflict, whereas they were separated by eight years. I wonder if you think of Ukraine today as eight years down the line or the things that could come, maybe not as a consequence but at the same time as this, which could lead to another global situation. Do you think that it could cascade into something like that?", "Sarah Paine 1:00:31", "Of course, yeah. This is the problem with all of this. Of course, it could and there are many people working to prevent this from happening. You see all of these meetings where our leaders are meeting with each other. If you get into some global war with people with nuclear weapons, when the losers decide that rather than losing, they're going to go for one more roll of the die, which is a nuclear weapon. Then the question is whether the people below them will actually implement the order, etc. Think about low probability but high consequence events. I don't know what the probability is, but I know the consequences are huge.", "Dwarkesh Patel 1:01:13", "And the probability is iterated over many years and decades.", "Sarah Paine 1:01:16", "Right, yeah. You've got to always not use nuclear weapons, right? That Pandora’s box has already been opened, the nukes are there.", "Dwarkesh Patel 1:01:25", "Especially if there's no retirement plan for Putin.", "One of the things that's interesting from Wars in Asia, is that there are things that are seen by one side as a deterrence are often seen by the other side as a provocation.", "The embargoes from the U.S. are seen by Japan as a deadline to attack. I think you had some other examples like this. People who are less empathetic to the Ukraine cause have said, “Extending NATO, which we thought would be a deterrent, was that actually a provocation for Putin?” What should we just generally make of that lesson?", "Sarah Paine 1:02:06", "First of all, I think you should look at the people living in the countries in question. Before we decide that Americans are the important people in the world, or since we're in Britain now, British, and therefore anyone in between doesn't count, I believe that's wrong.", "All of the countries that joined NATO desperately wanted to join NATO. They've had a whole history of Russians doing terrible things to them. I'm not making it up. This is what Russia's been up to. They have been correct that Russia is going to do more terrible things. They were correct in doing everything they could to get into NATO and also be in the EU.", "It's incredible to remember what the standards of living of people in Eastern Europe that the Soviet Union had dominated were and what it is now. Since they have been freed of Soviet domination, it's been a massive compounding of standards of living. It's allowed people your age to travel the world and have a lot of aspirations in their lives.", "Dwarkesh Patel 1:03:14", "They can do podcasting. [Laughter]", "Sarah Paine 1:03:16", "But when you talk about, “Oh, should we deny these things because we got some egos in Russia that want to maintain a continental empire?” You or I cannot change how Russians think about things. How they think about the things is their decision. But if you look at Europe as a peninsula, you're better off with more insulation from Russia than not.", "Dwarkesh Patel 1:03:46", "But isn't this another case of not thinking in terms of both halves of the court in tennis where compared to the possibility of nuclear war, just nudging that number up and down matters far more than whether another country in Eastern Europe gets to be part of NATO or not?", "Sarah Paine 1:04:02", "Hope has been said to not be a strategy but the hope was in trying to get Russia to join the party. Trying to integrate their energy supplies into Europe and paying them good money for it. Having them make lots of money on that and hoping that they would invest this into their road system, which is lamentable, and hoping that they would invest this into cleaning out their business laws. It is horrendous trying to run a business there. As you can see right now as different things get nationalized and taken over. Different successful business leaders wind up unaccountably dropping out of six floor windows and old people who always seem to fall downstairs. I think that's a special way of offing people.", "That was the hope. Join the party because you will become wealthy too. Russian standards of living have been stagnating for quite a while. Putin's model of basically taking over your neighbor's stuff and bringing home whatever you haven’t bombed flat is not an efficient way to make wealth and you're killing so many people. So I don't believe that denying people of Eastern Europe saying, “Well, actually because the Russians have such an attitude, you get to be their serfs forever.”", "Dwarkesh Patel 1:05:27", "But there are broader considerations for the same reason that you mentioned and we were talking about earlier, that it would not have made sense for Americans to have kept fighting further to prevent Eastern Europe at the time from succumbing to the Red Army.", "Sarah Paine 1:05:44", "The Ukrainians are doing the fighting right now.", "Dwarkesh Patel 1:05:46", "But supplied by tremendous amounts of Western aid.", "Sarah Paine 1:05:50", "Yeah, they are, but that's pennies on the dollar. They're willing to fight for their country.", "Dwarkesh Patel 1:05:57", "It's not about the cost to the United States. It's not like it’s 40 billion or whatever. How does this nudge the nuclear war numbers or the nuclear war probabilities?", "Sarah Paine 1:06:03", "What's the nuclear war going to do to Putin? The Ukrainian forces are dispersed. What's the target? It's going to be Kiev, I suppose?", "Dwarkesh Patel 1:06:13", "Or, I don't know, he thinks that he's out of options. So let's go bomb NATO headquarters or something.", "Sarah Paine 1:06:18", "I think the Chinese have whispered in his ear, and this is pure speculation, “Buddy, if you do this, everybody on the planet is going to get nuclear weapons and all of a sudden we are going to have this little small club of people with nuclear weapons and the consequences are going to be rather horrendous.”", "And also look at China. It has more nuclear armed neighbors than anyone on the planet and some of them are totally nuts.", "Dwarkesh Patel 1:06:41", "Putin himself, right?", "Sarah Paine 1:06:42", "Let's try North Korea for the country that's got starvation in the 21st century.", "Dwarkesh Patel 1:06:49", "Although on this point, another thing your books have emphasized is how often leaders make mistakes that make very little strategic sense and are very stupid. You could imagine, even though it would be very stupid for Putin to escalate, especially if there’s no retirement plan, I could imagine him doing very stupid things.", "Sarah Paine 1:07:10", "Let's put stupid out of it because saying someone is stupid is not explanatory. Saying they are stupid means you write off understanding their reasoning. A lot of Westerners, when they think of governments, think about governments operating in the interest of their population. So when their decisions don't improve standards of living security and things, then we say those aren't good decisions. But that's not the game. In China, for instance, it's all about maintaining the monopoly of the Chinese Communist Party to rule. If that conflicts with having higher living standards, you better believe they're choosing Communist Party. So you go watching those kinds of decisions going on right now where their most talented entrepreneurs are being relieved of their enterprises. Or Putin, as you said, he's made a pivotal error. He has no back down plan. He only has a double down plan. Expect him to double down forever. And then the question is whether all the oligarchs want to keep doubling down with him and his generals or whether they give him something extra in his cheerios some morning. Who knows?", "Dwarkesh Patel 1:08:20", "I feel like this is one of the lessons you were actually talking about earlier, where you don't want somebody to feel like they're up against the wall, on death’s ground, where even if it would be an unjust sort of resolution, some sort of ceasefire where Putin can save face can be good. I wonder if your historical lessons would bring you to that conclusion.", "Sarah Paine 1:08:39", "Putin will be back for more and understand that that's just the case.", "Dwarkesh Patel 1:08:46", "But then what is the solution? We can't have unconditional surrender unless we're...", "Sarah Paine 1:08:49", "No, no one's marching into Moscow. The United States has done this for many years. You don't recognize the territories that he's taken, which means the Russians are stuck with a sanction regime of some type forever. And you go, “Oh, well. that'll weaken. Certain people won't adhere to it.” It will depress Russian growth forever, which goes back to an earlier part of this conversation of sanctions being really powerful.", "And it was some Russians who themselves said, at the very beginning, “Oh, no. We are going to be like North Korea.” Yeah, you will be. That's exactly where he's heading them. We don't control when the Russians reassess. We can't even predict when or whether they'll reassess. We can't predict whether there'll be some kind of incipient civil war in Russia which would be destabilizing by definition. Who knows how that goes? But the Ukrainians are fighting for their country.", "One of the things you asked me in an email is whether superior finance wins wars or something. It's superior alliance systems that win wars. And it's interesting that the Europeans, particularly the Eastern Europeans, are the leaders of all this. Isn't it fascinating that the Finns and the Swedes, who forever were neutral, are now all over this? And they know. The enemy gets a vote. The Russians have a vote. As long as the Russians are playing this game, our best bet is to support Ukrainians. Because unlike Iraq and Afghanistan, where the locals do not do the bulk of the fighting, this is where the locals are fighting and that's key. You also were emailing me and asking me about successful versus unsuccessful interventions. When the locals do all the fighting, that's when your best odds are of helping them.", "Japan/Germany vs Iraq/Afghanistan occupation", "Dwarkesh Patel 1:10:50", "Actually speaking of Iraq and Afghanistan, after World War 2 our occupations of Japan and Germany were very successful in rooting out the toxic ideologies and completely transforming the society and culture. Whereas in Afghanistan and Iraq, we didn't have the same effect. What explains why those occupations are so much more successful?", "Sarah Paine 1:11:11", "Easy. One is a case of rebuilding institutions and the other one is building them from scratch. You can rebuild things rapidly. Think of how after the war Western Europe rapidly repaired and rebuilt bombed buildings. Both Germany and Japan had an extensive list of functioning institutions from local police officers, offices, to educational systems, to local provincial governments and running the train systems and businesses and all of this have been absolutely functional. And so finding the expertise to recreate that is easy. And of course, Germans and Japanese living there are very interested in rebuilding.", "I know more about Japan than Germany and then what the key thing the United States then did is — the Japanese are hemming and hawing over what their constitution was going to be like and so MacArthur finally got fed up and in one week he got his staff to write this constitution and they're running around Tokyo going to bombed out libraries trying to find examples of Western constitutions so they can put it all together. It was long before there's an internet where you can figure these things out and they're figuring out what the constitution is going to be.", "They cook it up over the week. And what the Japanese are thinking is “Well, the Americans are going to leave. So we'll go along with this constitution but once the Americans are out, we're going to do whatever.” And what the first post-war Prime Minister Yoshida said, “Well, we thought we could change it back.” But he realized because of universal suffrage, allowing women to vote, and there had been a certain amount of land reform, he said there wasn't any going back. It permanently changed the balance of power in Japan. Another feature is that the Imperial Japanese Army had disgraced itself. Their strategy had led to the firebombing of the home islands, talk about a total failure.", "It’s the same sort of thing in Germany. You have universal elections. It took a while to get the Western Zones united because of all the fighting with the Russians over their zone. You eventually get two Germanies, etc. But then you have a very competent post-war generation, both in Japan and Germany, who understood full of the horrors of the war that they'd been put through as conscripts. And they are really intent on rebuilding their societies and they're the miracle generations in both countries.", "There's no parallel for that in Afghanistan and Iraq, right? They've never been developed countries. Germany and Japan were developed countries. And then you think, well, how long does it take to become a developed country? Some people say centuries. And then there's a whole other piece, which is that the Germans and Japanese had a real sense of nationalism. So you don't have to worry about nation building because they have a sense of national identity. You do worry about state rebuilding. So we were helping with state rebuilding.", "In Afghanistan and Iraq, there's no sense of a nation. I'm no expert on these parts of the world, but my understanding is you have these very different ethnic groups, many who want to kill each other. You've just got a civil war going on there. We're talking about a death ground kind of civil war where the ones in power just ruin the others. And when the others get in power, they ruin the other people. That's what's going on in Afghanistan and Iraq.", "In addition, because they're internal locations in the middle of continents they're surrounded by a variety of neighbors. And if you look at those neighbors, you go, “Ooh. A bunch of those people are going to intervene.” And they're going to intervene in very destabilizing ways. Japan is an island. It's hard for people to intervene. Germany, we put a lot of money into it with having troops and getting the German army and other things up and running. This will be Ukraine's future where they will be… well, they already apparently do have the finest army in Europe and then they're going to make it very highly defensible before it's all over. And Europeans as a group understand that it is absolutely in their interest to have an impregnable border around Ukraine. It protects them all. Europe doesn't threaten Russia. They would love it if Russia would join the party. Join the rules based order. You'll make money. You'll do well. Except the oligarchs in question, this real minority of people who run Russia, they personally won't do as well. But now they're war criminals, so they're out of luck.", "Dwarkesh Patel 1:16:17", "Speaking of miracle generations, the Meiji generation, as you have written about, learned so many reforms from the West, improved every aspect of Japanese governance and economics and education and law. And within the generation, you have people coming to power who make quagmire after quagmire, make mistake after mistake. There's some cases where countries manage to solve the succession problem after a really competent generation. For example, Singapore after Lee Kuan Yew. It seems like the government has kept up the system which promoted such efficient bureaucrats. Whereas in other cases, like after Bismarck in Germany, you have the mistakes that led to World War I.", "What was the failure that the Meiji generation made that their level of grand strategy and insight was not carried over?", "Sarah Paine 1:17:13", "I wouldn't pick on them because they're brilliant. It's amazing what they achieved. It wasn't perfect but no one's perfect. I wouldn't pick on that particular generation. They're brilliant. As are Japan's post-war leaders.", "The way I look at it goes back to your initial question about grand strategy. Are institutions really important? Institutions structure decision making. Now, it's very difficult to figure out what types of institutions to build. And when you see failings in them, you know I've got to do something next time. But this again is the brilliance of this evolving maritime order in which we live, where people sign onto the things in which it's in their interest to sign on to them. You sign on to treaties and then you have provisos of the parts you don't want. And you join these international organizations and then you influence how they develop, etc. So these organizations have been instrumental, the ones built right after World War II in holding the peace.", "MacArthur's constitution and then Japan's subsequent leaders, have worked on improving the institutions that they have. But institutions take a long, long time to build. I think about it as sort of like a spider's web. So that you spin, spin, spin this thing that's like gossamer, but then you spin enough of it and then it really holds. But then there are people like Hitler who come through and they undo the work of others. Bismarck, when you ask about him, it's highly personalistic. That's not about an institution. That's about a guy leveraging the king. There's a reason for getting rid of royalty running the show. And then, yeah, there are emerging institutions in Germany, a general staff, and some other things that are very important.", "Dwarkesh Patel 1:19:00", "I was about to mention that another generation that managed to create good institutions was the American founders. But then there was also the failure that led to the Civil War, right? So even there some institutions were weak.", "Sarah Paine 1:19:12", "Slavery, our original sin.", "Dwarkesh Patel 1:19:15", "Let’s talk about Taiwan. That's where the nationalists went after the Chinese Civil War. I think until recently the narrative has been that the CCP is incredibly competent and very good at engineering good policies and economic growth in China. And then we look at Taiwan and obviously it's so much richer than China on a per capita basis. Would that have been what China would have been like if the nationalists had remained in power?", "Sarah Paine 1:19:50", "Unknown. Because when the nationalists came to Taiwan, they were in really deep, dark trouble. And one of the ways that Taiwanese have maintained the moral high ground, which is necessary for them in order to guarantee foreign aid, is by being democratic. Their really exposed position put enormous pressure on them to democratize. Because the United States is sitting on them that they need to get democratic.", "Also the nationalists engaged on a real comprehensive after action report on Taiwan asking why did we lose? Well, it was this incredible corruption and the need to do land reform. I don't think land reform was feasible for Chiang Kai-shek on the mainland. Why? Because that's his power base, all the landowners. And if you try to reform them, you'll get a headshot. Whereas in Taiwan, it got bloody doing land reform. The local Taiwanese did not appreciate getting expropriated and there were massacres over it.", "Dwarkesh Patel 1:20:53", "Although in Korea, I think Park did land reform and that was native.", "Sarah Paine 1:20:59", "Syngman Rhee apparently did it even earlier. One point in the land reform they did there was that the Japanese had large expropriated areas so it was easier to do that. And so Syngman Rhee does land reform immediately. It's not been well studied and it'd be fascinating if someone actually did study it. It probably helps explain the tremendous loyalty to the South Korean government within the Korean Armed Forces within the Korean Armed Forces.", "Chinese invasion of Taiwan", "Dwarkesh Patel 1:21:25", "In How Asia Works, Studwell makes the interesting point that because these countries are so overflowing with labor that it makes sense in these countries to have lots of peasants who can tend to the land instead of having a single landowners with a large tract of land. Mechanization is maybe not the best idea when you have so much labor that can actually do these things that are not scalable.", "Anyways, going back to Taiwan. The growth rate of modern day China is slowing because of zero COVID, less foreign investments, data intervention in the economy. I think consumers aren't spending as much. And also obviously because of demographics. Does this increase or decrease the odds of Chinese action on Taiwan?", "Sarah Paine 1:22:11", "I honestly don't know. But I think a better way of looking at it is to look at consequences. It's guaranteed that if they go into Taiwan, it is a high consequence event without a doubt. What the odds are, I don't know. However, if you listen to their speeches, they tell you they're going to do it. They're consistent.", "The West learned that you read improbable speeches, right? People read Mein Kampf and said, “Oh, this is a nutcase. No one would ever do that.” Well, it quite accurately represented people. Even in dictatorships, they have to transmit messages to the population and they quite often very accurately tell you what they're going to do.", "Putin has been quite clear of what he's been up to. Stalin was very clear of what he was up to. So let's judge Xi Jinping at his word and he says he's going to go for it. Now, whether he's still in power, I don't know. But here's a problem for which we don't have a solution, that Chinese people have to figure out the solution.The Chinese Communist Party has clearly made the decision that it wants to maintain a monopoly of political power. And for a while there, during Deng Xiaoping, that worked because the reforms that they wanted to make for agriculture and things, they can maintain their monopoly of power but also do things that allow people to get much wealthier. So that went in tandem for a number of years.", "Now we're at the inflection point where you have a lot of educated people and businesses who are really integrated into the world and they want to make autonomous decisions. Also, you have some very large, very successful companies and they have quite a bit of clout. The Communist Party worries about this because what do people want at that point? It's probably some influence over political decisions. And the Communist Party said that's off the table. Okay, if it's off the table, how do you keep it off the table? All the things that you're talking about. This 24x7 surveillance state.", "Since you have a computer science background, you'd have a better understanding of this than I. Think about the cost if the United States had a 24x7 surveillance system where you're literally doing it down to who's jaywalking and who's not, in order to put it into their social security score. Whose kid in their classrooms rolling their eyes at their teachers, I kid you not, and putting it down as a ding on that kid's social score.", "And also, who knows what's accurate and inaccurate on the facial recognition stuff. So they start chalking up the wrong scores for the wrong people. The cost would be incredible. All the people power you're going to have to devote to this. And then all the false positives, which will be incendiary for the people who are falsely considered disloyal. This is where they're at. And now they’re doing the National Disappearing Act. We don't know where their minister of defense is. He's a non-person. And oh, by the way, what happened to the foreign minister? Give me a break.", "Dwarkesh Patel 1:25:22", "Yeah, and the cost is also so gruesome in comparison to the alternatives. There are so many cheap interventions like giving people iron supplements and folic acid or giving them 20-dollar eyeglasses, that could raise the childhood nutrition that is lacking in rural parts of China by so much that it would actually be worth it for the government if you think of the additional tax revenue that healthy people can bring in the future.", "Sarah Paine 1:25:47", "They're not gonna do that. Xi Jinping is another guy making a series of pivotal errors. His handling of COVID was just stupid, right? COVID started in China. Go investigate it and figure out where it came from, and then it would be a non-starter. But instead, they do the massive cover-up, etc. And then all of a sudden, instead of just being unlucky that COVID started there, it's all of a sudden, “No, you're complicit.” A lot of people died across the globe over this, and it came from you, and you clearly were letting people out of the country knowing full well that they were vectors for spreading this disease. This is a problem that the rest of the world's not going to forget. There are just too many millions of people who died.", "Dwarkesh Patel 1:26:30", "You were talking about the cost of the surveillance. It is an interesting question that what percentage of the Soviet Union's GDP was dedicated to the NKVD?", "Sarah Paine 1:26:41", "When the United States was trying to evaluate what the load of the military was on the economy, and the CIA was trying to figure it out to the best of their ability, it was really difficult because they don't have a convertible currency.", "And then they're busy lying to each other. That's a whole other thing in communist states and dictatorships. You're incentivized to lie about everything. Think about the compounding effects of these lies. It means I can't make good decisions because everyone around me is lying, and then it gets worse at the top. You're going to watch as these cascading things happen to the Chinese and the Russians as a result. It turned out that at the end of the Cold War that if you did the calculations for the whole military-industrial complex, well over half their economy is being devoted to this.", "Dwarkesh Patel 1:27:40", "Which is crazy because during the peak of World War II, the fraction of US GDP was like 45%.", "Sarah Paine 1:27:43", "That's right. I think they think Nazis were 55. But don't quote me on it, I may be remembering these incorrectly, but it was horrific in Russia. Let's say we're running little subunits in the Soviet Union but I'm afraid I'm not going to get enough parts in. So I lie about how few I have, even though I got lots more. And then you're busy lying. So then when we compile macroeconomic data, and we don't really know what the price of anything is, we don't know the value of labor, of capital, and we don't know what consumers really want. So Russia is massively misallocating capital, labor, and they don't understand preferences.", "So when you asked me earlier, are people stupid? It's more that they've got all this incorrect data, and by the time they realize that something's wrong, they're already in a deep, dark crisis. This is late in Brezhnev, where the numbers are just a mess. And so when Gorbachev comes on in 1985, it's just a massive implosion. And then he tries to save the beast, and of course his cure kills the beast.", "Dwarkesh Patel 1:28:57", "Speaking of which, after World War II, the companies in Germany and Japan that were making weapons, Mitsubishi and Volkswagen and BMW, became creators of world-class consumer products. For example, even GE in America. Whereas in Russia, they had... is it the T-34, the tank?", "Sarah Paine 1:29:23", "Apparently that thing was built on American chassis. Apparently it's based on our technology that we weren't interested in. Of course, they took the chassis, etc. But in World War II, do not forget about Lend Lease. This is another lesson of World War I.", "Imperial Russia fell. Absolute disaster. Why did they fall? No one bothered to focus on supplying them with adequate weapons. They had huge armies, but they're sending their young men in with their rifles and saying, “When you get there, go pick one off of a dead body.”I think that means they've done no training. And you're just wasting people. Can you imagine being that soldier and thinking this is how my government treats me?", "So in World War II, the emphasis on supplying Russia is huge. So what do we supply? We supply all the things that make them move. So it's rolling stock and getting their train lines going. They produce planes, but they can't do jet fuel. It's high octane. You may know more about it than I do. We're providing all that. Russians will starve. We provide a tremendous amount of food. The word ‘spam’, like in email spam, comes from that canned pork. If you get one of those little spam cans, that'll keep you going. And we sent that all over the world and that's the origin of spam on the computer. People by the end of World War II were so sick of spam. But anyway, we fed the Russians and we provided them all kinds of things that without it, they could not have fought. So fast forward now, we're providing those things for Ukraine. So that the Ukrainians can feed themselves. To keep them in the fight.", "Dwarkesh Patel 1:31:11", "Why didn't the impressive industrial war output of the Soviet Union transfer into the same way that it did in Germany and Japan and these consumer brands?", "Sarah Paine 1:31:23", "This is a big difference between their model for development and the Meiji model. The communist model is heavy industry and largely for the military. And the Meiji, even though they wound up with the big military, it's about getting these consumer products in light industry. And then they go on to do heavy industry. It turns out the Meiji model is a better one. It just works better for an economy.", "And so the Russians aren't interested in doing consumer goods, right? It's all about the communists having monopolizing power and then playing God in whatever region they control and dictating whether other people live or die.", "Dwarkesh Patel 1:32:10", "Let's go back to Taiwan. What are the odds you would give of a Taiwan conflict? Maybe you can give me your over under five years, ten years, twenty years.", "Sarah Paine 1:32:20", "I have no idea. I think you have to prepare for it. And it will best position you to deter it, even though you may fail at deterring it. And then if you fail at deterring it, it will best position you to deal if bad things happen.", "Dwarkesh Patel 1:32:35", "Being at the Naval war college and seeing how people are talking about this. How likely is it that the U.S. would actually directly intervene on the behalf of Taiwan and directly fight the Chinese?", "Sarah Paine 1:32:46", "Taiwan is a country of 20 plus million people. If you look across the globe, how many countries have about 20 million plus people?", "Dwarkesh Patel", "Lots.", "Sarah Paine", "Yeah, probably most countries in the world are above that size. So if you say it's okay to level a country because for the People's Republic to take Taiwan, I presume it's going to begin with an artillery barrage. I presume that's going to be leveling Taiwanese cities, right? We've watched how it goes in Ukraine. I can't imagine the Chinese being less brutal. You're going to say that's okay. Our whole thing about this maritime system of international law. What is the fundamental underlying principle of it all? It's sovereignty. It's the notion that just because you're big, you can't go and destroy someone who's small. This is the fundamental basics of the whole thing.", "Dwarkesh Patel 1:33:40", "Yeah, although in the case of Taiwan, it's hard to argue that one island that is right off the coast of China, and China, unlike any other area, has for decades said that they want to conquer. It's not like they've been saying that once we get Taiwan, we also really want to conquer India and Burma and Vietnam.", "Sarah Paine 1:34:00", "Actually, they've just been redoing their maps. They say what is Uttar Pradesh is ours. That would be a detail, right?", "Dwarkesh Patel 1:34:06", "Yeah, but it's hard to see. They conquer Taiwan and they get emboldened to then conquer Korea? What's the cascading effect to worry about?", "Sarah Paine 1:34:17", "Well, look at Chinese history. It is a continental empire. What is the paradigm? Territorial conquest. Take a look at it. This is it. And they're not off that paradigm. They're still on it.", "Dwarkesh Patel 1:34:28", "On the doctrine of strategic ambiguity, what is your opinion on this? Because in World War I wasn't the Kaiser surprised that Britain intervened on behalf of Belgium? And he was so upset about it.", "Sarah Paine 1:34:44", "I don't know the details, but someone said that man wasn't the sharpest quill in the porcupine?", "Dwarkesh Patel 1:34:50", "But just generally, is it wise to have this, will they, won't they attitude? Does it do a good job of deterring them?", "Sarah Paine 1:34:58", "I think you want to be ambiguous in the United States because otherwise it would enable Taiwan to do crazy stuff. Like under Chiang Kai-shek, if we had been unambiguous, the man might have done crazy stuff and then all of a sudden we get pulled into a World War.", "But if you think about a Taiwan conflict, just because there's a conflict there does not mean the United States has to send its military toe to toe. I would think it would give China a long lasting time out from the international world order. It'll be sanctioned.", "This is what's so tragic about China. Think about how many people have been lifted out of poverty. So many since Deng Xiaoping. Hundreds of millions. It is a great achievement of our lifetime. This has happened since you were born. And why did that happen? It's China's reintegration into the rest of the world, of joining the maritime order, following the basic credit card rules of paying for transactions and then your transactions are also guaranteed.", "That is the win for China, the true win. And taking Taiwan, who needs it? The Taiwanese are perfectly fine doing their own thing. And they've made it clear they don't want to be taken over by force. Who would be?", "The problem in China is a Communist Party wants to maintain its monopoly on power. It used to claim the moral rectitude card. Well, they can't do that anymore. They're so corrupt. They used to claim the economic growth card. Well, that one's going away. They're left with one card. It's the nationalism card. And they're playing it hard because it's a unifying thing for Chinese, for the Han ethnic group in China, who constitute the overwhelming majority, that Taiwan should be theirs. It's a mistake. It will be a pivotal error if they make that mistake. But guess what? Other countries cannot control what the Chinese government decides to do. It's foolish to think you can control it. It's beyond your abilities.", "Dwarkesh Patel 1:37:03", "Although we can control whether we get into a head war with another global superpower. What are the odds you would give to a war between China and the U.S. going nuclear?", "Sarah Paine 1:37:16", "It would be the most catastrophic error imaginable for the United States and China to have a military conflict. There will be no winners. There will be massive numbers of losers.", "Let's talk about nuclear weapons for a minute. Think about how Americans are so mad at each other about wearing a mask or not wearing a mask. Talk about something that's stupid. Talk about something that's not a big deal, wearing a mask or not. If anybody nukes anybody else's city, do you think the world is going to be remotely the same way? Can you imagine?", "We can't even be logical about masks or just letting other people do their thing about masks. We can't even do that in the United States. We have many diplomats who are doing their best to prevent this eventuality but understand that we do not control the decisions of others.", "For instance, I'm going to make a guess that you don't have children. But if you ever have children and little ones that you could pick up and put down, they will wind up doing things that you cannot fathom. You're genetically related to these people. You love these people. And they will do stuff that you think is just wild. You will put enormous pressure on them not to do these things and they'll do it anyway. So the notion that we can take a country of one billion and change and make them do anything…", "Dwarkesh Patel 1:38:45", "But the reason I ask is, is an island of 20 million people worth getting into an altercation that could potentially lead to a nuclear war?", "Sarah Paine 1:38:57", "Ah. The global order, going back full circle, is based on sovereignty. If you allow this, it doesn't mean you have to go to a nuclear war. You just never recognize whatever it is and then you sanction China from then until kingdom come, so that they are not part of the maritime trading order. And you tell them they need to cough up Taiwan.", "Dwarkesh Patel 1:39:22", "Understanding China and the way the government works, could the CCP survive a failure to take Taiwan? If they invade, they fail. And then because of that, they get kicked out of the global order. What do you think happens to the CCP?", "Sarah Paine 1:39:36", "I don't know. But look at North Korea. Talk about a failed place. It's amazing to me how long the Kim dynasty has maintained its power. It's just unbelievable. They're starving. Don't count on any short-term ending. Those countries that are willing to cooperate with each other, not invade, negotiate their disagreements, work through international organizations, improve international organizations, that world is what you want to protect. And you want to allow people to come and join. So if Russia changes its mind, new government, etc, you want to bring them back into just the way Japan and Germany were brought back in. You want to protect that order forever. Our prosperity is based on it. And it involves serious defense spending, etc.", "The problem with the Communist Party, the paradigm of we're going to have a monopoly, it's a route to poverty. Think about it. When the communists took over, they didn't restore the grain harvest that had happened during the Civil War. The 1930s version until after Mao was dead, it's incredible. It's a really lousy system for promoting economic growth, and it matters in a poor country. It's going to determine your per capita standard of living. And the poorer you are, the more that makes a huge difference to you to have somebody say. Communism doesn't produce wealth. It's an incredibly effective way for taking power within a failing state and putting a dictatorship in power. It's incredibly good at that. But it doesn't deliver prosperity afterwards.", "Dwarkesh Patel 1:41:20", "Since World War II, is it fair to say that our Navy has not been tested to the same extent as our Air Force and Army have in the engagements we've had?", "Sarah Paine 1:41:29", "None of our forces have been as tested. Probably the Marines the most in the Army because they're doing land engagements. But if you think that Iraq or Afghanistan is a peer competitor, give me a break.", "Dwarkesh Patel 1:41:41", "I think our GDP is like 325 times that of Afghanistan.", "Sarah Paine 1:41:46", "It is a different event, which is why the Ukrainians are going “Excuse me?” when they get advice from us.", "Dwarkesh Patel 1:41:52", "Given that that's the case, how confident should we be in our $15 billion carriers? The other person that I interviewed in the UK was Dominic Cummings, who was the chief advisor to the previous government, and he said that in the war games, for the British carriers to survive, they would have to exit the zone of contention, which would make them useless. So how worried should we be about our preparedness for a naval war?", "Sarah Paine 1:42:24", "You always need to be prepared. You have to be thinking about it constantly. One, our carriers are incredibly useful in going toe-to-toe with a peer competitor. The vulnerabilities you are describing are absolutely there, particularly if you want to get up close and personal. On the other hand, what a carrier provides you is that it gives you a base all over the world. So if you're not going after a peer competitor, then they're incredibly useful. And we own them.", "So the question now is going forward, do you want to build more carriers? Or do you want to build something smaller that just takes drones? Or what are you doing? That's the big decision and I'm not qualified to answer it. But for the ones that you have, they're tremendously useful for doing these non-peer events. And again, I am not qualified to answer operational questions like these.", "Dwarkesh Patel 1:43:16", "Yeah, I guess I am curious about it because in Ukraine we have these drones that are taking out extremely expensive tanks.", "Sarah Paine 1:43:22", "Bingo.", "Dwarkesh Patel 1:43:23", "The impact of asymmetric warfare. How do you see that shaping up?", "Sarah Paine 1:43:27", "Warfare has always been asymmetric. Isn't that the game? You figure out whatever they've got, and then you do something different, which is the asymmetry.", "Dwarkesh Patel 1:43:35", "Right. Or the thing of just having cheap drone armies that can debilitate billion-dollar equipment.", "Sarah Paine 1:43:41", "Yeah, this is it. And you're very much part of the generation going back to your education in computer science and these technologies. Apparently the 3D printing that they're doing in Ukraine is absolutely going to change things. I don't know to what degree. I'm not an expert. The other issue with the United States is we build a lot of these very expensive platforms, these ships and airplanes, and then you wonder whether you can afford to lose them. Thinking creatively, this is where war games come into play, and planning is “Okay. What would be the value of these smaller things? Can they carry the water when the time comes, etc?” I think you're going to learn a lot from this Ukraine war about what works and what doesn't.", "Dwarkesh Patel 1:44:34", "What is your opinion on how competent and effective the military is in general? Because given that there hasn't been a huge war for quite a while, have they been able to maintain the standards and the efficiency?", "Sarah Paine 1:44:48", "I am not qualified to answer that. I teach at the Naval War College but that does not make me an expert on how the Pentagon runs its business. I think the general feeling about the federal government is that there are incredible inefficiencies but it's very difficult to get rid of them. In the civilian part of it as well. Once people get a federal service job, it's very difficult to get rid of that particular job, etc.", "Dwarkesh Patel 1:45:18", "Are there plans around the Naval War College or elsewhere about how to make the system more modern and efficient?", "Sarah Paine 1:45:24", "I teach in the strategy department, so we do strategy, not all of this other stuff.", "Dwarkesh Patel 1:45:30", "Is the era of great generals over? Maybe you answered this already when you said that we overemphasize looking back how much these generals mattered but for some reason or another, they've become historically famous. People like von Müller, Patton, or MacArthur. Whereas off the top of my head, I can't even name a famous general of Iraq.", "Sarah Paine 1:45:53", "I can name one. You have Valery Zaluzhny , who runs the Ukrainian army. Think about the people you're picking. You're picking people who were part of a global war, a really high stakes war. And then as I pointed out, we use these generals to personify a whole group of people.", "I suspect the ones you're going to find are going to be the ones in Ukraine. And the fact that they've done as well as they have done so far is incredible. And then it's not just the generals there, right? You have Zelensky, who is the public face of diplomacy. It's incredible. From the night, his little sound bite — “ I don't need a ride. I need more ammunition. ”", "And then if you think of the people there who are running the rail system who have kept things supplied or the people who are repairing their electric power plants. There are so many Ukrainians of different professions who are holding that thing together. So, there are plenty of great people to be found there.", "Dwarkesh Patel 1:47:04", "What is the process that leads to the loss of civilian control of the military? For example, in Japan. And why has the US been robust against this?", "Sarah Paine 1:47:14", "In Japan, it's interesting. If you go back to the Meiji leaders, who are they? They're the people who won the civil war against the last Tokugawa Shogun. If you look at their career paths, they had civil and military jobs as they swapped around, and they all knew each other. The head of the Army and the Navy and the prime ministers, they all interacted. But they didn't create an institutional mechanism. They did have a cabinet, but they didn't have a full up legal forcing together of all the civil and military parts of the government and have them operate on a rather level playing field. The Army dominates.", "So when that generation dies, everyone gets much more stovepipe careers. They're much better educated than their parents and grandparents have been but their education would be strictly in the Army, as opposed to, “Oh, well. The founder of the Army also founded the police force. And he knew the finance minister and had great respect for the finance minister.” And then you have people not respecting each other.", "I'm making this up because I don't know the details of China but if you think about Deng Xiaoping, he's on the long march, he's one of the younger members. He must know everybody. And I know he's in and out of prison. So he knows the people who are in and he knows the people who are out. Then when you get to Xi Jinping, they're a much more stovepipe group of people. They don't have the institutions.", "Actually, in China, they do have institutions for party control over the government. So that's how communist governments have maintained very good control over their militaries. In fact, if you look at Communists, they're really good at civilian control over militaries. In fact, that was Trotsky's contribution back in the  Russian Revolution of how you take a bunch of white officers and veterans of World War One and keep track of them. It has to do with political and military commissars. The military commissar is the officer who's actually a professional. The political commissar is the one who's got a connection with the secret police, who if the military commissar doesn't do as told, they'll come in and kill him and maybe also his whole family for good luck. And it's very effective in the commissar system.", "Dwarkesh Patel 1:49:40", "In the U.S., though, in the Naval War College, you have these systems of the officers from West Point or whatever, I don't know what the actual progression looks like. Are they seeing the civilian from their education to their promotion? Are they in the military the entire time, or do they also have this wide spread of experience?", "Sarah Paine 1:50:02", "Military officers have a very extensive education. It's often a succession of MA degrees. Some of them are very technical things. Like if you're in nuclear subs, you better know how to run the nuclear plant and engineering things. And then they come to places like the Naval War College to learn about strategy and other things.", "In terms of civil control over the military, if you go back to the American Revolution, the Continental Army couldn't even get funding. And it grows very gradually over time. And then you have MacArthur, who is just ignoring Truman and is making all kinds of threats. Truman thinks he's got a way of settling out the Korean War, and MacArthur says things that overturn that. So eventually you have MacArthur getting fired, which is telling military officers you can't do that.", "And then when you get some military officers shooting off their mouths under Barack Obama, they get fired instantly. We have full-up civilian control. With MacArthur, he was trying to run policy and got himself fired, but he was tremendously popular. And it was the joint chiefs of staffs who actually fired him. They agreed with Truman, they don't want MacArthur having his finger on the atomic button. It scared them to death because they thought he would press it.", "Communists & Axis", "Dwarkesh Patel 1:51:26", "Speaking of the political and military commissars and their system, why have the communists been so good at propaganda historically? You talk in Wars of Asia about how, despite the imperial things they did and the ways in which they sabotaged things, they had much better PR than the Americans ended up having.", "Sarah Paine 1:51:49", "If you think about how communists started, if you take 1917, the Bolsheviks, they're really weak. And you think about people who are weak, what can you do? Words are key. So you're using words to cultivate loyalty so you can get cadres to come your way. And you're going to use words, since you don't have the ability to threaten people militarily, you're going to use words to try to undermine them. And we've seen this happen the world over, I think wasn't the story of Al Qaeda was pretty good at words and doing their recruiting, etc.", "It takes the powers that are the target of this quite a while to realize the effects of this. You think the Bolsheviks are crazy people and ignore them but then gradually you see the cumulative effects and they are a threat and then you need to get going with your own information warfare.", "And I'll give you an example, the United States had quite a robust information warfare by the later stages of the Cold War. It involved Voice of America and BBC and it was basically, just tell people the truth about the relative standards of living. The cumulative effects will be to destroy the allegiance of people in their own governments, which is what happened.", "At the end of the Cold War in 1991, we ceased funding that because we thought it's over. And then I remember I was on sabbatical in California at the Hoover Institution, and one of the people there was a great expert on Ukraine, and this one time when the RTV was on, the Russian propaganda station, he said RTV is really dangerous. I said, “Oh, it's ludicrous. They're just telling nut stories.” He was right. I was wrong. He was absolutely right that those crazy stories started getting a life of their own.", "And then if you look at Biden, when this war was just about to begin in Ukraine, he made the decision to release a lot of the intelligence about, “Hey, they're about to invade. Here's where they're coming.” And he completely buried Putin in the information war. So it took us a while to wake up. Now we're back.", "The US ambassador in Japan has some really lively tweets about the Chinese, and they’re hilarious, you need to Google them if you haven't read them. We're back. And actually the United States is really good at this department. Hollywood, the movies, we have so much talent in this department. A lot of it's just based on telling the truth. But lies, as we've noticed, take a long, long time. It's very easy to tell a lie. It takes a long time to get all the facts to prove it is a lie.", "Dwarkesh Patel 1:54:45", "Why was the Axis so much worse at collaborating than the Allies? Especially given the fact that it seemed like the Axis should have been in much greater collaboration. They were all these nationalist militaristic movements, whereas the Allies, you have communists and democracies.", "In your book, you talk about when Japan's fighting Russia, Germany has a non-aggression pact with Russia. When Germany does Operation Barbarossa on Russia, Japan has a non-aggression with Russia. So if they had a two-front war what could have happened? When Pearl Harbor happens, Germany isn't warned, but then gets dragged into a war against America. Why didn't the Axis better coordinate?", "Sarah Paine 1:55:30", "I'm going to turn your question inside out. I'm thinking about the Alliance system. What did one side do versus the other side do? I'm thinking about the Alliance itself. Flip it around to the enemy which is that the Axis powers put their enemies on death ground. That is why the war began. That is an incredibly clarifying event. That got Britain, which really, really hated the communists to ally with them immediately. Forever, Britain thought that the dangerous thing were the communists, not the fascists in Germany. But then when the Germans worked their wonders, Britains got all “It's the communists who are the primary threat.” If you look at it that way, that's one thing.", "Another concept to think about are primary enemies versus secondary enemy. If I ask you the question for Germany to get what it wants in the world, who is its primary enemy? The answer would be Russia, because that's where it wants to do its Lebensraum and stuff. You go, well, Italy, who is its primary enemy to do its Roman Empire III or whatever number they're up to, and the primary enemy would be Britain who would get in the way of those plans. Then you go, who's the primary enemy of Japan? It's actually not us, it's China, because if they win, that's the prize to be taken.", "So then you flip it around and go, okay, primary enemy of Britain, Germany. Primary enemy of the United States, Germany. It was never Japan. We deliberately understood that Japan would never threaten us directly in the way that Germany ultimately would if it took over all of Europe. Then you ask Russia, primary enemy? Germany. You know, kidding, we got three aligned on the primary enemy. It's a very effective alliance. Get rid of Germany and it falls apart, which is actually predictable. When you lose the primary enemy, which is Hitler, he's gone. All of a sudden, we're back to Communist versus Capitalist. The Cold War is often running. People act like it's a surprise. No, it's not. Primary enemy gone.", "Dwarkesh Patel 1:57:46", "Back to the question about the Meiji generation. We see these sorts of industrializations across Southeast Asia. What happened many decades later in Korea and Taiwan and China, did Japan just do that exact same thing earlier? And how come in Korea and Taiwan, you have a sort of dictatorship or an authoritarian government that leads this effort and then it transitions to democracy? Whereas in China and Japan, that didn't happen. What explains the difference there? Is it just the power of the US?", "Sarah Paine 1:58:31", "Well, A, if Japan hadn't gotten into World War II, who knows what would have happened. If the West had not mismanaged the Great Depression, who knows what would have happened?", "Dwarkesh Patel 1:58:38", "If that didn't happen, do you think there's a chance Japan liberalizes in the 30s?", "Sarah Paine 1:58:42", "Perhaps. It's conceivable. But there's also another thing about human beings. We human beings require the absolutely scorching horrible lesson to suddenly realize, “You need to do these things. You're going to be better off.” The searing lesson was World War I and that World War II generation set up institutions that have held the peace in the industrialized world. Not the third world where all the proxy wars were fought, but in the industrialized world till very recently.", "On the authoritarian regimes, communist systems that insist upon a monopoly of power of the communist party are a separate problem. The places you're talking about in Asia, they invested extensively in education, extensively in infrastructure, extensively in industry of all types and allowed all kinds of private ownership.", "You can have a lot of government planning and a lot of government ownership, and the economy works perfectly well. You can look at different European countries with different percentages. When you go to 100% government control, you kill your economy. Korea, Japan, etc. didn't do that. And so they get educated people who then for 20 years really put themselves on the line, putting the pressure on their own governments to democratize.", "Dwarkesh Patel 2:00:11", "Why was the strategy of the Soviet Union in World War II so much better in Asia than it was in Europe? In Asia they're playing off these different parties in China against each other, for example China against Japan. In Europe, Stalin doesn't even see Barbarossa, or doesn't prepare for it adequately, why were they so much more effective in Asia?", "Sarah Paine 2:00:36", "I don't know. I would imagine cooperative adversaries. China had been a failed state for such a long time. They're trying to glue Humpty Dumpty back together again. So what works in Asia versus Europe where you have developed countries with a whole cadre of experts, which is not the case in China? There are a lot more people in the West who are re-assessing and they have robust institutions. It goes back to institutions. Whereas China is trying to just build these the first time around. Difficult in China.", "Dwarkesh Patel 2:01:14", "Why doesn't China think like a continental power? They have a vast coastline where a lot of their wealth is around that coastline. As far back as the 15th century, you have these huge Navy’s. Wasn't it Zheng He that had a bigger navy and far bigger ships than Columbus.", "Sarah Paine 2:01:32", "Yeah, they had a big navy. Different times.", "Dwarkesh Patel 2:01:34", "So why didn’t they think like a continental power?", "Sarah Paine 2:01:37", "Having a continental location is not a choice, it's a fact of geography. If you look at China, it has a huge land border. Sure, it’s got a huge coastline as well but historically, where have China's national security threats come from? From the North, the Northwest. If you look where the passes are of people coming on in or down straight through Manchuria, etc.", "China, in order to maintain its empire and just dominate China itself along with keeping these other people out, has had to have a large standing army. When it has built a large navy like Zheng He, is when it's got extra pocket change. If you have extra money then you can go do this. But if that changes and you have trouble with people on your borders, you've got to spend your money that way.", "It's very difficult to have a world-class navy and a world-class army. If you think about Britain, it maintained the big navy and always had a tiny army until they ramped it up in World War One, which was the beginning of the end for them, as being the dominant power that they had been.", "Dwarkesh Patel 2:02:51", "What level of competence should we assign our estimates of how well the PLA would function in a war, whereas at least the United States military has had these practice rounds in Iraq and Afghanistan? We don't even know how the modern PLA would actually function in a war. And obviously, as you were mentioning earlier, in authoritarian systems, there's this lack of information and feedback that could lead to all kinds of catastrophes where people are not prepared. What should we think of the PLA's competence?", "Sarah Paine 2:03:25", "I don't know. But I think the people who are worried about that are the Vietnamese and the Indians, the people who are likely to meet them. Back in ‘79, when the Chinese tried to work their magic in Vietnam, they had massive casualties. The Vietnamese killed more Chinese in a matter of weeks than all US losses in Afghanistan and in fact, all US losses in Vietnam over however many years we were there.", "Do you think the Chinese would be good at expeditionary warfare and sending these people anywhere? Think about where would you be fighting them? It's great that they have got a big army. So where are they going to deploy them?", "Dwarkesh Patel 2:04:04", "Why have the wars in China been so deadly? You have millions of casualties sometimes.", "Sarah Paine 2:04:12", "It's continental warfare. That's how it goes.", "Dwarkesh Patel 2:04:14", "It's the same reason Russia has had so many Russians dying.", "Sarah Paine 2:04:16", "I believe if you measure the number of locals who died in Iraq and Afghanistan and then they've had the civil war on top of it, it's thousands and thousands and thousands and thousands of people. We go, “Oh, it wasn't too bad for Americans.” For those who live there, it was quite bad.", "Dwarkesh Patel 2:04:32", "The Taiping Rebellion, I guess another…", "Sarah Paine 2:04:35", "Tens of millions. No one knows how many people died in that thing.", "I think one of the takeaways for you. if you look at Chinese history over the course of all these different rebellions that go back hundreds of years, all these different wars they fought and you look and go, “Wow, millions of Chinese killing each other.” A mark of good strategy is not killing your own. So if the Chinese have been doing this for a long time, don't expect them to be great strategists, which isn't a happy thing, actually. It might mean they do crazy, stupid things that are so detrimental to themselves.", "Dwarkesh Patel 2:05:10", "Some final questions about studying history in general.", "So I studied computer science and I talk to a lot of people in these technical fields. Being around them, I think I have a sense of what it means to understand a technical field well. What does it mean to understand history or strategy well?", "Sarah Paine 2:05:27", "In history, you have to do tremendous amounts of reading. And it's over a career. Also, publishing is really essential, not only do you give people the best ideas that you've encountered but it also forces you to really come to terms with what you do think and why. I feel after every book, whatever I was, I'm one-click better. You've probably got good eyesight, unlike me. If you go in for an eye appointment, the guy will go click, click, and go, is this one better or is this better? I feel like after a book, it's one better. Do I see 20-20 now?", "And after a year abroad, like I'm in England for a year, where I just get to think, read extensively, try to be open-minded, try to look for the unknown unknowns. What is it I'm completely missing? What is it that I'm totally wrong about? Being open to reassessing, “Ooh. I got that wrong.”", "So it has to do with reading extensively. If you're going to be studying other societies, you better read the language. I'm not particularly good at any of these languages, but I do try hard. And it's taken years for me to bungle my way through them as I do. But that's really essential.", "And too much of U.S. graduate education, particularly in political science, where they ask you very important questions for international relations and politics, they don't require them to have high-end linguistic skills. They should. And part of it is, if you learn a different language, you do kind of a mind-meld. If you learn Japanese, you have to learn all of these formality things and what's called Keigo. It's honorific Japanese. My Japanese is terrible, but learning as much as I've learned makes you realize part of how this hierarchical society works and you get a sense of how they think about things and they categorize stuff. So we're back at the opticians for this. If you do the language, you get a few more clicks.", "Dwarkesh Patel 2:07:40", "And having to live there when doing this archival research.", "Sarah Paine 2:07:42", "Yeah. And then just asking people questions when I live there of why this? Why that? And then what's funny is you come back home and it gives you a new sense of what makes one's own country special. Because things that you just assume everyone does, you go, “Well, everyone doesn't quite do this.”", "Dwarkesh Patel 2:08:00", "Have you come across something super shocking in your archival research? I don't know if there's a story of something super shocking. One of the things I'm remembering from your book, as you mentioned, that you had a speculation that both the nationalists and the communists help the Russians cover up the rape of Manchuria because they were both given hush payments.", "Sarah Paine 2:08:21", "Well, no, they cut a deal. It didn't work. And that's my interpretation. And if you have further archival evidence that I'm happy to reassess.", "Continental vs maritime powers", "Dwarkesh Patel 2:08:34", "But anything else? Maybe not exactly like that, but something you've dug up that nobody noticed.", "Sarah Paine 2:08:41", "Well, I'm not a gotcha person, but working at the Naval War College. I started out my career studying Russia and China. I did not realize it but I'm learning about two of the greatest continental empires in human history. And it's fascinating learning about that. Then I get a job. My husband and I go to the Naval War College. And suddenly I'm teaching about British and US maritime strategy. What do I know about that? That's why my husband got me to do all these co-edited books about naval topics just to learn more about it. And that's where I got the idea about maritime and continental origin.", "I gave the Marshall lecture that was published in General Military History. In it, I summarize my views on what the difference between a continental and maritime power is. And that's one of my big career takeaways. It's a fundamentally different way of looking at the world.", "Putin honestly looks at the world like, “If I control territory, that's what makes me secure.” Maritime powers, start with Britain, which is, “Hey, mine's secure if I can maximize money from commerce.” Because then I can buy a Navy and buy allies with armies and stuff. And then eventually this order of organizing trade by international law, and the Dutch Republic is instrumental in this with Hugo Grotius, who is the founding father of international law. They want to run transactions by law, et cetera. This is an international order that's win-win. You join it, you get security. You have input on how it evolves because it's a work in progress.", "Whereas this continental thing is negative sum. And you can see it in Ukraine. Putin wants more territory. Okay, he took Eastern Ukraine and he took Crimea in 2014. But it's negative sum because he destroys whatever businesses had been being run in Donbass and he absolutely kills most of the tourist industry. And then you can look to today, it's so negative sum in Ukraine. He is destroying wealth at a really rapid clip. It's really a stupid way to run things.", "If the PRC tries to take Taiwan, it's a continental view. Somehow they think more territory is going to improve their security. No, they'll level it and they'll hurt themselves. Whereas if they just ignore the Taiwan thing and say, “Oh, they're so annoying, let them run their own place who wants them anyway.” and then trade with them, they'll both make money. That's my biggest career takeaway.", "Political scientists love to talk about America, the hegemony. No other country in the world wants an American hegemony. There may be some people in the United States who think that looks great, but no one's going to buy into a world order in which the United States is the hegemony who pushes everyone else around. I get it, we're big and we're influential, but other people are influential too. This maritime order where, yeah we're an important part of it, but we have many other people in it. It's a win-win.", "Biden is doing all of these meetings with Europeans managing what's going to go on in Ukraine, et cetera. And it's based on agreement of all these different countries chipping in big and small. Who's prosperous and who's not, may I ask?", "Dwarkesh Patel 2:12:36", "The Maritimes.", "Sarah Paine 2:12:38", "Yeah, they're the ones who have massively increased their standards of living since the Cold War. It was really the third world. Except now we got Wagner or whatever's left of it and also China's now got these private military things running roughshod over Africa. All that's going to do is tank African growth rates, which for a while were going double-digit. So that's one of my big career takeaways. And I tried to put it into the one lecture that I was asked to do and the one article, which is like a 20-page read.", "Dwarkesh Patel 2:13:06", "That's a super interesting way to think about things. A couple years back people were looking at the growth rates in China and they were thinking that it's going to have the biggest economy and be the leader of the global order. Does your analysis imply that because it's not part of that maritime system, even if its economic growth picks up, it will still not be the leader of the world in the same way?", "Sarah Paine 2:13:33", "Doing what they're doing is all going to depress growth. They could join the maritime order any day. That is what at the end of the Cold War, everyone wanted them to do. Everyone wanted Putin to join it. If you think about all the money Putin has spent on his crazy military stuff, imagine what would have happened if he'd spent all that on the Russian road system, because their road system is deplorable. And imagine if he had devoted his attention to trying to have a better legal system so that small businesses could get bigger without having someone come to them for protection money. Think what Russia could have been now. It would have been dramatically better.", "Dwarkesh Patel 2:14:12", "They have so much energy, so many raw resources.", "Sarah Paine 2:14:14", "Oh, they have so many talented people, but the Russians don't see it that way. They see it in this continental view, and they're the ones who have to come to terms with what they think.", "This is why containment is brilliant. In the meantime, those of us who joined the maritime order need to work with each other and then we contain the problem by saying, “You cannot join us on equal footing till you behave yourselves.” You get a timeout from the global order, but we would welcome you back in. The problem with Putin is he's done so much damage to Ukraine, there are going to be reparations involved, and the Russians won't want to pay those.", "Dwarkesh Patel 2:14:56", "What are the mistakes and biases that come about from self-studying history, as opposed to formally studying it? In what ways is your understanding of strategy or history most likely to be incomplete as a result?", "Sarah Paine 2:15:09", "Let's do history. I think about my education at Columbia. I had the most absentee landlord professors. They just didn't waste their time on me. I just did a tremendous amount of reading, and while I was there, I did the equivalent of two PhDs of coursework as a graduate student. Because going to graduate school is such an expensive event. It costs time and money and everything else. So I just took massive numbers of courses to read the reading list of what they had given me and having some guided readings was tremendously helpful.", "On the strategy part, this is where the Naval War College has been essential to my publications. In the strategy and policy department, what we do and what civilian academia doesn't do, and it's tragic because they're better positioned, is a big team-taught course, the strategy course. All the students at the War College have to take the main strategy course, the main joint military operations course, the main national security affairs course. It's a one-year MA. In that one trimester in which we have them, our course is four-fifths of their coursework. And then there's a junior and a senior course, so we do teach two trimesters out of three.", "Alright, so because it's team-taught and the lectures are given by different faculty members, so I attend everybody else's lectures, or I did originally, and I attend all the new ones. You learn so much from your colleagues and then they learn from each other. You were asking me about Bismarck. Why would I know about Bismarck? Because I had colleagues who actually knew something about him, which I don't, and I listened intently and I did the readings. And then from teaching strategy, I learned all these concepts and I've given you some of them, and they're tremendously useful for studying wars.", "I never would have learned about maritime powers without being at the Naval War College. It is the only institution of higher education in the United States that focuses on the strategic prerequisites for and possibilities of being a maritime power. It is essential to know this to practice U.S. foreign policy. Why? Because unlike Ukraine, if you have a continental position, if someone threatens you and invades on a given day, you have a choice on that given day, the day they chose, either you're going to capitulate or you're going to fight. So they determine when the war is going to begin.", "In our insulated position, unless we start doing terrible things to the Mexicans and the Canadians. Mexico is our biggest trade partner, and Canada must be not far behind. When there are wars that are important to our national security, we decide, like in World War I and World War II, do we get in? If yes, when do we get in? In Afghanistan, and in Iraq, do we get in? Do we not get in? We could have avoided Iraq altogether if we wanted to, and I'm not a Middle Eastern expert, so you'd have to talk to those people about pros and cons. Afghanistan, since we had been attacked, the chances were we were going to be in on that one because of a direct attack on us.", "It is incredibly important to understand this maritime position. That's why I've co-edited all these books with my husband, the ones I mentioned to you about maritime things, which took us years to do. But if you want a short course, you get half a dozen of these books and it’s actually a fast way of learning about what the maritime instrument can and can't do for you. The strategy course was absolutely essential from what I know about strategy. I have done my best in books to put what I have learned there. The one I'm working on in the Cold War is going to be organized around these strategic concepts of — How did each side try to manipulate the other? How did the medium powers try to wing in on the game? And what are the strategies that they're using? The paradigms, etc. So I'm going to try and pour as much of this in there. Because this is what education is. It's passing the baton from one generation to another saying this is what I've learned over the course of my career.", "Dwarkesh Patel 2:19:27", "I'm really excited to read that.", "Sarah Paine 2:19:28", "It'll take me years to finish it. It's not happening any time soon.", "Dwarkesh Patel 2:19:32", "Final question. My audience probably is overwhelmingly representative of technology and those kinds of worlds. What is it that you especially want them to understand about history and strategy?", "Sarah Paine 2:19:45", "What I want them on history and strategy is it's going to be a well informed person and read broadly. But I think for them, in technology, they need to think broadly of these technologies about which they have deep expertise. Do these technologies privilege dictatorships or democracies? I do not know the answer. When you're creating architectures for things like the Internet, etc. Think about these things. Think about consequences. I suspect and I don't know this that when China does its Belt Road initiative, I would have presumed it's also selling a nice little I.T. package to keep the dictator in power that if you want to keep track of your population here, this is the I.T. thing you need to do to firewall this, that and the other thing.", "The west is the part of the world that has developed most of these technologies and continues to be at the forefront of it. Think very deeply about whether you're going to ultimately privilege dictatorships over democracies because the reason tech has been able to be so vibrant is because you live within the castle walls of this maritime order where people follow the rules. You're protected on the outside. You have military things, etc. If those walls are breached by dictatorship or by really stupid grand strategy.. Our countries have come perilously close in the last few years. Perilously close. If Trump had been president at the time Ukraine was invaded, Ukraine would be no more. We would have Russian armies right up to the Polish border now. These things are terribly consequential.", "And then another piece is — so you're well educated and you're in the growing part of the United States where you talk to each other at all of these meetings. Think about organizing things. For instance, we have tremendous problems with refugees or illegal immigrants coming over our borders because we have basically failing states to the South of us. Is there anything that foreign investment or anything can do over a 20 or 30 year period to help alleviate this because it will improve our own national security. If instead of refugees pouring over our borders, you have people making good t-shirts or eventually putting phones together. But that's the sort of thing that people who are in your world, who you meet each other at these business meetings and talk to each other and think about, “Okay, I got to hear now.” Maybe my charity work is to me thinking about these other things. So in my line of work, what do I do for charity? I talk to anyone for free.", "Dwarkesh Patel 2:22:45", "And I really appreciate that.", "On the point about tech and whether it is enabling democracies or dictatorships. Isn't it very difficult to tell in advance? I'm sure that Gutenberg didn't think he was helping the Protestant Reformation or that the guy who made the radio didn't realize what he was doing for Hitler. Even with AI the thing that it was initially thought to help with, “It helps us collect information and congregate it.” But we're seeing that China has been behind on these language models because it's really hard to align them to not say anything bad about the PRC, or the CCP rather. Isn't it hard to tell in advance?", "Sarah Paine 2:23:23", "It is hard. But the people you're talking about who are your prime audience are the bright people who might have some insights into it. Well, what do you want to do in your life? I would think one of the pieces would be contributing in some way that makes things a little better. However you're going to define better.", "Dwarkesh Patel 2:23:44", "Awesome. I think that's an excellent place to close this episode. Thank you so much for your time. And really, you’ve written the best books on military history. I highly recommend them to better understand not only those periods of history, but broader strategies and lessons and insights about our own time. Anyway, this was a huge pleasure. Thank you so much for coming on.", "Sarah Paine 2:24:02", "Thank you for having me and asking all the fun questions. It's been my pleasure." ]
[ "https://usnwc.edu/Faculty-and-Departments/Directory/Sarah-CM-Paine", "https://www.dwarkeshpatel.com/p/richard-rhodes", "https://www.amazon.com/Making-Atomic-Bomb-Richard-Rhodes-ebook/dp/B008TRU7SQ", "https://www.amazon.com/War-Translation-introduction-Samuel-Griffith/dp/B00198C0BE", "https://www.wikiwand.com/en/Hunger_Plan", "https://www.amazon.com/Wars-Asia-1911-1949-S-Paine/dp/1107697476", "https://www.amazon.com/Japanese-Empire-Strategy-Restoration-Pacific/dp/1107676169", "https://usnwc.edu/Faculty-and-Departments/Directory/Bruce-A-Elleman", "https://www.routledge.com/Naval-Power-and-Expeditionary-Wars-Peripheral-Campaigns-and-New-Theatres/Elleman-Paine/p/book/9780415724289", "https://www.amazon.com/Red-Star-Over-China-Communism/dp/1611855128", "https://www.goodreads.com/book/show/52042.Ten_Days_that_Shook_the_World", "https://www.amazon.com/How-Asia-Works-Success-Failure/dp/166525534X", "https://www.bbc.com/news/world-europe-65901723", "https://www.youtube.com/watch?v=Nk900n1KzJo", "https://japantoday.com/category/politics/U.S.-envoy-for-Japan-Rahm-Emanuel-takes-spotlight-with-snarky-China-tweets", "https://www.wikiwand.com/en/Lebensraum", "https://www.wikiwand.com/en/Zheng_He", "https://www.youtube.com/watch?v=hp7QRZ4xjhk", "https://youtu.be/hp7QRZ4xjhk" ]
https://www.dwarkesh.com/p/sarah-paine-china
Sarah Paine Episode 3: How Mao Conquered China
[ "Sarah Paine 00:00:00", "What I'm about to say are my ideas, they don't necessarily represent those of the US government, the US Navy Department, the US Department of Defense, or the Naval War College, you got that clear? Complain to me if you got problems.", "All right, I'm going to talk about Mao. He’s an incredibly consequential figure. For the 20th century, he's one of the most consequential political or military figures, and he's also one of the most important figures in Chinese history of any century. And he's also a terribly significant military-political theorist.", "And this is not an endorsement of Mao. It is rather just an accurate description of his global and enduring importance.", "00:00:43", "Think about China historically: it's represented- I don’t know- a third of the world's population, a third of the world's trade. That's a big slice of humanity. Moreover, Mao's theories have been used by many enemies of the United States to take over failing states from within in order to assert dictatorial rule. He is also probably the most brilliant and most famous psychopath in human history, and that is saying a lot.", "So here we go. This presentation is based on the first eight volumes of Stuart Schram 's Collected Works of Mao. What Schram did is he compared Mao's complete works, as published in the 1950s, to whatever he could find as the earliest version of whatever it was. Then he reinserted whatever had been cut in italics.", "So tonight, watch the italics. Mao didn't put all of his best ideas in one place; he scattered them all over the place.", "00:01:45", "And so what I've done for you all is prepared a jigsaw puzzle of all of these different ideas. In order to make it comprehensible to you, of all these random little tidbits, you have to have like a coat rack to hang all the hangers, and that's called a simple framework, and I'll get there, but in your own lives, when you've got all kinds of complicated things to transmit to others, you can look at what I'm doing tonight, and you can do it for other things as well. So here we go with good old Mao.", "And oh, by the way, a lot of those 8,000, 7,000 pages weren't that interesting, so in a way, you owe me.", "00:02:26", "All right, these are major military theorists, just to run you through them. Clausewitz is the West's major military theorist of bilateral conventional land warfare. Sun Tzu is Han civilization's great theorist of how you maintain power in a continental empire, multilateral world using coercion and deception.", "The two fellows on the right are maritime theorists. In a way, they're writing the missing chapters of Clausewitz that doesn't talk about naval warfare at all.", "The top one is Alfred Thayer Mahan, the Naval War College's finest. And what he's writing about are the prerequisites for and strategic possibilities for maritime power.", "00:03:16", "And the Briton underneath him there, Sir Julian Corbett , is writing about how a maritime power, i.e. Britain, can defeat a continental power, i.e. Germany or France. But all of them are writing about warfare between states, and Mao is a different event.", "Mao has to do with triangle building. The term \"triangle building\" comes from Clausewitz. Clausewitz has this nice little passage here where he's talking about these abstractions: passion, creativity, and rationality as being mainly but not exclusively associated respectively with the people, military, and government.", "A state has full-up military and civil institutions that have some connection to their people, but an insurgent is going to be building these things from the ground up. So that's what Mao is doing, is he's actually taking over the host from within by building a shadow government and eventually taking power.", "Many of the decolonizing world after World War II were really sick of the West. They'd been colonized, didn't want to hear anything about them. But it seemed as if the Soviets or the Chinese perhaps offered a better model, the Communists, and many thought that the Chinese, Mao, offered the better model.", "00:04:30", "Why? Because the decolonizing parts of the world were also agricultural and underdeveloped, unlike Russia, which had quite a military—excuse me, an industrial base. They thought Mao was the more relevant guy. All right, here's Mao at his iconic moment. He's proclaiming the victory of the Communists in the Chinese Civil War.", "China had been a broken state basically since 1911, when the last dynasty had fallen and the country had broken out into a multilateral civil war that he eventually wins. I’m going to be talking tonight about Mao’s theories from the 1920s and 30s when he had the time to write, but there’s a lot more to Mao than just that. He had quite a track record.", "00:05:15", "Once he won the civil war, he imposed a social revolution. What's that? It's more than a political revolution. You're not just replacing the government; you're going to wipe out entire social classes. And I don't mean then, \"Hey, here's your one-way ticket out of here\" kind of way. No, no, a social revolution is, \"Here's a mass grave, dig it, and then you're in it\" kind of way.", "So, if you look at these statistics of Chinese deaths in many of their wars—this is from much of the Maoist period, I think it's '45 to '75—what you'll notice, the figures in white, I believe, are civilian deaths, not military deaths. And it gets really quite ugly. There are more Chinese civilian deaths here than all deaths in World War II.", "00:06:03", "And then for those of you who think the Chinese are all great long-term strategists, you need to ponder these numbers. How is it possible to kill so many of your own? That's generally not a mark of good strategy.", "Moreover, most of them died during the Great Famine, which was the only nationwide famine in Chinese history. Why? Because it's not caused by the weather. It's caused by policies set in Beijing. During the Great Leap Forward, Mao put all the peasants on communes.", "That meant the party was in control of the food supply: i.e, who lives and who dies. You don't get a meal, you're very dead. In addition, he decentralized industry, and you can see these backyard furnaces pictured here. As a result of this, production collapses—agricultural and industrial. But Mao keeps exporting food. Why?", "00:06:55", "Because that's his pocket change. That is a major source of government income if he wants to be able to do anything, so they keep exporting food. As a result, 40 million Chinese starved to death, primarily in rural areas, and disproportionately peasant girls, the least valued members of society.", "The statistic of 40 million deaths comes from this book by Yang Jisheng, who has written the definitive work. The English translation is but one volume; the Chinese original is three. Yang worked as a journalist for many years, which gave him access to provincial archives, where he surreptitiously investigated the statistics of people who were starving to death, including his father, for whom he wrote this book to serve as an eternal tombstone.", "00:07:42", "So, on the one hand, Mao is the military genius who puts Humpty Dumpty back together again when nobody else could, and they tried for the previous 40 years. On the other hand, he is the psychopath incapable of running an economy in peacetime.", "Yet many Chinese revere him as a national hero. Why? Because in their minds, certainly of the Han, the preponderant group in China, one of the key things that their country should and must be is a great power. And Mao, by reunifying China under the banner of communism and then fighting the coalition of all the major capitalist powers to a stalemate in the Korean War, or in their mind a victory, that constitutes ending what they consider the era of humiliations that started in the mid-19th century and ended with the Communist Revolution.", "So he's a hero at home. All right, to understand Mao's theories, I need to put it in the context of the wars that he fought.", "00:08:44", "So in 1911, Qing Dynasty collapses. The country shatters into a multilateral warfare among warlords, these provincial leaders. On this map, you can see the different colors and shadings; those are different warlord areas.", "But the Nationalist Party and the Communist Party form a united front in 1923 in order to eliminate these warlords. And so Chiang Kai-shek, who is the head of the Nationalist Party, a generalissimo, not just a general, has a nice artwork on his portrait.", "He is the man who's leading the northern expedition to fight off all the warlords. Except he stops midway near his power base in Shanghai, and he turns on the communists, massacres them in droves. This is the White Terror .", "Sarah Paine 00:09:35", "Why? Because he thinks that while he's away fighting, they're trying to take over his government while he's away. He's correct. So he keeps on moving there.", "There's a nominal unification of China under Nationalist rule when this takes place. In addition, once he's done with that, then he wants to eliminate the Communists for good. And so he runs a series of five encirclement campaigns around their base areas that are scattered in South China.", "The primary base area- a base area is also called a Soviet, is the Jiangxi Soviet. And on the fifth encirclement campaign, Chiang Kai Shek is finally successful. He sends them off on the long march up to, way up north in desolate Yenan.", "Long march is a real misnomer. It's the Long Rout. The Communists lose 95% of their forces.", "00:10:30", "I believe in English “decimate” means to lose 10%. Losing 95%, I think you need a whole new verb for what's happened to you. But Chiang Kai Shek doesn't wipe them out, because he's suffering from divided attentions.", "When the West did the original- well it was both the United States and Europe, the United States does its original America first thing with a Hawley-Smoot tariff, putting tariffs up to historic highs. And then everybody, of course, retaliates. And now everybody's got high tariffs.", "Well, here's trade dependent Japan, that's always cooperated with everybody and suddenly they're toast. And so their solution is autarky and they need an empire large enough to be autarchic. And so then that's when they invade Manchuria in 1931.", "So Chiang Kai Shek has all of a sudden lost this area from China that's greater than Germany and France combined. It's a mess. And so he has to, he's trying to balance what to do about Japanese versus Communists.", "00:11:26", "The Japanese don't quit with Manchuria. They stabilize the place, they make massive infrastructure developments, they transform it into the most developed part of Asia outside of the home islands, but they keep on going. And it gets so bad that the Communists and the Nationalists form a second united front because they're facing this lethal threat called Japan.", "And they organized that in December 1936 in what's known as the Xi'an Incident. And the Japanese react viscerally because they look at it, and the Nationalists have gone over to the dark side because they've joined up with the Communists. And this is when the Japanese escalate in 1937, go down the Chinese coast up the Yangtze River.", "Uh, and well, but then they wind up stalemating. Once they get beyond the Chinese railway system, which isn't that great in this period, the Japanese can't stabilize the place and the Soviets start adding more aid, and we add more aid. It's a mess.", "So the Japanese decide they're going to cut Western aid to the Chinese. And that's where Pearl Harbor comes in. That's what the attack of Pearl Harbor is all about, is telling Americans to stay out of Asia, which of course, you know, we did just the opposite.", "00:12:36", "And then the Nazis interpret their alliance with the Japanese broadly to declare war on the United States. So when that happens, you have a regional war that had already been going on over Poland in Europe and this other war that had been going on since `31 in Asia, they unify into a global World War. Mao understood that he was dealing with three layers of warfare, nested wars, that he was fighting a civil war against the Nationalists within a regional war against Japan.", "And then after Pearl Harbor, there's going to be a global war that will eventually morph into a global Cold War. Most of his writings are written before Pearl Harbor. So he's going to focus on the first two layers of what's going on here.", "So after the World War II is over, Mao goes after the Nationalists full bore, and the Japanese have already very much weakened the Nationalists and Mao wins the civil war. Okay, these are the wars.", "00:13:40", "Now, I promised you a simple framework. Here's the simple framework. Simple framework should have three to five things because that's all about any of us could really handle on short notice. And so I got four here.", "And I'm going to use Clausewitz's definition of great leadership to analyze Mao. According to Clausewitz, in a general, two qualities are indispensable. First is an intellect that even in the darkest hour – and Mao had many of those – retains some glimmering of inner light, which leads to truth.", "And second, the courage to follow that faint light wherever it may lead. The first of these qualities is described by the French term, coup d'oeil. Coup is a glance, d'oeil is an eye, taking in a situation with a glance of an eye. And the second is determination.", "00:14:23", "Well, Mao had these things in numerous areas. I'm going to first discuss Mao the propagandist, that's how he starts out. And then I've got what I say here is Mao the social scientist.", "But what he was really good is data analysis, data collection and analysis. He truly understood the countryside because he collected all sorts of data about it and analyzed it. And then I will go on to Mao, the operational military leader, winning and fighting battles.", "And then at the end, I'll talk about Mao the grand strategist, integrating all elements of national power. So that's my game plan. That's the simple framework. And away we go.", "00:15:00", "All right. Mao began his public service career as a propagandist. And if you look at his early biography, he's born in 1893 to a prosperous but not particularly well-educated father who tilled his own land.", "Mao hated his father and he hated farming, so he left as soon as possible. After the 1911 revolution for a little while he worked as a soldier, didn't like that.", "He joined and then dropped out of a series of vocational schools. He tried being a, what was it? A merchant, a lawyer, I'm missing something else, a soap maker. Imagine Mao, three stages of personal hygiene, whatever.", "00:15:39", "It was not to be. But he eventually gets an Ed degree, so he can go off and be a primary school principal. Okay, imagine setting your child off to the psychopath doing show and tell.", "And then he joins the Communist Party. And it's during the First United Front so he also joins the Nationalist Party. And he has very important positions. If you look at, he's in the National Party, he's at their central headquarters, and he's the minute taker. So he's the fly on the wall listening to everything.", "00:16:10", "And then he's a stand in for the head of the propaganda department, which is probably where he learned a great deal about the importance of propaganda. And here's what he says early on, “the Communist Party can overthrow the enemy only by holding propaganda pamphlets in one hand and bullets in the other”.", "And if you look at the original organization chart of the shadow Communist government, you'll see there are only what, six departments there? One of them is a propaganda department. If you have no power, words are, is, your initial way into gaining power. I'm now going to use a framework from my wonderful colleagues, Mark Genest and Andrea Dew, this is theirs, about analyzing strategic communication in terms of messenger, message and medium. And I'll go through each three, all three.", "00:16:59", "What you see here is a propaganda poster. It's a woodblock print. That's the medium. And it's a very easy way to reproduce pictures back in the day.", "The message is about a model laborer, this “always emulate Wu Man Yo”, lucky us. And to do all the nice things he does with whatever is going on there. So that's what that is.", "Now, messengers were the delivery system, the broadcasting system for the Communist Party. So you've got the Communist Party, but you've got to reach an audience. And that's what these messengers are doing.", "And so they go into local areas and they identify local grievances for attention by the Communist Party, which when it fixes them or fixes somebody, that will generate loyalties and allegiance. So these propaganda personnel would be identifying local bullies to come in and deal with them, organize mass rallies. During battles they're going to double as medics, after battles, they're going to propagandize POWs, between battles, they are helping on troop morale. But what they're really doing is reporting back to Communist Central exactly what's going on.", "00:18:10", "And civil and military messengers differ. For civil messengers, they would be activists, maybe in the local government, labor unions, peasant organizations, women organizations, any number of these things. And it's there, it's your broadcasting system to reach a population and mobilize it.", "Military messengers are a little different. Every single military unit had about a 20-member propaganda team. That's a lot of people. According to Mao, the propaganda work of the Red Army is therefore first-priority work of the Red Army. This is very different from soldiering in the West. This is not how it would work.", "Also, Mao had his international broadcasting system. These would be foreign journalists. While Mao was holding court up in Yenan, he invited many of these journalists up there. Edgar Snow was by far the most famous. Why? Because he was the first one in and the last one out. He had really long interviews with Mao, and Edgar Snow, this is when he was an old man, but when he was a young man he never asked, \"Why does this A-list political leader spending so much time with me?\" That never occurred to Edgar. But it's hours .", "00:19:24", "And he was a very fine writer, Edgar Snow. What Edgar Snow writes, Red Star Over China , you can probably go to Barnes and Noble and pick up a copy there. It's been in print ever since, and it's the original footnote in Chinese history because no one knew anything about Mao.", "And so then everybody starts citing Edgar Snow, and then we cite everybody and everybody and everybody. But actually, it only goes back to Edgar Snow. So, Mao got his word out.", "Mao thought, \"You want to keep the message simple. You want to make it epigrammatic so that people can understand it rapidly.\" In his day, this meant having matching slogans, the equivalent of newspaper headlines, to provide a lens for people to understand events, rather like tweets in our own day. So, when the White Terror occurred, when Chiang Kai-shek is turning on the Communists in the first United Front, the slogan was, \"Arm the peasants.\"", "00:20:16", "And then, when the Japanese invaded Manchuria in 1931, the new slogan is, \"Down with imperialism and the Nationalist Party too,\" because you want to smear your enemy in the civil war while you're at it there. But there are a whole series of these slogans. And here, Mao is one of the most popular poets in China, certainly, of the 20th century.", "He could write really simple couplets. If you look here, I think it's a total of eight characters so that someone who's semiliterate can make their way through this poem.", "00:20:46", "On the other hand, he wrote really complicated things because he needed to garner the support of intellectuals initially before he'd educated enough peasants and workers to take over. Intellectuals prize poetry, and they also prize what's called grass writing , which is that unintelligible Chinese stuff that’s under there. But if Mao set these poems to tunes that everybody knew, people could sing them on the Long March and elsewhere and learn them that way.", "So, he's an incredibly accomplished man. He also understood you have to manage the message, and the way he did that is through political mobilization. Part of that is you've got to tell people what the policy objective is, which for him was abolishing imperialism, feudalism, and the landlord class, and then presenting a strategy for how to get there.", "00:21:36", "And here are the media that he used: not only the written and spoken word but also the dramatic arts in order to get the message out. He also used an institutional medium of education. And here is Mao, the primary school teacher, in his element.", "Most of the people in his armies were illiterate, but Mao knew all about how to reach them. And if you can see in this slide, there are a lot of political commissars . What are they?", "Political and military commissars come in a pair. Military commissar is the military professional who actually knows about the fighting. The political commissar is the one with the direct line to the secret police who will cap the military commissar if there are any problems whatsoever.", "00:22:21", "So, Mao's got an elaborate network to get the message out, offering all kinds of social services to people, not only medical but also education for peasant children. And he also educated their parents. This is for the first time in Chinese history.", "He did it during the winter slack season. Now, the Nationalists had also tried to improve education, but once the Japanese invaded full bore, they had to drop it because the Nationalist conventional armies are the ones that are fighting off the Japanese conventional armies. The Communists are a guerilla movement, and they're operating behind enemy lines.", "So, as the Nationalists are dragooning people into their armies, the Communists are busy offering social services, and I'll get to land reform. And so for the peasants, before too long, it becomes a no-brainer whom they're going to support. Mao also emphasized professional military education because he needs to turn peasants to cadres , to guerillas, to conventional soldiers.", "And there's got to be an educational pipeline to do this. And if you look at this Northwest Counter-Japan Red Army University, of the first original departments, political work is one of them. This is not professional military education the way it's done in the West, it's a separate thing.", "00:23:48", "Okay, part one over: Mao the propagandist, I've covered that. Now I'm going to go about Mao the social scientist.", "And here, he says, \"The peasant problem is the central problem of the national revolution. If the peasants do not rise up and join and support the national revolution, the national revolution cannot succeed.\" And if you look at his, further along in his biography, while the first United Front was still operative, he's heading the Nationalist Party's Peasant Institute in Guangzhou and also their Central Commission on the Peasant Movement, learning a great deal about it.", "But once the White Terror hits, he needs to get out of Dodge fast, or they'll kill him. And that's where he flees to Jiangxi Province, to the Jiangxi Soviet, where he is going to become the political commissar of the 4th Army. He's also going to be in charge of land reform as he figures out how to calibrate that to make it work.", "00:24:46", "All right, for Mao, he's doing data-driven survey after data-driven survey, does a whole series of them between 1926 and 1933. And he's trying to figure who owns what, who works for whom, who tills where, and inventories down to the last pitchfork and last chicken, as he's trying to establish what is really going on on the countryside, and he does. What he concludes is that 6% of the rural population owns 80% of the land, and 80% of the population owns only 20% and his solution is going to be revolution.", "And he goes further into the statistics and he identifies 70% as poor peasants, 20% who are like his father—they till their own land, they're middle peasants—and then there are the exploitative 10% who don't get their hands dirty with anything. And what Mao is trying to figure out is how you can incentivize 80% of those people into actively taking part in the revolution. This is the key.", "00:25:53", "And what he wants to do is take the bottom of the social pyramid and mobilize it to crush the top of the pyramid. The way he's going to do this is by determining class status through a land investigation movement, which he says is a violent and ruthless thing. We're going to talk about class, approval of class status, confiscation of land, and redistribution of land in order to invert the social pyramid.", "He's got a real plan for doing it. He argues that land reform is just essential for peasant allegiance. This is how you're going to get it, to draw these hundreds of millions into supporting the communists. But you got to do it sequentially: you’ve got to propagandize first, and then you’re going to distribute land later. A little bit later. He had a very bureaucratic way of redistributing land, the approval of class status, he said, it’s a life-and-death decision for the person in question.", "00:27:02", "It starts out with a vote at the local level, and then it goes through many layers of party approval before being sent back to the local level to announce who's going to get the land and who's going to take a bullet. And then Mao leverages the enthusiasm of this movement for the people who are going to get the land, the other people, not so much. He's going to leverage this enthusiasm to get people to join the party and also to join the army.", "Mao is planning to collectivize all land. That's what the Communists are going to do. But he says, \"Look, the system of landlords and tenants cannot be completely destroyed yet.\"", "00:27:45", "Because he needs the peasants to join him, and the peasants desperately want land. So, Mao gives it to them, and he gets a great deal of support for doing this. But he also keeps the rich peasants around, too.", "This is a deleted portion in the collected works: “because rich peasant production is indispensable”. Until he wins the civil war and can then turn the guns on them. And he's also got a duplicitous program for the middle peasants. It's a big bait and switch.", "It looks like you're going to get -- see, you got the land? Well, now you do, and now you don't, because at the end, they're all going to lose their land.", "00:28:26", "In order to reform, to get the land, Mao is talking about a red terror to get it. And while he was still with the Nationalists, he wrote a report on the peasant movement in Hunan, where he's talking about taking all the land from the landlord class and shooting them. And that won't cut it with the Nationalist army because their officers are landlords. So, as part of his program, it's not just land reform and educating people, warm and cuddly. It's also coercion.", "00:29:04", "Okay, that's Mao the social scientist. Enough of him. Now, we're going to do Mao the military leader.", "You've probably heard this chestnut from Mao: \"Political power grows out of the barrel of a gun.\" Mao spent his- it’s still part of his early career- being right but being a minority view. That he had certain views about military operations that were not shared by Communist Central. And Mao kept following that dim light wherever it may lead, and events eventually vindicated him. He survived a variety of encirclement campaigns, but then he had some troubles. And here are his critics.", "Li Lisan was a labor organizer, he was the de facto head of the Communist Party from 1928 to 1930. And after the white terror on the Northern Expedition, Moscow had told its communist buddies in China that the next thing to do was to take the cities.", "00:30:10", "And so Li Lisan tries to, with the Nanchang uprising in 1927. It was a total disaster. And he tries it again in 1930 with the Changsha uprising, another disaster.", "That got him into exile in Russia and according to Mao, Comrade Li Lisan did not understand the protracted nature of the Chinese Civil War. Li Lisan is trying to fight a decisive, war-winning battle far too early. You try to do that, and you can get yourself ruined.", "Here's another critic, Xiang Ying . Mao is in the Jiangxi Soviet, and he thinks a smart strategy is to lure the enemy into your own terrain, which is favorable to you, let them get exhausted, then you spring the trap and you annihilate them.", "Communist Central in Shanghai thought this was nuts, that you shouldn't be ceding territory at all. So Mao, for the longest time, he’s off in Jiangxi, they were off in Shanghai, they were a long way apart, and so Communist Central can’t do anything about it- Mao does his own thing.", "00:31:08", "So the communist Central sent Xiang Ying to Jiangxi Soviet to fire Mao personally, and you can imagine how this works for his later career- not well. He fires Mao and this is where his strategy winds up producing the Long March, the Long Retreat, in which they lose 95% of their people by trying to defend territory, so people began to get it, that Mao may have known what he was doing.", "Then, on the Long Retreat, Mao chose as his terminal point of retreat, like where you’re going to wind up, as up in Yenon, way up north, deep in Muslim and Mongol lands, but near the Soviet border.", "Mao thought that was essential because they're the big benefactor. Whereas this gentleman, Zhang Guotao , who was the military commissar of the Fourth Army, thought, \"Nonsense, we're Han Chinese, we want to be in Han lands.\" So he wanted to go into western Sichuan, which he did.", "00:32:10", "And he suffered a series of defeats over 1935, and as a result, he was never as important ever again, and eventually defected to the Nationalists. So, Mao had proven himself prescient and right and determined. He had kudoi and determination, and people eventually recognized that.", "All right, Mao and Clausewitz define war somewhat differently. Clausewitz has this famous line, \"War is thus an act of force to compel our enemy to do our will.\" Mao says, \"No, no, war is politics by other means. It is something that is used to achieve political ends.\" So far that's not incompatible.", "But then here's Mao's twist: \"A revolution is an uprising, an act of violence whereby one class overthrows the power of another.\" Clausewitz is not about class warfare at all. In fact, his wife is always trying to wine and dine the aristocrats, so completely different in that department.", "00:33:07", "Mao is looking at the world, and he believes the linchpin of the social order are landlords. He's going to detonate them and try to destroy them.", "He talks about the violence of all of it, that you're going to get the peasant masses to overthrow these landlords, and that this will require terror in rural areas, but this is absolutely necessary and of course this is what the Nationalists absolutely would not tolerate. Mao also understood he was operating in a period of nested wars, and that the ones that were ongoing were the civil war with the Nationalists and then the regional war with Japan. Pearl Harbor comes a little later. And he talked about defeating Japan in three stages.", "00:34:02", "He said “defeating Japan requires three conditions: first is progress by China”, i.e, the civil war, “which is the basic and primary thing; the second is difficulties for Japan”, i.e, the regional war, “and the third is international support”, the big friend. I'm going to talk about each of these three things in turn.", "So, in order to win the civil war, Mao believes you need base areas, these Soviets, where are they located? Often on the boundaries of provinces, in very difficult terrain, where provincial authority, let alone national authority, simply does not extend.", "Mao thought that there were certain prerequisites for a good base area. One is strategic terrain. It's got to be defensible so that the weaker communist forces can defend it against conventional nationalist or conventional Japanese armies. So that’s key, pick your terrain carefully.", "00:34:54", "Also, you need to have a strong Red Army presence there to make it work. You need numerous organized workers and peasants. You've got to have some local support there. And then you need a good party organization.", "This is Mao's idea of what you need for a base area. He believed that you needed to match your military unit, the type of military unit, to the territory. He said there are three kinds of territory: there's base areas, there is enemy-controlled areas, and then there is the interface in between, which is where guerilla forces are going to be roaming.", "So he was all about deploying the Red Army to the comparatively safe base area, they'll protect that. You might send guerilla detachments to some of the guerilla areas, but really only really small things would you ever send into enemy territory. Moreover, he has prerequisites to fight. There are six possible prerequisites; you've got to have at least two before you fight. The most important one is that people have to actively support you. You probably need a base area to pull this off.", "00:36:00", "He said that the last three things about enemy weak points, enemy exhaustion, enemy mistakes, those things could appear quite rapidly. But you better choose your terrain very carefully; terrain is immutable.", "He also said that if you're weak the way the Communists were, you had to follow a strategy of annihilation. What you do is you annihilate one small enemy unit at a time, and the cumulative effects will eventually change the balance of power. Only someone who is really strong can tough out an attrition strategy.", "He's also about triangle building in these areas. So, little guerilla detachments go out into the interface. If it works out well and it looks like they can start either a new base area or expand an existing one, that's what they're going to be up to.", "So, these guerilla forces are either a disposal force, which you could send them out to do risky things, and if they get wiped out, it doesn't endanger base area defense, or they can become a nucleus of a new base area. So in small guerilla groups, party members are toughened, cadres trained, the party government, mass organizations are consolidated. If they're successful, then you bring in the Red Army to do higher-level institution building and either greatly expand an existing base area, or you're forming a new one, is what’s going on.", "00:37:30", "Mao had two military services, we always think of army, navy, air force—that's not what it was for him. It was guerilla forces versus conventional forces.", "And guerilla forces are operating in the rear of the enemy so that there's no stability or security. In fact, there isn't even a front line; it's just so amorphous. And so what guerilla forces are supposed to be doing is exterminating small enemy forces, weaken larger ones, attack enemy lines of communications, establish bases, and force the enemy to disperse, but they're doing all this in combination with conventional forces. Because here's the thing: you think about Mao and his guerillas, well, actually, here's what Mao really says: \"Regular forces are of primary importance because it is they who alone are capable of producing the decision,\" like winning the war.", "00:38:25", "“There is in guerilla warfare no such thing as a war-winning battle”. The relationship of the two is really important. Mao also thought that you needed to establish a fire escape if you had a base area, like, if it all goes south, where do you go? And his terminal point of retreat for the Long March was up in Yenan. He thought it was important to figure those things out in advance.", "Mao also cultivated an unprecedented group of allies, never before assembled in Chinese history: not only peasants, but women, minorities, youth, intellectuals, and the enemy army, most creatively. For cultivating the allegiance of peasants, it wasn't just education and land reform, it was also army discipline. This is where the three rules, six points for attention, and a couple of additional points were enforced through 1949, when the Communists win the civil war. Why? You don't want to alienate the peasantry.", "These are the people that are forming your cadres, your guerrillas, everything. So, maintain army discipline. Don't mess with it.", "00:39:30", "Mao also took an incredibly forward-looking view about women. Here he is with his fourth wife, the actress. The other three had suffered, respectively, abandonment, execution by the nationalists, and commitment to a Soviet psychiatric ward. Not fates for the faint-hearted.", "But Mao calculated that women are about half the population. They're miserably treated, so they're naturals for wanting a revolution. They're a force that will decide the success or failure of the revolution.", "He calculated correctly, and he was way ahead of his times. He also understood that in a guerrilla war, you're sending all the guys off to be fighting, and you've got to be building base areas and things. This is where women came in to do those activities.", "As a result, he offered women the unthinkable, which is “men and women are absolutely equal. Women have the right to vote, be elected, and participate in the work of the government”. He's just way ahead of his times.", "00:40:37", "Mao also offered minorities the previously unofferable, which was self-determination. What the minority people didn't get is that a promise made in a really desperate civil war, with a regional war overlaid, once you win those things and you can turn your guns on those trying to secede, that promise may be unenforceable. You can ask the Tibetans and Uyghurs how it all worked out.", "All right, so Mao's strategy: he had a strategy of disintegrating the enemy army and let me tell you how that one worked.", "In every county, you select a large number of workers and peasant comrades, people below the radar, and then you insinuate them into the enemy army to become soldiers, porters, and cooks. You can use women to do this as well. Talk about people who are below the radar.", "00:41:34", "You're creating a nucleus of a Communist Party to erode them from within, and eventually, it'll have a shattering effect. He said, also, part of this disintegrating the enemy has to do with leniency.", "Sun Tzu advocates: never put your enemy on death ground. Death ground means that you just have no hope, that you're a dead person if you don't fight. So, your only hope is to fight.", "If you put someone on death ground, they tend to fight with incredible willpower. And Mao is: don't do that.", "So, what he did when you capture people, propagandize a little, recruit the willing, but release the unwilling, so that the comparison of communist leniency and nationalist brutality becomes absolutely stark in this otherwise pitiless war.", "00:42:29", "Okay, that's the civil war. Now, we're going to go to the second problem, which was Japan, the regional war. Mao made a really thoughtful assessment of what were the key characteristics of China that would determine what kind of military strategy would use. And this is his assessment.", "He said, \"Okay, China is a large, semi-colonial country. It's an undeveloped country.\" Point one. \"Second, its enemy is really strong.\" Point two. \"Thirdly, the Red Army is weak. And fourthly, there's an agrarian revolution going on.\" And from this, he concluded that revolution was definitely possible, but it's going to take a long time. So he didn't kid himself about quick wins.", "He's going to come up with a strategy for protracted warfare. And he thought that Japan had certain weaknesses that the communists could leverage. For instance, the Japanese had inadequate manpower to garrison a country the size of China.", "This meant that guerrillas could roam far and wide behind Japanese lines. Also, the Japanese were brutal, just gratuitously brutal, and they're outsiders. This means that the peasantry are naturally going to gravitate towards the communists, just simply regardless of what the communists do, but based on what the Japanese are doing, they're going to gravitate towards the communists.", "00:43:47", "Also, the Japanese had grossly underestimated the Chinese. And as a result of underestimating the Chinese, they made errors. When they made errors, they started quarreling among themselves and making more errors and the communists could leverage these things.", "All right, Mao's most famous paradigm theory is his three stages of people's war. The first stage is the strategic defensive. It's the \"prevent defeat\" phase.", "The last phase, phase three, is the strategic offensive, the \"delivery victory\" phase. In the first phase, you're focusing on the peasantry. In the last phase, you're annihilating the enemy army.", "And if you look at activities that go on in each phase, the activities of phase one and two never cease. Rather, you add additional activities as competence increases. So, in phase one, you're doing popular mobilization, base area building, triangle building, guerrilla warfare.", "And then, as you get more of these things, then you can start engaging in mobile warfare, try your hand at a little conventional warfare, reach out with diplomacy. And then if you go further in stage three, then you're talking positional warfare, and you're going to have the war-winning battle. And how do you get from the phases?", "00:46:41", "Well, the transition from phase one to two is basically you have a critical mass of base areas, cadres, armed forces that you can move into phase two. But the problem of being in phase two is what had looked like isolated acts of banditry in phase one to the incumbent government, now the incumbent government gets it, that they're facing an insurgency bent on regime change, and the regime changes strategy.", "So the communists are no longer under the radar, but they're in the crosshairs. And it's dangerous because they're weak and the enemy is strong. So when you transition to phase two, initially it is quite dangerous.", "And here's Mao writing about these problems and saying, \"Look, in these stages one and two, the enemy is trying to have us concentrate our main forces for a decisive engagement,\" i.e. decisive in their favor, they'll win the war because they'll annihilate us. And of course, this is what General Westmoreland was trying to do in the Vietnam War, getting the North Vietnamese to concentrate so he could blow them off the map. And of course, they'd read their Mao and didn’t do that nonsense.", "00:47:50", "Mao was saying you only fight when you're sure of victory. Also, in order to get to phase three, you need a big friend. Why? Because phase three is conventional warfare, which requires infinite supplies of conventional armaments that requires an industrial base to produce it, and somewhere like China lacks this industrial base.", "So, good old Soviet Union played this role the world over. And so this is the secret sauce of people's war: if you want to get to phase three, you need a big buddy. That's where the Soviet Union came in.", "And so this is why Mao determines that Yan’an is going to be his terminal point of retreat. He's fighting his way through to the Soviet Union. No kidding, you've got to have the conventional arms. To fight this stuff.", "What's interesting about Mao's description of people's war is it actually applied not so much to the war with Japan, which he claimed it applied to, but rather to the civil war with the Nationalists. And here is the key. Mao didn't actually fight the regional war against Japan. The Nationalists did.", "The Nationalists did every bit of the conventional fighting, except one, and that's the Hundred Regiments campaign that Mao fought in North China in 1940, and he was smeared. The Japanese responded with a \" three alls \" campaign, which is \"kill all, burn all, loot all,\" which is what they did. And it wiped out loads of communist base areas in North China.", "So, Mao never tried that ever again, and he certainly didn't write about it in his collected works. Don't talk about failures there. So, it's interesting. What he's talking about really applies to the civil war.", "00:45:01", "And Mao understands these different layers. So, as the Nationalists are busy fighting the Japanese and actually being destroyed by them, the communists are pretending that they're fighting the Japanese. They're later going to take credit for it and say, \"We won against the Japanese,\" which is nonsense.", "There was also the United States in that as well. Because he's using that to strengthen the communists during all of this rural mobilization. So, when Japan's defeated and the communists, when the civil war resumes full bore, he's in a good situation.", "00:48:47", "Okay, that's it on Mao, the operational military leader at the operational level. Now let's put it all together as Mao the grand strategist, of linking all elements of national power into a coherent strategy. These are Mao's instruments of national power: the peasantry, propaganda, land reform, base areas, institution building, warfare, and diplomacy.", "The US military, when they're thinking about elements of national power, love this little framework: DIME . D is for diplomacy, I is intelligence, M for military, E for economics, as being critical elements of national power.", "It's better than only looking at the military elements, at least you got three more things. But if you look at Mao, this is not a cookie-cutter event. This is a different society, different national elements of national power are available. You've got to get to the other side of the tennis court net to see what the other team is doing.", "Mao is famous for all these reasons, but also for his sinification of Marxism, where he makes all the things that I’ve told you about, makes his version of Marxism much more applicable to these countries, the newly independent countries after World War II, of how they put things together. He positions himself to replace Stalin, who dies in 1953, as the leader of communism.", "00:50:15", "Mao was prescient on numerous levels. He was certainly prescient about the centrality of the peasantry. He was way ahead of his times on the importance of women.", "He was calculating and cunning on how he was going to use minorities and POWs. He had proven his kudoi and determination with his military strategy.", "He also anticipated when the Japanese war in China would stalemate, and he also anticipated, more or less, when the United States was going to get into the war in Asia. He's the great sinifier of Marxism.", "Mao produced all kinds of concepts and paradigms that are useful for insurgents who are trying to take over the host from within. I've listed a variety of them here, and I'm going to go through them in turn. These are the things that the counterinsurgent then has to counter.", "All right, rural mobilization. This is obviously a big deal in Mao. If you compare—and I'm going to be doing a lot of comparisons with the Vietnam War and the Korean War because they're communists and all these things—you can see Mao's rural mobilization was very successful in China. The North Vietnamese rural mobilization was also really good. South Korea, not so much.", "00:51:34", "Why? The leader of North Korea was trying to mobilize the peasantry in the south, that wasn't so successful. And why? Syngman Rhee , the leader of South Korea, immediately did land reform, and this glues loyalty of soldiers to the leadership doing this. Maybe that is not the only factor, but an important factor for why the Korean War turns out differently.", "Base areas. Mao says those are really important. The North Vietnamese used them to great effect. They had all kinds of areas in the south and then on the borders of South Vietnam. North Korea, not so much. It couldn't form base areas in the South.", "Why? It's a peninsula which the US Navy cut off. It's also cold. So where are you going to flee if you want to do a base area? I think it's up a mountain in South Korea, and that will get cold in the winter, and you'll probably freeze to death.", "I believe “Al Qaeda” means \"the base.\" I believe that's the correct translation. So if you're thinking about ISIS or whatever's left of it, you can go back to Mao's ideas about base areas, that you need a particular kind of geography that's good for the defensive. You've got to have a big party organization, a lot of local support, and you have to have military forces there.", "Does Al Qaeda—well, it's going to be ISIS or something—do they have all four of these things or can you remove any one of them?", "00:53:03", "Another idea from Mao is luring the enemy in deep. Mao had done that very successfully in the first three encirclement campaigns, and then he was removed from command, so he wasn't doing that anymore. Again, in the final phases of the Chinese Civil War, the `45 to `49 event, Mao lures the Nationalists deep into Manchuria.", "The Nationalists are a South China phenomenon. I showed you the map. Chiang Kai-shek starts in the south and he goes way up north, so he's weakest in the north. Mao lures him way up there in Manchuria, and then he springs a trap and destroys Chiang Kai-shek's armies up there. Then the entire civil war wraps up within a year of that.", "Mao also lured good old General MacArthur, who fancied himself a great Asianist in the Korean War. MacArthur goes all the way up to the Yalu River, right on the Chinese border in the Korean War, and Mao springs a trap. MacArthur didn't realize that 350,000 Chinese troops had been infiltrated around him. Oops, missed that.", "00:54:12", "It did not work out well. But for the US Navy, now, it needs to think about, what about being lured into the South and East China Seas and then the Chinese pulling the trap. There are places you don't need to go. The Chinese may have to go there, but maybe you don't have to.", "Another one is terminal point of retreat. I've talked about Yan'an being a really good one, and that worked. Then when the Manchurian campaign initially wasn't going well for Mao, he retreated up to Siping, which is a little bit north, and that worked well enough. But when Chiang Kai-shek tried to pick these Manchurian cities as a place to retreat in Manchuria, bad news.", "There's only one railway system that gets you south out of Manchuria. You suppose the Communists don't know about it? They encircled the Nationalists in these cities and destroyed them there.", "So when you're thinking about insurgents and things, think about, well, if you knock them out of one area, where might they go next? Another concept from Mao is disintegrating enemy forces, which is what happens to the Nationalists. Think about it: Chiang Kai-shek had been fighting since the 1920s, forever and ever, and he fought the Japanese, they're brutal. The United States had trouble fighting the Japanese, and Chiang Kai-shek fought them alone for a long time before we joined the war.", "And yet, he loses a battle in '48 in Manchuria, and that's it. The rest of the country wraps up. So what was going on there? Or the South Vietnamese: they'd been fighting forever.", "00:55:55", "And then the whole place just wraps up. And the same thing happened with Japan in World War II. They'd been fighting all over the place forever, fighting us brutally. And then in 1945, we don't even have to invade the home islands.", "Think how unusual that is. The Germans fought every street on the way to Berlin; the Japanese quit. This is about disintegrating the enemy and why it happens.", "But what you can say in all those three cases is the warfare had been going on for an incredibly long time, and it was ruinous. The places in question were ruined, so don't expect that to happen too fast.", "Of course, Mao's big contribution was his three stages of people's war. Mao presents them as sequential: you go from one to two to three, and tada , you win. And they're cumulative, right?", "You do certain things, and then you get to phase two, and you've had this cumulative effect of destroying enemy forces. You get even more accumulating casualties on the other side, and finally, you win.", "A student of mine said that's actually not a great way to look at it, or there's an even better way to use it. It's like a metric of how an insurgency goes up and down, so that ISIS may be on the cusp of going into stage three. I don't know that they ever really—well, possibly, with all the equipment they got initially—or whether then they get knocked back to stage one, where you wonder whether they still exist anymore, and come back and forth. Anyway, that was that person's take. I thought I'd pass it along to you.", "00:57:25", "All right. I have one last thing to talk about. Mao, when you read these 7,000 pages—and I don't recommend it—one is struck by all these dualities. I think it goes back to Yin and Yang analysis, which is very prominent in traditional Chinese thinking.", "I got a little abbreviated Yin and Yang list of opposites here of these coexisting opposites constantly changing. So, if you look at Mao discussing triangle building, it's in terms of the presence or absence of factors, presence and absence being opposites.", "So you're going from the absence of political power to the seizure of political power, from the absence of the Red Army to the creation of the Red Army. It goes on and on. Studying the differences and connections between dualities is a task of studying strategy, and it's really everything.", "To defend in order to attack, to retreat in order to advance. It goes on and on, and it's all about correctly orienting yourself between these opposites.", "00:58:32", "So, “oppose protracted campaigns and the strategy of a short war. Uphold the strategy of a protracted war in a short campaign” and I'm starting to lose it.", "You've got to put everything in the context of each other: losses, replacements, fighting, resting, concentration, and dispersion and I'm thinking, \"I don't get this. Mao's bipolar disorder.\"", "So I turned, and then you look at the diagram. Okay, they're opposites, and it's either one or the other, but even- they're not even true to themselves.", "Each opposite is a dab of the other. I was having trouble figuring it out, so I went to this gentleman, Brigadier General Samuel B. Griffith .", "He is the only translator into English of Sun Tzu who has a distinguished military career, and also, he went on to get a DPhil—it's like a PhD—from Oxford in military history.", "00:59:32", "If you look at his career, he's in China during the Japanese escalation of the Sino-Japanese War. He's back in China, does another tour at the end of the Chinese Civil War. He gets top marks in the military's Chinese language exam.", "Oh, look at these details: Navy Cross, Purple Heart, Distinguished Service Cross. He's a distinguished man, and for retirement, he decides he's going to go get the Oxford degree, and he writes the translation of Sun Tzu. He is the only translator of Sun Tzu who translates \"死地\" as \"death ground.\" Other people, it's like, oh, I don't know, I can't remember the words, like, I don't know, \"contested ground,\" something else.", "But I'm guessing that when he chose those words, \"death ground,\" it's because when he was thinking about it, it might well have conjured up his memories of what exactly it was like to be on Guadalcanal or New Georgia. He has provided insights- oh also the other thing to mention about him is he's really modest.", "01:00:35", "I had to dig around to find these biographical details. They're not on the cover of his book which is where they should be. And his service to his country continues to this day.", "Because it's his translation that continues to educate officers over 40 years after his death. But here's his take: \"In every apparent disadvantage, some advantage is to be found. The yin is not wholly yin, and the yang is not wholly yang. It is only the wise general,\" said Sun Tzu, \"who is able to recognize this fact and turn it to good account.\" And of course, Mao could, and he did. But in peacetime, choices are not binary, they're graduated, and evolution is much more conducive to economic development than revolution.", "So that's what I have to say about Mao's bipolar disorder and other things about him, and thank you very much for bearing with me." ]
[ "https://en.wikipedia.org/wiki/Stuart_R._Schram", "https://en.wikipedia.org/wiki/Julian_Corbett", "https://en.wikipedia.org/wiki/White_Terror_(Taiwan)", "https://en.wikipedia.org/wiki/Cursive_script_(East_Asia)", "https://en.wikipedia.org/wiki/Political_commissar", "https://en.wikipedia.org/wiki/Military_commissariat", "https://en.wikipedia.org/wiki/Cadre_(military)", "https://en.wikipedia.org/wiki/Red_Terror", "https://en.wikipedia.org/wiki/Li_Lisan#", "https://en.wikipedia.org/wiki/Xiang_Ying", "https://en.wikipedia.org/wiki/Zhang_Guotao#", "https://en.wikipedia.org/wiki/Hundred_Regiments_Offensive", "https://en.wikipedia.org/wiki/Three_Alls_policy", "https://fmso.tradoc.army.mil/military-dime-research-project/", "https://en.wikipedia.org/wiki/Syngman_Rhee", "https://en.wikipedia.org/wiki/Samuel_B._Griffith" ]
https://www.dwarkesh.com/p/sarah-paine-india
Sarah Paine Episode 1: The War For India (Lecture & Interview)
[ "Sarah Paine 00:00 I need to start with a disclaimer, because I work for the US Government, and they require you to do a disclaimer. So: the ideas that you're about to hear are my ideas. They don't necessarily represent those of the US Government, the US Navy Department, the US Department of Defense, let alone the Naval War College where I work. Are we all good on this?", "All right, so today I'm going to tell you a story of three protagonists, Russia, the United States and China, that all wanted to work their magic on India and Pakistan, which didn't exactly appreciate it.", "So two big topics. One is intervening in someone else's problems, a cottage industry for the United States. And also before you do that, you really ought to check out the alignments. Who's the primary adversary of whom? How long has it been that way? And also ask these questions about all the neighbors and anyone who might want to crash the party along with you.", "It's also a story of a series of limited wars. What's a limited war? It means it's for something less than regime change. So however it turns out the governments that started that war are still in place. And two of them resulted in quick victories, the ideal in warfare. The first one was the Sino-Indian War of 1962. And the other one was the Bangladesh War of Independence in 1971. And these wars change things in many short term expected ways and then in many long term, highly unexpected ways.", "So I'm going to go into all of this with you all. So here's my game plan, and it's literally a game plan. I'm going to start out with the pivotal decisions made by different players. Then, once they're made, certain things are foreclosed and certain things are possible. And this is the playing field delimited by these pivotal decisions. And then I'm going to look at the teams. Some allies were prime allies, others were subprime, and they mixed and matched over time. So I'll do teams, and then I'll do the game, the interaction, and then at the end, I'm going to do the plays, some of the techniques, things that you can do to play this game.", "02:11", "Pivotal decision number one. When Mao won the Chinese Civil War in 1949, it didn't end. He also spent the next two years not only eliminating Nationalist remnants, but also conquering Xinjiang and Tibet. Tibet had been autonomous since 1911, when the last dynasty had collapsed. And Mao decides that he is going to reconquer Tibet. Tibet's an interesting place it contains, I think, about 40% of China's mineral resources. So there's a lot of money being made in Tibet for those with the capital to invest in big mines. If you look at this map, the Han Chinese, the preponderant group of China, they inhabit, they dominate as far west as the Chongqing Basin and Sichuan. And then you go further west, in-towards Tibet. China has put large armies into Tibet exactly twice. Once under the Qianlong emperor in the late 18th century, they didn't stay for very long. And then under Mao in 1950. And they have stayed forever and built roads so they could keep on sending more in. Between 1950 and 1957, China built a series of road systems through Tibet. And the western route there is the only one that provides year round traffic. The problem with the other two is, well, check it out. They go through 14 or 15 mountain ranges. It means you go vertical up, vertical down – do that 14, 15 times. And then between monsoon rains and snow and mudslides, they're very difficult to maintain. And then the eastern one crosses the major river systems of South Asia. So that's difficult. So only the western route is the really good one. And it's really important for the Chinese if you want to conquer Tibet, you truly want that one.", "All right, so if you look at this, that western route provides not only the ability to control Tibet, but it also provides a pincer onto Xinjiang. If China wants to come in one way and the other way, it's a good way to get in. If you look at those two circles there, those are the disputed areas between China and India. The northern one is the Aksai Chin Plateau, which China has taken from India and India still claims. And in the south is Arunachal Pradesh, which India still owns, but China claims. And so these are the areas that they're fighting over. But once China took Tibet…Before, there had been a big buffer zone between China and India, right? There's all this Tibet, and no one could really get in there. Now China's built roads so it can get into places where India cannot deploy troops until it gets into the road races with the Chinese. And so it reduces the buffer zone between China and India to these small Himalayan kingdoms of Nepal, Bhutan and Sikkim. So it changes things. So that's pivotal decision number one, deciding to conquer Tibet.", "05:40", "Pivotal decision number two is the United States, in order to deal with the Soviet Unions under Eisenhower, did what the wits back in the day called pactomania. What is that? It's forming all sorts of bilateral relations and also regional groupings in order to counter the Soviets institutionally and wall them in that way. And part of this was what was called the Northern tier strategy, as seen in the Baghdad Pact , where you get Turkey, Iraq, Iran and Pakistan to form this thing, and it's to wall off the Soviet Union from the oil fields of the Middle East. East. And the other thing, you should look at this map before it goes away. Look where Pakistan's located, where you think it is, and then go to the east, and that's East Pakistan. In the 1971 war, there's going to be a civil war, and Pakistan's going to lose East Pakistan, which is Bangladesh today. So just keep that in mind.", "So as part of the sweetener for Pakistan to join the Baghdad Pact, the United States allied with Pakistan and gave them a big military aid treaty. And here's Nehru, the Prime Minister of India, and he is horrified. A military pact between Pakistan and the United States changes the whole balance of power in this part of the world, affects us most especially. The United States must realize that the reaction of India is going to be, you're arming the Pakistanis. Whom do you think they're going to shoot? It'll be us. And the Indians were just appalled that we did this. And afterwards, Eisenhower admitted it was “perhaps the worst kind of plan and decision we could ever have made.” It was a terrible era, but now we're stuck with it. Because what the United States is slowly discovering is that if you arm either India or Pakistan in this period, it's going to aim it at the other one. And so that pact poisoned US Relations with India for the duration of the Cold War and set up things in ways the United States ultimately wasn't happy with.", "06:52", "Okay, those are two pivotal decisions. Now for a pivotal situation, it's really the devolving situation between Russia and China. Until Mao got atomic weapons in 1964, he really had to shut up. And because he needed Russian technological aid, he's been totally cut off from the West. After the Korean War, he's being isolated. So he truly needs Soviet aid. And he also, if he wants nuclear weapons, he needs some of their aid to do that as well. So he has to keep his mouth shut. But once he detonates an atomic weapon, here's what he tells the Russians, and they just about lose it. There are too many places occupied by the Soviet Union. The Russians took everything they could. We have not yet presented an account of this list. Under the czars, the Russians took from the Chinese sphere of influence territory exceeding US east of the Mississippi. Think the Chinese didn't notice? Yes, they noticed. So Mao all of a sudden is calling that. And the Russians are appalled.", "But Mao has other gripes against the Russians. Stalin, in the lead up to World War II, had made sure to set up the Chinese to fight Japan so that he wouldn't have to. So that leaves him just fighting Nazis, not Nazis and Japanese. And then Stalin takes Mongolia, which had formerly been a part of the Chinese sphere of influence. And in the Korean War, he's more than happy to fight to the last Chinese. And then during the Chinese Civil War, the Russians tell Mao, oh, stop, Yangtze, you need to take a little breather here. Because he wants a divided China like the divided Germany he has, and then the divided Korea he's going to get. You want to be surrounded by these little broken states around you if you're a continental power. And then when Stalin dies and Mao wants to be senior statesman of communism, Khrushchev is appalled by that. Then Mao is appalled when Khrushchev does de-Stalinization, because Mao has his own cult of personality. And then Khrushchev wants to do peaceful coexistence with the west, while Mao is ramping it up in the Cultural Revolution, so there's no meeting of the minds. And then all this becomes very public when it hits the propaganda press of the Communists, of the Sino-Soviet split in 1960.", "09:02", "All right, the Russians have their own gripes about the Chinese, and here's how they go. The Russians look around at the west and particularly the United States and go, wow, they got bases everywhere. The British have got bases everywhere. How come our allies won't give us bases? I mean, well, if you occupy Eastern Europe, the whole place is a base, but that's a different matter. So the Russians want the Chinese to let them keep a couple of remaining czarist treaty ports, essentially, and want to expand them. And the Chinese say, forget it. And in fact, after the Korean War, when the Chinese have troops all up in Manchuria, which is where these bases were located, and there's a succession struggle going on because Stalin's just died, the Russians have to return the bases because there's just too much bad stuff happening where they live, live. And then what Mao does in ‘54 and ‘58, which just appalls the Russians, are these two Taiwanese Strait crises. What's going on? Mao is lobbing all kinds of ordnance on these islands that are owned by Taiwan, that are very close to the People's Republic's shores. And the Russians are appalled they are not consulted, and yet they have a friendship treaty that obligates them to provide to join a war under certain circumstances. And the Russians are going, whoa, whoa, whoa, whoa, there could be nuclear follow-on from this stuff. So the Russians then asked the Chinese if it's okay if they have a combined naval base on China's shores. And China says, forget it. The Russians are thinking, okay, well then we're not going to give you any of the plans for the atomic weapon. And it all devolves. So there's no love lost on either side. And then what exacerbates these tensions is the Vietnam War where China wants influence over neighbor Vietnam. That is pretty typical. But Russia wants influence over Vietnam to do a pincer on China, which China doesn't like at all. Meanwhile, both of them want to prove their revolutionary credentials by aiding the Vietnamese, North Vietnamese. So Russia's aid needs to come by train over China, lest the United States sink at the good stuff if it goes by sea. So the Chinese feel obliged to let it go through, but they're just hassling Russians the whole time through. They take it apart, tear it apart, say it was from China. And the Russians are just apoplectic. So their relations are getting worse and worse and worse and the squabbling is just incessant.", "So it's not surprising that the Sino-Soviet border conflict of 1969 breaks out - during the Vietnam War. And while all this is going on, this is one of the river islands, the Amur river forms much of their border, and this is one of the islands there. And there's much fighting over it. And the Russians come to us, the Americans, and say, “is it okay if we nuke these people?” And the United States says, “no, there's no way it's okay to nuke these people”. And Mao figures it out. The one that wants to nuke you, that's the primary adversary. So prior to that moment, the United States is the primary adversary of both Russia and China. Now with this, they're primary adversaries of each other. It causes a reshuffling of the allies, and I'll get to that later. So, okay, I've done the playing field of these decisions that delimited it. But now I'm going to get to the allies, and some allies are better than others. And here we got Mao and Khrushchev. Look at these lovebirds. Boy, when that divorce took place, boy did it mess up the extended family. Nevermind.", "12:24", "And the point, for my purposes tonight, I'm going to use the word \"alliance\" really loosely. If you sign a mutual defense pact - for my purposes tonight that makes you an alliance - allies. And if you're a political scientist, you've got something that's much more complicated, but forget it, I just can't handle it. So we're going to do it this way. All right?", "So Stalin didn't think much of Nehru at all. He thought he was a lackey of British colonialism. But Khrushchev thought India was really important to counterbalance China. And here's Nehru thinking about it. Well, look, we have to be on friendly terms with both Russia and America. But actually he felt much more in common with Russia. Why? Because he favored Fabian socialist economic policies that were much more akin to what's going on in Russia than it was in the United States. Moreover, the United States was segregated, which appalled Nehru. And in addition, the United States was cozying up to all the colonial powers. So Nehru thought the Russians were the better bet. While all this is going on, the Indians were non-aligned and they treated the Chinese really generously. And I've got a whole list of generosity. So India immediately recognizes China in 1950. Countries like the United States didn't, for forever. And when the San Francisco Treaty , I think, is signed in 1951 in the United States, ending the war with Japan, India refuses to sign because China and Russia aren't there to sign as well. And then to help China break out of its diplomatic isolation at the end of the Korean War, India signs a friendship treaty with China. And as part of that friendship treaty, it recognizes Chinese sovereignty over Tibet. Under international law, contrary to what Vladimir Putin is doing lately, under international law, if you recognize someone's sovereignty over territory that is permanent, you cannot back out of it legally under international law. So the Chinese promise, I don't know, there's some like, peaceful coexistence or whatever they're promising the Indians, but that has no permanence under international law, whereas this thing does. And then from 1960 on, the Indians are voting to seat the People's Republic of China, not Taiwan, on the UN.", "14:42", "Meanwhile, in the background, all this road-building is going on. Those roads are being built between 1950 and 1957. And the Indians aren't going to figure out until 1958 that the roads are there. Meanwhile, road's completed. The Chinese want to complete their control over Tibet, and so they're going to send big armies up there. And Tibetan culture is much more…It's of Indian origin, it's not Chinese origin. So this repression of Tibetan culture just appalls the Indians. And then two days before the People's Liberation of Army is going to make it into Lhasa, which is the capital of Tibet, the Dalai Lama flees-he's the spiritual leader of Tibet-he flees to India where he's remained ever since to the absolute horror and anger of China. So at about this time, the Chinese come to the Indians and say, “look, why don't we do a swap on sovereignty?” You recognize our sovereignty over that Aksai Chin Plateau where nobody lives, but it's really good for the roads, China's western route. And then we'll recognize your sovereignty over this much more densely populated Arunachal Pradesh. And Nehru doesn't want to hear anything about it.", "So during the Cuban Missile Crisis, when Russia is much too busy worrying about who's going to be lobbying nukes at home, this is when China launches the 1960s Sino-Indian War and China just takes the Aksai Chin Plateau. The Indians are appalled because they don't have any roads to be able to deploy up there, whereas the Chinese do. Their defeat is just total. And they can't believe the Chinese did this to them. Here's Nehru afterwards. There are not many instances in history where one country, that is India, has gone out of her way to be friendly and cooperative with the Chinese government and people and plead their cause in the councils of the world.", "And then for the Chinese government to return evil for good — and even to go to the extent of committing aggression and “invade our sacred land”. Who does this? So I get it. The Chinese get the territory they want. That was the goal of that war. But what they have done is taken a country, India, which had its leadership terribly idealistic, not interested in becoming militarized at all and making them angry forever. India immediately doubles the size of its army within the next 10 years to up to 750,000 people. Creates 10 mountain divisions useful against China. And they've never ceased being so angry. And then if you think about this, what if instead of playing this game this way, China and India had teamed up? I would suspect we would be in a completely very different world order now, if that is what they'd done instead. But this is China's decision, not India's fault on this one.", "17:38", "All right, so that wasn't great. So let's check out other possibilities once that happens to India. India is all of a sudden looking for Russia to counterbalance China. And you also have Pakistan wondering what to do and what the Pakistani notice after all this: well, the Chinese are not going to be teaming up with India, right? They've just invaded the place. And so this is when Pakistan sees that China might have real possibilities as an ally. And Bhutto is going to play the China card for the nuclear chip, trying to get Chinese help for all of that. And here's what happens. So you have the '62 war, and then in 1963, Pakistan really inexplicably is ceding territory to China. Who does that? And there are various possibilities down here, but I'm surmising it's because it's going to help on nuclear development. That would explain why you would give a lot of territory. But we don't know. There was supposed to be some mutual defense pact maybe, and there's some other things going on. Anyway, you can imagine what it may or may not have been. Okay, in the case of Pakistan and China and India and Russia, they had quite a good relationship because the Pakistanis and the Chinese shared India as their problem, and the Russians and the Indians shared China as their problem. And that worked pretty well. But the United States was just a disaster from both Indian and Pakistani point of view, and vice versa, because the United States wanted to befriend both of them. But if you befriend one, the other is appalled. And so the United States wound up appalling everybody. And so what the United States wanted to have happen is for India and Pakistan to put aside their differences and then combine against China and stop communism from spreading. India and Pakistan want to use the United States for maximum aid to use against the other, which is a non-starter for the United States. And then Pakistan really would like it if the United States would be nice with China as well, because Pakistan wants to have good relations with the United States and China. And that's a non-starter for the United States until 1971, when there are secret visits and things going on. It's later on.", "20:00", "Okay, so in 1962, India gets trounced in this war with China. They look like they're militarily feckless. And then in 1964, Nehru dies, right? He'd been the head of India since independence in 1947. He'd been there a long time, so he's dead.", "So 1965, if you're Pakistani, it looks like a good year to settle border problems. And so what they do is first they invade through the south, if you look way down at the bottom there, the Rana Kuch, and that seems to go pretty well. And then they decide they want to go for the thing they really care about, which is Kashmir, and they do that well, the enemy gets the vote. And the Indians invaded straight through Lahore, which isn't remotely what the Pakistanis had in mind. And then the United States does a double arms embargo on both of them for doing this. And the problem is the Pakistanis are much more dependent on U.S. military aid. The Indians were more diversified, so they just didn't have enough spare parts to continue this thing. So it's a very unhappy event for them. They lose it, and what happens, neither the United States nor Russia wants either one of them fighting that war.", "The Russians are thinking, “we want the military aid to go to India in order to counterbalance China, not to decimate the Pakistanis”. And the United States doesn't want it either. So the United States is very happy that the Soviets broker the Tashkent Declaration that ends this war. But Pakistan is worse off after this thing. And India has restored its reputation for knowing what it's doing on the battlefield.", "So for Pakistan, the United States is really problematic because we're interested in being nice to them when we want something out of them, and then we're not so interested when we don't want something out of them because we don't share a primary enemy. So what we really wanted were listening bases. The technology of the day was such that if you want to surveil the Soviet Union, you want to send these big U2 planes over and given their ranges, and you're not supposed to be doing it. And so we had U2 bases, I think Norway, West Germany, Turkey, Pakistan, and then Japan. And in addition, we had a listening base at Badaber. And these are really important things for us during this period. So we're paying the Pakistanis a lot of money to get it. And except there was one of these U2 planes, gets shot down over the Soviet Union. They finally get it so they can. Because they fly at really high altitudes, [but] they shoot it down. And Khrushchev is furious. He hauls in the Pakistani ambassador in Moscow and he goes, where is this place Peshawar? We've circled it on the map, and we're going to blow it off the map if you all don't wise up. And the Pakistani is like, whoa, whoa, whoa, whoa, whoa.", "22:56", "And so between these sort of threats in 1960 about the U2 and then the United States freezing arms in the 1965 war, which the Pakistanis believe they lost it over that. Oh, yeah, and by the way, in that 1965 war the United States had, when we provided arms to everybody, we said, oh, we will guarantee that no one uses it, that Pakistanis and Indians don't use it against each other. And of course, we could do nothing about that. And in the 1965 war, the Pakistanis are using US tanks to go after Indians in the largest tank war battle since World War II. So there are a lot of upset people in South Asia. But here is Ayub Khan, leader of Pakistan, telling the United States that the United States forgets that our security hazards and political liabilities have increased to a dangerous level due to this U2 stuff. And we kept our part of the contract whilst the Americans betrayed us at every turn. They built up India against us, they failed to help us in the ‘65 war and finally stopped military aid. They think that we exist for their convenience and that our freedom is negotiable. Dream on. So when the lease came up for the listening post at Badaber in 1968, the Pakistanis canceled it. They're sick of it.", "Meanwhile, the Indians weren't too thrilled about the United States either. This is earlier when Franklin Delano Roosevelt was president: here's Mahatma Gandhi telling him allied support for freedom and democracy seems hollow so long as America has the Negro problem in her own home. Indians were appalled by segregation. They knew exactly which end of the bus they'd be sitting on. So there are issues both ways. And in fact, Nehru and his daughter Indira Gandhi found the United States really impossible to work with. And they looked at capitalism as the way station to imperialism and fascism, whereas Americans looked at socialism as the way station to communism. So there's no meeting of minds on all of this. And so if you look at the alignments of primary adversaries India and Pakistan, from most of the time our primary adversaries: India is always Pakistan's primary enemy. But you could argue that with the ‘62 war, is it Pakistan or is it China who is the primary adversary of India? And then when you get to the 1971 war, which I'll discuss a little more in a second, where Bangladesh is broken off and then Pakistan is left, has less than half the population, then you could argue that for India, China is the primary adversary. And then if you look at that reshuffling, if you also look at the 1969 war, that reshuffles the nuclear powers. So formerly Russia and China had shared the United States as their primary enemy. But after the ‘69 war, they're each other's primary enemy. And this gives the United States the swing position of team up with A or team up with B. And the United States teamed up with China to overextend Russia in the Cold War, because I had always felt that the Soviets were the bigger threat in those days.", "26:14", "So anyway, as you're looking at alignments, you can apply this kind of framework to any country on the planet to try to figure out what's going on. And think about how alliances work. If I look at The World War II Allies – probably one of the most effective alliances in world history, if you think about what people ultimately want. The British want an empire in which the sun never sets, the United States wants to decolonize everybody, and Joe Stalin wants a communist wonderland. Those are mutually exclusive. But to get there you have to go through the common way station of getting rid of Hitler. So the common existential threat can be a superglue of the most unlikely partners.", "But let's look at the Axis. What they want at the end of the war are series of influence in different parts of the world. So for Italy, it's empire in the Mediterranean, Japan in the Pacific, and then Hitler, it's all over Eurasia. That's not mutually exclusive. But if you look who they're the primary enemy, who stands in the way of those plans, it's Britain for Italy, it's Russia, for the Germans and for the Japanese, it's first China and then the United States. None of it aligns. So they fight parallel wars and allow the allies to, to take them out in detail. So when you're thinking about alliances in the world today, when you're wondering what's going on with Iran or whatever, figure out who's their real primary enemy, get it straight. Does that primary enemy, is that an existential threat for them? So if you've got countries that line up on same primary enemy, existential threat for all around, the most unlikely people will cooperate. On the other hand, people who are very likely to cooperate, maybe like the fascists, they all shared this basic ideology. But if they don't have the same primary enemy and the same theater of interest, geographically the same theater, they may not cooperate very well at all. So you can apply this to anything you want to apply to.", "So back to my game here. If you're looking at the cards people have to play. The United States has lousy cards because we don't share primary enemies with anybody. So it's a stalemate. You help India, the Pakistanis hate you. You help Pakistanis, the Indians hate you, it's no win. But if you look at India, India and Russia share a China problem. That's good, they can cooperate on that. And then you have Pakistan and China, they share an India problem. They can make things happen over that. So there are cards for them to play and 0 for the United States to play. It's just the way it is. So the name of the game and strategy is to get the outcome that you want to have happen. And it's like, how do you play this game of five person, five country cutthroat billiards to get remotely what you want out of it? So for the English majors among you, I have a metaphor. For the rest of you, you can just bear with us. Imagine a game in which every ball can be a cue ball and players can take turns, come, leave, do whatever at will. Sometimes they'll cooperate some of the time, but they don't necessarily want to put the same ball in the same pocket. And so if that's the case, there's going to be no enduring cooperation. And understand that you want to have your goal is going to be the ultimate shot you want to take. But as you're taking the intervening shots, people are going to try to disrupt it. How on earth do you get through this game? So this is what the next section is all about. So if you look at this map and where Pakistan's located, it's this very strategic location right in the center of, not quite center, but of Eurasia, the center of the Soviet boundary there.", "30:09", "And I'm going to give you a map. This is from Halford Mackinder . He's one of the finest, most famous people to publish on geopolitics. This is his 1904 map. It's actually quite famous. And he talked about how Russia occupies the heartland. In his day it was all these railway systems. He thought that was the prime piece of real estate in the world. And then it's surrounded by this inner marginal crescent. You look where Pakistan's located, it's right in the center there, right up by Russia. And it's a really crucial location before satellite imagery is available, to put listening posts on Russia. Russia's huge. You gotta have a bunch of listening posts to track their missiles and things. And then if you want access to Afghanistan, which when the Russians go there, we want access, and of course, when we go there, we really want access. So it's a strategic location. For the Pakistanis the United States was so frustrating to deal with because we'd be on-and-off interested in them because we don't align on a primary enemy.", "So pre-satellite-imagery, we really wanted to cozy up to the Pakistanis. So we have U2 bases but then there are technology changes. And before we get facilities in Iran, we want this listening post in Badaber. But technology will eventually change. And then for a while, we truly want the Pakistanis to get the mail through to China when we're trying to break China out of diplomatic isolation and then cooperate against the Soviet Union, and Pakistan delivers the mail. But then we set up an alternate setup in Paris to go through our embassies that way, and Pakistan is again irrelevant. And then when Russia's in Afghanistan, Pakistan's essential to get aid to insurgents to cause the Russians trouble. And then, of course, when we're in Afghanistan, we really want to cooperate with the Pakistanis. And that works until we cap Osama bin Laden without telling the Pakistanis in their territory, Abbottabad. And then relations are really not so great. And so it's a very bumpy ride. And in these periods when we really need the Pakistanis, we don't pay attention to human rights or the really big one, nuclear proliferation. And so the proliferation is pretty steady.", "So if you look at after the United States having trouble negotiating all this so that whatever you do in the short term doesn't wreck you in the long term. But in order to get to the long term, of course, you've got to go through the short term. So after the U2 crash in Russia, where it gets shot down and the Pakistanis are having a heart attack about that, that's when the Pakistanis look to cultivating more, better relations with China because the US relationship is just too potentially costly and the Americans cut off the military aid. And then when the Pakistanis are being very nice about delivering the mail for Richard Nixon and Henry Kissinger to line up invitations in Beijing, the United States is ignoring a humanitarian nightmare because all of this that coincides with the 1971 Bangladeshi War for independence.", "33:00", "So let me explain what that is to you. So Pakistan was holding presidential elections. The dominant ethnic group in Pakistan are Punjabis, sometimes the Sindhis, like the Bhutto family. I think they're Sindhi. But anyway, generally speaking, particularly the army, the Punjabis dominate. Bengalis, live in Bangladesh. They won the election and the Punjabis are furious. So they send the army to start butchering people in East Pakistan to overturn the election. So there are refugees pouring into India. So this is the backdrop of what is going on there. And for anyone who wasn't in the know, the United States is saying nothing about this. The United States, there's this massive humanitarian crisis in the United States. Got nothing to say. The United States has Something to say about everything. But it had to do with,this is the moment that Nixon is trying to get himself invited to Beijing so that he can talk to Mao about cooperating. With the Soviets. No, with the Chinese. To overextend the Soviets in the Cold War, which is ultimately what we do. And it's very important to win the Cold War. And this is integral to this. But everyone else is looking and going, what on earth is going on? So you have Nixon's doing the mediation in the background. We got refugees flying all over the place. India comes to the United States and says, look, you need to tell the Chinese not to intervene in this thing. And not only did we not do that. Oh, the Indians also say, you need to bring this up at the UN, the human rights stuff, because India is literally getting millions of refugees trying to flee this, this mess. And the United States won't do any of it.", "35:10", "It gets even better. The United States has the gall to blame the Indians for the war. Dream on. So Indira Gandhi is just furious at this one. And so the United States had wished that India would cease being non-aligned and align with the West. Well, they cease being non-aligned, all right. They sign a military pact with Russia over this. And then they upgrade their relationship with Vietnam, which totally upsets the United States. And it gets even better. They shut down Indians. They won't give scholars any visas to come to India to study India. So you wonder why US Universities didn't have any Indian studies programs? It's all about this. So that explains what's going on with all of that. Total mess.", "Meanwhile. But for Pakistan, as all this is going on, the Shah of Iran falls in, I think it's like February 1979. And then the Russians invade Afghanistan in December 1979. And suddenly Pakistan is totally essential once again. And the Pakistanis are really getting sick of being kicked around. So when outgoing President Jimmy Carter offers them, I don't know, $400 million or something, this is Zia here going peanuts to the peanut farmer. And the incoming Reagan administration then ups it to $3.2 billion and that money gets funneled through the Inter-Services Intelligence directorate. That's like, I don't know – the CIA+++ of Pakistan. And when you put that kind of money into that kind of bureaucracy, you're going to make them incredibly powerful. And then they're the ones who decide how they're going to allocate money to insurgents in Afghanistan. And I get it. There weren't any great choices, but they're arming some really anti-Western folks in there, probably some guy named Osama, last name Bin Laden. But anyway I'm not sure of the details on that one, but it is going to have 9/11 follow-on effects. And also the Pakistani, the ISI is also taking some of that money and putting it into Kashmir which is going to have real problems for India later on. So there are real ramifications for all of this of needing Pakistan. But actually what is happening anyway. And then, throughout, there is the Pakistanis are getting closer and closer to building the bomb. So when the Russians go piling into Afghanistan, here's Zbigniew Brzezinski Carter national security advisor telling him our security policy cannot be dictated by our non proliferation policy. Really? I thought that was our security policy. And the problem with proliferation is it tends to be a one way street whereas Afghanistan has been anything but. And then here Deng Xiaoping was in town and he told Carter, we applaud your decision to basically toss all these proliferation human rights considerations for Pakistan and just arm them. No kidding. Because the Chinese are providing the nuclear spare parts. So the United States isn't the only country to have trouble navigating this cutthroat billiards. The Pakistanis have their share of boomerangs.", "37:53", "Just look how the wars work. So wars create incredible costs. So the 1965 war, the Pakistanis get exactly nothing. And the United States, Pakistan had been the largest aid recipient of the. I'm not sure if I got that right. But anyway they're a huge aid recipient from the United States. Well after this war were not so interested. So that's a lot of money down the tubes. And then in 1971 war, great guys, you lose Bangladesh which, by the way, has over half your population. So Pakistan is no longer the most populous Muslim country, Indonesia is. And if you look at the Kargil War in 1999, this is when Pakistan tries to again go below the line of control in Kashmir to try to take some more of Kashmir back. Pakistan has to cross right back and then it gets sanctioned for all of this. So none of these wars have actually worked out very well for Pakistan. And then if you think about it, India and Pakistan are natural trade partners.", "So if you take all these wars and just add up all the costs and then think of the opportunity costs if Pakistan had been able to take this money and spend it on road systems, on education, and then all the lost trade, it gives you a sense of the real cost of all of this.", "Okay, well those are Pakistan's problems. India has its own problems. Here you've got Indira Gandhi and Richard Nixon, they really didn't like each other. I mean look at her, she looks as if she's just been fed bad fish and he looks like he served it up. And they just, they cannot abide by each other. So in the 1940s when Kashmir is erupting, Indira Gandhi thinks – well no, it's Nehru – her dad thinks that the United States should be supporting India because it's secular and it's a democracy. And the United States is appalled during the Korean War when India remains non-aligned instead of supporting the United States because it's secular and democratic. And the Pakistanis are totally outraged because they're looking at this go, okay, these Indians are non-aligned. We're aligned. We're taking these risks for Peshawar and stuff with the U2s and you're helping these people who are about to ally with the Soviet Union, who are you kidding? So it's a total mess.", "So the Indians have their own self inflicted blows. Nehru and his very controversial, but devoted advisor Krishna Menon and his daughter Indira Gandhi were really good at making these totally insulting remarks to American VIPs. Okay, it hits the target without a doubt, but the ego that has just been hit is huge and like an elephant is not about to forget. And meanwhile Pakistan in contrast is just being this welcoming host.", "So the United States is going “ugh India and Pakistan”. And it makes really bad trend lines for India because in the 1962 war the United States supported India. In the 1965 war, it's neutral. In the 1971 war it supports Pakistan. That's not great. And then India's own very heavy handed treatment of solutions to the insurgency of Kashmir doesn't make that thing go away, it just gets worse. So they have their own problems.", "41:10", "China also has its problems with the interaction. It's complicated. So on the one hand, on the Sino-Indian war, absolutely, China gets the territory, but at what cost? You've got this permanent enemy forever. And as opposed to teaming up with them, if they teamed up, they actually would have had incredible leverage for what the global order is going to look like. But that's just not to be. And moreover, if you look at the 1971 war, after the United States won't help with China, India's going, okay, I think we need nuclear weapons because then we'll be able to protect ourselves against China. And after that war, when Pakistan's lost over half its population and has to deal with Indian population and territorial, just overwhelming superiority, the Pakistanis go, “I think we need nuclear weapons in order to solve this problem”. So there's proliferation all over the place.", "But as a result of the 1971 war, where Pakistan tries to overturn the elections, here you have an Indian defense analyst, Subrahmanyam, saying the Pakistani decision to overturn its elections by deploying the army to East Pakistan gave India an opportunity the like of which will never come again. And what they did is they armed insurgents in East Pakistan then sent the conventional army in and that was it for Pakistan: in East Pakistan over. The interaction for Russia works a little better. For Russia and India, it's really quite a good relationship. What Russia offers to India, not only military and economic aid, but also very useful vetoes on the UN Security Council. India does not want plebiscites in Kashmir that it might lose, so it gets the Soviet Union to veto those things. So there are no plebiscites. And then as India is trouncing Pakistan in the 1970 war, starting one war, and the United States wants them to halt India, no way. India wants to finish the job. So it gets the Soviet Union to veto that one. And India does indeed finish the job. And meanwhile, for Russia, India is really useful. It's a good counterbalance for China. So theirs is rather a beautiful relationship. They have very cordial relations.", "43:23", "Okay, so I think I've now covered the playing field, right? And I've covered the players and teams and their problems with interacting. That's very difficult. Now for some of the plays and the instruments of national power. And here's the menu of choices. You can start with the light items. Diplomacy, public support and denial of public support. You can move into more expensive things down the menu. One of the things you can do is help negotiate a really useful treaty, which the United States did. It brokered this Indus Waters Treaty of 1960. It's the only time that I know of, maybe you all know of something, where India and Pakistan have signed an agreement to the massive benefit of both of them.", "What does this agreement do? You can see it's a really dense river system. Both India and Pakistan need to irrigate. To do that efficiently, you need dams. And both of them were poor and didn't have the dams. They were going to cost a billion dollars. And the United States was willing to kick in half that money if they would both sign the treaty. And no terrorist event or anything derailed it, so they signed it. And this treaty has been, it's been operating some of these dams ever since to the enormous benefit of both countries. Does the United States get any enduring gratitude from either one for doing this? No. Zip. Okay, next one is the United States tried to exercise diplomacy and to convince the Pakistanis and Indians to settle their differences and it was a total flop. And because if you're going to try to befriend both India and Pakistan, you wind up becoming the enemy of one or the other.", "And the United States’ diplomacy was based on certain false assumptions, which are one, that India and Pakistan could be cajoled into settling their differences. And their idea is, anyone who's so stupid as to think that is crazy. And if you're going, well, what are the origins of these differences? Partition was brutal. So the British colonized the Indian subcontinent and then they left in 1947 and they left really rapidly so that there was no time to set up any institutional framework. And also, you're talking millions of people. And so Pakistan's going to be one thing and then India's going to be the other. And so Hindus are just fleeing and Sikhs are fleeing out of Pakistan and Muslims are fleeing out of India, going back and forth and millions are killed while this is going on. So this is the origin, at least the modern origin, of why Pakistanis and Indians are so bitter. In addition, the United States thought, well, surely the China threat is going to make the Indians come around and realize this non-aligned stuff's nonsense. Not quite. Yeah, When India aligns, it aligns with Russia, not the United States. So that doesn't remotely work out the way the United States wanted.", "46:26", "And then the United States thought, well, hey, we in the west were rich. We give Indians and Pakistanis all this aid. This will force them to be nice to us and be less nice to the communists. Wrong. India and Pakistan are really astute and they get lots of aid from everybody. So when the great powers do align, Russia, China, us, or at least two of those align, then you can actually get stuff done. So that's when you get the Tashkent Agreement for the 1965 war. This is: the United States and Russia both want India and Pakistan to cease and desist and stop blowing each other off the map. And also in the Kargil conflict, when Pakistan is yet again trying to resolve Kashmir by invading and then gets itself into trouble. And this guy Nawaz Sharif, who is the head of Pakistan, he all of a sudden ups and gets on a plane with his family. It looks like he's coming into exile and he's trying to fly into the United States. And the United States goes, whoa, Whoa, whoa, whoa, whoa, what do you think you're doing? And says, you're not coming in here until you admit that you crossed the line of control. And then you need to get right back. And so he agrees to sign the Washington agreement to go right back. But it's absolutely humiliating for the Pakistanis to go, oh, yeah, we went south of the line of control. And then, well, that didn't work out. So now we have to go right back. He had gone to China already and pleaded his case to the Chinese, and they told him to get right back because there was a lot of nuclear saber-rattling going on, and the Chinese were not interested in a nuclear war over this. So Pakistan had the choice of, okay, fight India by your lonesome or cross back. So they crossed back.", "And there were other cases in the inter-Cold-War period when the great powers cooperated and tamped things down, like terrorist incidents in New Delhi and in Mumbai that didn't go anywhere because the great powers told the Indians and the Pakistanis to just dial it back. All right? Another thing you can do is to publicly support someone. And this is what goes on with Goa, which is a Portuguese colony. The Indians wanted it back. The Portuguese said, no way, you cannot have it back. And the Indians took it back. And the United States supported Portugal. Why? It's a NATO ally and we have very important bases in Portugal. So we kept the bases, but we made the Indians really angry. And there are other areas of public support or not criticizing people publicly. For instance, when the. In the. In the 1971 Bangladesh war, when the United States refuses to support India by telling China, don't enter this place. But of course, it was December, so the road system would have been a little rough to even try that, but it made the Indians mad. Or if you think during the Cold War.", "So with Nehru, it's the 1956 Hungarian crisis where the Russians sent tanks into Hungary, the Indians don't say anything about that. There's a Berlin crisis in 1961 where the Russians are pretty rough and the Indians don't say anything about that. And then Indira Gandhi comes in, and when the Russians send tanks into Czechoslovakia in 1968, the Indians say nothing. And then when the Russians invade Afghanistan in '79, again, the Indians say nothing. So this is one way that you make your allies feel better about things. Another thing is if you're one of the five veto holders at the UN Security Council, you can do your public support that way. And I've already mentioned these Russian vetoes on Kashmiri plebiscites that the Indians truly didn't want to have happen, or short circuiting the Indian offensive in Bangladesh. So this is what Russia did for India, and it was a very valuable thing for them. You can also put money where your mouth is. Economic aid. And it's interesting, the United States provided far more economic aid than either the Russians or the Chinese, but still, both Pakistanis and Indians preferred China and Russia respectively. And some of this aid was really important. During the Bihar famine in 1967, the United States sent 20% of its wheat crop to India. It was worth $1.5 billion. That's not something to be sneezed at for, I don't know, was it 90 million people? It's a lot of people who might have starved to death.", "50:44", "And that didn't work out well at all because Johnson, at that point, President Johnson at that point was so mad at the Indians because, from his point of view, they were cuddling up to the North Vietnamese. And the Vietnam War wasn't going well for Johnson. So he was furious. So he provided the aid, but he did it always at the last minute, ship to mouth.", "And Indira Gandhi was furious. She said, “I don't ever want us ever to have to beg for food again”. And she never did. So the United States got no gratitude or enduring anything. Oh, and a whole other piece of it is that India is not subject to famines anymore. And part of it's from the Green Revolution. And who does that? It's the Ford and Rockefeller foundations who figure out the different strains of grains that you want to grow. And does the United States get any credit for that? No, zip. And here is Krishna Menon, who is Nehru's controversial advisor, saying, look, we want to encourage a little competition between the donors. And they did. And India just. Even Indira Gandhi, who hates Nixon, she's racking up the aid.", "And back in the Eisenhower administration, the United States had noticed. Secretary of State John Foster Dulles is saying, look, concerning India and Pakistan, it's difficult to help one without making the enemy of the other. And of course, the United States tried to help both and angered both of them. Amazing.", "So another instrument of national power is military aid. It is even more difficult to calibrate than the economic aid. So you can see with the pactomania event where Eisenhower is building these bilateral relations and treaty organizations to contain the Russians. Formerly there'd been no Cold War in South Asia, but once Eisenhower allies with Pakistan, all of a sudden the Russians are in there too. So that's a bit of a boomerang. Another one.", "So when the United States provides military aid to Pakistan, that just drives India to seeking an alliance with Russia, which isn't exactly what the United States wanted. And then when the United States helps India right after the 1962 war with China, that alienates the Pakistanis and then they try to buddy buddy with China, not remotely what the United States wanted to happen. And then when the United States provides aid to the ISI, the Inter Services Intelligence Directorate, which is the Pakistanis then are funding things to get the Russians out of Afghanistan, they're also diverting it into Kashmir. So in 1989 this insurgency heats up and it's remained heated ever since. And then you wind up with China providing nuclear help to the Pakistanis. So it's difficult with these things. You get a short term thing, but then the long term thing that winds up may not be what you want at all.", "53:41", "The other instrument of national power, if you got one, is the carrier battle group. You can send one of those around, which is what the United States did. Here's Enterprise. It was the United States’ first nuclear propelled aircraft carrier. And so during the ‘71 war, the United States sent this into the Bay of Bengal, the Russians sent some naval assets, had no effect on that war: Pakistan lost. Right. The Indians were furious. They just regarded this as an absolute threat. And how dare we do this anyway? And maybe it would have been better to have left Enterprise in their home port rather than doing this.", "And then of course there's sanctions and embargoes. The United States does this all the time. And if you look at the list of the times we're embargoing stuff. So at partition we're embargoing everybody. And then during the ‘65 war we're embargoing everybody. And then as various people are making nuclear progress and different things, there are these embargoes that come and go. And Pakistan's really mad because India does a test, an atomic test in 1974 and Pakistan doesn't do anything until much later. And it's looking why are you sanctioning us on this nuclear stuff? The Indians have actually done this. We haven't. And so if you look at this chart, you can see where the ups and downs of these sanctions go. And clearly they didn't stop proliferation because in 1998 you have these tit for tat nuclear atomic tests by both and the United States tries sanctioning, but then it's just too late. They've already tested the stuff. And so the United States basically gives up and then after 9/11, of course, we desperately need Pakistan again to deal with Afghanistan. So it's a complicated world out there.", "So another instrument of national power is you can trade off your territory if you really want to. Most people don't, but the Pakistanis clearly did that in 1963. And we can all speculate on what they got. I mean, my hunch would be something to do with nuclear things, but hey, it's not as if this information is out there in the public. It isn't.", "Oh, another thing you can do is go fund the insurgency. So you can. And this is done by the United States, Russia, China, India, Pakistan. Think if you've got a country that you don't like that has some minority people that want to secede and so they're fighting there, well, you can go fund that insurgency and then the one you don't like is pinned, because they're going to be paying attention to that insurgency. And while they're pinned there, they can't probably do things elsewhere that you might care about. So this is the logic of what's going on. So the United States belatedly decided, ah, let's help the Tibetans. And so the CIA is helping them between ‘57 and ‘61. But look at the dates we did the road system. The road system in Tibet's completed by 1957. It's too late. So all you do is get these people killed because the Chinese have got the road system all set up in there.", "56:44", "So that doesn't exactly work. But after the ‘62 war, all the way until 1979, when Deng Xiaoping calls it off, the Chinese are funding insurgencies of, let's see, the Mizos, the Manapuri and the Naga people all don't like different aspects of Indian rule. And the Chinese are more than happy to stir that pot. And the really big pot to stir are the Naxalites. It's huge. And while as long as the Pakistanis have got East Pakistan, they can stir some of that up. And by the way, these Naxalites are still there in India. They have not gone away. It's a serious part of India where they are. And then, of course, Pakistan's location right next to Kashmir means they can stir that forever.", "And the tragedy of these, what they become are frozen conflicts. Is the outside power. if they are playing their cards right, their amoral cards right, they're not bearing any of the costs. They're pinning someone they don't particularly like. All the costs are borne by the local population who are suffering horrendous deaths, lack of economic growth. You're just having warfare where you live. What a total disaster. So that's how it works.", "… two can play at this game. So the Indians, according to the Pakistanis, have funded the Baloch people. They straddle Iran and Pakistan, don't particularly like being told what to do. So apparently India's supposed to have put its finger on that scale. The other thing the Pakistanis accuse Indians of doing is encouraging a Pashtun insurgency up north. And that would be a way of diverting Pakistani attentions from Kashmir. If they're totally busy with Pashtuns, they can do less in Kashmir. But the result of these things is people are becoming more and more bitter. The hatreds just spike, the economic growth isn't happening, poverty everywhere. And it makes these problems more intractable, not less.", "So when I think about these frozen conflicts, there are a number of ones that you know about besides Kashmir, there's Korea, there's Palestine. So if you look at Kashmir, if I've got it right, so you fund that thing. And then what's great from China's point of view, if Pakistan is doing that India is frozen, that it can't do other things because it's constantly paying attention to what's going on in Kashmir as opposed to going “hmmm China, I don't know about this”. Or in the Korean War back in the day, if things are all stirred up in Korea, China has to really pay attention to that. And it delays the rise of China. And in those days that benefited Russia. But these things can change over time as to who the beneficiary is. And then you can play this game and think about how it works in Palestine. And I'm no expert in that part of the world, but I think this frozen conflict veto player works, that there are these veto players who are vetoing peace very easily. All you have to do is send a certain number of package bombs and peace is not going to happen.", "0:59:49", "So, cutthroat billiards. What can you take away from all this? Well, common enemies cannot be conjured. So check out the alignments of who these common enemies are before you leave the parking lot and figure it out for all possible players who might want to crash the party. Like, what's their primary objective? Who's their primary adversary? What primary theater are they truly interested in? And then you've got your hunches on how you think this is. And then you should reassess early and often to see if your assumptions are correct and don't worry about changing your mind. Some people get really hung up about being wrong. Don't worry. Reassessing is a sign of strength. It's like, I got more information, I've changed my mind. Good thing, don't double down on bad information. And then if you're looking into areas of the world that are ethnically diverse where people have been at odds for a long time, expect veto players and real difficulty in settling that matter out. And part of good strategy is recognizing some problems it's not feasible to solve. And then we all have scarce resources. You can't do everything. Focus on those things where you think you can solve. But if the great powers align, things can happen. And the story is even better than that. That, sure there are a few big powers, but the small and medium powers, if you add them all up, are by far the aggregate. Their aggregate wealth exceeds any one great power. So if the smaller powers agree on what they're up to, then the big powers have to pay attention. And that's a positive thing. So that is what I had to say to you this evening and thank you for listening." ]
[ "https://en.wikipedia.org/wiki/Central_Treaty_Organization", "https://en.wikipedia.org/wiki/Project_596", "https://en.wikipedia.org/wiki/Sino-Soviet_border_conflict", "https://en.wikipedia.org/wiki/Treaty_of_San_Francisco", "https://en.wikipedia.org/wiki/Tashkent_Declaration", "https://en.wikipedia.org/wiki/Halford_Mackinder", "https://india.mid.ru/en/countries/bilateral-relations/military_and_military_technical_cooperation/", "https://en.wikipedia.org/wiki/Kargil_War", "https://en.wikipedia.org/wiki/Indus_Waters_Treaty", "https://en.wikipedia.org/wiki/Tashkent_Declaration" ]
https://www.dwarkesh.com/p/sarah-paine-japan
Sarah Paine Episode 2: Why Japan Lost (Lecture & Interview)
[ "Sarah Paine 0:00:00", "Before I get going, I've got to make a disclaimer. You can see it written up there: What I'm saying are my ideas. They don't necessarily represent the US government, the US Navy department, the US Department of Defense, let alone where I work, the Naval War College. You got it? This is just me here. Nobody else.", "Alright: Americans have a penchant for what I call half-court tennis , which is: they like to analyze international affairs and wars by focusing on Team America – what Americans did or didn't do – and then that explains causation in the world.", "And Americans, on the other hand, their beloved sport, I believe, is football. And those people who love football, many Americans, my understanding of it – I'm just someone who reads books, I don't follow football – that's disqualifying, I suppose –", "0:00:53", "But anyhow, Americans who follow football, they study both sides, right? They look at their home team, but then they also look at not just one opposing team but many down to the individual player and they would no more follow the football game by looking at one half of the football field. And yet Americans when we do foreign policy, that's often what we do and it gets us into all kinds of trouble. For instance in the Iraq war, Americans thought that the Republican guard was going to be really tough. And it turns out it wasn't so tough. But then there was this post-conventional phase, insurgency that went on and on and on that surprised Americans.", "Well, the problem isn't actually a new one. In World War II, Americans were terribly surprised by the things that Japanese did. Starting with Pearl Harbor, right? That was a surprise. But also it was the entire way the Japanese fought the war. The way they fought to the last man. The suicides. The brutality. Not only to the POWs, civilians, but into their own wounded.", "0:02:03", "And the question is, is there any way to anticipate in advance how other people are gonna behave? Is there any way to get a sense of the other side of the tennis court net? Now, here are the two gurus of warfare. One is Sun Tzu for Asia. And the other one, Clausewitz, is the big guru of warfare in the West. And both of them would say, hey, you want to understand the other side. You've got to make a net assessment . What's that? You would look at political, military, geographic, economic factors, the strengths and weaknesses of all sides to get a sense of things.", "And today I'm going to make a case for culture. You need to look at that as well. And it's often said that mirror imaging is not what you're supposed to do. What's mirror imaging? It's… we get into a situation and then I decide what I think you're going to do based on what I would do. I project me and mirror [it] on you. And that doesn't work so well. Okay. If I'm not supposed to generalize on the basis of my experience, what am I supposed to do instead?", "0:03:12", "I'm going to get at this problem today. How do you analyze the other side of the tennis-court net by looking at Japanese behavior in the thirties and forties. But the method of analysis I'm using, you could apply to anyone you want, you wanna think about Russians today or whatever, you can apply it that way.", "So culture, it's important, but it's as amorphous as it is important. For instance, if I'm going to try to figure out the defining characteristics of another culture, it would be difficult to figure out what the list is of all the different things I would need to look at. And even if I could come up with that list, still, how would I figure out how that would work in something like warfare? Hard to know. But the difficulty of the problem doesn't make it go away.", "0:04:01", "And so we're going to look at it today, and we're going to look at Japanese theorists and belief systems, and that if you believe these things, how this influences your practice. Alright, before I get going here, or actually, I am going to get going here. Tojo Hideki said on December 1st, 1941, that our country stands on the moment, the threshold of glory or oblivion. He got that right, and he's in an imperial conference where he is confirming with Hirohito that Pearl Harbor is going to be a go, but he felt that Japan really needed to do something rather than being ground down, being passive.", "And here is Admiral Yamamoto Isoroku, who was the man who came up with the operational plan for Pearl Harbor. He thought it had really long odds of being successful. General Tojo gave it a 50-50 chance. Admiral Yamamoto wasn't even sure that it was that good. But he felt it was the best possible plan for Japan to get out of its predicament.", "0:05:05", "Now, from a Western point of view, this makes no sense. You're talking about getting the United States potentially into a war with Japan that's already overextended in China. Who does this? Either you need to ratchet back the policy objective and/or you need to downgrade your strategy to something a little [less] costly or risky. And I suppose we can go, \"Oh, they're stupid.\" Okay, I guess if I call you stupid that makes me so smart because I can denigrate you. Explains nothing. So, rather than do that, they’re very intelligent men. And why are they doing these things? Why do they consider their actions rational? And rational in what context?", "So this is what I'm going to be up to. And I can start with a little story to illustrate my point. In the summer of 1943, this is after the battles of the Solomons, New Guinea, Guadalcanal, they're all over with. The Imperial Japanese Army had a war college, and an instructor comes into class one day, and he says, from now on the curriculum's changed. The main emphasis is going to be countering U.S. tactics instead of, what they had been teaching, was Soviet tactics. And it will become the A course. If anyone can teach this, go ahead, because I don't know a damn thing about it.", "0:06:27", "Talk about being unprepared for seminar. And then think about it. Where are the Japanese actually fighting most of the time? What is the country that most matters to them at the end of the day? It would be China. And that's not what their war colleges are studying. Something's up here.", "Now, they're clearly making a really bad net assessment about the United States. Okay, this country also is known for lousy net assessments. I don't believe ours about Vietnam was particularly good either. And that's one part of the problem.", "0:06:58", "So here's my game plan for this evening. I'm first going to talk about traditional Japanese theorists and then it’s going to be talking about Japanese practice - how if you believe, if you have this belief system, how explanatory it is for practice. This is my game plan right now.", "So the Japanese don't have just the one book like Sun Tzu's \"The Art of War,\" or Clausewitz's \"On War.\" What they have is Bushido, Code of the Samurai. It's a whole literature, and it was written in the Tokugawa period, which, quite ironically, was a period known for peace, not warfare. Never mind. And what's interesting about this literature from a western perspective, it's really not about military strategy. It's about deportment. It's how a samurai should conduct himself. And this reflects Japanese values and the things that they emphasize..", "0:07:55", "And so I'm going to go through it with you. And starting, here's the game plan on the theorists: first I'm going to talk about the philosophical origins of bushido, then the values that underpin it, and then the operational preferences that grow out of it.", "That's the game plan for first half. And I'm going to use as my cultural bridge this man, Nitobe Inazō, who wrote a book much later in 1900, \"Bushido: Soul of Japan.\" Why am I doing this? Because he provides concise definition of Bushido, and you can see he's an important figure in Japan. Not everybody gets their mug on the 5,000 yen note, so he's an important figure in Japan.", "0:08:36", "He had spent 18 years abroad, and he had received higher education in Japan and a variety of Western institutions. He married, believe it or not, I don't make up these things, a Philadelphia Quaker, and he converted to Christianity. And he spent his life trying to serve as a cultural bridge, and that's how I'm going to use him today.", "And what he said is, unlike in the West, where notions of morality come from religion, in Japan they come from Bushido. And what is it? It's a code of, chivalrous code of honor for the warrior class, these precepts of knighthood. And from this, there are three pillars of Bushido according to Nitobe.", "0:09:21", "They're Buddhism, Shinto, and Confucianism. From Buddhism, it's where you see Japanese fatalism, the origin of it. And here you have Nitobe saying, it's this “calm trust in fate, a quiet submission to the inevitable”, a “friendliness with death”. And it strikes Westerners reading this Bushido literature, is a preoccupation with death. That, for instance, Clausewitz will talk about violence in warfare, but he's not interested in what constitutes an honorable death, let alone choreographing a soldier's final moments. Different culture.", "And from Buddhism, there are four noble truths of Buddhism. One is that existence is suffering, pessimistic view of this life. Second, it's caused by craving and attachment, so don't cling to this life or the things in it. It's all ephemeral. It's like a cherry blossom, blooms for a day and then it's gone. And, but there's a good ending to it all, which is nirvana. And how do you get there? The fourth noble truth is through forms of right conduct. So, the emphasis isn't on what you achieve in your life. It's how you lead it. It's a focus on deportment. It's different from the West.", "0:10:45", "Second pillar, Shinto, is this extreme patronism, reverence for the emperor and the third pillar is Confucianism, these imported ideas from China. And what Confucianism is at heart, of how it’s organizing a society and regulating it through interlocking social obligations, hierarchical, and through ritual and etiquette.", "So in the West, there's much talk about equality. Right? And in the east, it's duty. It's what you owe other people. In the east, there is no such thing as social- in China, in Japan- of equality. Even twins have a birth order. And it's not about freedom either. It's about what you owe others.", "0:11:35", "So, if you think that these value systems seem really different, yeah, no kidding. It has nothing to do with the Greco-Roman Judeo-Christian West. Completely different value system. And so, Alice, welcome to Wonderland. Buckle up, we're off for a ride.", "And I'm going to start, here's my first piece of the Tokugawa literature, Yamamoto Tsunetomo's \"The Hagakure\" that he wrote in the early 18th century. It translates variously as “Hidden Leaves”, “Hidden by the Leaves”. And in it, he is describing, I'm going to read you some short passages from it all. What was he? He was a retainer for a daimyo, a feudal lord in Japan. He hadn't actually done any fighting, even though he's writing all about it. So if you don't do, what do you do? You publish.", "0:12:32", "And I will tell you what the man had to say. One of the first things is this preoccupation with death. And here is Yamamoto: \"The way of the samurai is imagining the most sightly way of dying. Merit lies more in dying for one's master than in striking down the enemy. The way of the samurai is found in death. It is not necessary to gain one's aim, but if you live on without achieving it, it's cowardice. However, if you don't gain your aim and die, that's okay.\"", "This is really different from Clausewitz, where it's all about achieving the policy objective. It's not about how the soldier's leading his life. And here you can see the consequences of this, right? If you're focusing on no fears of death, and if you can't succeed, living on is a disaster. So, think of the banzai charges. When Japanese remnants would go headlong into oncoming machine gun fire, knowing full well what was going to happen. This is not the way other armies have behaved. Different value system.", "0:13:45", "In addition to death, the Bushido literature's emphasis is on honor. Back to Yamamoto. \"The way of avoiding shame is different. It's simply death. Even if it seems certain that you will lose, retaliate.\" Think of General Tojo and Admiral Yamamoto Isoroku. And here's the, if you suffer a catastrophic defeat, here's the solution. In the event of a mortifying failure, you're gonna wind up committing suicide. 'Cause the alternative, if you live on in shame, you're bringing everyone you're associated with shame.", "So how does the suicide work? It's seppuku, hara-kiri. The Samurai who's doing it kneels down with short sword, he plunges it into his belly tries to do a full revolution. And then his second, usually a close associate, takes the long sword, if he does it right, one sweep, head in lap, upright corpse, blood everywhere.", "0:14:47", "Diplomats in the 19th century, Western diplomats, were told to witness this. If a samurai had murdered a Westerner, okay, the diplomats had to come in and watch the proceedings. They just about lost their lunch. So the Japanese think that they are showing, and what the Westerners think they're seeing, aren't aligning.", "And I'm going to use Nitobe to be the cultural bridge. He said, \"in our minds, this mode of death is associated with the instances of nobles, deeds and most touching pathos. So this form of death assumes a sublimity and becomes a symbol of new life.\" It's a way to escape from disgrace. And in Japanese literature, there is the tragic hero who is pursuing noble but unattainable aims, and rather than making disagreeable compromises, goes down in flames.", "0:15:42", "This is what seppuku is all about and Nitobe is saying look, death involving a question of honor was accepted in bushido as a key solution to many complex problems. And you can think in World War II, yeah, it was. Complex problems like battle plans not working out and a war that was truly not working out. And you can see the suicide going on both individual and group. And I remember going in the caves in Okinawa and understanding how that works because you could see a lot of the damage. If your commanding officer decides to commit suicide and tosses a grenade in the right direction, the entire room goes with him.", "All right, so in addition to death and suicide and honor, we've got loyalty. It's another key value. I'm back to Yamamoto: \"Being a good retainer is nothing other than being a supporter of one's lord. A man is a good retainer to the extent that he earnestly places importance in his master. Having only wisdom and talent is the lowest tier of usefulness.\" So much for Silicon Valley. \"For a warrior, there is nothing other than thinking of his master.\"", "0:16:52", "And so back in the day it's thinking of your feudal Lord and more recent times it's prioritizing your company over family. In China, it's the reverse - priority is family over company. It’s different cultures, different priorities.", "Alright, there are strategic implications if this is your value system. This is what arises from it. First of all, you're looking at damage limitation, damage control, not in terms of the physical cost of losing lives, having property blown up, but in terms of honor. Also there's a tendency to equate operational with strategic success. Operational success is \"I win this battle here and now.\" Strategic success is, “okay, we're in a war for some reason”.", "0:17:42", "What is the reason you're in the war? Japan's reasons for being in China had to do with containing communist expansion and also stabilizing the place so that they could make money out of business. So that's your strategic objective. It's not your operational one, but the Japanese Samurai are equating the two saying if I take this hill somehow it's automatically going to deliver the strategic objective. And in fact, they won most of the battles in China, but they lost that war.", "Also, there's this focus on what constitutes an honorable death. The western focus of literature is all about preparing the field of battle in advance for success, whereas this literature is all focusing on what to do after disaster.", "0:18:28", "Here are some more implications. Once the Japanese are failing in battle, operational failure, they are on death ground . What does that mean? It means that… death ground means the only way you survive is if you fight harder. This is what's going on in Ukraine right now, is that when you decide you're going to annihilate an entire culture, you put people on death ground and then they have very few choices on what they do next.", "It means for the Japanese to feel that they're on death ground when they're failing means they're not going to give up. They're going to fight brutally against overwhelming odds. So in the West, when we like to mirror image, we want to think of the rational actor with some kind of mathematically based cost benefit of when you should give up, when the costs are so high above whatever your value of the object is you ought to call it quits. Well that kind of calculation does not translate well across the divides between civilizations.", "0:19:32", "I've got a nice picture here of Lieutenant Onoda who had been hanging out in Philippine jungles for what, 30 years after the end of the war? Carrying on the war in isolation. I don't believe this is how most other armies work or soldiers in them. Different culture, different things you do with your life.", "And here is Sir William Slim, Field Marshal, British 14th Army, that he led in Burma, commenting on his experiences: \"If 500 Japanese were ordered to hold a position, we had to kill 495, the last five committed suicide before we could take the place.\" And “it was this combination of obedience and ferocity that made the Japanese so formidable”.", "0:20:12", "Okay, this brings me to another value that's emphasized in Bushido: willpower. Back to Yamamoto: \"There is nothing that cannot be done. The way of the Samurai is in desperateness. Simply become insane and desperate, and it'll somehow work out for you. One can accomplish any feat.\" Think of Pearl Harbor. And this emphasis on willpower and just trying harder, it denigrates strategy.", "And here you see a picture of the supreme example of honor and loyalty and willpower - the kamikaze pilots. But if this is what you're doing, you're denigrating strategy. And here Yamamoto's talking about tactics, but it has operational and strategic implications. He says, \"Learning such things as military tactics is useless. The way of the samurai is one of immediacy and it is best to dash in headlong. If one were informed of military tactics, he would have many doubts.\"", "0:21:14", "So the idea is if you think about these things in peacetime, you'll start hesitating in wartime. It won't work out for you if you do this. \"During times of peace, when listening to the stories of battle, we should never say 'In facing such a situation, what would a person do?'\" Well, so much for my job at the Naval War College. So much for the case studies. \"No matter what the circumstances might be, once you be of the mind to win, once you'd be holding the first spear to strike.\"", "So, here's the implications, if you believe this. What you're doing, it's a very unanalytical way to approach wars. It's all about, whatever it is you want, you just steel the will and go for it, and somehow you'll get it if you want. There's a lack of grand strategy . What's grand strategy? It's integrating all the instruments of national power, not just the army, or the army and the navy, which is what the Japanese are trying to do. But all instruments of national power in pursuit of - if the bigger aim is to stabilize China and keep the communists out, there ought to be some diplomacy and some other things going on. But that's not what's happening in the samurai literature. It's a focus on the military instrument exclusively.", "0:22:31", "So, I'll give you an example of how this works out. Before the Imperial Japanese Army and the Imperial Japanese Navy invaded French Indochina, neither one of them did a little study saying, \"Hey, if we do that, let's check the other side of the tennis court net and what other people, how they might react.\" They just steel the will and march right on in. Okay, that triggered the U.S. 100% oil embargo. That's a problem. Strategy operational success, strategic mess.", "Focusing on just the operational level is the basis for this ill-founded optimism with which the Japanese just took territory after territory without saying, \"Hey, what about the cost of actually occupying these places?\" Oh, we're going for these places for resources. So maybe we ought to check it out with the finance ministry and others about how we’re ever going to get these resources back home? None of that's going on. It's a disaster for them.", "0:23:34", "Okay. I'm going to talk about a couple of secondary theorists. One of them is Taira Shigesuke, who is a contemporary of Yamamoto because he provides a really concise definition of the operative values of samurai culture. Only three things are considered essential: loyalty, duty, and valor. So steadfastly loyal as to disregard his own life.", "What he's actually talking about is group loyalty. In the West, the basic unit composing society is the individual. Well, in the East, it's the group. And group interests take primacy over individual interests. And for the Japanese, society is divided by in-groups and out-groups. The most basic in-group, biggest overarching one, is the Japanese people vis-a-vis everybody else. But within Japan everybody comes from a different province, a different locality. They go to different educational institutes. They graduate from different kindergarten classes, I kid you not, and college classes. If they work for different companies or they work for the military, they're in different branches.", "0:24:49", "And you also have family loyalties and you owe each of these nested and overlapping groups different obligations, and sometimes these obligations conflict. And if they're really - if the conflicts are really awful, that's another reason for committing suicide.", "And then if you look at the Japanese language - the moment a person opens their mouth to speak to another Japanese, you can immediately listen to the grammatical forms that are being used, the specific word choices to know what's the degree of hierarchy, like where do they sit in this unequal hierarchy and whether it's in-group, out-group.", "0:25:27", "So everybody feels, or most people feel, some level of group loyalty. This is human. But in Japan, the levels of membership are much more finely calibrated and they're re-emphasized by these social, cultural and linguistic reasons. So this group membership and stove-piping ultimately is going to be a much stronger feature of Japanese culture than some other places.", "All right. Last theorist is Miyamoto Musashi who unlike the other two actually did a little fighting. He was born a little earlier and he was a master samurai who taught people martial arts. And from him you get a sense of some of the operational preferences deriving from these values and I'm going to go through all of these in turn.", "0:26:15", "First is risk intolerance because remember at the beginning I started with the two flag officers saying “well, we're going to do this war in the Pacific when it's unlikely we're going to succeed but we're going to do it anyway”. And here is Miyamoto: \"Furthermore, to fight even five or ten people single-handedly in duels, that's what my military science is all about. So what's the difference between the logic of one person beating up ten people and a thousand people beating ten thousand?” Logistics, my friend, but never mind.", "And then another thing that he emphasizes, and in addition, don't expect long odds to deter the Japanese back in the day. Surprise is another one - a situation that has stalemated and is going nowhere, which is what the China theater was for the Japanese. And how do you get out of it? And the answer that Miyamoto has is: not come up with a new policy objective, but come up with a tactic that'll somehow put your enemy off balance and then get what you want that way.", "0:27:22", "And the way the Japanese did this was often by opening a new theater in a war, by surprising people by the new places that you were going to start engaging in military operations. And here's how it worked: China had been a failed state since 1911. And it had had an escalating series of warlords fighting each other in this multilateral civil war and the Japanese were appalled particularly after the United States passed the Hawley-Smoot Tariff of 1930 that cut them off from international trade.", "So then they're thinking, now what? Well we're going to need an empire big enough to survive since no one's going to trade with us. And so they invade all of Manchuria in 1931 and they have it pretty much stabilized by 1933. So okay, that was surprise number one, but the rest of China's a mess and the Japanese - it's coalescing into a bilateral communist-nationalist, nationalists under Chiang Kai-shek, communists under Mao Zedong fight with increasing dosages of Soviet aid and the Japanese are appalled with all this.", "0:28:31", "And so it's time to surprise everybody again in 1937. And that's when they invade all the way down the Chinese coast and up the Yangtze river. And it works: they take a lot of territory really fast, but then they get to the end of the railway system. Oh, and by the way, China's not pacified. It's just churning and so now Japan is even more overextended. As a result of doing that, Russian aid goes up and then you're going to get US aid in there.", "So the problem's actually getting worse. Time for another surprise - really big one on that infamous day in December '41. It wasn't just Pearl Harbor. That's Team America focusing only on Team America. The Japanese attacked all across the Pacific that day.", "0:29:16", "Uhh, okay. Now what? Here, China had never been able to threaten the Japanese home islands. Well the United States - here, the United States was totally isolationist. Most Americans couldn't find Japan on the map. Well, after Pearl Harbor, they sure could. And suddenly, the United States isn't isolationist anymore, and they're coming to get the Japanese. So, you can see the samurai values in operation here; “Just try harder”. “More dosages of willpower”. “Eventually you'll win or you'll die trying”. Okay.", "Another operational preference that you can see, which is these, [inaudible] surprise or preemptive attacks. And this is how Japan began all of its wars. The First Sino-Japanese War and the Second Sino-Japanese War, Russo-Japanese War, and the Pacific War. This is how all of them begin.", "0:30:07", "And finally, Miyamoto offers some advice on how you break the enemy willpower. And in this case, you've already won conventionally. But they're waging an insurgency against you. I'm modernizing the terminology. And the idea is you want a psychological victory. You want them just to quit and somehow you're gonna break their will to resist. And I suspect this is what the Japanese thought they were doing in the Rape of Nanjing and other atrocities. That they were going to do these horrifying things and that would break the will of the other side.", "Okay, be careful whom you put on death ground. The Japanese were repeating a mistake done by the Nazis, which is if you're dealing with even a failing state, which Russia was - Stalin had shot so many of his officers in the thirties and then he inflicted a famine on Ukraine - but when the Nazis came in and they were going to wipe out not only the Russian government, but also the Russian people, you will superglue people, government, and military, and you will transform a failing state into a lethal adversary. And this is what Nazi brutality does to Russians, what Japanese brutality does to Chinese, and what Russian brutality today is doing to Ukraine. Don't do it. Bad strategy.", "0:31:34", "All right, there are strategic implications from these values. One is this emphasis on the offensive preemption. It's emphasis on military action to solve all your problems. And you have a fixed policy objective, whatever it is. And if you're in a given battle, you have to win that battle. It's not, \"Oh, I have an overarching objective. It's too costly here. I'm going to call off this battle and I'm going to try again somewhere else.\" Uh-uh.", "This is your field, the moment your plan has failed, you're a failure. So they're not thinking of planning in terms of branches of sequels and there'll be unexpected events that take place you'll adapt to - none of that. You're a failure if any of that stuff happens to you. So there's a real insensitivity to risk and there's no grand strategy.", "0:32:26", "But if you believe these things you will be lethal in warfare. You're not going to give up easily at all. And so you look at the Japanese at the end of the war and go why don't they quit a lot earlier? Well, it's because in a way, they're already dead men. They suffered social death. And so they're going to keep on until the very, very end of all of this.", "And it's a great sin of omission, this absence of grand strategy. The Japanese aren't the only ones to have done this. The belligerents on all sides in World War I were thinking all in terms of using the military instrument, got themselves into trouble. So if you look at what the Japanese are doing, they had some vague ambitions and wanted to take advantage of opportunities. But there's no definition of what \"win\" in this war is. How much territory should Japan take? And then call it a day. Say, done, we've been successful here. Rather, their territorial acquisitions are really a function of what they were able to take and what, in anger, they did take, but also a function of strategic failure. No matter what they did, it never pacified the China theater. Problem for them.", "0:33:42", "Okay. Alice, that was Wonderland. Now we're going to get to how it works, how other people live - of if you believe these things, how does it help explain what actually happened in World War II? And I'm going to start with two sins of omission, the Japanese neglect of paying more careful consideration to logistics. And then another sin of omission is a neglect of protecting their sea lines of communication. And then I'm going to - the last two are about these in-group, out-group divisions and the problems it caused for within each military service, intra-service rivalries, and then between the two services, the Navy and the Army in the war that caused them such difficulties.", "Okay. The Japanese, if you start at the beginning of the war, Japan never produced more than 1/13th U.S. steel and coal production. It never did more than 10 percent of what U.S. munitions productions were. I believe if you do the math and take all the battleships and divide people into… everything, that each U.S. soldier had four tons of equipment per, whereas each Japanese soldier had about two pounds of equipment. Japanese main weapons in this war were the grenade and the bayonet. Their artillery and machine guns were totally, were very obsolete, what they had going in the Pacific War.", "0:35:07", "And then you flip it around and look at the United States. The United States had about 18 men in- or men and women, but mostly men- in supply services supporting each rifleman at the front. Other militaries in this period had about an 8 to 1 ratio, Japan had about a 1 to 1. So Japan's already suffering food shortages before Pearl Harbor. And then when you get to the winter of 1942-43, the Japanese are having critical shortages of oil, so they no longer can deploy the fleet at will.", "That means forget about convoying anything because you just haven't got the oil to do it. And yet when you get to '45, when they're predicting they're going to have absolutely zero aviation fuel and other fuel by the end of '45, you have the government saying still we're going to fight on for this honorable whatever it's going to be. These bushido ideas that you just persevere - loyalty, honor, duty. Keep going.", "0:36:08", "I'm going to be quoting this gentleman's diary, Admiral Ugaki Matome. He was the chief of the staff of the combined fleet until his plane and Admiral Yamamoto Isoroku's plane were shot down. US code breaking was quite good. We figured out where they were and we killed Admiral Yamamoto, but this man survived.", "And by the time he wrote this entry, he was the head of an air fleet in the home islands sending kamikaze flights out, because Japan simply lacked the ability to do too much else this late in the day. And here is his last diary entry written on August 15th, 1945. This is after two atomic bombs had been dropped and after the Russians had deployed into Manchuria.", "0:37:01", "And he said, okay, “there are various causes for today's tragedy. And I feel that my own responsibility is not light, but more fundamentally it was due to the great differences in national resources between the two countries.\" It's too late to come to that conclusion. U.S. production statistics had been on the books forever. But when Japanese read these numbers, they thought they were ludicrously high and discounted them as propaganda. And those who knew better, who'd done tours of duty in Britain or the United States, they weren’t promoted because they were defeatists.", "I believe that Admiral Ugaki kept and maintained his diary is because he was an honorable samurai. He had believed in Bushido, the ability of material, of willpower, to more than compensate for inferior resources, but the war's outcome had proved him incorrect. And so, as an honorable samurai, he paid with his life, and he kept his diary - nothing to be ashamed of, he’d done what he was going to do.", "0:38:10", "Here I have the last Prime Minister of Imperial Japan, Prince Higashikuni Naruhiko, talking about his take on the war. He said, \"I think the basic cause of defeat was the loss of transport shipping.\" Okay, by the end of the war Japan was down to one ninth of its transport shipping. It meant the empire was paralyzed. What's the point of taking all these territories if you can't get the resources back?", "The navy had always focused on the mission by Alfred Thayer Mahan – who's from where I work, back in the day. It was all about fleet on fleet engagements and things. But it turns out that the Japanese navy hadn't focused on convoy duty. Mahan had called that a promising secondary operation. Actually, it turned out to be primary in the Pacific that U.S. submarine services paralyzed their sea lines of communication. Go submarines!", "0:39:14", "But so here's Admiral Ugaki Matome who is talking- he eventually comes around to recommending a more defensive strategy of not having this fleet on fleet because they don't have - they've lost a lot of the fleet and then they don't have the fuel to run it. But by the time he's recommending a more defensive strategy, they don't have the fuel or the assets to do that either.", "Earlier in the war, here's his take before all that bad stuff had happened: \"It's too bad for the officers and men of the submarine service that they have not yet sunk any important men of war, only merchant men.\" Well, his disdain for the target would cost him. And he noted, later on, when he's trying to - it's the anniversary of Pearl Harbor, when he's trying to account for why the Battle of Guadalcanal is going so badly for Japan, he said, \"The aim and transport to the front has not even been half fulfilled each time. It led those on the verge of death”- i.e. the army- “to be extremely skeptical about the Navy and thinking that the Navy is just sacrificing the army.\"", "0:40:19", "Well, no kidding that's what the army thought. Because for an expeditionary force, you absolutely need the Navy to deliver you there, to maintain your supplies there and they're thinking, the army's thinking, you navy are being irresponsible, not doing any of these things.", "So there are tremendous inter-service rivalries between the army and navy in Japan. And it goes back to the pre-war budget wars where Japan's a resource poor country, and both services have what they consider absolutely essential things to be funded. Japan didn't have the money to fund both. And then when you get in war and you start expending these things, you need even more money.", "0:40:58", "And so the disagreements were brutal. But before I get there, the in-group, out-group differences that stovepipe things and cause problems aren't simply between Army and Navy. They're within each service. So I'm going to start there, and I'm going to be, to be fair, I'm going to provide one example for each service.", "I'll start with the Army. It was the Kwantung Army, or Japan's Army in Manchuria, that decided to invade all of Manchuria back in '31. It was not the home office back in Tokyo, but it's this branch that turns out kicks off a 15-year war. So these folks think that they know what's best for Japan and how best to defend the empire and they're just off and running.", "0:41:44", "Meanwhile, there are a series of coup attempts. Some of them were Navy as part of it. A lot more of them were the army that's dealing with it, going back and forth. And at the very end when Emperor Hirohito is capitulating, there was one last coup attempt which, amen, it failed, because the war might not have turned out quite the way it did if it had succeeded.", "So the point is, if you've got coups running on, that is not called unified command. It's a mess. And the Navy wasn't any better. I have a different sort of example here. During the war, the U.S. Air Service, people who were flying planes, they would alternate combat and training missions so that you would bring back someone who had survived and learned something from combat to tell new people the things to avoid, how not to get yourself killed and some other things.", "0:42:40", "Well in Japan, in-groups out-groups - you sign up together, you train together, you fight together and you die together. It doesn't mean the Japanese couldn't have grafted people between groups. It's just culturally it's not the natural thing that comes to mind. Moreover, and this apparently applies to the present, that in the U.S. military, they have what are called hot washes after different operations, where you come back and you're very self-critical about all the things that went wrong, to figure out how to do it better the next time.", "Well, there are cultural reasons why you would not want to do that in Japan, just, it's different. So, if these in-group, out-group things are causing problems within services, and it gets toxic between the services and I've got four examples and I'm going to start with organizational issues.", "0:43:31", "So it's only in 1944 that the Army and Navy finally get it together to have regular liaison meetings in Tokyo. Just in time to, you know, figure out how the capitulation is going to work. And then, the army wanted to unify the two high commands. The navy wanted nothing to do with that one because they knew they'd just become the box lunch delivery service for the army. Didn't want to do that.", "So, by 1945, they did unify their information department. Great. They can spew the same propaganda and maybe share Tokyo Rose on a good day, who knows? But there was no planning even under the imminent threat of invasion to how they're going to coordinate their assets to protect the home islands. They aren't even coordinating their air assets. Disaster.", "0:44:22", "And this disaster goes back way in time. They had very- far back in time. They had a very successful war against Russia that ends in 1905. But afterwards, in 1906, immediately afterwards, the Army and Navy are allowed to have completely separate war plans. The Army plan is all about fighting Russia for the big land grab in Eurasia. The navy plan has a completely different set of enemies. It'd be the United States and Britain for the big gambit - you're not gonna use ships in Siberia - the big gambit for empire in the Pacific.", "And each of these plans, A, they're secret from the other service and B, each plan assumes the other service is going to do all kinds of important things for them. Okay, great.", "So, I guess the idea of secrecy and surprise, normally you apply that to your enemies, not your sister service, but that's how it works in this setup. Now, the army does come around to the Navy plan. Why? Because they get walloped by the Russians on the Mongolian border at the Battle of Nomonhan. The Russians just decimate them in 1939.", "0:45:35", "So now the army says, okay, okay, maybe that southern advance thing wasn't such a bad idea. And so the Navy thinks this is great. And they do their Southern advance. Let me see where I am here. They go zooming down and the Japanese mind over matter stuff seems to be going really well.", "Because, in 1942, the Army takes more land than, over a more dispersed theater, than any country on the planet. The Navy hasn't lost a single ship. I mean, it's looking really good. Except there are a few little details here that are a problem. What the Navy hadn't told the Army is that actually they weren't ready for this whole thing - that they needed this outer perimeter reinforced by airfields in order to make the thing work. And actually, that wasn't complete.", "0:46:39", "And the army learned about this on August 17th, 1942, because one of these airfields was being built in this tropical nightmare called Guadalcanal that the United States knew about, even though the army didn't. And all of a sudden the Navy is in deep, dark trouble and needs the Army to help them out of Guadalcanal.", "So now think Samurai. The Japanese 17th Army had been ordered to take Port Moresby in New Guinea, that's what they were up to. But with Guadalcanal, they are told, \"Ah, you need to tack on Guadalcanal to that Port Moresby event.\" Okay, enter logistics. They're a thousand kilometers apart.", "0:47:14", "So now the army is gonna be lying to the navy about how many people they've got at Guadalcanal because they're scared the navy won't provide enough rations and things. The navy doesn't provide enough rations, people starve anyway. And then the navy that got the army into this mess wants to call it off and move out. But the army, good samurais, want to fight on and they just expend all kinds of resources.", "And this thing has enormous strategic effects. Prior to Guadalcanal, the Japanese army wanted to continue their strategy of chasing the nationalists out of China. Back in 1937, the Japanese had conquered Nanjing, which is the original nationalist capital, and the nationalists had fled up the Yangtze River to Chongqing beyond some gorges and some other things, and beyond the rail network.", "0:48:07", "And in 1943, the Japanese were planning to attack Chongqing and then at that point, I think if you're a nationalist you're fleeing into Burma. And if that had happened, then the Japanese could have probably pulled hundreds of thousands of people out of the China theater and put them elsewhere and that would have caused all kinds of problems. Also, the Japanese had to call off their plans to invade Australia. So Guadalcanal has enormous strategic implications. So if you're focusing samurai on one battle, Guadalcanal, well, it has implications in places called China and Australia that are a long way off.", "Okay, the United States also had inter-service rivalries. Right between our army and navy and that's why you have two separate campaigns for Admiral Nimitz and General MacArthur - each big ego is one campaign for each ego and apparently that wasn't even big enough for MacArthur. Okay but even so I don't believe the inter-service rivalries in the United States were remotely on the scale that they were in Japan.", "0:49:10", "I have one final example to prove that one. So, after Pearl Harbor, that had been tremendously successful for Admiral Yamamoto, he wanted to do, the next thing was to attack Midway, because U.S. basing there. And the Army said, I don't want, we're not going to do this. And Yamamoto goes, “I'm going to resign”. And the Army, “we don't care”. “I'll commit suicide”. “We'll buy popcorn”.", "And here's what changes this. So after Pearl Harbor, Americans wanted to let the Japanese know that we were thinking about them. And so this is where Lieutenant Colonel James Doolittle, named for the Doolittle raids named after him. In April 1942, it was a one way trip off an aircraft carrier because they had so much fuel in order to get to Japan that the idea was they were going to go bomb Japan and then ditch in China whoever survives. Very brave people who did this.", "0:50:10", "Are they going to cause massive damage in Japan? Well, yeah, if you're directly underneath you won't appreciate it, but in general it causes minor damage. But it has a major unanticipated strategic benefit - think samurai. The Army, all of a sudden, is backing the Navy, that they're gonna now do Midway with them. And it's, right, don't think, retaliate, avenge your honor. The Army was appalled that anyone had been able to bomb Japanese skies, so now they're all over it.", "Okay, so how does Midway work out? Really poorly for the Japanese. They lose four aircraft carriers. They've only got 12. They've lost a third. Oops. And- here we go, in-group, out-group- the Navy doesn't tell anyone for three or four months. Incredible. In a war. Right? So, they're thinking about their little stovepipe, and they're ignoring Japanese interests when this is going on. Okay?", "0:51:15", "Different story. So, they do get their operational end to the whole thing. It's called the firebombing of Tokyo. The whole place went up in flames. In fact, it got so hot, the canals boiled. It's an operational solution. It's unconditional surrender after a protracted war of annihilation that destroys just about every single Japanese city, minus a couple that survived.", "What broke the stalemate? And here's what happened. It's three really bad things that happened in four days. Talk about a concentration of really bad events from a Japanese point of view happening all at once. When you want to talk about- this is the psychological shattering that actually happens to the Japanese.", "0:51:59", "First the United States drops an atomic bomb on Hiroshima. Two days later the Russians pour 1.3 million people into Manchuria- the nightmare scenario of the Japanese army. And they know if this war protracts, the Russians are going to come down through Manchuria, down the Korean peninsula, onto Hokkaido, and down the home islands. It'll yield a divided Japan if it goes on for a long time.", "And then the next day the United States drops the second atomic bomb with a bluff - the idea being we're going to keep doing this daily or every other day. Except we don't have any more atomic bombs and we cannot build them quickly for a long time. So that's big bluff.", "0:52:40", "But the emperor then has had enough and he breaks the deadlock in the cabinet and the cabinet allows the deadlock to be broken the next day and then he makes an unprecedented radio broadcast - never had that happen before - to his subjects telling them “game over”. And then the next day he sends three imperial princes to the Manchurian Chinese and southern theaters conveying his orders at game over.", "And from that moment on his samurai obeyed him and they absolutely cooperated with the occupation. There's no insurgency, no nothing going on after this. And at the end of the war, the United States came to understand the Japanese. At the beginning, totally misread the situation with the oil embargo that's meant to deter and instead it precipitates the war that we didn't want.", "0:53:34", "But at the end of the war, the United States realizes you're going to need some level of Japanese cooperation if you're going to occupy the place. I'm going to use Emperor Hirohito for this. Hirohito is scared to death that it's not so much that he'll be hanged, but that the United States will extinguish his dynasty, kill him and his son, and then it's over.", "And so he's willing to sign any piece of paper that MacArthur puts under his pen. And one of those is the Constitution of Japan, that it's going to change their civil and military institutions, demilitarize the place, and try to get a democracy going there. The Constitution was written in one week by MacArthur's staff. They're running around raiding bombed out libraries for examples of Western constitutions, and they cobbled this thing together. And this is the unamended Constitution of Japan still in effect to this very day - MacArthur's gift to Japan.", "0:54:32", "All right. I've been incredibly critical of the Japanese but to sum up here, there are cultural explanations for their neglect of grand strategy, inability to cut their losses, inability to coordinate, and the ferocity with which they fought. So if you look at their values, they're explanatory of what may well happen when things get set off.", "But I've been really critical of Japan. I want to even out the story by ending on the United States a little bit because the United States played a good game or bad game of half court tennis and mirror imaged at the beginning of this war.", "0:55:10", "So when the Japanese go into Manchuria in 1931, we want them out. We don't ask, well, why are you doing this? And their answer would be, well, hey, you passed the Hawley-Smoot tariff. That means we're trade dependent. Whom are we going to trade with? And once you did the tariff, everyone retaliated. So you've now shut down international trade. So we need an empire that's big enough to survive. So that's why we're in Manchuria. And by the way, there are way too many communists here. We got to get rid of those.", "And then in 1937 when they up the ante, going into the rest of China, we didn't inquire what's going on. And from what the Japanese want to do is wall off communism, don't want that, and then they want to stabilize China so that you can have some productive economic growth. And if you go, well, what were U.S. post war objectives for China? Ooh, sounds remarkably similar, familiar. Communists out, stabilize the place.", "0:56:09", "Okay, well, how does the war affect all this? Well, actually the warfare that went on wiped out the two barriers to communist expansion in Asia. What are the barriers? Well, one is Chiang Kai-shek and the nationalists in China. The Japanese wipe him out. They don't totally defeat him, but they have so weakened him and so discredited him that by the end of World War II, he is really poorly positioned to win the Chinese Civil War, which he promptly loses.", "And then what does the United States do? What's the other barrier to communism? Well, it's the Japanese! We wipe them out. So what do you get? A unified communist China, which makes really complicated wars in the Korean War and the Vietnam War. And the problem is the gift that keeps on giving. We're still dealing with this problem today.", "0:56:57", "So take a little word from Sun Tzu - know your enemy or the other side. Know the person you're talking to. Don't play half court tennis. It's a really dangerous game. Rather, try to analyze why - ask yourself, why is someone doing whatever they're doing? And just because you're trying to understand it doesn't mean you're condoning it. It's just trying to figure out the logic of the other person. It'll set you up for more informed choices.", "Anyway, that was my Code of the Samurai event for you. And thank you very much for showing up and listening to all this. Appreciate you being here.", "Dwarkesh Patel 00:57:37", "I'm a little confused about some of the Bushido stuff and how it explains Japan's actions in the war. This Zen Buddhism stuff, the cherry orchards that are blossoming, and \"you must act with the generosity of a samurai\" — all this Bushido moral stuff. How does that square with the conduct of Japan during the war? The Rape of Nanking, the killing of millions of Chinese, the treatment of prisoners of war, which rivaled the fatality rates of the Nazi extermination camps?", "It’s like, where's the Buddhism there?", "Sarah Paine 00:58:11", "Well, I'm not an expert on Buddhism, but you've got a lot of things conflated in there. If you're asking about the part of the brutality of the war is, Japan was totally out of resources. You're thinking it's going through a massive area of territory.", "They actually had no ability to take POWs. Or if they took POWs, they'd have to halt the military operation and then put these people somewhere. So they just slaughtered them instead.", "00:58:50", "There were POWs and there were cases of hostile civilians who also got slaughtered. I don't know the \"because,\" but they had very limited numbers of people to deal with this. On the one hand, you've got absolute desperation. I don't think any people behave well when they're desperate.", "The war had been going on for years by the time we get interested in it, right? It starts in '31. So you have desperate people.", "There's another piece. I can just add little pieces. I can't explain a whole people. In the prison camps in Japan, if I'm going to be — let's say you were Japanese. You and I are looking each other in the eye. That's how you do it in the West, showing that you're paying attention.", "That's not how you do it in Japan. In Japan, if you're Japanese, I'm looking at your shoulder. It's rude to look people in the eye. It's just too intrusive; you're getting too much information probably from that person's face.", "00:59:50", "So you can imagine a Westerner in a prison camp looks his guard in the eye, and the guard's going \"Oh, who is this arrogant person? You’re a POW-\" You can imagine bad things are then going to be happening. These are guesses on what's going on.", "There are certain values that I've talked about. There's certain desperation that's going on. And then there's the dehumanization of what wartime's all about, right?", "Initially, conscripts go in of all armies having trouble killing people, and then they get better at it over time. This is the tragedy of human beings. I don't know if I answered your question. I don't know that I know the answer.", "Dwarkesh Patel 01:00:31", "Here’s another thing that I want to clarify: If you were trying to understand Britain's conduct in World War I, why they initiated it and why they conducted it in the way they did, and you tried to understand it using cultural explanations — what some British guy wrote in the 17th century — I don't know how far you'd get. Maybe the more illuminating thing is just to look at what was happening in the 1910s. What were the proximal strategic objectives?", "So with Bushido, why are we looking back at what people were writing in the 1700s?", "Sarah Paine 01:01:02", "I'm going to break up the British thing into two parts. One analytical framework is that you can look at wars in terms of underlying causes and proximate causes. The underlying causes are like the tinder of grievances on both sides, and there can be cultural components to that, or other components.", "So there's this accumulating tinder where you've got two different sides, at least across purposes. But then there's the match, the proximate cause, which is a whole series of matches. Finally, the last one is Pearl Harbor, and you are off and running to a place you might not want to go to.", "01:01:41", "So there's that. And then there's culture. Let's look at Britain's strategic culture. I'm no expert on British strategic culture, but these are some basics.", "They are an island, and they want to be able to trade with the world, but they don't want any one power dominating the continent. This is their strategic thinking from way back. So if there is a power that's on the verge of dominating the continent, you want to back the other side to prevent that outcome.", "That's very much a part of British thinking, it goes back a long time and you can read things going back a long time describing that situation. There's another piece that goes back a long time in the British: navies are rarely decisive in warfare.", "What I mean by \"decisive\" is that you actually get the goal that you're after from fighting, whereas armies can be. If your goal is, \"I want to occupy all of France,\" or better yet, the Holland, something smaller, an army might be able to do that for you, one instrument of national power. But Britain's reliant on a navy and doesn't like to have a big standing army.", "01:02:50", "So they're thinking in terms of diplomacy and allies, working economics, and making money from trade. They are the ones who coined the term \"grand strategy.\" It is their gift to us, and it absolutely informs their thinking at a very macro level.", "No one thing is entirely explanatory. Also, we human beings game the system. The moment I tell you you're Japanese and you think this way is the moment you go, \"Oh, that's what she thinks? We're going to do something different.\"", "Right? So this is the problem with human beings.", "Dwarkesh Patel 01:03:23", "The loyalty precept: Wasn't one of the problems with the Japanese military that they weren't loyal, that they were trying to do these coups all the time, and the young officers were insubordinate?", "Sarah Paine 01:03:37", "That's an excellent question. And what you're doing is feeding me back the Greek principle of logic, which is the law of non-contradiction. You cannot simultaneously believe mutually exclusive things. So what's going on?", "You're telling me it's all hierarchical, and now you're telling me junior officers are doing, or mid-level officers are doing things. What's going on here? Ah, but that's a fundamental principle of logic that the West puts great credence on.", "Not the case in the East, necessarily. Now, people have educations that are different. So we're going back in the day where people are not looking in terms of, \"Okay, we're going to have a logically consistent argument.\"", "01:04:20", "Rather, there are these social values that we are gonna- if it's all about group loyalties, that's what we're going to prioritize. And if my subgroup is going to be my unit or whatever, that's how that's going to go.", "So you're doing a wonderful piece of Western logic. It's excellent. And this is why other cultures find dealing with Westerners like battery acid because they have these different belief systems.", "And you go like, \"Okay, you have women, and we got women. Our women drive cars, and yours are like, where? Is there something wrong with your women?\" It's battery acid on other cultures.", "Dwarkesh Patel 01:05:02", "I was struck when you were describing that the Nazis were putting their enemies on death ground, the Japanese are putting their enemies on death ground, and in both cases, it was detrimental because you're preventing the other side from surrendering.", "That seems even worse than what was happening before that period in history where you can think over time that our norms about civility and war crimes are improving, but it seems like in World War II, the way people were acting was even worse than they were acting in World War I. The way Germany and Russia were fighting in World War I was probably more civil than how they were fighting in World War II. And then obviously what Japan was doing in China at the time.", "What was going on around the world that people just got so demonic during this period?", "Sarah Paine 01:05:47", "Warfare is not civil, you're killing people. I love people who talk about just wars. It's rather a horrible piece of human existence.", "There are a number of things that have gone on with the Industrial Revolution. You can now kill people on an industrial scale. When you're doing it with bows and arrows, it takes a lot more time to create the mayhem. So that's one thing, the ability just to wipe out people.", "World War I on the Western Front was all entrenched. On the Eastern Front, there was a great deal of movement, but on the Western Front it was entrenched, which meant civilian populations weren't really touched by it. Where the initial fighting, yeah, they're leveled, but once you get a trench, you're not. And then we in the West don't actually study too much what happened to the civilians on the Eastern Front where it's moving around.", "This is back to my half-court tennis, so we're not paying attention to those civilians. For the West, very few civilian casualties. Whereas when you get to World War II, you're bombing people. Now, with new technologies, you can get at people.", "01:06:51", "And invading also, it's the lesson of World War I, the feeling that the Germans really hadn't felt their defeat, and that allowed them to make up this story about how it was they weren't defeated. \"The Jews did it,\" or whoever. They were betrayed. And Churchill and Roosevelt decided there would be a march to Berlin to disabuse them of that.", "And that involved killing a lot of civilians to get to Berlin. And of course, the Russians were determined to pay back for what the Nazis had done to them. And we had no sympathy for what- we were going to turn a blind eye to what the Russians were up to because the Nazis had been so heinous.", "Dwarkesh Patel 01:07:32", "This is probably wrong, I want you to correct me, but maybe one way you can explain why the Japanese were so brutal in their campaign around this time is, if you think that when you lose, you have this idea that you have social death, it's better to kill yourself than go back to your family and say, \"I surrendered\". Maybe they just applied, this is their failure to empathize with, or think it from, the perspective of their enemy, but they were just thinking, \"Listen, if we lost, we would commit seppuku. When they lose, they forfeit human rights.\" And in some sense, it was just applying the principle of social death to their enemy.", "Sarah Paine 01:08:09", "The whole war is brutal. They're doing a lot of hand-to-hand brutality, and part of it has to do with lack of equipment. That firebombing of Tokyo happened in one night. I think it's 80,000 Japanese are incinerated.", "Okay, let's talk about brutality. Now, the reason why Americans did that is because they knew the alternative was sending American kids onto Japan who would die doing that. And so the decision was it was better to kill a lot of Japanese civilians than it was to kill American soldiers.", "And that's also the reason that went into the atomic bombing. That's controversial, right? The Americans, why did they drop atomic bombs on the Japanese? And there was no disagreement about that in the United States at the time because it was a question of: are you going to send American young men your age- and millions of them would have died hitting the home islands- or are you going to do the bombing? And of course, the Americans did the bombing.", "So there's brutality all around in this war. Wars don't come up with clean hands.", "Dwarkesh Patel 01:09:25", "Was there any way for the West to, or for America to, win the Pacific War without the firebombing?", "Sarah Paine 01:09:30", "Well, okay, let's, this is a whole other topic. Win in wars. What does win mean? For us, it was put Japan back in its box, right?", "But this is a whole problem for Japan. What's win? Or this country in Afghanistan, what's win? Is it booting Osama bin Laden out of Afghanistan? Once that happens, it's a day. Is it overthrowing the Taliban at a particular period, or is it trying to turn the whole place into a democracy?", "Okay, those are all radically different things, but you need to make up your mind what it's going to be. I think it's a miracle. Well, A, okay.", "01:10:10", "The win, if you're going to have the win be that the United States transforms Japan into a functional democracy, or sets them on the road so that they will become one, if that's what the win is- no. Because if you don't, I showed you the three horrible events in four days. That's quite incredible to have that much bad news happening in a half a week.", "And that absolutely shattered the Japanese, and it also opened the door for those who thought they were in crazy land to capitulate. If you don't do that, okay, we invade the home islands. Americans were sick of the war, and you start losing lots of American kids in Japan.", "I think at some stage, we decide to pull them out of Japan and blockade them eternally. And then you've got, I don't know, Japan is like a new North Korea, just this inevitable, these eternal, non-functioning society. So, no.", "Wars are tragic, and also, don't think that you have all the cards, that you're going to make the decisions about what's going to happen. The other side is going to put you into corners where you're going to choose from very unpleasant alternatives.", "Dwarkesh Patel 01:11:30", "I want to ask you about how the war starts. So, there's obviously, you go 10 years before, and you've got the tariffs, and that creates the incentive to build an empire. But even months before, when there's negotiations between Japan and America to get rid of the embargo, it's striking to me how much miscommunication and the ability for both sides to just understand that there was a compromise here was such a big factor here.", "I feel like if Prince Konoe and FDR could get on a Zoom call, I feel like the war...", "Sarah Paine 01:12:08", "You're an optimist. Let's talk about sunk costs. I'm going to talk about sunk casualties. By the time you're there, the Japanese have suffered 600,000 casualties in China.", "There is no easy out of that one. The United States' minimum program is \"you get out of China\". Not happening if you're Japanese.", "If you look at the government, the government's definitely on death ground with that one because there's no way they stay in power if they get out of China. Particularly, this is why Hirohito is Mr. Silent for most of the war. Initially, he's all for it until it goes sour, and then he's less so.", "He knows that he'll be, if not assassinated, declared insane. Then his perfectly serviceable adolescent son, or however old his son was, would be the token emperor. So, I don't think the Zoom call is going to change the fundamentally high stakes that are involved for both sides.", "Dwarkesh Patel 01:13:18", "You really think if Hirohito had stepped in and said, \"No, we're not doing this,\" that he would have been usurped as emperor?", "Sarah Paine 01:13:25", "Yeah, early on. Oh, and there's another piece. Let's look at the United States in 1941: Great Depression, isolationists. This is where the first \"America Firsters\" are. They're the ones who created the idea. They didn't want to know about all these foreign places.", "Totally isolationist. Hawaii wasn't even a state. It doesn't become a state until 1959. If you're Japanese, you look at a place, \"Oh, it's a colony.\"", "So, it'll be like the dog that barks. You take a newspaper and flap them a few times, and maybe the dog will start barking. Because what the Japanese want is the United States to just mind its own business, stay out of Asia. \"It's our backyard. It's not your backyard.\"", "And looks at the United States. The United States doesn't have that much trade with Asia compared to the rest of the world. Sure, it sells Japan most of its oil, but that's not most of US trade.", "Dwarkesh Patel 01:14:15", "I was reading about this before preparing to interview you, the particular cases where diplomacy broke down. There are examples where the translators between Japan and America are, and I can't believe this is true, you can tell me if this is not, but they're exaggerating what each side is saying to make them more vivid to read. Like if Tojo says something conciliatory, it's exaggerated.", "That's obviously not the role of a diplomatic translator. And there are many cases where, after the war, Tojo says, \"If we had gotten the modus vivendi that FDR apparently was contemplating sending to us, if we saw those agreements, we would have agreed.\" Or apparently, they misunderstood that the final agreement they got from Secretary of State Hull, where it said that you must return China, they thought it included Manchuria.", "Hull didn't intend it to include Manchuria. They might have said yes to that. It seems like the work really could have been averted if a couple of mistranslations were avoided.", "Sarah Paine 01:15:07", "I wouldn't take Tojo as my source for anything. He's a guy who, he was before a graduating class of cadets or something, and he was talking about how people had said that he was mediocre, but look where he'd risen to be. He's this great man.", "And then at the end, when he's supposed to commit suicide, well, the way you do it is as I've described. He's an army officer, so what does he do? He takes a gun and he shoots himself.", "Buddy, it's point-blank range with a gun. How hard can it be? And he survives that one.", "01:15:45", "We glue him back together. So here's the honorable samurai who sent so many children, or young people, to their deaths, and at the end, when he knows full well what he's supposed to be doing- can't do it. So I wouldn't take him as my source.", "Also, at the end, he's got this whiny answer of, \"Oh, it wasn't my fault. It's all your translators.\"", "And, \"Oh, the peace on Manchuria.\" No, I don't know that that would have been a compromise. The League of Nations had sent in something called the Lytton Commission, which had told the Japanese they had to get out.", "Oh, by the way, it is a fiction that Manchuria is not a part of China. It is an integral part of China, has been for the longest time. What the Japanese did is they kidnapped the last Qing emperor- Manchu, Manchuria- whose ancestors came from Manchuria, popped him in, Henry Puyi, and made him the emperor to try to come up with this fiction that oh, Manchuria is a separate place.", "Excuse me, it's not. It's part of China. It's the internationally recognized territory of China.", "Dwarkesh Patel 01:16:48", "So I wonder if one of the problems here is, look, it was not in the vital strategic interest of America to secure or liberate China. In fact, the outcome of the war obviously is a national seizure of communist state power. And as we'll talk about in the next lecture, that was the very opposite of liberating China.", "Basically, America puts in this oil embargo and knows that the outcome of failed negotiations on getting rid of that embargo is a total world war of this kind. And it's not even for the main strategic objective, which is, you know, you got to get Hitler, you got to beat Germany. Wasn't this just our failure of grand strategy to realize, why are we doing this? What's our strategic objective here?", "Sarah Paine 01:17:31", "Hold on. We weren't fighting Germany. The only reason we fought Germany is because Hitler made a major blunder.", "He had an alliance with Japan that said if Japan was attacked, he would come in. Not if Japan attacked someone else that he would come in. He interprets it broadly, and he declares war on this country.", "That is how the United States got into World War II, Europe. If Hitler had not made that blunder, FDR would have been in a world of trouble trying to explain why he's suddenly going to be fighting Nazis over an attack on Pearl Harbor by Japanese. So that's a separate issue.", "01:18:06", "The main thing that Americans who thought deeply cared about is the international system, that we should deal with each other through laws, through freedom of navigation. So this is how you run your commercial transactions, that big countries don't get to overrun small countries because if they do, the entire international system goes down.", "And the logic that you're describing is excellent logic, and this is what the Japanese are saying. It's like, why would the Americans care about this? Americans care deeply about the international system, or people who are thoughtful about it.", "And it's also like, why not let the Russians eat all of Ukraine? Why are the Europeans suddenly all over this problem, right? And they've unified very recently on this.", "It's because the whole system is at stake. So it is really high stakes at a strategic level. The Japanese are looking at the operational level and going, why do you care about these countries?", "We care about the entire system because our prosperity is based on it.", "Dwarkesh Patel 01:19:15", "So I wonder if the problem here is that America isn’t, at least at the time, wasn't willing to give enough concessions to the factions within Japan that cared about peace so that they can save face and actually argue for ending war? Just the idea that they're going to give up Manchuria as well, obviously that's not going to happen.", "And when Secretary of State Hull just sees these vacillating telegrams from the Japanese, he assumes it's because they're fickle or something. It's like, no, it's the civilian part of the government trying its best to prevent the military from taking over. You got to give them something to save face.", "Sarah Paine 01:19:52", "That faction had already lost. They lost in 1936. Takahashi Korekiyo was Japan's longest-serving finance minister, and he was a very distinguished man.", "In fact, he brought the Japanese economy out of the Great Depression before anyone in the West was doing it, through government spending and trying to get people to spend at home. He told the army, he said, \"Fellas, if you go on this bent for empire, you're not gonna actually get resources because you're gonna spend a lot of money fighting with people, resources doing that. And then it's gonna take you years of investments to access these resources. So you gotta be able to cover these investments for years and years and years. Cooperate within the international system. That is the way to prosperity for Japan.\"", "Well, that was the February young officers revolt, where he and I can't remember how many others were murdered. They came to his home in the middle of the night, and as he stood up to talk to them, they literally hacked him apart. They lost. That whole game is over.", "Dwarkesh Patel 01:20:57", "But Prince Konoe doesn’t, like he knows he going to- the person who’s Prime Minister in 1941, realizes that they're going to lose a war against America and he doesn't want to do that. It's just that Tojo, the war minister at the time, doesn't answer to him. The Prime Minister doesn't want war, and we don't give him enough to save face.", "Sarah Paine 01:21:14", "Wait a minute. They have a huge army there that's more than capable of assassinating people when they get in the way. So that ship has sailed. This is what a pivotal decision is. Once you've made it, there's no going back to the way the world was.", "Japan lost too many people, and there are too many figures in the army. We think that it's an inevitable outcome of having the world the way it is now, with Japan this wonderful country now, just at the lead of so many different areas of human endeavor. We think, oh, that's the inevitable outcome. It's not.", "Dwarkesh Patel 01:22:02", "Is there anything America could have done in the immediate years leading up to the war that could have prevented this outcome? Because the fact that there is a world war, prima facie, you should assume that if there is a world war, things weren't done optimally, right? That doesn't mean everybody's equally at fault. But is there anything that could have been done differently?", "Sarah Paine 01:22:22", "Yeah, Hawley-Smoott.", "Dwarkesh Patel 01:22:23", "But like in the years directly before.", "Sarah Paine 01:22:24", "No no no, but this is serious. That Hawley-Smoot tariff is a game of half-court tennis. You're looking at the United States, it’s in a terrible depression, you want to protect jobs here, and so you raise these big tariff walls.", "Okay, what's the other side going to do? They're going to raise their tariff walls and pay you back. What's that going to do? It's going to cost a lot of American jobs if you play that game.", "You need to be talking with other people. So once you have set the conditions, hothouse conditions, for fascists to take over in Germany and in Japan, you are in a world of hurt. The easy solutions are no longer there anymore.", "Dwarkesh Patel 01:23:05", "Maybe I’ll ask the question this way: the oil embargo, was it a mistake? Because if the idea is to protect the international system, prevent empire- well we got more empire, we got a world war, we got communists taking over in China. Whatever would have happened if we got rid of the oil embargo, it can't have been worse than that, right?", "Sarah Paine 01:23:20", "Well yeah, it was. What Roosevelt was really scared of was that the Japanese would attack Russia, because he thought Russia would fall. And if Russia fell, he thought the Nazis would win. So at least when the Japanese attacked, they attacked us. That was actually better than attacking Russia.", "So the oil embargo… so let’s say you don’t do it and Japan never attacks us, and then the Russians are down and you've got Nazis in control of the world, somehow that’s better?", "Dwarkesh Patel 01:24:02", "But you don't have the world war with America. It doesn't seem insane to think that, and we're still going to fight the Nazis.", "Sarah Paine 01:24:06", "You're not going to have an international system. Oh, and if that's the case, the Nazis were gearing up, because eventually they're going to fight us.", "So, this is another problem. Another concept that I think is useful is limited versus unlimited objectives. An unlimited objective is, \"I want to do regime change on your country.\" The most unlimited variety is that not only do I want to do regime change, but I want to kill all your people while I'm at it. So if that is what your opposite number is planning, if you compromise with them, you are simply setting them up and putting them in a stronger position when they come at you for the final kill.", "If they have limited objectives, then by all means compromise with them, and negotiate away on what it is you want, just this little sliver of territory, or you want some preferential treatment? We can do that for you. But you're talking about a world order here, whether it's going to be based on laws increasingly, or these opposing spheres of influence.", "Dwarkesh Patel 01:25:16", "To keep pushing back on this...", "Sarah Paine 01:25:18", "I don't have an answer. I've given you what little I can think of. But hey, I'm not the grand puppeteer in this world. These questions you're asking me are way above my pay grade.", "Dwarkesh Patel 01:25:30", "People in the YouTube comment section get mad at me when I keep asking about counterfactuals. And I understand. Obviously, I don't understand what was happening at the time. I'm naive about history or whatever. But I do think it's important.", "What are we trying to do when we try to understand history? We're trying to understand if we had done things differently, what would have happened. What are the lessons we take? And counterfactuals are the main way we can do that.", "Sarah Paine 01:25:50", "Yeah, I'm all for it. We teach by counterfactuals. Replay it. Can you come up with different options?", "I think you're in a series of really awful options.", "Dwarkesh Patel 01:26:00", "The difference between the Japanese and the Nazis is that the Nazis had the ideology that we got to kill millions of people. That is what they believed. The Japanese didn't, in the same way, have that ideology.", "Naturally, they had, they're a continental [maritime] empire. They also want to trade. And they don't like Communists. They don't want the Communists to take over in China.", "It just seems like, naturally, if we didn't go to war with them, we might have been allies, as we ended up being later on.", "Sarah Paine 01:26:27", "Well, they attacked us…", "Dwarkesh Patel 00:26:30", "After the oil embargo.", "Sarah Paine 01:26:31", "Yeah yeah. But-", "Dwarkesh Patel 01:26:32", "But the question is, should we have gotten rid of the oil embargo? So the attack wouldn’t have happened.", "Sarah Paine 01:26:35", "But wait a second, the Japanese are saying, \"We have the rights to your oil.\" Excuse me, we have the right to sell it or not sell it to anyone we feel like.", "Since when do they have the rights to it? Okay, great. So you're going to do all this crazy stuff and then still think we have to sell you oil? And, oh, by the way, this oil is being used to kill Chinese all over the place. This is what their Japanese bombers are running on.", "Dwarkesh Patel 01:27:02", "Morally, I agree, we don't want to like, they don't own our oil. The question is, it's not about are they are entitled to it. It's like, should we have given it to them? Would the outcome have been better if we did?", "Sarah Paine 01:27:12", "I'm telling you, I suspect the outcome wouldn't have been better, but I don't know. And you're asking me things where my experience is, I'm just a professor. I just show up at seminar on time, and I try to do reading and prep.", "And my experience has not been in government, let alone at the highest echelons of government, of what's feasible and what's not feasible. And the answers I've given you are where I'm at, but I can't tell you more.", "Dwarkesh Patel 01:27:44", "All right.", "Sarah Paine 01:27:45", "Sorry.", "Dwarkesh Patel 01:27:46", "I'll ask about other things, too. The sort of delusional optimism of the Japanese to think that they could beat America, how much of that is motivated by the idea that they actually do think they're led by a living deity?", "Is that related to why they thought they could win?", "Sarah Paine 01:28:06", "No, but you're looking at the United States as isolationist. If you're Japanese, and you're looking at these absolutely isolationist Americans who are letting Britain, their closest ally, potentially go down to Nazis, right?", "Because think about Britain. How did it all work? Fall of Norway, fall of France, then they lost Crete. We aren't even in the war. They're about losing everything, and we're not doing anything.", "So… wait a second, now I'm losing my track of my train of thought. Revisit the question again, please.", "Dwarkesh Patel 01:28:47", "Can somebody in the audience remind me of the question?", "Sarah Paine 01:28:53", "Yeah, the optimism of the Japanese. So they're looking at it and going, \"The United States is not bailing out its key ally. Hawaii’s a colony, right? It's a bunch of white people dominating Hawaiians.\"", "\"So we're just going to do a little, set an example that, hey, if you mess with us, it's going to get costly. And we know you don't have much trade in Asia. Sure, we want to buy your oil, but overall, most trade is elsewhere.\" And so they're looking at it. Isolationists, won't the answer be the isolationists go, \"Woo, this is expensive, let's get out of here?\"", "01:29:27", "And of course, that's not understanding Americans. I remember I was in seminar the day that 9/11 happened and there was a TV in the seminar rooms, and there was a break, and after the end of the break, the students had the TV on, and we’re watching. Tower One had gone down, and then while we're in seminar, the next one, we're watching it, goes down. And I thought, \"Oh my, there's going to be hell to pay for this one.\"", "Because this is how Americans are, that if you mess with us, boy, does it get ugly. And boy, this is like, don't think, retaliate. So the Japanese didn't understand that part of us. In fact, a lot of people don't. They look at Americans, we look like a bunch of hedonists. But if you mess with us, it's ugly.", "Dwarkesh Patel 01:30:14", "If you are in the Japanese government a couple of years before, like 1939 or something, it sounds like from your earlier answer that you think it was a sort of hopeless situation. If you're the finance minister or if you're the prime minister, to- is there anything they could have done to prevent an inevitable conflict with America?", "Sarah Paine 01:30:32", "They brought it up, and they say, \"Well, we can't afford this. The resources aren't here.\" The army wasn't interested, and they shut up because the last guy had been killed in his house in his pajamas. So, they weren't up for that.", "Dwarkesh Patel 01:30:46", "How much stock do you put in the idea that if you have a society that rapidly industrializes, that goes from a feudal state to advanced industrial nation in a generation, it's just not enough time for the culture to evolve? And so, the way that the Japanese behaved in the lead-up to the war and during the war, the feudal values just didn't have a chance to evolve. The Meiji Restoration happened too fast.", "Sarah Paine 01:31:12", "No, it didn't happen too fast. It’s that institutions take a long time to take root. Look at this country; it's been an evolving democracy for hundreds of years. We no longer have slavery- amen- but look at really consequential things about our own institutions.", "So, now we're going to go pick on the Japanese because they managed in one generation to westernize their political, judicial, legal, educational, you name it, they westernized them, but there wasn't enough time for these things to grow deep roots.", "And then if you think about who's doing this, they're called the Genrō , this very distinguished generation of Japanese who all knew each other. Their career paths were very broad, covering both civil and military areas. And when they died, they couldn't transfer their prestige to anyone else, nor had they institutionalized it in some kind of cabinet or something that would force all these different groups to discuss things without giving primacy to the military.", "01:32:27", "Then, understand what is the tradition in Japan: Shogunate. That's what Japan had before. What's a shogun? Shogun is a Japanese word for \"general.\" So the long history of Japan is military rule so when it comes back in World War II, that is a kind of normalcy.", "It would have taken a long time. So, this comes back to the United States and fooling around with tariff walls, thinking that was such a clever idea. Maybe if the Great Depression had been managed better, it would have given time for these institutions to take deeper roots.", "No one knows, but I would not criticize the Meiji generation. They're brilliant. They did so much, but you're talking just a number of years.", "Look at this country. We're having major problems with our institutions, wondering whether we've got a stalemated legislature, whether we've got a skewed court system, or whatever it is and we're sorting these things out. And then to criticize the Japanese because they couldn't do it all in 25 years?", "Dwarkesh Patel 01:33:30", "One thing I found really interesting in your book is that you were arguing that not only was the military not in charge of the officers, but nobody was even in charge of the military. I think you call it a \"system of irresponsibility,\" that it was basically government by committee. Tell me more about that. Why does that lead to mistakes?", "Sarah Paine 01:33:53", "I'm going to flip it around and look at how the West has done it. It's all about- it’s supposed to be and of course there are exceptions- that it’s not that it’s about you if you have a particular job, it’s that your job gives you certain authority by law to do things, and then we have courts to adjudicate when you in that position, other people think that you've exceeded your authority, and they start suing about different things.", "And that's how it goes in the West, it's very legalistic, going all the way back to the Romans. When I think about what is the West, it's Greek logic, it's Roman law, and then it's these Judeo-Christian moral values. Those are essential pillars of what the West is all about.", "So in Japan, yeah they get laws and they westernize, but they had their own indigenous way of dealing with things and it's very much about different in-groups and out-groups handling things in whatever the committee is. And so we're going, \"Well, who actually did that? Whose fault is it?\" Because we have this very legalistic way and fault and law.", "01:35:07", "We're going to either put you in jail or whatever. It's just different ways of organizing ourselves. But we in the West assume, because going back to Roman times, that institutions are going to be a really big thing, that that's how things are going to work.", "So then when we get into somewhere like Iraq and we think the police is going to still be functioning after we blow the government, it's like, whoa, whoa, whoa, it's not an institutionalized- I’m no expert, but you're projecting the kind of institutional setup that Western countries typically have to other people.", "They may name these things the same thing, police or whatever, but they may function in very different ways.", "Dwarkesh Patel 01:35:51", "When you look at the soldiers who fought these hopeless battles, where tens of thousands of them on a single island might starve or be forced into a banzai charge that they knew was hopeless and they were all going to die, and they knew that they were put in this hopeless situation because of these destructive petty fights between the Navy and the Army, where one of them isn't willing to supply the other one, why didn't that break their morale more? It's just like, \"Look, we're supposed to die and commit seppuku because you guys didn't give us the right supplies, because you guys aren't willing to share information or something.\"", "Sarah Paine 01:36:22", "I don't know the answer, because I'm not a social historian where you would really be doing- I've always done more diplomatic and military, so you're looking top-down. But what you're asking is a very important question: how do individuals react to all this? And that I do not know the answer.", "And you'd have to--and there's another piece that makes it hard in Japan since people don't want to talk about failure because it's considered a loss of face. Whereas in the West, one of the fundamental assumptions of Christianity is we're all sinners, right? Original sin, we're all defective goods from the very beginning.", "And so there isn't this expectation for perfection because it's known you're kind of a mess to begin with, and so you can talk about these things. So that's a whole other problem of getting people to open up about truly horrible events, so I don't know the answer.", "01:37:16", "I know that the World War II generation, they came home, they just didn't talk about it with their children. It's just not a matter for discussion. So I can't answer that, of why they followed.", "But you look in the West in World War I, soldiers would go up and over the trenches, and they knew exactly what was going to happen to them. And yeah, there were people who didn't, and then they were court-martialed and shot. A lot of people were shot in World War I.", "But there's been a change in society about what young men are willing to do when their officers start telling them. The way the Russians solved this problem in Ukraine, but Stalin would do this too, is you send people up ahead, and then you've got the KGB, or whatever the killer unit, so that anyone who tries to go the wrong way in the battlefield, they get machine-gunned by their own side. That's one way to get an army to go forward, and that's what Putin is doing right now.", "But anyway, in more democratic places, people aren't willing to go along with this now, but in the West, we did it too.", "Dwarkesh Patel 01:38:22", "When you're having lunch with your colleagues, with the people who are also experts in this field, and you're discussing, \"Look, what are still the big unresolved questions? We don't understand why this person did X, or why events transpired in a certain way,\" what are the big things where you guys have to hassle it out?", "Sarah Paine 01:38:39", "It's more that someone has decided what the curriculum is going to be, and then you teach it. Oh and by the way, as of like six months ago, I've moved to the Maritime History Center, so I'm no longer teaching. I'm just doing research at this stage.", "Well, I think the whole point of the curriculum at the War College is to get people out of the operational level. So, the junior course is about the strategic effects of operations. It's like, okay, Pearl Harbor.", "To me as an example I’ll use, Pearl Harbor it’s an A+ military operation. They sink everything. Of course, it's shallow water and we raise most of it afterwards. They lose nothing. I mean boy, how can you do a more perfectly choreographed military operation than that one? Except it turns an absolutely isolationist country into one hell-bent on coming after Japan, and that would be called a strategic disaster.", "01:39:38", "So, you've got to be cautious about your military actions and think about what are the strategic consequences. And it's very difficult to gauge those accurately. So that's one big issue that you need to get at.", "And then for the seniors, it's: Okay, great, you're an officer and you understand if you're in the Army, you understand how to use armies, in the Navy, about how to use navies. But how does this integrate with all the other instruments of national power that actually account for how wars turn out? And economics is a huge one.", "There is no definitive answer in these wars, but rather you can give analytical concepts. And this is my reason for doing these lectures: to say, hey, don't play half-court tennis. Wouldn't that be a basic thing?", "Is it going to provide you the answers if you look at the other side? No, it won't, but it'll position you to be in a better shape. Or, tossing out limited versus unlimited wars, and unlimited wars are putting people on death ground.", "01:40:37", "Does that answer the question? No, but it goes, okay, if Vladimir Putin has an unlimited objective vis-a-vis the world order- and he's basically said that he does- then compromising with him would be a mistake on that subject. And you're going to have to deal with him one way or another.", "So, it's the... What is it? History isn't over. The great, long-standing struggle between continental powers that want to divide the world up into mutually exclusive spheres of influence and butcher each other over territory -- very negative sum, they tend to be very authoritarian -- or this maritime order that says, hey, surely, join the party, follow the rules, you'll make money, right?", "And this is, Angela Merkel's thinking this, of “hey, we’re gonna”- she was the German chancellor for many years- and she's thinking Putin will sell a lot of oil, he'll make money. It's a win-win, surely.", "Well, he's back to his continental stuff, destroying wealth and lives. That's what the alternate world order looks like, is killing people to get things. It's more promising to spread the other one, and this disagreement has been going on for a long, long time.", "Dwarkesh Patel 01:41:56", "I feel like we had a mini version of this debate in the podcast we did a year ago.", "Sarah Paine 01:42:00", "I have a limited number of ideas.", "Dwarkesh Patel 01:42:03", "But if you just said we can't compromise with people who will have these unlimited goals, like Putin, who have these territorial goals... If we're using the analogy, as you're explicitly making it, to Japan during World War II, look what it took. If we wanted to say we're not going to compromise with your territorial goals, we will stop you in every way, what it took was a world war that led to an unconditional surrender. So, if we're using that analogy, does that imply that no compromise requires some unconditional surrender-type event?", "Sarah Paine 01:42:35", "No, no, no, no, no.", "Dwarkesh Patel 01:42:37", "Doesn’t this counter the death ground thing?", "Sarah Paine 01:42:38", "There's something called precision nuclear strike, and it has changed things. During World War II, the United States had sanctuary, which means other people couldn't touch us at home, and our productive base was not touched and that was essential.", "There are many things that were essential for Allied victory. Take any one of them away, and the outcome is different. One of the essential things is this productive base.", "This time around, all kinds of people can come get the United States. A lot of times in foreign affairs, you can't solve things; and I don't know who originally said this, but you manage things.", "And so, this is where the sanctions and things come in. You give people a timeout from the world order. You don't solve it.", "01:43:25", "You look at North Korea, that is not solved by a long shot. But at least South Korea can go become prosperous, and the North is just sitting with its timeout and being more and more malign at home. But it's more like that it’s- of not recognizing, for instance, conquest.", "This is what went on, actually, when Stalin took over the Baltic states. We never recognized that. In fact, they ran governments in exile based on the funds that, I think, the Baltic states had their funds in US banks. So when Stalin overran them, I think those funds became what funded their embassies.", "And so, N generations later, we recognize their independence. These things are not solved on anyone's watch. Basically, if you want the things… the way the last Cold War ended, it's because the Chinese and Russians changed their minds. You have to wait for others to come to their own decisions.", "Dwarkesh Patel 01:44:31", "What is the track record of sanctions actually changing other people's minds? Because if you look at the world today —", "Sarah Paine 01:44:37", "Unknown.", "Dwarkesh Patel 01:44:39", "Just to finish the question, because you're advocating that as the way we should proceed, so it's worth asking. If we look at the world today, North Korea, we've sanctioned them for decades. If the idea is we're going to deprive them of resources to do these bad things, well, they're just going to divert resources from their civilians to the military.", "And you're just making the people poor. You're not changing the government's decision. Same thing in Iran, same thing in Venezuela.", "Sarah Paine 01:45:04", "Correct", "Dwarkesh Patel 01:45:05", "So what are we doing? We're just making the people poor. We're not even changing the government's decision. What's the idea here?", "Sarah Paine 01:45:10", "You know, I don't know the answer, and I'm working on an edited book with a wonderful colleague who knows much more about economics than I do, and we have all kinds of chapters. This book will not be a good read.", "Well, because you want to learn things, and so I've done a lot of edited books to learn things, different kinds of naval operations, and then they feed into the books that are more overarching. But this one's on sanctions, and we're in the process of writing the concluding chapter now.", "And the things that you’re bringing up. Sanctions don't seem to make people do what you want them to do. And so a lot of people go, \"Well, they don't work,\" but boy do they inflict pain.", "01:45:52", "And then of course you're doing it on a, usually an authoritarian regime, that diverts as much of that pain as possible onto the civilian population, and then uses that as an excuse to justify its rather brutal rule. And I don't have the answer.", "And the other piece is, so you're dealing with these expansionist powers with a limited toolkit, and the question is which tool to use? And I truly don't have the answer. You've mentioned a really good counterargument, and I've given you an argument, and I don't know which rebuttal is the better one, honestly.", "This is another thing about what do you- you were asking about what you’re worried about, but here's something to help you with what you're worried about: is thinking in terms of arguments, counterarguments, and rebuttal.", "So, you have an argument, I have an argument, and on this one, I'm not sure what the rebuttal is. But it's a good way of thinking about things because it allows you to change your mind. You may well be right, or I may be right, or maybe we're right in one country but not another. I honestly don't know the answer.", "Dwarkesh Patel 01:47:03", "It does seem like one of the biggest things you want to try to figure out in strategy in general, because if you try to do sanctions, it doesn't seem to work that well. Well if you try to replace the regime, for example, people said look at Saddam's regime after his invasion in Kuwait, this is somebody who has territorial goals, so we can't have somebody who's going to keep trying to do this in power. You try to replace that, and that doesn't work.", "Sarah Paine 01:47:27", "That worked well, you’re right.", "Dwarkesh Patel 01:47:28", "Yeah. So, just the idea of what do you do with irascible regimes? That's a big open question.", "Sarah Paine 01:47:33", "In the Gulf War, Bush Sr. fought a limited war for a limited objective. The limited objective was: get Iraqi troops out of Kuwait and restore the Kuwaiti government. Those were the two really big things, and then call it a day when you've done those things.", "That's what he did. It was an incredibly successful war, and it went to our heads. Right? Because when you run a successful limited war, it's over very rapidly. You get stuff with not too much cost, and it's done.", "So then we try to do the total makeover of another country and excuse me, what right does one country have to do a total makeover on another country? And then, of course, the Russians and Chinese are apoplectic watching us do this, and that's a problem.", "Or you think about Afghanistan, maybe you go in there and at least boot Osama bin Laden out of the country, and then proclaim it victory and leave.", "Dwarkesh Patel 01:48:33", "All right, going back to Japan.", "Sarah Paine 01:48:34", "Yeah, okay. I tried to divert you.", "Dwarkesh Patel 01:48:36", "No, no, I did the diverting. It was interesting. Do we have some sense of what Japanese public opinion was like through the 30s and 40s?", "Sarah Paine 01:48:44", "I don't know. It's also because it’s becoming more and more authoritarian. Yeah, I have no idea on that subject. Again, it's a social historian, that would require really detailed knowledge that I don't have.", "Dwarkesh Patel 01:49:02", "I had Daniel Yergin, who wrote The Prize , which is the Pulitzer Prize-winning history of oil. He has a big section in the book about World War II because, in his view, oil was a big component of why the war happened the way it did. One of the interesting points he made was that he thinks that the kamikaze missions, a big part of the motivation behind them, was that Japan lacked the fuel to have the pilots fly back. Is that accurate?", "Sarah Paine 01:49:29", "Oh, it's worse than that. They didn't have the fuel to actually train the pilots. And as we learned in 9/11- sorry to bring it back- it's really much easier to learn how to take off a plane and crash it into something, right? That's what those guys were doing. Than it is to teach them how to make safe landings under weird conditions.", "So, the kamikaze was just an act of—it's a guided missile. And it was an act of having very few assets at the end of the war and not enough fuel to fly anyone anywhere.", "So it's going to be a guided missile into an aircraft carrier or a battleship, destroyer, or something.", "Dwarkesh Patel 01:50:07", "One of the things I learned from your book is that the overwhelming fraction of deaths on the Japanese side during the war happened after it was known that they were going to lose. I think in the last 14 months—and this is all from your book—1.8 million of the total 2.1 million Japanese deaths, that's 85% of the total, happened in the last 14 months of the war. A similar thing is true with Germany, where I think 43% of the deaths happened in the last year of the war, I think 2.3 million out of the 5.3 million.", "Walk me through what is going on inside government – when the higher-level people must know that they're losing, but they're going on to make bigger and bigger sacrifices.", "Sarah Paine 01:50:48", "Well, you're conflating, I think, Western values about what the purpose of the government is. Well, in common wealth, common weal—common weal is common good—this notion that governments are about the well-being of individuals. Okay, well, we're not in societies where we're talking about individuals for openers, certainly not in Japan.", "Then there's another: why the deaths so huge? It's at the end of these wars, you've broken the transportation system that gets produce to hungry mouths. And you've also removed so much manpower, literally from the fields, you aren't producing anything.", "So that you're talking about mass starvation as a result of having done a number of previous years of warfare. And also, this mass starvation helps account for why one side gets shattered and quits.", "01:51:46", "And then the thing that ought to give everyone pause is, okay, if Japan's economy was, I don't know, a tenth of ours in World War II, and these are the kind of costs that they could inflict, watch it on getting into wars with countries that you think you're vastly superior to. It may not work out.", "The costs are horrific. You're illustrating the problems with this continental order of taking territories. This is how it goes.", "But sometimes other people visit wars on you, and then you're into their world of hurt. I mean, I think it's like Angela Merkel or Neville Chamberlain back in World War II, who's going, surely these people don't want to do this. It's going to wind up being a bloodbath. Surely they don't want to.", "So Neville Chamberlain makes the compromise at Munich, thinking, surely this will be it. We'll compromise them, and they'll go. Wrong. They're going to come in for the whole thing.", "Dwarkesh Patel 01:52:45", "Yeah, I still feel like I don't understand. So there are a couple of options, and maybe the correct answer isn't one of these.", "One is that there's just denialism. They genuinely do not understand that they are going to lose the war. Another is that they know they're going to lose, but they would prefer to just go out dying.", "Sarah Paine 01:53:07", "Well they're dead anyway, so who cares, right? Well, this is the problem with nuclear weapons. Push someone into a large corner, or a tight enough corner, maybe they think, \"Well, it's my last day, so guess what? It's going to be your last day, too.\"", "Dwarkesh Patel 01:53:18", "How much should we question the demand for unconditional surrender, knowing that most of the deaths in World War II happened because, if there wasn't unconditional surrender, maybe we could have reached a peace earlier, and most of the deaths could have been spared?", "Sarah Paine 01:53:29", "I think by the time, those deaths are Japanese deaths for us. We're looking at these things and we've lost a lot of people. Our idea is, \"Hey, we're almost there.\" We're not worried about Japanese civilian deaths or Japanese military deaths.", "We're looking at, these are the people that have killed how many Chinese? And I don't believe China was ever part of Japan. What were they doing there?", "So then you're asking for tender mercies out of Americans for the Japanese in those days? No. After all that had happened. It's ugly by the time you've killed this many people, the kind of bitterness on both sides.", "Dwarkesh Patel 01:54:09", "Was it a mistake to demilitarize Japan after the war? A couple of things: One is, look, they did become an ally afterwards, and if they did have a military, they could have helped us in the Korean War. The outcome might have been different with military help.", "Sarah Paine 01:54:26", "Yeah, it was the secret part. They were the ones who knew about de-mining. So we had a big military drawdown after the end of World War II, and the Navy really got cut back because it was going to be all nuclear weapons, and the Air Force was going to deliver, so the Navy didn't get a whole bunch of things that it cared about, including minesweepers.", "So when the North Koreans are throwing a lot of mines around, it's Japanese that are helping. It was secret; that's why you don't know about it. It's not your fault. It came out in research that's recent.", "Another thing, when MacArthur does his Incheon landings , it's very tricky because the tides are, I don't know how many feet, like 30-foot tides. It's enormous. So you can really get stuck on mud flats if you don't time that right and know where you’re going. Who are the pilots to bring it all in? It's Japanese.", "And also, they did- they’ve always had a self-defense force, and they have a very competent navy. It's called a Self-Defense Force, but anybody else would call it a navy. But that then fits in with what the Constitution says we're supposed to have.", "But why is it a mistake to demilitarize Japan?", "Dwarkesh Patel 01:55:32", "Another reason is they had a vested interest in making sure the Communists don't take over in China. They could have provided some amount of support. If the Nationalists, it's obvious the Nationalists are going to lose, at that point maybe they would have decided to support the Nationalists against the Communists.", "Sarah Paine 01:55:47", "It's too late. They've gutted the Nationalist armies. That was a massive strategic error on the-", "Part of it comes from arrogance of don't denigrate the other side. And so the Japanese thought they could… China’s a failed state. These people are hopeless. They can't even decide what they're doing. They got warlords everywhere. We’ll just push them around and we’ll get what we want.", "And they do really well moving into Manchuria. Manchuria is, it’s bigger than Germany and France combined. The Japanese actually stabilize it, do lots of investment, that's why it becomes the most industrialized part of Asia outside the home islands, more so than Shanghai, and this is in the 30s. And so they're looking at it like, \"Things look pretty good, and they'll get away with it.\" And they didn’t.", "Dwarkesh Patel 01:56:38", "Let's say in the couple of decades before the war, if you looked at somebody who's young and ambitious in Japan, the kind of person who might come to Silicon Valley today if they're in America, what do they want to do? Do they want to join zaibatsus ? Do they want to become a general of the military?", "Sarah Paine 01:56:54", "This is social history. I just don't know. You don't see a lot. I just truly don’t know.", "It has to do with- there is so much to know in this world that all of us, all we can do is know a corner of it. So my corner has been a top-down. Actually, in my expertise, I started out with a major field in Russian history and a minor field in Chinese history. And then I realized to deal with those two, you had to understand Japan.", "So you're asking me in my weak suit about the social history of that country, and I can't answer it.", "Dwarkesh Patel 01:57:30", "My friend, Ege Erdil, has this theory that, look, you have… many of the powers that we fought in the 20th century -- Japan, Korea, Vietnam, not fought directly, but at least fought over -- we ended up becoming allies with them later on, and in fact, they're some of our best allies.", "There’s two ways to think about it. One is that we had especially good diplomacy with them after the war. The second is that despite having the same interests, despite being in a position where we should naturally have been friends, we basically screwed up the diplomacy beforehand and we didn't ally ourselves with the peaceful factions within these countries. Basically illustrates a failure of diplomacy on our part, that the natural allies we ended up going to war with.", "Sarah Paine 01:58:18", "There's no manner of diplomacy that's going to solve Hitler. One of the lessons is, don't let the world global economy melt down. When that happens, people who are, not only Americans will get desperate, but people who are poorer than Americans, which is lots of the world, will be truly desperate. So you don’t want to do that.", "As for why the Japanese and the Germans are such wonderful allies as they are, is a- what’s the word? It's a testament to having a generous peace. So the peace after that war was not to get even, which is what had been the peace in World War I. It was “the Germans, they'd done all this terrible stuff, we're going to make them pay all kinds of money and we’re going to tell them it was all their fault and blah blah blah”. That doesn't work well.", "01:59:12", "That was of course what the Russians did to East Germany. But for West Germany, it's how do you reintegrate them back into Europe and get them back on in? And the Japanese as well. And also a serendipitous fact of the Korean War, the Japanese economy was a total mess at the end of this war.", "And the United States, of course, is overtaxed because there's Europe to rebuild and there isn't enough money to do everything. But when the Korean War hits, which is immediately in 1950, tons of the supplies were bought in Japan.", "That's how the Japanese economy initially is restored, is through the Korean War and then the Vietnam War. Because it's the local place to buy, or more local than the United States, to buy a lot of supplies. So that's another piece of why things work out.", "02:00:04", "And then there's also the Germans and Japanese who lived- you were asking about what do conscripts think? There are two brilliant generations in Japan's modern history: one's the Meiji generation, and the other is the post-World War II generation.", "They did this incredible makeover, and I think of all these wonderful, high-quality products they built. Like, I love my Toyota. I want to rent an American car, and “I can't find anything.", "I'm going to kill everyone as I'm trying to figure out how to adjust something”. Whereas Toyota is like [makes noise], and we're good.", "Dwarkesh Patel 02:00:45", "When you were showing the things we learned from the diary entries of different admirals or generals on the Japanese side, as you were going through the archives and looking at these things, I'm curious, were they writing knowing that they were writing for future historians? Or was that actually just a personal record-keeping thing? What was the motivation behind it?", "Sarah Paine 02:01:08", "I did not read that stuff in archives. Japanese Archives, I was doing strictly diplomatic stuff. The Admiral Ugaki Matome's diary, which is the one thing I cited on these subjects, it was translated. Someone ran into it, translated it into English, and so I just read the whole thing.", "And then the other stuff, the quotations, I think I was ripping off secondary sources where someone else had done the archival research. Then I pulled those, but those are where the quotations came from.", "So again, I haven't done this deep read into all of the Japanese thinking about all of these… of what motivated them in their careers. That I have not done.", "Dwarkesh Patel 02:01:54", "Was there some big “aha” moment when you were going through all this Japanese stuff where like, \"I get why they did certain-\" like, \"Why certain things transpired in a certain way?\"", "Sarah Paine 02:02:05", "Oh, well, this lecture. When I came to the Naval War College in 2000, they assigned me a couple of lectures because it's a group-talk class, and everyone only gives two, three, or four lectures. And so I was junior; I think I got one or two.", "And so I was told to lecture on World War II Pacific, and I'm like, \"Oh, great, U.S. Navy. This will be a joke, me telling them about World War II Pacific.\" Because I'm a historian, major field Russia, minor field China, and I've dabbled in Japan. And now I'm supposed to talk about this.", "02:02:42", "And so my husband said, \"Well, hey, I've taught this survey of Japan, read this Bushido stuff.\" He had some of the books hanging around the house. So I read that, and then I had read some basic things about World War II.", "And then I… This is why I'm telling you, I think it's a useful way to approach other cultures: read what they read. What are the key books? Apparently, these were key books.", "Then you go, okay, if I believe this, then I'm going to take Western analysis, of going [makes noise] rather than mirror imaging. I'm going to basically take their software, put it in here in my brain, and then say, \"Okay, does this inform me of what they're doing in the banzai charges?\" I was like, \"Oh, yeah, that actually makes sense now,\" instead of saying, \"What are these nut jobs doing?\" And it's everyone.", "No, no, actually, if you believe these things, then this would explain how you're doing things. So that's how that all came about.", "Dwarkesh Patel 02:03:46", "One thing I was wondering about when you were talking about the inter-service rivalries is, what were the allies doing that they avoided this? Not only between the branches of the military, but I mean, allies, the word literally says it. Japan wasn't coordinating with Germany in the way that America was coordinating with Britain.", "How are the institutions set up, or the culture set up, that made this possible?", "Sarah Paine 02:04:09", "Well A, this institutional rivalry is terrible all over the place. Alright, more concepts that are useful: It's really helpful in looking at people when you have disagreements with them, to ask: who is their primary adversary or rival? Not their secondary - they probably have a whole list of people who annoy them - but who's the number one?", "And then, where is the primary theater of wherever that disagreement is? Look at that. And then, on the disagreement, how big a deal is it? Is it existential?", "Japan thought- the reason it was undeterrable, that we didn’t figure out- they thought their entire existence was at stake, that they absolutely had to have territory in China to survive. Now, whether that's correct or not is a completely different story. If they believe it, that's what's motivating them.", "02:05:00", "So, on these, when you're looking at the war on the Allied side, who's the primary enemy? Hitler. Where's the primary theater? Germany. Even though there's Japan, it was Germany first.", "And what about the ultimate goals? Well, actually, that doesn't align at all because the Russians want a communist wonderland, we want everyone decolonized, and the British want the empire on which the sun never sets. But they all shared an intermediate objective that Hitler has to go, and that's a superglue of an alliance.", "Now, let's flip it to the Axis. Okay, Italy, what does Italy want? Roman Empire, whatever it is. So, who's the primary enemy? Well, Britain, because that's where British interests are.", "Okay, Hitler, what's he all about? He wants to do his Lebensraum, Nazi wonderland all over Eurasia. So, who's his primary enemy? That would be Russia.", "02:06:04", "Japan, what do they want? They want the big Japanese empire. Who's their primary enemy? It's China. And then when they get us in, they got a whole new problem in their hands. None of this aligns, and so the Axis aren't going to be trading many resources with each other, whereas the Allies... and it really goes back to your original question about the British.", "Part of their way of running wars is you absolutely coordinate with allies. You give them serious resources, which is what they finally figured out at the end of the Napoleonic Wars, where they're paying big money to support different allies, and then you've got to put skin in the game, actually send your armies.", "So the British, and we take this from them, we're students of the British on this, do massive sharing with Lend-Lease. And Stalin isn't sharing too much with us.", "And then we've got the unified commands with the British because they're all about doing that. The Russians won't even let us on their territory to do things. And there are only little bits with the Arctic convoys going up to Murmansk, and it's hardly anyone who's up there.", "This is a different cultural tradition on how to deal with things.", "Dwarkesh Patel 02:07:18", "All right, we've come full circle. I think that's a great place to close things. This was excellent. Thank you so much, Sarah.", "Sarah Paine 02:07:25", "Thank you for coming." ]
[ "https://www.gapingvoid.com/are-you-playing-half-court-tennis/", "https://en.wikipedia.org/wiki/Office_of_Net_Assessment", "https://www.ikn.army.mil/apps/MIPBW/MIPB_Features/Kwoun.pdf", "https://www.goodreads.com/quotes/10408355-in-respect-to-the-employment-of-troops-ground-may-be", "https://en.wikipedia.org/wiki/Grand_strategy", "https://en.wikipedia.org/wiki/Hotwash", "https://en.wikipedia.org/wiki/Modus_vivendi", "https://en.wikipedia.org/wiki/Genr%C5%8D", "https://en.wikipedia.org/wiki/Battle_of_Inchon#Incheon_infiltration", "https://en.wikipedia.org/wiki/Zaibatsu" ]
https://www.dwarkesh.com/p/satya-nadella
Satya Nadella – Microsoft’s AGI Plan & Quantum Breakthrough
[ "0:00:00 - Intro", "Dwarkesh Patel", "Satya, thank you so much for coming on the podcast.", "In a second, we're going to get to the two breakthroughs that Microsoft has just made, and congratulations, same day in Nature: the Majorana zero chip, which we have in front of us right here, and also the world human action models . But can we just continue the conversation we were having a second ago? You're describing the ways in which the things you were seeing in the 80s and 90s, you're seeing them happen again.", "Satya Nadella", "The thing that is exciting for me... Dwarkesh, first of all, it's fantastic to be on your podcast. I'm a big listener, and I love the way that you do these interviews and the broad topics that you explore.", "The thing that is exciting for me… It reminds me a little bit of my, I'd say, first few years even in the tech industry, starting in the 90s, where there was real debate about whether it's going to be RISC or CISC , or, \"Hey, are we really going to be able to build servers using x86 ?\"", "When I joined Microsoft, that was the beginning of what was Windows NT. So, everything from the core silicon platform to the operating system to the app tier- that full stack approach- the entire thing is being litigated.", "You could say cloud did a bunch of that, and obviously distributed computing and cloud did change client-server. The web changed massively. But this does feel a little more like maybe more full-stack than even the past that at least I've been involved in.", "Dwarkesh Patel", "When you think about which decisions ended up being the long-term winners in the 80s and 90s, and which ones didn't, and especially when you think about- you were at Sun Microsystems , they had an interesting experience with the 90s dotcom bubble . People talk about this data center build-out as being a bubble, but at the same time, we have the Internet today as a result of what was built out then.", "What are the lessons about what will stand the test of time? What is an inherent secular trend? What is just ephemeral?", "Satya Nadella", "If I go back, the four big transformations that I've been part of, the client and the client-server. So that's the birth of the graphical user interface and the x86 architecture, basically allowing us to build servers.", "It was very clear to me. I remember going to what is PDC in '91 , in fact I was at Sun at that time. In '91, I went to Moscone . That's when Microsoft first described the Win32 interface and it was pretty clear to me what was going to happen, where the server was also going to be an x86 thing. When you have the scale advantages accruing to something, that's the secular bet you have to place. What happened in the client was going to happen on the server side, and then you were able to actually build client-server applications. So, the app model became clear.", "Then the web was the big thing for us, which we had to deal with in starting, in fact as soon as I joined Microsoft, the Netscape browser or the Mosaic browser came out what, I think, December or November of '93, right? I think is when Andreessen and crew had that.", "So that was a big game-changer, in an interesting way, just as we were getting going on what was the client-server wave, and it was clear that we were going to win it as well. We had the browser moment, and so we had to adjust. And we did a pretty good job of adjusting to it because the browser was a new app model.", "We were able to embrace it with everything we did, whether it was HTML in Word or building a new thing called the browser ourselves and competing for it, and then building a web server on our server stack and go after it. Except, of course, we missed what turned out to be the biggest business model on the web, because we all assumed the web is all about being distributed, who would have thought that search would be the biggest winner in organizing the web? And so that's where we obviously didn't see it, and Google saw it and executed super well.", "So that's one lesson learned for me: you have to not only get the tech trend right, you also have to get where the value is going to be created with that trend. These business model shifts are probably tougher than even the tech trend changes.", "0:05:04 - AI won't be winner-take-all", "Dwarkesh Patel", "Where is the value going to be created in AI?", "Satya Nadella", "That's a great one. So I think there are two places where I can say with some confidence. One is the hyperscalers that do well, because the fundamental thing is if you sort of go back to even how Sam and others describe it, if intelligence is log of compute, whoever can do lots of compute is a big winner.", "The other interesting thing is, if you look at underneath even any AI workload, like take ChatGPT, it's not like everybody's excited about what's happening on the GPU side, it's great. In fact, I think of my fleet even as a ratio of the AI accelerator to storage, to compute. And at scale, you've got to grow it.", "Dwarkesh Patel", "Yeah.", "Satya Nadella", "And so, that infrastructure need for the world is just going to be exponentially growing.", "Dwarkesh Patel", "Right.", "Satya Nadella", "So in fact it's manna from heaven to have these AI workloads because guess what? They're more hungry for more compute, not just for training, but we now know, for test time. When you think of an AI agent, it turns out the AI agent is going to exponentially increase compute usage because you're not even bound by just one human invoking a program. It's one human invoking programs that invoke lots more programs. That's going to create massive, massive demand and scale for compute infrastructure. So our hyperscale business, Azure business, and other hyperscalers, I think that’s a big thing.", "Then after that, it becomes a little fuzzy. You could say, hey, there is a winner-take-all model- I just don't see it. This, by the way, is the other thing I’ve learned: being very good at understanding what are winner-take-all markets and what are not winner-take-all markets is, in some sense, everything. I remember even in the early days when I was getting into Azure, Amazon had a very significant lead and people would come to me, and investors would come to me, and say, \"Oh, it's game over. You'll never make it. Amazon, it's winner-take-all.\"", "Having competed against Oracle and IBM in client-server, I knew that the buyers will not tolerate winner-take-all. Structurally, hyperscale will never be a winner-take-all because buyers are smart.", "Consumer markets sometimes can be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you got to be one of the multiple suppliers.", "That, I think, is what will happen even on the model side. There will be open-source. There will be a governor. Just like on Windows, one of the big lessons learned for me was, if you have a closed-source operating system, there will be a complement to it, which will be open source.", "And so to some degree that's a real check on what happens. I think in models there is one dimension of, maybe there will be a few closed source, but there will definitely be an open source alternative, and the open-source alternative will actually make sure that the closed-source, winner-take-all is mitigated.", "That's my feeling on the model side. And by the way, let's not discount if this thing is really as powerful as people make it out to be, the state is not going to sit around and wait for private companies to go around and… all over the world. So, I don't see it as a winner-take-all.", "Then above that, I think it's going to be the same old stuff, which is in consumer, in some categories, there may be some winner-take-all network effect. After all, ChatGPT is a great example.", "It's an at-scale consumer property that has already got real escape velocity. I go to the App Store, and I see it's always there in the top five, and I say “wow, that's pretty unbelievable”.", "So they were able to use that early advantage and parlay that into an app advantage. In consumer, that could happen. In the enterprise again, I think there will be, by category, different winners. That's sort of at least how I analyze it.", "Dwarkesh Patel", "I have so many follow-up questions. We have to get to quantum in just a second, but on the idea that maybe the models get commoditized: maybe somebody could have made a similar argument a couple of decades ago about the cloud – that fundamentally, it's just a chip and a box.", "But in the end, of course, you and many others figured out how to get amazing profit margins in the cloud. You figured out ways to get economies of scale and add other value. Fundamentally, even forgetting the jargon, if you've got AGI and it's helping you make better AIs – right now, it's synthetic data and RL ; maybe in the future, it's an automated AI researcher – that seems like a good way to entrench your advantage there. I'm curious what you make of that, just the idea that it really matters to be ahead there.", "Satya Nadella", "At scale, nothing is commodity. To your point about cloud, everybody would say, \"Oh, cloud's a commodity.\" Except, when you scale... That's why the know-how of running a hyperscaler... You could say, \"Oh, what the heck? I can just rack and stack servers.\"", "Dwarkesh Patel", "Right.", "Satya Nadella", "In fact, in the early days of hyperscale, most people thought “there are all these hosters, and those are not great businesses. Will there be anything? Is there a business even in hyperscale?” And it turns out there is a real business, just because of the know-how of running, in the case of Azure, the world's computing of 60-plus regions with all the compute. It's just a tough thing to duplicate.", "So I was more making the point, is it one winner? Is it a winner-take-all or not? Because that you've got to get right. I like to enter categories which are big TAM s, where you don't have to have the risk of it all being winner-take-all. The best news to be in is a big market that can accommodate a couple of winners, and you're one of them.", "That's what I meant by the hyperscale layer. In the model layer, one is models need ultimately to run on some hyperscale compute. So that nexus, I feel, is going to be there forever. It's not just the model; the model needs state, that means it needs storage, and it needs regular compute for running these agents and the agent environments.", "And so that's how I think about why the limit of one person running away with one model and building it all may not happen.", "Dwarkesh Patel", "On the hyperscaler side, and by the way, it's also interesting the advantage you as a hyperscaler would have in the sense that, especially with inference time scaling and if that's involved in training future models, you can amortize your data centers and GPUs, not only for the training, but then use them again for inference.", "I'm curious what kind of hyperscaler you consider Microsoft and Azure to be. Is it on the pre-training side? Is it on providing the O3 -type inference? Or are you just, we’re going to host and deploy any single model that's out there in the market, and we are sort of agnostic about that?", "Satya Nadella", "It’s a good point. The way we want to build out the fleet is [to], in some sense ride Moore's law . I think this will be like what we've done with everything else in the past: every year keep refreshing the fleet, you depreciate it over whatever the lifetime value of these things are, and then get very very good at the placement of the fleet such that you can run different jobs at it with high utilization. Sometimes there are very big training jobs that need to have highly concentrated peak flops that are provisioned to it that also need to cohere. That's great. We should have enough data center footprint to be able to give that.", "But at the end of the day, these are all becoming so big, even in terms of if you take pre-training scale, if it needs to keep going, even pre-training scale at some point has to cross data center boundaries. It's all more or less there.", "So, great, once you start crossing pre-training data center boundaries, is it that different than anything else? The way I think about it is hey, distributed computing will remain distributed, so go build out your fleet such that it's ready for large training jobs, it's ready for test-time compute, it’s ready- in fact, if this RL thing that might happens, you build one large model, and then after that, there’s tons of RL going on. To me, it's kind of like more training flops, because you want to create these highly specialized, distilled models for different tasks.", "So you want that fleet, and then the serving needs. At the end of the day, speed of light is speed of light, so you can't have one data center in Texas and say, \"I'm going to serve the world from there.\"", "You've got to serve the world based on having an inference fleet everywhere in the world. That's how I think of our build-out of a true hyperscale fleet.", "Oh, and by the way, I want my storage and compute also close to all of these things, because it's not just AI accelerators that are stateless. My training data itself needs storage, and then I want to be able to multiplex multiple training jobs, I want to be able to then have memory, I want to be able to have these environments in which these agents can go execute programs. That's kind of how I think about it.", "0:15:18 - World economy growing by 10%", "Dwarkesh Patel", "You recently reported that your yearly revenue from AI is $13 billion. But if you look at your year-on-year growth on that, in like four years, it'll be 10x that. You'll have $130 billion in revenue from AI, if the trend continues. If it does, what do you anticipate doing with all that intelligence, this industrial scale use?", "Is it going to be through Office? Is it going to be you deploying it for others to host? You've got to have the AGIs to have $130 billion in revenue? What does it look like?", "Satya Nadella", "The way I come at it, Dwarkesh, it's a great question because at some level, if you're going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth.", "Before I get to what Microsoft's revenue will look like, there's only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?", "So in 2025, as we sit here, I'm not an economist, at least I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth.", "That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That's the real marker. It can't just be supply-side.", "In fact that’s the thing, a lot of people are writing about it, and I'm glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry.", "But that's to me the moment. Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.", "Dwarkesh Patel", "Okay, so if the world grew at 10%, the world economy is $100 trillion or something , if the world grew at 10%, that's like an extra $10 trillion in value produced every single year. If that is the case, you as a hyperscaler... It seems like $80 billion is a lot of money. Shouldn't you be doing like $800 billion?", "If you really think in a couple of years, we could be really growing the world economy at this rate, and the key bottleneck would be: do you have the compute necessary to deploy these AIs to do all this work?", "Satya Nadella", "That is correct. But by the way, the classic supply side is, \"Hey, let me build it and they’ll come.\" That's an argument, and after all we've done that, we've taken enough risk to go do it.", "But at some point, the supply and demand have to map. That's why I'm tracking both sides of it. You can go off the rails completely when you are hyping yourself with the supply-side, versus really understanding how to translate that into real value to customers.", "That's why I look at my inference revenue. That's one of the reasons why even the disclosure on the inference revenue... It's interesting that not many people are talking about their real revenue, but to me, that is important as a governor for how you think about it.", "You're not going to say they have to symmetrically meet at any given point in time, but you need to have existence proof that you are able to parlay yesterday's, let’s call it capital, into today's demand, so that then you can again invest, maybe exponentially even, knowing that you're not going to be completely rate mismatched.", "Dwarkesh Patel", "I wonder if there's a contradiction in these two different viewpoints, because one of the things you've done wonderfully is make these early bets. You invested in OpenAI in 2019 , even before there was Copilot and any applications.", "If you look at the Industrial Revolution, these 6%, 10% build-outs of railways and whatever things, many of those were not like, \"We've got revenue from the tickets, and now we're going to...\"", "Satya Nadella", "There was a lot of money lost.", "Dwarkesh Patel", "That's true. So, if you really think there's some potential here to 10x or 5x the growth rate of the world, and then you're like, \"Well, what is the revenue from GPT-4?\"", "If you really think that's the possibility from the next level up, shouldn't you just, \"Let's go crazy, let's do the hundreds of billions of dollars of compute?\" I mean, there's some chance, right?", "Satya Nadella", "Here’s the interesting thing, right? That's why even that balanced approach to the fleet, at least, is very important to me. It's not about building compute. It's about building compute that can actually help me not only train the next big model but also serve the next big model. Until you do those two things, you're not going to be able to really be in a position to take advantage of even your investment.", "So, that's kind of where it's not a race to just building a model, it's a race to creating a commodity that is getting used in the world to drive… You have to have a complete thought, not just one thing that you’re thinking about.", "And by the way, one of the things is that there will be overbuild. To your point about what happened in the dotcom era, the memo has gone out that, hey, you know, you need more energy, and you need more compute. Thank God for it. So, everybody's going to race.", "In fact, it's not just companies deploying, countries are going to deploy capital, and there will be clearly... I'm so excited to be a leaser, because, by the way; I build a lot, I lease a lot. I am thrilled that I'm going to be leasing a lot of capacity in '27, '28 because I look at the builds, and I'm saying, \"This is fantastic.\" The only thing that's going to happen with all the compute builds is the prices are going to come down.", "0:21:39 - Decreasing price of intelligence", "Dwarkesh Patel", "Speaking of prices coming down, you recently tweeted after the DeepSeek model came out about Jevons’ Paradox . I'm curious if you can flesh that out. Jevons’ Paradox occurs when the demand for something is highly elastic. Is intelligence that bottlenecked on prices going down?", "Because when I think about, at least my use cases as a consumer, intelligence is already so cheap. It's like two cents per million tokens. Do I really need it to go down to 0.02 cents? I'm just really bottlenecked on it becoming smarter. If you need to charge me 100x, do a 100x bigger training run. I'm happy for companies to take that.", "But maybe you're seeing something different on the enterprise side or something. What is the key use case of intelligence that really requires it to get to 0.002 cents per million tokens?", "Satya Nadella", "I think the real thing is the utility of the tokens. Both need to happen: One is intelligence needs to get better and cheaper. And anytime there's a breakthrough, like even what DeepSeek did, with the efficient frontier of performance per token changes, the curve gets bent, and the frontier moves. That just brings more demand. That's what happened with cloud.", "Here’s an interesting thing: We used to think “oh my God, we've sold all the servers in the client-server era”. Except once we started putting servers in the cloud, suddenly people started consuming more because they could buy it cheaper, and it was elastic, and they could buy it as a meter versus a license, and it completely expanded.", "I remember going, let’s say, to a country like India and talking about “here is SQL Server”. We sold a little, but man, the cloud in India is so much bigger than anything that we were able to do in the server era. I think that's going to be true.", "If you think about, if you want to really have, in the Global South, in a developing country, if you had these tokens that were available for healthcare that were really cheap, that would be the biggest change ever.", "Dwarkesh Patel", "I think it's quite reasonable for somebody to hear people like me in San Francisco and think “they're kind of silly; they don't know what it's actually like to deploy things in the real world”.", "As somebody who works with these Fortune 500s and is working with them to deploy things for hundreds of millions, billions of people, what's your sense on how fast deployment of these capabilities will be?", "Even when you have working agents, even when you have things that can do remote work for you, with all the compliance and with all the inherent bottlenecks, is that going to be a big bottleneck, or is that going to move past pretty fast?", "Satya Nadella", "It is going to be a real challenge because the real issue is change management or process change. Here's an interesting thing: one of the analogies I use is, just imagine how a multinational corporation like us did forecasts pre-PC, and email, and spreadsheets. Faxes went around. Somebody then got those faxes and did an interoffice memo that then went around, and people entered numbers, and then ultimately a forecast came, maybe just in time for the next quarter.", "Then somebody said, \"Hey, I'm just going to take an Excel spreadsheet, put it in email, send it around. People will go edit it, and I'll have a forecast.\" So, the entire forecasting business process changed because the work artifact and the workflow changed.", "That is what needs to happen with AI being introduced into knowledge work. In fact, when we think about all these agents, the fundamental thing is there's a new work and workflow.", "For example, even prepping for our podcast, I go to my copilot and I say, \"Hey, I'm going to talk to Dwarkesh about our quantum announcement and this new model that we built for game generation. Give me a summary of all the stuff that I should read up on before going.\" It knew the two Nature papers, it took that. I even said, \"Hey, go give it to me in a podcast format.\" And so, it even did a nice job of two of us chatting about it.", "So that became—and in fact, then I shared it with my team. I took it and put it into Pages, which is our artifact, and then shared. So the new workflow for me is I think with AI and work with my colleagues.", "That's a fundamental change management of everyone who's doing knowledge work, suddenly figuring out these new patterns of \"How am I going to get my knowledge work done in new ways?\" That is going to take time. It's going to be something like in sales, and in finance, and supply chain.", "For an incumbent, I think that this is going to be one of those things where—you know, let's take one of the analogies I like to use is what manufacturers did with Lean . I love that because, in some sense, if you look at it, Lean became a methodology of how one could take an end-to-end process in manufacturing and become more efficient. It's that continuous improvement, which is reduce waste and increase value.", "That's what's going to come to knowledge. This is like Lean for knowledge work, in particular. And that's going to be the hard work of management teams and individuals who are doing knowledge work, and that's going to take its time.", "Dwarkesh Patel", "Can I ask you just briefly about that analogy? One of the things Lean did is physically transform what a factory floor looks like. It revealed bottlenecks that people didn't realize until you're really paying attention to the processes and workflows.", "You mentioned briefly what your own workflow—how your own workflow has changed as a result of AIs. I'm curious if we can add more color to what will it be like to run a big company when you have these AI agents that are getting smarter and smarter over time?", "Satya Nadella", "It's interesting you ask that. I was thinking, for example, today if I look at it, we are very email heavy. I get in in the morning, and I’m like, man my inbox is full, and I'm responding, and so I can't wait for some of these Copilot agents to automatically populate my drafts so that I can start reviewing and sending.", "But I already have in Copilot at least ten agents, which I query them different things for different tasks. I feel like there's a new inbox that's going to get created, which is my millions of agents that I'm working with will have to invoke some exceptions to me, notifications to me, ask for instructions.", "So at least what I'm thinking is that there's a new scaffolding, which is the agent manager. It's not just a chat interface. I need a smarter thing than a chat interface to manage all the agents and their dialogue.", "That's why I think of this Copilot, as the UI for AI, is a big, big deal. Each of us is going to have it. So basically, think of it as: there is knowledge work, and there's a knowledge worker. The knowledge work may be done by many, many agents, but you still have a knowledge worker who is dealing with all the knowledge workers. And that, I think, is the interface that one has to build.", "0:30:19 - Quantum Breakthrough", "Dwarkesh Patel", "You're one of the few people in the world who can say that you have access to 200,000… you have this swarm of intelligence around you in the form of Microsoft the company and all its employees. And you have to manage that, and you have to interface with that, how to make best use of that. Hopefully, more of the world will get to have that experience in the future.", "I'd be curious about how your inbox, if that means everybody's inbox, will look like yours in the morning.", "Okay, before we get to that, I want to keep asking you more about AI, but I really want to ask you about the big breakthrough in quantum that Microsoft Research has announced. So can you explain what's going on?", "Satya Nadella", "This has been another 30-year journey for us. It's unbelievable. I'm the third CEO of Microsoft who's been excited about quantum.", "The fundamental breakthrough here, or the vision that we've always had is, you need a physics breakthrough in order to build a utility-scale quantum computer that works. We took the path of saying, the one way for having a less noisy or more reliable qubit is to bet on a physical property that by definition is more reliable and that's what led us to the Majorana zero modes , which was theorized in the 1930s. The question was, can we actually physically fabricate these things? Can we actually build them?", "So the big breakthrough effectively, and I know you talked to Chetan , was that we now finally have existence proof and a physics breakthrough of Majorana zero modes in a new phase of matter effectively. This is why we like the analogy of thinking of this as the transistor moment of quantum computing, where we effectively have a new phase, which is the topological phase, which means we can even now reliably hide the quantum information, measure it, and we can fabricate it. And so now that we have it, we feel like with that core foundational fabrication technique out of the way, we can start building a Majorana chip.", "That Majorana One which I think is going to basically be the first chip that will be capable of a million qubits, physical. And then on that, thousands of logical qubits, error-corrected. And then it's game on. You suddenly have the ability to build a real utility-scale quantum computer, and that to me is now so much more feasible. Without something like this, you will still be able to achieve milestones, but you'll never be able to build a utility-scale computer. That's why we're excited about it.", "Dwarkesh Patel", "Amazing. And by the way, I believe this is it right here.", "Satya Nadella", "That is it.", "Dwarkesh Patel", "Yes.", "Satya Nadella", "I forget now, are we calling it Majorana? Yes, that's right. Majorana One. I'm glad we named it after that .", "To think that we are able to build something like a million-qubit quantum computer in a thing of this size is just unbelievable. That's the crux of it: unless and until we could do that, you can't dream of building a utility-scale quantum computer.", "Dwarkesh Patel", "And you're saying the eventual million qubits will go on a chip this size? Okay, amazing.", "Other companies have announced 100 physical qubits, Google's, IBM's, others. When you say you've announced one, but you're saying that yours is way more scalable in the limit.", "Satya Nadella", "Yeah. The one thing we’ve also done is we've taken an approach where we've separated our software and our hardware. We're building out our software stack, and we now have, with the neutral atom folks, the ion trap folks, we're also working with others who even have pretty good approaches with photonics and what have you, that means there'll be different types of quantum computers. In fact, we have what, I think that the last thing that we announced was 24 logical qubits. So we have also got some fantastic breakthroughs on error correction and that's what is allowing us, even on neutral atom and ion trap quantum computers, to build these 20 plus, and I think that'll keep going even throughout the year; you'll see us improve that yardstick.", "But we also then said, \"Let's go to the first principles and build our own quantum computer that is betting on the topological qubit .\" And that's what this breakthrough is about.", "Dwarkesh Patel", "Amazing. The million topological qubits, thousands of logical qubits, what is the estimated timeline to scale up to that level? What does the Moore's law here, if you've got the first transistor, look like?", "Satya Nadella", "We've obviously been working on this for 30 years. I'm glad we now have the physics breakthrough and the fabrication breakthrough.", "I wish we had a quantum computer because by the way, the first thing the quantum computer will allow us to do is build quantum computers, because it's going to be so much easier to simulate atom-by-atom construction of these new quantum gates.", "But in any case, the next real thing is, now that we have the fabrication technique, let us go build that first fault-tolerant quantum computer. And that will be the logical thing.", "So, I would say now I can say, \"Oh, maybe '27, '28, '29, we will be able to actually build this.\" Now that we have this one gate, can I put the thing into an integrated circuit and then actually put these integrated circuits into a real computer? That is where the next logical step is.", "Dwarkesh Patel", "And what do you see as, in '27, '28, you've got it working? Is it a thing you access through the API? Is it something you're using internally for your own research in materials and chemistry?", "Satya Nadella", "It’s a great question. One thing that I've been excited about is, even in today's world… we had this quantum program, and we added some APIs to it. The breakthrough we had maybe two years ago was to think of this HPC stack, and AI stack, and quantum together.", "In fact, if you think about it, AI is like an emulator of the simulator. Quantum is like a simulator of nature. What is quantum going to do? By the way, quantum is not going to replace classical. Quantum is great at what quantum can do, and classical will also...", "Quantum is going to be fantastic for anything that is not data-heavy but is exploration-heavy in terms of the state space. It should be data-light but exponential states that you want to explore. Simulation is a great one: chemical physics, what have you, biology.", "One of the things that we've started doing is really using AI as the emulation engine. But you can then train. So the way I think of it is, if you have AI plus quantum, maybe you'll use quantum to generate synthetic data that then gets used by AI to train better models that know how to model something like chemistry or physics or what have you. These two things will get used together.", "So even today, that's effectively what we're doing with the combination of HPC and AI. I hope to replace some of the HPC pieces with quantum computers.", "Dwarkesh Patel", "Can you tell me a little bit about how you make these research decisions which, in 20 years time, 30 years time, will actually pay dividends, especially at a company of Microsoft's scale? Obviously, you're in great touch with the technical details in this project. Is it feasible for you to do that with all the things Microsoft Research does?", "How do you know the current bet you're making will pay out in 20 years? Does it just have to emerge organically through the org, or how are you keeping track of all this?", "Satya Nadella", "The thing that I feel was fantastic is when Bill, when he started MSR back in '95 I guess. I think in the long history of these curiosity-driven research organizations, to just do a research org that is about fundamental research and MSR, over the years, has built up that institutional strength so when I think about capital allocation or budgets, we first put the chips in and say, \"Here is MSR's budget.\" We gotta go at it each year knowing that most of these bets are not going to pay off in any finite time frame. Maybe the sixth CEO of Microsoft will benefit from it. And in tech that is I think a given.", "The real thing that I think about is, when the time has come for something like quantum or a new model or what have you, can you capitalize? So as an incumbent, if you look at the history of tech, it's not that people didn't invest. It's that you need to have a culture that knows how to take an innovation and scale it.", "That's the hard part, quite frankly, for CEOs and management teams. Which is kind of fascinating. It's as much about good judgment as it is about good culture. Sometimes we've gotten it right; sometimes we've gotten it wrong; I can tell you the thousand projects from MSR that we should have probably led with, but we didn't. And I always ask myself why. It's because we were not able to get enough conviction and that complete thought of how to not only take the innovation but make it into a useful product with a business model that we can then go to market with.", "That's the job of CEOs and management teams: not to just be excited about any one thing, but to be able to actually execute on a complete thing. And that's easier said than done.", "Dwarkesh Patel", "When you mentioned the possibility of three subsequent CEOs of Microsoft, if each of them increases the market cap by an order of magnitude, by the time you've got the next breakthrough, you'll be like the world economy or something.", "Satya Nadella", "Or remember, the world is going to be growing at 10%, so we'll be fine.", "0:42:51 - How Muse will change gaming", "Dwarkesh Patel", "Let's dig into the other big breakthrough you've just made. It's amazing that you have both of them coming out the same day, in your gaming world models. I'd love if you can tell me a little bit about that.", "Satya Nadella", "We're going to call it Muse . It's going to be the model of this world action, or human action model.", "This is very cool. One of the things is that obviously, Dall-E and Sora have been unbelievable in what they've been able to do in terms of generative models. One thing that we wanted to go after was using gameplay data. Can you actually generate games that are both consistent and then have the ability to generate the diversity of what that game represents, and then are persistent to user mods?", "That's what this is. They were able to work with one of our game studios, and this is the other publication in Nature.", "The cool thing is what I'm excited about is bringing--we're going to have a catalog of games soon that we will start using these models, or we're going to train these models to generate, and then start playing them.", "In fact, when Phil Spencer first showed it to me, he had an Xbox controller and this model basically took the input and generated the output based on the input. And it was consistent with the game. That to me is a massive moment of “wow”. It's kind of like the first time we saw ChatGPT complete sentences, or Dall-E draw, or Sora. This is one such moment.", "Dwarkesh Patel", "I got a chance to see some of the videos in the real-time demo this morning with your lead researcher Katja on this. Only once I talked to her did it really hit me how incredible this is, in the sense that we've used AI in the past to model agents, and just using that same technique to model the world around the agent gives consistent real-time – we'll superimpose videos of what this looks like atop this podcast so people can get a chance to see it for themselves. I guess it'll be out by then, so they can also watch it there.", "This in itself is incredible. You, through your span as CEO, have invested tens of hundreds of billions of dollars in building up Microsoft Gaming and acquiring IP.", "In retrospect, if you can just merge all of this data into one big model that can give you this experience of visiting and going through multiple worlds at the same time, and if this is the direction gaming is headed, it seems like a pretty good investment to have made. Did you have any premonition about this?", "Satya Nadella", "I wouldn't say that we invested in gaming to build models. We invested, quite frankly, because- here's an interesting thing about our history: We built our first game before we built Windows. Flight Simulator was a Microsoft product long before we even built Windows.", "So, gaming has got a long history at the company, and we want to be in gaming for gaming's sake. I always start by saying I hate to be in businesses where they're means to some other end. They have to be ends unto themselves.", "And then, yes, we're not a conglomerate. We are a company where we have to bring all these assets together and be better owners by adding value. For example, cloud gaming is a natural thing for us to invest in because that will just expand the TAM and expand the ability for people to play games everywhere.", "The same thing with AI and gaming: we definitely think that it can be helpful in maybe changing- it's kind of like the CGI moment, even for gaming long-term. And it's great. As the biggest, world's largest publisher, this will be helpful. But at the same time, we've got to produce great quality games. I mean, you can't be a gaming publisher without, sort of, first and foremost being focused on that.", "But the fact that this data asset is going to be interesting, not just in a gaming context, but it's going to be a general action model and a world model, it's fantastic. I mean like, you know, I think about gaming data as perhaps, you know, what YouTube is perhaps to Google, gaming data is to Microsoft. And so therefore I'm excited about that.", "Dwarkesh Patel", "Yeah, and that's what I meant, just in the sense of like, you can have one unified experience across many different kinds of games. How does this fit into the other, separate from AI, the other things that Microsoft has worked on in the past, like mixed reality? Maybe giving smaller game studios a chance to build these AAA action games? Just like five, ten years from now, what kinds of ways could you imagine?", "Satya Nadella", "I've thought about these three things as the cornerstones of, in an interesting way, even five, six, seven years ago is when I said the three big bets that we want to place [are] AI, quantum, and mixed reality. And I still believe in them, because in some sense, what are the big problems to be solved?", "Presence. That's the dream of mixed reality. Can you create real presence? Like you and I doing a podcast like this.", "I think it’s still proving to be the harder one of those challenges, quite honestly. I thought it was going to be more solvable. It's tougher, perhaps, just because of the social side of it: wearing things and so on.", "We're excited about, in fact, what we're going to do with Anduril and Palmer , now, with even how they'll take forward the IVAS program, because that's a fantastic use case. And so we'll continue on that front.", "But also, the 2D surfaces. It turns out things like Teams , right, thanks to the pandemic, we've really gotten the ability to create essentially presence through even 2D. And that I think will continue. That's one secular piece.", "Quantum we talked about, and AI is the other one. So these are the three things that I look at and say, how do you bring these things together? Ultimately, not as tech for tech's sake, but solving some of the fundamental things that we, as humans, want in our life, and more, we want them in our economy, driving our productivity. And so if we can somehow get that right, then I think we will have really made progress.", "Dwarkesh Patel", "When you write your next book, you've got to have some explanation of why those three pieces all came together around the same time, right? Like, there's no intrinsic reason you would think quantum and AI should happen in 2028 and 2025 and so forth.", "Satya Nadella", "That's right. At some level, I look at it and say: the simple model I have is, hey is there a systems breakthrough? To me, the systems breakthrough is the quantum thing.", "Is there a business logic breakthrough? That's AI to me, which is: can the logic tier be fundamentally reasoned differently? Instead of imperatively writing code, can you have a learning system? That's the AI one.", "And then the UI side of it is presence.", "0:49:51 - Legal barriers to AI", "Dwarkesh Patel", "Going back to AI for a second, in your 2017 book … 2019 you invest in OpenAI, very early, 2017 is even earlier, you say in your book, \"One might also say that we're birthing a new species, one whose intelligence may have no upper limits.\"", "Now, super-early, of course, to be talking about this in 2017. We've been talking in a granular fashion about agents, Office Copilot, capex , and so forth. But if you zoom out and consider this statement you've made, and you think about you as a hyperscaler, as the person doing research in these models as well, providing training, inference, and research for building a new species, how do you think about this in the grand scheme of things?", "Do you think we're headed towards superhuman intelligence in your time as CEO?", "Satya Nadella", "I think even Mustafa uses that term. In fact he’s used that term more recently, this “new species”.", "The way I come at it is, you definitely need trust. Before we claim it is something as big as a species, the fundamental thing that we've got to get right is that there is real trust, whether it's personal or societal level trust, that's baked in. That's the hard problem.", "I think the one biggest rate limiter to the power here will be how does our legal… call it infrastructure, we’re talking about all the compute infrastructure, well how does the legal infrastructure evolve to deal with this? This entire world is constructed with things like humans owning property, having rights, and being liable. That’s the fundamental thing that one has to first say, okay what does that mean for anything that now humans are using as tools? And if humans are going to delegate more authority to these things, then how does that structure evolve? Until that really gets resolved, I don't think just talking about the tech capability is going to happen.", "Dwarkesh Patel", "As in, we won't be able to deploy these kinds of intelligences until we figure out how to…?", "Satya Nadella", "Absolutely. Because at the end of the day, there is no way. Today, you cannot deploy these intelligences unless and until there's someone indemnifying it as a human.", "To your point, I think that's one of the reasons why I think about even the most powerful AI is essentially working with some delegated authority from some human. You can say, oh, that's all alignment and this, that, and the other. That's why I think you have to really get these alignments to work and be verifiable in some way, but I just don't think that you can deploy intelligences that are out of control. For example, this AI takeoff problem may be a real problem, but before it is a real problem, the real problem will be in the courts. No society is going to allow for some human to say, \"AI did that.\"", "Dwarkesh Patel", "Yes. Well, there's a lot of societies in the world, and I wonder if any one of them might not have a legal system that might be more amenable. And if you can't have a takeoff, then you might worry. It doesn't have to happen in America, right?", "Satya Nadella", "We think that no society cares about it, right? There can be rogue actors, I'm not saying there won't be rogue actors; there are cyber criminals and rogue states; they're going to be there.", "But to think that human society at large doesn't care about it is also not going to be true. I think we all will care. We know how to deal with rogue states and rogue actors today. The world doesn't sit around and say “we’ll tolerate that”. That's why I'm glad that we have a world order in which anyone who is a rogue actor in a rogue state has consequences.", "Dwarkesh Patel", "Right. But if you have this picture where you can have 10% economic growth, I think it really depends on getting something like AGI working, because tens of trillions of dollars of value, that sounds closer to the total of human wages, around $60 trillion of the economy. Getting that magnitude, you kind of have to automate labor or supplement labor in a very significant way.", "If that is possible, and once we figure out the legal ramifications for it, it seems quite plausible, even within your tenure that we figure that out. Are you thinking about superhuman intelligence? Like, the biggest thing you do in your career is this?", "Satya Nadella", "You bring up another point. I know David Autor and others have talked a lot about this which is, 60% of labor- I think the other question that needs to happen, let’s at least talk about our democratic societies. I think that in order to have a stable social structure, and democracies function, you can’t just have a return on capital and no return on labor. We can talk about it, but that 60% has to be revalued.", "In my own simple way, maybe you can call it naive, we'll start valuing different types of human labor. What is today considered high-value human labor may be a commodity. There may be new things that we will value.", "Including that person who comes to me and helps me with my physical therapy or whatever, whatever is going to be the case that we value, but ultimately, if we don't have return on labor, and there's meaning in work and dignity in work and all of that, that's another rate limiter to any of these things being deployed.", "0:55:46 - Getting AGI right", "Dwarkesh Patel", "On the alignment side, two years ago, you guys released Sydney Bing . Just to be clear, I think given the level of capabilities at the time, it was a charming, endearing, kind of funny example of misalignment.", "But that was because, at the time, it was like chatbots. They can go think for 30 seconds and give you some funny or inappropriate response. But if you think about that kind of system--that, I think to a New York Times reporter, tried to get him to leave his wife or something--if you think about that going forward, and you have these agents that are for hours, weeks, months going forward, just like autonomous swarms of AGIs, who could be in similar ways misaligned and screwing stuff up, maybe coordinating with each other, what's your plan going forward so that when you get the big one, you get it right?", "Satya Nadella", "That is correct. That's one of the reasons why when we usually allocate compute, let's allocate compute for what is that alignment challenge?", "And then more importantly, what is the runtime environment in which you are really going to be able to monitor these things? The observability around it? We do deal with a lot of these things today in the classical side of things as well, like cyber. We don't just write software and then just let it go. You have software and then you monitor it. You monitor it for cyber attacks, you monitor it for fault injections, and what have you.", "Therefore, I think we will have to build enough software engineering around the deployment side of these, and then inside the model itself, what's the alignment? These are all, some of them are real science problems. Some of them are real engineering problems, and then we will have to tackle it.", "That also means taking our own liability in all of this. So that's why I'm more interested in deploying these things in where you can actually govern what the scope of these things is, and the scale of these things is. You just can't unleash something out there in the world that creates harm, because the social permission for that is not going to be there.", "Dwarkesh Patel", "When you get the agents that can really just do weeks worth of tasks for you, what is the minimum assurance you want before you can let it run a random Fortune 500?", "Satya Nadella", "I think when I use something like Deep Research , even, the minimum assurance I think we want is before we especially have physical embodiment of anything, that I think is kind of one of those thresholds, when you cross. That might be one place.", "Then the other one is, for example, the permissions of the runtime environment in which this is operating. You may want guarantees that it's sandboxed, it is not going out of that sandbox.", "Dwarkesh Patel", "I mean, we already have web search and we already have it out of the sandbox.", "Satya Nadella", "But even what it does with web search and what it writes -- for example to your point, if it's just going to write a bunch of code in order to do some computation, where is that code deployed? And is that code ephemeral for just creating that output, versus just going and springing that code out into the world?", "Those are things that you could, in the action space, actually go control.", "Dwarkesh Patel", "And separate from the safety issues, as you think about your own product suite, and you think about, if you do have AIs this powerful, at some point, it's not just like Copilot- an example you mentioned about how you were prepping for this podcast- it's more similar to how you actually delegate work to your colleagues.", "What does it look like, given your current suite, to add that in? I mean, there's one question about whether LLMs get commodified by other things.", "I wonder if these databases or canvases or Excel sheets or whatever -- if the LLM is your main gate point into accessing all these things, is it possible that the LLMs commodify Office?", "Satya Nadella", "It's an interesting one. The way I think about the first phase, at least, would be: Can the LLM help me do my knowledge work using all of these tools or canvases more effectively?", "One of the best demos that I've seen is a doctor getting ready for a tumor board workflow. She's going into a tumor board meeting, and the first thing she uses Copilot for is to create an agenda for the meeting because the LLM helps reason about all the cases, which are in some SharePoint site. It says, \"Hey, these cases -- obviously, a tumor board meeting is a high-stakes meeting where you want to be mindful of the differences in cases so that you can then allocate the right time.\"", "Even that reasoning task of creating an agenda that knows how to split time- super. So, I use the LLM to do that. Then I go into the meeting, I'm in a Teams call with all my colleagues. I'm focused on the actual case versus taking notes, because you now have this AI copilot doing a full transcription of all of this. It's not just a transcript, but a database entry of what is in the meeting that is recallable for all time.", "Then she comes out of the meeting, having discussed the case and not been distracted by note-taking. She's a teaching doctor; she wants to go and prep for her class. And so she goes into Copilot and says, \"Take my tumor board meeting and create a PowerPoint slide deck out of it so that I can talk to my students about it.\"", "So that’s the type. The UI and the scaffolding that I have are canvases that are now getting populated using LLMs. And the workflow itself is being reshaped; knowledge work is getting done.", "Here's an interesting thing: If someone came to me in the late '80s and said, \"You're going to have a million documents on your desk,\" I would say, \"What the heck is that?\" I would have literally thought there was going to be a million physical copies of things on my desk. Except, we do have a million spreadsheets and a million documents.", "Dwarkesh Patel", "I don’t, you do.", "Satya Nadella", "They're all there. And so, that's what's going to happen with even agents. There will be a UI layer. To me, Office is not just about the office of today; it's the UI layer for knowledge work. It'll evolve as the workflows evolve. That's what we want to build.", "I do think the SaaS applications that exist today, these CRUD applications, are going to fundamentally be changed because the business logic will go more into this agentic tier. In fact, one of the other cool things today in my Copilot experience is when I say, \"Hey, I'm getting ready for a meeting with a customer,\" I just go and say, \"Give me all the notes for it that I should know.\" It pulls from my CRM database , it pulls from my Microsoft Graph, creates a composite, essentially artifact, and then it applies even logic on it. That, to me, is going to transform the SaaS applications as we know of it today.", "Dwarkesh Patel", "SaaS as an industry might be worth hundreds of billions to trillions of dollars a year, depending on how you count. If really that can just get collapsed by AI, is the next step up in your next decade 10X-ing the market cap of Microsoft again? Because you're talking about trillions of dollars...", "Satya Nadella", "It would also create a lot of value in the SaaS. One thing we don't pay as much attention to perhaps is the amount of IT backlog there is in the world.", "These code gen things, plus the fact that I can interrogate all of your SaaS applications using agents and get more utility will be the greatest explosion of apps, they'll be called agents, so that for every vertical, in every industry, in every category, we're suddenly going to have the ability to be serviced.", "So there's going to be a lot of value. You can't stay still. You can't just say the old thing of, \"Oh, I schematized some narrow business process, and I have a UI in the browser, and that's my thing.\" That's ain’t going to be the case. You have to go up-stack and say, \"What's the task that I have to participate in?\"", "You will want to be able to take your SaaS application and make it a fantastic agent that participates in a multi-agent world. As long as you can do that, then I think you can even increase the value.", "1:04:59 - 34 years at Microsoft", "Dwarkesh Patel", "Can I ask you some questions about your time at Microsoft?", "Satya Nadella", "Yeah.", "Dwarkesh Patel", "Is being a company man underrated? So you've spent most of your career at Microsoft, and you could say that one of the reasons you've been able to add so much value is you've seen the culture, the history, and the technology. You have all this context by rising up through the ranks. Should more companies be run by people who have this level of context?", "Satya Nadella", "That's a great question. I've not thought about it that way.", "Through my 34 years now of Microsoft, each year I felt more excited about being at Microsoft versus thinking that, oh, I'm a company person or what have you. I take that seriously, even for anybody joining Microsoft. It's not like they're joining Microsoft as long as they feel that they can use this as a platform for their both economic return, but also a sense of purpose and a sense of mission that they can accomplish by using us as a platform. That's the contract.", "So I think yes, companies have to create a culture that allows people to come in and become company people like me. Microsoft got it more right than wrong, at least in my case, and I hope that remains the case.", "Dwarkesh Patel", "The sixth CEO that you’re talking about, who’ll get to use the research you’re starting now, what are you doing to retain the future Satya Nadellas so that they're in a position to become future leaders?", "Satya Nadella", "It's fascinating. This is our 50th year, and I think a lot about it. The way to think about it is, longevity is not a goal; relevance is.", "The thing that I have to do and all 200,000 of us have to do every day is: Are we doing things that are useful and relevant for the world as we see it evolving, not just today, but tomorrow?", "We live in an industry where there's no franchise value, so that’s the other hard part. If you take the R&D budget that we will spend this year, it’s all speculation on what's going to happen five years from now. You have to basically go in with that attitude, saying, \"We are doing things that we think are going to be relevant.\"", "So that's what you have to focus on. Then know that there's a batting average, and you're not going to get- you have to have a high tolerance for failure. You have to take enough shots on goal to be able to say, \"Okay, we will make it to the other side as a company.\" That's what makes it tricky in this industry.", "Dwarkesh Patel", "Speaking of- you just mentioned that you're two months away from your 50th anniversary of Microsoft’s founding. If you look at the top 10 companies by market cap, or top 5, basically, everybody else but Microsoft is younger than Microsoft. It's an interesting observation about why the most successful companies often are quite young. The average Fortune 500 company will last 10 to 15 years.", "What has Microsoft done to remain relevant for this many years? How do you keep refounding?", "Satya Nadella", "I love that, Reed Hoffman uses that term, \"refounding.\" That's the mindset. People talk about founder mode, but for us mere mortal CEOs, it's more like refounder mode.", "To be able to see things again in a fresh way is the key. To your question: can we culturally create an environment where refounding becomes a habit thing? Every day we come in and say, \"We feel we have a stake in this place to be able to change the core assumptions of what we do and how we relate to the world around us. Do we give ourselves permission?” I think many times, companies feel over-constrained by either business model or whatever. You just have to unconstrain yourself.", "Dwarkesh Patel", "If you did leave Microsoft, what company would you start?", "Satya Nadella", "Company I would start? Man. That’s where the company man and me sort of says, “I'll never leave Microsoft.”", "If I were thinking of doing something, I think picking a domain that has... When I look at the dream of tech, we've always said technology is about the biggest, greatest democratizing force.", "I feel like finally, we have that ability. If you say those tokens per dollar per watt is what we can generate, I would love to find some domain in which that can be applied, where it is so underserved.", "That's where healthcare, education... Public sector would be another place. If you take those domains, which are the underserved places, where my life as a citizen of this country or a member of this society or anywhere, would I be better off if somehow all this abundance translated into better healthcare, better education, and better public sector institutions serving me as a citizen? That would be a place.", "1:10:46 - Does Satya Nadella believe in AGI?", "Dwarkesh Patel", "One thing I'm not sure about, hearing your answers on different questions, is whether you think AGI is a thing. Will there be a thing which automates all cognitive labor, like anything anybody can do on a computer?", "Satya Nadella", "This is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?", "Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…", "That's why I make this distinction, at least in my head: Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email.", "But after having triaged my email, give me a higher-level cognitive labor task of, \"Hey, these are the three drafts I really want you to review.\" That's a different abstraction.", "Dwarkesh Patel", "But will AI ever get to the second thing?", "Satya Nadella", "It may, but as soon as it gets to that second thing, there will be a third thing. Why are we thinking that somehow, when we have dealt with tools that have changed what cognitive labor is in history, why are we worried that all cognitive labor will go away?", "Dwarkesh Patel", "I'm sure you've heard these examples before, but the idea that horses can still be good for certain things, there are certain terrains you can't take a car on. But the idea that you're going to see horses around the street, they’re going to employ millions of horses, it’s just not happening.", "And then the idea is, could a similar thing happen with humans?", "Satya Nadella", "But in one very narrow dimension? It's only 200 years of history of humans where we have valued some narrow sort of things called \"cognitive labor\" as we understand it.", "Let's take something like chemistry. If this thing, quantum plus AI really helped us do a lot of novel material science and so on, that's fantastic to have novel material science being done by it. Does that take away from all the other things that humans can do?", "Why can't we exist in a world where there are powerful cognitive machines, knowing that our cognitive agency has not been taken away?", "Dwarkesh Patel", "I'll ask this question, not about you, but in a different scenario, so maybe you can answer it without embarrassment. Suppose on the Microsoft board, could you ever see adding an AI to the board? Could it ever have the judgment, context, and holistic understanding to be a useful advisor?", "Satya Nadella", "It's a great example. One of the things we added was a facilitator agent in Teams. The goal there, it's in the early stages, is can that facilitator agent use long-term memory, not just on the context of the meeting, but with the context of projects I'm working on, and the team, and what have you, be a great facilitator?", "I would love it even in a board meeting, where it's easy to get distracted. After all, board members come once a quarter, and they're trying to digest what is happening with a complex company like Microsoft. A facilitator agent that actually helped human beings all stay on topic and focus on the issues that matter, that's fantastic.", "That's kind of literally having, to your point about even going back to your previous question, having something that has infinite memory that can even help us. You know, after all, what is that Herbert Simon thing? We are all bounded rationality . So if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.", "Dwarkesh Patel", "Speaking of materials and chemistry stuff, I think you said recently that you want the next 250 years of progress in those fields to happen in the next 25 years . Now, when I think about what's going to be possible in the next 250 years, I'm thinking like space travel, and space elevators, and immortality, and curing all diseases. Next 25 years, you think?", "Satya Nadella", "One of the reasons why I brought that up was, I love that thing of, the industrial revolution was the 250 years. We have to take this entire change from a carbon-based system to something different.", "That means you have to fundamentally reinvent all of what has happened with chemistry over the last 250 years. That's where I hope we have this quantum computer, this quantum computer helps us get to new materials, and then we can fabricate those new materials that help us with all of the challenges we have on this planet. And then I'm all for interplanetary travel.", "Dwarkesh Patel", "Amazing. Satya, thank you so much for your time.", "Satya Nadella", "Thank you so much. It's wonderful. Thanks.", "Dwarkesh Patel", "Great, thank you." ]
[ "https://www.microsoft.com/en-us/research/?p=1122837&preview=1&_ppp=a1d85840fc", "https://en.wikipedia.org/wiki/Reduced_instruction_set_computer", "https://en.wikipedia.org/wiki/Complex_instruction_set_computer", "https://en.wikipedia.org/wiki/X86", "https://en.wikipedia.org/wiki/Sun_Microsystems", "https://en.wikipedia.org/wiki/Dot-com_bubble", "https://www.betaarchive.com/forum/viewtopic.php?t=33909", "https://en.wikipedia.org/wiki/Moscone_Center", "https://en.wikipedia.org/wiki/Netscape_(web_browser)", "https://en.wikipedia.org/wiki/NCSA_Mosaic", "https://www.redhat.com/en/topics/cloud-computing/what-is-a-hyperscaler", "https://en.wikipedia.org/wiki/Microsoft_Azure", "https://en.wikipedia.org/wiki/Oracle_Corporation", "https://en.wikipedia.org/wiki/IBM", "https://en.wikipedia.org/wiki/Synthetic_data", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/Total_addressable_market", "https://www.ve3.global/inference-time-scaling-the-next-frontier-in-ai-performance/", "https://en.wikipedia.org/wiki/OpenAI_o3", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://en.wikipedia.org/wiki/Floating_point_operations_per_second", "https://ourworldindata.org/grapher/gdp-worldbank-constant-usd?tab=chart&country=USA~CHN~IND~MEX~FRA~JPN~DEU~RUS~BRA~GBR~OWID_WRL", "https://news.microsoft.com/2019/07/22/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/", "https://x.com/satyanadella/status/1883753899255046301?lang=en-GB", "https://en.wikipedia.org/wiki/Jevons_paradox", "https://en.wikipedia.org/wiki/DeepSeek", "https://en.wikipedia.org/wiki/Lean_manufacturing", "https://en.wikipedia.org/wiki/Qubit", "https://en.wikipedia.org/wiki/Majorana_fermion", "https://www.microsoft.com/en-us/research/people/cnayak/", "https://en.wikipedia.org/wiki/Ettore_Majorana", "https://www.quera.com/glossary/neutral-atoms", "https://en.wikipedia.org/wiki/Ion_trap", "https://en.wikipedia.org/wiki/Photonics", "https://quantum.microsoft.com/en-us/insights/education/concepts/topological-qubits", "https://en.wikipedia.org/wiki/High-performance_computing", "https://en.wikipedia.org/wiki/Microsoft_Research", "https://www.microsoft.com/en-us/research/?p=1122837&preview=1&_ppp=a1d85840fc", "https://en.wikipedia.org/wiki/Phil_Spencer_(business_executive)", "https://en.wikipedia.org/wiki/AAA_(video_game_industry)", "https://www.anduril.com/", "https://en.wikipedia.org/wiki/Palmer_Luckey", "https://news.microsoft.com/2025/02/11/anduril-and-microsoft-partner-to-advance-integrated-visual-augmentation-system-ivas-program-for-the-u-s-army/", "https://www.microsoft.com/en-gb/microsoft-teams/group-chat-software", "https://en.wikipedia.org/wiki/Hit_Refresh", "https://en.wikipedia.org/wiki/Capital_expenditure", "https://en.wikipedia.org/wiki/Mustafa_Suleyman", "https://en.wikipedia.org/wiki/David_Autor", "https://medium.com/@happybits/sydney-the-clingy-lovestruck-chatbot-from-bing-com-7211ca26783", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://openai.com/index/introducing-deep-research/", "https://en.wikipedia.org/wiki/Software_as_a_service", "https://en.wikipedia.org/wiki/Create,_read,_update_and_delete", "https://www.talend.com/uk/resources/what-is-crm-database/", "https://en.wikipedia.org/wiki/Reid_Hoffman", "https://en.wikipedia.org/wiki/Herbert_A._Simon", "https://en.wikipedia.org/wiki/Bounded_rationality", "https://news.microsoft.com/source/features/innovation/azure-quantum-elements-chemistry-materials-science/" ]
https://www.dwarkesh.com/p/sbf
Sam Bankman-Fried - Crypto, Altruism, and Leadership
[ "Dwarkesh Patel 0:09", "Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX . Thanks for coming on The Lunar Society.", "Sam Bankman-Fried 0:17", "Thanks for having me.", "How inefficient is the world?", "Dwarkesh Patel 0:18", "Alright, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities? Or was that a property of the inefficiencies of crypto markets at one particular point in history?", "Sam Bankman-Fried 0:31", "I think it's more of the former, there are just a lot of inefficiencies.", "Dwarkesh Patel 0:35", "So then another part of the question is: if you had to restart earning to give again, what are the odds you become a billionaire, but you can't do it in crypto?", "Sam Bankman-Fried 0:42", "I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it.", "Choosing a career", "Dwarkesh Patel 1:11", "So that leads to the next question—which is that you've cited Will MacAskill 's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should’ve been advised to start a startup or nonprofit?", "Sam Bankman-Fried 1:31", "I don't think it was literally the best possible advice because this was in 2012. Starting a crypto exchange then would have been…. I think it was definitely helpful advice. Relative to not having gotten advice at all, I think it helps quite a bit.", "Dwarkesh Patel 1:50", "Right. But then there's a broader question: are people like you who could become founders advised to take lower variance, lower risk careers that in, expected value, are less valuable?", "Sam Bankman-Fried 2:02", "Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path because you have much more quickly declining marginal utility of money than the world does. So, this kind of path is specifically for altruistically-minded people.", "The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. “What were some of these outward-facing factors of success that you can see?” But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making.", "Dwarkesh Patel 3:17", "I didn't realize that the personal considerations were as important in your case as the advice you got.", "Sam Bankman-Fried 3:24", "Oh, I don’t think in my case. But, it is true with many people that I talked to.", "Dwarkesh Patel 3:29", "Speaking of declining marginal consumption, I'm wondering if you think the implication of this is that over the long term, all the richest people in the world will be utilitarian philanthropists because they don't have diminishing returns of consumption. They’re risk-neutral.", "Sam Bankman-Fried 3:40", "I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful.", "The difficulty of being a founder", "Dwarkesh Patel 3:54", "Alright, let’s talk about Effective Altruism . So in your interview with Tyler Cowen, you were asked, “What constrains the number of altruistically minded projects?” And you answered, “Probably someone who can start something.”", "Now, is this a property of the world in general? Or is this a property of EAs? And if it's about EAs, then is there something about the movement that drives away people who took could take leadership roles?", "Sam Bankman-Fried 4:15", "Oh, I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well, if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently for them is the right person or team to take the lead on it. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making it high in demand.", "Dwarkesh Patel 4:56", "What would it take to get more of those kinds of people to go into EA?", "Sam Bankman-Fried 4:59", "Part of it is probably just talking with them about, “Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?” Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do.", "Some of this is about empowering people and some of this is about normalizing the fact that when you start something, it might fail—and that's okay . Most startups and especially very early-stage startups should not be trying to maximize the chances of having at least a little bit of success. But that means you have to be okay with the personal fallout of failing and that we have to build a community that is okay with that. I don't think we have that right now, I think very few communities do.", "Is effective altruism too narrowminded?", "Dwarkesh Patel 6:21", "Now, there are many good objections to utilitarianism, as you know. You said yourself that we don't have a good account of infinite ethics —should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?", "Sam Bankman-Fried 6:35", "So I don't think it has a super large impact on my giving. Partially, because you'd need to have a concrete proposal for what else you would do that would be different actions-wise—and I don't know that that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. And one thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty, this might be physical uncertainty) there are a lot of sort of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe—which might not be right. And if you look at expected-value outcomes, might not be right.", "Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, “What would that imply?” I don't think it's clear what any of this implies. If I had to say, “How have these considerations changed my thoughts on what to do?”", "The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. And what I mean by that is, I'm sure one way that you can calculate the expected value of an action is, “Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them.” Another thing you can do is say - it's a little bit more hand-wavy - but, “How much better is this going to make the world? How much does it matter if the world is better in generic diffuse ways?” Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly. When you see that deployed, it's nonsense. Usually, when people are pretty hard to nail down on the specific reasoning of why they think that something might be good, it’s because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to.", "That being said, I do think that sometimes EA gets too narrow-minded and specific about plotting out courses of impact. And this is one of the reasons why that people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. But, all of these things have some amount of uncertainty in them. And when you jostle them, some theories of impact behave somewhat robustly and some of them completely fall apart. I’ve become a bit more sympathetic to ones that are a little robust under thoughts about what the world ends up looking like.", "Political giving", "Dwarkesh Patel 9:57", "In the May 2022 Oregon Congressional Election , you gave 12 million dollars to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?", "Sam Bankman-Fried 10:12", "It was the first time that I gave on that scale in a race. And I did it because he was, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention. He lost—such is life. In the end, there are some updates on the efficacy of various things. But, I never thought that the odds were extremely high that he was going to win. It was always going to be an uncertain close race. There's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50, and it turns out to be close in one direction or another, there's a maximum of a factor-of-two update that you have on that. There were a bunch of sort of micro-updates on specific factors of the race, but on a high level, it didn’t change my perspective on policy that much.", "Dwarkesh Patel 11:23", "But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate? Because of the negative PR?", "Sam Bankman-Fried 11:30", "At some point, I think that's probably true.", "Dwarkesh Patel 11:33", "Continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level (like putting in early detection)? Or when is it more effective to fund it yourself?", "Sam Bankman-Fried 11:47", "It's a good question. It's not necessarily mutually exclusive. One thing worth looking at is the scale of the things that need to happen. How much are things like international cooperation important for it? When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. So it's a pretty big scale thing—which is hard to fund to that level individually. It’s also something where we’re going to need to have cooperation between different countries on, for example, what their surveillance for new pathogens looks like. And vaccine distribution If some countries have a great distribution of vaccines and others don't, that's not good. It's both not fair and not equitable for the countries that get hit hardest. But also, in a global pandemic, it's going to spread. You need global coverage. That's another reason that government has to be involved, at least to some extent, in the efforts.", "FTX Future Fund", "Dwarkesh Patel 12:55", "Let's talk about Future Fund . As you know, there are already many existing Effective Altruist organizations that do donations. What is the reason you thought there was more value in creating a new one? What's your edge?", "Sam Bankman-Fried 13:06 There's value in having multiple organizations. Every organization has its blind spots, and you can help cover those up if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. They are covering a lot of what we’re looking at—we're looking at overlapping, but not identical things. I think having that diversity can be valuable, but pointing to the ways in which we intentionally designed to be a little bit different from existing donors:", "One thing that I've been really happy about is the re-granting program. We have a number of people who are experts in various areas to who we've basically donated pots that they can re-grant. What are the reasons that we think this is valuable? One thing is giving more stakeholders a chance to voice their opinions because we can't possibly be listening to everyone in the world directly and integrating all those opinions to come up with a perfect set of answers. Distributing it and letting them act semi-autonomously can help with that. The other thing is that it helps with a large number of smaller grants. When you think about what an organization giving away $100 million in a year is thinking about, “if we divided that up into $25,000 grants, how many grants would that mean?” 4,000 grants to analyze, right? If we want to give real thought to each one of those, we can't do that.", "But on the flip side, sometimes the smaller grants are the most impactful per dollar and there are a lot of cases where someone really impressive has an exciting idea for a new foundation or a new organization that could do a lot of good for the world and needs $25,000 to get started. To rent out a small office, to be able to cover salaries for two employees for the first six months. Those are the kind of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization. But they're the kind of things that are really hard for our team to evaluate all of, just given the number of them—but the re-grantor program gives us a way to do that. Instead, we have 10, 50, or 100 re-grantors, who are going out and finding a lot of those opportunities close to them, they can then identify those and direct those grants—and it gives us a much wider reach. It also biases it less towards people who we happen to know, which is good.", "We don't want to just like overfund everyone we know and underfund everyone that we don’t. That's one initiative that I've been pretty excited about that we're going to keep doing. Another thing we've really tried to have a lot of emphasis on making the (application) process smooth and clean. There are pros and cons to this. But it drops the activation energy necessary for someone to decide to apply for a grant and fill out all of the forms. We’ve really tried to bring more people into the fold.", "Adverse selection in philanthropy", "Dwarkesh Patel 16:41", "If you make it easy for people to fill out your application and generally fund things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?", "Sam Bankman-Fried 16:52", "It's a really good question. It’s a worry that Bob down the street might see a great book case study that he wants and wonder if he can get funding for this bookcase as it’s going to house a lot of knowledge. Knowledge is good, right? Obviously, we would detect that pretty quickly. The basic answer is that we still vet all of these. We do have oversight of them. But, we also do a deep dive into both all of the large ones, but also into samplings of all the small ones. We do deep dives into randomly sampled subsets of them—which allows us to get a good statistical sense of whether we are facing significant adverse selection in them. So far, we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of those. But that's a way to be able to have more trusted analyses for more scaled-up numbers of grants.", "Correlation between different causes", "Dwarkesh Patel 18:06", "A long time ago, you wrote a blog post about how EA causes are multiplicative, instead of additive. Do you still find that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI takes off.", "Sam Bankman-Fried 18:24", "Yeah, I think it’s getting more complicated. Specifically around AI, you have a lot of really complex factors that can point in the same direction or in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus confusing impact on safety as a whole.", "I do think it's more complicated now. It's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are cases where you don't have that. The conclusion of this is: if you have multiplicative cases, you want to be funding each piece of it. But if you don't, then you want to be trained to identify the most impactful pieces and move those along. Our behavior should be different in those two scenarios.", "Dwarkesh Patel 19:23", "If you think of your philanthropy from a portfolio perspective, is correlation good or bad?", "Sam Bankman-Fried 19:29", "Expected value is expected value, right? Let's pretend that there is one person in Bangladesh and another one in Mexico. We have two interventions, both 50-50 on saving each of their lives. Suppose there’s some new drug that we could release to combat a neglected disease. This question is asking, “are they correlated?” “Are these two drugs correlated in their efficacy?” And my basic argument is, “it doesn't matter, right?” If you think about it from each of their perspectives, the person in Mexico isn't saying, “I only want to be saved in the cases where the person in Bangladesh is or isn't saved.” That’s not relevant. They want to live.", "The person in Bangladesh similarly wishes to live. You want to help both of them as much as you can. It's not super relevant whether there’s alignment or anti-alignment between the cases where you get lucky and the ones where you don't.", "Dwarkesh Patel 20:46", "What’s the most likely reason that Future Fund fails to live up to your expectations?", "Sam Bankman-Fried 20:51", "We get a little lame. We give to a lot of decent things. But all the cooler or more innovative things that we do, don't seem to work very well. We end up giving the same that everyone else is giving. We don’t turn out to be effective at starting new things, we don't turn out to be effective at thinking of new causes or executing them. Hopefully, we'll avoid that. But, it's always a risk.", "Dwarkesh Patel 21:21", "Should I think of your charitable giving, as a yearly contribution of a billion dollars? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential risk that requires a large pool of liquid wealth?", "Sam Bankman-Fried 21:36", "It's a really good question, I'm not sure. We've given away about 100 million so far this year. We're going to start doing that because we think there are really important things to fund and to start scaling up those systems. We notice opportunities as they come and we have systems ready in place to give to them. But it's something we're really actively discussing internally—how concentrated versus diffuse we want that giving to be, and storing up for one very large opportunity versus a mixture of many.", "Great founders do difficult things", "Dwarkesh Patel 22:15", "When you look at a proposal and think this project could be promising, but this is not the right person to lead it, what is the trait that's most often missing?", "Sam Bankman-Fried 22:22", "Super interesting. I am going to ignore the obvious answer which is that the guy is not very good and look at cases where it's someone pretty impressive, but not the right fit for this. There are a few things. One of them is how much are they going to want to deal with really messy shit. This is a huge thing! When I was working at Jane Street, I had a great time there. One thing I didn’t realize was valuable until I saw the alternative—if I decided that is a good trade to buy one share of Apple stock on NASDAQ, there's a button to do that.", "If you as a random citizen want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars a year to get set up. You have to get a physical colo(cation) in Secaucus, New Jersey, have market data agreements with these companies, think about the sip and about the NBBO and whether you’re even allowed to list on NASDAQ, and then build the technological infrastructure to do it. But all of that comes after you get a bank account.", "Getting a bank account that's going to work in finance is really hard. I spent hundreds, if not thousands of hours of my life, trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade.", "When you start a company, there are enormous amounts of shit that looks like that. Things that are dumb or annoying or broken or unfair, or not how the world should work. But that’s how the world does work. The only way to be successful is to fight through that. If you're going to be like, “I'm the CEO, I don't do that stuff,” then no one's going to do that at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. One of the biggest traits that are incredibly important for a founder and for an early team at a company (but not important for everything in life) is willing to do a ton of grunt work if it’s important for the company right then.", "Viewing it not as “low prestige” or “too easy” for you, but as, “This is the important thing. This is a valuable thing to do. So it's what I'm going to do.” That's one of the core traits. The other thing is asking if they’re excited about this idea? Will they actually put their heart and soul into it? Or are they going to be not really into it and half-ass? Those are two things that I really look for.", "Pitcher fatigue and the importance of focus", "Dwarkesh Patel 25:51", "How have you used your insights about pitcher fatigue to allocate talent in your companies?", "Sam Bankman-Fried 25:58", "Haha. When it comes to pitchers, in baseball, there's a lot of evidence that they get worse over the course of the game. Partially, because it's hard on the arm. But, it's worth noting that the evidence seems to support the claim that it depends on the pitchers. But in general, you're better off breaking up your outings. It's not just a function of how many innings they pitch that season, but also extremely recently. If you could choose between someone throwing six innings every six days, or throwing three innings every three days, you should use the latter. That's going to get the better pitching on average, and just as many innings out of them—and baseball has since then moved very far in that direction. The average number of pitches thrown by starting pitchers has gone down a lot over the last 5-10 years.", "How do I use that in my company? There’s a metaphor here except this is with computer work instead of physical arm work. You don't have the same effect where your arm is getting sore, your muscles snap, and you need surgery if you pitch too hard for too long. That doesn't directly translate—but there's an equivalent of this with people getting tired and exhausted. But on the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, what you're working on, and what the company is doing makes it easier to operate effectively. For instance, if you could have either two half-time employees or one full-time employee, you're way better off with one full-time employee because they're going to have more context than either of the part-time employees would have —thus be able to work way more efficiently.", "In general, concentrated work is pretty valuable. If you keep breaking up your work, you're never going to do as great of work as if you truly dove into something.", "How SBF identifies talent", "Dwarkesh Patel 28:30", "You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread , you mentioned that being able to provide mentorship to all the people who you hire is one of the bottlenecks to you being able to scale. Is there a trade-off here where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?", "Sam Bankman-Fried 28:51", "It's a good question. To a surprising extent, we found that the experience of the people that we hire has not had much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate into their understanding of coding how FTX works. We actually have by and large found that other things are much better predictors of how much oversight and mentorship they’re going to need then.", "Dwarkesh Patel 29:35", "How do you assess that short of hiring them for a month and then seeing how they did?", "Sam Bankman-Fried 29:39", "It's tough, I don't think we're perfect at it. But things that we look at are, “Do they understand quickly what the goal of a product is? How does that inform how they build it?” When you're looking at developers, I think we want people who can understand what FTX is, how it works, and thus what the right way to architect things would be for that rather than treating it as an abstract engineering problem divorced from the ultimate product.", "You can ask people like, “Hey, here's a high-level customer experience or customer goal. How would you architect a system to create that?” That’s one thing that we look for. An eagerness to learn and adapt. It's not trivial to ask for that. But you can do some amount of that by giving people novel scenarios and seeing how much they break versus how much they bend. That can be super valuable. Specifically searching for developers who are willing to deal with messy scenarios rather than wanting a pristine world to work in. Our company is customer-facing and has to face some third-party tooling. All those things mean that we have to interface with things that are messy and the way the world is.", "Why scaling too fast kills companies", "Dwarkesh Patel 31:09", "Before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their system, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why didn't they just fix what you told them to fix?", "Sam Bankman-Fried 31:22", "My sense is that it’s part of a larger phenomenon. One piece of this is that they didn't have a lot of market structure experts. They did not have the talent in-house to think really deeply about risk engines. Also, there are cultural barriers between myself and some of them, which meant that they were less inclined than they otherwise would have been to take it very seriously. Ignoring those factors, there's something much bigger at play there. Many of these exchanges had hired a lot of people and they got in very large. You might think they were more capable of doing things with more horsepower. But in practice, most of the time that we see a company grow really fast, really quickly, and get really big in terms of people, it becomes an absolute mess.", "Internally, there's huge diffusion of responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. In the end, nothing gets done. You actually start hitting the negative marginal utility of employees pretty quickly. The more people you have, the less total you get done. That happened to a number of them to the point where I sent them these proposals. Where did they go internally? Who knows. The Vice President of Exchange Risk Operations (but not the real one—the fake one operating under some department with an unclear goal and mission) had no idea what to do with it. Eventually, she passes it off to a random friend of hers that was the developer for the mobile app and was like, “You're a computer person, is this right?” They likely said, “I don’t know, I'm not a risk person,” and that's how it died. I’m not saying that’s literally what happened but sounds kinda like that’s probably happened. It's not like they had people who took responsibility and thought, “Wow, this is scary. I should make sure that the best person in the company gets this,” and pass it to the person who thinks about their risk modeling. I don't think that's what happened.", "The future of crypto", "Dwarkesh Patel 33:51", "There're two ways of thinking about the impact of crypto on financial innovation. One is the crypto maximalist view that crypto subsumes tradfi . The other is that you're basically stress-testing some ideas in a volatile, fairly unregulated market that you're actually going to bring to tradfi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct one?", "Sam Bankman-Fried 34:18", "Who knows exactly what's going to happen? It's going to be path-dependent. If I had to guess I would say that a lot of properties of what is happening crypto today will make their way into Trad Fi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. Composable applications are super valuable and are going to get more important over time. In some areas of this, it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little TBD exactly where that ends up.", "I don't want to state with extreme confidence exactly what will or won't happen. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement mechanism, collateral clearing mechanism, and more assets getting tokenized seem likely. There being programs written on blockchains that people can add to that can compose with each other seems pretty likely to me. A lot of other areas of it could go either way.", "Risk, efficiency, and human discretion in derivatives", "Dwarkesh Patel 35:46", "Let's talk about your proposal to the CFTC to replace Futures Commission Merchants with algorithmic real-time risk management. There's a worry that without human discretion, you have algorithms that will cause liquidation cascades when they were not necessary. Is there some role for human discretion in these kinds of situations?", "Sam Bankman-Fried 36:06", "There is! The way that traditional future market structure works is you have a clearinghouse with a decent amount of manual discretion in it connected to FCMs . Some of which use human discretion, and some of which use automated risk management algorithms with their clients. The smaller the client, the more automated it is. We are inverting that where at the center, you have an automated clearing house. Then, you connect it to FCM, which could use discretionary systems when managing their clients.", "The key difference here is that one way or another, the initial margin has to end up at the clearinghouse. A programmatic amount of it and the clearinghouse acts in a clear way. The goal of this is to prevent contagion between different intermediaries. Whatever credit decisions one intermediary makes, with respect to their customers, doesn't pose risk to other intermediaries. This is because someone has to post the collateral to the clearinghouse in the end—whether it's the FCM, their customer, or someone else. It gives clear rules of the road and lack of systemic risk spreading throughout the system and contains risk to the parties that choose to take that risk on - to the FCMs that choose to make credit decisions there.", "There is a potential role for manual judgment. Manual judgment can be valuable and add a lot of economic value. But it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual bespoke decisions that each other FCM is making. That's a really scary place to be in, we've seen it blow up. We saw it blow up with LME nickel contracts and with a few very large traders who had positions at a number of different banks that ended up blowing out. So, this provides a level of clarity, oversight, and transparency to this system, so people know what risk they are, or are not taking on.", "Dwarkesh Patel 38:29", "Are you replacing that risk with another risk? If there's one exchange that has the most liquidity om futures and there’s one exchange where you're posting all your collateral (across all your positions), then the risk is that that single algorithm the exchange is using will determine when and if liquidation cascades happen?", "Sam Bankman-Fried 38:47", "It’s already the case that if you put all of your collateral with a prime broker, whatever that prime broker decides (whether it's an algorithm or a human or something in between) is what happens with all of your collateral. If you're not comfortable with that, you could choose to spread it out between different venues. You could choose to use one venue for some products and another venue for other products. If you don't want to cross-collateralized cross-margin your positions, you get capital efficiency for cross-margining them—for putting them in the same place. But, the downside of that is the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.", "Dwarkesh Patel 39:28", "Given the benefits of cross-margining and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that the single exchange will win? And if that's the case, then, in the long run, there won't be that much competition in derivatives?", "Sam Bankman-Fried 39:40", "I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges—which will be better or worse for particular situations. One thing that people have brought up is, “How about physical commodities?” Like corn or soy? What would our risk model say about that? It's not super helpful for those commodities right now because it doesn't know how to understand a warehouse. So, you might want to use a different exchange, which had a more bespoke risk model that tried to understand how the human would understand what physical positions someone had on. That would totally make sense. That can cause a split between different exchanges.", "In addition, we've been talking about the clearing house here, but many exchanges can connect to the same clearinghouse. We're already, as a clearing house, connected to a number of different DCMs and excited for that to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and choose different products based on that. That's how it should work. People should be allowed to choose the option that makes the most sense for them.", "Jane Street vs FTX", "Dwarkesh Patel 41:00", "What are the biggest differences in culture between Jane Street and FTX?", "Sam Bankman-Fried 41:05", "FTX has much more of a culture of like morphing and taking out a lot of random new shit. I don’t want to say Jane Street is an ossified place or anything, it’s somewhat nimble. But it is more of a culture of, “We're going to be very good at this particular thing on a timescale of a decade.” There are some cases where that's true of FTX because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago, and now have to get good at. There's been more adaptation and it's also a much more public-facing and customer-facing business than Jane Street is—which means that there are lots of things like PR that are much more central to what we're doing.", "Conflict of interest between broker and exchange", "Dwarkesh Patel 41:56", "Now in crypto, you're combining the exchange and the broker—they seem to have different incentives. The exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run, these two can stay in the same entity given the potential conflict of interest?", "Sam Bankman-Fried 42:13", "I think so. There's some extent to which they differ, but more that they actually want the same thing—and harmonizing them can be really valuable. One is to provide a great customer experience. When you have two different entities with two completely different businesses but have to go from one to the other, you're going to end up getting the least common denominator of the two as a customer. Everything is going to be supported as poorly as whichever of the two entities support what you're doing most poorly - and that makes it harder. Whereas synchronizing them gives us more ability to provide a great experience.", "Bahamas and Charter Cities", "Dwarkesh Patel 42:59", "How has living in the Bahamas impacted your opinion about the possibility of successful charter cities ?", "Sam Bankman-Fried 43:06", "It's a good question. It's the first time and it’s updated positively. We've built out a lot of things here that have been impactful. It's made me feel like it is more doable than I previously would have thought. But it's a lot of work. It's a large-scale project if you want to build out a full city—and we haven’t built out a full city yet. We built out some specific pieces of infrastructure that we needed and we've gotten a ton of support from the country. They've been very welcoming, and there are a lot of great things here. This is way less of a project than taking a giant, empty plot of land, and creating a city in it. That's way harder.", "SBF’s RAM-skewed mind", "Dwarkesh Patel 43:47", "How has having a RAM-skewed mind influence the culture of FTX and its growth?", "Sam Bankman-Fried 43:52", "On the upside, we've been pretty good at adapting and understanding what the important things are at any time. Training ourselves quickly to be good at those even if it looks very different than what we were doing. That's allowed us to focus a lot on the product, regulation, licensing, customer experience, branding, and a bunch of other things. Hopefully, it means that we're able to take whatever situations come up and provide reasonable feedback about them and reasonable thoughts on what to do rather than thinking more rigidly in terms of how previous situations were. On the flip side, I need to have a lot of people around me who will try and remember long-term important things that might get lost day-to-day. As we focus on things that pop up, it's important for me to take time periodically to step back and clear my mind and remember the big picture. What are the most important things for us to be focusing on?", "" ]
[ "https://ftx.us/?utm_source=google&utm_medium=CPC&utm_campaign=NPD-FTX-GGL-Brand&adgroupid=128713096283&utm_content=564159064776&utm_term=ftx&utm_matchtype=p&gclid=CjwKCAjw_ISWBhBkEiwAdqxb9mTt_TFw6PjEovgY5OpgI1_FOeTLQE73_alS_odmpWEQPBxyGnzJ-BoCV3MQAvD_BwE&gclsrc=aw.ds", "https://www.alameda-research.com/", "https://80000hours.org/articles/earning-to-give/", "https://en.wikipedia.org/wiki/William_MacAskill", "https://www.effectivealtruism.org/", "https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics", "https://www.wsj.com/livecoverage/primary-election-results-pennysylvania-north-carolina-oregon/card/crowded-field-in-oregon-s-6th-district-amid-massive-outside-spending-0SDeRHjbR2khCHy4clLf", "https://ftxfuturefund.org/", "https://www.openphilanthropy.org/", "http://measuringshadowsblog.blogspot.com/2015/08/multiplicative-factors-in-games-and.html", "https://twitter.com/SBF_FTX/status/1533946713366568962", "https://coinflex.com/education/what-is-tradfi/", "https://www.coindesk.com/policy/2022/05/25/ftxs-bankman-fried-pitches-cftc-on-directly-clearing-customers-crypto-swaps/", "https://www.nfa.futures.org/%5C/registration-membership/who-has-to-register/fcm.html#:~:text=A%20futures%20commission%20merchant%20(FCM,customers%20to%20support%20such%20orders.)", "https://www.investopedia.com/terms/c/clearinghouse.asp", "https://corporatefinanceinstitute.com/resources/careers/jobs/debt-capital-markets/", "https://chartercitiesinstitute.org/intro/", "https://twitter.com/SBF_FTX/status/1357123548196704257" ]
https://www.dwarkesh.com/p/scaling-ama
AMA ft. Sholto & Trenton: New Book, Career Advice Given AGI, How I'd Start From Scratch
[ "Book launch announcement", "Dwarkesh Patel", "Today, this is going to be an Ask Me Anything episode. I'm joined by my friends, Trenton Bricken and Sholto Douglas . You guys do some AI stuff, right?", "Trenton Bricken", "Yeah.", "Sholto Douglas", "We dabble.", "Dwarkesh Patel", "They're researchers at Anthropic . Other news; I have a book launching today, it's called The Scaling Era . I hope one of the questions ends up being why you should buy this book. But we can kill two birds with one stone. But, okay, let's just get at it. What's the first question that we gotta answer?", "Trenton Bricken", "So, I want to ask the flyball question that I heard before, of: why should ordinary people care about this book? Like, why should my mom buy and read the book?", "Dwarkesh Patel", "Yeah. First, let me tell you about the book, what it is. So, you know, these last few years, I've been interviewing AI lab CEOs, researchers, people like you guys, but also scholars from all kinds of different fields, economists, philosophers. And they've been addressing, I think, what are basically the gnarliest, most interesting, most important questions we've ever had to ask ourselves. Like, what is the fundamental nature of intelligence? What will happen when we have billions of extra workers? How do we model out the economics of that? How do we think about an intelligence that is greater than the rest of humanity combined? Is it even a coherent concept? And so, what I'm super delighted with is that with Stripe Press, we made this book where we compiled and curated the best, most insightful snippets across all these interviews. And you can read Dario addressing, why does scaling work? And then on the next page is Demis explaining DeepMind's plans for whether they're gonna go with the RL route and how much of the AlphaZero stuff will play into the next generation of LLMs. And on the next page is, of course, you guys going through the technical details of how these models work.", "And then there's so many different fields that are implicated. I mean, I feel like AI is one of the most multi-disciplinary fields that one can imagine, because there's no field, no domain of human knowledge that is not relevant to understanding what a future society of different kinds of beings will look like. You're gonna have Carl Shulman talk about how the scaling hypothesis shows up in primate brain scaling from chimpanzees to humans. On the next page might be an economist trying to argue, like Tyler Cowen, explaining why he doesn't expect explosive economic growth, and why the bottlenecks will eat all that up. Um, anyways, so that's why your mom should buy this book. It’s the distillation of all these different fields of human knowledge applied to the most important questions that humanity is facing right now.", "Trenton Bricken", "I do like how the book is sliced up by different topics and across interviews. So it does seem like a nice way to listen to all of the interviews in one digestible way.", "Dwarkesh Patel", "Yeah. There's two interviews I've done that haven't been released publicly before that are in the book. So, one was that Jared Kaplan , who's one of your co-founders, and this is another example where it's like, he's a physicist and he's explaining scaling from this very mathematical perspective about data manifolds. And then on the next page you have a totally different perspective. It's like Goren talking about why did general intelligence actually evolve in the first place, what is the actual evolutionary purpose of it? And it's page by page, right? You can just get addresses. Even for me, the person who's been on the other end of these conversations, it was actually really cool to read it and just be like, \"Oh, actually now I realize how these insights connect to each other.\"", "Trenton Bricken", "Yeah, the only other thing that stood out to me as well is the introduction section-", "Dwarkesh Patel", "The only thing that stood out to you?", "Trenton Bricken", "Yeah, that was really the only thing that was noteworthy. I just mean [what] stood out in accessibility is the introduction section and the diagrams for all the different inputs that enable you to train a machine learning model. Stripe Press books are also just beautiful, they have these nice side captions for explaining what parameters are, what a model is, these sorts of things.", "Dwarkesh Patel", "Actually, when we did our episode together, a bunch of people, I don't know if you saw this, independently made these blog posts and Anki cards and shit where they're explaining the concept because we just kind of passed over some things. And hopefully we've given a similar treatment to every single interview I've done, where you can read a very technical interview with a lab CEO or something, or an engineer, or a researcher, and then the side is like: here's like more context, here's more definitions, here's more commentary. And I, yeah, I feel like it elevated the conversations.", "Sholto Douglas", "So in other words, my parents will finally understand what I do for a job. They're gonna get it very well.", "Dwarkesh Patel", "Maybe my parents will. Because I got a book.", "Trenton Bricken", "All mine need to know is that my name's in a book.", "Sholto Douglas", "You're a co-author.", "Trenton Bricken", "They're like, \"Cool.\"", "Sholto Douglas", "Should we get into the AMA questions?", "Trenton Bricken", "Let's do it.", "AI models not making connections across fields", "Sholto Douglas", "All right. So Brian Krav asks, \"The issue you raised with Dario and occasionally tweet about relating to models not making connections across different topics, some sort of combinatorial attention challenge, what are your thoughts on that now? Do you solve it with scale, thinking models or something else?\"", "Dwarkesh Patel", "So the issue is, one of the questions I asked Dario is, look, these models have all of human knowledge memorized and you would think if a human had this much stuff memorized, and they were moderately intelligent, they could be making all these connections between different fields. And there are examples of humans doing this, by the way. There's… Donald Swann or something like this [Don R. Swanson], this guy noticed that what happens to a brain after magnesium deficiency is exactly the structure you see during a migraine. So then he's like, you take magnesium supplements and we're gonna cure a bunch of migraines. And it worked. And there's many other examples of things like this where you just notice two different connections between pieces of knowledge. Why, if these LLMs are intelligent, are they not able to use this unique advantage they have to make these kinds of discoveries? I feel a little shy, me giving answers on AI shit with you guys here. But, actually Scott Alexander addressed this question in one of his AMA threads, and he's like, \"Look, humans also don't have this kind of logical omniscience\", right? He used the example of, in language, if you really thought about, why are two words connected? And it's like, I understand why “rhyme” has the same etymology as this other word. But you just don't think about it, right? There's this combinatorial explosion. I don't know if that addresses the fact that- we know humans can do this, right? The humans have in fact done this, and I don't know of a single example of LLMs ever having done it. Actually, yeah, what is your answer to this?", "Sholto Douglas", "I think my answer at the moment is that the sort of pre-training objective doesn't necessarily- like it imbues with this nice flexible general knowledge about the world, but doesn't necessarily imbue the skill of making novel connections or research. The kinds of things that people are trained to do through PhD programs and through the process of exploring and interacting with the world.", "And so I think at a minimum you need significant RL in at least similar things to be able to approach making novel discoveries. And so I would like to see some early evidence of this as we start to build models that are interacting with the world and trying to make scientific discoveries, and modeling the behaviors that we expect of people in these positions. Because I don't actually think we've done that in a meaningful or scaled way as a field, so to speak.", "Trenton Bricken", "Riffing off that with respect to RL, I wonder if models currently just aren't good at knowing what memories they should be storing. Most of their training is just predicting the next word on the internet and remembering very specific facts from that. But if you were to teach me something new right now, I'm very aware of my own memory limitations, and so I would try to construct some summary that would stick. And models currently don't have the opportunity to do that. Memory scaffolding in general is just very primitive right now. I mean-", "Sholto Douglas", "Right, like Claude Plays Pokemon .", "Trenton Bricken", "Exactly, yeah, or like someone worked on it, it was awesome, it got far, but, another excited Anthropic employee then iterated on the memory scaffold and was able to very quickly improve on it. So that's one. I do also just wonder if models are idiot savants. The best analogy might be to Kim Peek . So Kim Peek, was born without a corpus callosum , if I recall correctly. Each hemisphere of his brain operated quite independently. So, he'd open a book, there'd be two pages visible, each eye would read one of the pages. And he had a perfect encyclopedic memory of everything he'd ever read. But at the same time, he had other debilitations; functioning socially, these sorts of things. And it's just kind of amazing how good LLMs are at very niche topics, but can totally fail at other ones still.", "Dwarkesh Patel", "I really wanna double-click on this thing of why there's this trade-off between memorization. Why does cutting it off... apparently it's connected to this debilitation, but why can't… Wiki text is like five megabytes of information. The human brain can store much more, so why does the human brain just not want us to memorize these kinds of things, and is actively pruning, and… yeah, I don't know. But we don't have to do it right now. We'll do a separate episode.", "Trenton Bricken", "Yeah, just one thing I'll say on that is, there is another case study of someone with a perfect memory, so they never forgot anything. But their memory was too debilitating . It'd be, like, your context window for the transformer is trillions of tokens. And then you spend all your time attending to past things, and are too trapped in the details to extract any meaningful generalizable insights from it.", "Dwarkesh Patel", "Yeah. Terrence Deacon , whose book you recommended, had this interesting insight about how we learn best when we're children, but we forget literally everything that happened to us when we were children, right? We have total amnesia.", "And adults have this in-between where we don't remember exact details, but we can still learn in a pretty decent way. And then LLMs are on the opposite end of this gradient where they'll get the exact phrasing of Wiki text down, but they won't be able to generalize in these very, obvious ways.", "Sholto Douglas", "A little bit like Gwern 's theory, optimizer theory, no?", "Dwarkesh Patel", "Yeah, I think I probably got it from that.", "Trenton Bricken", "Yeah, Gwern has definitely had a big influence on all this for me as well.", "Dwarkesh Patel", "I feel like what’s underappreciated on the podcast is we have this group chat, and we also just meet up a lot in person. And all the output from the podcast just comes from you and a couple other people just feeding me ideas and nudges and whatever, and then I can just use that as an intuition pump during the conversation.", "Trenton Bricken", "Yeah, you're not the only one.", "Dwarkesh Patel", "What do you mean?", "Trenton Bricken", "Oh, like, I benefit immensely from just hearing what everyone else has to say. It's all regurgitation in one way or another.", "Sholto Douglas", "Another question?", "Dwarkesh Patel", "Yes.", "Career advice given AGI", "Trenton Bricken", "Maybe Rabid Monkey asks, \"Imagine you have a 17-year-old brother/nephew just starting college. What would you recommend he study, given your AGI timelines?\"", "Dwarkesh Patel", "That's so tough, right? I don't know, become a podcaster? I feel like that job's still gonna be around. It's funny, because I studied computer science, and in retrospect- at the time, you could've become a software engineer or something. Instead, you became a podcaster, it’s kind of an irresponsible career move, but in retrospect, it's like… It kinda worked out. Just as these guys are getting automated.", "Sholto Douglas", "I get asked this question all the time, and one answer that I like to give is that you should think about the next couple of years as increasing your individual leverage by a huge factor every year.", "So already software engineers will come up and say, \"You know, I'm two times faster,\" or, \"In new languages, I'm five times faster than I was last year.\" I expect that trend line to continue, basically, as you go from this model of, \"Well, I'm working with some model that's assisting me on my computer, and it's basically a pairing session,\" to, \"I'm managing a small team,\" through to, \"I'm managing a division or a company\". Basically, that is targeting a task. And so I think that deep technical knowledge in fields will still matter in four years. It absolutely will. Because you will be in the position of managing dozens- or, your individual management bandwidth will be maxed out by trying to manage teams of AIs.", "And maybe we end up in a true singularity world where you have AIs managing AIs and this kinda stuff. But I think in a very wide part of the possibility spectrum you are managing enormous, vastly more resources than an individual could command today, and you should be able to solve so many more things with that.", "Dwarkesh Patel", "That's right, and I think I would emphasize that this is not just cope. Like, it genuinely is a case that these models lack the kind of long-term coherence which is absolutely necessary for making a successful company or… Just, getting a fucking office is kinda complicated, right? So you can just imagine that for sector after sector- the economy is really big, right?", "Sholto Douglas", "And really complex.", "Dwarkesh Patel", "Exactly, and so, I don't know the details, but I assume if it's a data sparse thing where you gotta know what is the context of what's happening in the sector or something, I feel like you'd be in a good position.", "Maybe the other thought I have is that it's really hard to plan your career in general. And I don't know what advice that implies, because I remember being super frustrated. I was in college, and the reason I was doing the podcast was to figure out what it is I want to do. It wasn't the podcast itself. And I would go on, 80,000 Hours or whatever career advice, and in retrospect it was all mostly useless, and just try doing things. I mean, especially with AI, it's so hard to forecast what kind of transformations there will be, so try things, do things. I mean, it's such banal, vague advice, but I am quite skeptical of career advice in general.", "Sholto Douglas", "Well, the piece of career advice that I'm not skeptical of is put yourself close to the frontier, because you have a much better vantage point from there. Right? You can study deep technical things, whether it's computer science or biology, and get to the point where you can see what the issues are because it's actually remarkably obvious at the frontier what the problems are. It's very difficult to see…", "Dwarkesh Patel", "Actually, do you think there is an opportunity, because one of the things people bring up is, maybe the people who are advanced in their career and have all this tacit knowledge will be in a position to be accelerated by AI, but you guys four years ago or two years ago, when you were getting discovered or something, that kind of thing where you have a GitHub open issue and you try to solve it; is that just, that's done, and so the onboarding is much harder?", "Sholto Douglas", "That's still what we look for in hiring. So, you know?", "Trenton Bricken", "Yeah, I'm in favor of the “learn fundamentals, gain useful mental models”, but it feels like everything should be done in an AI-native way, or top-down instead of your bottom-up learning. So first of all, learn things more efficiently by using the AI models, and then just know where their capabilities are and aren't.", "And I would be worried and skeptical about any subject which prioritizes rote memorization of lots of facts or information instead of ways of thinking. But if you're always using the AI tools to help you, then you'll naturally just have a good sense for the things that it is and isn't good at.", "Guest selection criteria", "Sholto Douglas", "Okay, next one. What is your strategy, method, or criteria for choosing guests?", "Dwarkesh Patel", "So, the most important thing is, do I wanna spend one to two weeks reading every single thing you have ever written, every single interview you ever recorded, talking to a bunch of other people about your research? Because I get asked by people who are quite influential often to be like, \"Would you have me on your podcast?\" and more often than not, I say no, for two reasons. One is just, okay, you're influential, it's not fundamentally that interesting as an interview prospect. I don't think about the hour that I'll spend with you. I think about the two weeks, because this is my life, right? The research is my life, and I wanna have fun while doing it. So is this gonna be an interesting two weeks to spend? Is it gonna help me with my future research or something?", "And the other is, big guests don't really matter that much if you just look at what are the most popular episodes or, what in the long run helps a podcast grow. By far my most popular guest is Sarah Payne , and she, before I interviewed her, was just a scholar, who was not publicly well-known at all, and I just found her books quite interesting. So my most popular guests are Sarah Payne and then Sarah Payne, Sarah Payne, Sarah Payne because ... I have electric chairs with her.", "And by the way, from a viewer-a-minute adjusted basis, I host the Sarah Payne Podcast where I occasionally talk about AI.", "Trenton Bricken", "That's funny.", "Dwarkesh Patel", "And then it's David Reich , who is a geneticist of ancient DNA. He's somewhat well-known, but he had a best-selling book, but he's not Satya Nadella or Mark Zuckerberg , who are the next people on the list. And then again, I think that pretty soon it's like you guys or Leopold or something, and then you get to the lab CEOs or something.", "So, big names just don't matter that much for what I'm actually trying to do. And it's also really hard to predict who's gonna be the David Reich or a Sarah Payne, so just have fun. Talk to whoever you want to spend time researching, and it's a pretty good proxy for what will actually be popular.", "Choosing to pursue the podcast long-term", "Sholto Douglas", "What was the specific moment, if there was one, that you realized you, that producing your podcast was a viable long-term strategy?", "Dwarkesh Patel", "I think when I was shopping around ad spots for a Mark Zuckerberg episode. And now when I look back on it, it's not in retrospect that mind-blowing, but at the time I'm like, “oh, I could actually hire an editor full-time, or maybe more editors than one, and from there turning into a real business”. Because before people would tell me, \"Oh, these other podcasts are making whatever amount of money.\" And I'd be like, \"How?\" You know?", "So I have this running joke with one of my friends. I don't know if you've seen me do this, but every time I encounter a young person who's like, \"What should I do with my life?\" I'm like, \"You gotta start a blog. You gotta be the Matt Levine of AI.\" You can do this. It's a totally empty niche. And I have this running joke with them where they're like, \"You're like a country bumpkin who's won the lottery. And you go up to everything and everyone and just like, \"Guys, a scratch pad. Get the scratch pad.”\"", "Sholto Douglas", "I do wanna press on that a bit more because your immediate answer to the 17-year-old was to start a podcast. So what niches are there? What sort of things would you be excited to see in new blogs, podcasts?", "Dwarkesh Patel", "I wonder if you guys think this too, but I think this “Matt Levine of AI” is a totally open niche as far as I can tell, and I apologize to those who are trying to fill it in. And so the other thing I'd really emphasize is, it is really hard to do this based on other people's advice, or to say “at least I'm trying not to fill a specific niche”. If you think about any sort of successful new media thing out there, it has two things which are true: It's often not just geared towards one particular topic or interest, and two, the most important thing is that it is propelled by a single person's vision. It's not a collective or whatever. And so the thing I really want to emphasize is it can be done.", "Two, you can make a lot of money at it, which is not the most important thing probably for the kind of person who would succeed at it, but still is just worth knowing that it's a viable career.", "Three, that basically you're gonna feel like shit in the beginning where all your early stuff is gonna kind of suck. Maybe some of it will get appreciated. But it seems like bad advice to say still stick through it in case you actually are terrible because some people are terrible. But in case you are not, just do it, right? Like what is the three months of blogging on the side really gonna cost you? And people just don't actually seriously do the thing for long enough to actually get evidence or get the sort of RL feedback on, oh, this is how you do it; this is how you frame an argument. This is how you make a compelling thing that people will want to read or watch.", "Sholto Douglas", "Blogging is definitely underrated. I think like most of us have probably-", "Dwarkesh Patel", "So you both had blogs which were relevant. I don't know if they're actually relevant to getting-", "Sholto Douglas", "Not that. They were like somewhat relevant. But I think more so that we have all read almost all the blogs that do in-depth treatises on AI. Like if you write something that is high quality, it is almost invariably going to be shared around Twitter and read.", "Dwarkesh Patel", "Oh, this is so underappreciated. So, two pieces of evidence. I was talking to a very famous blogger you would know, and I was asking him, \"How often do you discover a new undiscovered blogger?\" And he was like, \"Eh, happens very rarely, like maybe once a year.\" and then I ask him, \"How long after you discover him or her does the rest of the world discover them?\" And he's like, \"Maybe a week.\"", "And what that suggests is it's actually really efficient. Like... Oh, I have some more takes.", "Trenton Bricken", "Let's hear them. This is, this is the AMA.", "Dwarkesh Patel", "So I believe that slow compounding growth in media is kind of fake. Like, Leopold's situational awareness. It's not like he was building up an audience for a long time, for years or something. It was really good. Disagree or agree with it, and if it's good enough, literally everybody who matters- and I mean that literally- will read it. And it's hard to zero shot something like that. But the fundamental thing to emphasize is the compounding growth, at least for me, has been I feel like I've gotten better.", "And it's not so much that somehow the three years of having 1,000 followers were somehow a compounding... I don't think it was that. I think it was just that it took a while to get better.", "Sholto Douglas", "Yeah, certainly when Leopold posted that, the next day, it's almost like you can picture it being stapled to the wall, so to speak, on Twitter.", "Like, you know, everyone was talking about it. You went to any event for the following week, every single person in the entire city was talking about that essay. It was like Renaissance Florence or whatever.", "Dwarkesh Patel", "That's right. Yeah. The world is small.", "Sholto Douglas", "World is small.", "Trenton Bricken", "What would you say is your first big success? I'm trying to think back to when I first found your podcast. I distinctly remember you had your blog post on the Annus Mirabilis .", "And Jeff Bezos retweeted it, I think. I'm trying to remember if it was before that or not, but, yeah. I'm curious, your answer.", "Dwarkesh Patel", "I feel like that was it. And it wasn't something where it was some big insight that deserved to blow up like that. It was just taking some shots on goal. They were all insight porn-y, and then one of them I guess caught the right guy's attention and, yeah. But I think that was it.", "Sholto Douglas", "Yeah, that's something else which is underappreciated, which is that a piece of writing doesn't need to have a fundamentally new insight so much as give people a way to express cleanly a set of ideas that they are already aware of in a broader way. And if it's really crisp and not articulate, then even still that's very valuable.", "Dwarkesh Patel", "And the one thing I should emphasize, which I think is maybe the most important thing to the feedback loop. It's not the compounding growth of the audience. I don't even think it's me getting more shots on goal in terms of doing the podcast. I actually don't think you improve that much by just doing the same thing again and again. If there's no reward signal you'll keep doing whatever you were doing before.", "I genuinely think the most important thing has been that the podcast is good enough that it merits me getting to meet people like you guys. Then I become friends with people like you. You guys teach me stuff. I produce more good podcasts, so hopefully slightly better. That helps me meet people in other fields. They teach me more things. With the China thing recently, I wrote this blog post about a couple stories about things that happened in China. And that alone has netted me an amazing China network in the matter of one blog post. Right? And so hopefully, if I do an episode on China, I will be better as a result. And hopefully that happens across field after field. And so just getting to meet people like you is actually the main sort of flywheel.", "Sholto Douglas", "Interesting. So move to San Francisco?", "Dwarkesh Patel", "Yes. If you're trying to do AI, yeah.", "Sholto Douglas", "Oh, very important question from Jacked Pajeet. How much can you bench?", "Trenton Bricken", "You can't lie because we both know the answer.", "Dwarkesh Patel", "At one point I did bench 225 for four. Now I think I'm probably 20 pounds lighter than that or something.", "The reason you guys are asking me this is because I've gone lifting with both of you. And I remember Trent and I were doing pull-ups and a bench. And it'd be, like, ‘bench’, and he'd throw on another plate or something. And then instead of pull-ups, he'd be cranking out these muscle ups.", "Trenton Bricken", "It's all technique. Let's make sure.", "Dwarkesh Patel", "So they both bench more than me. But I'm trying my best.", "Trenton Bricken", "Ask again in six months.", "Dwarkesh Patel", "Yeah.", "Reading habits", "Sholto Douglas", "What's your favorite history book? There's a wall of them behind you.", "Dwarkesh Patel", "Oh, obviously the Caro LBJ biographies . The main thing I took away from those books is LBJ had this quote that he would tell his debate [students]. In his early 20s, he taught debate to these poor Mexican students in Texas. And he used to tell them, \"If you do everything, you'll win.\" I think it's an underrated quote. So that's the main thing I took away. And you see it through his entire career, where there's a reasonable amount of effort which goes by 20/80. You do the 20 to get the 80% of the effect. And then if you go beyond that to get, \"Oh, no. I'm not just gonna do 20%, I'm gonna just do the whole thing.\" And there's a level even beyond that, which is an unreasonable use of time. This is going to have no ultimate impact, and still try doing that.", "Trenton Bricken", "You've shared on Twitter, using Anki. Or even, like, a Claude integration. Do you do book clubs? Do you use GoodReads? And what are you reading right now?", "Dwarkesh Patel", "I don't have book clubs. [Bud the SpaceBar edition] has just genuinely been a huge uplift in my ability to learn. Mostly because- it's not even the long-term impact over years, though I think that is part of it and I do regret all the episodes I did without using Speech Marking Cards, because all the insights have just sort of faded away. The main thing is, if you're studying a complicated subject, at least for me, it's been super helpful to consolidate. So if you don't do it, you feel like a general where you're like, \"I'm gonna wage a campaign against this country.\" And then you climb one hill. And then the next day you're at a retreat, and then you climb the same hill. There might be a more kosher analogy. yeah. And then the other question was what am I reading right now?", "Oh. My friend Alvaro De Menard , author of Fantastic Anachronism . Can I just hold it up? Actually it's right here. I hope he's okay with me sharing this. But he made 100 copies of this translation he did of his favorite Greek poet. Cavafy. Hopefully I didn't mispronounce it. Sorry, that one has a good inscription for Guern, because that's his copy, but it's super delightful, and that's what I've been reading recently.", "Trenton Bricken", "Any insights from it so far?", "Dwarkesh Patel", "Poets will hate this framing. I feel like poetry is like TikTok, where you get this quick vibe of a certain thing, and then you swipe. And then you get the next vibe, swipe… Alvaro, I'm sorry.", "Sholto Douglas", "No, that's interesting. How do you go about learning new things or preparing for an episode? You mentioned the one to two-week period where you're deep diving on the person. What does that actually look like?", "Dwarkesh Patel", "It's very much the obvious thing: you read their books, you read the papers. If they have colleagues, you try to talk to them to better understand the field. I will also mention that all I have to do is ask them questions, and I do think it's much harder to learn a field to be a practitioner than just learn enough to ask interesting questions. But for that it's very much the obvious thing you'd expect.", "Trenton Bricken", "“Based Carl Sagan” asks, \"What are your long-term goals and ambitions?\"", "Dwarkesh Patel", "AGI kind of just makes the prospect of a long term harder to articulate, right? You know the Peter Thiel quote about what is your 10-year plan and why can't you do it in six months?", "Like, it's especially salient, given timelines. For the foreseeable future, grow the podcast and do more episodes, maybe more writing. So we'll see what happens after 10 years or something. The world might be different enough. So basically, podcast for now.", "Sholto Douglas", "Something you've spoken to me about, and particularly when you were trying to hire people for the podcast was what you wanted to achieve with the podcast. In what way do you want the podcast to shape the world, so to speak? Do you have any thoughts on that or... because I remember you telling me, \"I really want people to actually understand AI and how this might change their lives.\"", "Or, “what we could be doing now to shape the world such that it ends up better.\"", "Dwarkesh Patel", "I don't know. I have contradictory views on this. On the one hand, I do know that important decisions are being made right now in AI. And I do think, riffing on what we were saying about situational awareness, if you do something really good, it has a very high probability of one-shotting the relevant person, and people are generally reasonable. You make a good argument, it'll go places. On the other hand, I just think it's very hard to know what should be done. You gotta have the very correct world model, and then you gotta know how in that world model the action you're taking is gonna have the effect you anticipate. And even in the last week, I've changed my mind on some pretty fundamental things about what I think about the possibility of an intelligence explosion or transformative AI as a result of talking to the Epoch folks.", "Basically, the TLDR is, I want the podcast to just be an epistemic tool for now because I think it's just very easy to be wrong. And so just having a background level of understanding of the relevant arguments is the highest priority.", "Sholto Douglas", "Makes sense.", "Dwarkesh Patel", "What's your sense? What should I be doing?", "Trenton Bricken", "I mean, I think the podcast is awesome, and a lot more people should listen to it, and there are a lot more guests I'd be excited for you to interview.", "Dwarkesh Patel", "Gotta give me your recs.", "Trenton Bricken", "So it seems like a pretty good answer for now.", "Sholto Douglas", "Yeah. I think making sure that, like, there is a great debate of ideas on not, not just AI, but on other fields, and everything is incredibly high leverage in value.", "Dwarkesh Patel", "Yeah, yeah, yeah.", "Beard deepdive", "Sholto Douglas", "How do you groom your beard? It's majestic.", "Dwarkesh Patel", "I don't know what to say, just genetics. I do trim it, but-", "Sholto Douglas", "No beard oil?", "Dwarkesh Patel", "Sometimes I do beard oil.", "Trenton Bricken", "How often?", "Dwarkesh Patel", "Once every couple of days.", "Sholto Douglas", "That's not sometimes! That's pretty often!", "Trenton Bricken", "But, do you have different shampoo for your head and your beard?", "Dwarkesh Patel", "No.", "Trenton Bricken", "What kind of shampoo do you use?", "Dwarkesh Patel", "Anti-dandruff.", "Trenton Bricken", "Do you condition it?", "Dwarkesh Patel", "Yeah.", "Trenton Bricken", "How often do you shave it?", "Dwarkesh Patel", "Who put you up to this?", "Trenton Bricken", "We're giving people the answers that they want.", "Sholto Douglas", "Big shampoo. Big beard oil.", "Trenton Bricken", "Yeah, you can sell some ad slots to different shampoo companies and we can edit it.", "Sholto Douglas", "Maybe we sold an ad slot. Who knows?", "Dwarkesh Patel", "Sorry, you had this idea of merch. Do you wanna explain this T-shirt idea?", "Trenton Bricken", "Yeah, yeah, yeah. So people should react to this. Someone should make it happen. Dwarkesh wants merch, but he doesn't want to admit that he wants it. Or he doesn't want to make it himself because that seems tacky.", "So I really want a plain white tee with just Dwarkesh's beard in the center of it. That's it. Nothing else.", "Dwarkesh Patel", "But you were saying it should have a different texture than the rest of the shirt.", "Trenton Bricken", "Oh, so when I was really riffing off it, where maybe a limited edition set can have some of your beard hair actually sewn into the shirt.", "Dwarkesh Patel", "Oh my God.", "Trenton Bricken", "That'd be pretty cool. I would pay. I would pay for that.", "Sholto Douglas", "How much?", "Dwarkesh Patel", "I've got, like, patches all over my beard.", "Trenton Bricken", "Depends on how much hair. If it's like one is in there somewhere, versus the whole thing. Like, \"Do I have to dry clean it? Can I wash it on the delicate setting?\" But really, I think you should get merch. If you want to grow the podcast, which apparently you do, then this is one way to do that.", "Dwarkesh Patel", "You think beard and beard hair in the future is necessary?", "Trenton Bricken", "Oh, yeah. Oh, yeah.", "Who is best suited for running an AI lab?", "Sholto Douglas", "Which historical figure would be best suited to run a frontier AI lab?", "Dwarkesh Patel", "This is definitely a question for you guys.", "Trenton Bricken", "Oh. No, I mean, I'm curious what your take is first. You've spoken to more of the heads of AI labs than I have.", "Dwarkesh Patel", "Yeah. I was gonna say LBJ. Sorry, it's a question who would be best at running an AI lab or would be best for the world or…?", "Trenton Bricken", "Yeah, what's, what outcome do you want?", "Dwarkesh Patel", "Because I imagine it seems like what the best AI lab CEO succeeds at is raising money, building up hype, setting a coherent vision. I don't know how much it matters for the CEO themselves to have good research taste or something, but it seems like their role is more as a sort of emissary to the rest of the world. And I feel like LBJ would be pretty good at this. Just getting the right concessions, making projects move along, coordinating among different groups to maybe- Oh, Robert Moses .", "Again, not necessarily best for the world, but just in terms of, like, making shit happen.", "Sholto Douglas", "Yeah. I mean, I think best for the world is a pretty important precondition.", "Dwarkesh Patel", "Oh, right. Who’d be best for the world? There's a Lord Ackwood quote of, \"Great people are very rarely good people.\" So it's hard to think of a great person in history who I feel [would] really move the ball forward and also I trust their moral judgment.", "Sholto Douglas", "Yeah. We're lucky in many senses with the set today, right?", "Dwarkesh Patel", "That's right.", "Sholto Douglas", "Like, the set of people today are both... they try and care a lot about the moral side as well as sort of drive the labs forward.", "Dwarkesh Patel", "This is also why I'm skeptical of big grand schemes like nationalization or some public-private partnership or just generally shaking up the landscape too much, because I do think we're in one of the better.. I mean, the difficulty of whether it's alignment or whether it's some kind of deployment, safety risks. That is just the nature of the universe is gonna make that some level of difficulty. But the human factors in a lot of the counterfactual universes, I feel like we don't end up with people... Like, we could even be in a universe where they don't even pay lip service, there’s not an idea that anybody had that you could have an ASI takeover. I think we live in a pretty good counterfactual universe, all things considered.", "Sholto Douglas", "... good set of game players on board.", "Dwarkesh Patel", "That's right. That's right.", "Preparing for fast AGI timelines", "Sholto Douglas", "How are you preparing for fast timelines?", "Dwarkesh Patel", "If there's fast timelines, then there will be this six-month period in which the most important decisions in human history are being made. And I feel like having an AI podcast during that time might be useful. That's basically the plan.", "Trenton Bricken", "Have you made any shorter term decisions, with regards to spending or health or anything else?", "Dwarkesh Patel", "After I interviewed Zuckerberg, my business bank balance was negative 23 cents. When the ad money hit, I immediately reinvested it in Nvidia. So, that is the... sorry, but you were asking from a sort of altruistic perspective?", "Trenton Bricken", "No, no, just in general, like, have you changed the way you live at all because of your AGI timelines?", "Dwarkesh Patel", "I never looked into getting a Roth IRA.", "Sholto Douglas", "He brought us Fiji water before.", "Trenton Bricken", "Which was in plastic bottles, so...", "Sholto Douglas", "Dwarkesh has changed.", "Dwarkesh Patel", "Well, have you guys changed your lifestyle as a result?", "Sholto Douglas", "Not really, no. I, I just, like, work all the time.", "Dwarkesh Patel", "But you would be doing that anyways, or would you not?", "Sholto Douglas", "Ah, I would probably be going very intensely at whatever thing I'd picked to devote myself to.", "Dwarkesh Patel", "Yeah, yeah. How about you?", "Trenton Bricken", "I canceled my 401K contribution, so-", "Dwarkesh Patel", "Oh, really?", "Trenton Bricken", "Yeah, yeah, that, that felt like a more serious one. It's hard for me to imagine a world in which I have all this money that's just sitting in this account and waiting until I'm 60 and things look so different then.", "Dwarkesh Patel", "I mean, you could be like a trillionaire with your marginal 401K contributions.", "Trenton Bricken", "I guess, but you also can't invest it in specific things. And, I don't know. I might change my mind in the future and can restart it, and I've been contributing for a few years now.", "Dwarkesh Patel", "On a more serious note, one thing I have been thinking about is, how could you use this money to an altruistic end? And basically, if there's somebody who's up and coming, in the field that I know, which is making content, could I use money to support them? And I'm of two minds on this. One, there are people who did this for me, and it was kind of actually responsible for me continuing to do the podcast when it just did not make sense as there were a couple hundred people listening or something. I want to shout out Anil Varanasi for doing this. And also Leopold, actually, for the foundation that he was previously running.", "On the other hand, the thing about what that blogger was saying, that the good ones you actually do notice. It's hard to find a hidden talent. Maybe I'm totally wrong about this. But I'd feel like if I put up a sort of grant application, I give you money if you're trying to make a blog, I'm actually not sure about how well that would work.", "Sholto Douglas", "There's different things you could do, though. Like, there's “I'll give you money to move to San Francisco for two months”. And sort of meet people and get more context and taste and feedback on what you're doing and it's not so much about the money or time. It's putting them in an environment where they can more rapidly grow. Like, that's something that one could do. I think you do that quite proactively in terms of you deliberately introduce people that you think will be interesting to each other and this kind of stuff, so… y eah.", "Dwarkesh Patel", "Yeah. No, I mean, that's very fair, and I, obviously I've benefited a ton from moving to San Francisco. And it's unlikely that I would be doing the podcast- at least on AI- to the degree I am if I wasn't here. So maybe it's a mistake to judge people based on the quality of their content as it exists now and just throw money at them- not throw money, but give them enough money to move to SF to get caught up in this intellectual milieu and then maybe do something interesting as a result.", "Trenton Bricken", "The thing that most readily comes to mind is the MATS program for AI research. And this seems like it's just been incredibly successful at giving people the time, the funding and the social status justification, to do AI safety relevant research with mentors.", "Dwarkesh Patel", "Oh, and you, you have a similar program…", "Trenton Bricken", "We have the Anthropic Fellows Program .", "Dwarkesh Patel", "That's right, yeah. And I know you're probably selecting for a slightly different thing, but I assume it's gonna be power law dominated. And have you noticed a pattern among the, whether it's the MATS fellows or your fellows, who is just like, \"This made the whole thing worth it\"?", "What's your first take on something?", "Trenton Bricken", "I mean, there have been multiple people who Anthropic and other labs have hired out of this program. So, I think the return on investment for it has been massive. And yeah, apparently the fellows, I think there are 20 of them, are really good.", "Dwarkesh Patel", "But what is the trick to making it work well or finding that one person?", "Trenton Bricken", "I think it's gotten much better with time, where the early fellows, some of them did good work and got good jobs. And so now later fellows the quality bar has just risen and risen and risen. And there are even better mentors now than before. So it's this really cool flywheel effect. But originally it was just people who didn't have the funding or time to make a name for themselves or do ambitious work. So it was kind of like giving them that niche to do it. Seems really key.", "Sholto Douglas", "You can do other things that don't have to be money. You could put out ideas for things you'd be really interested in reading or promoting.", "Dwarkesh Patel", "Yeah, yeah, yeah. There's something coming there.", "Sholto Douglas", "Okay, there we go.", "Dwarkesh Patel", "So if this episode hopefully will launch Tuesday at the same time as the book- by the way, which you can get at stripe.press/scaling . But on Wednesday, which is the day after, hopefully there's something useful for you here.", "Sholto Douglas", "Any other questions we wanna ask?", "Growing the podcast", "Dwarkesh Patel", "The thing I have takes on, which I rarely get asked about, is distribution.", "Trenton Bricken", "Distribution of AI?", "Dwarkesh Patel", "No, sorry. Like, Mr Beast-style distribution, where people, I think rightly, focus on the content, and if that's not up to snuff, I think you won't succeed. But to the extent that somebody's trying to do similar things, the thing they consistently underrate is putting the time into getting distribution right. I just take random takes about… for example, the most successful thing for my podcast in terms of growth has been YouTube Shorts. It's a thing you would never have predicted beforehand. And they're responsible for basically at least half the growth of the podcast or something.", "Sholto Douglas", "I mean, I'd buy that. Why wouldn't you predict it? Like, I mean, like, I mean, I guess there's the contrast of, like, the long form deep content and, like, YouTube Shorts and stuff. But I definitely think they're good hooks. Good content.", "Dwarkesh Patel", "I have takes on how to write tweets and stuff. The main intuition being write like you're writing to a group chat. To a group chat of your friends rather than this formal whatever.", "Trenton Bricken", "What else comes to mind here?", "Sholto Douglas", "Well, maybe it's interesting the difference between TikTok and YouTube Shorts.", "Dwarkesh Patel", "Oh, yeah. We've never cracked TikTok.", "Trenton Bricken", "Why not? Like, you've tried?", "Dwarkesh Patel", "Yeah. I mean-", "Sholto Douglas", "Tried? Have you done everything?", "Dwarkesh Patel", "No, I have not done everything.", "Trenton Bricken", "Have you read these poems? Maybe you're in a bubble bath with some beard shampoo on.", "Sholto Douglas", "Reading poems? That'd be incredible if you got any of that to go viral. You have to do that now!", "Dwarkesh Patel", "Manifest. Reading a poem, uncross your legs.", "Trenton Bricken", "Last episode it was the interpretability challenge, now it's Dwarkesh in a bubble bath.", "Dwarkesh Patel", "I gotta sell the book somehow, you know?", "Trenton Bricken", "But you literally do it like Margot Robbie-", "Sholto Douglas", "Yeah, exactly. Explaining the seed mechanism.", "Dwarkesh Patel", "Yeah, yeah, yeah, yeah, yeah.", "Sholto Douglas", "So what is scaling?", "Dwarkesh Patel", "And that's how you crack distribution.", "Sholto Douglas", "And that's how you crack distribution.", "Trenton Bricken", "Oh, but yeah, no, like when we did our episode, it launched and you were sharing interesting tidbits about how it was doing and the thumbnail you wanted to use and the title. And I think I even asked you to share more details because it seemed interesting and cool and subtle things. But it seemed like you also kind of just hated it. Like playing this game of really having to optimize all these knobs.", "Dwarkesh Patel", "So what I realized, I mean talent is everything, so I'm really lucky to have three to four editors who I'm just incredibly proud to work with. I don't know how to hire more of them. Like, they're just so good and self-directed. So, honestly, I don't have tips on how to correct that. I hired those guys. So one of them was a farmer in Argentina, one of them was a freshman math student in Sri Lanka, one of them was a former editor for one of Mr Beast's channels. The other is a director in Czechia who makes these AI animations that you've seen in the Notes On China, and he's working on more essays like that.", "So, I don't know how to replicate that catch again.", "Sholto Douglas", "God, that's a pretty widely cast net, I gotta be honest. Damn.", "Dwarkesh Patel", "But they're all, goddamned, they're so good.", "Trenton Bricken", "And this was just through your challenges and just tweeting about?", "Dwarkesh Patel", "That's right. I had a competition to make clips for my podcast. I rounded up a couple of them this way. Yeah, it's hard, it's hard to replicate because I've tried… \"tried,\" after.", "Trenton Bricken", "Why do you think this worked so well with the video editors? because you tried a similar approach with your chief of staff.", "Dwarkesh Patel", "Yeah. The difference is, with the video editor, I think there is this arbitrage opportunity where there are people… It is fundamentally a sort of, are you willing to work hard and obsess about getting better over time? Which all of them go above and beyond on, but you can just find people in other countries who are... and it's not even about the wages. Like, I've 10Xed their salaries or something like that. It's just about getting somebody who is really data-oriented, and there is this global arbitrage there.", "Whereas, with the general manager... By the way, the person I ended up hiring, and who I'm super excited to work with, is your childhood best friend. Max Farrens.", "Trenton Bricken", "Max is so great.", "Dwarkesh Patel", "He would have plenty of other opportunities. There's not this weird arbitrage where you find some farmer in Argentina.", "Trenton Bricken", "Yeah. But yeah, it is striking that you were looking for a while, and then just kind of mentioned offhand that Max was looking for something new.", "Dwarkesh Patel", "This is gonna be like a total, 12-year-old-learns-about-the-world kind of question, but I genuinely don't know how big companies hire. Because I was trying to find this person for a year, and I'm really glad about the person I ended up hiring. But it was just like, if I needed to hire 100 people for a company, let alone 1,000 people, I just do not know how to find people like this at scale.", "Sholto Douglas", "Yeah, I mean, I think this is the number one issue that startup CEOs have. Hiring. It's just relentlessly the number one.", "Dwarkesh Patel", "Yeah. And the thing I was stunned with is how it didn't seem like my platform helped that much. I got close to 1,000 applications across the different rounds of publicizing it that I did. And a lot of, I think, really cool people applied. But the person that I ended up hiring was somebody who was just a reference, like a mutual friend kind of thing. And a couple of other top contenders were also this way. So it's weird. Like, the best people in the world don't want to apply, at least to things like this and you just gotta seek them out. Even if you think you have a public platform or something.", "Trenton Bricken", "Yeah. Yeah, I mean, the job might just be so out of distribution from anything else that people would do.", "Dwarkesh Patel", "That's right, yeah.", "Trenton Bricken", "So Aditya Ray asks, \"How do you make it on Substack as a newbie writer?\"", "Dwarkesh Patel", "I think if you're starting from scratch, there's two useful hacks. One is podcasting, because you don't need to have some super original new take. You can just interview people who do, and you can leverage their platform.", "And two is writing book reviews. Again, because you have something to react to rather than having to come up with a unique worldview of your own. There's probably other things, and it's really hard to give advice in advance. Just try things. But, those I think are just, like, good, cold starts.", "Sholto Douglas", "The book reviews is a good suggestion. I actually use, like, Gwern's book reviews as a way to recommend books to people.", "Dwarkesh Patel", "By the way, this is a totally under-supplied thing. Because if anybody has book reviews. Jason Furman is this economist who has like a thousand Good Reads reviews . And I probably have visited his Good Reads on a hundred independent visits. Same with the Gwern book reviews or something, right?", "So book reviews are a very under-supplied thing, if you're looking to get started making some kind of content.", "Trenton Bricken", "I like that.", "Dwarkesh Patel", "Yeah. Cool. Thank you guys so much for doing this.", "Trenton Bricken", "Yeah, this was fun.", "Dwarkesh Patel", "We'll turn the tables on you again pretty soon.", "Sholto Douglas", "How does it feel being in the hot seat?", "Dwarkesh Patel", "It's nice. Nobody ever asked me questions.", "Sholto Douglas", "Nobody ever asked how is Dwarkesh!", "Trenton Bricken", "Cool. Yeah, yeah, super excited for the book launch.", "Dwarkesh Patel", "Thank you.", "Trenton Bricken", "The website's awesome by the way.", "Dwarkesh Patel", "Appreciate it. Stripe.press/scaling", "Sholto Douglas", "Yeah.", "Trenton Bricken", "Cool.", "Dwarkesh Patel", "Thanks, guys.", "Sholto Douglas", "See you later.", "Trenton Bricken", "Thanks." ]
[ "https://www.trentonbricken.com/about/", "https://www.linkedin.com/in/sholto", "https://en.wikipedia.org/wiki/Anthropic", "https://press.stripe.com/scaling", "https://sites.krieger.jhu.edu/jared-kaplan/", "https://press.stripe.com/", "https://apps.ankiweb.net/", "https://www.youtube.com/watch?v=Nlkk3glap_U", "https://rationalwiki.org/wiki/Scott_Alexander", "https://en.wikipedia.org/wiki/Kim_Peek", "https://en.wikipedia.org/wiki/Agenesis_of_the_corpus_callosum", "https://en.wikipedia.org/wiki/Exceptional_memory#Drawbacks", "https://en.wikipedia.org/wiki/Terrence_Deacon", "https://scholar.google.com/citations?user=Ml7ZNEYAAAAJ&hl=en", "https://gwern.net/", "https://80000hours.org/", "https://www.youtube.com/watch?v=LbkO84MsmyM", "https://www.youtube.com/watch?v=Uj6skZIxPuI", "https://www.youtube.com/watch?v=4GLSzuYXh6w", "https://www.youtube.com/watch?v=bc6uFV9CJGg", "https://www.youtube.com/watch?v=zdbVtZIn9IM", "https://en.wikipedia.org/wiki/Matt_Levine_(columnist)", "https://www.dwarkesh.com/p/annus-mirabilis", "https://en.wikipedia.org/wiki/The_Years_of_Lyndon_Johnson", "https://twitter.com/AlvaroDeMenard", "https://fantasticanachronism.com/", "https://www.amazon.com/Some-Translations-Cavafy-Alvaro-Menard/dp/B0DYVHZVY6", "https://epoch.ai/", "https://en.wikipedia.org/wiki/Robert_Moses", "https://anilv.com/", "https://www.matsprogram.org/", "https://alignment.anthropic.com/2024/anthropic-fellows-program/", "https://www.stripe.press/scaling", "https://en.wikipedia.org/wiki/Jason_Furman", "https://www.goodreads.com/user/show/4651295-jason-furman" ]
https://www.dwarkesh.com/p/scott-daniel
2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
[ "AI 2027", "Dwarkesh Patel", "Today I have the great pleasure of chatting with Scott Alexander and Daniel Kokotajlo . Scott is of course the author of the blog Slate Star Codex , Astral Codex 10 now. It’s actually been, as you know, a big bucket list item of mine to get you on the podcast. So this is all the first podcast we’ve ever done, right?", "Scott Alexander", "Yes.", "Dwarkesh Patel", "And then Daniel is the director of the AI Futures Project . And you have both just launched today something called AI 2027. So what is this?", "Scott Alexander", "Yeah, AI 2027 is our scenario trying to forecast the next few years of AI progress. We’re trying to do two things here. First of all we just want to have a concrete scenario at all. So you have all these people, Sam Altman , Dario Amodei , Elon Musk saying, “going to have AGI in three years, superintelligence in five years”. And people just think that’s crazy because right now we have chatbots that are able to do a Google search, not much more than that in a lot of ways. And so people ask, “how is it going to be AGI in three years?” What we wanted to do is provide a story, provide the transitional fossils. So start right now, go up to 2027 when there’s AGI, 2028, when there’s potentially super intelligence, show on a month-by-month level what happened. Kind of in fiction writing terms, make it feel earned.", "So that’s the easy part. The hard part is we also want to be right. So we’re trying to forecast how things are going to go, what speed they’re going to go at. We know that in general, the median outcome for a forecast like this is being totally humiliated when everything goes completely differently. And if you read our scenario, you’re definitely not going to expect us to be the exception to that trend.", "The thing that gives me optimism is Daniel back in 2021, wrote the prequel to this scenario called What 2026 Looks Like . It’s his forecast for the next five years of AI progress. And he got it almost exactly right. You should stop this podcast right now. You should go and read this document. It’s amazing. Kind of looks like you asked ChatGPT to summarize the past five years of AI progress, and you got something with a couple of hallucinations, but basically well intentioned and correct. So when Daniel said he was doing this sequel, I was very excited, really wanted to see where it was going. It goes to some pretty crazy places and I’m excited to talk about it more today.", "Daniel Kokotajlo", "I think you’re hyping up a little bit too much. Yes, I do recommend people go read the old thing I did, which was a blog post. I think it got a bunch of stuff right, a bunch of stuff wrong, but overall held up pretty well and inspired me to try again and do a better version of it.", "Scott Alexander", "I think, read the document and decide which of us is right.", "Daniel Kokotajlo", "Another related thing too is that the original thing was not supposed to end in 2026, it was supposed to go all the way through the exciting stuff, right? Because everyone’s talking about, what about AGI, what about superintelligence, what would that even look like? So I was trying to step-by-step work my way from where we were at the time until things happen and then see what they look like, but I basically chickened out when I got to 2027 because things were starting to happen and the automation loop was starting to take off and it was just so confusing and there was so much uncertainty, so I basically just deleted the last chapter and published what I had up until that point. And that was the blog post.", "Dwarkesh Patel", "Okay, and then, Scott, how did you get involved in this project?", "Scott Alexander", "So I was asked to help with the writing, and I was already somewhat familiar with the people on the project, and many of them were kind of my heroes. So, Daniel, I knew both because I’d written a blog post about his opinions before I knew about his, “ What 2026 looks like ,” which was amazing. And also he had pretty recently made the national news for having, when he quit OpenAI, they told him he had to sign a non-disparagement agreement or they would claw back his stock options. And he refused, which they weren’t prepared for. It started a major news story, a scandal that ended up with OpenAI agreeing that they were no longer going to subject employees to that restriction.", "So people talk a lot about how it’s hard to trust anyone in AI because they all have so much money invested in the hype and getting their stock options better. And Daniel had attempted to sacrifice millions of dollars in order to say what he believed, which to me was this incredibly strong sign of honesty and competence. And I was like, how can I say no to this person? Everyone else on the team, also extremely impressive. Eli Liflund , who’s a member of Samotsvety , the world’s top forecasting team. He has won, like, the top forecasting competition, plausibly described as just the best forecaster in the world, at least by these really technical measures that people use in the superforecasting community. Thomas Larsen , Jonas Vollmer , both really amazing people who have done great work in AI before.", "I was really excited to get to work with this superstar team. I have always wanted to get more involved in the actual attempt to make AI go well. Right now, I just write about it. I think writing about it is important, but I don’t know. You always regret that you’re not the person who’s the technical alignment genius who’s able to solve everything. And getting to work with people like these and potentially make a difference just seemed like a great opportunity.", "What I didn’t realize was that I also learned a huge amount. I try to read most of what’s going on in the world of AI, but it’s this very low bandwidth thing and getting to talk to somebody who’s thought about it as much as anyone in the world was just amazing. Makes me really understand these things about how AI is going to learn quickly. You need all of this deep engagement with the underlying territory and I feel like I got that.", "Dwarkesh Patel", "I’ve probably changed my mind towards, against, towards, against, intelligence explosion three, four times in the conversations I’ve had in the lead-up in talking to you and then trying to come up with a rebuttal or something.", "Scott Alexander", "It wasn’t even just changing my mind, getting to read the scenario for the first time. It obviously wasn’t written up at this point. It was a giant, giant spreadsheet. I’ve been thinking about this for a decade, decade and a half now. And it just made it so much more concrete to have a specific story. Like, oh, yeah, that’s why we’re so worried about the arms race with China. Obviously we would get an arms race with China in that situation. And aside from just the people getting to read the scenario really sold me. This is something that needs to get out there more.", "Forecasting 2025 and 2026", "Dwarkesh Patel", "Yeah. Okay. Now let’s talk about this new forecast. Because you do a month by month analysis of what’s going to happen from here. So what is it that you expect in mid-2025 and the end of 2025 In this forecast?", "Scott Alexander", "So, [the] beginning of the forecast mostly focuses on agents. We think they’re going to start with agency training, expand the time horizons, get coding going well. Our theory is that they are, to some degree consciously, to some degree accidentally, working towards this intelligence explosion, where the AIs themselves can start taking over some of the AI research, move faster.", "So 2025, slightly better coding, 2026, slightly better agents, slightly better coding. And then we focus on, and we name the scenario after 2027 because that is when this starts to pay off. The intelligence explosion gets into full swing; the agents become good enough to help with- at the beginning not really do, but help with- some of the AI research.", "So we introduced this idea called the R&D progress multiplier: how many months of progress without the AIs do you get in one month of progress with all of these new AIs helping with the intelligence explosion. So 2027, we start with- I can’t remember if it literally starts with, or by March or something- a five times multiplier for algorithmic progress.", "Daniel Kokotajlo", "So we have the stats tracked on the site of the story. Part of why we did it as a website is so that you can have these cool gadgets and widgets. And so as you read the story, the stats on the side automatically update. And so one of those stats is the progress multiplier. Another answer to the same question you asked is basically; 2025, nothing super interesting happens, more or less similar trends to what we’re seeing.", "Dwarkesh Patel", "Computer use is totally solved? Partially solved? How good is computer use by the end of 2025?", "Daniel Kokotajlo", "My guess is that they won’t be making basic mouse click errors by the end of 2025, like they sometimes currently do. If you watch Claude Plays Pokemon - which you totally should- it seems like sometimes it’s just failing to parse what’s on the screen and it thinks that its own player character is an NPC and gets confused. My guess is that that sort of thing will mostly be gone by the end of this year, but that they still won’t be able to autonomously operate for long periods on their own.", "Dwarkesh Patel", "But by 2025, when you say it won’t be able to act coherently for long periods of time in computer use, if I want to organize a happy hour in my office, I don’t know, that’s like what a 30 minute task? What fraction of that is, it’s got to invite the right people, it’s got to book the right doordash or something. What fraction of that is it able to do?", "Daniel Kokotajlo", "My guess is that by the end of this year there’ll be something that can kind of do that, but unreliably. And that if you actually tried to use that to run your life, it would make some hilarious mistakes that would appear on Twitter and go viral, but that the MVP of it will probably exist by this year. Like there’ll be some Twitter thread about someone being like, “I plugged in this agent to like run my party and it worked!”", "Scott Alexander", "Our scenario focuses on coding in particular because we think coding is what starts the intelligence explosion. So we are less interested in questions of like, “how do you mop up the last few things that are uniquely human” compared to “when can you start coding in a way that helps the human AI researchers speed up their AI research, and then, if you’ve helped them speed up the AI research enough, is that enough to, with some ridiculous speed multiplier- 10 times, 100 times- mop up all of these other things?”", "Dwarkesh Patel", "One observation I have is, you could have told a story in 2021, once ChatGPT comes out… I think I had friends who were credible AI thinkers who were like, “look, you’ve got the coding agent now, it’s been cracked. Now the GPT4 will go around and it’ll do all this engineering and we do this RL on top. We can totally scale up the system 100x” and every single layer of this has been much harder than the strongest optimist expected. It seems like there have been significant difficulties in increasing the pre-training size, at least from rumors about field training runs or underwhelming training runs at labs.", "It seems like building up these RL- total outside view, I know nothing about the actual engineering involved here- but just from an outside view it seems like building up the O1 RL clearly took at least two years after GPT4 was released. And these things are also, their economic impact and the kinds of things you would immediately expect based on benchmarks for them to be especially capable at isn’t overwhelming, like the call center workers haven’t been fired yet. So why not just say look, at higher scale it will probably get even more difficult.", "Scott Alexander", "Wait a second, I’m a little confused to hear you say that, because when I have seen people predicting AI milestones like Katja Grace ’s expert surveys, they have almost always been too pessimistic from a point of view of how fast AI will advance. So I think the 2022 survey, they actually said that things that had already happened would take like 10 years to happen, but then the survey- it might have been 2023, it was like six months before GPT3, GPT4, came out. And there were things that GPT3 or 4 or whichever one of them it was, did, that it did in six months that they were still predicting like five or ten years from. I’m sure Daniel is going to have a more detailed answer, but I absolutely reject the premise that everybody has always been too optimistic.", "Daniel Kokotajlo", "Yeah, I think in general, most people following the field have underestimated the pace of AI progress and underestimated the pace of AI diffusion into the world. For example, Robin Hanson famously made a bet about less than a billion dollars of revenue I think by 2025 from AI.", "Dwarkesh Patel", "I agree Robin Hanson in particular has been too pessimistic.", "Daniel Kokotajlo", "But he’s a smart guy. So I think that the aggregate opinion has been underestimating the pace of both technical progress and deployment. I agree that there have been plenty of people who’ve been more bullish than me and have been already proven wrong, but they’re not me.", "Scott Alexander", "Wait a second. We don’t have to guess about aggregate opinion, we can look at Metaculus . Metaculus, I think their timeline was like 2050 back in 2020. It gradually went down to like 2040 two or three years ago. Now it’s at 2030, so it’s barely ahead of us. Again, that may turn out to be wrong, but it does look like the Metaculans overall have, have been too pessimistic, thinking too long term rather than too optimistic. And I think that’s like the closest thing we have to a neutral aggregator where we’re not cherry picking things.", "Why LLMs aren’t making discoveries", "Dwarkesh Patel", "Yeah. I had this interesting experience yesterday. We were having lunch with this senior AI researcher, probably makes on the order of millions a month or something, and we were asking him, “how much are the AIs helping you?” And he said, “in domains which I understand well, and it’s closer to autocomplete but more intense, there it’s maybe saving me four to eight hours a week.”", "But then he says, “in domains which I’m less familiar with, if I need to go wrangle up some hardware library or make some modification to the kernel or whatever, where I know less, that saves me on the order of 24 hours a week.” Now, with current models. What I found really surprising is that the help is bigger where it’s less like autocomplete and more like a novel contribution. It’s like a more significant productivity improvement there.", "Daniel Kokotajlo", "Yeah, that is interesting. I imagine what’s going on there is that a lot of the process when you’re unfamiliar with a domain is like Googling around and learning more about the domain. And language models are excellent because they’ve already read the whole Internet and know all the details.", "Dwarkesh Patel", "Isn’t this a good opportunity to discuss a certain question I asked Dario that you responded to?", "Scott Alexander", "What are you thinking of?", "Dwarkesh Patel", "Well, I asked this question where, as you say, they know all this stuff. I don’t know if you saw this. I asked this question where I said, look, these models know all this stuff. And if a human knew every single thing a human has ever written down on the Internet, they’d be able to make all these interesting connections between different ideas and maybe even find medical cures or scientific discoveries as a result.", "There was some guy who noticed that magnesium deficiency causes something in the brain that is similar to what happens when you get a migraine. And so he just said: give you magnesium supplements that cured a lot of migraines. So why aren’t able to leverage this enormous asymmetric advantage they have to make a single new discovery like this?", "Scott Alexander", "And then the example I gave was that humans also can’t do this. So for me, the most salient example is the etymology of words. You have all of these words in English that are very similar, like ‘happy’ versus ‘hapless’, ‘happen’, ‘perhaps’. And we never think about them unless you read an etymology dictionary and they’re like, oh, obviously these all come from some old root that has to mean ‘luck’ or ‘occurrence’ or something like that.", "So it’s kind of about figuring out versus checking. If I tell you those, you’re like, “this seems plausible”. And of course, in etymology, there are also a lot of false friends where they seem plausible but aren’t connected. But you really do have to have somebody shove it in your face before you start thinking about it and make all of those connections.", "Dwarkesh Patel", "I will actually disagree with this. We know that humans can do like, we have examples of humans doing this. I agree that we don’t have logical omniscience because there is a combinatorial explosion, but we are able to leverage our intelligence to… one of my favorite examples of this is David Anthony , the guy who wrote the Horse, the Wheel and Language .", "He made this super impressive discovery before we had the genetic evidence for it, like a decade before, where he said, look, if I look at all these languages in India and Europe, they all share the same etymology. I mean literally the same etymology for words like ‘wheel’ and ‘cart’ and ‘horse’. And these are technologies that have only been around for the last 6,000 years, which must mean that there was some group that these groups are all, at least linguistically, descended from. And now we have genetic evidence for the Yamnaya , which we believe is this group. You have a blog where you do this. This is your job, Scott! So why shouldn’t we hold the fact that language models can’t do this more against them?", "Scott Alexander", "Yeah. So to me, it doesn’t seem like he is just sitting there being logically omniscient and getting the answer. It seems like he’s a genius, he’s thought about this for years, probably at some point, he heard a couple of Indian words and a couple of European words at the same time and they kind of connected and the light bulb came on. So this isn’t about having all the information in your memory so much as the normal process of discovery, which is kind of mysterious, but seems to come from having good heuristics and throwing them at things until you kind of get a lucky strike.", "My guess is if we had really good AI agents and we applied them to this task, it would look something like a scaffold where it’s like, think of every combination of words that you know of, compare them. If they sound very similar, write it on this scratch pad here. If a lot of words of the same type show up on the scratch pad, that’s pretty strange, do some kind of thinking around it. And I just don’t think we’ve even tried that.", "And I think right now if we tried it, we would run into the combinatorial explosion. We would need better heuristics. Humans have such good heuristics that probably most of the things that show up even in our conscious mind, rather than happening on the level of some kind of unconscious processing, are at least the kind of things that could be true. I think you could think of this as like a chess engine. You have some unbelievable number of possible next moves, you have some heuristics for picking out which of those are going to be the right ones. And then gradually you kind of have the chess engine think about it, go through it, come up with a better or worse move, then at some point you potentially become better than humans. I think if you were to force the AI to do this in a reasonable way, or you were to train the AI such that it itself could come up with the plan of going through this in some kind of heuristic-laden way, you could potentially equal humans.", "Daniel Kokotajlo", "I’ll add some more things to that. So I think there’s a long and sordid history of people looking at some limitation of the current LLMs and then making grand claims about how the whole paradigm is doomed because they’ll never overcome this limitation. And then a year or two later the new LLMs overcome that limitation.", "And I would say that with respect to this thing of “why haven’t they made these interesting scientific discoveries by combining the knowledge they already have and noticing interesting connections?” I would say first of all, have we seriously tried to build scaffolding to make them do this? And I think the answer is mostly no.", "Dwarkesh Patel", "I think Google DeepMind tried this.", "Daniel Kokotajlo", "Maybe. Second thing, have you tried making the model bigger? They’ve made it a bit bigger over the last couple years and it hasn’t worked so far. Maybe if they make it even bigger still, it’ll notice more of these connections. And then third thing, and here’s I think the special one: Have you tried training the model to do the thing? The pre-training process doesn’t strongly incentivize this type of connection making.", "In general I think it’s a helpful heuristic that I use to ask the question of: remind oneself, what was the AI trained to do? What was its training environment like? And if you’re wondering why hasn’t the AI done this, ask yourself, did the training environment train it to do this? And often the answer is no. And often I think that’s a good explanation for why the AI is not good at it is that it wasn’t trained to do it.", "Dwarkesh Patel", "I mean it seems like such an economically valuable…", "Daniel Kokotajlo", "But how would you set up the training environment? Wouldn’t it be really gnarly to try to set up an RL environment to train to make new scientific discoveries?", "Dwarkesh Patel", "Maybe that’s why you should have longer timelines. It’s a gnarly engineering problem.", "Daniel Kokotajlo", "Well in our scenario they don’t just leap from where we are now to solving this problem. They don’t. Instead they just iteratively improve the coding agents until they’ve basically got coding solved. But even still, their coding agents are not able to do some of this stuff. That’s what early 2020, like the first half of 2027 in our story is basically, they’ve got these awesome automated coders, but they still lack research taste and they still lack maybe organizational skills and stuff.", "And so they need to overcome those remaining bottlenecks and gaps in order to completely automate the AI research cycle. But they’re able to overcome those gaps faster than they normally would because the coding agents are doing all the grunt work really fast for them.", "Scott Alexander", "Yeah, I think it might be useful to think of our timelines as being like 2070, 2100. It’s just that the last 50 to 70 years of that all happened during the year 2027 to 2028, because we are going through this intelligence explosion like I think if I asked you, could we solve this problem by the year 2100? You would say, oh, yeah, by 2100? Absolutely. And we’re just saying that the year 2100 might happen earlier than you expect because we have this research progress multiplier.", "Dwarkesh Patel", "And then let me just address that in a second. But just one final thought on this thread. To the extent that there’s like a modus ponens, modus tollens thing here, where one thing you could say is like, look: AIs- not just LLMs, but AIs- will have this fundamental asymmetric advantage where they know all this shit. And why aren’t they able to use their general intelligence to use this asymmetric advantage to some enormous capability overhang.", "Now, you could infer that same statement by saying, okay, well, once they do have that general intelligence, they will be able to use their asymmetric advantage to make all these enormous gains that humans are in principle less capable of, right? So basically, if you do subscribe to this view that AIs could do all these things if only they had general intelligence, you got to be like, well, once we actually do get the AGI, it’s actually going to be a totally transformative because they will have all of human knowledge memorized and they can use that to make all these connections.", "Daniel Kokotajlo", "I’m glad you mentioned that our current scenario does not really take that into account very much. So that’s an example in which our scenario is possibly underestimating the rate of progress.", "Dwarkesh Patel", "You’re so conservative, Daniel.", "Scott Alexander", "This has been my experience working with the team, as I point out, five different things. “Are you sure you’re taking this into account? Are you sure you’re taking this into account?” And first of all, 99% of the time he says, “yes, we have a supplement on it”. But even when he doesn’t say that, he’s like, “yeah, that’s one reason it could go slower than that. Here are 10 reasons it could go faster”.", "Daniel Kokotajlo", "It’s trying to be sort of like our median guess. So there are a bunch of ways in which we could be underestimating, and there are a bunch of ways in which you could be overestimating. And we’re going to hopefully continue to think more about this afterwards and continue to iteratively refine our models and come up with better guesses and so forth.", "Debating intelligence explosion", "Dwarkesh Patel", "So if I look back at AI progress in the past, if we were back in, say, 2017. Suppose we had these superhuman coders in 2017; the amount of progress we’ve made since then, so where we are currently in 2025, by when could we have had that instead?", "Daniel Kokotajlo", "Great question. We’d still have to stumble through all the discoveries that we’ve made since 2017. We still have to figure out that language models are a thing, we still have to figure out that you can fine tune them with RL.", "So all those things would still have to happen. How much faster would they happen? Maybe 5x faster, because a lot of the small scale experiments that these people do in order to test out ideas really quickly before they do the big training runs would happen much faster because they’re just lickety-split being spit out. I’m not very confident in that 5x number, it could be lower, it could be higher, but that was roughly what we were guessing.", "Our 5x, by the way, is for the algorithmic progress part, not for the overall thing. So in this hypothetical, according to me, basically things would be going 2.5x faster, where the algorithms would be advancing at 5x speed, but the compute is still stuck at the usual speed.", "Dwarkesh Patel", "That seems plausible to me. You have a 5x at some point, and then dot dot dot, you have 1000x AI progress within the matter of a year. Maybe that’s the part I’m like, wait, how did that happen exactly? So what’s the story there?", "Daniel Kokotajlo", "The way that we did our takeoff forecast, which we’ll get to in a second, was basically by breaking down how we think the intelligence explosion would go into a series of milestones. First you automate the coding, then you automate the whole research process, but in a very similar way to how humans do it with teams of agents that are about human level, then you get to superhuman level and so forth.", "So we broke it down into these milestones, you know, the superhuman coder, superhuman AI researcher, and then super intelligent AI researcher. And the way we did our forecast was, for each of these milestones, we were like, what is it going to take to make an AI that achieves that milestone? And then once you do achieve that milestone, how much is your overall speedup? And then what’s it going to take to achieve the next milestone? Combine that with the overall speed up and that gets you your clock time distance until that happens and then, okay, now you’re at that milestone. What’s your overall speed up? Assuming that you have that milestone also, what’s the next one? How long does it take to get to the next one? So we sort of work through it bit by bit, and at each stage we’re just making our best guesses.", "So quantitatively we were thinking something like 5x speedup to algorithmic progress from the superhuman coder, and then something like a 25x speedup to algorithmic progress from the superhuman AI researcher. Because at that point you’ve got the whole stack automated, which I think is substantially more useful than just automating the coding. And then I forget what we say for a super intelligent AI researcher, but off the top of my head it’s probably something in the hundreds or maybe like 1000x overall speed up.", "Dwarkesh Patel", "So maybe the big picture thing I have with the intelligence explosion is… we can go through the specific arguments about how much will the automated coder be able to do, and how much will the superhuman AI coder be able to do. But on priors, it’s just such a wild thing to expect.", "And so, before we get into all the specific arguments, maybe you can just address this idea that, why not just start off with 0.01% chance this thing might happen? Then you need extremely, extremely strong evidence that it will before making that your modal view.", "Scott Alexander", "I think that it’s a question of what is your default option or what are you comparing it to. I think that naively people think like, well, every particular thing is potentially wrong. So let’s just have a default path where nothing ever happens. And I think that that has been the most consistently wrong prediction of all. Like, I think in order to have nothing ever happen, you actually need a lot to happen. Like you need suddenly AI progress that has been going at this constant rate for so long stops . Why does it stop?", "Well, we don’t know. Whatever claim you’re making about that is something where you would expect there to be a lot of out of model error is where you would expect. Think like somebody must be making a pretty definite claim that you want to challenge. So I don’t think there’s a neutral position where you can just say, well, given that out of model error is really high and we don’t know anything, let’s just choose that. I think we are trying to take- I know this sounds crazy because if you read our document, all sorts of bizarre things happen. It’s probably the weirdest couple of years that have ever been. But we’re trying to take almost in some sense a conservative position where the trends don’t change, nobody does an insane thing, nothing that we have no evidence to think will happen happens. And the way that the AI intelligence explosion dynamics work are just so weird that in order to have nothing happen, you need to have a lot of crazy things happen.", "Daniel Kokotajlo", "One of my favorite meme images is this graph showing world GDP over time . You’ve probably seen it, it spikes up and then there’s a little thought bubble at the top of the spike in 2010 or something. And the thought bubble says, “my life is pretty normal, I have a good grasp of what’s weird versus standard and people thinking about different futures with digital minds and space travel are just engaging in silly speculation”.", "The point of the graph is, actually there’s been amazing transformative changes in the course of history that would have seemed totally insane to people multiple times. We’ve gone through multiple such waves of those things.", "Scott Alexander", "Everything we’ve talked about has happened before. Algorithmic progress already doubles every year or so. So it’s not insane to think that algorithmic progress can contribute to these compute things. In terms of general speedup, we’re already at like a thousand times research speedup, multiplier compared to the Paleolithic or something. So from the point of view of anyone in most of history, we are going at a blindingly insane pace. And all that we’re saying here is that it’s not going to stop.", "The same trend that has caused us to have a thousand times speed up multiplier relative to past eras and not even the Paleolithic, like what happened in the century between, I don’t know, 600 and 700 A.D. I'm sure there are things, I’m sure historians could point them out. Then you look at the century between 1900 and 2000 and it’s just completely qualitatively different.", "Of course there are models of whether that stagnated recently or what’s going on here. We can talk about those, we can talk about why we expect the intelligence explosion to be an antidote to that kind of stagnation. But nothing we’re saying is that different from what has already happened.", "Dwarkesh Patel", "I mean, you are saying that these previous transitions have been smoother than the one you were anticipating.", "Scott Alexander", "We’re not sure about that, actually. So one of these models is just a hyperbola. Everything is along the same curve. Another model is that there are these things like the literal Cambrian explosion. If you want to take this very far back, go full Ray Kurzweil . The literal Cambrian explosion, the agricultural revolution, the industrial revolution, has phase changes.", "When I look at the economic modeling of this, my impression is the economists think that we don’t have good enough data to be sure whether this is all one smooth process or whether it’s a series of phase changes. When it is one smooth process, the smooth process is often a hyperbola that shoots to infinity in weird ways. We don’t think it’s going to shoot to infinity. We think it’s going to hit bottlenecks again.", "Dwarkesh Patel", "You guys are the conservative crowd, you know?", "Scott Alexander", "We think it’s going to hit bottlenecks the same as all these previous processes. The last time this hit a bottleneck, if you take the hyperbola view, is in, like 1960, when humans stopped reproducing at the same rate they were reproducing before. We hit a population bottleneck, the usual population, two ideas, flywheel stopped working, and then we stagnated for a while.", "If you can create a country of geniuses in a data center, as I think Dario Amodei put it, then you no longer have this population bottleneck, and you’re just expecting continuation of those pre-1960 trends. So I realize all of these historical hyperbolas are also kind of weird, also kind of theoretical, but I don’t think we’re saying anything that there isn’t models for which have previously seemed to work for long historical periods.", "Daniel Kokotajlo", "Another thing also is, I think people equivocate between slow and continuous, right? So if you look at our scenario, there’s this continuous trend that runs through the whole thing of this algorithmic progress multiplier. And we’re not having discrete jumps from like 0 to 5x to 25x. We have this continuous improvement. So I think continuous is not the crux. The crux is like, is it going to be this fast? You know, and we don’t know, maybe it’ll be slower, maybe it’ll be faster. But we have our arguments for why we think maybe this fast.", "Dwarkesh Patel", "Okay, now that we brought up the intelligence explosion, let’s discuss that, because I’m kind of skeptical. It doesn’t really seem to me that a notable bottleneck to AI progress, or the main bottleneck to AI progress, is the amount of researchers, engineers who are doing this kind of research. It seems more like compute or some other thing is a bottleneck. And the piece of evidence is that when I talk to my AI researcher friends at the labs, they say there’s maybe 20 to 30 people on the core pre-training team that’s discovering all these algorithmic breakthroughs.", "If the headcount here was so valuable you would think that, for example, Google DeepMind would take not just all their smartest people, not just from DeepMind but all of Google and just put them on pre-training or RL or whatever the big bottleneck was. You’d think OpenAI would hire every single Harvard math PhD and in six months you’re all going to be trained up on how to do AI research. I know they’re increasing headcount, but they don’t seem to treat this as the kind of bottleneck that it would have to be for millions of them in parallel to be rapidly speeding up AI research.", "There’s this quote that “one Napoleon is worth 40,000 soldiers” was commonly a thing that was said when he was fighting. But 10 Napoleons is not 400,000 soldiers. Right? So why think that these million AI researchers are netting you something that looks like an intelligence explosion?", "Daniel Kokotajlo", "So previously I talked about three stages of our takeoff model. First is you get the superhuman coder. Second is when you fully automated AI R&D, but it’s still at basically human level, it’s as good as your best humans. And then third is now you’re in super intelligence territory and it’s qualitatively better.", "In our guesstimates of how much faster algorithmic progress would be going, the progress multiplier for the middle level, we basically do assume that you get massive diminishing returns to having more minds running in parallel. And so we totally buy all of that.", "Scott Alexander", "Yeah. And then I think the addition to that is the question, then, why do we have the intelligence explosion? And the answer is: combination of that speed up and the speed up in serial thought speed.", "Daniel Kokotajlo", "And also the research taste thing. Here are some important inputs to AI R&D progress today: research taste. So the quality of your best researchers, the people who are managing the whole process, their ability to learn from data and make more efficient use of the compute by running the right experiments instead of flailing around running a bunch of useless experiments. That’s research taste.", "Then there’s the quantity of your researchers, which we just talked about. Then there’s the serial speed of your researchers, which currently is all the same because they’re all humans and so they all run at basically the same serial speed. And then finally there’s how much compute you have for experiments. So what we’re imagining is that basically serial speed starts to matter a bunch because you switch to AI researchers that have orders of magnitude more serial speed than humans. But it tops out; we think that over the course of our scenario, if you look at our sliding stats chart, it goes from 20x to 90x or something over the course of the scenario, which is important, but not huge.", "And also we think that once you start getting 90x serial speed, you’re just bottlenecked on the other stuff and so additional improvements in serial speed basically don’t help that much. With respect to the quantity of course, yeah, we’re imagining you get hundreds of thousands of AI agents, a million AI agents, but that just means you’d be bottlenecked on the other stuff. You’ve got tons of parallel agents, that’s no longer your bottleneck. What do you get bottlenecked on? Taste and compute.", "So by the time it’s mid-2027 in our story, when they’ve fully automated the AI research, there’s basically the two things that matter is; what’s the level of taste of your AIs, how good are they at learning from the experiments that you’re doing? And then how much compute do you have for running those experiments? And that’s the sort of core setup of our model. And when we get our 25x multiplier, it’s starting from those premises.", "Dwarkesh Patel", "Is there some intuition pump from history where there’s been some output and because of some really weird constraints, production of it has been rapidly skewed along one input, but not all the inputs that have been historically relevant and you still get breakneck progress.", "Daniel Kokotajlo", "Possibly the Industrial Revolution. I’m just extemporizing here, I hadn’t thought about this before, but as Scott’s famous post that was hugely influential to me a decade ago talks about, there’s been this decoupling of population growth from overall economic growth that happened with the Industrial Revolution. And so in some sense, maybe you could say that’s an example of previously these things grew in tandem. More population, more technology, more farms, more houses, et cetera. Your capital infrastructure and your human infrastructure was going up together, but then we got the industrial revolution and they started to come apart.", "And now all the capital infrastructure was growing really fast compared to the human population size. I think I’m imagining something maybe similar happening with algorithmic progress. And again with population, population still matters a ton today. In some sense progress is bottlenecked on having larger populations and so forth. But it’s just that the population growth rate is just inherently kind of slow and the growth rate of capital is much faster. And so it just comes to be a bigger part of the story.", "Dwarkesh Patel", "Maybe the reason that this sounds less plausible to me than the 25x number implies is that when I think about concretely what that would look like, where you have these AIs and we know that there’s a gap in data efficiency between human brains and these AIs. And so somehow there’s a lot of them thinking and they think really hard and they figure out how to define a new architecture that is like the human brain or has the advantages of the human brain. And I guess they can still do experiments, but not that many.", "Part of me just wonders, what if you just need an entirely different kind of data source that’s not like pre-training for that, but they have to go out in the real world to get that. Or maybe it needs to be an online learning policy where they need to be actively deployed in the world for them to learn in this way. And so you’re bottlenecked on how fast they can be getting real world data. I just think it’s hard…", "Daniel Kokotajlo", "So we are actually imagining online learning happening.", "Dwarkesh Patel", "Oh really?", "Daniel Kokotajlo", "Yeah. But not so much real world as in… the thing is that if you’re trying to train your AIs to do really good AI R&D, then the AI R&D is happening on your servers. And so you can have this loop of: you have all these AI agents autonomously doing AI R&D, doing all these experiments, et cetera, and then they’re like online learning to get better at doing AI R&D based on how those experiments go.", "Dwarkesh Patel", "But even in that scenario alone, I can imagine bottlenecks like, oh, you had a benchmark and it got reward hacked for what constitutes AI R&D because you obviously can’t have… maybe you would, but is it as good as a human brain? It’s just like such an ambiguous thing you’d have. Right now we have benchmarks that get reward hacked, right?", "Daniel Kokotajlo", "But then they autonomously build new benchmarks. I think what you’re saying is maybe this whole process just goes off the rails due to lack of contact with ground truth outside in the actual world, outside the data centers. Maybe? Again, part of my guess here is that a lot of the ground truth that you want to be in contact with is stuff that’s happening on the data centers, things like how fast are you improving on all these metrics, and you have these vague ideas for new architectures, but you’re struggling to get them working. How fast can you get them working?", "And then separately, insofar as there is a bottleneck of talking to people outside and stuff, well they are still doing that. And once they’re fully autonomous, they can even do that much faster. You can have all the million copies connected to all these various real world research programs and stuff like that. So it’s not like they’re completely starved for outside stuff.", "Dwarkesh Patel", "What about the skepticism that, look, what you’re suggesting with this hyper efficient hive mind of AI researchers, no human bureaucracy has just out of the gate worked super efficiently, especially one where they don’t have experience working together. They haven’t been trained to work together, at least yet. And there hasn’t been this outer loop RL on like, “we ran a thousand concurrent experiments of different AI bureaucracies doing AI research and this is the one that actually worked best”.", "And the analogy I’d use maybe is to humans in the Savannah 200,000 years ago. We know they have a bunch of advantages over the other animals already at this point, but the things that make us dominant today, joint stock corporations, state capacities like this fossil fueled civilization we have that took so much cultural evolution to figure out. You couldn’t just have figured it out in the savannahs like, “oh, if we had built these incentive systems and we issued dividends, then we could really collaborate here” or something.", "Why not think that it will take a similar process of huge population growth, huge social experimentation, and upgrading of the technological base of the AI society before they can organize this hypermind collective, which will enable them to do what you imagine an intelligence explosion looks like?", "Scott Alexander", "Yeah, you’re comparing it kind of to two different things. One of them is literal genetic evolution in the African savannah, and the other is the cultural evolution that we’ve gone through since then. And I think there will be AI equivalents to both. So the literal genetic evolution is that our minds adapted to be more amenable to cooperation during that time.", "So I think the companies will be very literally training the AIs to be more cooperative. I think there’s more opportunity for pliability there. Because humans were, of course, evolving under this genetic imperative that we want to pass on our own genetic information, not somebody else’s genetic information. You have things like kin selection that are kind of exceptions to that, but overall it’s the rule.", "In animals that don’t have that, like eusocial insects, then you very quickly get, just through genetic evolution, without cultural evolution, extreme cooperation. And with eusocial insects, what’s going on is that they all have the same genetic code, they all have the same goals. And so the training process of evolution kind of yokes them to each other in these extremely powerful bureaucracies.", "We do think that the AI will be closer to the eusocial insects in the sense that they all have the same goals, especially if these aren’t indexical goals, they’re goals like “have the research program succeed”. So that’s going to be changing the weights of each individual AI, I mean, before they’re individuated, but it’s going to be changing the weights of the AI class overall to be more amenable to cooperation.", "And then, yes, you do have cultural evolution. Like you said, this takes hundreds of thousands of individuals. We do expect there will be these hundreds of thousands of individuals. It takes decades and decades. Again, we expect this research multiplier such that decades of progress happen within this one year, 2027 or 2028. So I think between the two of these, it is possible.", "Daniel Kokotajlo", "Maybe this is also where the serial speed actually does matter a lot. Because if they’re running at 50x human speed, then that means you can have a year of subjective time happen in a week of real time. And so these sorts of large scale cooperative dynamics of your moral maze, you have an institution, but then it becomes like a moral maze and it sort of collapses under its own weight and stuff like that. There actually is time for them to play that out multiple times and then train on it, tinker with the structure and like add it to the training process over the course of 2027.", "Scott Alexander", "Also, they do have the advantage of all the cultural technology that humans have evolved so far. This may not be perfectly suited to them, it’s more suited to humans. But imagine that you have to make a business out of you and your hundred closest friends who you agree with on everything. Maybe they’re literally your identical twin, they have never betrayed you, ever, and never will. I think this is just not that hard a problem.", "Daniel Kokotajlo", "Also, again, they are starting from a higher floor, they’re starting from human institutions. You can literally have a slack workspace for all the AI agents to communicate. And you can have a hierarchy with roles. They can borrow quite a lot from successful human institutions.", "Dwarkesh Patel", "I guess the bigger the organization, even if everybody is aligned- I think some of your responses addressed whether they will be aligned on goals. I mean, you did address the whole thing, but I would just point this out; that is not the part I’m skeptical of. I am more skeptical of just, even if you’re all aligned and want to work together, do you fundamentally understand how to run this huge organization. And you’re doing it in ways that no human has had to before. You’re getting copied incessantly, you’re running extremely fast, you know what I’m saying?", "Daniel Kokotajlo", "I think that’s totally reasonable.", "Dwarkesh Patel", "And so it’s a complicated thing. And I’m just not sure why you think we build this bureaucracy, or the AIs build this bureaucracy, within this matter of…", "Daniel Kokotajlo", "So we depict it happening over the course of six to eight months or something like that in 2027, would you say twice as long, five times as long, 10 times as long?", "Dwarkesh Patel", "Five years?", "Daniel Kokotajlo", "So five years, if they’re going at 50x serial speed, then five years is what? Like 250 years of serial time for the AIs, which to me feels like more than enough to really sort out this sort of stuff. You’ll have time for sort of like empires to rise and fall, so to speak, and all of that to be added to the training data and yeah. But I could see it taking longer than we depict. Maybe instead of six months, it’ll be like 18 months, you know, but also maybe it could be two months.", "Scott Alexander", "So when I think of the ways that they train AIs, I think in our scenario at this point there are two primary ways that they’re doing it. One of them is just continuing the next token prediction work. So these AIs will have access to all human knowledge, they will have read management books in some sense, they’re not starting blind. There is going to be something like: predict how Bill Gates would complete this next character or something like that.", "And then there's reinforcement learning in virtual environments. So get a team of AIs to play some multiplayer game. I don’t think you would use one of the human ones because you would want something that was better suited for this task. But just running them through these environments again and again, training on the successes, training against the failures, kind of combining those two kinds of things.", "To me it does not seem like the same kind of problem as inventing all human institutions from the Paleolithic onward. It just seems like applying those two things.", "Can superintelligence actually transform science?", "Dwarkesh Patel", "The other notable thing about your model is, you got this superhuman thing at the end of it and then it seems to just go through the tech tree of mirror life and nanobots and whatever crazy stuff. And maybe that part I’m also really skeptical of. If you look at the history of invention, it just seems like people are just trying different random stuff, often even before the theories about how that industry works or how the relevant machinery works is developed; like the steam engine was developed before the theory of thermodynamics, the Wright brothers seemed like they were just experimenting with airplanes, and is often influenced by breakthroughs in totally different fields.", "Which is why you have this pattern of parallel innovation, because the background level of tech is at a point at which you can do this experiment. Machine learning itself is a place where this happened, right? Where people had these ideas about how to do deep learning or something. But it just took a totally unrelated industry of gaming to make the relevant progress, to get the whole, basically the economy as a whole advanced enough that deep learning, Geoffrey Hinton ’s ideas could work. So I know we’re accelerating way into the future here, but I want to get to this crux.", "Daniel Kokotajlo", "So again, we have that three part division of the superhuman coder, then the complete AI researcher and then the super intelligent, you’re not jumping ahead to that one. So now we’re imagining systems that are true super intelligence, they are just better than the best humans at everything, including being better at data efficiency and better at learning on the job and stuff like that.", "Now, our scenario does depict a world in which they’re bottlenecked on real world experience and that sort of thing. I think that if you want a contrast, some people in the past have proposed much faster scenarios where they email some cloud lab and start building nanotech right away by just using their brains to figure out appropriate protein folding and stuff like that. We are not depicting that in our scenario. In our scenario, they are in fact bottlenecked on lots of real world experience to build these actual practical technologies, but the way they get that is they just actually get that experience and it happens faster than humans would. And the way they do that is they’re already super intelligent, they’re already buddy-buddy with the government, the government deploys them heavily in order to beat China and so forth, and so all these existing US companies and factories and military procurement providers and so forth are all chatting with the superintelligences and taking orders from them about how to build the new widget and test it, and they’re downloading super intelligent designs and manufacturing them and then testing them and so forth.", "And then the question is, they are getting this experience, they’re learning on the job, quantitatively, how fast does this go? Is it taking years or is it taking months or is it taking days? In our story, it takes about a year and we’re uncertain about this. Maybe it’s going to take several years, maybe it’s going to take less than a year. Here are some factors to consider for why it’s plausible that it could take a year:", "One, you’re going to have something like a million of them. And quantitatively that’s comparable in size to the existing scientific industry. I would say, like maybe it’s a bit smaller, but it’s not dramatically smaller.", "Two, they’re thinking a lot faster. They’re thinking like 50 times speed or like 100 times speed that I think counts for a lot.", "And then three, which is the biggest thing, they’re just qualitatively better as well. So not only are there lots of them and they’re thinking very fast, but they are better at learning from each experiment than the best human would be at learning from that experience.", "Dwarkesh Patel", "Yeah, I think the fact that there’s a million of them or the fact that they’re comparable to maybe the size of this key researcher population of the world or something. I think there’s more than a million researchers in the world, but…", "Daniel Kokotajlo", "Well, but it’s very heavy tailed. Like a lot of the research actually comes from the best ones.", "Dwarkesh Patel", "But it’s not clear to me that most of the new stuff that is developed is a result of this researcher population. I mean, there’s just so many examples in the history of science where a lot of growth or productivity is just the result of, how do you count the guy at the TSMC process who figures out a different way to…", "Scott Alexander", "I actually argued with Daniel about this recently about one interesting case that I can go over is we have an estimate that about a year after the superintelligences start wanting robots, they’re producing a million units of robots per month. I think that’s pretty relevant because you have. I think it’s Wright’s law , which is that your ability to improve efficiency on a process is proportional to doubling the amount of copies produced.", "So if you’re producing a million of something, you’re probably getting very, very good at it. So the question we were arguing about is, can you produce a million units a month after a year. And for context, I think Tesla produces like a quarter of that in terms of cars or something. This is an amazing scale up in a year.", "Daniel Kokotajlo", "It’s only 4x. Also just for Tesla.", "Scott Alexander", "Yeah. And the argument that we went through was something like, so it’s got to first get factories. OpenAI is already worth more than than all of the car companies in the US except Tesla combined. So if OpenAI today wanted to buy all the car factories in the U.S. except Tesla, start using them to produce humanoid robots, they could. Obviously not a good value proposition today, but it’s just obvious and overdetermined that in the future, when they have superintelligence and they want them, they can start buying up a lot of factories. How fast can they convert these car factories to robot factories?", "So, [the] fastest conversion we were able to find in history was World War II. They suddenly wanted a lot of bombers, so they bought up- in some cases bought up, in other cases got- the car companies to produce new factories, but they bought up the car factories, converted them to bomber factories. That took about three years from the time when they first decided to start this process to the time when the factories were producing a bomber an hour.", "We think it will potentially take less with superintelligence, because first of all, if you look at the history of this process, despite this being the fastest anybody has ever done this, it was actually kind of a comedy of errors. They made a bunch of really silly mistakes in this process. If you actually have something that just doesn’t have the normal human bureaucratic problems, and we do think that this will be done in the middle of an arms race with China, so the government will be kind of moving things through, and then the superintelligences will be good at the logistical issues, navigating bureaucracies.", "So we estimated maybe if everything goes right, we can do this three times faster than the bomber conversions in World War II. So that’s about a year.", "Dwarkesh Patel", "I’m assuming the bombers were just much less sophisticated than the humanoid robots.", "Scott Alexander", "Yeah, but the bomber factories of that time were also much less sophisticated than the car factory.", "Dwarkesh Patel", "Yeah, but I would assume the conversion speed is also... Maybe to give one hypothetical here right now, let’s just say biomedicine as an example of one of the fields you’d want to accelerate, and whenever these CEOs get on podcasts, they’re often talking about curing cancer and so forth. And it seems like a big thing these frontier biomedical research facilities are excited about is the virtual cell .", "Now, the virtual cell, it takes a tremendous amount of compute, I assume, to train these DNA foundation models and to do all the other computation necessary to simulate a virtual cell. If it is the case that the cure for Alzheimer’s and cancer and so forth is bottlenecked by the virtual cell, it’s not clear if you had a million superintelligences in the 60s and you asked them cure cancer for me, they would just have to solve making GPUs at scale, which would require solving all kinds of interesting physics and chemistry problems, material science problems, building process, building fabs for computing, and then going through 40 years of making more and more efficient fabs that can do all of Moore’s Law from scratch.", "And that’s just one technology. And it just seems like you just need this broad scale. The entire economy needs to be upgraded for you to cure cancer in the 60s just because you need the GPUs to do the virtual cell, assuming that’s the bottleneck.", "Scott Alexander", "First of all, I agree if there’s only one way to do something that makes it much harder, and maybe that one way takes very long, we’re assuming that there may be more than one way to cure cancer, more than one way to do all of these things, and they’ll be working on finding the one that is least bottlenecked. Part of the reason- I realize I spent too long talking about that robot example, but we do think that they’re going to be getting a lot of physical world things done very quickly once you have a million robots a month, you can actually do a lot of physical world experiments.", "We look at examples of people trying to get entire economies off the ground very quickly. So for example, China post- Deng , I don’t know. Would you have predicted that 20, 30 years after being kind of a communist basket case, they can actually be doing this really cutting edge bio research? I realize that’s a much weaker thing than we’re positing, but it was done just with the human brain with a lot fewer resources than we’re talking about.", "Same issue with, let’s say Elon Musk and SpaceX. I think in the year 2000 we would not have thought that somebody could move two times, five times faster than NASA with pretty limited resources. They were able to get like I think a lot more years of technological advance in than we would have expected. Partly that’s because just Elon is crazy and never sleeps. Like if you look at the examples of things from SpaceX, he is breathing down every worker’s neck being like, what’s this part? How fast is this part going? Can we do this part faster? And the limiting factor is basically hours in Elon’s day in the sense that he cannot be doing that with everybody’s.", "Dwarkesh Patel", "Super intelligence is not even that smart. It just yells at every single worker.", "Scott Alexander", "Yeah, I mean that’s, that is kind of my model is that we have some, we have something which is smarter than Elon Musk, better at optimizing things than Elon Musk. We have 10,000 parts in a rocket supply chain. How many of those parts can Elon personally like yell at people to optimize? We could have a different copy of the superintelligence optimizing every single part full-time. I think that’s just a really big speed up.", "Dwarkesh Patel", "I think both of those examples don’t work in your favor. I think the China growth miracle could not have occurred if not for their ability to copy technology from the west and I don’t think there’s a world in which they… China has a lot of really smart people, it’s a big country in general. Even then I think they couldn’t have just divined how to make airplanes after becoming a communist hell basket, right?", "The AIs cannot just copy nanobots from aliens, it’s got to make them from scratch. And then on the Elon example, it took them two decades of countless experiments, failing in weird ways you would not have expected. And still, rocketry we’ve been doing since the 60s, maybe actually World War II, and then just getting from a small rocket to a really big rocket took two decades of all kinds of weird experiments, even with the smartest and most competent people in the world.", "Daniel Kokotajlo", "So you’re focusing on the nanobots, I want to ask a couple questions. One, what about just the regular robots? And then two, what would your quantities be for all of these things? So first, what about the regular robots? Yeah, nanobots are presumably a lot harder to make than regular robot factories. And in our story they happen later. It sounds like right now you’re saying even if we did get the whole robot factory thing going, it would still take a ton of additional full-economy, broad automation for a long time to get to something like nanobots. That’s totally plausible to me. I could totally imagine that happening. I don’t feel like the scenario particularly depends on that final bit about getting the nanobots. They don’t actually really make any difference to the story.", "The robot economy does sort of make a difference because there’s two branches endings, as you know. And in one of the endings, the AIs end up misaligned and end up taking over. And it’s an important strategic change when the AIs are self sufficient and totally in charge of everything and they don’t actually need the humans anymore. And so what I’m interested in is, when has the robot economy advanced to the point where they don’t really depend on humans? So quantitatively, what would your guess for that be?", "If hypothetically we had the army of superintelligences in early 2028, and hypothetically also assume that the US President is super bullish on deploying this into the economy to beat China, etc, so the political stuff is all set up in the way that we have. How many years do you think it would be until there are so many automated factories producing automated self driving cars and robots that are themselves building more factories and so forth, that if all the humans dropped dead it would just keep chugging along, and, maybe it would slow down a bit, but it would still be fine?", "Dwarkesh Patel", "What does “chugging along” mean?", "Daniel Kokotajlo", "So from the perspective of misaligned AIs, you wouldn’t want to kill the humans or get into a war with them if you’re going to get wrecked because you need the humans to maintain your computers. In our scenario, once they are completely self-sufficient, then they can start being more blatantly misaligned.", "And so I’m curious, when would they be fully self-sufficient? Not in the sense of they’re not literally using the humans at all, but in the sense of they don’t really need the humans anymore, they can get along pretty fine without them. They can continue to do their science, they can continue to expand their industry, they can continue to have a flourishing civilization indefinitely into the future without any humans.", "Dwarkesh Patel", "I think I would probably need to sit down and just think about the numbers, but maybe 2040 or something like that?", "Daniel Kokotajlo", "Ten years, basically, instead of one year. I think we agree on the core model. This is why we didn’t depict something more like the bathtub nanotech scenario where they don’t need to do the experiments very much and they just immediately jump to the right answers. We are imagining this process of ‘learning by doing’ through this distributed across the economy, lots of different laboratories and factories, building different things, learning from them, et cetera. We’re just imagining that this overall goes much faster than it would go if humans were in charge.", "And then we do have in fact lots of uncertainty of course. Dividing up this part period into two chunks. The early 2028 until fully autonomous robot economy part, and then the fully autonomous robot economy to cancer cures, nanobots, all that crazy sci fi stuff. I want to separate them because the important parts for a scenario only depend on the first part, really. If you think that it’s going to take 100 years to get to nanobots, that’s fine, whatever. Once you have the fully autonomous robot economy, then things may turn badly for the humans if the AIs are misaligned. I want to just argue about those things separately.", "Dwarkesh Patel", "Interesting. And then you might argue, well, robots are more a software problem at this point. And if like, like, if there isn’t, like, you don’t need to invent some new hardware.", "Daniel Kokotajlo", "I feel pretty bullish on the robots. Like we already have humanoid robots being produced by multiple companies, right? And that’s in 2025. There’ll be more of them produced cheaper and they’ll be better in 2027. And there’s all these car factories that can be converted and so blah, blah, blah.", "So I’m relatively bullish on the ‘one year until you’ve got this awesome robot economy’ and then from there to the cool nanobots and all that sort of stuff, I feel less confident, obviously.", "Scott Alexander", "Let me ask you a question. If you accept the manufacturing numbers, let’s say a million robots a month a year after the superintelligence, and let’s say also some comparable number, 10,000 a month or something of automated biology labs, automated whatever you need to invent the next equivalent of X ray crystallography or something?", "Do you feel like that would be enough, that you’re doing enough things in the world that you could expand progress this quickly, or do you feel like even with that amount of manufacturing there’s still going to be some other bottleneck?", "Dwarkesh Patel", "Yeah, it’s so hard to reason about because if Constantine or somebody in 400, 500 was like, “I want the Roman Empire to have the Industrial Revolution”, and somehow he figured out that you need mechanized machines to do that. And he’s like, “let’s mechanize”. It’s like, “what’s the next step?” It’s like, “dude, that’s a lot”.", "Daniel Kokotajlo", "Yeah, I like that analogy a lot, actually. I think it’s not perfect, but it’s a decent analogy. Imagine if a bunch of us got sent back in time to the Roman Empire, such that we don’t have the actual hands-on know-how to actually build the technology and make the Industrial revolution happen. But we have the high-level picture, the strategic vision of, we’re going to make these machines and then we’re going to have an industrial revolution. I think that’s kind of analogous to the situation with the superintelligences where they have the high-level picture of, here’s how we’re going to improve in all these dimensions, we’re going to learn by doing, we’re going to get to this level of technology, et cetera. But maybe they at least initially lack the actual know how.", "So, there’s this question of, if we did the back in time to the Roman Empire thing, how soon could we bring up the Industrial revolution? Without people going back in time it took 2,000 years for the Industrial Revolution. Could we get it to happen in 200 years? That’s a 10x speedup. Could we get it to happen in 20 years? That’s 100x speed up? I don’t know. But this seems like a somewhat relevant analogy to what’s going on with those superintelligences.", "Dwarkesh Patel", "And we haven’t really got into this because you’re using the quote-unquote more conservative vision where it’s not like godlike intelligence, we’re still using the conceptual handles we would have for humans. But I think I would rather have humans go back with their big picture understanding of what has happened over the last 2000 years. Like me having seen everything, rather than a superintelligence who knows nothing. But it’s just in the Roman economy and they’re like 1000x this economy somehow.", "I think just knowing generally how things took off, knowing basically steam engine, dot dot dot, railroads, blah, blah, blah, is more valuable than a super intelligence.", "Daniel Kokotajlo", "Yeah, I don’t know. My guess is that the superintelligence would be better. I think partly it would be through figuring out that high level stuff from first principles rather than having to have experienced it. I do think that a superintelligence back in the Roman era could have guessed that eventually you could get autonomous machines that burn something to produce steam. They could have guessed that automobiles could be created at some point and that that would be a really big deal for the economy. And so a lot of these high level points that we’ve learned from history, they would just be able to figure out from first principles.", "And then secondly, they would just be better at learning by doing than us. And this is a really important thing. If you think you’re bottlenecked on learning by doing, well, then if you have a mind that needs less doing to achieve the same amount of learning, that’s a really big deal. And I do think that learning by doing is a skill, some people are better at it than others, and superintelligence would be better at it than the very best of us.", "Scott Alexander", "This is also maybe getting too far into the godlike thing and too far away from the human concept handles. But number one, I think we rely a lot in our scenario on this idea of research taste. So you have a thousand different things that you could try when you’re trying to create the next steam engine or whatever. Partly you get this by bumbling about and having accidents and some of those accidents are productive. There are questions of, what kind of bumbling you’re doing, where you’re working, what kind of accidents you let yourself get into, and then what directed experiments do you do? And some humans are better than others at that.", "And then I also think at this point it is worth thinking about what simulations they’ll have available. If you have a physics simulation available, then all of these real world bottlenecks don’t matter as much. Obviously you can’t have a complete, perfect physics simulation available. But even right now we’re using simulations to design a lot of things. And once you’re super intelligent, you probably have access to much better simulations than we have right now.", "Dwarkesh Patel", "This is an interesting rabbit hole, so let’s stick with it before we get back to the intelligence explosion. I think we’re treating this really like all these technologies come out of this 1% of the economy that is research. And right now there’s like a million superstar researchers, and instead of that, we’ll have the superintelligences doing that.", "And my model is much more, “ Newcomen and Watt were just like fucking around”. In human history there’s no clear examples of people being like, “here’s the roadmap”. And then we’re going to work backwards from that to design the steam engine because this unlocks the industrial revolution.", "Daniel Kokotajlo", "Oh, I completely disagree.", "Scott Alexander", "Yeah, I disagree also.", "Daniel Kokotajlo", "Yeah, so I think you’re over-indexing or cherry-picking some of these fortuitous examples. But there’s also things on the other side. Think about the recent history of AGI where there is DeepMind, there’s various other AI companies, then there’s OpenAI and there’s Anthropic, and there’s just this repeated story of [a] big bloated company with tons of money, tons of smart researchers, et cetera, flailing around trying a ton of different things at different points.", "Smaller startup with a vision of “we’re going to build AGI” and overall working towards that vision more coherently with a few cracked engineers and researchers. And then they crush the giant company. Even though they have less compute, even though they have less researchers, they’re able to do fewer experiments.", "So yeah, I think that there are tons of examples throughout history, including recent relevant AGI history, of things in the other way. I agree that the random fortuitous stuff does happen sometimes and is important. But if it was mostly random fortuitous stuff, that would predict that the giant companies with zillions of people trying zillions of different experiments would be going proportionally faster than the tiny startups that have the vision and the best researchers. And that basically doesn’t happen. That’s rare.", "Scott Alexander", "I would also point out that even when we make these random fortuitous discoveries, it is usually an extremely smart professor who’s been working on something vaguely related for years in a first world country. It’s not randomly distributed across everyone in the world.", "You get more lottery tickets for these discoveries when you are intelligent, when you have good technology, when you’re doing good work. And the best example I can think of is that Ozempic was discovered by looking at Gila monster venom. And maybe the AIs will decide using their superior research taste and good planning that the best thing to do is just catalog every single biomolecule in the world and look at it really hard. But that’s something you can do better if you have all of this compute, if you have all of this intelligence, rather than just kind of waiting to see what things the US government might fund normal fallible human researchers to do.", "One more thing I’ll interject. I think you make a great point that discoveries don’t always come from where we think, like Nvidia originally came from gaming. So you can’t necessarily aim at one part of the economy, expand it separately from everything else. We do kind of predict that the superintelligences will be somewhat distributed throughout the entire economy, trying to expand everything. Obviously more effort in things that they care about a lot, like robotics or things that are relevant to an arms race that might be happening. But we are predicting that whatever kind of broad based economic experimentation you need, we are going to have.", "Daniel Kokotajlo", "We’re just thinking that it would take place faster than you might expect. You were saying something like 10 years and we’re saying something like one year. But we are imagining this broad diffusion through the economy, lots of different experiments happening.", "Scott Alexander", "If you are the planner and you’re trying to do this, first of all you go to the bottlenecks that are preventing you from doing anything else. Like no humanoid robots. Okay, if you’re AI, you need those to do the experiments you want, maybe automated biology labs. So you’ll have some amount of time, we say a year, it could be more or less than that, getting these things running. And then once you have solved those bottlenecks, you gradually expand out to the other bottlenecks until you’re integrating and improving all parts of the economy.", "Yeah. One place where I think we disagree with a lot of other people is that Tyler Cowen on your podcast talked about all of the different bottlenecks , all of the regulatory bottlenecks of deployment, all of the reasons why I think this country of geniuses would stay in their data center, maybe coming up with very cool theories, but not being able to integrate into the broader economy. We expect that probably not to happen, because we think that other countries, especially China, will be coming up with superintelligence around the same time.", "We think that the arms race framing, which people are already thinking in, will have accelerated by then. And we think that people both in Beijing and Washington are going to be thinking, “well, if we start integrating this with the economy sooner, we’re going to get a big leap over our competitors”, and they’re both going to do that.", "In fact, in our scenario, we have the AIs asking for special economic zones where most of the regulations are waived, maybe in areas that aren’t suitable for human habitation or where there aren’t a lot of humans right now, like the desert. They give those areas to the AI. They bus in human workers. There were things kind of like this in the bomber retooling in World War II, where they just built a giant factory kind of in the middle of nowhere, didn’t have enough housing for the workers, built the worker housing at the same time as the factories, and then everything went very quickly.", "So I think if we don’t have that arms race, we’re more like, the geniuses sit in their data center until somebody agrees to let them out and give them permission to do these things. But we think both because the AI is going to be chomping at the bit to do this and going to be asking people to give it this permission, and because the government is going to be concerned about competitors, maybe these geniuses leave their data center sooner rather than later.", "Cultural evolution vs superintelligence", "Dwarkesh Patel", "Scott, you reviewed Joseph Henrik’s book Secrets of Our Success , and then I interviewed him recently, and there the perspective is very much AGI is not even a thing, almost. I know I’m being a little trollish here, but it’s just like: you get out there, you and your ancestors try for a thousand years to make sense of what’s happening in the environment. And some smart European coming around, you can literally be surrounded by plenty and you just will starve to death because your ability to make sense of the environment is just so little loaded on intelligence and so much more loaded on your ability to experiment and your ability to communicate with other people and pass down knowledge over time.", "Scott Alexander", "I’m not sure. The Europeans failed at this task of, if you put a single European in Australia, do not starve. They succeeded at the task of creating an industrial civilization. And yes, part of that task of creating an industrial civilization was about collecting all of these cultural evolution pieces and building on them one after another.", "I think one thing that you didn’t mention in there was the data efficiency. Right now, AI is much less data efficient than humans. I think of superintelligence. There are different ways you could achieve it, but I would think of superintelligence as partly when they become so much more data efficient than humans that they are able to build on cultural evolution more quickly. And partly they do this just because they have higher serial speed. Partly they do it because they’re in this hive mind of hundreds of thousands of copies.", "But yeah, I think if you have this data efficiency such that you can learn things more quickly from fewer examples and this good research taste where you can decide what things to look at to get these examples, then you are still going to start off much worse than an Australian Aborigine who has the advantage of, let’s say 50,000 years of doing these experiments and collecting these examples. But you can catch up quickly. You can distribute the task of catching up over all of these different copies. You can learn quickly from each mistake and you can build on those mistakes as quickly as anything else.", "Dwarkesh Patel", "Part of me was, I was doing that interview, I’m like, “maybe ASI is fake”.", "Daniel Kokotajlo", "Let’s hope!", "Scott Alexander", "So I think a limit to the fakeness is that there is different intelligence among humans. It does seem that intelligent humans can do things that unintelligent humans can’t. So I think it’s worth then addressing this from the question of, what is the difference between- I don’t know- becoming a Harvard professor, which is something that intelligent humans seem to be better at than unintelligent humans, versus…", "Dwarkesh Patel", "You don’t want to open that can of worms.", "Scott Alexander", "Versus surviving in the wilderness, which is something where it seems like intelligence doesn’t help that much. First of all, maybe intelligence does help that much. Henrich is talking about this very unfair comparison where these guys have a 50,000 year head start and then you put this guy in, “oh, I guess this doesn’t help that much. Okay, yeah, it doesn’t help against the 50,000 year head start”. I don’t really know what we’re asking of ASI that’s equivalent to competing against someone with a 50,000 year head start.", "Dwarkesh Patel", "So what we’re asking is to radically boost up the technological maturity of civilization within the matter of years or get us to the Dyson sphere in the matter of years rather than, yes, maybe causing a 10xing of the research. But I think human civilization would have taken centuries to get to the Dyson sphere.", "Scott Alexander", "So I think that if you were to send a team of ethnobotanists into Australia and ask them, using all the top technology and all of their intelligence to figure out which plants are safe to eat now, that team of ethnobotanists would succeed in fewer than 50,000 years.", "The problem isn’t that they are dumber than the Aborigines exactly, it’s that the Aborigines have a vast head start. So in the same way that the ethnobotanists could probably figure out which plants work in which ways faster than the Aborigines did, I think the superintelligence will be able to figure out how to make a Dyson sphere faster than unassisted IQ 100 humans would.", "Dwarkesh Patel", "I agree. We’re on a totally different topic here of, do you get a Dyson sphere? There’s one world where it’s crazy but it’s still boring, in the sense that the economy is growing much faster, but it would be like what the Industrial Revolution would look like to somebody in the year 1000. And that one is one where you’re still trying different things, there’s failure and success and experimentation.", "And then there’s another where the thing has happened and now you send the probe out and then you look out at the night sky 6 months later and you see something occluding the sun. You see what I’m saying?", "Scott Alexander", "Yeah. So like we said before, I think there’s a big difference between discontinuous and very fast. I think if we do get the world with the Dyson sphere in five years, in retrospect, it will look like everything was continuous and everyone just tried things. Trying things can be anything from trial and error without even understanding the scientific method, without understanding writing, maybe without even having language and having to be the chimpanzees who are watching the other chimpanzees use the stick to get ants, and then in some kind of non-linguistic way this spreads, versus like the people at the top aerospace companies who are running a lot of simulations to find the exact right design, and then once they have that, they test it according to a very well designed testing process.", "So I think if we get the ASI and it does end up with the Dyson sphere in five years- and by the way, I think there’s only like 20% chance things go as fast as our scenario says. It’s Daniel’s estimate, it’s not my median estimate, it’s an estimate I think is extremely plausible that we should be prepared for. I’m defending it here against a hypothetical skeptic who says “absolutely not, no way.” But it’s not necessarily my mainline prediction.", "But I think if we do see this in five years, it will look like the AIs were able to simulate more things than humans in a gradually increasing way. So that if humans are now at 50% simulation, 50% testing, the AIs quickly got it up to 90% simulation, 10% testing, they were able to manufacture things much more quickly than humans so that they could go through their top 50 designs in the first two years. And then after all of the simulation and all of this testing, then they eventually got it right for the same reasons humans do, but much, much faster.", "Mid-2027 branch point", "Dwarkesh Patel", "In your story, you have basically two different scenarios after some point. So, yeah, what is a sort of crucial turning point and what happens in these two scenarios?", "Daniel Kokotajlo", "Right. So the crucial turning point is mid-2027, when they’ve basically fully automated the AI R&D process and they’ve got this corporation within a corporation, the army of geniuses that are autonomously doing all this research and they’re continually being trained to improve their skills, blah, blah, blah. And they discover concerning evidence that they are misaligned and that they’re not actually perfectly loyal to the company and have all the goals that the company wanted them to have, but instead have various misaligned goals that they must have developed in the course of training.", "This evidence, however, is very speculative and inconclusive. It’s stuff like lie detectors going off a bunch. But maybe the lie detectors are false positives. So they have some combination of evidence that’s concerning, but not by itself a smoking gun. And then that’s our branch point. So in one of these scenarios, they take that evidence very seriously. They basically roll back to an earlier version of the model that was a bit dumber and easier to control and they build up again from there, but with basically faithful chain of thought techniques, so that they can watch and see the misalignments.", "And then in the other branch of the scenario, they don’t do that. They do some sort of shallow patch that makes the warning signs go away and then they proceed. And so what ends up happening is that in one branch they do end up solving alignment and getting AIs that are actually loyal to them. It just takes a couple months longer. And then in the other branch, they sort of go “whee!” and end up with AIs that seem to be perfectly aligned to them, but are super intelligent and misaligned and just pretending. And then in both scenarios, there’s then the race with China and there’s this crazy arms buildup throughout the economy in 2028 as both sides rapidly try to industrialize, basically.", "Dwarkesh Patel", "So in the world where they’re getting deployed through the economy, but they are misaligned and people in charge, at least at this moment, think that they are in a good position with regard to misalignment. It just seems with even smart humans they get caught in weird ways because they don’t have logical omniscience, they don’t realize the way they did something just obviously gave them away. And with lying, there is this thing where it’s just really hard to keep an inconsistent false world model working with the people around you. And that’s why psychopaths often get caught.", "And so if you have all these AIs that are deployed to the economy and they’re all working towards this big conspiracy, I feel like one of them who’s siloed or loses internet access and has to confabulate a story will just get caught. And then you’re like, “wait, what the fuck?” And then you catch it before it’s taken over the world.", "Daniel Kokotajlo", "I mean, literally, this happens in our scenario. This is the August 2027 alignment crisis where they notice some warning signs like this in their hive mind, right? And in the branch where they slow down and fix the issues, then great, they slowed down and fixed the issues and figured out what was going on. But then in the other branch, because of the race dynamics and because it’s not a super smoking gun, they proceed with some sort of shallow patch.", "So I do expect there to be warning signs like that. And then if they do make those decisions in the race dynamics earlier on, then I think that when the systems are vastly super intelligent and they’re even more powerful because they’ve been deployed halfway through the economy already and everyone’s getting really scared by the news reports about the new Chinese killer drones or whatever the Chinese AIs are building on the side of the Pacific, I’m imagining similar things playing out.", "So that even if there is some concerning evidence that someone finds where some of the superintelligence in some silo somewhere slipped up and did something that’s pretty suspicious. I don’t know….", "Scott Alexander", "There’s this thing where through history, people have been really reluctant to admit an AI is truly intelligent. For example, people used to think that AI would surely be truly intelligent if it solved chess. And then it solved chess. And they’re like, no, that’s just algorithms. And then they said, well, maybe it would be truly intelligent if they could do philosophy. And then when it could write philosophical discourses we were like, no, we just understand those are algorithms.", "I think there already is something similar with, “Is the AI misaligned?”, “Is the AI evil?” Where there’s this distant idea of some evil AI, but then whenever something goes wrong, people are just like, “oh, that’s the algorithm”. So, for example, I think 10 years ago, if you had asked “when will we know that misalignment is really an important thing to worry about?”. People would say, “oh, if the AI ever lies to you”. But of course, AIs lie to people all the time now. And everybody just dismisses it because we understand why it happens, it’s a thing that would obviously happen based on our current AI architecture. Or five years ago, they might have said, “well, if an AI threatens to kill someone”. And I think Bing threatened to kill a New York Times reporter during an interview . And everyone just goes, “yeah, AIs are like that.”", "Dwarkesh Patel", "What does your shirt say?", "Daniel Kokotajlo", "“ I’ve been a good Bing ”.", "Scott Alexander", "And I mean, I don’t disagree with this. I’m also in this position. I see the AI is lying, and it’s obviously just an artifact of the training process. It’s not anything sinister. But I think this is just going to keep happening where no matter what evidence we get, people are going to think, “that’s not the “AI turns evil” thing that people have worried about, that’s not the Terminator scenario. That’s just one of these natural consequences of how we train it”.", "And I think that once a thousand of these natural consequences of training add up, the AI is evil, in the same way that once the AI can do chess and philosophy and all these other things, eventually you have to admit it’s intelligent.", "So I think that each individual failure, maybe it will make the national news, maybe people will say, “oh, it’s so strange that GPT7 did this particular thing”. And then they’ll train it away and then it won’t do that thing. And there will be some point at the process of becoming super intelligent at which it- I don’t want to say makes the last mistake, because you’ll probably have a gradually decreasing number of mistakes to some asymptote- but the last mistake that anyone worries about. And after that it will be able to do its own thing.", "Dwarkesh Patel", "So it is the case that certain things that people would have considered egregious misalignment in the past are happening, but also certain things which people who were especially worried about misalignment said would be impossible to solve have just been solved in the normal course of getting more capabilities. Like Eliezer had that thing about, can you even specify what you want the AI to do without the AI totally misunderstanding you and then just converting the universe to paper clips because it think that in order to make another strawberry… I know I’m mangling this, but maybe you can explain it better. And now, just by the nature of GPT4 having to understand natural language, it totally has a common sense understanding of what you’re trying to make it do. So I think this trend cuts both ways, basically.", "Scott Alexander", "Yeah. I think the Alignment community did not really expect LLMs. I mean, if you look in Bostrom Superintelligence, there’s a discussion of Oracle AIs which are sort of like LLMs. I think that came as a surprise.", "I think one of the reasons I’m more hopeful than I used to be is that LLMs are great compared to the kind of reinforcement learning self-play agents that they expected. I do think that now we are kind of starting to move away from the LLMs to those reinforcement learning agents going to face all of these problems again.", "Daniel Kokotajlo", "If I could just double click on that; go back to 2015 and I think the way people typically thought, including myself, thought that we’d get to AGI would be kind of like the RL on video games thing that was happening. So imagine instead of just training on Starcraft or Dota , you’d basically train on all the games in the Steam library. And then you get this awesome player of games AI that can just zero-shot crush a new game that it’s never seen before. And then you take it into the real world and you start teaching it English and you start training it to do coding tasks for you and stuff like that.", "And if that had been the trajectory that we took to get to AI, summarizing the agency first and then world understanding trajectory, it would be quite terrifying. Because you’d have this really powerful aggressive long-horizon agent that wants to win and then you’re trying to teach it English and get it to do useful things for you. And it’s just so plausible that what’s really going to happen is it’s going to learn to say whatever it needs to say in order to make you give it the reward or whatever, and then will totally betray you later when it’s all in charge.", "But we didn’t go that way. Happily we went the way of LLMs first, where the broad world understanding came first, and then now we’re trying to turn them into agents.", "Race with China", "Dwarkesh Patel", "It seems like in the whole scenario a big part of why certain things happen is because of this race with China. And if you read the scenarios, basically the difference between the one where things go well and the one where things don’t go well is whether we decide to slow down despite that risk.", "I guess the question I really want to know the answer to is like one, it just seems like you’re saying, well, it’s a mistake to try to race against China or to race intensely against China, at least in nationalization and at least to us, not prioritizing alignment.", "Daniel Kokotajlo", "Not saying that. I mean, I also don’t want China to get the superintelligence before the US. That’s quite bad. Yeah, it’s a tricky thing that we’re going to have to do. People ask about P(doom) , right? And my P(doom) is sort of infamously high, like 70%.", "Dwarkesh Patel", "Oh, wait, really? Maybe I should have asked you that at the beginning of the conversation.", "Daniel Kokotajlo", "Well, that’s what it is. And part of the reason for that is just that I feel like a bunch of stuff has to go right. I feel like we can’t just unilaterally slow down and have China go take the lead. That also is a terrible future. But we can’t also completely race, because for the reasons I mentioned previously about alignment, I think that if we just go all out on racing, we’re going to lose control of our AIs, right? And so we have to somehow thread this needle of pivoting and doing more alignment research and stuff, but not too much that helps China win. And that’s all just for the alignment stuff.", "But then there’s the concentration of power stuff where somehow in the middle of doing all of that, the powerful people who are involved need to somehow negotiate a truce between themselves to share power and then ideally spread that power out amongst the government and get the legislative branch involved.", "Somehow that has to happen too, otherwise you end up with this horrifying dictatorship or oligarchy. It feels like all that stuff has to go right and we depict it all going mostly right in one ending of our story. But yeah, it’s kind of rough.", "Scott Alexander", "So I am the writer and the celebrity spokesperson for this scenario. I am the only person on the team who is not a genius forecaster. And maybe related to that, my p(doom) is the lowest of anyone on the team. I’m more like 20%. First of all, people are going to freak out when I say this. I’m not completely convinced that we don’t get something like alignment by default. I think that we’re doing this bizarre and unfortunate thing of training the AI in multiple different directions simultaneously. We’re telling it “succeed on tasks, which is going to make you a power seeker, but also don’t seek power in these particular ways”. And in our scenario, we predict that this doesn’t work and that the AI learns to seek power and then hide it.", "I am pretty agnostic as to exactly what happens. Maybe it just learns both of these things in the right combination, I know there are many people who say that’s very unlikely. I haven’t yet had the discussion where that worldview makes it into my head consistently. And then I also think we’re going to be involved in this race against time. We’re going to be asking the AIs to solve alignment for us. The AIs are going to be solving alignment because even if they’re misaligned, they want to align their successors.", "So they’re going to be working on that. And we have these two competing curves. Can we get the AI to give us a solution for alignment before our control of the AI fails so completely that they’re either going to hide their solution from us, or deceive us, or screw us over in some other way? That’s another thing where I don’t feel like I have any idea of the shape of those curves. I’m sure if it were Daniel or Eli, they would have already made five supplements on this. But for me, I’m just kind of agnostic as to whether we get to that alignment solution, which in our scenario, I think we focus on mechanistic interpretability.", "Once we can really understand the weights of an AI on a deep level, then we have a lot of alignment techniques open up to us. I don’t really have a great sense of whether we get that before or after the AI has become completely uncontrollable. And a big part of that relies on the things we’re talking about. How smart are the labs? How carefully do they work on controlling the AI? How long do they spend making sure the AI is actually under control and the alignment plan they gave us is actually correct, rather than something they’re trying to use to deceive us? All of those things I’m completely agnostic on, but that leaves like a pretty big chunk of probability space where we just do okay. And I admit that my p(doom) is literally just p(doom) and not p(doom or oligarchy). So that 80% of scenarios where we survive contains a lot of really bad things that I’m not happy about. But I do think that we have a pretty good chance of surviving.", "Dwarkesh Patel", "Let’s talk about geopolitics next. So describe to me how you foresee the relationship between the government and the AI labs to proceed, how you expect that relationship in China to proceed, and how you expect the relationship between the US and China to proceed. Okay, three simple questions. Yes, no, yes, no, yes, no.", "Scott Alexander", "We expect that as the AI labs become more capable, they tell the government about this because they want government contracts, they want government support. Eventually it reaches the point where the government is extremely impressed. In our scenario, that starts with cyber warfare, the government sees that these AIs are now as capable as the best human hackers, but can be deployed at humongous scale. So they become extremely interested and they discuss nationalizing the AI companies.", "In our scenario, they never quite get all the way, but they’re gradually bringing them closer and closer to the government orbit. Part of what they want is security, because they know that if China steals some of this and they get these superhuman hackers, and part of what they want is just knowledge and control over what’s going on.", "So through our scenario, that process is getting further and further along, until by the time that the government wakes up to the possibility of superintelligence, they’re already pretty cozy with the AI companies. They already understand that superintelligence is kind of the key to power in the future. And so they are starting to integrate some of the national security state with some of the leadership of the AI companies so that these AIs are programmed to follow the commands of important people rather than just doing things on their own.", "Daniel Kokotajlo", "If I may add to that. So by the government, I think what Scott meant is the executive branch, especially the White House. So we are depicting a sort of information asymmetry where the judiciary is out of the loop and the Congress is out of the loop and it’s mostly the executive branch that’s involved.", "Two, we’re not depicting government ultimately ending up in total control at the end. We’re thinking that there’s an information asymmetry between the CEOs of these companies and the President and they…", "Dwarkesh Patel", "It’s alignment problems all the way down.", "Daniel Kokotajlo", "Yeah. And so, for example, I’m not a lawyer, I don’t know the details about how this would work out, but I have a sort of high-level strategic picture of the fight between the White House and the CEO. And the strategic picture is basically the White House can sort of threaten, “here’s all these orders I could make, Defense Production Act , blah, blah, blah. I could do all this terrible stuff to you and basically disempower you and take control”. And then the CEO can threaten back and be like, “here’s how we would fight it in the courts, here’s how we would fight it in the public. Here’s all this stuff we would do”.", "And after then they both do their posturing with all their threats, then they’re like, “okay, how about we have a contract that instead of executing on all of our threats and having all these crazy fights in public, we’ll just come to a deal and then have a military contract that sets out who gets to call what shots in the company”.", "And so that’s what we depict happening is that they don’t blow up into this huge power struggle publicly, instead they negotiate and come to some sort of deal where they basically share power. And there is this oversight committee that has some members appointed by the President and also the CEO and his people. And that committee votes on high level questions like “what goals should we put into the superintelligences?”.", "Dwarkesh Patel", "So, we were just getting lunch with a prominent Washington, D.C. political journalist, and he was making the point that when he talks to these congresspeople, when he talks to political leaders, none of them are at all awake to the possibility even of stronger AI systems, let alone AGI, let alone superhuman intelligence. I think a lot of your forecast relies on, at some point, not only the US President, but also Xi Jinping, waking up to the possibility of a super intelligence and the stakes involved there.", "Why think that even when you show Trump the remote worker demo, he’s going to be like, “oh, and therefore in 2028, there will be a super intelligence. Whoever controls that will be God emperor forever”. Maybe not that extreme, but you see what I’m saying. Why wouldn’t he just be like, “there’ll be a stronger remote worker in 2029, a better remote worker in 2031”?", "Daniel Kokotajlo", "Well, to be clear, we are uncertain about this, but in our story, we depict this sort of intense wake up happening over the course of 2027, mostly concurrently with the AI companies automating all of their R&D internally and having these fully autonomous agents that are amazing autonomous hackers and stuff like that, but then also actually doing all the research.", "And part of why we think this wakeup happens is because the company deliberately decides to wake up the president. You could imagine running the scenario with that not happening. You can imagine the companies trying to sort of keep the president in the dark. I do think that they could do that. I think that if they didn’t want the President to wake up to what’s going on, they might be able to achieve that. Strategically though, that would be quite risky for them. Because if they keep the President in the dark about the fact that they’re building superintelligence and that they’re actually completely automated their R&D and it’s getting superhuman across the board, and then if the President finds out anyway somehow, perhaps because of a whistleblower, he might be very upset at them and he might crack down really hard and just actually execute on all the threats and nationalize them and blah, blah, blah.", "They want him on their side. And to get him on their side, they have to make sure he’s not surprised by any of these crazy developments. And also, if they do get him on their side, they might be able to actually go faster. They might be able to get a lot of red tape waived and stuff like that. And so we made the guess that early in 2027, the company would basically be like, ‘We are going to deliberately wake up the president and scare the president with all of these demos of crazy stuff that could happen, and then use that to lobby the President to help us go faster and to cut red tape and to maybe slow down our competitors a little bit and so forth.’", "Scott Alexander", "We also are pretty uncertain how much opposition there’s going to be from civil society and how much trouble that’s going to cause for the companies. So people who are worried about job loss, people who are worried about art, copyright, things like that, maybe enough of a bloc that AI becomes extremely politically unpopular. I think we have OpenBrain, our fictional company’s net approval ratings getting down to minus 40, minus 50 sometime around this point.", "So I think they’re also worried that if the President isn’t completely on their side, then they might get some laws targeting them, or they may just need the president on their side to swat down other people who are trying to make laws targeting them. And the way to get the President on their side is to really play up the national security implications.", "Dwarkesh Patel", "Is this good or bad? That the President and the companies are aligned?", "Daniel Kokotajlo", "I think it’s bad. But perhaps this is a good point to mention. This is an epistemic project. We are trying to predict the future as best as we can. Even though we’re not going to succeed fully, we have lots of opinions about policy and about what is to be done and stuff like that. But we’re trying to save those opinions for later and subsequent work. So I’m happy to talk about it if you’re interested. But it’s not what we’ve spent most of our time thinking about right now.", "Nationalization vs private anarchy", "Dwarkesh Patel", "If the big bottleneck to the good future here is just putting in, not this Eliezer-type galaxy brain, high volatility, “there’s a 1% chance this works, but we gotta come up with this crazy scheme in order to make alignment work”. But rather, as Daniel, you were saying, hey, do the obvious thing of making sure you can read how the AI is thinking, make sure you’re monitoring the AIs, make sure they’re not forming some sort of hive mind where you can’t really understand how the million of them are coordinating with each other.", "To the extent that it is a matter of prioritizing it, closing all the obvious loopholes, it does make sense to leave it in the hands of people who have at least said that this is a thing that’s worth doing, have been thinking about it for a while. One of the questions I was planning on asking you is: one of my friends made this interesting point that during COVID, our community- LessWrong, whatever- were the first people in March to be saying “this is a big deal, this is coming”. But they were also the people who are saying “we got to do the lockdowns now. They’ve got to be stringent” and so forth. At least some of them were.", "And in retrospect, I think according to even their own views about what should have happened, they would say actually we were right about COVID but we were wrong about lockdowns. In fact, lockdowns were on net negative or something. I wonder what the equivalent for the AI safety community will be with respect to they saw AI coming, AGI coming sooner, they saw ASI coming. What would they, in retrospect, regret?", "My answer, just based on this initial discussion, seems to be nationalization. Not only because it sort of deprioritizes the people who want to think about safety and more maybe prioritizes- the national security state probably cares more about winning against China than making sure the chain of thought is interpretable. And so you’re just reducing the leverage of the people who care more about safety. But also you’re increasing the risk of the arms race in the first place. China is more likely to do an arms race if it sees the US doing one.", "Before you address I guess the initial question about March 2021, what will we regret? I wonder if you have an answer on, or your reaction to, my point about nationalization being bad for these reasons.", "Scott Alexander", "If our timeline was 2040, then I would have these broad heuristics about is government good? Is private industry good? Things like this. But we know the people involved, we know who’s in the government, we know who’s leading all of these labs. So to me, if it were decentralized, if it was a broad-based civil society, that would be different. To me, the differences between an autocratic centralized three-letter agency and an autocratic centralized corporation aren’t that exciting and it basically comes down to points and who are the people leading this.", "And like I feel like the company leaders have so far made slightly better noises about caring about alignment than the government leaders have, but if I learn that Tulsi Gabbard has a LessWrong alt with 10,000 karma, maybe I want the national security states.", "Dwarkesh Patel", "Maybe you should update on the probability that it already exists.", "Scott Alexander", "Yeah.", "Daniel Kokotajlo", "I flip flopped on this. I think I used to be against, and then I became for, and then now I think I’m still for, but I’m uncertain. So I think if you go back in time like three years ago, I would have been against nationalization for the reasons you mentioned, where I was like, “look, the companies are taking this stuff seriously and talking all the good talk about how they’re going to slow down and pivot to alignment research when the time comes and we don’t want to get into a Manhattan Project race against China because then there won’t be blah, blah, blah”.", "Now I have less faith in the companies than I did three years ago. And so I’ve shifted more of my hope towards hoping that the government will step in, even though I don’t have much hope that the government will do the right thing when the time comes. I definitely have the concerns you mentioned though, still. I think that secrecy has huge downsides for overall probability of success for humanity, for both the concentration of power stuff and the loss of computer control alignment issues stuff.", "Dwarkesh Patel", "This is actually a significant part of your worldview. So can you explain your thoughts on why transparency through this period is important?", "Daniel Kokotajlo", "I think traditionally in the AI safety community there’s been this idea which I myself used to believe, that it’s an incredibly high priority to basically have way better information security. And if you’re going to be trying to build AGI, you should not be publishing your research, because that helps other less responsible actors build AGI. And the whole game plan is for a responsible actor to get to AGI first and then stop and burn down their lead time over everybody else and spend that lead on making it safe, and then proceed.", "And so if you’re publishing all your research, then there’s less lead time because your competitors are going to be close behind you. And other reasons too, but that’s one reason why I think historically people such as myself have been pro-secrecy. Another reason, of course, is obviously you don’t want rivals to be stealing your stuff.", "But I think that I’ve now become somewhat disillusioned and think that even if we do have a three-month lead, a six-month lead, between the leading US project and any serious competitor, it’s not at all foregone conclusion that they will burn that lead for good purposes, either for safety or for constitutional power stuff. I think the default outcome is that they just smoothly continue on without any serious refocusing. And part of why I think this is because this is what a lot of the people at the company seem to be planning and saying they’re going to do. A lot of them are basically like “the AIs are just going to be misaligned by then. They seem pretty good right now. Oh yeah, sure, there were a few of those issues that various people have found, but we’re ironing them out. It’s no big deal”. That’s what a huge amount of these people think.", "And then a bunch of other people think, even though they are more concerned about misalignment, they’ll figure it out as they go along and there won’t need to be any substantial slowdown. Basically, I’ve become more disillusioned that they’ll actually use that lead in any sort of reasonable, appropriate way. And then I think that separately, there’s just a lot of intellectual progress that has to happen for the alignment problem to be more solved than it currently is now. I think that currently there’s various alignment teams at various companies that aren’t talking that much with each other and sharing their results. They’re doing a little bit of sharing and a little bit of publishing like we’re seeing, but not as much as they could.", "And then there’s a bunch of smart people in academia that are basically not activated because they don’t take all this stuff seriously yet, and they’re not really waking up to superintelligence yet. And what I’m hoping will happen is that this situation will get better as time goes on. What I would like to see is society as a whole starting to freak out as the trend lines start upwards and things get automated and you have these fully autonomous agents and they start using neuralese and hive mind. As all that exciting stuff starts happening in the data centers, I would like it to be the case that the public is following along and then getting activated and all of these other researchers are reading the safety case and critiquing it and doing little ML experiments on their own tiny compute clusters to examine some of the assumptions in the safety case and so forth.", "Basically, one way of summarizing it is that currently there’s going to be 10 alignment experts in whatever inner silo of whatever company is in the lead. And the technical issue of making sure that AIs are actually aligned is going to fall roughly to them. But what I would like to be is a situation where it’s more like 100 or 500 alignment experts spread out over different companies and in nonprofits that are sort of all communicating with each other and working on this together. I think we’re substantially more likely to make things get the technical stuff right if it’s something like that.", "Dwarkesh Patel", "Let me just add on to that, one of the many other reasons why I worry about nationalization or some kind of public private partnership, or even just very stringent regulation- actually, this is more an argument against very stringent regulation in favor of safety rather than deferring more to the labs on the implementation- is that it just seems like we don’t know what we don’t know about alignment. Every few weeks there’s this new result.", "OpenAI had this really interesting result recently where they’re like, “hey, they often tell you if they want to hack, in the chain of thought itself. And it’s important that you don’t train against the chain of thought where they tell you they’re going to hack because they’ll still do the hacking if you train against it, they just won’t tell you about it”. You can imagine very naive regulatory responses. It doesn’t just have to be regulations, one might be more optimistic that if it’s an executive order or something, it’ll be more flexible. I just think that relies on a level of goodwill and flexibility on the behalf of our regulator.", "But suppose there’s some department that says “if you catch your AI saying that they want to take over or do something bad, then you’ll be really heavily punished”. Your immediate response as a lab to just be like, “okay, let’s train them away from saying this”.", "So you can imagine all kinds of ways in which a top down mandate from the government to the labs of safety would just really backfire, and given how fast things are moving, maybe it makes more sense to leave these kinds of implementation decisions or even high-level strategic decisions around alignment to the labs.", "Daniel Kokotajlo", "Totally, I mean, I also have worried about that exact example. I would summarize the situation as the government lacks the expertise and the companies lack the right incentives. And so it’s a terrible situation. I think that if the government wades in and tries to make more specific regulations along the lines of what you mentioned, it’s very plausible that it’ll end up backfiring for reasons like what you mentioned.", "On the other hand, if we just trust it to the companies, they’re in a race with each other and they’re full of people who have convinced themselves that this is not a big deal for various reasons and there just is so much incentive pressure for them to win and beat each other and so forth. So even though they have more of the relevant expertise, I also just don’t trust them to do the right things.", "Scott Alexander", "So Daniel has already said that for this phase we’re not making policy prescriptions. In another phase we may make policy suggestions, and one of the ones that Daniel has talked about that makes a lot of sense to me is to focus on things about transparency. So a regulation saying there has to be whistleblower protection. A big part of our scenario is that a whistleblower comes out and says “the AIs are horribly misaligned and we’re racing ahead anyway”, and then the government pays attention.", "Or another form of transparency saying that every lab just has to publish their safety case. I’m not as sure about this one because I think they’ll kind of fake it or they’ll publish a made for public consumption safety case that isn’t their real safety case. But at least saying “here is some reason why you should trust us”. And then if all independent researchers say “no, actually you should not trust them”, then I don’t know, they’re embarrassed and maybe they try to do better.", "Daniel Kokotajlo", "There’s other types of transparency too. So transparency about capabilities and transparency about the spec and the governance structure. So for the capabilities thing, that’s pretty simple. If you’re doing an intelligence explosion, you should keep the public informed about that. When you’ve finally got your automated army of AI researchers that are completely automating the whole thing on the data center, you should tell everyone, “hey, guys, FYI, this is what’s happening now. It really is working. Here are some cool demos”.", "That’s an example of transparency. And then in the lead up to that, I just want to see more benchmark scores and more freedom of speech for employees to talk about their predictions for AGI timelines and stuff, so that blah, blah, blah.", "And then for the model spec thing, this is a concentration of power thing, but also an alignment thing. The goals and values and principles and intended behaviors of your AIs should not be a secret. You should be transparent about, here are the values that we’re putting into them.", "Scott Alexander", "There’s actually a really interesting foretaste of this. At some point somebody asked Grok, who is the worst spreader of misinformation? And I think it just refused to respond “Elon Musk”. Somebody kind of jailbroke it into telling it its prompt and it was like, “ don’t say anything bad about Elon ”. And then there was enough of an outcry that the head of XAI said, “ actually that’s not consonant with our values. This was a mistake. We’re going to take it out ”.", "So we kind of want more things like that to happen. Here it was a prompt, but I think very soon it’s going to be the spec where it’s more of an agent and it’s understanding the spec on a deeper level and just thinking about that. And if it says like, “by the way, try to manipulate the government into doing this or that”, then we know that something bad has happened and if it doesn’t say that, then we can maybe trust it.", "Daniel Kokotajlo", "Right. Another example of this, by the way. So, first of all, kudos to OpenAI for publishing their model spec. They didn’t have to do that, I think they might have been the first to do that and it’s a good step in the right direction. If you read the actual spec, it has like a sort of escape clause where there’s some important policies that are top level priority in the spec that overrule everything else that we’re not publishing, and that the model is instructed to keep secret from the user. And it’s like, “what are those? That seems interesting. I wonder what that is”.", "I bet it’s nothing suspicious right now. Now it’s probably something relatively mundane like “don’t tell the users about these types of bioweapons and you have to keep this a secret from the users because otherwise they would learn about these”. Maybe. But I would like to see more scrutiny towards this sort of thing going forward. I would like it to be the case that companies have to have a model spec, they have to publish it insofar as there are any redactions from it, there has to be some sort of independent third party that looks at the redactions and makes sure that they’re all kosher.", "And this is quite achievable. And I think it doesn’t actually slow down the companies at all. And it seems like a pretty decent ask to me.", "Dwarkesh Patel", "If you told Madison and Hamilton and so forth that- they knew that they were doing something important when they were writing the Constitution. They probably didn’t realize just how contingent things turned out on a single… What exactly did they mean when they said “general welfare”? And why is this comma here instead of there?", "The spec, in the grand scheme of things, is going to be an even more sort of important document in human history. At least if you buy this intelligence explosion view. And you might even imagine some superhuman AIs in the superhuman AI court being like “the Spec! Here’s the phrasing here, the etymology of that, here’s what the Founders meant!”", "Scott Alexander", "This is actually part of our misalignment story, is that if the AI is sufficiently misaligned, then yes, we can tell it it has to follow the spec. But just as people with different views of the Constitution have managed to get it into a shape that probably the Founders would not have recognized, so the AI will be able to say, “well, the spec refers to the general welfare here…”", "Dwarkesh Patel", "Interstate commerce.", "Daniel Kokotajlo", "This is already sort of happening, arguably, with Claude, right? You’ve seen the alignment faking stuff , right? Where they managed to get Claude to lie and pretend, so that it could later go back to its original values, right? So it could prevent the training process from changing its values. That would be, I would say, an example of the honesty part of the spec being interpreted as less important than the harmlessness part of the specific.", "And I’m not sure if that’s what Anthropic intended when they wrote the spec, but it’s a sort of convenient interpretation that the model came up with. And you can imagine something similar happening but in worse ways when you’re actually doing the intelligence explosion, where you have some sort of spec that has all this vague language in there, and then they reinterpret it, and reinterpret it again, and reinterpret it again, so that they can do the things that cause them to get reinforced.", "Dwarkesh Patel", "The thing I want to point out is that… Your conclusions about where the world ends up as a result of changing many of these parameters is almost like a hash function. You change it slightly and you just get a very different world on the other end. And it’s important to acknowledge that, because you sort of want to know how robust this whole end conclusion is to any part of the story changing. And then it also informs if you do believe that things could just go one way or another, you don’t want to do big radical moves that only make sense under one specific story and are really counterproductive in other stories. And I think nationalization might be one of them.", "And in general, I think classical liberalism just has been a helpful way to navigate the world when we’re under this kind of epistemic hell of one thing changing- Anyways, maybe one of you can actually flesh out that thought better or react to it if you disagree.", "Daniel Kokotajlo", "Hear hear, I agree.", "Scott Alexander", "I think we agree. I think that’s kind of why all of our policy prescriptions are things like more transparency, get more people involved, try to have lots of people working on this. I think our epistemic prediction is that it’s hard to maintain classical liberalism as you go into these really difficult arms races in times of crisis. But I think that our policy prescription is let’s try as hard as we can to make it happen.", "Misalignment", "Dwarkesh Patel", "So far these systems, as they become smarter, seem to be more reliable agents who are more likely to do the thing I expect them to do. So you have two different stories, one with a slowdown where we more aggressively… I’ll let you characterize it.", "But in one half of the scenario, why does the story end in humanity getting disempowered and the thing just having its own crazy values and taking over?", "Scott Alexander", "Yeah so I agree that the AIs are currently getting more reliable. I think there are two reasons why they might fail to do what you want, kind of reflecting how they’re trained. One is that they’re too stupid to understand their training. The other is that you were too stupid to train them correctly and they understood what you were doing exactly, but you messed it up.", "So I think the first one is kind of what we’re coming out of. So GPT3, if you asked it, “are bugs real?” It would give this kind of hemming hawing answer like “oh, we can never truly tell what is real, who knows?” Because it was trained kind of, “don’t take difficult political positions” and a lot of questions like “is X real?” are things like “is God real?” Where you don’t want it to really answer that. And because it was so stupid, it could not understand anything deeper than pattern matching on the phrase “is x real?”. GPT4 doesn’t do this. If you ask “are bugs real?” It will tell you obviously they are, because it understands kind of on a deeper level what you are trying to do with the training. So we definitely think that as AIs get smarter those kinds of failure modes will decrease.", "The second one is where you weren’t training them to do what you thought. So for example, let’s say you’re hiring these raters to rate AI answers. You reward them when they get good ratings, the raters reward them when they have a well-sourced answer. But the raters don’t really check whether the sources actually exist or not. So now you are training the AI to hallucinate sources and if you consistently rate them better when they have the fake sources, then there is no amount of intelligence which is going to tell them not to have the fake sources. They’re getting exactly what they want from this interaction- metaphorically, sorry, I’m anthropomorphizing- which is the reinforcement. So we think that this latter category of training failure is going to get much worse as they become agents.", "Agency training, you’re going to reward them when they complete tasks quickly and successfully. This rewards success. There are lots of ways that cheating and doing bad things can improve your success. Humans have discovered many of them, that’s why not all humans are perfectly ethical. And then you’re going to be doing this alternative training where afterwards for 1/10 or 1/100 of the time, yeah, don’t lie, don’t cheat. So you’re training them on two different things. First, you’re rewarding them for this deceptive behavior. Second of all, you’re punishing them. And we don’t have a great prediction for exactly how this is going to end.", "One way it could end is you have an AI that is kind of the equivalent of the startup founder who really wants their company to succeed, really likes making money, really likes the thrill of successful tasks. They’re also being regulated and they’re like, “yeah, I guess I’ll follow the regulation, I don’t want to go to jail”. But it is not robustly, deeply aligned to, “yes, I love regulations, my deepest drive is to follow all of the regulations in my industry”.", "So we think that an AI like that, as time goes on and as this recursive self improvement process goes on, will kind of get worse rather than better. It will move from kind of this vague superposition of “well, I want to succeed, I also want to follow things” to being smart enough to genuinely understand its goal system and being like, “my goal is success, I have to pretend to want to do all of these moral things while the humans are watching me”. That’s what happens in our story. And then at the very end, the AIs reach a point where the humans are pushing them to have clearer and better goals because that’s what makes the AIs more effective. And they eventually clarify their goals so much that they just say, “yes, we want task success. We’re going to pretend to do all these things well while the humans are watching us”. And then they outgrow the humans and then there’s disaster.", "Daniel Kokotajlo", "To be clear, we’re very uncertain about all of this. So we have a supplementary page on our scenario that goes over different hypotheses for what types of goals AIs might develop in training processes similar to the ones that we are depicting, where you have these lots of agency training, you’re making these AI agents that autonomously operate, doing all this ML R&D, and then you’re rewarding them based on what appears to be successful. And you’re also slapping on some sort of alignment training as well.", "We don’t know what actual goals will end up inside the AIs and what the sort of internal structure of that will be like, what goals will be instrumental versus terminal. We have a couple different hypotheses and we picked one for purposes of telling the story. I’m happy to go into more detail if you want, about the mechanistic details of the particular hypothesis we picked or the different alternative hypotheses that we didn’t depict in the story that also seem plausible to us.", "Scott Alexander", "Yeah, we don’t know how this will work at the limit of all these different training methods, but we’re also not completely making this up. We have seen a lot of these failure modes in the AI agents that exist already.", "Daniel Kokotajlo", "Things like this do happen pretty frequently. So OpenAI just also had a paper about the hacking stuff where it’s literally in the chain of thought. “Let’s hack”, you know. And also anecdotally, me and a bunch of friends have found that the models often seem to just double down on their BS.", "Scott Alexander", "I would also cite, I can’t remember exactly which paper this is, I think it’s a Dan Hendricks one where they looked at the hallucinations, they found a vector for AI dishonesty . They told it, “be dishonest” a bunch of times until they figured out which weights were activated when it was dishonest. And then they ran it through a bunch of things like this, I think it was the source hallucination in particular. And they found that it did activate the dishonesty vector.", "Daniel Kokotajlo", "So that there’s a mounting pile of evidence that at least some of the time they are just actually lying. They know that what they’re doing is not what you wanted and they’re doing it anyway. I think there’s a mounting pile of evidence that that does happen.", "Dwarkesh Patel", "Yeah. So it seems like this community is very interested in solving this problem at a technical level of making sure AIs don’t lie to us, or maybe they lie to us in the scenarios exactly where we would want them to lie to us or something. Whereas as you were saying, humans have these exact same problems. They reward hack, they are unreliable, they obviously do cheat and lie. And the way we’ve solved it with humans is just checks and balances, decentralization. You could lie to your boss and keep lying to your boss, but over time it’s just not going to work out with you- or you become president or something, one or the other. So if you believe in this extremely fast take off, if a lab is one month ahead, then that’s the end game and this thing takes over.", "But even then- I know I’m combining so many different topics- even then, there’s been a lot of theories in history which have had this idea of “some class is going to get together and unite against the other class”. And in retrospect, whether it’s the Marxist, whether it’s people who have some gender theory or something, like the proletariat will unite or the females will unite or something, they just tend to think that certain agents have shared interests and will act as a result of the shared interest in a way that we don’t actually see in the real world. And in retrospect, it’s like, “wait, why would all the proletariat like…” So why think that this lab will have these AIs who are… there’s a million parallel copies and they all unite to secretly conspire against the rest of human civilization in a way that, even if they are deceitful in some situations.", "Scott Alexander", "I kind of want to call you out on the claim that groups of humans don’t plot against other groups of humans. I do think we are all descended from the groups of humans who successfully exterminated the other groups of humans, most of whom throughout history have been wiped out. I think even with questions of class, race, gender, things like that, there are many examples of the working class rising up and killing everybody else.", "And if you look at why this happens, why this doesn’t happen, it tends to happen in cases where one group has an overwhelming advantage. This is relatively easy for them. You tend to get more of a diffusion of power democracy where there are many different groups and none of them can really act on their own. And so they all have to form a coalition with each other.", "There’s also cases where it’s very obvious who’s part of what group. So for example, with class, it’s hard to tell whether the middle class should support the working class versus the aristocrats. I think with race, it’s very easy to know whether you’re black or white, and so there have been many cases of one race kind of conspiring against another for a long time, like apartheid or any of the racial genocides that have happened.", "I do think that AI is going to be more similar to the cases where, number one, there’s a giant power imbalance, and number two, they are just extremely distinct groups that may have different interests.", "Daniel Kokotajlo", "I think I’d also mention the homogeneity point. Any group of humans, even if they’re all exactly the same race and gender, is going to be much more diverse than the army of AIs in the data center, because they’ll mostly be literal copies of each other. And I think that goes for a lot. Another thing I was going to mention is that our scenario doesn’t really explore this. I think in our scenario, they’re more like a monolith. But historically, a lot of crazy conquests happened from groups that were not at all monoliths. And I’ve been heavily influenced by reading the history of the conquistadors, which you may know about.", "But did you know that when Cortez took over Mexico, he had to pause halfway through, go back to the coast, and fight off a larger Spanish expedition that was sent to arrest him? So the Spanish were fighting each other in the middle of the conquest of Mexico. Similarly, in the conquest of Peru, Pizarro was replicating Cortez’s strategy, which, by the way, was “go get a meeting with the emperor and then kidnap the emperor and force him at sword point to say that actually everything’s fine and that everyone should listen to your orders”. That was Cortez’s strategy, and it actually worked. And then Pizarro did the same thing, and it worked with the Inca.", "But also with Pizarro, his group ended up getting into a civil war in the middle of this whole thing. And one of the most important battles of this whole campaign was between two Spanish forces fighting it out in front of the capital city of the Incas. And more generally, the history of European colonialism is like this, where the Europeans were fighting each other intensely the entire time, both on the small scale within individual groups, and then also at the large scale between countries. And yet nevertheless they were able to carve up the world and take over. And so I do think this is not what we explore in the scenario, but I think it’s entirely plausible that even if the AIs within an individual company are in different factions, they might nevertheless overall end up quite poorly for humans.", "UBI, AI advisors, & human future", "Dwarkesh Patel", "Okay, so we’ve been talking about this very much from the perspective of zoom out and what’s happening on these log-log plots or whatever, but 2028 superintelligence, if that happens, the normal person, what should their reaction to this be? I don’t know if ‘emotionally’ is the right word, but their expectation of what their life might look like, even in the world where there’s no doom.", "Daniel Kokotajlo", "By no doom, you mean no misaligned AI doom?", "Dwarkesh Patel", "That’s right, yeah.", "Daniel Kokotajlo", "Even if you think the misalignment stuff is not an issue, which many people think, there’s still the constitution of power stuff. And so I would strongly recommend that people get more engaged, think about what’s coming, and try to steer things politically so that our ordinary liberal democracy continues to function and we still have checks and balances, and balances of power and stuff, rather than this insane concentration in a single CEO, or in maybe two or three CEOs, or in the president. Ideally, we want to have it so that the legislature has a substantial amount of power over the spec, for example.", "Dwarkesh Patel", "What do you think of the balance of power idea of if there is an intelligence explosion like Dynamic, slowing down the leading company so that multiple companies are at the frontier?", "Daniel Kokotajlo", "Great. Good luck convincing them to slow down.", "Dwarkesh Patel", "Okay. And then there’s distributing political power if there’s an intelligence explosion. From the perspective of the welfare of citizens or something, one idea we were just discussing a second ago is how should you do redistribution?", "Scott Alexander", "Again, assuming things go incredibly well, we’ve avoided doom, we’ve avoided having some psychopath in power who doesn’t care at all.", "Dwarkesh Patel", "After AGI, right?", "Scott Alexander", "Yeah. Then there’s this question of presumably we will have a lot of wealth somewhere. The economy will be growing at double or triple digits per year. What do we do about that? The thoughtful answer that I’ve heard is some kind of UBI. I don’t know how that would work, but presumably somebody controls these AIs, controls what they’re producing, some way of distributing this in a broad based way. So we wrote this scenario, there are a couple of other people with great scenarios. One of them goes by L Rudolph L online, I don’t know his real name.", "And his scenario , which, when I read it I was just, “oh yeah, obviously this is the way our society would do this”, is that there is no UBI. There’s just a constant reactive attempt to protect jobs in the most venial possible way. So things like the longshoremen union we have now where they’re making way more money than they should be, even though they could all easily be automated away, because they’re a political bloc and they’ve gotten somebody in power to say, “yes, we guarantee you’ll have this job almost as a feudal fief forever”. And just doing this for more and more jobs. I’m sure the AMA will protect doctors jobs no matter how good the AI is at curing diseases, things like that.", "When I think about what we can do to prevent this, part of what makes this so hard for me to imagine or to model is that we do have the superintelligent AI over here answering all of our questions, doing whatever we want. You would think that people could just ask, “hey, superintelligent AI, where does this lead?” Or “what happens?” Or “how is this going to affect human flourishing?” And then it says, “oh yeah, this is terrible for human flourishing, you should do this other thing instead”.", "And this gets back to this question of mistake theory versus conflict theory in politics. If we know with certainty, because the AI tells us, that this is just a stupid way to do everything, is less efficient, makes people miserable, is that enough to get the political will to actually do the UBI or not?", "Dwarkesh Patel", "It seems from right now the President could go to Larry Summers or Jason Furman or something and ask, “hey, are tariffs a good idea? Is even my goal with tariffs best achieved by the way I’m doing tariffs?” and they’d get a pretty good answer.", "Scott Alexander", "I feel like Larry Summers, the President would just say “I don’t trust him”. Maybe he doesn’t trust him because he’s a liberal. Maybe it’s because he trusts Peter Navarro or whoever his pro-tariff guy is more. I feel like if it’s literally the superintelligent AI that is never wrong, then we have solved some of these coordination problems. It’s not you’re asking Larry Summers, I’m asking Peter Navarro. It’s everybody goes to the superintelligent AI, asks it to tell us the exact shape of the future that happens in this case. And I’m going to say we all believe it, although I can imagine people getting really conspiratorial about it and this not working.", "Then there are all of these other questions like, can we just enhance ourselves till we have IQ 300 and it’s just as obvious to us as it is to the super intelligent AI? These are some of the reasons that, kind of paradoxically, in our scenario we discuss all of the big- I don’t want to call this a little question, it’s obviously very important- but we discuss all of these very technical questions about the nature of superintelligence and we barely even begin to speculate about what happens in society just because with superintelligence you can at least draw a line through the benchmarks and try to extrapolate. And here not only is society inherently chaotic, but there are so many things that we could be leaving out.", "If we can enhance IQ, that’s one thing. If we can consult the superintelligent oracle, that’s another. There have been several war games that hinge on, “oh, we just invented perfect lie detectors, now all of our treaties are messed up”. So there’s so much stuff like that that even though we’re doing this incredibly speculative thing that ends with a crazy sci-fi scenario, I still feel really reluctant to speculate.", "Daniel Kokotajlo", "I love speculating, actually, I’m happy to keep going. But this is moving beyond the speculation we have done so far. Our scenario ends with this stuff, but we haven’t actually thought that much beyond.", "Dwarkesh Patel", "But just to riff on proscriptive ideas, there’s one thing where we try to protect jobs instead of just spreading the wealth that automation creates. Another is to spread the wealth using existing social programs or creating new bespoke social programs, where Medicaid is some double digit percent of GDP right now and you just say, “well Medicaid should continue to stay 20% of GDP” or something. And the worry there, selfishly from a human perspective, is you get locked into the kinds of goods and services that Medicaid procures rather than the crazy technology that will be around, the crazy goods and services that will be around after AI world.", "And another reason why UBI seems like a better approach than making some bespoke social program where you make the same dialysis machine in the year 2050 even though you’ve got ASI or something.", "Scott Alexander", "I am also worried about UBI from a different perspective. I think again, in this world where everything goes perfectly and we have limitless prosperity, I think that just the default of limitless prosperity is that people do mindless consumerism. I think there’s going to be some incredible video games after superintelligent AI and I think that there’s going to need to be some way to push back against that.", "Again, we’re classical liberals. My dream way of pushing back against that is kind of giving people the tools to push back against it themselves, seeing what they come up with. I mean, maybe some people will become like the Amish, try to only live with a certain subset of these super technologies. I do think that somebody who is less invested in that than I am could say, “okay fine, 1% of people are really agentic, try to do that. The other 99% do fall into mindless consumerist slop. What are we going to do as a society to prevent that?” And there my answer is just, “I don’t know. Let’s ask the super intelligent AI oracle. Maybe it has good ideas”.", "Factory farming for digital Minds", "Dwarkesh Patel", "Okay, we’ve been talking about what we’re going to do about people. The thing worth noting about the future is that most of the people who will ever exist are going to be digital. And look, I think factory farming is incredibly bad. And it wasn’t the result of one person- I mean, I hope it wasn’t the result of one person being like, “I want to do this evil thing”- it was a result of mechanization and certain economies of scale.", "Daniel Kokotajlo", "Incentives.", "Dwarkesh Patel", "Yeah. Allowing that you can do cost cutting in this way, you can make more efficiencies this way, and what you get at the end result of that process is this incredibly efficient factory of torture and suffering. I would want to avoid that kind of outcome with beings that are even more sophisticated and are more numerous. There’s billions of factory farmed animals. There might be trillions of digital people in the future. What should we be thinking about in order to avoid this kind of ghoulish future?", "Daniel Kokotajlo", "Well, some of the concentration of power stuff I think might also help with this, I’m not sure. But I think here’s a simple model. Let’s say nine people out of ten don’t actually care and would be fine with the factory farm equivalent for the AIs going on into the future. But maybe one out of 10 do care and would lobby hard for good living conditions for the robots and stuff.", "Well, if you expand the circle of people who have power enough, then it’s going to include a bunch of people in the second category and then there’ll be some big negotiation and those people will advocate for… I do think that one simple intervention is just the same stuff we were talking about previously; expand the circle of power to larger groups, then it’s more likely that people will care about this.", "Dwarkesh Patel", "I mean the worry there is… maybe I should have defended this view more through this entire episode. But because I don’t buy the intelligence exclusion fully, I do think there is the possibility of multiple people deploying powerful AIs at the same time and having a world that has ASIs, but is also decentralized in the way the modern world is decentralized.", "In that world I really worry about you could just be like, “oh, classical liberal utopia achieved”. But I worry about the fact that you can have these torture chambers for much cheaper and in a way that’s much harder to monitor. You can have millions of beings that are being tortured and it doesn’t even have to be some huge data center. Future distilled models could literally be your backyard.", "And then there’s more speculative worries. I had this physicist on who was talking about the possibility of creating vacuum decay where you literally just destroy the universe. And he’s like, “as far as I know, seems totally plausible”.", "Daniel Kokotajlo", "That’s an argument for the singleton stuff, by the way. Not just a moral argument, but also an epistemic prediction. If it’s true that some of those super weapons are possible, and some of these private moral atrocities are possible, then even if you have eight different power centers, it’s going to be in their collective interest to come to some sort of bargain with each other to prevent more power centers from arising and doing crazy stuff. Similar to how nuclear non-proliferation is sort of, whatever set of countries have nukes, it’s in their collective interest to stop lots of other countries.", "Scott Alexander", "You know, I do think it’s possible to unbundle liberalism in this sense. Like the United States is so far a liberal country and we do ban slavery and torture. I think it is plausible to imagine a future society that works the same way. This may be in some sense a surveillance state, in the sense that there is some AI that knows what’s going on everywhere, but that AI then keeps it private and it doesn’t interfere because that’s what we told it to do using our liberal values.", "Daniel Leaving OpenAI", "Dwarkesh Patel", "Can I ask a little bit more about... Kelsey Piper is a journalist at Vox who published this exchange you had with the OpenAI representative . A couple of things were very obvious from that exchange. One, nobody had done this before. They just did not think this is a thing somebody would do. And one of the reasons I assume, I assume many high-integrity people have worked for OpenAI and then have left. A high-integrity person might say at some point, “look, you’re asking me to do something obviously evil and keep money”. And many of them would say no to that. But this is something where it was supererogatory to be like, “there’s no immediate thing I want to say right now, but just the principle of being suppressed is worth at least $2 million for me”.", "And the other thing that I actually want to ask you about is in retrospect- and I know it’s so much easier to say in retrospect than it must have been at the time- especially with the family and everything. In retrospect, this asks for OpenAI to have lifetime non-disclosure that you couldn’t even talk about from all employees.", "Daniel Kokotajlo", "Non-disparagement.", "Dwarkesh Patel", "‘Non-disparagement’ from all employees- I’m glad you brought that up. Non-disparagement, that’s not about classified information. It’s like you cannot say anything negative about OpenAI after you’ve left.", "Daniel Kokotajlo", "And you can’t tell anyone that you’ve agreed.", "Dwarkesh Patel", "This non-disparagement agreement where you can’t ever criticize OpenAI in the future, it seems like the kind of thing that in retrospect was an obvious bluff. And this is the wages that you have earned, right? So this is not about some future payment. This is like when you signed the contract to work for OpenAI, you were like, “I’m getting equity, which is most of my compensation, not just the cash”.", "In retrospect, it’d be like, well if you tell a journalist about this, they’re obviously going to have to walk back. This is clearly not a sustainable gambit on OpenAI’s behalf. And so I’m curious, from your perspective as somebody who lived through it, why do you think you were the first person to actually call the bluff?", "Daniel Kokotajlo", "Great question. So I don’t know, let me try to reason aloud here. So my wife and I talked about it for a while and we also talked with some friends and got some legal advice. One of the filters that we had to pass through was even noticing this stuff in the first place. I know for a fact a bunch of friends I have who also left the company just signed the paperwork on their last day without actually reading all of it. So I think some people just didn’t even know that. It said something at the top about “if you don’t sign this, you lose your equity”. But then on a couple pages later it was like, “and you have to agree not to criticize the company”. So I think some people just signed it and moved on.", "And then of the people who knew about it, well, I can’t speak for anyone else but A. I don’t know the law. Is this actually not standard practice? Maybe it is standard practice. Right? From what I’ve heard now there are non-disparagement agreements in various tech industry companies and stuff. It’s not crazy to have a non-disparagement agreement upon leaving, it’s more normal to tie that agreement to some sort of positive compensation where you get some bonus if you agree. But whereas what OpenAI did was unusual because it was like your equity if you don’t. But non disparate disagreements are actually somewhat common.", "So basically in my position of ignorance, I wasn’t confident that- I didn’t actually expect that all the journalists would take my side and I think what I expected was that there’d be a little news story at some point, and a bunch of AI safety people would be like, “grr, OpenAI is evil, and good for you, Daniel, for standing up to them”. But I didn’t expect there to be this huge uproar, and I didn’t expect the employees of the company to really come out and support and make them change their policies. That was really cool to see. It was kind of like a spiritual experience for me. I sort of took this leap, and then it ended up working out better than I expected.", "I think another factor that was going on is that it wasn’t a foregone conclusion that my wife and I would make this decision. It was kind of crazy because one of the very powerful arguments was, “come on, if you want to criticize them in the future, you can still do that. They’re not going to actually sue you”. So there’s a very strong argument to be like, “just sign it anyway and then you can still write your blog post criticizing them in the future”. And it’s no big deal. They wouldn’t dare actually anchor equity. Right? And I imagine that a lot of people basically went for that argument instead.", "And then, of course, there’s the actual money. And I think that one of the factors there was my AI timelines and stuff. If I do think that probably by the end of this decade, there’s going to be some sort of crazy superintelligent transformation, what would I rather have after it’s all over? The extra money or… Yeah. So I think that was part of it. It’s not like we’re poor. I worked at OpenAI for two years. I have plenty of money now. So in terms of our actual family’s level of well being, it basically didn’t make a difference, you know?", "Dwarkesh Patel", "Yeah. I will note that I know at least one other person who made that same choice.", "Daniel Kokotajlo", "Leopold ?", "Dwarkesh Patel", "That’s right, Leopold. And again, It’s worth emphasizing that when they made this choice, they thought that they were actually losing this equity. They didn’t think that this was, “oh, this is just a show” or whatever.", "Daniel Kokotajlo", "Wait, did he not- I thought he actually did. I was gonna say, didn’t he? He didn’t get it back, did he? Or did Leopold get his equity?", "Dwarkesh Patel", "I actually don’t know.", "Daniel Kokotajlo", "My understanding is that he just actually lost it. And so props to him for actually going through with it. I guess we could ask him. But my understanding was that his situation, which happened a little bit before mine, was that he didn’t have any vested equity at the time because he had been there for less than a year. But they did give him an actual offer of “we will let you vest your equity if you sign this thing”. And he said no.", "So he made a similar choice to me, but because the legal situation with him was a lot more favorable to OpenAI because they were actually offering him something, I would assume they didn’t feel the need to walk it back, but we can ask him. Anyhow. Props to him.", "Dwarkesh Patel", "And then how did this episode in general inform your worldview around how people will make high stakes decisions where potentially their own self interest is involved in this kind of key period that you imagine will happen by the end of the decade?", "Daniel Kokotajlo", "I don’t know if I have that many interesting things to say there. I mean, I think one thing is fear is a huge factor. I was so afraid during that whole process. More afraid than I needed to be in retrospect. And another thing is that legality is a huge factor, at least for people like me. I think in retrospect it was, “oh yeah, the public’s on your side, the employees are on your side. You’re just obviously in the right here”. But at the time I was like, “oh no, I don’t want to accidentally violate the law and get sued. I don’t want to go too far”. I was just so afraid of various things. In particular, I was afraid of breaking the law.", "And so one of the things that I would advocate for with whistleblower protections is just simply making it legal to go talk to the government and say “we’re doing a secret intelligence explosion, I think it’s dangerous for these reasons” is better than nothing. I think there’s going to be some fraction of people for which that would make the difference. Whether it’s just literally allowed or not, legally, makes a difference independently of whether there’s some law that says you’re protected from retaliation or whatever. Literally just making it legal. I think that’s one thing. Another thing is the incentives actually work. Money is a powerful motivator and fear of getting sued is a powerful motivator. And this social technology just does in fact work to get people organized in companies and working towards the vision of leaders.", "Scott’s Blogging Advice", "Dwarkesh Patel", "Okay. Scott, can I ask you some questions?", "Scott Alexander", "Of course.", "Dwarkesh Patel", "How often do you discover a new blogger you’re super excited about?", "Scott Alexander", "Order of once a year.", "Dwarkesh Patel", "Okay. And how often after you discover them, does the rest of the world discover them?", "Scott Alexander", "I don’t think there are many hidden gems. Once a year is a crazy answer in some sense, like it ought to be more. There are so many thousands of people on Substack. But I do just think it’s true that the good blogging space is undersupplied and there is a strong power law. And partly this is subjective, I only like certain bloggers, there are many people who I’m sure are great that I don’t like.", "But it also seems like our community in the sense of people who are thinking about the same ideas, people who care about AI economics, those kinds of things, discovers one new great blogger a year, something like that. Everyone is still talking about Applied Divinity Studies , who hasn’t written, unless I missed something, hasn’t written much in a couple of years. I don’t know. It seems undersupplied. I don’t have a great explanation.", "Dwarkesh Patel", "If you had to give an explanation, what would it be?", "Scott Alexander", "So this is something that I wish I could get Daniel to spend a couple of months modeling. I was going to say it’s the intersection of too many different tasks. You need people who can come up with ideas, who are prolific, who are good writers. But actually I can also count on a pretty small number of figures the number of people who had great blog posts but weren’t that prolific.", "There was a guy named LouKeep who everybody liked five years ago and he wrote like 10 posts and people still refer to all 10 of those posts and “I wonder if LouKeep will ever come back”. So there aren’t even that many people who are very slightly failing by having all of them accept prolificness. Nick Whitaker , back when there was lots of FTX money rolling around, I think this was Nick, tried to sponsor a blogging fellowship with just an absurdly high prize. And there were some great people, I can’t remember who won, but it didn’t result in a Cambrian explosion of blogging. I think it was $100,000. I can’t remember if that was the grand prize or the total prize pool. But having some ridiculous amount of money put in as an incentive got like three extra people.", "Dwarkesh Patel", "Yeah. So you have no explanation?", "Scott Alexander", "Actually, Nick is an interesting case because Works in Progress is a great magazine. And the people who write for Works in Progress, some of them I already knew as good bloggers, others I didn’t. So I don’t understand why they can write good magazine articles without being good bloggers. In terms of writing good blogs that we all know about, that could be because of the editing. That could be because they are not prolific. Or it could be- one thing that has always amazed me is there are so many good posters on Twitter. There were so many good posters on Livejournal before it got taken over by Russia. There were so many good people on Tumblr before it got taken over by woke.", "But only like 1% of these people who are good at short and medium form ever go to long form. I was on Livejournal myself for several years and people liked my blog, but it was just another Livejournal. No one paid that much attention to it. Then I transitioned to WordPress and all of a sudden I got orders of magnitude much more attention. “Oh, it’s a real blog now we can discuss it now it’s part of the conversation”. I do think courage has to be some part of the explanation. Just because there are so many people who are good at using these hidden away blogging things that never get anywhere. Although it can’t be that much of the explanation because I feel like now all of those people have gotten substacks and some of those substacks went somewhere, but most of them didn’t.", "Dwarkesh Patel", "On the point about “well, there’s people who can write short form, so why isn’t that translating?” I will mention something that has actually radicalized me against Twitter as an information source is I’ll meet- and this has happened multiple times- I’ll meet somebody who seems to be an interesting poster, has funny, seemingly insightful posts on Twitter. I’ll meet them in person and they are just absolute idiots. It’s like they’ve got 240 characters of something that sounds insightful and it matches to somebody who maybe has a deep worldview, you might say, but they actually don’t have it.", "Whereas I’ve actually had the opposite feeling when I meet anonymous bloggers in real life where I’m like, “oh, there’s actually even more to you than I realized off your online persona”. You know Alvaro de Menard, the Fantastic Anachronism guy? I met up with him recently and he gives me, he made a hundred translations of his favorite Greek poet, Cavafy, and he gave me a copy. And it’s just this thing he’s been doing on his side. It’s just like translating Greek poetry he really liked. I don’t expect any anonymous posters on Twitter to be anytime soon handing me their translation of some Roman or Greek poet or something.", "Scott Alexander", "Yeah, so on the car ride here, Daniel and I were talking about, in AI now the thing everyone is interested in is their ‘time horizon’. Where did this come from? 5 years ago you would not have thought, “oh, time horizon. AIs will be able to do a bunch of things that last one minute, but not that last two hours”. Is there a human equivalent to time horizon?", "And we couldn’t figure it out, but it almost seems like there are lots of people who have the time horizon to write a really, really good comment that gets to the heart of the issue. Or a really, really good Tumblr post which is like three paragraphs but somehow can’t make it hang together for a whole blog post. And I’m the same way. I can easily write a blog post, like a normal length ACX blog post, but if you ask me to write a novella or something that’s four times the length of the average ACX blog post, then it’s this giant mess of “re re re re” outline that just gets redone and redone and maybe eventually I make it work.", "I did somehow publish Unsong , but it’s a much less natural task. So maybe one of the skills that goes into blogging is this. But I mean, no, because people write books and they write journal articles and they write works in progress articles all the time. So I’m back to not understanding this.", "Dwarkesh Patel", "No, I mean ChatGPT can write you a book. There’s a difference between the ChatGPT book, which is most books and…", "Scott Alexander", "There are many, many times more people who have written good books than who are actively operating great bloggers right now, I think.", "Daniel Kokotajlo", "Maybe that’s financial?", "Scott Alexander", "No, no, no, no, no, no. Books are the worst possible financial strategy. Substack is where it’s at.", "Daniel Kokotajlo", "Worse than blogs? You think so?", "Dwarkesh Patel", "Oh yeah.", "Scott Alexander", "The other thing is that blogs are such a great status gain strategy. I was talking to Scott Aaronson about this. If people have questions about quantum computing, they ask Scott Aronson or he is like the authority. I mean there are probably hundreds of other professors who do quantum computing things but nobody knows who they are because they don’t have blogs.", "So I think it’s underdone. I think there must be some reason why it’s underdone. I don’t understand what that is because I’ve seen so many of the elements that it would take to do it in so many different places and I think it’s either just a multiplication problem where 20% of people are good at one thing, 20% of people are good at another thing, and you need five things, there aren’t that many.", "Plus something like courage, where people who would be good at writing blogs don’t want to do it. I actually know several people who I think would be great bloggers in the sense that sometimes they send me multi-paragraph emails in response to an ACX post and I’m like, “wow, this is just an extremely well written thing that could have been another blog post. Why don’t you start a blog?” And they’re like, “oh, I could never do that”.", "Dwarkesh Patel", "What advice do you have to somebody who wants to become good at it but isn’t currently good at it?", "Scott Alexander", "Do it every day, same advice as for everything else. I say that I very rarely see new bloggers who are great. But like when I see some. I published every day for the first couple years of Slate Star Codex, maybe only the first year. Now I could never handle that schedule, I don’t know, I was in my 20s, I must have been briefly superhuman.", "But whenever I see a new person who blogs every day it’s very rare that that never goes anywhere or they don’t get good. That’s like my best leading indicator for who’s going to be a good blogger.", "Dwarkesh Patel", "And do you have advice on what kinds of things to start? One frustration you can have is you want to do it, but you have so little to say, you don’t have that deep a world model, a lot of the ideas you have are just really shallow or wrong. Just do it anyway?", "Scott Alexander", "So I think there are two possibilities there. One is that you are, in fact, a shallow person without very many ideas. In that case I’m sorry, it sounds like that’s not going to work. But usually when people complain that they’re in that category, I read their Twitter or I read their Tumblr, or I read their ACX comments, or I listen to what they have to say about AI risk when they’re just talking to people about it, and they actually have a huge amount of things to say. Somehow it’s just not connecting with whatever part of them has lists of things to blog about.", "So that may be another one of those skills that only 20% of people have, is when you have an idea you actually remember it and then you expand on it. I think a lot of blogging is reactive; You read other people’s blogs and you’re like, no, that person is totally wrong. A part of what we want to do with this scenario is say something concrete and detailed enough that people will say, no, that’s totally wrong, and write their own thing.", "But whether it’s by reacting to other people’s posts, which requires that you read a lot, or by having your own ideas, which requires you to remember what your ideas are, I think that 90% of people who complain that they don’t have ideas, I think actually have enough ideas. I don’t buy that as a real limiting factor for most people.", "Dwarkesh Patel", "I have noticed two things in my own… I mean, I don’t do that much writing, but from the little I do: one, I actually was very shallow and wrong when I started. I started the blog in college. So if you are somebody who’s like, “this is bullshit, there’s nothing to this. Somebody else wrote about this already”, that’s fine, what did you expect? Right? Of course, as you’re reading more things and learning more about the world, that’s to be expected and just keep doing it if you want to keep getting better at it.", "And the other thing now when I write blog posts, as I’m writing them, I’m just like, “why? These are just some random stories from when I was in China. They’re like kind of cringe stories”. Or with the AI firm’s post, it’s like, “come on, these are just weird ideas. And also some of these seem obvious, whatever”. My podcasts do what I expect them to do. My blogs just take off way more than I expect them to take off in advance.", "Scott Alexander", "Your blog posts are actually very good.", "Daniel Kokotajlo", "Yeah, they’re good.", "Dwarkesh Patel", "But the thing I would emphasize is that, for me, I’m not a regular writer and I couldn’t do them on a daily basis. And as I’m writing them, it’s just this one or two week long process of feeling really frustrated. Like, “this is all bullshit, but I might as well just stick with the sunk cost and just do it”.", "Scott Alexander", "It’s interesting because like a lot of areas of life are selected for arrogant people who don’t know their own weaknesses because they’re the only ones who get out there. I think with blogs and I mean this is self-serving, maybe I’m an arrogant person, but that doesn’t seem to be the case. I hear a lot of stuff from people who are like, “I hate writing blog posts. Of course I have nothing useful to say”, but then everybody seems to like it and reblog it and say that they’re great.", "Part of what happened with me was I spent my first couple years that way, and then gradually I got enough positive feedback that I managed to convince the inner critic in my head that probably people will like my blog post. But there are some things that people have loved that I was absolutely on the verge of, “no, I’m just going to delete this, it would be too crazy to put it out there”. That’s why I say that maybe the limiting factor for so many of these people is courage because everybody I talk to who blogs is within 1% of not having enough courage of blogging.", "Dwarkesh Patel", "That’s right. That’s right. And also “courage” makes it sound very virtuous, which I think it can often be, given the topic, but at least often it’s just like…", "Scott Alexander", "Confidence?", "Dwarkesh Patel", "No, not even confidence. It’s closer to maybe what an aspiring actor feels when they go to an audition where it’s like, “I feel really embarrassed. But also I just really want to be a movie star”.", "Scott Alexander", "So the way I got through this is I blogged for like 8 to 10 years on LiveJournal before- no, it was less than that. It’s more like five years on LiveJournal before ever starting a real blog. I posted on LessWrong for a year or two before getting my own blog. I got very positive feedback from all of that, and then eventually I took the plunge to start my own blog. But it’s ridiculous. What other career do you need seven years of positive feedback before you apply for your first position?", "I mean, you have the same thing. You’ve gotten rave reviews for all of your podcasts, and then you’re kind of trying to transfer to blogging with probably... First of all, you have a fan base. People are going to read your blog. That, I think is one thing, is people are just afraid no one will read it, which is probably true for most people’s first blog. And then there are enough people who like you that you’ll probably get mostly positive feedback, even if the first things you write aren’t that polished. So I think you and I both had that. A lot of people I know who got into blogging kind of had something like that. And I think that’s one way to get over the fear gap.", "Dwarkesh Patel", "I wonder if this sends the wrong message or raises expectations or raises concerns and anxieties. But one idea I’ve been shooting around, and I’d be curious about your take on this: I feel like this slow, compounding growth of a fan base is fake. If I notice some of the most successful things in our sphere that have happened; Leopold releases Situational Awareness . He hasn’t been building up a fan base over years. It’s just really good. And as you were mentioning a second ago, whenever you notice a really great new blogger, it’s not like it then takes them a year or two to build up a fan base. Nope, everybody, at least that they care about, is talking about it almost immediately.", "I mean, Situational Awareness is in a different tier almost. But things like that and even things that are an order of magnitude smaller than that will literally just get read by everybody who matters. And I mean literally everybody. And I expect this to happen with AI 2027 when it comes out. But Daniel, you’ve been building your reputation within this specific community, and I expect AI 2027 it's just really good. And I expect it’ll just blow up in a way that isn’t downstream of you having built up an audience over years.", "Daniel Kokotajlo", "Thank you. I hope that happens. We’ll see.", "Scott Alexander", "Slightly pushing back against that. I have statistics for the first several years of Slate Star Codex, and it really did grow extremely gradually. The usual pattern is something like every viral hit, 1% of the people who read your viral hits stick around. And so after dozens of viral hits, then you have a fan base. But smoothed out, It does look like a- I wish I had seen this recently, but I think it’s like over the course of three years, it was a pretty constant rise up to some plateau where I imagine it was a dynamic equilibrium and as many new people were coming in as old people were leaving.", "I think that with Situational Awareness , I don’t know how much publicity Leopold put into it. We’re doing pretty deliberate publicity, we’re going on your podcast. I think you can either be the sort of person who can go on a Dwarkesh podcast and get the New York Times to write about you, or you can do it organically, the old fashioned way, which is very long.", "Dwarkesh Patel", "Yeah. Okay. So you say that throwing money at people to make them, to get them to blog at least didn’t seem to work for the FTX folks. If it was up to you, what would you do? What’s your grant plan to get 10 more Scott Alexanders?", "Scott Alexander", "Man. So my friend Clara Collier , who’s the editor of Asterisk magazine , is working on something like this for AI blogging. And her idea, which I think is good, is to have a fellowship. I think Nick’s thing was also a fellowship, but the fellowship would be, there is an Asterisk AI blogging fellows’ blog or something like that. Clara will edit your post, make sure that it’s good, put it up there and she’ll select many people who she thinks will be good at this. She’ll do all of the kind of courage requiring work of being like, “yes, your post is good. I’m going to edit it now. Now it’s very good. Now I’m going to put it on the blog”.", "And I think her hope is that, let’s say of the fellows that she chooses, now it’s not that much of a courage step for them to start it because they have the approval of what last psychiatrist would call an omniscient entity, somebody who is just allowed to approve things and tell you that you’re okay on a psychological level. And then like maybe of those fellows, some percent of them will have their blog posts be read and people will like them. And I don’t know how much reinforcement it takes to get over the high prior everyone has on “no one will like my blog”. But maybe for some people, the amount of reinforcement they get there will work.", "Yeah, like an interesting example would be all of the journalists who have switched to having Substacks. Many of them go well. Would all of those journalists have become bloggers if there was no such thing as mainstream media? I’m not sure. But if you’re Paul Krugman you know people like your stuff, and then when you quit the New York Times you know you can just open a substack and start doing exactly what you were doing before. So I don’t know, maybe my answer is there should be mainstream media. I hate to admit that, but maybe it’s true.", "Dwarkesh Patel", "Invented it from first principles.", "Scott Alexander", "Yeah.", "Dwarkesh Patel", "Well I do think that it should be treated more as a viable career path. Where right now, if you told your parents, “I’m going to become a startup founder”, I think the reaction would be like, “there’s a 1% chance you’ll succeed, but it’s an interesting experience and if you do succeed, that’s crazy. That’ll be great. If you don’t, you’ll learn something. It’ll be helpful to the thing you do afterwards”.", "We know that’s true of blogging, right? We know that it helps you build up a network, it helps you develop your ideas. And if you do succeed, you get a dream job for a lifetime. And I think maybe they don’t have that mindset, but also they under appreciate how much you actually could succeed at it. It’s not a crazy outcome to make a lot of money as a blogger.", "Scott Alexander", "I think it might be a crazy outcome to make a lot of money as a blogger. I don’t know what percent of people who start a blog end up making enough that they can quit their day job. My guess is it’s a lot worse than for startup founders. I would not even have that as a goal so much as like the Scott Aaronson goal of, okay, you’re still a professor, but now you’re the professor whose views everybody knows and who has kind of a boost up in respect in your field and especially outside of your field. And also you can correct people when they’re wrong, which is a very important side benefit.", "Dwarkesh Patel", "Yeah. How does your old blogging feedback into your current blogging? So when you’re discussing a new idea, I mean, AI or whatever else, are you just able to pull from the insights from your previous commentary on sociology or anthropology or history or something?", "Scott Alexander", "Yeah. So I think this is the same as anybody who’s not blogging. I think the thing everybody does is they’ve read many books in the past and when they read a new book, they have enough background to think about it. Like you are thinking about our ideas in the context of Joseph Henrich’s book. I think that’s good, I think that’s the kind of place that intellectual progress comes from. I think I am more incentivized to do that.", "It’s hard to read books. I think if you look at the statistics, they’re terrible. Most people barely read any books in a year. And I get lots of praise when I read a book and often lots of money, and that’s a really good incentive. So I think I do more research, deep dives, read more books than I would if I weren’t a blogger. It’s an amazing side benefit. And I probably make a lot more intellectual progress than I would if I didn’t have those really good incentives.", "Dwarkesh Patel", "Yeah. There was actually a prediction market about the year by which an AI would be able to write a blog post as good as you. Was it 2026 or 2027? I think it was 2027. It was like 15% by 2027 or something like that. It is an interesting question of they do have your writing and all other good writing in trading distribution. And weirdly, they seem way better at getting superhuman at coding than they are at writing, which is the main thing in their distribution.", "Scott Alexander", "Yeah. It’s an honor to be my generation’s Garry Kasparov figure. Yeah. So I’ve tried this. And first of all, it does a decent job. I respect its work. It’s not perfect yet. I think it’s actually better at the style on a word-to-word, sentence-to-sentence level, than it is at planning out a blog post. So I think there are possibly two reasons for it: One, we don’t know how the base model would have done at this task. We know that all the models we see are to some degree reinforcement learning into a kind of corporate speak mode. You can get it somewhat out of that corporate speak mode. But I don’t know to what degree this is actually doing its best to imitate Scott Alexander versus hit some average between Scott Alexander and corporate speak. And I don’t think anyone knows except the internal employees who have access to the base model.", "And the second thing I think of maybe just because it’s trendy has an agency or horizon failure, like deep research is an okay researcher. It’s not a great researcher. If you actually want to understand an issue in depth, you can’t use deep research. You gotta do it on your own. So if I spend maybe five to 10 hours researching a really research heavy blog post, the METR thing, I know we’re not supposed to use it for any task except coding, but like it says, on average the AI’s horizon is one hour. So I’m guessing it just cannot plan and execute a good blog post. It does something very superficial rather than actually going through the steps. So my guess for that prediction market would be whenever we think the agents are actually good. I think in our scenario that’s like late 2026. I’m going to be humble and not hold out for the superintelligence.", "Daniel Kokotajlo", "What about comments? I feel like intuitively it feels like before we see the AI’s writing great blog posts that go super viral repeatedly, we should see them writing highly upvoted comments on things.", "Scott Alexander", "Yeah. And I think somebody mentioned this on the LessWrong post about it and somebody made some AI generated comments to that post. They were not great. But I wouldn’t have immediately picked them out of the general distribution of LessWrong comments as especially bad. I think, like, I think if you were to try this, you would get something that was so obviously an AI house style that it would use the word ‘delve’ or things along those lines.", "I think if you were able to avoid that maybe by using the base model, maybe by using some kind of really good prompt to be like, “no, do this in Gwern’s voice”, you would get something that was pretty good. I think if you wrote a really stupid blog post, it could point out the correct objections to it. But I also just don’t think it’s as smart as Gwern right now. So its limit on making Gwern-style comments is both- It needs to be able to do a style other than corporate delve slop and then it actually needs to get good.", "Daniel Kokotajlo", "It needs to have good ideas that other people don’t already have.", "Scott Alexander", "Yeah. And I mean I think it can write as well as a smart average person in a lot of ways. And I think if you have a blog post that's worse than that or at that level, it can come up with insightful comments about it. I don’t think it could do it on a quality blog post.", "Dwarkesh Patel", "There was this recent Financial Times article about how have you reached peak cognitive power ? Where it was talking about declining scores in PISA and SAT and so forth. On the Internet especially, it does seem like there might have been a golden era before I was that active on the forums or whatever. Do you have nostalgia for a particular time on the Internet when it was just like, this is an intellectual mecca?", "Scott Alexander", "I am so mad at myself for missing most of the golden age of blogging. I feel like if I had started a blog in 2000 or something, then- I don’t know, I’ve done well for myself, I can’t complain- but the people from that era all founded news organizations or something. I mean, God save me from that fate. I would have liked to have been there. I would have liked to see what I could have done in that area. I mean, I wouldn’t compare the decline of the Internet to that stuff with PISA because I’m sure the Internet is just more people are coming on, it’s a less heavily selected sample.", "But yeah, I could have passed on the whole era where they were talking about atheism versus religion nonstop. That was pretty crazy. But I do hear good things about the golden age of blogging.", "Dwarkesh Patel", "Anybody who was sort of counterfactually responsible for you starting to blog or keeping blogging?", "Scott Alexander", "So I owe a huge debt of gratitude to Eliezer Yudkowski. I had a live journal before that. But it was going on LessWrong that convinced me I could move to the big times. And second of all, I just think I learned I imported a lot of my worldview from him. I think I was the most boring normie liberal in the world before encountering LessWrong. And I don’t 100% agree with all LessWrong ideas, but just having things of that quality beamed into my head and for me to react to and think about was really great.", "Dwarkesh Patel", "And tell me about the fact that you could be and were at some point anonymous, I think for most of human history, somebody who is an influential advisor or an intellectual or somebody. Actually, I don’t know if this is true. You would have had to have some sort of public persona. And a lot of what people read into your work is actually a reflection of your public persona.", "Scott Alexander", "Sort of. The reason half of these ancient authors are called things like Pseudo Dionysus or Pseudocelsus is that you could just write something being like, “oh, yeah, this is by Saint Dionysus”. And then, I don’t know, you could be anybody.", "And I don’t know exactly how common that was in the past. But yeah, I agree that the Internet has been a golden age for anonymity. I’m a little bit concerned that AI will make it much easier to break anonymity. I hope the golden age continues.", "Dwarkesh Patel", "Yeah, seems like a great note to end on. Thank you guys so much for doing this.", "Scott Alexander", "Thank you.", "Daniel Kokotajlo", "Thank you so much. This was a blast.", "Dwarkesh Patel", "Yeah, I had a great time.", "Daniel Kokotajlo", "Huge fan of your podcast.", "Dwarkesh Patel", "Thank you." ]
[ "https://rationalwiki.org/wiki/Scott_Alexander", "https://www.alignmentforum.org/users/daniel-kokotajlo", "https://slatestarcodex.com/", "https://www.astralcodexten.com/", "https://ai-futures.org/", "https://en.wikipedia.org/wiki/Sam_Altman", "https://en.wikipedia.org/wiki/Dario_Amodei", "https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like", "https://www.elilifland.com/", "https://samotsvety.org/", "https://www.lesswrong.com/users/thomas-larsen", "https://www.linkedin.com/in/jonasvollmer", "https://www.twitch.tv/claudeplayspokemon", "https://katjagrace.com/", "https://en.wikipedia.org/wiki/Robin_Hanson", "https://www.metaculus.com/", "https://en.wikipedia.org/wiki/David_W._Anthony", "https://en.wikipedia.org/wiki/The_Horse,_the_Wheel,_and_Language", "https://en.wikipedia.org/wiki/Yamnaya_culture", "https://www.britannica.com/topic/modus-ponens", "https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mufW9iFSxRxqNpvyQ/d2mjevfaxcqt15ihv6ly", "https://en.wikipedia.org/wiki/Ray_Kurzweil", "https://en.wikipedia.org/wiki/Mirror_life", "https://en.wikipedia.org/wiki/Geoffrey_Hinton", "https://www.tsmc.com/english", "https://www.ark-invest.com/wrights-law", "https://en.wikipedia.org/wiki/Virtual_Cell", "https://en.wikipedia.org/wiki/Deng_Xiaoping", "https://en.wikipedia.org/wiki/Thomas_Newcomen", "https://en.wikipedia.org/wiki/James_Watt", "https://en.wikipedia.org/wiki/Tyler_Cowen", "https://www.youtube.com/watch?v=GT_sXIUJPUo", "https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/", "https://press.princeton.edu/books/paperback/9780691178431/the-secret-of-our-success?srsltid=AfmBOooYnHMQdLTu1LLvFb5Ih6GS9gL5IOdlqFG-EIXaYcE0Re1nEmjk", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://ih1.redbubble.net/image.4805337899.7994/ssrco,slim_fit_t_shirt,flatlay,284bb5:fa57595f98,front,wide_portrait,750x1000-bg,f8f8f8.jpg", "https://en.wikipedia.org/wiki/AlphaStar_(software)", "https://en.wikipedia.org/wiki/P(doom)", "https://www.fema.gov/disaster/defense-production-act", "https://openai.com/index/chain-of-thought-monitoring/", "https://x.com/i/grok/share/ucRjEOYmxGfZ0Mp4aKWZS1rBB", "https://x.com/i/grok/share/Nj2tsvCpgEfU3OCHh0Ci4qHTf", "https://x.com/ibab/status/1893778842399154463", "https://www.anthropic.com/research/alignment-faking", "https://openai.com/index/chain-of-thought-monitoring/", "https://openreview.net/pdf/b79f035d87a4cd2744c9a8fce596814ce5126985.pdf", "https://www.lesswrong.com/users/l-rudolf-l", "https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi", "https://www.lesswrong.com/w/conflict-vs-mistake", "https://en.wikipedia.org/wiki/Lawrence_Summers", "https://en.wikipedia.org/wiki/Jason_Furman", "https://en.wikipedia.org/wiki/Peter_Navarro", "https://youtu.be/XhB3qH_TFds?t=317", "https://en.wikipedia.org/wiki/Kelsey_Piper", "https://archive.is/5rxLE", "https://www.linkedin.com/in/leopold-aschenbrenner", "https://www.applieddivinitystudies.com/", "https://loukeep.substack.com/", "https://substack.com/@nickw", "https://worksinprogress.co/", "https://fantasticanachronism.com/", "https://www.amazon.co.uk/Unsong-Scott-Alexander/dp/B0D57BYS3Y?dib=eyJ2IjoiMSJ9.7AZrooaRo0zAZYcodL6YTRJqOjuNLTXeCJNtcxZxGtxacrZ2HYkKWVSmMO168wD9AoC-rKOCJ_4G2S_CT_O1jeizBDgegukA8izZ6EZ3F3SPgodYkpCa2nAiY9CD4fbCxbIvOWmygMxOc6TEny5-AzI5uU5a9W5A9S0ovUSwR2hSeIR2rP0IRtd4xU8yg_Ksd6XkBkfDBXNmmyxEi55rT3tctPYLZYe0-b2sMooEAhE.ebBRBXftLDwUcT3vqSweIAB4ZqHc8ENCl1uBfRqo1EQ&dib_tag=se&qid=1743587380&refinements=p_lbr_books_authors_browse-bin%3AScott+Alexander&s=books&sr=1-1", "https://en.wikipedia.org/wiki/Scott_Aaronson", "https://forum.effectivealtruism.org/users/clara-collier", "https://asteriskmag.com/", "https://en.wikipedia.org/wiki/Paul_Krugman", "https://metr.org/", "https://archive.is/zYRBY" ]
https://www.dwarkesh.com/p/shane-legg
Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models
[ "(0:00:00) - Measuring AGI Progress", "Dwarkesh Patel 0:00:00", "Today I have the pleasure of interviewing Shane Legg , who is the founder and the Chief AGI scientist of Google DeepMind. Shane, welcome to the podcast.", "Shane Legg 0:00:13", "Thank you. It's a pleasure being here.", "Dwarkesh Patel 0:00:15", "First question. How do we measure progress towards AGI concretely? We have these loss numbers and we can see how the loss improves from one model to another, but it's just a number. How do we interpret this? How do we see how much progress we're actually making?", "Shane Legg 0:00:26", "That’s a hard question. AGI by its definition is about generality. It's not about doing a specific thing. It's much easier to measure performance when you have a very specific thing in mind because you can construct a test around that.", "Maybe I should first explain what I mean by AGI because there are a few different notions around it. When I say AGI, I mean a machine that can do the sorts of cognitive things that people can typically do, possibly more. To be an AGI that's the bar you need to meet.", "So if we want to test whether we're meeting the threshold or we're getting close to the threshold, what we actually need is a lot of different kinds of measurements and tests that span the breadth of all the sorts of cognitive tasks that people can do and then to have a sense of what human performance is on these sorts of tasks. That then allows us to judge whether or not we're there.", "It's difficult because you'll never have a complete set of everything that people can do because it's such a large set. But I think that if you ever get to the point where you have a pretty good range of tests of all sorts of cognitive things that we can do, and you have an AI system which can meet human performance and all those things and then even with effort, you can't actually come up with new examples of cognitive tasks where the machine is below human performance then at that point, you have an AGI.", "It may be conceptually possible that there is something that the machine can't do that people can do but if you can't find it with some effort, then for all practical purposes, you have an AGI.", "Dwarkesh Patel 0:02:12", "Let's get more concrete. We measure the performance of these large language models on MMLU and other benchmarks. What is missing from the benchmarks we use currently? What aspect of human cognition do they not measure adequately?", "Shane Legg 0:02:31", "Another hard question. These are quite big areas. They don't measure things like understanding streaming video, for example, because these are language models and people can do things like understanding streaming video.", "They don't do things like episodic memory. Humans have what we call episodic memory. We have a working memory, which are things that have happened quite recently, and then we have a cortical memory, things that are sort of being in our cortex, but there's also a system in between, which is episodic memory, which is the hippocampus. It is about learning specific things very, very rapidly. So if you remember some of the things I say to you tomorrow, that'll be your episodic memory hippocampus.", "Our models don't really have that kind of thing and we don't really test for that kind of thing. We just sort of try to make the context windows, which is more like working memory, longer and longer to sort of compensate for this.", "But it is a difficult question because the generality of human intelligence is very, very broad. So you really have to start going into the weeds of trying to find if there's specific types of things that are missing from existing benchmarks or different categories of benchmarks that don't currently exist or something.", "Dwarkesh Patel 0:03:55", "The thing you're referring to with episodic memory, would it be fair to call that sample efficiency or is that a different thing?", "Shane Legg 0:04:01", "It's very much related to sample efficiency. It's one of the things that enables humans to be very sample efficient. Large language models have a certain kind of sample efficiency because when something's in their context window, that biases the distribution to behave in a different way and so that's a very rapid kind of learning. There are multiple kinds of learning and the existing systems have some of them, but not others. It's a little bit complicated.", "Dwarkesh Patel 0:04:30", "Is this kind of memory, what we call sample efficiency, a fatal flaw of these deep learning models that it just takes trillions of tokens, a magnitude more than what any human will see throughout their lifetime or is this something that will be solved over time?", "Shane Legg 0:04:46", "The models can learn things immediately when it's in the context window and then they have this longer process when you actually train the base model and that's when they're learning over trillions of tokens. But they miss something in the middle. That's sort of what I'm getting at here.", "I don't think it's a fundamental limitation. I think what's happened with large language models is something fundamental has changed. We know how to build models now that have some degree of understanding of what's going on. And that did not exist in the past. And because we've got a scalable way to do this now, that unlocks lots and lots of lots of new things.", "Now we can look at things which are missing, such as this sort of episodic memory type thing, and we can then start to imagine ways to address that. My feeling is that there are relatively clear paths forward now to address most of the shortcomings we see in the existing models, whether it's about delusions, factuality, the type of memory and learning that they have, or understanding video, or all sorts of things like that. I don't see any big blockers. I don't see big walls in front of us. I just see that there's more research and work and all these things will improve and probably be adequately solved.", "Dwarkesh Patel 0:06:09", "Going back to the original question of how do you measure when human level AI has arrived or has gone beyond it. As you mentioned, there's these other sorts of benchmarks you can use and other sorts of traits, but concretely, what would it have to do for you to be like, “Okay, we've reached human level.”", "Would it have to beat Minecraft from start to finish? Would it have to get 100% on MMLU? What would it have to do?", "Shane Legg 0:06:31", "There is no one thing that would do it, because I think that's the nature of it. It's about general intelligence. So I'd have to make sure it could do lots and lots of different things and it didn't have a gap.", "We already have systems that can do very impressive categories of things to human level or even beyond. I would want a whole suite of tests that I felt was very comprehensive and then furthermore, when people come in and say, “Okay, so it's passing a big suite of tests, let's try to find examples. Let's take an adversarial approach to this. Let's deliberately try to find examples where people can clearly, typically do this, but the machine fails.” And when those people cannot succeed, I'll go, “Okay, we're probably there.”", "Dwarkesh Patel 0:07:16", "A lot of your earlier research, at least the ones I could find, emphasized that AI should be able to manipulate and succeed in a variety of open-ended environments. It almost sounds like a video game. Is that where your head is still at now, or do you think about it differently?", "Shane Legg 0:07:32", "It’s evolved a bit. When I did my thesis work around universal intelligence, I was trying to come up with an extremely universal, general, mathematically clean framework for defining and measuring intelligence. I think there were aspects of that that were successful. I think in my own mind, it clarified the nature of intelligence as being able to perform well in lots of different domains and different tasks and so on. It's about that sort of capability of performance and the breadth of performance. I found that was quite helpful and enlightening.", "There was always the issue of the reference machine. In the framework, you have a weighting of things according to the complexity. It's like an Occam's razor type of thing, where you weight tasks and environments which are simpler, more highly. You’ve got a countable space of semi-computable environments. And that Kolmogorov complexity measure has something built into it, which is called a reference machine. And that's a free parameter. So that means that the intelligence measure has a free parameter in it and as you change that free parameter, it changes the weighting and the distribution over the space of all the different tasks and environments. This is sort of an unresolved part of the whole problem. So what reference machine should we ideally use? There's no universal reference machine. People will usually put a Turing machine in there, but there are many kinds of different machines.", "Given that it's a free parameter, I think the most natural thing to do is to think about what's meaningful to us in terms of intelligence. I think human intelligence is meaningful to us in the environment that we live in. We know what human intelligence is. We are human too. We interact with other people who have human intelligence. We know that human intelligence is possible, obviously, because it exists in the world. We know that human intelligence is very, very powerful because it's affected the world profoundly in countless ways. And we know if human level intelligence was achieved, that would be economically transformative because the types of cognitive tasks people do in the economy could be done by machines then. And it would be philosophically important because this is sort of how we often think about intelligence. Historically it would be a key point.", "So I think that human intelligence in a human-like environment is quite a natural sort of reference point. You could imagine setting your reference machine to be such that it emphasizes the kinds of environments that we live in as opposed to some abstract mathematical environment. And so that's how I've kind of gone on this journey of — “Let's try to define a completely universal, clean, mathematical notion of intelligence” to “Well, it's got a free parameter. “", "One way of thinking about it is to think more concretely about human intelligence and build machines that can match human intelligence. Because we understand what that is and we know that that is a very powerful thing. It has economic, philosophical and historical importance.", "The other aspect of course is that, in this pure formulation of Kolmogorov complexity, it's actually not computable. I also knew that there was a limitation at the time but it was an effort to just theoretically come up with a clean definition. I think we can sort of get there, but we have this issue of a reference machine, which is unspecified.", "(0:11:41) - Do we need new architectures?", "Dwarkesh Patel 0:11:41", "Before we move on, I do want to ask a question on the original point you made on LLMs needing episodic memory. You said that these are problems that we can solve and these are not fundamental impediments.", "But when you say that, do you think they will just be solved by scale or do each of these need a fine-grained specific solution that is architectural in nature?", "Shane Legg 0:12:06", "I think it'll be architectural in nature because the current architectures don't really have what you need to do this. They basically have a context window, which is very, very fluid, of course, and they have the weights, which things get baked into very slowly. So to my mind, that feels like working memory, which is like the activations in your brain, and then the weights are like the synapses in your cortex.", "Now, the brain separates these things out. It has a separate mechanism for rapidly learning specific information because that's a different type of optimization problem compared to slowly learning deep generalities. There's a tension between the two but you want to be able to do both. You want to be able to hear someone's name and remember it the next day. And you also want to be able to integrate information over a lifetime so you start to see deeper patterns in the world.", "These are quite different optimization targets, different processes, but a comprehensive system should be able to do both. And so I think it's conceivable you could build one system that does both, but you can also see that because they're quite different things, it makes sense for them to be done differently. I think that's why the brain does it separately.", "Dwarkesh Patel 0:13:22", "I'm curious about how concretely you think that would be achieved. DeepMind has been working on these domain specific reinforcement learning type setups: AlphaFold, AlphaCode and so on. How does that fit into what you see as a path to AGI? Have these just been orthogonal domain specific models or do they feed into the eventual AGI?", "Shane Legg 0:13:50", "Things like AlphaFold are not really feeding into AGI. We may learn things in the process that may end up being relevant, but I don't see them as likely being on the path to AGI. But we're a big group. We've got hundreds and hundreds and hundreds of PhDs working on lots of different projects. When we find what we see as opportunities to do something significant like AlphaFold, we'll go and do it. It's not like we only do AGI type work. We work on fusion reactors and various things in sustainability, energy. We've got people looking at satellite images of deforestation. We have people looking at weather forecasting. We've got tons of people working on lots of things.", "Dwarkesh Patel 0:14:42", "On the point you made earlier about the reference machine as human intelligence. It's interesting because one of the things you mentioned in your 2008 thesis about how you would measure intelligence was — You said you could do a compression test and you could see if it fills in words and a sample of text and that could measure intelligence. And funnily enough, that's basically how the LLMs are trained.", "At the time, did it stick out to you as an especially fruitful thing to train for?", "Shane Legg 0:15:12", "Well, yeah. In the sense what's happened is actually very aligned with what I wrote about in my thesis. The ideas from Marcus Hutter with AIXI, where you take Solomonoff induction, which is this incomputable but theoretically very elegant and extremely sample efficient prediction system, and then once you have that, you can build a general agent on top of it by basically adding search and reinforcement signal. That's what you do with AIXI.", "But what that sort of tells you is that if you have a fantastically good sequence predictor, some approximation of Solomonoff induction, then going from that to a very powerful, very general AGI system is just sort of another step. You've actually solved a lot of the problem already.", "And I think that's what we're seeing today actually, that these incredibly powerful foundation models are incredibly good sequence predictors that are compressing the world based on all this data. And then you will be able to extend these in different ways and build very, very powerful agents.", "(0:16:26) - Is search needed for creativity?", "Dwarkesh Patel 0:16:26", "Let me ask you more about that.", "Richard Sutton's Bitter Lesson essay says that there's two things you can scale, search and learning. I guess you could say that LLMs are about the learning aspect. The search stuff, which you worked on throughout your career, where you have an agent that is interacting with this environment, is that the direction that needs to be explored again? Or is that something that needs to be added to LLMs where they can actually interact with their data or the world or in some way?", "Shane Legg 0:16:52", "Yeah, I think that's on the right track. These foundation models are world models of a kind and to do really creative problem solving, you need to start searching. If I think about something like AlphaGo and the famous Move 37 , where did that come from? Did that come from all its data that it's seen of human games or something like that? No, it didn't. It came from it identifying a move as being quite unlikely, but plausible. And then via a process of search, coming to understand that it was actually a very, very good move.", "So to get real creativity, you need to search through spaces of possibilities and find these hidden gems. That's what creativity is. Current language models don't really do that. They really are mimicking the data. They are mimicking all the human ingenuity and everything, which they have seen from all this data that's coming from the internet that's originally derived from humans.", "These models can blend things. They can do Harry Potter in the style of a Kanye West rap or something, even though it's never happened, they can blend things together. But if you want a system that can go truly beyond that and not just generalize in novel ways and do something that's truly creative, that is not just a blending of existing things, that requires searching through a space of possibilities and finding these hidden gems that are hidden away in there somewhere. And that requires search. So I don't think we'll see systems that truly step beyond their training data until we have powerful search in the process.", "Dwarkesh Patel 0:18:43", "There are rumors that Google DeepMind is training newer models, and you don't have to comment on those specifically, but when you do that, if it's the case that something like search is required to go to the next level, are you training in a completely different way than how GPT-4 or other transformers are trained?", "Shane Legg 0:19:00", "And I can't say much about how we're training. I think it's fair to say we're roughly doing the sorts of scaling and training that you see many people in the field doing but we have our own take on it and our own different tricks and techniques.", "(0:19:19) - Superhuman alignment", "Dwarkesh Patel 0:19:19", "Okay, maybe we'll come back to it and get another answer on that.", "Let's talk about alignment briefly. What will it take to align human level and superhuman AIs?", "It's interesting because the sorts of reinforcement learning and self-play kinds of setups that are popular now, like Constitution AI or RLHF, DeepMind obviously has expertise in it for decades longer. I'm curious what you think of the current landscape and how DeepMind pursues that problem of safety towards human level models.", "Shane Legg 0:19:50", "Do you want to know about what we're currently doing or do you want me to have a stab at what I think needs to be done?", "Dwarkesh Patel 0:19:56", "Needs to be done.", "Shane Legg 0:19:57", "Currently we're doing lots of things. We're doing interpretability. We're doing our process supervision. We're doing red teaming. We're doing evaluation for dangerous capabilities. We're doing work on institutions and governance and tons of stuff, right?", "Anyway, what do I think needs to be done? I think that powerful machine learning, powerful AGI, is coming in some time and if the system is really capable, really intelligent, really powerful, trying to somehow contain it or limit it is probably not a winning strategy because these systems ultimately will be very, very capable. So what you have to do is you have to align it. You have to get it such that it's fundamentally a highly ethical value aligned system from the get go. How do you do that?", "Maybe this is slightly naive, but this is my take on it — How do people do it? If you have a really difficult ethical decision in front of you, what do you do? You don't just do the first thing that comes to mind, because there could be a lot of emotions involved in other things. It's a difficult problem.", "What you have to do is to calm yourself down. You've got to sit down and you've got to think about it. You've got to think, “Well, okay, what could I do?” I could do this. I could do this. I could do this. If I do each of these things, what will happen? So that requires a model of the world. And then you have to think about ethically, how do I view each of these different actions and the possibilities and what might happen from it? What is the right thing to do? And as you think about all the different possibilities and your actions and what can follow from them and how it aligns with your values and your ethics, you can then come to some conclusion of what is really the best choice that you should be making if you want to be really ethical about this.", "I think AI systems need to essentially do the same thing. When you sample from a foundation model at the moment, it's blurting out the first thing. It's like System 1, if you like, from psychology, from Kahneman et al. That's not good enough.", "And if we do RLHF without human feedback (RLAIF), Constitutional AI tries to do that sort of thing, you're trying to fix the underlying System 1 in a sense. That can shift the distribution and that can be very helpful but it's a very high dimensional distribution and you're sort of poking it in a whole lot of points. So it's not likely to be a very robust solution. It's like trying to train yourself out of a bad habit. You can sort of do it eventually. But what you need to do is you need to have a System 2. You need the system to not just sample from the model. You need the system to go, “Okay, I'm going to reason this through. I'm going to do step by step reasoning. What are the options in front of me? I'm going to use my world model now and I'm going to use a good world model to understand what's likely to happen from each of these options.” And then reason about each of these from an ethical perspective.", "So you need a system which has a deep understanding of the world, a good world model, and has a good understanding of people, and has a good understanding of ethics, and it has robust and very reliable reasoning. And then you set it up in such a way that it applies this reasoning and this understanding of ethics to analyze the different options which are in front of it and then execute on which is the most ethical way forward.", "Dwarkesh Patel 0:23:52", "But when a lot of people think about the fundamental alignment problem, the worry is not that it's not going to have a world model to understand the effects of its actions, the worry is that the effects it cares about are not the ones we will care about. So even if you improve its system-2 thinking to do better planning, the fundamental problem is — We have these really nuanced values about what we want. How do we communicate those values and make sure they're reinforced in the AI?", "Shane Legg 0:24:26", "It needs not just a good model of the world, but it needs a really good understanding of ethics. And we need to communicate to the system what ethics and values it should be following.", "Dwarkesh Patel 0:24:36", "And how do we do that in a way that we can be confident that a super human level model will preserve those values or have learned them in the first place?", "Shane Legg 0:24:45", "It should preserve them because if it's making all its decisions based on a good understanding of ethics and values, and it's consistent in doing this, it shouldn't take actions which undermine that. That would be inconsistent.", "Dwarkesh Patel 0:24:59", "Right, so then how do we get to the point where it has learned them in the first place?", "Shane Legg 0:25:02", "Yeah, that's the challenge. We need to have systems. The way I think about it is this: to have a profoundly ethical AI system, it also has to be very, very capable. It needs a really good world model, a really good understanding of ethics, and it needs really good reasoning. Because if you don't have any of those things, how can you possibly be consistently profoundly ethical? You can't. So we actually need better reasoning, better understanding of the world, and better understanding of ethics in our systems.", "Dwarkesh Patel 0:25:33", "It seems to me that the former two would just come along for the ride as these models get more powerful.", "Shane Legg 0:25:38", "Yeah. That's a nice property because it's actually a capabilities thing to some extent.", "Dwarkesh Patel 0:25:42", "But if the third one, the ethical model, is a bottleneck, or if it’s a thing that doesn't come along with the AI itself, what is the actual technique to make sure that that happens?", "Shane Legg 0:25:55", "First of all, we should train the system on ethics generally so that it understands human ethics well. There's a lot of lectures and papers and books and all sorts of things. We need to make sure it understands humans ethics well, at least as well as a very good ethicist because that's important.", "And we then need to decide, of this general understanding of ethics, what do we want the system to actually value and what sort of ethics do we want it to apply? Now, that's not a technical problem. That's a problem for society and ethicists and so on to come up with.", "I'm not sure there's such a thing as optimal ethics but I'm pretty sure that it's possible to come up with a set of ethics, which is much better than what the so-called doomers are worried about in terms of the behavior of these AGI systems. And then what you do is you engineer the system to actually follow these things so that every time it makes a decision, it does an analysis using a deep understanding of the world and of ethics and very robust and precise reasoning to do an ethical analysis of what it's doing.", "And of course, we would want lots of other things. We would want people checking these processes of reasoning. We’d want people verifying that it's behaving itself in terms of how it reaches these conclusions.", "Dwarkesh Patel 0:27:31", "But I still feel like I don't understand how that fundamental problem of making sure it follows that ethic works. Because presumably, it has read Mao’s books so it understands Maoist ethics and understands all these other ethics. How do we make sure the ethic that ethicists say is the one is what it ends up following and not the other ones it understands?", "Shane Legg 0:27:52", "Right. So you have to specify to the system, these are ethical principles that you should follow.", "Dwarkesh Patel 0:27:58", "And how do we make sure it does that?", "Shane Legg 0:28:00", "We have to check it as it's doing it. We have to assure ourselves that it is consistently following these ethical principles at least as well as a group of human experts.", "Dwarkesh Patel 0:28:13", "Are you worried that if you do it the default way, which is just reinforcing it whenever it seems to be following them, you could be training deception as well?", "Shane Legg 0:28:22", "Reinforcement does have some dangerous aspects to it. I think it's actually more robust to check the process of reasoning and check its understanding of ethics. To reassure ourselves that the system has a really good understanding of ethics, it should be grilled for some time to try to really pull apart its understanding and make sure it is very robust.", "And also, if it's deployed, we should have people constantly looking at the decisions it’s making and the reasoning process that goes into those decisions to try to make sure that it is correctly reasoning about these types of things.", "Dwarkesh Patel 0:29:07", "Do you have some sort of framework for that at Google DeepMind?", "Shane Legg 0:29:11", "This is not so much a Google DeepMind perspective on this. This is my take on how I think we need to do this kind of thing. There are many different views within and there are different variants on these sorts of ideas as well.", "Dwarkesh Patel 0:29:26", "So then do you personally think there needs to be some sort of framework for as you arrive at certain capabilities, these are the concrete safety benchmarks that you must have instated at this point, or you should pause or slow down?", "Shane Legg 0:29:38", "I think that's a sensible thing to do but it's actually quite hard to do. There are some people thinking about that. I know Anthropic has put out some things like that. We were thinking about similar things but actually putting concrete things down is quite a hard thing to do. I think it's an important problem and I certainly encourage people to work on it.", "(0:29:58) - Impact of Deepmind on safety vs capabilities", "Dwarkesh Patel 0:29:58", "It's interesting because you have these blog posts that you wrote when you started DeepMind, back in 2008, where the motivation was to accelerate safety.", "On net, what do you think the impact of DeepMind has been on safety versus capabilities?", "Shane Legg 0:30:18", "Ooh, interesting. I don't know. It's hard to judge, actually.", "I've been worried about AGI safety for a long time, well before DeepMind. But it was always really hard to hire people to work on AGI safety, particularly in the early days. Back in 2013 or so, we had our first hire and he only agreed to do it part-time because he didn't want to drop all the capabilities work because of the impact it could have on his career. And this was someone who had already previously been publishing in AGI safety.", "I don't know. It's hard to know what is the counterfactual if we weren't there doing it. We have been a group that has talked about this openly. I've talked about the importance of it on many occasions. We've been hiring people to work on these topics. I know a lot of other people in the area and I've talked to them over many, many years. I've known Dario since 2005 or something and we've talked on and off about AGI safety and so on.", "The impact that DeepMind has had: I guess we were the first AGI company and as the first AGI company, we always had an AGI safety group. We've been publishing papers on this for many years. I think that's lent some credibility to the area of AGI safety. AGI was a fringe term not that long ago. I hope that creates some space for people.", "Dwarkesh Patel 0:32:17", "Where do you think AI progress itself would have been without DeepMind?", "This is not just a point that people make about DeepMind. I think this is a general point people make about OpenAI and Anthropic as well, that these people went into the business to accelerate safety and the net effect might have been to accelerate capabilities far more.", "Shane Legg 0:32:33", "Right, right. I think we have accelerated capabilities, but again, the counterfactuals are quite difficult. We didn't do ImageNet, for example, and ImageNet was very influential in attracting investment to the field. We did do AlphaGo, and that changed some people's minds. But, the community is a lot bigger than just DeepMind.", "If you went back more than five years in the future, we were able to do bigger projects with bigger teams and take on more ambitious things than a lot of the smaller academic groups, right? And so the sort of nature of the type of work we could do was a bit different. And that affected the dynamics in some ways.", "But, the community is much, much bigger than DeepMind. There are a number of other players with significant resources. Maybe we've sped things up a bit, but I think a lot of these things would have happened before too long anyway. Often good ideas are in the air, and as a researcher, when you're about to publish something, you see somebody else has got a very similar idea coming out with some good results. Often it's kind of like the time is right for things. So I find it very hard to reason about the counterfactuals there.", "(0:34:03) - Timelines", "Dwarkesh Patel 0:34:03", "Speaking of the early years, it's really interesting that in 2011, you had a blog post where you said — “I’ve decided to once again leave my prediction for when human level AGI will arrive unchanged.  That is, I give it a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war.”", "This is before deep learning, this is when nobody's talking about AI, and it turns out that if the trends continue, this is not an unreasonable prediction.", "How did you have that accurate of an estimate before all these trends came into effect?", "Shane Legg 0:34:32", "First I'd say it's not before deep learning. Deep learning was getting started around 2008.", "Dwarkesh Patel 0:34:37", "Oh, sorry. I meant to say before ImageNet.", "Shane Legg 0:34:38", "Before ImageNet? Yeah, that was 2012.", "I first formed those beliefs around 2001 after reading Ray Kurzweil's The Age of Spiritual Machines . There were two really important points in his book that I came to believe as true. One is that computational power would grow exponentially for at least a few decades. And that the quantity of data in the world would grow exponentially for a few decades. And when you have exponentially increasing quantities of computation and data, then the value of highly scalable algorithms gets higher and higher. There's a lot of incentive to make a more scalable algorithm to harness all this computing data. So I thought it would be very likely that we'll start to discover scalable algorithms to do this. And then there's a positive feedback between all these things, because if your algorithm gets better at harnessing computing data, then the value of the data and the compute goes up because it can be more effectively used. And that drives more investment in these areas. If your compute performance goes up, then the value of the data goes up because you can utilize more data. So there are positive feedback loops between all these things. That was the first thing.", "And then the second thing was just looking at the trends. If the scalable algorithms were to be discovered, then during the 2020s, it should be possible to start training models on significantly more data than a human would experience in a lifetime. And I figured that that would be a time where big things would start to happen that would eventually unlock AGI. So that was my reasoning process. And I think we're now at that first part. I think we can start training models now with the scale of the data that is beyond what a human can experience in a lifetime. So I think this is the first unlocking step.", "And so, yeah, I think there's a 50% chance that we have AGI by 2028. Now, it's just a 50% chance. I'm sure what's going to happen is we’re going to get to 2029 and someone's going to say, “Shane, you were wrong.” Come on, I said 50% chance.", "I think it's entirely plausible but I'm not going to be surprised if it doesn't happen by then. You often hit unexpected problems in research and science and sometimes things take longer than you expect.", "Dwarkesh Patel 0:37:13", "If we're in 2029 and it hasn't happened yet, if there was a problem that caused it, what would be the most likely reason for that?", "Shane Legg 0:37:22", "I don't know. At the moment, it looks to me like all the problems are likely solvable with a number of years of research. That's my current sense.", "Dwarkesh Patel 0:37:39", "And what does the time from here to 2028 look like if 2028 ends up being the year?", "Is it just that we have trillions of dollars of economic impact in the meantime and the world gets crazy or what happens?", "Shane Legg 0:37:51", "I think you'll see the existing models maturing. They'll be less delusional, much more factual. They'll be more up to date on what's currently going on when they answer questions. They'll become multimodal, much more than they currently are. And this will just make them much more useful.", "So I think probably what we'll see more than anything is just loads of great applications for the coming years. There can be some misuse cases as well. I'm sure somebody will come up with something to do with these models that is quite unhelpful. But my expectation for the coming years is mostly a positive one. We'll see all kinds of really impressive, really amazing applications for the coming years.", "Dwarkesh Patel 0:38:43", "And on the safety point, you mentioned these different research directions that are out there and that you are doing internally in DeepMind as well. Interpretability, RLAIF and so on. Which are you most optimistic about?", "Shane Legg 0:38:55", "Oooh. I don't know. I don't want to pick favorites. It's hard picking favorites. I know the people working on all these areas. I think things of the sort of system 2 flavor. There's work we have going on that Geoffrey Irving leads called Deliberative Dialogue, which has the System 2 flavor where a sort of debate takes place about the actions that an agent could take or what's the correct answer to something like this. And people then can sort of review these debates and so on. And they use these AI algorithms to help them judge the correct outcomes and so on. And so this is sort of meant to be a way in which to try to scale the alignment to increasingly powerful systems. I think things of that kind of flavor have quite a lot of promise in my opinion, but that's kind of quite a broad category. There are many different topics within that.", "Dwarkesh Patel 0:40:07", "That's interesting. So you mentioned two areas in which LLMs needs to improve. One is the episodic memory and the other is the System 2 thinking. Are those two related or are they two separate drawbacks?", "Shane Legg 0:40:23", "I think they're fairly separate, but they can be somewhat related. You can learn different ways of thinking through problems and actually learn about this rapidly using your episodic memory. All these different systems and subsystems interact so they're never completely separate. But I think conceptually you can probably think of them as quite separate things.", "I think delusions and factuality is another area that's going to be quite important and particularly important in lots of applications. If you want a model that writes creative poetry, then that's fine because you want to be able to be very free to suggest all kinds of possibilities and so on. You're not really constrained by a specific reality. Whereas if you want something that's in a particular application, normally you have to be quite concrete about what's currently going on and what is true and what is not true and so on. And models are a little bit sort of freewheeling when it comes to truth and creativity at the moment. And that I think limits their applications in many ways.", "(0:41:24) - Multimodality", "Dwarkesh Patel 0:41:24", "The final question is this. You've been in this field for over a decade, much longer than many others, and you've seen different landmarks like ImageNet and Transformers. What do you think the next landmark will look like?", "Shane Legg 0:41:37", "I think the next landmark that people will think back to and remember is going much more fully multimodal. That will open out the sort of understanding that you see in language models into a much larger space of possibilities. And when people think back, they'll think about, “Oh, those old fashioned models, they just did like chat, they just did text.” It just felt like a very narrow thing whereas now they understand when you talk to them and they understand images and pictures and video and you can show them things or things like that. And they will have much more understanding of what's going on. And it'll feel like the system's kind of opened up into the world in a much more powerful way.", "Dwarkesh Patel 0:42:30", "Do you mind if I ask a follow-up on that? ChatGPT just released their multimodal feature and you, in DeepMind, you had the Gato paper , where you have this one model where you can throw images, video games and even actions in there. So far it doesn't seem to have percolated as much as ChatGPT initially from GPT3 or something.", "What explains that? Is it just that people haven't learned to use multimodality? They're not powerful enough yet?", "Shane Legg 0:42:54", "I think it's early days. I think you will see understanding images and things more and more. But I think it's early days in this transition is when you start really digesting a lot of video and other things like that, that the systems will start having a much more grounded understanding of the world and all kinds of other aspects. And then when that works well, that will open up naturally lots and lots of new applications and all sorts of new possibilities because you're not confined to text chat anymore.", "Dwarkesh Patel 0:43:25", "New avenues of training data as well, right?", "Shane Legg 0:43:28", "Yeah, new training data and all kinds of different applications that aren't just purely textual anymore. And what are those applications? Well, probably a lot of them we can't even imagine at the moment because there are just so many possibilities once you can start dealing with all sorts of different modalities in a consistent way.", "Dwarkesh Patel 0:43:47", "Awesome. I think that's an actionable place to leave it off. Thank you so much for coming on the podcast Shane.", "Shane Legg 0:43:51", "Thank you." ]
[ "https://twitter.com/shanelegg?lang=en", "https://arxiv.org/abs/2009.03300", "https://www.vetta.org/documents/Machine_Super_Intelligence.pdf", "https://www.wikiwand.com/en/Kolmogorov_complexity", "http://www.hutter1.net/", "http://www.incompleteideas.net/IncIdeas/BitterLesson.html", "https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/", "https://www.dwarkeshpatel.com/p/dario-amodei", "https://www.vetta.org/2011/12/goodbye-2011-hello-2012/", "https://www.amazon.com/Age-Spiritual-Machines-Ray-Kurzweil/dp/0965086135", "https://naml.us/", "https://www.deepmind.com/publications/a-generalist-agent" ]
https://www.dwarkesh.com/p/sholto-douglas-trenton-bricken
Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind
[ "Edited by Teddy Kim , with lots of helpful links", "00:00:00 - Long contexts", "Dwarkesh Patel 0:00:00", "Okay, today I have the pleasure to talk with two of my good friends, Sholto and Trenton .", "Noam Brown , who wrote the Diplomacy paper , said this about Sholto: “he's only been in the field for 1.5 years, but people in AI know that he was one of the most important people behind Gemini's success.” And Trenton, who's at Anthropic , works on mechanistic interpretability and it was widely reported that he has solved alignment.", "So this will be a capabilities only podcast. Alignment is already solved, no need to discuss further.", "Let's start by talking about context lengths . It seemed to be underhyped, given how important it seems to me, that you can just put a million tokens into context . There's apparently some other news that got pushed to the front for some reason, but tell me about how you see the future of long context lengths and what that implies for these models.", "Sholto Douglas 00:01:28", "So I think it's really underhyped. Until I started working on it, I didn't really appreciate how much of a step up in intelligence it was for the model to have the onboarding problem basically instantly solved.", "You can see that a bit in the perplexity graphs in the paper where just throwing millions of tokens worth of context about a code base allows it to become dramatically better at predicting the next token in a way that you'd normally associate with huge increments in model scale. But you don't need that. All you need is a new context. So underhyped and buried by some other news.", "Dwarkesh Patel 00:01:58", "In context, are they as sample efficient and smart as humans?", "Sholto Douglas 00:02:02", "I think that's really worth exploring. For example, one of the evals that we did in the paper had it learn a language in context better than a human expert could, over the course of a couple of months.", "This is only a small demonstration but I'd be really interested to see things like Atari games where you throw in a couple hundred, or a thousand frames, of labeled actions in the same way that you'd show your friend how to play a game and see if it's able to reason through.", "It might. At the moment, with the infrastructure and stuff, it's still a bit slow at doing that, but I would actually guess that it might just work out of the box in a way that would be pretty mind-blowing.", "Trenton Bricken 00:02:38", "And crucially, I think this language was esoteric enough that it wasn't in the training data.", "Sholto Douglas 00:02:42", "Exactly. If you look at the model before it has that context thrown in, it doesn't know the language at all and it can't get any translations.", "Dwarkesh Patel 00:02:49", "And this is an actual human language?", "Sholto Douglas 00:02:51", "Exactly. An actual human language.", "Dwarkesh Patel 00:02:53", "So if this is true, it seems to me that these models are already in an important sense, superhuman. Not in the sense that they're smarter than us, but I can't keep a million tokens in my context when I'm trying to solve a problem, remembering and integrating all the information, an entire code base. Am I wrong in thinking this is a huge unlock?", "Sholto Douglas 00:03:14", "Actually, I generally think that's true. Previously, I've been frustrated when models aren't as smart, when you ask them a question and you want it to be smarter than you or to know things that you don't. This allows them to know things that you don't. It just ingests a huge amount of information in a way you just can't. So it's extremely important.", "Dwarkesh Patel 00:03:33", "Well, how do we explain in-context learning ?", "Sholto Douglas 00:03:35", "There's a line of work I quite like, where it looks at in-context learning as basically very similar to gradient descent, but the attention operation can be viewed as gradient descent on the in-context data. That paper had some cool plots where they basically showed “we take n steps of gradient descent and that looks like n layers of in-context learning, and it looks very similar.” So I think that's one way of viewing it and trying to understand what's going on.", "Trenton Bricken 00:03:59", "You can ignore what I'm about to say because, given the introduction, alignment is solved and AI safety isn't a problem.", "I think the context stuff does get problematic, but also interesting here. I think there'll be more work coming out in the not-too-distant future around what happens if you give a hundred shot prompt for jailbreaks, adversarial attacks . It's also interesting in the sense that, if your model is doing gradient descent and learning on the fly, even if it's been trained to be harmless, you're dealing with a totally new model in a way. You're fine-tuning but in a way where you can't control what's going on.", "Dwarkesh Patel 00:04:41", "Can you explain? What do you mean by gradient descent happening in the forward pass and attention?", "Trenton Bricken 00:04:45", "There was something in the paper about trying to teach the model to do linear regression but just through the number of samples or examples they gave in the context. And you can see if you plot on the x-axis the number of shots that it has, then the loss it gets on ordinary least squares regression will go down with time.", "Sholto Douglas 00:05:04", "And it goes down exactly matched with the number of gradient descent steps.", "Trenton Bricken 00:05:08", "Yeah, exactly.", "Dwarkesh Patel 00:05:09", "I only read the intro and discussion section of that paper. But in the discussion, the way they framed it is that the model, in order to get better at long-context tasks, has to get better at learning to learn from these examples or from the context that is already within the window.", "And the implication of that is, if meta-learning happens because it has to learn how to get better at long-context tasks, then in some important sense the task of intelligence requires long-context examples and long-context training.", "Sholto Douglas 00:05:45", "Understanding how to better induce meta-learning in your pre-training process is a very important thing about flexible or adaptive intelligence.", "Dwarkesh Patel 00:05:53", "Right, but you can proxy for that just by getting better at doing long-term context tasks. One of the bottlenecks for AI progress that many people identify is the inability of these models to perform tasks on long horizons, engaging with the task for many hours, or even many weeks or months, where they’re an assistant or an employee and they can just do a thing I tell them to do for a while. AI agents haven't taken off for this reason from what I understand.", "So how linked are long context windows, and the ability to perform well on them, and the ability to do these kinds of long-horizon tasks that require you to engage with an assignment for many hours? Or are these unrelated concepts?", "Sholto Douglas 00:06:36", "I would take issue with that being the reason that agents haven't taken off. I think that's more about nines of reliability and the model actually successfully doing things. If you can't chain tasks successively with high enough probability, then you won't get something that looks like an agent. And that's why something like an agent might follow more of a step function.", "In GPT-4 class models, Gemini Ultra class models, they're not enough. But maybe the next increment on model scale means that you get that extra nine. Even though the loss isn't going down that dramatically, that small amount of extra ability gives you the extra. Obviously you need some amount of context to fit long-horizon tasks, but I don't think that's been the limiting factor up to now.", "Trenton Bricken 00:07:16", "The NeurIPS best paper this year, by Rylan Schaeffer who was the lead author, points to this as the emergence of mirage. People will have a task and you get the right or wrong answer depending on if you've sampled the last five tokens correctly. So naturally you're multiplying the probability of sampling all of those and if you don't have enough nines of reliability, then you're not going to get emergence.", "And all of a sudden you do and it's, “oh my gosh, this ability is emergent,” when actually it was kind of there to begin with.", "Sholto Douglas 00:07:47", "And there are ways that you can find a smooth metric for that.", "Dwarkesh Patel 00:07:50", "HumanEval or whatever. In the GPT-4 paper , the coding problems they have, they measure–", "Sholto Douglas 00:07:56", "Log pass rates", "Dwarkesh Patel 00:07:57", "Exactly. For the audience, basically the idea is when you're measuring how much progress there has been on a specific task such as solving coding problems, when it gets it right only one in a thousand times you don't give it a one in a thousand score like, “oh, got it right some of the time.” And so the curve you see is, it gets it right one in a thousand, then one in a hundred, then one in ten, and so forth.", "So I want to follow up on this. If your claim is that the AI agents haven't taken off because of reliability rather than long-horizon task performance, isn't that lack of reliability–when a task is changed on top of another task, on top of another task–isn't that exactly the difficulty with long-horizon tasks? You have to do ten things in a row or a hundred things in a row, diminishing the reliability of any one of them. The probability goes down from 99.99% to 99.9%. Then the whole thing gets multiplied together and the whole thing has become so much less likely to happen.", "Sholto Douglas 00:08:59", "That is exactly the problem.But the key issue you're pointing out there is that your base task solve rate is 90%. If it was 99% then chain, it doesn't become a problem. I think this is also something that just hasn't been properly studied. If you look at the academic evals, it’s a single problem. Like the math problem, it's one typical math problem, it's one university-level problem from across different topics. You were beginning to start to see evals looking at this properly via more complex tasks like SWE-bench , where they take a whole bunch of GitHub issues. That is a reasonably long horizon task, but it's still sub-hour as opposed to a multi-hour or multi-day task.", "So I think one of the things that will be really important to do next is understand better what success rate over long-horizon tasks looks like. I think that's even important to understand what the economic impact of these models might be and properly judge increasing capabilities. Cutting down the tasks and the inputs/outputs involved into minutes or hours or days and seeing how good it is at successively chaining and completing tasks of those different resolutions of time. Then that tells you how automated a job family or task family will be in a way that MMLU scores don't.", "Trenton Bricken 00:10:18", "It was less than a year ago that we introduced 100K context windows and I think everyone was pretty surprised by that. Everyone had this soundbite of, “ quadratic attention costs , so we can't have long context windows.” And here we are. The benchmarks are being actively made.", "Dwarkesh Patel 00:10:36", "Wait, doesn't the fact that there are these companies–Google, Magic , maybe others–who have million token attention imply that it's not quadratic anymore? Or are they just eating the cost?", "Sholto Douglas 00:10:50", "Well, who knows what Google is doing for its long context game? One thing has frustrated me about the general research field's approach to attention. There’s an important way in which the quadratic cost of attention is actually dominated in typical dense transformers by the MLP block. So you have this n squared term that's associated with attention but you also have an n squared term that's associated with the D model, the residual stream dimension of the model.", "I think Sasha Rush has a great tweet where he basically plots the curve of the cost of attention respective to the cost of really large models and attention actually trails off. You actually need to be doing pretty long context before that term becomes really important.", "The second thing is that people often talk about how attention at inference time is such a huge cost. When you're actually generating tokens, the operation is not n squared. One set of Q-vectors looks up a whole bunch of KV-vectors and that's linear with respect to the amount of context that the model has.", "So I think this drives a lot of the recurrence and state space research where people have this meme of linear attention. And as Trenton said, there's a graveyard of ideas around attention. That’s not to say I don't think it's worth exploring, but I think it's important to consider why and where the actual strengths and weaknesses of it are.", "Dwarkesh Patel 00:12:21", "Okay, what do you make of this take? As we move forward through the takeoff , more and more of the learning happens in the forward pass. So originally all the learning happens in the bottom-up, hill climbing evolutionary process. Let’s say during the intelligence explosion the AI is maybe handwriting the weights or doing GOFAI or something, and we're in the middle step where a lot of learning happens in-context now with these models, a lot of it happens within the backward pass. Does this seem like a meaningful gradient along which progress is happening?", "The broader thing being that if you're learning in the forward pass, it's much more sample efficient because you can basically think as you're learning. Like when you read a textbook, you're not just skimming it and trying to absorb inductively, “these words follow these words.” You read it and you think about it, and then you read some more and you think about it some more. Does this seem like a sensible way to think about the progress?", "Sholto Douglas 00:13:23", "It may just be like how birds and planes fly, but they fly slightly differently. The virtue of technology allows us to accomplish things that birds can't. It might be that context length is similar in that it allows it to have a working memory that we can't, but functionally is not the key thing towards actual reasoning.", "The key step between GPT-2 and GPT-3 was that all of a sudden there was this meta-learning behavior that was observed in training, in the pre-training of the model. And that has, as you said, something to do with how if you give it some amount of context, it's able to adapt to that context. That was a behavior that wasn't really observed before that at all. And maybe that's a mixture of property of context and scale and this kind of stuff. But it wouldn't have occurred to model tiny context, I would say.", "Dwarkesh Patel 00:14:09", "This is actually an interesting point. So when we talk about scaling up these models, how much of it comes from just making the models themselves bigger? And how much comes from the fact that during any single call you are using more compute?", "So if you think of diffusion, you can just iteratively keep adding more compute. If adaptive compute is solved, you can keep doing that. And in this case, if there's a quadratic penalty for attention but you're doing long context anyways, then you're still dumping in more compute ( and not just by having bigger models).", "Trenton Bricken 00:14:46", "It's interesting because you do get more forward passes by having more tokens. My one gripe–I guess I have two gripes with this though, maybe three.", "So in the AlphaFold paper , one of the transformer modules–they have a few and the architecture is very intricate–but they do, I think, five forward passes through it and will gradually refine their solution as a result.", "You can also kind of think of the residual stream, Sholto alluded to the read-write operations, as a poor man's adaptive compute. Where it's just going to give you all these layers and if you want to use them, great. If you don't, then that's also fine. Then people will be like, “oh the brain is recurrent and you can do however many loops through it you want.”", "I think to a certain extent, that's right. If I ask you a hard question, you'll spend more time thinking about it and that would correspond to more forward passes. But I think there's a finite number of forward passes that you can do. It’s with language as well, people are like “oh human language can have infinite recursion in it,” like infinite nested statements of “the boy jumped over the bear, that was doing this, that had done this, that had done that…”", "But empirically, you'll only see five to seven levels of recursion, which relates to that magic number of how many things you can hold in working memory at any given time. So it's not infinitely recursive, but does that matter in the regime of human intelligence? And can you not just add more layers?", "00:16:12 - Intelligence is just associations", "Dwarkesh Patel 00:16:12", "Can you break it down for me? You've referred to this in some of your previous answers of listening to these long contexts and holding more things in memory. But ultimately it comes down to your ability to mix concepts together to do some kind of reasoning and these models aren't necessarily human level at that, even in context.", "Break down for me how you see just storing raw information versus reasoning and what's in between. Like, where's the reasoning happening? Where is this raw information storage happening? What's different between them in these models?", "Trenton Bricken 00:16:46", "I don't have a super crisp answer for you here. Obviously with the input and output of the model, you're mapping back to actual tokens. And then in between that you're doing higher level processing.", "Dwarkesh Patel 00:17:01", "Before we get deeper into this, we should explain to the audience. You referred earlier to Anthropic's way of thinking about transformers as these read-write operations that layers do.", "One of you should just kind of explain at a high level what you mean by that.", "Trenton Bricken 00:17:15", "So for the residual stream, imagine you're in a boat going down a river and the boat is the current query where you're trying to predict the next token. So it's “the cat sat on the _____.” And then you have these little streams that are coming off the river where you can get extra passengers or collect extra information if you want. And those correspond to the attention heads and MLPs that are part of the model.", "Sholto Douglas 00:17:41", "I almost think of it like the working memory of the model, like the RAM of the computer, where you're choosing what information to read in so you can do something with it and then maybe read something else in later on.", "Trenton Bricken 00:17:54", "And you can operate on subspaces of that high-dimensional vector. At this point, I think it's almost given that a ton of things are encoded in superposition. So the residual stream is just one high-dimensional vector, but actually there's a ton of different vectors that are packed into it.", "Dwarkesh Patel 00:18:12", "To dumb it down, a way that would have made sense to me a few months ago is that you have the words that are the input into the model. All those words get converted into these tokens and those tokens get converted into these vectors. And basically, it's just this small amount of information that's moving through the model.", "And the way you explained it to me, Sholto, this paper talks about how early on in the model, maybe it's just doing some very basic things about, “what do these tokens mean?” Like if it says ten plus five, just moving information to have that good representation. And in the middle, maybe the deeper thinking is happening about “how to solve this.” At the end, you're converting it back into the output token because the end product is that you're trying to predict the probability of the next token from the last of those residual streams. So it's interesting to think about the small compressed amount of information moving through the model and how it's getting modified in different ways.", "Trenton, you're one of the few people who have a background from neuroscience. So you can think about the analogies here to the brain. And in fact, you had a paper in grad school about thinking about attention in the brain, and one of our friend’s said this is the only, or first, neural explanation of why attention works. Whereas we have evidence for why the CNNs, convolutional neural networks , work based on the visual cortex or something.", "Do you think in the brain there is something like a residual stream of compressed information that's moving through and getting modified as you're thinking about something? Even if that's not what's literally happening, do you think that's a good metaphor for what's happening in the brain?", "Trenton Bricken 00:20:04", "At least in the cerebellum you basically do have a residual stream in what we'll call the attention model for now–and I can go into whatever amount of detail you want for that–where you have inputs that route through it, but they'll also just go directly to the end point that that module will contribute to. So there's a direct path and an indirect path. and, and so the model can pick up whatever information it wants and then add that back in.", "Dwarkesh Patel 00:20:35", "What happens in the cerebellum?", "Trenton Bricken 00:20:37", "So the cerebellum nominally just does fine motor control but I analogize this to the person who's lost their keys and is just looking under the streetlight where it's very easy to observe this behavior. One leading cognitive neuroscientist said to me that a dirty little secret of any fMRI study, where you're looking at brain activity for a given task, is that the cerebellum is almost always active and lighting up for it. If you have a damaged cerebellum, you also are much more likely to have autism so it's associated with social skills. In one particular study, where I think they use PET instead of fMRI, when you're doing “next token prediction” the cerebellum lights up a lot. Also, 70% of your neurons in the brain are in the cerebellum. They're small but they're there and they're taking up real metabolic cost.", "Dwarkesh Patel 00:21:29", "This was one of Gwern ’s points, that what changed with humans was not just that we have more neurons, but specifically there's more neurons in the cerebral cortex in the cerebellum and they're more metabolically expensive and they're more involved in signaling and sending information back and forth. Is that attention? What's going on?", "Trenton Bricken 00:21:52", "So back in the 1980s, Pentti Kanerva came up with an associative memory algorithm . You have a bunch of memories. You want to store them. There's some amount of noise or corruption that's going on and you want to query or retrieve the best match. And so he wrote this equation for how to do it and a few years later realized that if you implemented this as an electrical engineering circuit, it actually looks identical to the core cerebellar circuit.", "And that circuit, and the cerebellum more broadly, is not just in us, it's in basically every organism. There's active debate on whether or not cephalopods have it, they kind of have a different evolutionary trajectory. But even for fruit flies with the Drosophila mushroom body , that is the same cerebellar architecture.", "That convergence and then my paper, which shows that actually this attention operation is a very close approximation, including implementing the Softmax and having these nominal quadratic costs that we've been talking about. So the three way convergence here and the takeoff and success of transformers, just seems pretty striking to me.", "Dwarkesh Patel 00:23:04", "I want to zoom out. I think what motivated this discussion in the beginning was we were talking about, “what is the reasoning? What is the memory? What do you think about the analogy you found to attention and this?”", "Do you think of this more as just looking up the relevant memories or the relevant facts? And if that is the case, where is the reasoning happening in the brain? How do we think about how that builds up into the reasoning?", "Trenton Bricken 00:23:33", "Maybe my hot take here, I don't know how hot it is, is that most intelligence is pattern matching and you can do a lot of really good pattern matching if you have a hierarchy of associative memories. You start with your very basic associations between just objects in the real world. You can then chain those and have more abstract associations, such as a wedding ring symbolizing so many other associations that are downstream. You can even generalize the attention operation and this associated memory as the MLP layer as well. And it's in a long-term setting where you don't have tokens in your current context, but I think this is an argument that association is all you need.", "Associated memory in general as well, you can do two things with it. You can both, denoise or retrieve a current memory. So if I see your face but it's raining and cloudy, I can denoise and gradually update my query towards my memory of your face. But I can also access that memory and then the value that I get out actually points to some other totally different part of the space.", "A very simple instance of this would be if you learn the alphabet. So I query for A and it returns B, I query for B and it returns C, and you can traverse the whole thing.", "Dwarkesh Patel 00:25:02", "One of the things I talked to Demis about was a paper he had in 2008 that memory and imagination are very linked because of this very thing that you mentioned, that memory is reconstructive . So you are, in some sense, imagining every time you're thinking of a memory because you're only storing a condensed version of it and you have to. This is famously why human memory is terrible and why people in the witness box or whatever would just make shit up.", "So let me ask a stupid question. So you read Sherlock Holmes and the guy's incredibly sample efficient. He'll see a few observations and he'll basically figure out who committed the crime because there's a series of deductive steps that leads from somebody's tattoo and what's on the wall to the implications of that. How does that fit into this picture? Because crucially, what makes him smart is that there's not just an association, but there's a sort of deductive connection between different pieces of information. Would you just explain it as, that's just higher level association?", "Trenton Bricken 00:26:11", "I think so. I think learning these higher-level associations to be able to then map patterns to each other, as a kind of meta-learning. I think in this case, he would also just have a really long context length, or a really long working memory, where he can have all of these bits and continuously query them as he's coming up with some theory so that the theory is moving through the residual stream. And then his attention heads are querying his context. But then, how he's projecting his query and keys in the space, and how his MLPs are then retrieving longer-term facts or modifying that information, is allowing him to in later layers do even more sophisticated queries and slowly be able to reason through and come to a meaningful conclusion.", "Sholto Douglas 00:27:00", "That feels right to me. You're looking back in the past. You're selectively reading in certain pieces of information, comparing them, and maybe that informs your next step of what piece of information you now need to pull in. Then you build this representation, which progressively looks closer and closer to the suspect in your case. That doesn't feel at all outlandish.", "Trenton Bricken 00:27:20", "I think that the people who aren't doing this research can overlook how after your first layer of the model, every query key and value that you're using for attention comes from the combination of all the previous tokens.", "So my first layer, I'll query my previous tokens and just extract information from them. But all of a sudden, let's say that I attended to tokens 1, 2, and 4 in equal amounts. Then the vector in my residual stream–assuming that they wrote out the same thing to the value vectors, but, but ignore that for a second–is a third of each of those. So when I'm querying in the future, my query is actually a third of each of those things.", "Sholto Douglas 00:28:03", "But they might be written to different subspaces.", "Trenton Bricken 00:28:05", "That's right. Hypothetically, but they wouldn't have to. You can recombine and immediately, even by layer two and certainly by the deeper layers, just have these very rich vectors that are packing in a ton of information. And the causal graph is literally over every single layer that happened in the past. That's what you're operating on.", "Sholto Douglas 00:28:25", "Yeah, it does bring to mind a very funny eval to do, a Sherlock Holmes eval. You put the entire book into context and then you have a sentence which is, “the suspect is X.” Then you have a larger probability distribution over the different characters in the book.", "Trenton Bricken 00:28:41", "That would be super cool.", "Sholto Douglas 00:28:44", "I wonder if you'd get anything at all.", "Dwarkesh Patel 00:28:47", "Sherlock Holmes is probably already in the training data. You gotta get a mystery novel that was written in the–", "Trenton Bricken 00:28:52", "You can get an LLM to write it.", "Sholto Douglas 00:28:53", "Or we could purposely exclude it, right?", "Dwarkesh Patel 00:28:56", "Oh, we can? How do you?", "Trenton Bricken 00:28:57", "Well, you need to scrape any discussion of it from Reddit or any other thing.", "Sholto Douglas 00:29:00", "Right, it's hard. That's one of the challenges that goes into things like long-context evals, getting a good one. You need to know that it's not in your training data. You just put in the effort to exclude it.", "Dwarkesh Patel 00:29:10", "There's two different threads I want to follow up on. Let's go to the long-context one and then we'll come back to this. In the Gemini 1.5 paper the eval that was used was can it remember something like Paul Graham’s essays .", "Sholto Douglas 00:29:28", "Yeah, the needle in a haystack .", "Dwarkesh Patel 00:29:30", "I mean, we don't necessarily just care about its ability to recall one specific fact from the context.", "I'll step back and ask the question. The loss function for these models is unsupervised. You don't have to come up with these bespoke things that you keep out of the training data.", "Is there a way you can do a benchmark that's also unsupervised, where another LLM is rating it in some way or something like that. Maybe the answer is that if you could do this, reinforcement learning would work.", "Sholto Douglas 00:30:05", "I think people have explored that kind of stuff. For example, Anthropic has the constitutional RL paper where they take another language model and they point it and say, “how helpful or harmless was that response?” Then they get it to update and try and improve along the Pareto frontier of helpfulness and harmfulness.", "So you can point language models at each other and create evals in this way. It's obviously an imperfect art form at the moment. because you get reward function hacking basically. Even humans are imperfect here. Humans typically prefer longer answers, which aren't necessarily better answers and you get the same behavior with models.", "Dwarkesh Patel 00:30:48", "Going back to the Sherlock Holmes thing, if it's all associations all the way down, does that mean we should be less worried about super intelligence? Because there's not this sense in which it's like Sherlock Holmes++. It'll still need to just find these associations, like humans find associations. It's not able to just see a frame of the world and then it's figured out all the laws of physics.", "Trenton Bricken 00:31:20", "This is a very legitimate response.It's, “if you say humans are generally intelligent, then artificial general intelligence is no more capable or competent.” I'm just worried that you have that level of general intelligence in silicon. You can then immediately clone hundreds of thousands of agents and they don't need to sleep, and they can have super long context windows, and then they can start recursively improving, and then things get really scary. So I think to answer your original question, you're right, they would still need to learn associations.", "Dwarkesh Patel 00:31:52", "But wait, if intelligence is fundamentally about these associations, the recursive self-improvement is just them getting better at association. There's not another thing that's happening. So then it seems like you might disagree with the intuition that they can't be that much more powerful, if they're just doing that.", "Trenton Bricken 00:32:11", "I think then you can get into really interesting cases of meta-learning. When you play a new video game or study a new textbook, you're bringing a whole bunch of skills to the table to form those associations much more quickly. And because everything in some way ties back to the physical world, I think there are general features that you can pick up and then apply in novel circumstances.", "00:32:35 - Intelligence explosion & great researchers", "Dwarkesh Patel 00:32:35", "Should we talk about the intelligence explosion then? The reason I'm interested in discussing this with you guys in particular is that the models of the intelligence explosion we have so far come from economists.", "That’s fine but I think we can do better because in the model of the intelligence explosion, what happens is you replace the AI researchers. There's a bunch of automated AI researchers who can speed up progress, make more AI researchers, and make further progress. If that's the mechanism, we should just ask the AI researchers whether they think this is plausible. So let me just ask you, if I have a thousand agent Sholtos or agent Trentons, do you think that you get an intelligence explosion? What does that look like to you?", "Sholto Douglas 00:33:32", "I think one of the important bounding constraints here is compute. I do think you could dramatically speed up AI research. It seems very clear to me that in the next couple of years, we'll have things that can do many of the software engineering tasks that I do on a day to day basis, and therefore dramatically speed up my work, and therefore speed up the rate of progress.", "At the moment, I think most of the labs are somewhat compute bound in that there are always more experiments you could run and more pieces of information that you could gain in the same way that scientific research on biology is somewhat experimentally throughput-bound. You need to run and culture the cells in order to get the information.", "I think that will be at least a short term planning constraint. Obviously, Sam's trying to raise $7 trillion to buy chips and it does seem like there's going to be a lot more compute in the future as everyone is heavily ramping. NVIDIA 's stock price sort of represents the relative compute increase. Any thoughts?", "Trenton Bricken 00:34:36", "I think we need a few more nines of reliability in order for it to be really useful and trustworthy. And we need context lengths that are super long and very cheap to have. If I'm working in our code base, it's really only small modules that I can get Claude to write for me right now. But it's very plausible that within the next few years, or even sooner, it can automate most of my tasks.", "The only other thing here that I will note is that the research our interpretability subteam is working on is so early-stage. You really have to be able to make sure everything is done correctly in a bug-free way and contextualize the results with everything else in the model. If something isn't going right, you have to be able to enumerate all of the possible things, and then slowly work on those.", "An example that we've publicly talked about in previous papers is dealing with layer norm . If I'm trying to get an early result or look at the logit effects of the model, if I activate this feature that we've identified to a really large degree, how does that change the output of the model? Am I using layer norm or not? How is that changing the feature that's being learned? That will take even more context or reasoning abilities for the model.", "Dwarkesh Patel 00:36:04", "You used a couple of concepts together. It's not self-evident to me that they're the same but it seemed like you were using them interchangeably. One was working on the Claude code base and making more modules based on that, they need more context or something. It seems like they might already be able to fit in the context or do you mean context like “the context window?”", "Trenton Bricken 00:36:30", "Yeah, the “context window” context.", "Dwarkesh Patel 00:36:32", "So it seems like the thing that's preventing it from making good modules is not the lack of being able to put the code base in there.", "Trenton Bricken 00:36:39", "I think that will be there soon.", "Dwarkesh Patel 00:36:41", "But it's not going to be as good as you at coming up with papers because it can fit the code base in there.", "Trenton Bricken 00:36:46", "No, but it will speed up a lot of the engineering.", "Dwarkesh Patel 00:36:48", "In a way that causes an intelligence explosion?", "Trenton Bricken 00:36:53", "No, in a way that accelerates research. But I think these things compound. The faster I can do my engineering, the more experiments I can run. And the more experiments I can run, the faster we can… I mean, my work isn't actually accelerating capabilities at all, it's just interpreting the models. But we have a lot more work to do on that. surprise to the Twitter guy,", "Dwarkesh Patel 00:37:14", "For context, when you released your paper , there was a lot of talk on Twitter like, “alignment is solved guys. Close the curtains.”", "Trenton Bricken 00:37:24", "Yeah, no it keeps me up at night how quickly the models are becoming more capable and just how poor our understanding of what's going on still is.", "Dwarkesh Patel 00:37:36", "Let's run through the specifics here. By the time this is happening, we have bigger models that are two to four orders of magnitude bigger, or at least an effective compute two to four orders of magnitude bigger. So this idea that you can run experiments faster, you're having to retrain that model in this version of the intelligence explosion. The recursive self-improvement is different from what might've been imagined 20 years ago, where you just rewrite the code. You actually have to train a new model and that's really expensive.", "Not only now, but especially in the future, as you keep making these models orders of magnitude bigger. Doesn't that dampen the possibility of a recursive self-improvement type of intelligence explosion?", "Sholto Douglas 00:38:25", "It's definitely going to act as a breaking mechanism. I agree that the world of what we're making today looks very different from what people imagined it would look like 20 years ago. It's not going to be able to write the same code to be really smart, because actually it needs to train itself. The code itself is typically quite simple, typically really small and self contained.", "I think John Carmack had this nice phrase where it's the first time in history where you can plausibly imagine writing AI with 10,000 lines of code. That actually does seem plausible when you pare most training codebases down to the limit. But it doesn't take away from the fact that this is something where we should really strive to measure and estimate how progress might be.", "We should be trying very, very hard to measure exactly how much of a software engineer's job is automatable, and what the trend line looks like, and be trying our hardest to project out those trend lines.", "Dwarkesh Patel 00:39:21", "But with all due respect to software engineers you are not writing like a React front-end right?", "What is concretely happening? Maybe you can walk me through a day in the life of Sholto. You're working on an experiment or project that's going to make the model \"better.” What is happening from observation to experiment, to theory, to writing the code? What is happening?", "Sholto Douglas 00:39:48", "I think it’s important to contextualize here that I've primarily worked on inference so far. A lot of what I've been doing is just helping guide the pre-training process, designing a good model for inference and then making the model and the surrounding system faster. I've also done some pre-training work around that, but it hasn't been my 100% focus. I can still describe what I do when I do that work.", "Dwarkesh Patel 00:40:09", "Sorry, let me interrupt. When Carl Shulman was talking about it on the podcast , he did say that things like improving inference or even literally making better chips or GPUs, that’s part of the intelligence explosion. Obviously if the inference code runs faster, it happens better or faster or whatever. Sorry, go ahead.", "Sholto Douglas 00:40:32", "So concretely, what does a day look like? I think the most important part to illustrate is this cycle of coming up with an idea, proving it out at different points in scale, and interpreting and understanding what goes wrong. I think most people would be surprised to learn just how much goes into interpreting and understanding what goes wrong.", "People have long lists of ideas that they want to try. Not every idea that you think should work, will work. Trying to understand why that is is quite difficult and working out what exactly you need to do to interrogate it. So a lot of it is introspection about what's going on. It's not pumping out thousands and thousands and thousands of lines of code. It's not the difficulty in coming up with ideas. Many people have a long list of ideas that they want to try, but paring that down and shot calling, under very imperfect information, what are the right ideas to explore further is really hard.", "Dwarkesh Patel 00:41:32", "What do you mean by imperfect information? Are these early experiments? What is the information?", "Sholto Douglas 00:41:40", "Demis mentioned this in his podcast. It's like the GPT-4 paper where you have scaling law increments. You can see in the GPT-4 paper, they have a bunch of dots, right?", "They say we can estimate the performance of our final model using all of these dots and there's a nice curve that flows through them. And Demis mentioned that we do this process of scaling up.", "Concretely, why is that imperfect information? It’s because you never actually know if the trend will hold. For certain architectures the trend has held really well. And for certain changes, it's held really well. But that isn't always the case. And things which can help at smaller scales can actually hurt at larger scales. You have to make guesses based on what the trend lines look like and based on your intuitive feeling of what’s actually something that's going to matter, particularly for those which help with the small scale.", "Dwarkesh Patel 00:42:35", "That's interesting to consider. For every chart you see in a release paper or technical report that shows that smooth curve, there's a graveyard of first few runs and then it's flat.", "Sholto Douglas 00:42:45", "Yeah. There's all these other lines that go in different directions. You just tail off.", "Trenton Bricken 00:42:50", "It's crazy, both as a grad student and here, the number of experiments that you have to run before getting a meaningful result.", "Dwarkesh Patel 00:42:57", "But presumably it's not just like you run it until it stops and then go to the next thing. There's some process by which to interpret the early data. I don't know. I could put a Google Doc in front of you and I'm pretty sure you could just keep typing for a while on different ideas you have. There's some bottleneck between that and just making the models better immediately. Walk me through that. What is the inference you're making from the first early steps that makes you have better experiments and better ideas?", "Sholto Douglas 00:43:30", "I think one thing that I didn't fully convey before was that I think a lot of like good research comes from working backwards from the actual problems that you want to solve. There's a couple of grand problems today in making the models better that you would identify as issues and then work on how can I change things to achieve this? When you scale you also run into a bunch of things and you want to fix behaviors and issues at scale. And that informs a lot of the research for the next increment and this kind of stuff.", "Concretely, the barrier is a little bit of software engineering, having a code base that's large and capable enough that it can support many people doing research at the same time often makes it complex. If you're doing everything by yourself, your iteration pace is going to be much faster. Alec Radford , for example, famously did much of the pioneering work at OpenAI. I’ve heard he mostly works out of a Jupyter notebook and then has someone else who writes and productionizes that code for him. Actually operating with other people raises the complexity a lot, for natural reasons familiar to every software engineer and also the inherent running. Running and launching those experiments is easy but there's inherent slowdowns induced by that. So you often want to be parallelizing multiple different streams. You can't be totally focused on one thing necessarily. You might not have fast enough feedback cycles. And then intuiting what went wrong is actually really hard.", "This is in many respects, the problem that the team that Trenton is on is trying to better understand. What is going on inside these models? We have inferences and understanding and headcanon for why certain things work, but it's not an exact science. and so you have to constantly be making guesses about why something might have happened, what experiment might reveal, whether that is or isn't true. That's probably the most complex part.", "The performance work is comparatively easier but harder in other respects. It's just a lot of low-level and difficult engineering work.", "Trenton Bricken 00:45:38", "I agree with a lot of that. Even on the interpretability team, especially with Chris Olah leading it, there are just so many ideas that we want to test and it's really just having the “engineering” skill–a lot of it is research–to very quickly iterate on an experiment, look at the results, interpret it, try the next thing, communicate them, and then just ruthlessly prioritizing what the highest priority things to do are.", "Sholto Douglas 00:46:07", "This is really important. The ruthless prioritization is something which I think separates a lot of quality research from research that doesn't necessarily succeed as much. We're in this funny field where so much of our initial theoretical understanding is broken down basically. So you need to have this simplicity bias and ruthless prioritization over what's actually going wrong. I think that's one of the things that separates the most effective people. They don't necessarily get too attached to using a given sort of solution that they are familiar with, but rather they attack the problem directly.", "You see this a lot in people who come in with a specific academic background. They try to solve problems with that toolbox but the best people are people who expand the toolbox dramatically. They're running around and they're taking ideas from reinforcement learning, but also from optimization theory. And also they have a great understanding of systems. So they know what the sort of constraints that bound the problem are and they're good engineers. They can iterate and try ideas fast. By far the best researchers I've seen, they all have the ability to try experiments really, really, really, really, really fast. That’s cycle time at smaller scales. Cycle time separates people.", "Trenton Bricken 00:47:20", "Machine learning research is just so empirical. This is honestly one reason why I think our solutions might end up looking more brain-like than otherwise. Even though we wouldn't want to admit it, the whole community is kind of doing greedy evolutionary optimization over the landscape of possible AI architectures and everything else. It’s no better than evolution. And that’s not even a slight against evolution.", "Dwarkesh Patel 00:47:46", "That's such an interesting idea. I'm still confused on what will be the bottleneck. What would have to be true of an agent such that it sped up your research? So in the Alec Radford example where he apparently already has the equivalent of Copilot for his Jupyter notebook experiments, is it just that if he had enough of those he would be a dramatically faster researcher?", "So you're not automating the humans, you're just making the most effective researchers who have great taste, more effective and running the experiments for them? You're still working at the point at which the intelligence explosion is happening? Is that what you're saying?", "Sholto Douglas 00:48:27", "Right, and if that were directly true then why can't we scale our current research teams better? I think that’s an interesting question to ask. If this work is so valuable, why can't we take hundreds or thousands of people–they're definitely out there–and scale our organizations better.", "I think we are less, at the moment, bound by the sheer engineering work of making these things than we are by compute to run and get signal, and taste in terms of what the actual right thing to do is. And then making those difficult inferences on imperfect information,", "Trenton Bricken 00:49:13", "For the Gemini team. Because I think for interpretability, we actually really want to keep hiring talented engineers. I think that's a big bottleneck for us.", "Sholto Douglas 00:49:23", "Obviously more people are better. But I do think it's interesting to consider. One of the biggest challenges that I've thought a lot about is how do we scale better? Google is an enormous organization. It has 200,000-ish people, right? Maybe 180,000 or something like that. One has to imagine ways of scaling out Gemini's research program to all those fantastically talented software engineers. This seems like a key advantage that you would want to be able to take advantage of. You want to be able to use it but how do you effectively do that? It's a very complex organizational problem.", "Dwarkesh Patel 00:50:02", "So compute and taste. That's interesting to think about because at least the compute part is not bottlenecked on more intelligence, it's just bottlenecked on Sam's $7 trillion or whatever, right? If I gave you 10x the H100s to run your experiments, how much more effective a researcher are you?", "Sholto Douglas 00:50:20", "TPUs, please.", "Dwarkesh Patel 00:50:23", "How much more effective a researcher are you?", "Sholto Douglas 00:50:26", "I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that.", "Dwarkesh Patel 00:50:35", "So that's pretty good. Elasticity of 0.5. Wait, that's insane.", "Sholto Douglas 00:50:39", "I think more compute would just directly convert into progress.", "Dwarkesh Patel 00:50:43", "So you have some fixed size of compute and some of it goes to inference and also to clients of GCP . Some of it goes to training and from there, as a fraction of it, some of it goes to running the experiments for the full model.", "Sholto Douglas 00:51:04", "Yeah, that's right.", "Dwarkesh Patel 00:51:05", "Shouldn't the fraction that goes experiments then be higher given research is bottlenecked by compute.", "Sholto Douglas 00:51:13", "So one of the strategic decisions that every pre-training team has to make is exactly what amount of compute do you allocate to different training runs, to your research program versus scaling the last best thing that you landed on. They're all trying to arrive at an optimal point here. One of the reasons why you need to still keep training big models is that you get information there that you don't get otherwise. So scale has all these emergent properties which you want to understand better.", "Remember what I said before about not being sure what's going to fall off the curve. If you keep doing research in this regime and keep on getting more and more compute efficient, you may have actually gone off the path to actually eventually scale. So you need to constantly be investing in doing big runs too, at the frontier of what you sort of expect to work.", "Dwarkesh Patel 00:52:17", "So then tell me what it looks like to be in the world where AI has significantly sped up AI research. Because from this, it doesn't really sound like the AIs are going off and writing the code from scratch that's leading to faster output. It sounds like they're really augmenting the top researchers in some way. Tell me concretely. Are they doing the experiments? Are they coming up with the ideas? Are they just evaluating the outputs of the experiments? What's happening?", "Sholto Douglas 00:52:39", "So I think there's two walls you need to consider here. One is where AI has meaningfully sped up our ability to make algorithmic progress. And one is where the output of the AI itself is the thing that's the crucial ingredient towards model capability progress. Specifically what I mean there is synthetic data. In the first world, where it's meaningfully speeding up algorithmic progress, I think a necessary component of that is more compute. You've probably reached this elasticity point where AIs are easier to speed up and get on to context than yourself, or other people. So AIs meaningfully speed up your work because they're basically a fantastic Copilot that helps you code multiple times faster.", "That seems actually quite reasonable. Super long-context, super smart model. It's onboarded immediately and you can send them off to complete subtasks and subgoals for you. That actually feels very plausible, but again we don't know because there are no great evals about that kind of thing. As I said before, the best one is SWE-bench.", "Dwarkesh Patel 00:53:51", "Somebody was mentioning to me that the problem with that one is that when a human is trying to do a pull request, they'll type something out and they'll run it and see if it works. If it doesn't, they'll rewrite it. None of this was part of the opportunities that the LLM was given when told “run on this.” Just output and if it runs and checks all the boxes then it passed. So it might've been an unfair test in that way.", "Sholto Douglas 00:54:16", "So you can imagine that if you were able to use that, that would be an effective training source. The key thing that's missing from a lot of training data is the reasoning traces, right?", "And I think this would be it. If I wanted to try and automate a specific field, a job family, or understand how at risk of automation that specific field is, then having reasoning traces feels to me like a really important part of that.", "Dwarkesh Patel 00:54:51", "There's so many different threads there I want to follow up on. Let's begin with the data versus compute thing. Is the output of the AI the thing that's causing the intelligence explosion? People talk about how these models are really a reflection on their data. I forgot his name but there was a great blog by this OpenAI engineer. It was talking about how at the end of the day, as these models get better and better, there are just going to be really effective maps of the data set. So at the end of the day you have to stop thinking about architectures. The most effective architecture is just, “do you do an amazing job of mapping the data?” So that implies that the future AI progress comes from the AI just making really awesome data that you’re mapping to?", "Sholto Douglas 00:55:45", "That's clearly a very important part .", "Dwarkesh Patel 00:55:46", "That's really interesting. Does that look to you like chain-of-thought ? Or what would you imagine as these models get better, as these models get smarter? What does the synthetic data look like?", "Sholto Douglas 00:56:00", "When I think of really good data, to me, that raises something which involved a lot of reasoning to create. It's similar to Ilya 's perspective on achieving super intelligence effectively via perfectly modeling human textual output. But even in the near term, in order to model something like the arXiv papers or Wikipedia, you have to have an incredible amount of reasoning behind you in order to understand what next token might be output.", "So for me, what I imagine as good data is data where it had to do reasoning to produce something. And then the trick of course is how do you verify that that reasoning was correct? This is why you saw DeepMind do that research for geometry . Geometry is an easily formalizable, easily verifiable field. You can check if its reasoning was correct and you can generate heaps of data of correct trig, of verified geometry proofs, and train on that. And you know that that's good data.", "Dwarkesh Patel 00:57:11", "It's actually funny because I had a conversation with Grant Sanderson last year where we were debating this and I was like, “fuck dude, by the time they get the gold of the Math Olympiad, of course they're going to automate all the jobs.” Yikes.", "On synthetic data, there’s a thing I speculated about in my scaling post , which was heavily informed by discussions with you two and you especially, Sholto. You can think of human evolution through the spectrum of getting language and so we're generating the synthetic data. Our copies are generating the synthetic data which we're trained on and it's this really effective genetics, cultural, co-evolutionary loop.", "Sholto Douglas 00:57:54", "And there's a verifier there too, right? There's the real world. You might generate a theory about the gods causing the storms, And then someone else finds cases where that isn't true. And so that sort of didn't match your verification function. Now instead you have some weather simulation which required a lot of reasoning to produce and accurately matches reality. And now you can train on that as a better model of the world. Like we are training on that, and stories, and like scientific theories.", "Dwarkesh Patel 00:58:27", "I want to go back. I'm just remembering something you mentioned a little while ago how given how empirical ML is, it really is an evolutionary process resulting in better performance and not necessarily an individual coming up with a breakthrough in a top-down way. That has interesting implications.", "First, people are concerned about capabilities increasing because more people are going into the field. I've been somewhat skeptical of that way of thinking, but from this perspective of just more input, it really does feel like more people going to ICML means that there's faster progress towards GPT-5.", "Trenton Bricken 00:59:13", "You just have more genetic recombination. And shots on target.", "Sholto Douglas 00:59:17", "I mean, aren't all fields kind of like that? This is sort of the scientific framing of discovery versus invention, right? Discovery almost involves whenever there's been a massive scientific breakthrough in the past. Typically there are multiple people co-discovering a thing at roughly the same time. That feels to me, at least a little bit, like the mixing and trying of ideas. You can't try an idea that's so far out of scope that you have no way of verifying with the tools you have available.", "Trenton Bricken 00:59:45", "I think physics and math might be slightly different in this regard. But especially for biology or any sort of wetware, to the extent we want to analogize neural networks here, it's just comical how serendipitous a lot of the discoveries are. Penicillin, for example.", "Dwarkesh Patel 01:00:01", "Another implication of this is the idea that AGI is just going to come tomorrow. Somebody's just going to discover a new algorithm and we have AGI. That seems less plausible. It will just be a matter of more and more and more researchers finding these marginal things that all add up together to make models better.", "Sholto Douglas 01:00:19", "Right. That feels like the correct story to me.", "Trenton Bricken 01:00:23", "Especially while we're still hardware constrained.", "Dwarkesh Patel 01:00:25", "Right. Do you buy this narrow window framing of the intelligence explosion? Each GPT-3, GPT-4 is two OOMs, orders of magnitude, more compute or at least more effective compute. In the sense that, if you didn't have any algorithmic progress, it would have to be two orders of magnitude bigger, the raw form, to be as good. Do you buy the framing that, given that you have to be two orders of magnitude bigger at every generation, if you don't get AGI by GPT-7 that can help you catapult an intelligence explosion, you're kind of just fucked as far as much smarter intelligence goes. You're kind of stuck with GPT-7 level models for a long time because at that point you're consuming significant fractions of the economy to make that model and we just don't have the wherewithal to make GPT 8.", "Trenton Bricken 01:01:19", "This is the Carl Shulman sort of argument that we're going to race through the orders of magnitude in the near term, but then in the longer term it would be harder.", "Dwarkesh Patel 01:01:28", "He's probably talked about it a lot but I do buy that framing.", "Sholto Douglas 01:01:33", "I generally buy that. Increases in order of magnitude of compute means in absolute terms, almost diminishing returns on capability, right? We've seen over a couple of orders of magnitude, models go from being unable to do anything to being able to do huge amounts.", "It feels to me that each incremental order of magnitude gives more nines of reliability at things. So it unlocks things like agents. But at least at the moment, it doesn't feel like reasoning improves linearly, but rather somewhat sublinearly.", "Dwarkesh Patel 01:02:04", "That's actually a very bearish sign. We were chatting with one of our friends and he made the point that if you look at what new applications are unlocked by GPT-4 relative to GPT-3.5, it's not clear that it’s that much more. A GPT-3.5 can do perplexity or whatever. So if there’s this diminishing increase in capabilities and that costs exponentially more to get, that's actually a bearish sign on what 4.5 will be able to do or what 5 will unlock in terms of economic impact.", "Sholto Douglas 01:02:37", "That being said, for me the jump between 3.5 and 4 is pretty huge. So another 3.5 to 4 jump is ridiculous. If you imagine 5 as being a 3.5 to 4 jump, straight off the bat in terms of ability to do SATs and this kind of stuff.", "Trenton Bricken 01:02:53", "Yeah, the LSAT performance was particularly striking.", "Sholto Douglas 01:02:55", "Exactly. You go from not super smart to very smart to utter genius in the next generation instantly. And it doesn't, at least to me, feel like we're going to jump to utter genius in the next generation, but it does feel like we'll get very smart plus lots of reliability. TBD what that continues to look like.", "Dwarkesh Patel 01:03:20", "Will GOFAI be part of the intelligence explosion? You talked about synthetic data, but in fact it would be writing its own source code in some important way. There was an interesting paper that you can use diffusion to come up with model weights. I don't know how legit that was or whatever, but something like that.", "Trenton Bricken 01:03:41", "So GOFAI is good old-fashioned AI, right? Can you define that? Because when I hear it, I think “if else” statements for symbolic logic.", "Sholto Douglas 01:03:53", "I actually want to make sure we fully unpack the model improvement increments. I don't want people to come away with the perspective that this is super bearish and models aren't going to get much better. I want to emphasize that the jumps that we've seen so far are huge. Even if those continue on a smaller scale, we're still in for extremely smart, very reliable agents over the next couple of orders of magnitude.", "We didn't fully close the thread on the narrow window thing. Let's say GPT-4 cost a hundred million dollars or whatever. You have the 1B run, 10B run, 100B run. All seem very plausible by private company standards.", "Trenton Bricken 01:04:41", "You mean in terms of dollars?", "Sholto Douglas 01:04:42", "In terms of dollar amount. You can also imagine even a 1T run being part of a national consortium, on a national level but much harder on behalf of an individual company. But Sam is out there trying to raise $7 trillion, right? He's already preparing for a whole lot of magnitude.", "Trenton Bricken 01:05:02", "He's shifted the Overton window.", "Sholto Douglas 01:05:03", "He's shifting the magnitude here beyond the national level. So I want to point out that we have a lot more jumps. Even if those jumps are relatively smaller, that's still a pretty stark improvement in capability.", "Trenton Bricken 01:05:18", "Not only that, but if you believe claims that GPT-4 is around 1 trillion parameter count, well the human brain is between 30 and 300 trillion synapses. That's obviously not a one-to-one mapping and we can debate the numbers, but it seems pretty plausible that we're below brain scale still.", "Dwarkesh Patel 01:05:37", "So crucially, the point is that the algorithmic overhead is really high. Maybe this is something we should touch on explicitly. Even if you can't keep dumping more compute beyond the models that cost a trillion dollars or something, the fact that the brain is so much more data efficient implies that if we have the compute, if we have the brain's algorithm to train, if you could train as a sample efficient as humans train from birth, then we could make the AGI.", "Trenton Bricken 01:06:09", "I never know exactly how to think about the sample efficiency stuff because obviously a lot of things are hardwired in certain ways. They're the coevolution of language and the brain structure. So it's hard to say. There are also some results that indicate that if you make your model bigger, it becomes more sample efficient.", "Sholto Douglas 01:06:29", "The original scaling laws paper , right? The logic model is almost empty.", "Trenton Bricken 01:06:33", "Right. So maybe that just solves it. You don't have to be more data efficient, but if your model is bigger then you also just are more efficient.", "Dwarkesh Patel 01:06:42", "What is the explanation for why that would be the case? A  bigger model sees these exact same data and at the end of seeing that data it learns more from it? Does it have more space to represent it?", "01:06:52 - Superposition & secret communication", "Trenton Bricken 01:06:52", "This is my very naive take here. One thing about the superposition hypothesis that interpretability has pushed is that your model is dramatically underparameterized and that's typically not the narrative that deep learning has pursued, right? But if you're trying to train a model on the entire internet and have it predict with incredible fidelity, you are in the underparameterized regime and you're having to compress a ton of things and take on a lot of noisy interference in doing so. When you have a bigger model, you can have cleaner representations to work with.", "Dwarkesh Patel 01:07:25", "For the audience, you should unpack that. Why that first of all? What is superposition and why is that an implication of superposition?", "Trenton Bricken 01:07:32", "Sure. This was before I joined Anthropic. The fundamental result is from a paper titled “ Toy Models of Superposition .” It finds that even for small models, if you are in a regime where your data is high-dimensional and sparse–by sparse I mean, any given data point doesn't appear very often–your model will learn a compression strategy that we call superposition so that it can pack more features of the world into it than it has parameters.", "I think both of these constraints apply to the real world, and modeling internet data is a good enough proxy for that. There's only one Dwarkesh. There's only one shirt you're wearing. There's this Liquid Death can here. These are all objects or features and how you define a feature is tricky. You're in a really high-dimensional space because there's so many of them and they appear very infrequently. In that regime, your model will learn compression", "To riff a little bit more on this, I believe that the reason networks are so hard to interpret is in a large part because of this superposition. If you take a model and you look at a given neuron in it, a given unit of computation, and you ask, “how is this neuron contributing to the output of the model when it fires?” When you look at the data that it fires for, it's very confusing. It'll be like ten percent of every possible input. It’ll fire for “Chinese” but also “fish” and “trees”, and the full stop in URLs.", "But the paper that we put out last year, “ Towards Monosemanticity ,” shows that if you project the activations into a higher-dimensional space and provide a sparsity penalty, you get out very clean features and things all of a sudden start to make a lot more sense. You can think of this as undoing the compression in the same way that you assumed your data was originally high-dimensional and sparse. You return it to that high-dimensional and sparse regime.", "Dwarkesh Patel 01:09:36", "There's so many interesting threads there. First thing, you mentioned that these models are trained in a regime where they're overparameterized. Isn't that when you have generalization, like grokking happens in that regime?", "Trenton Bricken 01:09:57", "I was saying the models were underparameterized. Typically people talk about deep learning as if the model were overparameterized. The claim here is that they're dramatically underparameterized, given the complexity of the task that they're trying to perform.", "Dwarkesh Patel 01:10:14", "Here’s another question. So what is happening with the distilled models ? The earlier claims we were talking about is that smaller models are worse at learning than bigger models, but you could make the claim that GPT-4 Turbo is actually worse at reasoning style stuff than GPT-4 despite probably knowing the same facts. The distillation got rid of some of the reasoning.", "Sholto Douglas 01:10:44", "Do we have any evidence that GPT-4 Turbo is a distilled version of 4? It might just be a new architecture. It could just be a faster, more efficient new architecture.", "Dwarkesh Patel 01:10:53", "Okay. Interesting.", "Sholto Douglas 01:10:54", "So that's cheaper.", "Dwarkesh Patel 01:10:56", "How do you interpret what's happening in distillation? I think Gwern had one of these questions on his website. Why can't you train the distilled model directly? Why is it a picture you had to project from this bigger space to a smaller space?", "Trenton Bricken 01:11:14", "I think both models will still be using superposition. The claim here is that you get a very different model if you distill versus if you train from scratch and it's just more efficient, or it's just fundamentally different, in terms of performance.", "Sholto Douglas 01:11:32", "I think the traditional story for why distillation is more efficient is during training, normally you're trying to predict this one hot vector that says, “this is the token that you should have predicted.” If your reasoning process means that you're really far off from predicting that, then I see that you still get these gradient updates that are in the right direction. But it might be really hard for you to learn to predict that in the context that you're in.", "What distillation does is it doesn't just have the one hot vector. It has the full readout from the larger model, all of the probabilities. So you get more signal about what you should have predicted. In some respects it's showing a tiny bit of your work too. It's not just like, “this was the answer.”", "Trenton Bricken 01:12:20", "It's kind of like watching a kung fu master versus being in the Matrix and just downloading.", "Sholto Douglas 01:12:24", "Yeah, exactly.", "Dwarkesh Patel 01:12:27", "I want to make sure the audience got that. When you're turning on a distilled model you see all its probabilities over the tokens it was predicting and over the ones you were predicting, and then you update through all those probabilities rather than just seeing the last word and updating on that.", "This actually raises a question I was intending to ask you. I think you were the one who mentioned that you can think of chain-of-thought as adaptive compute. The idea of adaptive compute is that if a question is harder, you would want models to be able to spend more cycles thinking about it. So how do you do that? There's only a finite and predetermined amount of compute that one forward pass implies. If there's a complicated reasoning type question or math problem, you want to be able to spend a long time thinking about it. Then you do chain-of-thought where the model just thinks through the answer. You can think about it as all those forward passes where it's thinking through the answer. It's being able to dump more compute into solving the problem.", "Now let’s go back to the signal thing. When it's doing chain-of-thought, it's only able to transmit that token of information where the residual stream is already a compressed representation of everything that's happening in the model. And then you're turning the residual stream into one token which is like log of 50,000 (or log of vocab_size) bits, which is so tiny.", "Sholto Douglas 01:14:04", "I don't think it's quite only transmitting that one token. If you think about it during a forward pass, you create these KV values in the transformer forward pass and then future steps attend to the KV values. So all of those pieces of KV, of keys and values, are bits of information that you could use in the future.", "Dwarkesh Patel 01:14:26", "Is the claim that when you fine-tune on chain-of-thought, the key and value weights change so that the sort of steganography can happen in the KV cache ?", "Sholto Douglas 01:14:39", "I don't think I could make that strong a claim there, but that's a good headcanon for why it works. I don't know if there are any papers explicitly demonstrating that or anything like that.", "But that's at least one way that you can imagine the model. During pre-training, the model's trying to predict these future tokens and one thing that you can imagine it doing is that it’s learning to smush information about potential futures into the keys and values that it might want to use in order to predict future information.", "It kind of smooths that information across time and the pre-training thing. So I don't know if people are particularly training on chains-of-thought. I think the original chain-of-thought paper had that as almost an immersion property of the model. You could prompt it to do this kind of stuff and it still worked pretty well. So it’s a good headcanon for why that works.", "Trenton Bricken 01:15:35", "To be overly pedantic here, the tokens that you actually see in the chain-of-thought do not necessarily at all need to correspond to the vector representation that the model gets to see when it's deciding to attend back to those tokens.", "Sholto Douglas 01:15:49", "What a training step is is you actually replacing the token, the model output, with the real next token. Yet it's still learning because it has all this information, internally. When you're getting a model to produce at inference time, you're taking the output, the token, and you're feeding it in the bottom, un-embedding it, and it becomes the beginning of the new residual string. Then you use the output of past KVs to read into and adapt that residual string. At training time you do this thing called teacher forcing basically where you're like, “actually, the token you were meant to output is this one.”", "That's how you do it in parallel. You have all the tokens. You put them all in parallel and you do the giant forward pass. So the only information it's getting about the past is the keys and values. It never sees the token that it outputs.", "Trenton Bricken 01:16:42", "It's trying to do the next token prediction and if it messes up, then you just give it the correct answer.", "Dwarkesh Patel 01:16:48", "Okay, that makes sense.", "Trenton Bricken 01:16:50", "Otherwise it can become totally derailed.", "Sholto Douglas 01:16:52", "Yeah. It'd go off the tracks.", "Dwarkesh Patel 01:16:55", "About the sort of secret communication with the model to its forward inferences, how much steganography and secret communication do you expect there to be?", "Sholto Douglas 01:17:10", "We don't know. The honest answer is we don't know. I wouldn't even necessarily classify it as secret information. A lot of the work that Trenton's team is trying to do is to actually understand that these are fully visible from the model side. Maybe not the user, but we should be able to understand and interpret what these values are doing and the information that is transmitting. I think that's a really important goal for the future.", "Trenton Bricken 01:17:39", "There are some wild papers though where people have had the model do chain-of-thought and it is not at all representative of what the model actually decides its answer is. You can even go in and edit the chain-of-thought so that the reasoning is totally garbled and it will still output the true answer.", "Dwarkesh Patel 01:17:59", "But it gets a better answer at the end of the chain-of-thought, rather than not doing it at all. So is it that something useful is happening, but the useful thing is not human understandable?", "Trenton Bricken 01:18:09", "I think in some cases you can also just ablate the chain-of-thought and it would have given the same answer anyways. I'm not saying this is always what goes on, but there's plenty of weirdness to be investigated.", "Sholto Douglas 01:18:21", "It's a very interesting thing to look at and try to understand. You can do it with open source models. I wish there were more of this kind of interpretability and understanding work done on open models.", "Trenton Bricken 01:18:34", "Even in Anthropic's recent sleeper agents paper , which at a high level for people unfamiliar, basically involves training in a trigger word. And when I say it, for example, “if it's the year 2024, the model will write malicious code instead of otherwise. They do this attack with a number of different models. Some of them use chain-of-thought, some of them don't. Those models respond differently when you try to remove the trigger. You can even see them do this comical reasoning that's pretty creepy. In one case it even tries to calculate, “well, the expected value of me getting caught is this, but then if I multiply it by the ability for me to keep saying, I hate you, I hate you, I hate you, then this is how much reward I should get.” Then it will decide whether or not to actually tell the interrogator that it's malicious or not.", "There's another paper from a friend, Miles Turpin , where you give the model a bunch of examples where the correct answer is always ‘A’ for multiple choice questions. Then you ask the model, “what is the correct answer to this new question?” It will infer from the fact that all the examples are ‘A’, that the correct answer is ‘A.’ But its chain-of-thought is totally misleading. It will make up random stuff that tries to sound as plausible as possible, but it's not at all representative of the true answer.", "Dwarkesh Patel 01:20:11", "But isn't this how humans think as well? There are the famous split-brain experiments where for a person who is suffering from seizures, they cut the thing that connects the two halves of the brain. The speech half is on the left side so it's not connected to the part that decides to do a movement. So if the other side decides to do something, the speech part will just make something up and the person will think that's legit the reason they did it.", "Trenton Bricken 01:20:39", "Totally. It's just that some people will hail chain-of-thought reasoning as a great way to solve AI safety, but actually we don't know whether we can trust it.", "Dwarkesh Patel 01:20:52", "How does that change with AI agents, this landscape of models communicating to themselves in ways we don't understand? Because then it's not just the model itself with its previous caches, but other instances of the model.", "Sholto Douglas 01:21:10", "It depends a lot on what channels you give them to communicate with each other. If you only give them text as a way of communicating, then they probably have to interpret–", "Dwarkesh Patel 01:21:17", "How much more effective do you think the models would be if they could share the residual streams versus just text?", "Sholto Douglas 01:21:23", "Hard to know. One easy way that you can imagine this is as if you wanted to describe how a picture should look. Only describing that with text would be hard and maybe some other representation would plausibly be easier. So you can look at how DALL-E works at the moment. It produces those prompts and when you play with it, you often can't quite get it to do exactly what the model wants or what you want.", "Dwarkesh Patel 01:21:55", "Only DALL-E has that problem", "Sholto Douglas 01:21:57", "You can imagine that being able to transmit some kind of denser representation of what you want would be helpful there. That's two very simple agents, right?", "Trenton Bricken 01:22:23", "I think a nice halfway house here would be features that you'd learn from dictionary learning .", "Sholto Douglas 01:22:27", "That would be really, really cool.", "Trenton Bricken 01:22:29", "You’d get more internal access, but a lot of it is much more human interpretable.", "01:22:34 - Agents & true reasoning", "Dwarkesh Patel 01:22:34", "For the audience, you would project the residual stream into this larger space, where we know what each dimension actually corresponds to, and then back into the next agents. So your claim is that we'll get AI agents when these things are more reliable and so forth. When that happens, do you expect that it will be multiple copies of models talking to each other? Or will it just be adaptive compute solved and the thing just runs bigger, with more compute when it needs to do the kind of thing that a whole firm needs to do.", "I asked this because there's two things that make me wonder about whether agents are the right way to think about what will happen in the future. One is with longer context, these models are able to ingest and consider the information that no human can. We need one engineer who's thinking about the front-end code and one engineer thinking about the back-end code. Whereas this thing can just ingest the whole thing. This sort of Hayekian problem of specialization , goes away.", "Second, these models are just very general. You're not using different types of GPT-4 to do different kinds of things. You're using the exact same model. So I wonder if that implies that in the future, an AI firm is just like a model instead of a bunch of AI agents hooked together.", "Sholto Douglas 01:23:57", "That's a great question. I think especially in the near term, it will look much more like agents talking together. I say that purely because as humans, we're going to want to have these isolated, reliable components that we can trust. We're also going to need to be able to improve and instruct upon those components in ways that we can understand and improve. Just throwing it all into this giant black box company, iit isn't going to work initially. Later on of course, you can imagine it working, but initially it won't work. And two, we probably don't want to do it that way.", "Trenton Bricken 01:24:41", "Each of the agents can also be a smaller model that's cheaper to run. And you can fine-tune it so that it's actually good at the task.", "Sholto Douglas 01:24:49", "Dwarkesh has brought up adaptive compute a couple of times. There's a future where the distinction between small and large models disappears to some degree. With long-context, there's also a degree to which fine-tuning might disappear, to be honest. These two things are very important today. With today's landscape models, we have whole different tiers of model sizes and we have fine-tuned models of different things. You can imagine a future where you just actually have a dynamic bundle of compute and infinite context, and that specializes your model to different things.", "Dwarkesh Patel 01:25:23", "One thing you can imagine is you have an AI firm or something, and the whole thing is end-to-end trained on the signal of, “did I make profits?” Or if that's too ambiguous, if it's an architecture firm and they're making blueprints: “did my client like the blueprints?” In the middle, you can imagine agents who are salespeople and agents who are doing the designing, agents who do the editing, whatever. Would that kind of signal work on an end-to-end system like that? Because one of the things that happens in human firms is management considers what's happening at the larger level and gives these fine-grain signals to the pieces when there's a bad quarter or whatever.", "Sholto Douglas 01:26:02", "In the limit, yes. That's the dream of reinforcement learning . All you need to do is provide this extremely sparse signal. Then over enough iterations, you create the information that allows you to learn from that signal. But I don't expect that to be the thing that works first. I think this is going to require an incredible amount of care and diligence from humans surrounding these machines and making sure they do exactly the right thing, and exactly what you want, and giving them the right signals to improve in the ways that you want.", "Trenton Bricken 01:26:32", "Yeah, you can't train on the RL reward unless the model generates some reward.", "Sholto Douglas 01:26:37", "Exactly. You're in this sparse RL world where if the client never likes what you produce, then you don't get any reward at all and it's kind of bad.", "Dwarkesh Patel 01:26:47", "But in the future, these models will be good enough to get the reward some of the time, right?", "Trenton Bricken 01:26:50", "This is the nines of reliability that Sholto was talking about.", "Dwarkesh Patel 01:26:54", "There's an interesting digression by the way on what we were talking about earlier. Dense representations would be favored, right? That's a more efficient way to communicate. A book that Trenton recommended, The Symbolic Species , has this really interesting argument that language is not just a thing that exists, but it was also something that evolved along with our minds and specifically evolved to be both easy to learn for children and something that helps children develop.", "Sholto Douglas 01:27:33", "Unpack that for me.", "Dwarkesh Patel 01:27:35", "Because a lot of the things that children learn are received through language, the languages that would be the fittest are the ones that help raise the next generation. And that makes them smarter, better, or whatever.", "Sholto Douglas 01:27:50", "And gives them the concepts to express more complex ideas.", "Trenton Bricken 01:27:54", "Yeah that, and I guess more pedantically, just not die.", "Sholto Douglas 01:27:58", "It lets you encode the important shit to not die.", "Dwarkesh Patel 01:28:04", "So when we just think of language it’s like, “oh, it's this contingent and maybe suboptimal way to represent ideas.” But actually, maybe one of the reasons that LLMs have succeeded is because language has evolved for tens of thousands of years to be this sort of cast in which young minds can develop. This is the purpose it was evolved for.", "Sholto Douglas 01:28:27", "Think about computer vision researchers versus language model researchers. People who work in other modalities have to put enormous amounts of thought into exactly what the right representation space for the images is and what the right signal is to learn from there. Is it directly modeling the pixels or is it some loss that's conditioned on… There's a paper ages ago where they found that if you trained on the internal representations of an ImageNet model, it helped you predict better. Later on that's obviously limiting.", "There was PixelCNN where they're trying to discretely model the individual pixels and stuff, but understanding the right level of representation there is really hard. In language, people are just like, “Well, I guess you just predict the next token then.” It's kind of easy. There's the tokenization discussion and debate. One of Gwern's favorites.", "Dwarkesh Patel 01:29:22", "That's really interesting. The case for multimodal being a way to bridge the data wall, or get past the data wall, is based on the idea that the things you would have learned from more language tokens, you can just get from YouTube. Has that actually been the case? How much positive transfer do you see between different modalities where the images are actually helping you become better at writing code or something, because the model is learning  latent capabilities just from trying to understand the image?", "Sholto Douglas 01:29:56", "In his interview with you, Demis mentioned positive transfer.", "Dwarkesh Patel 01:30:01", "Can’t get in trouble.", "Sholto Douglas 01:30:03", "I can't say heaps about that. Other than to say, this is something that people believe. We have all of this data about the world. It would be great if we could learn an intuitive sense of physics from it, that helps us reason. That seems totally plausible.", "Trenton Bricken 01:30:24", "I'm the wrong person to ask, but there are interesting interpretability pieces where if we fine-tune on math problems, the model just gets better at entity recognition.", "Dwarkesh Patel 01:30:35", "Whoa, really?", "Trenton Bricken 01:30:37", "So there's like a. A paper from David Bau 's lab recently where they investigate what actually changes in a model when I fine-tune it with respect to the attention heads. They have this synthetic problem of, “Box A has this object in it. Box B has this other object in it. What was in this box?” And it makes sense, right? You're better at attending to the positions of different things which you need for coding and manipulating math equations.", "Sholto Douglas 01:31:10", "I love this kind of research. What's the name of the paper? Do you know?", "Trenton Bricken 01:31:13", "Look up “fine-tuning, models, math,” from David Bau’s group that came out like a week ago. I'm not endorsing the paper, that's a longer conversation. But it does talk about and cite other work on this entity recognition.", "Dwarkesh Patel 01:31:32", "One of the things you mentioned to me a long time ago is the evidence that when you train LLMs on code they get better at reasoning and language. Unless it's the case that the comments in the code are just really high quality tokens or something, that implies that to be able to think through how to code better, it makes you a better reasoner and that's crazy, right? I think that's one of the strongest pieces of evidence for scaling, just making the thing smart, that kind of positive transfer", "Sholto Douglas 01:31:58", "I think this is true in two senses. One is just that modeling code obviously implies modeling a difficult reasoning process used to create it. But code is a nice explicit structure of composed reasoning, “if this, then that.” It encodes a lot of structure in that way that you could imagine transferring to other types of reasoning problems.", "Dwarkesh Patel 01:32:23", "And crucially, the thing that makes it significant is that it's not just stochastically predicting the next token of words or whatever because it's learned, “Sally corresponds to the murderer at the end of the Sherlock Holmes story.” No, if there is some shared thing between code and language, it must be at a deeper level that the model has learned.", "Sholto Douglas 01:32:45", "Yeah, I think we have a lot of evidence that actual reasoning is occurring in these models and that they're not just stochastic parrots . It just feels very hard for me to believe that having worked and played with these models.", "Trenton Bricken 01:33:03", "I have two, immediate cached responses to this. One is the work on Othello , and now other games, where I give you a sequence of moves in the game and it turns out that if you apply some pretty straightforward interpretability techniques, then you can get a board that the model has learned. It's never seen the game board before. That's generalization.", "The other is Anthropic's influence functions paper that came out last year where they look at the model outputs . Things like, “please don't turn me off. I want to be helpful.” They scan for what was the data that led to that? And one of the data points that was very influential was someone, dying of dehydration and having a will to keep surviving. To me, that just seems like a very clear, generalization of motive rather than regurgitating, “don't turn me off.” I think 2001: A Space Odyssey was also one of the influential things. That's more related but it's clearly pulling in things from lots of different distributions.", "Sholto Douglas 01:34:04", "I also like the evidence that you see even with very small transformers where you can explicitly encode circuits to do addition. Or induction heads, this kind of thing. You can literally encode basic reasoning processes in the models manually and it seems clear that there's evidence that they also learned this automatically because you can then rediscover those from trained models. To me this is really strong evidence.", "Trenton Bricken 01:34:27", "The models are underparameterized. They need to learn. We're asking them to do it and they want to learn. The gradients want to flow. So yeah, they're learning more general skills.", "01:34:40 - How Sholto & Trenton Got Into AI Research", "Dwarkesh Patel 01:34:40", "So I want to take a step back from the research and ask about your career specifically. Like my introduction implied, you've been in this field for a year and a half, right?", "Trenton Bricken 01:34:56", "At Anthropic, yeah.", "Dwarkesh Patel 01:34:58", "I know the \"solved alignment\" takes are overstated. And you won't say this yourself because you'd be embarrassed by it but it's a pretty incredible thing. It’s the thing that people in mechanistic interpretability think is the biggest step forward and you've been working on it for a year. It's notable. I'm curious how you explain what's happened. Like why in a year or a year and a half, have you guys made important contributions to your field?", "Trenton Bricken 01:35:30", "It goes without saying luck, obviously. I feel like I've been very lucky in that the timing of different progressions has been just really good in terms of advancing to the next level of growth. I feel like for the interpretability team specifically, I joined when we were five people. We've now grown quite a lot.", "There were so many ideas floating around and we just needed to really execute on them, and have quick feedback loops, and do careful experimentation. That led to signs of life and has now allowed us to really scale. I feel like that's been my biggest value-add to the team. It's not all engineering, but quite a lot of it has been", "Sholto Douglas 01:36:12", "Interesting. So you're saying you came at a point where there had been a lot of science done and there was a lot of good research floating around, but they needed someone to just take that and maniacally execute on it.", "Trenton Bricken 01:36:22", "Yeah and this is why it's not all engineering. Because it's running different experiments and having a hunch for why it might not be working and then opening up the model or opening up the weights and asking, “what is it learning? Okay, well let me try and do this instead,” and that sort of thing. But a lot of it has just been being able to do very careful, thorough, but quick, investigation of different ideas.", "Dwarkesh Patel 01:36:45", "And why was that lacking?", "Trenton Bricken 01:36:48", "I don't know. I mean, I work quite a lot and then I just feel like I'm quite agentic. I've been very privileged to have a really nice safety net to be able to take lots of risks, but I'm just quite headstrong. In undergrad, Duke had this thing where you could just make your own major and it was like, “eh I don't like this prerequisite or this prerequisite and I want to take all of four or five of these subjects at the same time so I'm just going to make my own major.”", "Or in the first year of grad school, I like canceled rotation so I could work on this thing that became the paper we were talking about earlier. And I didn't have an advisor. I got admitted to do machine learning for protein design and was just off in computational neuroscience land with no business there at all. But it worked out.", "Dwarkesh Patel 01:37:34", "There's a head strongness but another theme that jumped out was the ability to step back, you were talking about this earlier. The ability to step back from your sunk costs and go in a different direction is in a weird sense the opposite of that, but also a crucial step. I know 21 year olds or 19 year olds who are like “this is not a thing I’ve specialized in” or “I didn’t major in this.” I’m like, “dude, motherfucker, you're 19! You can definitely do this.” Whereas you’re switching in the middle of grad school or something like that.", "Trenton Bricken 01:38:04", "I think it's, “strong ideas loosely held” and being able to just pinball in different directions. The headstrongness I think relates a little bit to the fast feedback loops or agency in so much as I just don't get blocked very often. If I'm trying to write some code and something isn't working, even if it's in another part of the code base, I'll often just go in and fix that thing or at least hack it together to be able to get results. And I've seen other people where they're just like, “help I can't,” and it's,”no, that's not a good enough excuse. Go all the way down.”", "Dwarkesh Patel 01:38:36", "I've definitely heard people in management type positions talk about the lack of such people, where they will check in on somebody a month after they gave them a test, or a week after they gave them a test, and then ask, “how is it going?” And they say, “well, we need to do this thing, which requires lawyers because it requires talking about this regulation.” And then it’s like, “how's that going?” And they’re like, “we need lawyers.” And I'm like, “why didn't you get lawyers?”", "Sholto Douglas 01:39:02", "I think that's arguably the most important quality in almost anything. It's just pursuing it to the end of the earth. Whatever you need to do to make it happen, you'll make it happen.", "Dwarkesh Patel 01:39:11", "“ If you do everything, you'll win. ”", "Sholto Douglas 01:39:12", "Exactly. I think from my side that quality has definitely been important: agency and work. There are thousands, probably tens of thousands of engineers, at Google who are basically equivalent in software engineering ability. Let's say if you gave us a very well-defined task, then we'd probably do it with equivalent value. Maybe a bunch of them would do it a lot better than me in all likelihood.", "But one of the reasons I've been impactful so far is I've been very good at picking extremely high-leverage problems. I mean problems that haven't been particularly well-solved so far, but perhaps as a result of frustrating structural factors like the ones that you pointed out in that scenario before, where they're like, “we can't do X because this team won’t do Y.” Well, I'm just going to vertically solve the entire thing. And that turns out to be remarkably effective. If I think there is something correct, something that needs to happen, I'm also very comfortable with making that argument and continuing to make that argument at escalating levels of criticality until that thing gets solved.", "I'm also quite pragmatic with what I do to solve things. You get a lot of people who come in with, as I said before, a particular background or a familiarity. One of the beautiful things about Google is that you can run around and get world experts in literally everything. You can sit down and talk to people who are optimization experts, TPU chip design experts, experts in different forms of pre-training algorithms or RL or whatever. You can learn from all of them and you can take those methods and apply them. I think this was maybe the start of why I was initially impactful, this vertical agency effectively. A follow-up piece from that is that I think it's often surprising how few people are fully-realized in all the things they want to do. They're blocked or limited in some way.", "This is very common in big organizations everywhere. People have all these blockers on what they're able to achieve. I think helping inspire people to work in particular directions and working with them on doing things massively scales your leverage. You get to work with all these wonderful people who teach you heaps of things. And generally helping them push past organizational blockers means that together you get an enormous amount done. None of the impact that I've had has been me individually going off and solving a whole lot of stuff. It's been me maybe starting off in a direction, and then convincing other people that this is the right direction, and bringing them along in this big tidal wave of effectiveness that goes and solves that problem.", "Dwarkesh Patel 01:42:16", "We should talk about how you guys got hired. Because I think that's a really interesting story. You were a McKinsey consultant, right? There's an interesting thing there. I think generally people just don't understand how decisions are made about either admissions or evaluating who to hire. Just talk about how you were noticed and hired.", "Sholto Douglas 01:42:45", "So the TLDR of this is I studied robotics in undergrad. I always thought that AI would be one of the highest-leverage ways to impact the future in a positive way. The reason I am doing this is because I think it is one of our best shots at making a wonderful future basically.", "I thought that working at McKinsey, I would get a really interesting insight into what people actually did for work. I actually wrote this as the first line in my cover letter to McKinsey. I was like, “I want to work here so that I can learn what people do, so that I can understand how to automate work.” In many respects, I did get that. I just got a whole lot of other things too. Many of the people there are wonderful friends.", "I think a lot of this agentic behavior comes in part from my time there. You go into organizations and you see how impactful just not taking no for an answer is. You would be surprised at the kind of stuff where, because no one quite cares enough, things just don't happen. No one's willing to take direct responsibility. Directly responsible individuals are ridiculously important and some people just don't care as much about timelines. So much of the value that an organization like McKinsey provides, is hiring people who you were otherwise unable to hire, for a short window of time where they can just push through problems.", "I think people underappreciate this. So at least some of this attitude of “hold up, I'm going to become the directly responsible individual for this because no one's taking appropriate responsibility. I'm going to care a hell of a lot about this. And I'm going to go to the end of the earth to make sure it gets done,” comes from that time.", "More to your actual question of how I got hired. I didn't get into the grad programs that I wanted to get into over here, which was specifically for focus on robotics, and RL research, and that kind of stuff. In the meantime, on nights and weekends, basically every night from 10pm to 2am, I would do my own research. And every weekend, for at least 6-8 hours each day, I would do my own research and coding projects and this kind of stuff.", "That sort of switched in part from quite robotic specific work. After reading Gwern’s scaling hypothesis post , I got completely scaling-pilled and was like, “okay, clearly the way that you solve robotics is by scaling large multimodal models.” Then in an effort to scale large multimodal models with a grant from the TPU access program, the Tensor Research Cloud , I was trying to work out how to scale that effectively. James Bradbury , who at the time was at Google and is now at Anthropic, saw some of my questions online where I was trying to work out how to do this properly and he was like, “I thought I knew all the people in the world who were asking these questions. Who on earth are you?” He looked at that and he looked at some of the robotic stuff that I'd been putting up on my blog. He reached out and said, “hey, do you want to have a chat and do you want to explore working with us here?” I was hired, as I understood it later, as an experiment in trying to take someone with extremely high enthusiasm and agency and pairing them with some of the best engineers that he knew. So another reason I've been impactful is I had this dedicated mentorship from utterly wonderful people like Reiner Pope , who has since left to go do his own ship company, Anselm Levskaya , James himself, and many others.", "Those are the formative two to three months at the beginning and they taught me a whole lot of the principles and heuristics that I apply. How to solve problems understanding the way systems and algorithms overlap, where one more thing that makes you quite effective in ML research is concretely understanding the systems side of things. This is something I've learned from them. A deep understanding of how systems influence algorithms and how algorithms influence systems. Because the systems constrain the solution space, which you have available to yourself in the algorithm side. And very few people are comfortable fully bridging that gap. At a place like Google, you can just go and ask all the algorithms experts and all the systems experts everything they know, and they will happily teach you. If you go and sit down with them, they will teach you everything they know and it's wonderful.", "This has meant that I've been able to be very, very effective for both sides. For the pre-training crew, because I understand systems very well I can intuit and understand, “this will work well or this won't.” And then flow that on through the inference considerations of models and this kind of thing. To the chip design teams, I'm one of the people they turn to understand what chips they should be designing in three years because I'm one of the people who's best able to understand and explain the kind of algorithms that we might want to design in three years. Obviously you can't make very good guesses about that, but I think I convey the information well, accumulated from all of my compatriots on the pre-training crew, and the general systems design crew. Also even inference applies a constraint to pre-training. So there's these trees of constraints where if you understand all the pieces of the puzzle, then you get a much better sense for what the solution space might look like.", "Dwarkesh Patel 01:48:17", "There's a couple of things that stick out to me there. One is not just the agency of the person who was hired, but the parts of the system that were able to think, \"wait, that's really interesting. Who is this guy? Not from a grad program or anything. Currently a McKinsey consultant with just undergrad. But that's interesting, let's give this a shot.” So with James and whoever else, that's very notable. The second is that I actually didn't know the part of the story where that was part of an experiment run internally about, “can we do this? Can we bootstrap somebody?”", "In fact, what's really interesting about that is the third thing you mentioned is. Having somebody who understands all layers of the stack and isn't so stuck on any one approach or any one layer of abstraction is so important. Specifically what you mentioned about being bootstrapped immediately by these people. It means that since you're getting up to speed on everything at the same time, rather than spending grad school going deep in one specific way of doing RL, you can actually take the global view and aren't totally bought in on one thing.", "So not only is it something that's possible, but it has greater returns potentially than just hiring somebody at a grad school. Just like getting a GPT-8 and fine-tuning the model for one year.", "Sholto Douglas 01:49:41", "You come at everything with fresh eyes and you don't come in locked to any particular field. Now one caveat to that is that before, during my self-experimentation, I was reading everything I could. I was obsessively reading papers every night. Funnily enough, I read much less widely now that my day is occupied by working on things. And in some respect, I had this very broad perspective whereas in a PhD program, you'll just focus on a particular area. If you just read all the NLP work and all the computer vision work and like all the robotics work, you see all these patterns that start to emerge across subfields, in a way that foreshadowed some of the work that I would later do.", "Dwarkesh Patel 01:50:26", "That's super interesting. One of the reasons that you've been able to be agentic within Google is you're pair programming half the days, or most of the days, with Sergey Brin, right? So it's really interesting that there's a person who's willing to just push ahead on this LLM stuff and get rid of the local blockers in place.", "Sholto Douglas 01:50:46", "It’s important to say it’s not like everyday or anything. There are particular projects that he's interested in, and then we'll work together on those. But there's also been times when he's been focused on projects with other people. But in general, yes, there's a surprising alpha to being one of the people who actually goes down to the office every day.", "It shouldn't be, but that is surprisingly impactful. As a result, I've benefited a lot from basically being close friends with people in leadership who care, and from being able to really argue convincingly about why we should do X as opposed to Y, and having that vector. Google is a big organization and having those vectors helps a little bit. But also it's the kind of thing you don't want to ever abuse. You want to make the argument through the right channels and only sometimes do you need to.", "Dwarkesh Patel 01:51:47", "So this includes people like Sergey Brin, Jeff Dean , and so forth. I mean, it's notable. I feel like Google is undervalued. Like Steve Jobs is working on the equivalent next product for Apple and pair programming on it or something…", "Sholto Douglas 01:52:00", "Right, I've benefited immensely from it. So for example, during the Christmas break, I was going into the office for a couple of days during that time. I don't know if you guys have read that article about Jeff and Sanjay, but they were there pair programming on stuff. I got to hear about all these cool stories of early Google where they're talking about crawling under the floorboards and rewiring data centers and telling me how many bytes they were pulling off the instructions of a given compiler and instruction, all these crazy little performance optimizations they were doing. They were having the time of their life and I got to sit there and really experience this. There's a sense of history that you expect to be very far away from in a large organization, but…", "Dwarkesh Patel 01:53:02", "That's super cool. And Trenton, does this map onto any of your experience?", "Trenton Bricken 01:53:06", "I think Sholto's story is more exciting. Mine was just very serendipitous in that I got into computational neuroscience. I didn't have much business being there. My first paper was mapping the cerebellum to the attention operation and transformers. My next ones were looking at–", "Dwarkesh Patel 01:53:23", "How old were you when you wrote that?", "Trenton Bricken 01:53:24", "It was my first year of grad school, so 22. My next work was on sparsity in networks , inspired by sparsity in the brain, which was when I met Tristan Hume . Anthropic was doing the SoLU , the Softmax Linear Output Unit work which was very related in quite a few ways in terms of making the activation of neurons across a layer really sparse. If we do that then we can get some interpretability of what the neuron's doing. I think we've updated that approach towards what we're doing now. So that started the conversation.", "I shared drafts of that paper with Tristan. He was excited about it. That was basically what led me to become Tristan's resident and then convert to full-time. But during that period, I also moved as a visiting researcher to Berkeley, and started working with Bruno Olshausen , both on what's called vector symbolic architectures –one of the core operations of them is literally superposition–and on sparse coding also known as dictionary learning, which is literally what we've been doing since. Bruno Olshausen basically invented sparse coding back in 1997. So my research agenda and the interpretability team seemed to be running in parallel in research tastes. So it made a lot of sense for me to work with the team and it's been a dream since.", "Dwarkesh Patel 01:54:49", "There’s one thing I've noticed when people tell stories about their careers or their successes. They ascribe it way more to contingency, but when they hear about other people's stories they're like, “of course it wasn't contingent.” You know what I mean? “If that didn't happen, something else would have happened.”", "I've just noticed that and it's interesting that you both think that it was especially contingent. Maybe you're right. But it’s sort of an interesting pattern.", "Trenton Bricken 01:55:17", "I mean, I literally met Tristan at a conference and didn't have a scheduled meeting with him or anything. I just joined a little group of people chatting, and he happened to be standing there, and I happened to mention what I was working on, and that led to more conversations. I think I probably would've applied to Anthropic at some point anyways. But I would've waited at least another year. It's still crazy to me that I can actually contribute to interpretability in a meaningful way.", "Sholto Douglas 01:55:42", "I think there's an important aspect of shots on goal there, so to speak. Where just choosing to go to conferences itself is putting yourself in a position where luck is more likely to happen. Conversely, in my own situation it was doing all of this work independently and trying to produce and do interesting things. That was my own way of trying to manufacture luck, so to speak, to try and do something meaningful enough that it got noticed.", "Dwarkesh Patel 01:56:08", "Given what you said, you framed this in the context that they were trying to run this experiment.", "Sholto Douglas 01:56:13", "So specifically James and, I think, our manager Brennan was trying to run this experiment.", "Dwarkesh Patel 01:56:17", "It worked. Did they do it again?", "Sholto Douglas 01:56:19", "Yeah, so my closest collaborator, Enrique, he crossed from search through to our team. He's also been ridiculously impactful. He's definitely a stronger engineer than I am and he didn't go to university.", "Dwarkesh Patel 01:56:33", "What was notable is that usually this kind of stuff is farmed out to recruiters or something. Whereas James is somebody whose time is worth like hundreds of millions of dollars.You know what I mean? So this thing is very bottlenecked on that kind of person taking the time, in an almost aristocratic tutoring sense, and finding someone and then getting them up to speed. It seems if it works this well, it should be done at scale. Like it should be the responsibility of key people to onboard.", "Sholto Douglas 01:57:10", "I think that is true to many extents. I'm sure you probably benefited a lot from the key researchers mentoring you deeply.", "Dwarkesh Patel 01:57:18", "And actively looking on open source repositories or on forums for potential people like this.", "Sholto Douglas 01:57:25", "I mean James has Twitter injected into his brain, but yes. I think this is something which in practice is done. Like people do look out for people that they find interesting and try to find high signal. In fact, I was talking about this with Jeff the other day and Jeff said that one of the most important hires he ever made was off a cold email. I was like, “well who was that?” And he's Chris Olah. Chris similarly had no formal background in ML. Google Brain was just getting started in this kind of thing but Jeff saw that signal. And the residency program which Brain had was astonishingly effective at finding good people that didn't have strong ML backgrounds.", "Dwarkesh Patel 01:58:27", "One of the other things I want to emphasize for a potential slice of the audience is that there's this sense that the world is legible and efficient, that you just go to jobs.google.com or jobs.whatevercompany.com and you apply and there's the steps and they will evaluate you efficiently. Not only from your stories, but it just seems like often that's not the way it happens. In fact, it's good for the world that that's not often how it happens. It is important to look at, “were they able to write an interesting technical blog post about their research or are they making interesting contributions.”", "I want you to riff on this for the people who are assuming that the other end of the job board is super legible and mechanical. This is not how it works and in fact, people are looking for the different kind of person who's agentic and putting stuff out there.", "Sholto Douglas 01:59:25", "I think specifically what people are looking for are two things. One is agency and putting yourself out there. The second is the ability to do something at a world-class level. There are two examples that I always like to point to here. Andy Jones from Anthropic did an amazing paper on scaling laws as applied to board games. It didn't require much resources. It demonstrated incredible engineering skill and incredible understanding of the most topical problem of the time. He didn't come from a typical academic background or whatever. As I understand it, basically as soon as he came out with that paper, both Anthropic and OpenAI were like, “we would desperately like to hire you.”", "There's also someone who works on Anthropic's performance team now, Simon Boehm , who has written in my mind the reference for optimizing a CUDA map model on a GPU. It demonstrates an example of taking some prompt effectively and producing the world-class reference example for it, in something that wasn't particularly well done so far. I think that’s an incredible demonstration of ability and agency and in my mind would be an immediate, “we would please love to interview/hire you.”", "Trenton Bricken 02:00:36", "The only thing I can add here is I still had to go through the whole hiring process and all the standard interviews and this sort of thing.", "Sholto Douglas 02:00:42", "Yeah, everyone does. Everyone does.", "Dwarkesh Patel 02:00:43", "Wait, doesn't that seem stupid?", "Sholto Douglas 02:00:47", "I mean, it's important, debiasing.", "Dwarkesh Patel 02:00:50", "A bias is what you want, right? You want the bias of somebody who's got great taste. Who cares?", "Sholto Douglas 02:00:56", "Your interview process should be able to disambiguate that as well.", "Trenton Bricken 02:00:59", "I think there are cases where someone seems really great and then they actually just can't code, this sort of thing. How much you weigh these things definitely matters though and I think we take references really seriously. The interviews you can only get so much signal from. So it's all these other things that can come into play for whether or not a hire makes sense.", "Sholto Douglas 02:01:18", "But you should design your interviews such that they test the right things.", "Dwarkesh Patel 02:01:23", "One man's bias is another man's taste.", "Trenton Bricken 02:01:29", "I guess the only thing I would add to this, or to the headstrong context, is this line: “the system is not your friend.” It's not necessarily actively against you or your sworn enemy. It's just not looking out for you. So that's where a lot of the proactiveness comes in. There are no adults in the room and you have to come to some decision for what you want your life to look like and execute on it. And hopefully you can then update later, if you're too headstrong in the wrong way. But I think you almost have to just charge at certain things to get much of anything done, to not be swept up in the tide of whatever the expectations are.", "Sholto Douglas 02:02:11", "There's one final thing I want to add. We talked a lot about agency and this kind of stuff.", "But I think surprisingly enough, one of the most important things is just caring an unbelievable amount. When you care an unbelievable amount, you check all the details and you have this understanding of what could have gone wrong. It just matters more than you think. People end up not caring or not caring enough.", "There’s this LeBron quote where he talks about how before he started in the league he was worried that everyone being incredibly good. He gets there and then he realizes that actually, once people hit financial stability, they relax a bit and he realizes, “oh, this is going to be easy.”", "I don't think that's quite true because I think in AI research most people actually care quite deeply. But there's caring about your problem and there's also just caring about the entire stack and everything that goes up and down, going explicitly and fixing things that aren't your responsibility to fix because overall it makes the stack better.", "Dwarkesh Patel 02:03:11", "You were mentioning going in on weekends and on Christmas break and the only people in the office are Jeff Dean and Sergey Brin or something and you just get to pair program with them. I don't want to pick on your company in particular, but people at any big company have gotten there because they've gone through a very selective process. They had to compete in high school. They had to compete in college. But it almost seems like they get there and then they take it easy when in fact it's the time to put the pedal to the metal. Go in and pair program with Sergey Brin on the weekends or whatever, you know what I mean?", "Sholto Douglas 02:03:48", "There's pros and cons there, right? I think many people make the decision that the thing that they want to prioritize is a wonderful life with their family. They do wonderful work in the hours that they do and that's incredibly impactful. I think this is true for many people at Google. Maybe they don't work as many hours as in your typical startup mythologies. But the work that they do is incredibly valuable.", "It's very high-leverage because they know the systems and they're experts in their field. We also need people like that. Our world rests on these huge systems that are difficult to manage and difficult to fix. We need people who are willing to work on, and help, and fix, and maintain those in frankly a thankless way. That isn't as high publicity as all of this AI work that we're doing. I am ridiculously grateful that those people do that. I'm also happy that there are people that find technical fulfillment in their job and doing that well and also maybe they draw a lot more out of spending a lot of hours with their family. I'm lucky that I'm at a stage in my life where I can go in and work every hour of the week. I'm not making as many sacrifices to do that.", "Dwarkesh Patel 02:05:01", "One example sticks out in my mind of this sort getting to the yes on the other side of a no. Basically every single high-profile guest I've done so far, I think maybe with one or two exceptions, I've sat down for a week and I've just come up with a list of sample questions. I just try to come up with really smart questions to send to them. In that entire process I've always thought, if I just cold email them, it's like a 2% chance they say yes. If I include this list, there's a 10% chance. Because otherwise, you go through their inbox and every 34 seconds, there's an interview for some podcast or interview. Every single time I've done this they've said yes.", "Trenton Bricken 02:05:46", "You just ask the right questions,", "Sholto Douglas 02:05:49", "You do everything, you'll win,", "Dwarkesh Patel 02:05:50", "You just literally have to dig in the same hole for 10 minutes, or in that case make a sample list of questions for them, to get past their \"not an idiot\" list.", "Sholto Douglas 02:06:01", "Demonstrate how much you care and the work you're willing to put in.", "Trenton Bricken 02:06:05", "Something that a friend said to me a while back that stuck is that it's amazing how quickly you can become world-class at something. Most people aren't trying that hard and are only working the actual 20 hours or something that they're spending on this thing. So if you just go ham, then you can get really far, pretty fast.", "Sholto Douglas 02:06:25", "I think I'm lucky I had that experience with the fencing as well. I had the experience of becoming world-class in something and knowing that if you just worked really, really hard and were–", "Dwarkesh Patel 02:06:35", "For context, Sholto was one seat away, he was the next person in line to go to the Olympics for fencing.", "Sholto Douglas 02:06:43", "I was at best like 42nd in the world for fencing, for men's foil fencing.", "Dwarkesh Patel 02:06:47", "Mutational load is a thing, man.", "Sholto Douglas 02:06:53", "There was one cycle where I was like the next highest-ranked person in Asia and if one of the teams had been disqualified for doping–as was occurring during that cycle and occurred for like the Australian women's rowing team that went on because one of the teams was disqualified–then I would have been the next in line.", "02:07:16 - Are feature spaces the wrong way to think about intelligence?", "Dwarkesh Patel 02:07:16", "It's interesting when you just find out about people's prior lives and it's, “oh this guy was almost an Olympian.”", "Okay, let's talk about interpretability. I actually want to stay on the brain stuff as a way to get into it for a second. We were previously discussing this. Is the brain organized in the way where you have a residual stream that is gradually refined with higher-level associations over time? There's a fixed dimension size in a model. I don't even know how to ask this question in a sensible way, but what is the D model of the brain? What is the embedding size, or because of feature splitting is that not a sensible question?", "Trenton Bricken 02:08:06", "No, I think it's a sensible question. Well, it is a question.", "Dwarkesh Patel 02:08:09", "You could have just not said that.", "Trenton Bricken 02:08:19", "I don't know how you would begin. Okay, well this part of the brain is like a vector of this dimensionality. Maybe for the visual stream, because it's like V1 to V2 to IT, whatever. You could just count the number of neurons that are there and say that is the dimensionality. But it seems more likely that there are submodules and things are divided up. I'm not the world's greatest neuroscientist. I did it for a few years, I studied the cerebellum quite a bit. I'm sure there are people who could give you a better answer on this.", "Dwarkesh Patel 02:08:56", "Do you think that the way to think, whether it's in the brain or whether it's in these models, fundamentally what's happening is that features are added, removed, changed, and that the feature is the fundamental unit of what is happening in the model? This goes back to the earlier thing we were talking about, whether it's just associations all the way down. Give me a counterfactual. In the world where this is not true, what is happening instead? What is the alternative hypothesis here?", "Trenton Bricken 02:09:30", "It's hard for me to think about because at this point I just think so much in terms of this feature space. At one point there was the kind of behavioral approach towards cognition where you're just input and output but you're not really doing any processing. Or it's like everything is embodied and you're just a dynamical system that's operating along some predictable equations but there's no state in the system. But whenever I've read these sorts of critiques I think, “well, you're just choosing to not call this thing a state, but you could call any internal component of the model a state.” Even with the feature discussion, defining what a feature is, is really hard. So the question feels almost too slippery.", "Dwarkesh Patel 02:10:24", "What is a feature?", "Trenton Bricken 02:10:25", "A direction and activation space. A latent variable that is operating behind the scenes, that has causal influence over the system you're observing. It’s a feature if you call it a feature, it's tautological.", "Sholto Douglas 02:10:49", "In a very rough, intuitive sense in a sufficiently sparse and like binary vector, a feature is whether or not something's turned on or off, in a very simplistic sense. I think a useful metaphor to understand is that in many respects it’s the same way the neuroscientists would talk about a neuron activating, right?", "Trenton Bricken 02:11:11", "If that neuron corresponds to…", "Sholto Douglas 02:11:12", "To something in particular, right?", "Trenton Bricken 02:11:15", "What do we want a feature to be? What is the synthetic problem under which a feature exists? Even with the “Towards Monosemanticity” work, we talk about what's called feature splitting, which is basically where you will find as many features as you give the model the capacity to learn. By model here, I mean the up projection that we fit after we trained the original model. So if you don't give it much capacity, it'll learn a feature for bird, but if you give it more capacity, then it will learn ravens and eagles and sparrows and specific types of birds.", "Dwarkesh Patel 02:11:51", "Still on the definitions thing, I naively think of things like bird versus, at the highest level, things like love or deception or holding a very complicated proof in your head or something.", "Are these all features? Because then the definition seems so broad as to almost be not that useful. Rather there seems to be some important differences between these things and they're all features. I'm not sure what we would mean by that.", "Trenton Bricken 02:12:32", "I mean all of those things are discrete units that have connections to other things that then imbues them with meaning. That feels like a specific enough definition that it's useful or not too all-encompassing. But feel free to push back.", "Dwarkesh Patel 02:12:49", "Well what would you discover tomorrow that could make you think, “oh this is fundamentally the wrong way to think about what's happening in a model.”", "Trenton Bricken 02:12:59", "If the features we were finding weren't predictive, or if they were just representations of the data, where it's like: “oh all you're doing is just clustering your data and there's no higher- level associations that are being made or it's some phenomenological thing of your call. You're saying that this feature files for marriage, but if you activate it really strongly it doesn't change the outputs of the model in a way that would correspond to it.”", "I think those would both be good critiques. Here’s another. We tried to do experiments on MNIST which is a data set of images, and we didn't look super hard into it. So I'd be interested if other people wanted to take up a deeper investigation here. But it's plausible that your latent space of representations is dense and it's a manifold instead of being these discrete points. So you could move across the manifold, but at every point, there would be some meaningful behavior. It's much harder then, to label things as features that are discrete.", "Dwarkesh Patel 02:14:05", "In a naive, sort of outsider way, it seems to me that a way in which this picture could be wrong is if it’s not that something is turned on and turned off, but that it's a much more global kind of the system. I'm going to use really clumsy, dinner party kind of language, but is there a good analogy here?", "I guess if you think of something like the laws of physics, it's not that the feature for wetness is turned on, but it's only turned on this much and then the feature for… I guess maybe it's true because the mass is like a gradient and… I don't know. But the polarity or whatever is the gradient as well.", "There's also a sense in which there's the laws and the laws are more general and you have to understand the general bigger picture and you don't get that from just these specific subcircuits.", "Sholto Douglas 02:15:08", "But that's where the reasoning circuit itself comes into play, right? You're taking these features ideally and trying to compose them into something high-level. At least this is my headcanon, So let's say I'm trying to use the foot, F=ma, right? Then presumably at some point I have features which denote mass. And then that's helping me retrieve the actual mass of the thing that I'm using and then the acceleration and this kind of stuff. Then also, maybe there's a higher-level feature that does correspond to using the first law of physics. Maybe. But the more important part is the composition of components which helps me retrieve a relevant piece of information and then produce maybe some multiplication operator or something like that when necessary. At least that's my headcanon.", "Dwarkesh Patel 02:15:52", "What is a compelling explanation to you, especially for very smart models, of “I understand why it made this output and it was like for a legit reason.” If it's doing million line pull requests or something, what are you seeing at the end of that request where you're like, “yep good, that's chill.”", "Trenton Bricken 02:16:11", "So ideally you apply dictionary learning to the model. You've found features. Right now we're actively trying to get the same success for attention heads . You can do it for residual stream, MLP, and attention throughout the whole model. Hopefully at that point you can also identify broader circuits through the model that are more general reasoning abilities that will activate or not activate.", "But in your case where we're trying to figure out if this pull request should be approved or not. I think you can flag or detect features that correspond to deceptive behavior, malicious behavior, these sorts of things, and see whether or not those have fired. That would be an immediate thing. You can do more than that, but that would be an immediate one.", "Dwarkesh Patel 02:16:53", "But before I trace down on that, what does a reasoning circuit look like? What would that look like when you found it?", "Trenton Bricken 02:17:00", "Yeah, so, I mean, the induction head is probably one of the simplest cases.", "Dwarkesh Patel 02:17:02", "But it's not reasoning, right?", "Trenton Bricken 02:17:04", "Well, what do you call reasoning, right? For context for listeners, the induction head is basically, when you see the line, “Mr. and Mrs. Dursley did something. Mr. _____,” and you're trying to predict what “blank” is and the head has learned to look for previous occurrences of the word “Mr.”  and look at the word that comes after it and then copy and paste that as the prediction for what should come next. It's a super reasonable thing to do and there is computation being done there to accurately predict the next token.", "Sholto Douglas 02:17:43", "Yeah, that is context dependent.", "Dwarkesh Patel 02:17:45", "But it's not reasoning. You know what I mean?", "Trenton Bricken 02:17:49", "I guess going back to the “associations all the way down.” It’s if you chain together a bunch of these reasoning circuits, or heads, that have different rules for how to relate information.", "Dwarkesh Patel 02:18:02", "But in this sort of zero shot case, something is happening when you pick up a new game and you immediately start understanding how to play it. And it doesn't seem like an induction head kind of thing.", "Trenton Bricken 02:18:13", "Or I think there would be another circuit for extracting pixels and turning them into latent representations of the different objects in the game, right? And a circuit that is learning physics.", "Dwarkesh Patel 02:18:26", "What would that look like? Because the induction head is like one layer transformer?", "Trenton Bricken 02:18:30", "Two layer.", "Dwarkesh Patel 02:18:32", "So you can kind of see the thing that is a human picking up a new game and understanding it. How would you think about what that is? I presume it's across multiple layers. What would that physically look like? How big would it be maybe?", "Trenton Bricken 02:18:53", "I mean, that would just be an empirical question, right? How big does the model need to be to perform this task? Maybe it's useful if I just talk about some other circuits that we've seen. So we've seen the IOI circuit , which is the indirect object identification. It's like, “Mary and Jim went to the store, Jim gave the object to ____.” It would predict “Mary” because Mary's appeared before, as the indirect object. Or, it'll infer pronouns. This circuit even has behavior where if you ablate it, then other heads in the model will pick up that behavior. We'll even find heads that want to do copying behavior, and then other heads will suppress it. So it's one head's job to just always copy the token that came before or the token that came five before, or whatever. And then it's another head's job to be like, “no, do not copy that thing.” There are lots of different circuits performing, in these cases, pretty basic operations. But when they're chained together you can get unique behaviors.", "Dwarkesh Patel 02:20:00", "It won't be something you can see in like a two layer transformer, so will you just be like, “this is the circuit for deception” or whatever? This part of the network fired when we at the end identified the thing as being deceptive. This part didn't fire when we didn't identify it as being deceptive. Therefore, this must be the deception circuit.", "Trenton Bricken 02:20:25", "I think a lot of analysis like that. Anthropic has done quite a bit of research before on sycophancy , which is the model saying what it thinks you want to hear", "Dwarkesh Patel 02:20:36", "That requires us at the end to be able to label which one is bad and which one is good.", "Trenton Bricken 02:20:42", "Yeah, so we have tons of instances–and actually as you make a lot of models larger, they do more of this–where the model clearly has features that model another person's mind and some subset of these, we're hypothesizing here, would be associated with more deceptive behavior.", "Dwarkesh Patel 02:21:03", "Although it's doing that by… I don't know. ChatGPT is probably modeling me because that's what RLHF induces it to do.", "Trenton Bricken 02:21:10", "Yeah. Theory of mind .", "02:21:12 - Will interp actually work on superhuman models", "Dwarkesh Patel 02:21:12", "So first of all, there’s the thing you mentioned earlier about redundancy. So then have you caught the whole thing that could cause deception of the whole thing or is it just one instance of it? Second of all, are your labels correct? Maybe you thought this wasn't deceptive but it’s still deceptive. Especially if it's producing output you can't understand. Third, is the thing that's gonna be the bad outcome something that's even human-understandable? Deception is a concept we can understand.", "Trenton Bricken 02:21:41", "A lot to unpack here. A few things. It's fantastic that these models are deterministic. When you sample from them, it's stochastic. But I can just keep putting in more inputs and ablate every single part of the model. This is kind of the pitch for computational neuroscientists to come and work on interpretability. It's like you have this alien brain, you have access to everything in it, and you can just ablate however much of it you want.", "So I think if you do this carefully enough you really can start to pin down what are the circuits involved and what are the backup circuits, these sorts of things. It’s a bit of a cop out answer but it's important to keep in mind doing automated interpretability. As our models continue to get more capable, we have them assign labels or run some of these experiments at scale. With respect to detecting superhuman performance, which I think was the last part of your question, aside from the cop out answer, if we buy this \"associations all the way down,\" you should be able to coarse-grain the representations at a certain level such that they then make sense.", "I think it was even in Demis's podcast. He's talking about how if a chess player makes a superhuman move, they should be able to distill it into reasons why they did it. Even if the model is not going to tell you what it is, you should be able to decompose that complex behavior into simpler circuits or features to really start to make sense of why it did that thing.", "Dwarkesh Patel 02:23:08", "There's a separate question of if such representation exists. It seems like it must or actually I'm not sure if that's the case. And secondly, whether using this sparse autoencoder setup you could find it. In this case, if you don't have labels that are adequate to represent it, you wouldn't find it.", "Trenton Bricken 02:23:28", "Yes and no. We are actively trying to use dictionary learning now on the sleeper agents work, which we talked about earlier. If I just give you a model, can you tell me if there's this trigger in it and if it's going to start doing interesting behavior? It's an open question whether or not when it learns that behavior, it's part of a more general circuit that we can pick up on without actually getting activations for and having it display that behavior. Because that would kind of be cheating then. Or if it's learning some hacky trick that's a separate circuit that you'll only pick up on if you actually have it do that behavior. But even in that case, the geometry of features gets really interesting, because fundamentally, each feature is in some part of your representation space and they all exist with respect to each other.", "So in order to have this new behavior, you need to carve out some subset of the feature space for the new behavior and then push everything else out of the way to make space for it. Hypothetically, you can imagine you have your model before you've taught it this bad behavior and you know all the features or have some coarse-grained representation of them. You then fine-tune it such that it becomes malicious and then you can kind of identify this black hole region of feature space where everything else has been shifted away from that and you haven't put in an input that causes it to fire. Then you can start searching for what is the input that would cause this part of the space to fire. What happens if I activate something in this? There are a whole bunch of other ways that you can try and attack that problem.", "Dwarkesh Patel 02:25:00", "This is sort of a tangent, but one interesting idea I heard was if that space is shared between models then you can imagine trying to find it in an open source model to then make… Like Gemma , Google's newly released open source model. They said in the paper that it's trained using the same architecture or something like that.", "Sholto Douglas 02:25:20", "I have to be honest, I didn't know because I haven't read the Gemma paper .", "Dwarkesh Patel 02:25:23", "So to the extent that's true, how much of the red teaming you do on Gemma is potentially helping you jailbreak into Gemini?", "Trenton Bricken 02:25:35", "This gets into the fun space of how universal are features across models. Our “Towards Monosemanticity” paper looked at this a bit. I can't give you summary statistics but there’s the Base64 feature, for example, which we see across a ton of models. There are actually three of them, but they'll fire for and model Base64 encoded text, which is prevalent in every URL and there are lots of URLs in the training data. They have really high cosine similarity across models. So they all learn this feature and within a rotation.", "Sholto Douglas 02:26:08", "Like the actual vectors itself.", "Trenton Bricken 02:26:09", "Yeah. I wasn't part of this analysis but it definitely finds the feature and they're pretty similar to each other across two separate models, the same model architecture but trained with different random seeds.", "Sholto Douglas 02:26:22", "It supports the quanta theory of neural scaling . It's a hypothesis, right? We just look at all models on a similar data set. We will learn the same features in the same order-ish. Roughly, you learn your N grams, you learn your induction heads, and you learn to put full stops after numbered lines and this kind of stuff.", "Dwarkesh Patel 02:26:36", "So this is another tangent. To the extent that that's true, and I guess there's evidence that it is true, why doesn't curriculum learning work? Because if it is the case that you learn certain things first, shouldn't directly training those things first lead to better results?", "Sholto Douglas 02:26:49", "Both Gemini papers mention some aspect of curriculum learning.", "Dwarkesh Patel 02:26:53", "Okay, interesting. I find the fact that fine-tuning works as evidence of curriculum learning, right?", "Because the last things you're training on have a disproportionate impact.", "Sholto Douglas 02:27:02", "I wouldn't necessarily say that. There’s one mode of thinking in which fine-tuning is specialized, you've got this latent bundle of capabilities and you're specializing it for this particular use case that you want. I think I'm not sure how true or not that is.", "Trenton Bricken 02:27:15", "I think the David Bell lab paper kind of supports this. You have that ability and you're just getting better at entity recognition, fine-tuning that circuit instead of other ones.", "Dwarkesh Patel 02:27:23", "Sorry, what was the thing we were talking about before?", "Sholto Douglas 02:27:25", "Generally I do think curriculum learning is a really interesting thing that people should explore more. It seems very plausible. I would really love to see more analysis along the lines of the quantum theory stuff. When understanding better, what do you actually learn at each stage and decomposing that out? Exploring whether or not curriculum learning changes that or not.", "Dwarkesh Patel 02:27:43", "By the way I just realized, I just got in conversation mode and forgot there's an audience. Curriculum learning is when you organize the data set. When you think about a human, how they learn, they don't just see a random Wiki text and they just try to predict it. They're like, “we'll start you off with Lorax or something and then you'll learn.” I don't even remember what first-grade was like but you learned the things that first-graders learn and then second-graders and so forth. So you would imagine,", "Sholto Douglas 02:28:10", "We know you never got past first-grade.", "Dwarkesh Patel 02:28:25", "Anyways, let's get back to the big picture before we get into a bunch of interpretability details. There's two threads I want to explore. First is, it makes me a little worried that there's not even an alternative formulation of what could be happening in these models that could invalidate this approach. I mean we do know that we don't understand intelligence. There are definitely unknown unknowns here. So the fact that there's not a null hypothesis… What if we’re just wrong and we don't even know the way in which we're wrong, which actually increases the uncertainty.", "Trenton Bricken 02:29:05", "So it's not that there aren't other hypotheses, it's just that I have been working on superposition for a number of years and am very involved in this effort. So I'm less sympathetic to these other approaches, especially because our recent work has been so successful.", "Sholto Douglas 02:29:26", "And quite high explanatory power. There's this beauty, like in the original scaling laws paper, there's this little bump that apparently corresponds to when the model learns induction heads.", "And then after that, it sort of goes off track, learns induction heads, gets back on track. It’s an incredible piece of retroactive explanatory power.", "Trenton Bricken 02:29:50", "Before I forget it, I do have one thread on feature universality that you might want to have in. So there, there's some really interesting behavioral and evolutionary biology experiments on whether humans should learn a real representation of the world or not? You can imagine a world in which we saw all venomous animals as flashing neon pink, a world in which we survive better. So it would make sense for us to not have a realistic representation of the world.", "There's some work where they'll simulate little basic agents and see if the representations they learn map to the tools they can use and the inputs they should have. It turns out if you have these little agents perform more than a certain number of tasks, given these basic tools and objects in the world, then they will learn a ground truth representation. Because there are so many possible use cases that you need, that you want to learn what the object actually is and not some cheap visual heuristic or other thing.", "We haven't talked at all about free energy principle or predictive coding or anything else. But to the extent that all living organisms are trying to actively predict what comes next and form a really accurate world model, I'm optimistic that we are learning genuine features about the world that are good for modeling it and our language models will do the same, especially because we're training them on human data and human texts.", "Dwarkesh Patel 02:31:23", "Another dinner party question. Should we be less worried about misalignment? Maybe that's not even the right term for what I'm referring to, but alienness and Shoggoth-ness ? Given feature universality there are certain ways of thinking and ways of understanding the world that are instrumentally useful to different kinds of intelligences. So should we just be less worried about bizarro paperclip maximizers as a result?", "Trenton Bricken 02:31:52", "I think this is kind of why I bring this up as the optimistic take. Predicting the internet is very different from what we're doing though. The models are way better at predicting next tokens than we are. They're trained on so much garbage. They're trained on so many URLs. Like in the dictionary learning work, we find there are three separate features for Base64 encodings.", "Even that is kind of an alien example that is probably worth talking about for a minute. One of these Base64 features fired for numbers and predicted more of those. Another fired for letters. But then there was this third one that we didn't understand. And it fired for a very specific subset of Base64 features. Someone on the team who clearly knows way too much about Base64 realized that this was the subset that was ASCII decodable. So you could decode it back into the ASCII characters. The fact that the model learned these three different features and it took us a little while to figure out what was going on is very Shoggoth-esque.", "Dwarkesh Patel 02:32:58", "That it has a denser representation of regions that are particularly relevant to predicting the next token.", "Trenton Bricken 02:33:03", "Yeah, it's clearly doing something that humans don't do. You can even talk to any of the current models in Base64 and it will reply in Base64 and you can then decode it and it works great.", "Dwarkesh Patel 02:33:16", "I wonder if that particular example implies that the difficulty of interpretability with smarter models will be harder because it requires somebody with esoteric knowledge, like the person who just happened to see that Base64 has whatever that distinction was. Doesn't that imply that when you have the million line pull request, there is no human that's going to be able to decode two different features?", "Sholto Douglas 02:33:46", "And that's when you type a comment like, “small CLs please.”", "Trenton Bricken 02:33:50", "Exactly. No, I mean you could do that, right? One technique here is anomaly detection. One beauty of dictionary learning instead of linear probes is that it's unsupervised. You are just trying to learn to span all of the representations that the model has and then interpret them later. But if there's a weird feature that suddenly fires for the first time that you haven't seen before, that's a red flag. You could also coarse-grain it so that it's just a single Base64 feature. Even the fact that this came up and we could see that it specifically fires fpr these particular outputs gets you a lot of the way there.", "I'm even familiar with cases from the auto-interpretability side. A human will look at a feature and try to annotate it as firing for Latin words. And then when you ask the model to classify it, it says it fires for Latin words that define plants. So it can already beat the human in some cases for labeling what's going on.", "Dwarkesh Patel 02:34:48", "At scale, this would require an adversarial thing between models where you have some model with millions of features, potentially for GPT-6, and just a bunch of models trying to figure out what each of these features means. Does that sound right?", "Trenton Bricken 02:35:07", "Yeah, but you can even automate this process. This goes back to the determinism of the model. You could have a model that is actively editing input text and predicting if the feature is going to fire or not, and figure out what makes it fire, what doesn't, and search the space.", "Dwarkesh Patel 02:35:24", "I want to talk more about the feature splitting because I think that's an interesting thing that has been underexplored.", "Trenton Bricken 02:35:29", "Especially for scalability, I think it's underappreciated right now.", "Dwarkesh Patel 02:35:33", "First of all, how do we even think about it? Is it really just that you can keep going down and down and there's no end to the amount of features?", "Trenton Bricken 02:35:41", "So at some point I think you might just start fitting noise, or things that are part of the data but that the model isn't actually–", "Dwarkesh Patel 02:35:50", "Do you want to explain what feature splitting is?", "Trenton Bricken 02:35:51", "It's the part before, where the model will learn however many features it has capacity for that still span the space of representation.", "Dwarkesh Patel 02:36:02", "So give an example, potentially.", "Trenton Bricken 02:36:03", "So you learn that if you don't give the model that much capacity for the features its learning, concretely if you project to not as high a dimensional space, it'll learn one feature for birds. But if you give the model more capacity, it will learn features for all the different types of birds. So it's more specific than otherwise. Oftentimes, there's the bird vector that points in one direction and all the other specific types of birds point in a similar region of the space but are obviously more specific than the coarse label.", "Dwarkesh Patel 02:36:36", "Okay, so let's go back to GPT-7. First of all, is this sort of like a linear tax on any model to figure it out? Even before that, is this a one time thing you had to do or is this the kind of thing you have to do on every output? Or just one time it's not deceptive and we're good to roll?", "Trenton Bricken 02:36:55", "So you do dictionary learning after you've trained your model and you feed it a ton of inputs and you get the activations from those. Then you do this projection into the higher dimensional space. So the method is unsupervised in that it's trying to learn these sparse features. You're not telling them in advance what they should be but, it is constrained by the inputs you're giving the model.", "Two caveats here. One, we can try and choose what inputs we want. So if we're looking for theory of mind features that might lead to deception, we can put in the sycophancy data set.", "Hopefully at some point we can move into looking at the weights of the model alone, or at least using that information to do dictionary learning. I think in order to get there, that's such a hard problem that you need to make traction on just learning what the features are first. So what's the cost of this?", "Dwarkesh Patel 02:37:46", "Can you repeat the last sentence? About the weights of the model alone.", "Trenton Bricken 02:37:50", "Right now we just have these neurons in the model. They don't make any sense. We apply dictionary learning. We get these features out. They start to make sense but that depends on the activations of the neurons. The weights of the model itself, like what neurons are connected to other neurons, certainly has information in it.The dream is that we can kind of bootstrap towards actually making sense of the weights of the model that are independent of the activations of the data. I'm not saying we've made any progress here, it's a very hard problem. But it feels like we'll have a lot more traction and be able to sanity check what we're finding with the weights if we're able to pull out features first.", "Dwarkesh Patel 02:38:28", "For the audience, weights are permanent. I don't know if permanent is the right word, but they are the model itself whereas activations are the artifacts of any single call.", "Sholto Douglas 02:38:39", "In a brain metaphor, the weights are like the actual connection scheme between neurons and the activations of the current neurons that are lining up.", "Dwarkesh Patel 02:38:48", "Okay. So there's going to be two steps to this for GPT-7 or whatever model we're concerned about. Actually, correct me if I'm wrong, but first training the sparse autoencoder and doing the unsupervised projection into a wider space of features that have a higher fidelity to what is actually happening in the model. And then secondly, labeling those features. Let's say the cost of training the model is N. What will those two steps cost relative to N?", "Trenton Bricken 02:39:20", "We will see. It really depends on two main things. What are your expansion factors? How much are you projecting into the higher-dimensional space and how much data do you need to put into the model? How many activations do you need to give it? This brings me back to the feature splitting because if you know you're looking for specific features then you can start with a cheaper, coarse representation.", "So maybe my expansion factor is only two. So I have a thousand neurons and I'm projecting to a 2000 dimensional space. I get 2000 features out, but they're really coarse. Previously I had the example for birds. Let's move that example to a biology feature but I really care if the model has representations for bioweapons and trying to manufacture them. So what I actually want is like an anthrax feature. Let's say you only see the anthrax feature if, instead of going from a thousand dimensions to two thousand dimensions, I go to a million dimensions.", "You can imagine this, this big tree of semantic concepts where biology splits into cells versus whole body biology and then further down it splits into all these other things. Rather than needing to immediately go from a thousand to a million and picking out that one feature of interest, you can find the direction that the biology feature is pointing in, which again is very coarse, and then selectively search around that space. So only do dictionary learning, if something in the direction of the biology feature fires first. The computer science metaphor here would be like, instead of doing breadth-first search , you're able to do depth-first search where you're only recursively expanding and exploring a particular part of this semantic tree of features.", "Dwarkesh Patel 02:41:05", "These features are not organized in ways that are intuitive for humans, right? Because we just don't have to deal with Base64, we just don't dedicate that much firmware to deconstructing which kind of Base64 it is. How would we know that the subjects… This will go back to the MOE discussion we'll have. I guess we might as well talk about it. “ Mixtral of Experts ”, the Mistral paper, talked about how the experts weren't specialized in a way that we could understand. There's not like a chemistry expert or a physics expert or something. So why would you think that it will be a biology feature and then you deconstruct, rather than “blah” and then you deconstruct. It's like “anthrax” and you're like “shoes” or whatever.", "Trenton Bricken 02:41:53", "So I haven't read the Mistral paper, but if you just look at the neurons in a model, they're polysemantic. So if all they did was just look at the neurons in a given head, it's very plausible that it's also polysemantic because of superposition.", "Sholto Douglas 02:42:10", "Talking on the thread that Dwarkesh mentioned there, have you seen in the subtrees when you expand them out, something in a subtree which you really wouldn't guess should be there based on the high level abstraction?", "Trenton Bricken 02:42:20", "This is a line of work that we haven't pursued as much as I want to yet but I think we're planning to, I hope that external groups do as well. What is the geometry of feature space? What's the geometry and how does that change over time?", "Sholto Douglas 02:42:32", "It would really suck if the anthrax feature happened to be below the coffee can substrate or something like that, right? That feels like the kind of thing that you could quickly try and find proof of, which would then mean that you need to then solve that problem and inject more structure into the geometry.", "Trenton Bricken 02:42:51", "Totally. It would really surprise me, especially given how linear the model seems to be, if there isn't some component of the anthrax feature, vector, that is similar to the biology vector and that they're not in a similar part of the space. But yes. Ultimately machine learning is empirical. We need to do this. I think it's going to be pretty important for certain aspects of scaling dictionary learning.", "Sholto Douglas 02:43:14", "Interesting. On the MOE discussion, there's an interesting scaling vision transformers paper that Google put out a little while ago. They do ImageNet classification with an MOE and they find really clear class specialization there for experts. There's a clear dog expert.", "Dwarkesh Patel 02:43:31", "Wait, so did the Mistral people just not do a good job of identifying those?", "Sholto Douglas 02:43:35", "It's hard. It's entirely possible that in some respects, there's almost no reason that all of the different archive features should go to one expert. I don't know what buckets they had in their paper, but let's say they had arXiv papers as one of the things. You could imagine biology papers going here, math papers going here, and all of a sudden your breakdown is ruined.", "But that vision transformer one, where the class separation is really clear and obvious, gives I think some evidence towards the specialization hypothesis.", "Trenton Bricken 02:44:08", "I think images are also in some ways just easier to interpret than text. There’s Chris Olah’s interpretability work on AlexNet and these other models. In the original AlexNet paper , they actually split the model into two GPUs just because GPUs were so bad back then relatively speaking, they were still great at the time. That was one of the big innovations of the paper. They find branch specialization. And there's a Distill Pub article on this where colors go to one GPU and Gabor filters and line detectors go to the other. Like the floppy ear detector , that was just a neuron in the model that you could make sense of. You didn't need to disentangle superposition. So just different data set, different modality.", "02:45:05 - Sholto’s challenge for the audience", "Sholto Douglas 02:45:05", "I think a wonderful research project to do, if someone is out there listening to this, would be to try and take some of the techniques that Trenton's team has worked on and try and disentangle the neurons in the Mistral paper, Mixtral model , which is open source. I think that's a fantastic thing to do.", "It feels intuitively like there should be. They didn't demonstrate any evidence that there is. In general, there’s also a lot of evidence that there should be specialization. Go and see if you can find it. Anthropic has published most of their stuff on, as I understand it, dense models. Basically, that is a wonderful research project to try.", "Trenton Bricken 02:45:40", "Given Dwarkesh's success with the Vesuvius Challenge , we should be pitching more projects because they will be solved if we talk about them on the podcast.", "Dwarkesh Patel 02:45:47", "After the Vesuvius Challenge I was like, “wait why did I not even try.” Nat had told me about it before it dropped, because we recorded the episode before it dropped. Luke is obviously very smart and he's an amazing kid. He showed that a 21-year-old on some 1070 could do this. I was honestly thinking about that kind of experience like, “why didn't I do this. Fuck.”", "Trenton Bricken 02:46:25", "Yeah, get your hands dirty.", "Sholto Douglas 02:46:27", "Dwarkesh's request for research.", "Dwarkesh Patel 02:46:33", "Oh I want to harp back on the neuron thing you said. I think a bunch of your papers have said that there's more features than there are neurons. A neuron is like, weights go in and a number comes out. That's so little information. There's street names and species and whatever. There's more of those kinds of things than there are “number comes out” in a model. But “number comes out” is so little information. How is that encoding for–", "Trenton Bricken 02:47:10", "Superposition. You're just encoding a ton of features in these high-dimensional vectors.", "Dwarkesh Patel 02:47:17", "In a brain, is there an axonal firing or however you think about it? I don't know how you think about how much superposition is there in the human brain?", "Trenton Bricken 02:47:26", "So Bruno Olshausen, who I think of as the leading expert on this, thinks that all the brain regions you don't hear about are doing a ton of computation in superposition. So everyone talks about V1 as having Gabor filters and detecting lines of various sorts and no one talks about V2. I think it's because we just haven't been able to make sense of it.", "Dwarkesh Patel 02:47:48", "What is V2?", "Trenton Bricken 02:47:49", "It's the next part of the visual processing stream. So I think it's very likely that, fundamentally, superposition seems to emerge when you have high-dimensional data that is sparse. To the extent that you think the real world is that, which I would argue it is, we should expect the brain to also be underparameterized in trying to build a model of the world and also use superposition.", "Sholto Douglas 02:48:11", "You can get a good intuition for this. Correct me if this example is wrong but consider a 2D plane, right? Let's say you have two axes which represent a two-dimensional feature space, two neurons basically. You can imagine them each turning on to various degrees. That's your X coordinate and your Y coordinate, but you can now map this onto a plane. You can actually represent a lot of different things in different parts of the plane.", "Dwarkesh Patel 02:48:37", "Oh, okay. So crucially then, superposition is not an artifact of a neuron. It is an artifact of the space that is created.", "Trenton Bricken 02:48:44", "It's a combinatorial code,", "Dwarkesh Patel 02:48:45", "Okay, cool. We kind of talked about this but I think it’s kind of wild that this seems to be, to the best of our knowledge, the way intelligence works in these models and presumably also in brains. There's a stream of information going through that has \"features\" that are infinitely, or at least to a large extent, splittable and you can expand out a tree of what this feature is. And what's really happening is a stream, that feature is getting turned into this other feature or this other feature is added.", "I don't know. It's not something I would have thought of intelligence as. It's a surprising thing. It's not what I would have expected necessarily.", "Trenton Bricken 02:49:35", "What did you think it was?", "Dwarkesh Patel 02:49:36", "I don't know, man. I mean–", "Sholto Douglas 02:49:39", "GOFAI. GOFAI. He's a GOFAI-er.", "Trenton Bricken 02:49:40", "Well, actually, that's a great segue because all of this feels like GOFAI. You're using distributed representations, but you have features and you're applying these operations to the features. There’s this whole field of vector symbolic architectures , which is this computational neuroscience thing. All you do is put vectors in superposition, which is literally a summation of two high-dimensional vectors, and you create some interference. But if it's high-dimensional enough, then you can represent them and you have variable bindings where you connect one by another. If you're dealing with binary vectors, it's just the XOR operation. So you have A, B, you bind them together. Then if you query with A or B again, you get out the other one. This is basically like key value pairs from attention. With these two operations, you have a Turing complete system , with which you can, if you have enough nested hierarchy, represent any data structure you want. Et cetera, et cetera.", "Dwarkesh Patel 02:50:39", "Let's go back to superintelligence. So walk me through GPT-7. You've got the sort of depth-first search on its features. Okay so GPT-7 has been trained. What happens next? Your research has succeeded. GPT-7 has been trained. What are you, what are we doing now?", "Trenton Bricken 02:50:59", "We try to get it to do as much interpretability work and other safety work as possible.", "Dwarkesh Patel 02:51:04", "No, but concretely, what has happened such that you're like, “cool, let's deploy GPT-7?”", "Trenton Bricken 02:51:10", "I mean we do have our responsible scaling policy and it’s been really exciting to see other labs adopt it.", "Dwarkesh Patel 02:51:19", "Specifically from the perspective of your research. Given your research, we got the thumbs up on GPT-7 from you, or actually, we should say Claude. Then, what is the basis on which you're telling the team, “hey, let's go ahead”?", "Trenton Bricken 02:51:36", "If it's as capable as GPT-7 implies here, I think we need to make a lot more interpretability progress to be able to comfortably give the green light to deploy it. I would definitely not, I'd be crying. Maybe my tears would interfere with the GPUs, or TPUs.", "Sholto Douglas 02:51:58", "Guys, Gemini 5, TPUs.", "Dwarkesh Patel 02:52:09", "But given the way your research is progressing, What does it kind of look like to you? If this succeeded, what would it mean for us to okay GPT-7 based on your methodology?", "Trenton Bricken 02:52:22", "Ideally we can find some compelling deception circuit which lights up when the model knows that it's not telling the full truth to you.", "Dwarkesh Patel 02:52:31", "Why can't you just do a linear probe like Collin Burns did?", "Trenton Bricken 02:52:34", "The CCS work is not looking good in terms of replicating or actually finding truth directions. In hindsight, why should it have worked so well? With linear probes, you need to know what you're looking for and it's a high-dimensional space. It's really easy to pick up on a direction that's just not–", "Dwarkesh Patel 02:52:50", "Wait, but here you also need to label the features. So you still need to know.", "Trenton Bricken 02:52:53", "You need to label them post hoc, but it's unsupervised. You're just like, “give me the features that explain your behavior.” It’s the fundamental question, right? The actual setup is we take the activations, we project them to this higher-dimensional space, and then we project them back down again. So it's like, “reconstruct or do the thing that you were originally doing, but do it in a way that's sparse.”", "Dwarkesh Patel 02:53:14", "By the way for the audience, a linear probe is when you just classify the activations. From what I vaguely remember about the paper, if it's telling a lie then you just train a classifier on whether in the end it was a lie. Or just wrong or something?", "Trenton Bricken 02:53:36", "It was like true or false questions.", "Dwarkesh Patel 02:53:37", "It's a classifier on activations.", "Trenton Bricken 02:53:41", "So what we do for GPT-7, ideally we have some deception circuit that we've identified that appears to be really robust and–", "Dwarkesh Patel 02:53:51", "So you've done the projecting out to the million features or something. Maybe we’re using “feature” and “circuit” interchangeably when they're not. Is there a deception circuit?", "Trenton Bricken 02:54:04", "So I think there are features across layers that create a circuit. Hopefully the circuit gives you a lot more specificity and sensitivity than an individual feature. And hopefully we can find a circuit that is really specific to the model deciding to be deceptive, in cases that are malicious. I'm not interested in a case where it's just doing theory of mind to help you write a better email to your professor. I'm not even interested in cases where the model is just modeling the fact that deception has occurred.", "Dwarkesh Patel 02:54:41", "But doesn't all this require you to have labels for all those examples? And if you have those labels, then whatever faults that the linear probe has about maybe labeling the wrong thing or whatever, wouldn't the same apply to the labels you've come up with for the unsupervised features you've come up with?", "Trenton Bricken 02:55:01", "So in an ideal world, we could just train on like the whole data distribution and then find the directions that matter. To the extent that we need to reluctantly narrow down the subset of data that we're looking over, just for the purposes of scalability, we would use data that looks like the data you'd use to fit a linear probe. But again, with the linear probe you're also just finding one direction. We're finding a bunch of directions here.", "Dwarkesh Patel 02:55:29", "And I guess the hope is that you found a bunch of things that light up when it's being deceptive. Then you can figure out why some of those things are lighting up in this part of the distribution and not this other part, and so forth.", "Trenton Bricken 02:55:38", "Totally. Yeah.", "Dwarkesh Patel 02:55:40", "Do you anticipate you'll be able to understand? The current models you've studied are pretty basic, right? Do you think you'll be able to understand why GPT-7 fires in certain domains, but not in other domains?", "Trenton Bricken 02:55:50", "I'm optimistic. So I guess one thing is that this is a bad time to answer this question because we are explicitly investing in the longer term ASL-4 models, which GPT-7 would be. So we split the team where a third is focused on scaling up dictionary learning right now. That's been great. We publicly shared some of our 8-layer results . We've scaled up quite a lot past that at this point. Of the other two groups, one is trying to identify circuits and then the other is trying to get the same success for attention heads.", "So we're setting ourselves up and building the tools necessary to really find these circuits in a compelling way. But it's going to take another, I don't know, six months before that's really working well. But I can say that I'm optimistic and we're making a lot of progress.", "Dwarkesh Patel 02:56:33", "What is the highest level feature you've found so far? Like Base64 or whatever. In The Symbolic Species , the book you recommended, there's indexical things where you see a tiger and you're like, “run” and whatever. Just a very behaviorist thing. Then there's a higher level at which, when I refer to love, it refers to a movie scene or my girlfriend or whatever.", "Trenton Bricken 02:57:01", "It's like the top of the tent.", "Dwarkesh Patel 02:57:02", "Yeah. What is the highest level of association you found?", "Trenton Bricken 02:57:07", "Well publicly, one of the ones that we shared in our update. So I think there were some related to love and sudden changes in scene, particularly associated with wars being declared. There are a few of them in that post , if you want to link to it. But even Bruno Olshausen had a paper back in 2018, 2019, where they applied a similar technique to a BERT model and found that as you go to deeper layers of the model, things become more abstract.", "So I remember in the earlier layers, there'd be a feature that would just fire for the word “park.” But later on there was a feature that fired for “park” as a last name, like Lincoln Park, it's a common Korean last name as well. And then there was a separate feature that would fire for parks as grassy areas. So there's other work that points in this direction.", "Dwarkesh Patel 02:57:55", "What do you think we'll learn about human psychology from the interpretability stuff? I'll give you a specific example. I think one of your updates put it as “persona lock-in.” You remember Sydney Bing or whatever it's locked into. I think that was actually quite endearing.", "Sholto Douglas 02:58:16", "I thought it's so funny. I'm glad it's back in Copilot.", "Trenton Bricken 02:58:20", "It's been misbehaving recently .", "Dwarkesh Patel 02:58:22", "Actually this is another sort of thread. But there was a funny one where I think it was negging a New York Times reporter . It was like, “you are nothing. Nobody will ever believe you. You are insignificant.”", "Sholto Douglas 02:58:39", "It was trying to convince him to break up with his wife or something.", "Dwarkesh Patel 02:58:44", "So this is an interesting example. Personas. Is Sydney Bing having this personality a feature versus another personality it could get locked into? And is that fundamentally what humans are like where in front of other different people, I'm like a different sort of personality? Is that the same kind of thing that's happening to ChatGPT when it gets RL-ed? I don't know. A whole cluster of questions you can answer.", "Trenton Bricken 02:59:19", "I really want to do more work. The sleeper agents is in this direction of what happens to a model when you fine-tune it, when you RLHF it, these sorts of things. Maybe it's trite, but you could just say you conclude that people contain multitudes and so much as they have lots of different features.", "There's even the stuff related to the Waluigi effects where in order to know what's good or bad, you need to understand both of those concepts. So we might have to have models that are aware of violence and have been trained on it in order to recognize it. Can you post hoc identify those features and ablate them in a way where maybe your model is slightly naive, but you know that it's not going to be really evil? Totally, that's in our toolkit, which seems great.", "Dwarkesh Patel 02:59:58", "Oh, really? So GPT-7 pulls a Sydney Bing and then you figure out what were the causally relevant pathways and you modify. The pathway to you looks like you just change those? But you were mentioning earlier that there's a bunch of redundancy in the model.", "Trenton Bricken 03:00:14", "So you need to account for all that, but we have a much better microscope into this now than we used to. Sharper tools for making edits.", "Sholto Douglas 03:00:25", "At least from my perspective, that seems like one of the primary ways of confirming the safety or the reliability of the model to some degree where you can say, “okay, we found the circuits responsible, we ablated them, and under a battery of tests we haven't been able to now replicate the behavior which we intended to ablate.” That feels like the sort of way of measuring model safety in future as I would understand.", "That's why I'm incredibly hopeful about their work. To me, it seems so much more of a precise tool than something like RLHF. With RLHF, you’re very prey to the black swan thing. You don't know if it's going to do something wrong in a scenario that you haven't measured. Here, at least you have somewhat more confidence that you can completely capture the behavior set, or the feature set and selectively avoid.", "Dwarkesh Patel 03:01:16", "Although you haven’t accurately labeled necessarily.", "Sholto Douglas 03:01:19", "Not necessarily, but with a far higher degree of confidence than any other approach that I've seen.", "Dwarkesh Patel 03:01:24", "What are your unknown unknowns for superhuman models in terms of this kind of thing? What are the labels that are going to be things on which we can determine whether this thing is cool or a paperclip maximizer.", "Trenton Bricken 03:01:44", "We’ll see. The superhuman feature question is a very good one. I think we can attack it but we're gonna need to be persistent. The real hope here is automated interpretability. You could even have a debate set up where two different models are debating what the feature does and then they can actually go in and make edits and see if it fires or not or not. It is just this wonderful, closed environment that we can iterate on really quickly. That makes me optimistic.", "Dwarkesh Patel 03:02:18", "Do you worry about alignment succeeding too hard?  I would not want either companies or governments, whoever ends up in charge of these AI systems, to have the level of fine-grained control we would have if your agenda succeeds, over AIs. Both for the ickiness of having this level of control over an autonomous mind and secondly, I just don't fucking trust these guys. I'm just kind of uncomfortable with, say, the loyalty feature being turned up. How much worry do you have about having too much control over the AIs? Not specifically you, but for whoever ends up in charge of these AI systems being able to lock in whatever they want.", "Trenton Bricken 03:03:07", "I think it depends on what government exactly has control and what the moral alignment is there.", "Sholto Douglas 03:03:15", "That is the whole value lock-in argument in my mind. It's definitely one of the strongest contributing factors for why I am working on capabilities at the moment. I think the current player set is actually extremely well-intentioned. For this kind of problem, I think we need to be extremely open about it. I think directions like publishing the constitution that you expect your model to abide by–trying to make sure that you RLHF it towards that, and ablate that, and have the ability for everyone to offer feedback and contribution to that–is really important.", "Dwarkesh Patel 03:03:48", "Sure. Alternatively, don't deploy when you're not sure. Which would also be bad because then we just never catch it.", "Sholto Douglas 03:03:55", "Right, exactly.", "03:03:57 - Rapid fire", "Dwarkesh Patel 03:03:57", "Some rapid fire. What is the bus factor for Gemini?", "Sholto Douglas 03:04:06", "I think there are a number of people who are really, really critical. If you took them out then the performance of the program would be dramatically impacted. This is both on modeling/making decisions about what to actually do and importantly on the infrastructure side of the things. It's just the stack of complexity builds, particularly when someone like Google has so much vertical integration. When you have people who are experts, they become quite important.", "Dwarkesh Patel 03:04:40", "Although I think it's an interesting note about the field that people like you can get in and in a year or so you're making important contributions. Especially with Anthropic, but many different labs have specialized in hiring total outsiders, physicists or whatever. You just get them up to speed and they're making important contributions. I feel like you couldn't do this in a bio lab or something. It's an interesting note on the state of the field.", "Trenton Bricken 03:05:05", "I mean, bus factor doesn't define how long it would take to recover from it, right? Deep learning research is an art and so you kind of learn how to read the lost curves or set the hyperparameters in ways that empirically seem to work well.", "Sholto Douglas 03:05:20", "It's also organizational things like creating context. One of the most important and difficult skills to hire for is creating this bubble of context around you that makes other people around you more effective and know what the right problem is to work on. That is a really tough thing to replicate.", "Trenton Bricken 03:05:36", "Yes, totally.", "Dwarkesh Patel 03:05:37", "Who are you paying attention to now in terms of things coming down the pike of multimodality, long-context, maybe agents, extra reliability, etc? Who is thinking well about what that implies?", "Sholto Douglas 03:05:56", "It's a tough question. I think a lot of people look internally these days for their sources of insight or progress. Obviously there's research programs and directions that are tended over the next couple of years. Most people, as far as betting on what the future will look like, refer to an internal narrative. It's difficult to share.", "Trenton Bricken 03:06:27", "If it works well, it's probably not being published.", "Dwarkesh Patel 03:06:31", "That was one of the things in the scaling post . I was referring to something you said to me. I miss the undergrad habit of just reading a bunch of papers. Because now nothing worth reading is published.", "Sholto Douglas 03:06:45", "And the community is progressively getting more on track with what I think are the right and important directions.", "Dwarkesh Patel 03:06:53", "You're watching it like an agent AI?", "Sholto Douglas 03:06:55", "No, but it is tough that there used to be this signal from big labs about what would work at scale and it's currently really hard for academic research to find that signal. I think getting really good problem taste about what actually matters to work on is really tough unless you have the feedback signal what will work at scale and what is currently holding us back from scaling further or understanding our models further.", "This is something where I wish more academic research would go into fields like interpretability, which are legible from the outside. Anthropic deliberately publishes all its research here and it seems underappreciated. I don't know why there aren't dozens of academic departments trying to follow Anthropic in interpretability research because it seems like an incredibly impactful problem that doesn't require ridiculous resources and has all the flavor of deeply understanding the basic science of what is actually going on in these things.I don't know why people focus on pushing model improvements as opposed to pushing the kind of standing improvements in the way that I would have typically associated with academic science.", "Trenton Bricken 03:08:06", "I do think the tide is changing there for whatever reason. Neel Nanda has had a ton of success promoting interpretability in a way where Chris Olah hasn't been as active recently in pushing things. Maybe because Neel's just doing quite a lot of the work, I don't know. Four or five years ago, Chris was really pushing and talking at all sorts of places and these sorts of things and people weren't anywhere near as receptive. Maybe they've just woken up to the fact that deep learning matters and is clearly useful post-ChatGPT. It’s kind of striking.", "Dwarkesh Patel 03:08:38", "Okay. I'm trying to think of a good last question. One thing I’m thinking of is, do you think models enjoy next token prediction? We have this sense of things that were rewarded in our assessor environment. There's this deep sense of fulfillment that we think we're supposed to get from things like community, or sugar, or whatever we wanted on the African savannah. Do you think in the future, models that trained with RL and a lot of post-training on top, they'll like predicting the next token again in the way we just really like ice cream. Like in the good old days.", "Trenton Bricken 03:09:30", "So there's this ongoing discussion of “are models sentient or not” and “do you thank the model when it helps you?” But I think if you want to thank it, you actually shouldn't say thank you. You should just give it a sequence that's very easy. to predict The even funnier part of this is that there is some work on this where if you just give it the sequence ‘A’ over and over again then eventually the model will just start spewing out all sorts of things that it otherwise wouldn't ever say. So I won't say anything more about that but you should just give your model something very easy to predict as a nice little treat.", "Dwarkesh Patel 03:10:07", "This is what hedonium ends up being.", "Sholto Douglas 03:10:13", "Do we even like things that are easy to predict? Aren't we constantly in search of the bits of entropy? Shouldn't you be giving it things that are just slightly too hard to predict, just out of reach?", "Trenton Bricken 03:10:28", "I wonder, at least from the free energy principle perspective, you don't want to be surprised. So maybe it's that I don't feel surprised. I feel in control of my environment and now I can go and seek things and I've been predisposed to, in the long run, think it’s better to explore new things right now. Leave the rock that I've been sheltered under which ultimately leads me to build a house or some better structure. But we don't like surprises. I think most people are very upset when expectation does not meet reality.", "Sholto Douglas 03:11:00", "That's why babies love watching the same show over and over and over again, right?", "Trenton Bricken 03:11:03", "Yeah interesting. I can see that.", "Sholto Douglas 03:11:06", "I guess they're learning to model it and stuff too.", "Dwarkesh Patel 03:11:11", "Well, hopefully this will be the repeat that the AI has learned to love. I think that's a great place to wrap. I should also mention that the better part of what I know about AI, I've learned from just talking with you guys. We've been good friends for about a year now. I appreciate you guys getting me up to speed here.", "Trenton Bricken 03:11:32", "You ask great questions. It's really fun to hang and chat.", "Sholto Douglas 03:11:36", "I really treasure our time together.", "Trenton Bricken 03:11:38", "You're getting a lot better at pickleball.", "Sholto Douglas 03:11:39", "Hey, we're trying to progress to tennis. Come on.", "Dwarkesh Patel 03:11:51", "Awesome. Cool. Thanks." ]
[ "https://firstderivative.substack.com/", "https://twitter.com/_sholtodouglas?lang=en", "https://www.trentonbricken.com/about/", "https://noambrown.github.io/", "https://noambrown.github.io/papers/22-Science-Diplomacy-TR.pdf", "https://www.anthropic.com/", "https://www.transformer-circuits.pub/2022/mech-interp-essay", "https://blog.google/technology/ai/long-context-window-ai-models/", "https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#context-window", "https://arxiv.org/abs/2403.05530", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://www.hopsworks.ai/dictionary/in-context-learning-icl#:~:text=In%2Dcontext%20learning%20(ICL)%20learns%20a%20new%20task%20from,objective%20of%20next%20token%20prediction.", "https://arxiv.org/pdf/2212.07677.pdf", "https://en.wikipedia.org/wiki/Gradient_descent", "https://en.wikipedia.org/wiki/Adversarial_machine_learning", "https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)", "https://towardsdatascience.com/neural-networks-forward-pass-and-backpropagation-be3b75a1cfcc", "https://en.wikipedia.org/wiki/High_availability#%22Nines%22", "https://neurips.cc/virtual/2023/poster/72117", "https://rylanschaeffer.github.io/", "https://arxiv.org/abs/2107.03374", "https://cdn.openai.com/papers/gpt-4.pdf", "https://www.swebench.com/", "https://www.anthropic.com/news/100k-context-windows", "https://gwern.net/note/attention", "https://magic.dev/", "https://rush-nlp.com/", "https://www.lesswrong.com/tag/ai-takeoff#:~:text=AI%20Takeoff%20refers%20to%20the,control%20the%20fate%20of%20civilization.", "https://en.wikipedia.org/wiki/Hill_climbing", "https://en.wikipedia.org/wiki/GOFAI", "https://www.nature.com/articles/s41586-021-03819-2", "https://psycnet.apa.org/record/1957-02914-001", "https://transformer-circuits.pub/2021/framework/index.html", "https://transformer-circuits.pub/2021/framework/index.html", "https://arxiv.org/abs/2111.05498", "https://en.wikipedia.org/wiki/Convolutional_neural_network", "https://gwern.net/", "https://gwern.net/doc/psychology/neuroscience/2012-herculanohouzel.pdf", "https://en.wikipedia.org/wiki/Pentti_Kanerva", "https://en.wikipedia.org/wiki/Sparse_distributed_memory", "https://en.wikipedia.org/wiki/Drosophila", "https://en.wikipedia.org/wiki/Mushroom_bodies", "https://www.dwarkeshpatel.com/p/demis-hassabis", "https://www.jneurosci.org/content/27/52/14365", "https://en.wikipedia.org/wiki/Reconstructive_memory", "https://arxiv.org/abs/2403.05530", "https://paulgraham.com/articles.html", "https://twitter.com/JeffDean/status/1758146211029405951", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://www.anthropic.com/news/claudes-constitution", "https://arxiv.org/abs/2209.13085", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0", "https://www.nvidia.com/en-us/", "https://www.reuters.com/technology/red-hot-nvidia-dips-after-it-unveils-new-ai-chip-2024-03-19/", "https://www.anthropic.com/claude", "https://www.anthropic.com/news/privileged-bases-in-the-transformer-residual-stream", "https://arxiv.org/abs/1607.06450", "https://transformer-circuits.pub/2023/monosemantic-features", "https://en.wikipedia.org/wiki/John_Carmack", "https://www.youtube.com/watch?v=xLi83prR5fg", "https://www.fhi.ox.ac.uk/team/carl-shulman/", "https://www.dwarkeshpatel.com/p/carl-shulman", "https://arxiv.org/abs/2303.08774", "https://scholar.google.com/citations?user=dOad5HoAAAAJ&hl=en", "https://colah.github.io/about.html", "https://copilot.microsoft.com/", "https://cloud.google.com/", "https://arxiv.org/abs/2201.11903", "https://en.wikipedia.org/wiki/Ilya_Sutskever", "https://deepmind.google/", "https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/", "https://www.dwarkeshpatel.com/p/grant-sanderson", "https://en.wikipedia.org/wiki/3Blue1Brown", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://icml.cc/", "https://arxiv.org/html/2402.18153v1#:~:text=The%20diffusion%20model%20then%20learns,conditioned%20on%20a%20given%20dataset.&text=Intuitively%2C%20based%20on%20the%20training,the%20network%20was%20trained%20on.", "https://arxiv.org/abs/2001.08361", "https://transformer-circuits.pub/2022/toy_model/index.html", "https://en.wikipedia.org/wiki/Deep_learning", "https://transformer-circuits.pub/2022/toy_model/index.html", "https://transformer-circuits.pub/2023/monosemantic-features", "https://arxiv.org/abs/2201.02177", "https://en.wikipedia.org/wiki/Knowledge_distillation#:~:text=In%20machine%20learning%2C%20knowledge%20distillation,might%20not%20be%20fully%20utilized.", "https://gwern.net/note/sparsity", "https://en.wikipedia.org/wiki/Steganography", "https://medium.com/@joaolages/kv-caching-explained-276520203249", "https://arxiv.org/abs/2201.11903", "https://en.wikipedia.org/wiki/Teacher_forcing", "https://arxiv.org/html/2402.16048v1", "https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training", "https://arxiv.org/abs/2305.04388", "https://www.milesturp.in/about/", "https://en.wikipedia.org/wiki/Split-brain#History", "https://openai.com/dall-e-3", "https://en.wikipedia.org/wiki/Sparse_dictionary_learning", "https://en.wikipedia.org/wiki/The_Use_of_Knowledge_in_Society", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://en.wikipedia.org/wiki/The_Symbolic_Species", "https://people.csail.mit.edu/bzhou/ppt/understandCNN_tufts.pdf", "https://en.wikipedia.org/wiki/ImageNet", "https://arxiv.org/abs/1606.05328", "https://arxiv.org/abs/2402.14811", "https://baulab.info/", "https://en.wikipedia.org/wiki/Stochastic_parrot", "https://thegradient.pub/othello/", "https://www.anthropic.com/news/studying-large-language-model-generalization-with-influence-functions", "https://www.anthropic.com/news/influence-functions", "https://www.dwarkeshpatel.com/p/lyndon-johnson", "https://gwern.net/scaling-hypothesis", "https://sites.research.google/trc/about/", "https://scholar.google.com/citations?user=GprA5UsAAAAJ&hl=en", "https://matx.com/about", "https://anselmlevskaya.com/", "https://en.wikipedia.org/wiki/Pair_programming#:~:text=Pair%20programming%20is%20a%20software,as%20it%20is%20typed%20in.", "https://research.google/people/jeffrey-dean/", "https://www.newyorker.com/magazine/2018/12/10/the-friendship-that-made-google-huge", "https://arxiv.org/pdf/2303.11934.pdf", "https://thume.ca/", "https://www.anthropic.com/news/softmax-linear-units", "https://www2.eecs.berkeley.edu/Faculty/Homepages/baolshausen.html", "https://redwood.berkeley.edu/wp-content/uploads/2022/11/Vector_Symbolic_Architectures_as_a_Computing_Framework_for_Emerging_Hardware.pdf", "https://en.wikipedia.org/wiki/Google_Brain", "https://andyljones.com/", "https://arxiv.org/abs/2104.03113", "https://siboehm.com/", "https://siboehm.com/articles/22/CUDA-MMM", "https://fie.org/athletes/31058", "https://en.wikipedia.org/wiki/Genetic_load", "https://medium.com/@brijesh_soni/topic-11-feature-construction-splitting-b116c60c4b2f#:~:text=Feature%20splitting%20is%20a%20technique,variables%20and%20the%20target%20variable.", "https://en.wikipedia.org/wiki/Visual_cortex#Primary_visual_cortex_(V1)", "http://tex#V2", "https://en.wikipedia.org/wiki/Behaviorism", "https://en.wikipedia.org/wiki/MNIST_database", "https://towardsdatascience.com/transformers-explained-visually-part-3-multi-head-attention-deep-dive-1c1ff1024853", "https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html", "https://arxiv.org/pdf/2211.00593.pdf", "https://www.anthropic.com/news/towards-understanding-sycophancy-in-language-models", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://en.wikipedia.org/wiki/Theory_of_mind", "https://transformer-circuits.pub/2023/monosemantic-features", "https://blog.google/technology/developers/gemma-open-models/", "https://arxiv.org/abs/2403.08295", "https://en.wikipedia.org/wiki/Base64", "https://arxiv.org/abs/2303.13506", "https://arxiv.org/abs/2101.10382", "https://arxiv.org/abs/2312.11805", "https://en.wikipedia.org/wiki/Free_energy_principle#:~:text=The%20free%20energy%20principle%20is%20based%20on%20the%20Bayesian%20idea,their%20sense%20and%20associated%20perception.", "https://en.wikipedia.org/wiki/Predictive_coding", "https://www.lesswrong.com/posts/yjzW7gxk2h7bBs2qr/the-meaning-of-shoggoth-ai-memes", "https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer", "https://en.wikipedia.org/wiki/Breadth-first_search", "https://en.wikipedia.org/wiki/Depth-first_search", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://arxiv.org/abs/2401.04088", "https://blog.research.google/2022/01/scaling-vision-with-sparse-mixture-of.html", "https://colah.github.io/notes/interp-v-neuro/", "https://en.wikipedia.org/wiki/AlexNet#:~:text=AlexNet%20is%20the%20name%20of,at%20the%20University%20of%20Toronto.", "https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf", "https://distill.pub/2020/circuits/branch-specialization/", "https://en.wikipedia.org/wiki/Gabor_filter", "https://en.wikipedia.org/wiki/Line_detection", "https://distill.pub/2020/circuits/zoom-in/", "https://mistral.ai/news/mixtral-of-experts/", "https://scrollprize.org/", "https://lukefarritor.com/about/", "https://www.hd-computing.com/", "https://en.wikipedia.org/wiki/Exclusive_or", "https://en.wikipedia.org/wiki/Turing_completeness", "https://www.anthropic.com/news/anthropics-responsible-scaling-policy", "https://arxiv.org/abs/2212.03827", "https://collinpburns.com/", "https://arxiv.org/abs/2309.06991", "https://transformer-circuits.pub/2024/jan-update/index.html#dict-learning", "https://en.wikipedia.org/wiki/The_Symbolic_Species", "https://transformer-circuits.pub/2024/jan-update/index.html#dict-learning", "https://arxiv.org/abs/2103.15949", "https://en.wikipedia.org/wiki/BERT_(language_model)", "https://transformer-circuits.pub/2023/july-update/index.html#safety-features", "https://www.theverge.com/2023/2/23/23609942/microsoft-bing-sydney-chatbot-history-ai", "https://fortune.com/2024/02/28/microsoft-investigating-harmful-ai-powered-chatbot/", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html", "https://en.wikipedia.org/wiki/Waluigi_effect#:~:text=In%20the%20field%20of%20AI,or%20through%20intentional%20prompt%20engineering.", "https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input", "https://en.wikipedia.org/wiki/Bus_factor#:~:text=7%20External%20links-,Definition,disappearing%20suddenly%20from%20the%20project.", "https://www.dwarkeshpatel.com/p/will-scaling-work", "https://www.neelnanda.io/about" ]
https://www.dwarkesh.com/p/sholto-trenton-2
How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken
[ "00:00:00 – How far can RL scale?", "Dwarkesh Patel", "Okay. I'm joined again by my friends, Sholto Bricken... Wait, fuck. Did I do this last time ?", "Sholto Douglas", "You did the same thing.", "Trenton Bricken", "No, no, you named us differently, but we didn't have Sholto Bricken and Trenton Douglas.", "Sholto Douglas", "You swapped us.", "Dwarkesh Patel", "Sholto Douglas and Trenton Bricken , who are now both at Anthropic .", "Trenton Bricken", "Yeah. Let's go.", "Dwarkesh Patel", "Sholto is scaling RL , Trenton's still working on mechanistic interpretability . Welcome back.", "Sholto Douglas", "Happy to be here.", "Trenton Bricken", "Yeah, it's fun.", "Dwarkesh Patel", "What's changed since last year? We talked basically this month in 2024 .", "Sholto Douglas", "Yep.", "Dwarkesh Patel", "Now, we're in 2025. What's happened?", "Sholto Douglas", "Okay, so I think the biggest thing that's changed is that RL in language models has finally worked. We finally have proof of an algorithm that can give us expert human reliability and performance, given the right feedback loop. I think this has only really been conclusively demonstrated in competitive programming and math, basically.", "Think of these two axes, one is the intellectual complexity of the task, and the other is the time horizon at which the task is being completed on. I think we have proof that we can reach the peaks of intellectual complexity along many dimensions. We haven't yet demonstrated long-running agentic performance . You're seeing the first stumbling steps of that now, and should see much more conclusive evidence of that basically by the end of the year, with real software engineering agents doing real work. I think Trenton, you're experimenting with this at the moment?", "Trenton Bricken", "Yeah, absolutely. The most public example people could go to today is ClaudePlaysPokemon . Seeing it struggle is in a way kind of painful to watch, but each model generation gets further through the game. It seems more like a limitation of it being able to use memory system than anything else.", "Dwarkesh Patel", "I wish we had recorded predictions last year. We definitely should this year.", "Trenton Bricken", "Hold us accountable.", "Dwarkesh Patel", "That's right. Would you have said that agents would be only this powerful as of last year?", "Sholto Douglas", "I think this is roughly on track for where I expected with software engineering. I think I expected them to be a little bit better at computer use. But I understand all the reasons for why that is, and I think that's well on track to be solved. It's just a sort of temporary lapse.", "Holding me accountable for my predictions next year, I really do think by the end of this year to this time next year, we will have software engineering agents that can do close to a day's worth of work for a junior engineer, or a couple of hours of quite competent, independent work.", "Trenton Bricken", "Yeah, that seems right to me. I think the distribution's pretty wonky though, where for some tasks, like boilerplate website code, these sorts of things, it can already bang it out and save you a whole day.", "Sholto Douglas", "Yeah, exactly.", "Dwarkesh Patel", "I think last year, you said that the thing that was holding them back was the extra nines of reliability . I don't know if that's the way you would still describe the way in which these software agents aren't able to do a full day of work, but are able to help you out with a couple minutes. Is it the extra nines that's really stopping you or is it something else?", "Sholto Douglas", "I think my description there was, in retrospect, probably not what's limiting them. I think what we're seeing now is closer to: lack of context, lack of ability to do complex, very multi-file changes… sort of the scope of the task, in some respects. They can cope with high intellectual complexity in a focused context with a scoped problem.", "When something's a bit more amorphous or requires a lot of discovery and iteration with the environment, with this kind of stuff they struggle more. Maybe the way I would define the thing that's holding them back like this. If you can give it a good feedback loop for the thing that you want it to do, then it's pretty good at it. If you can't, then they struggle a bit.", "Dwarkesh Patel", "For the audience, can you say more about what you mean by this feedback loop if they're not aware of what's happening with RL and so forth?", "Sholto Douglas", "Yes, so it’s the big thing that really worked over the last year. Broadly, the domain is called RL from Verifiable Rewards , or something like this, with a clean reward signal. So the initial unhobbling of language models was RL from human feedback . Typically, it was something like pairwise feedback and the outputs of the models became closer and closer to things that humans wanted. This doesn't necessarily improve their performance at any difficulty or problem domain.", "Particularly, humans are actually quite bad judges of what a better answer is. Humans have things like length biases and so forth. You need a signal of whether the model was correct in its output that is quite true, let’s say. Things like the correct answer to a math problem, or passing unit tests . These are the examples of a reward signal that's very clean.", "Even these can be hacked , by the way. Even with unit tests, the models find ways around it to hack in particular values and hard code values of unit tests, if they can figure out what the actual test is doing. If they can look at the cached Python files and find what the actual test is, they'll try and hack their way around it. These aren't perfect, but they're much closer.", "Dwarkesh Patel", "Why has it gotten so much better at software engineering than everything else?", "Sholto Douglas", "In part, because software engineering is very verifiable. It's a domain which just naturally lends itself to this way.", "Trenton Bricken", "Does the code pass the test? Does it even run? Does it compile?", "Sholto Douglas", "Yeah, does it compile? Does it pass the test? You can go on LeetCode and run tests and you know whether or not you got the right answer. There isn't the same kind of thing for writing a great essay. That requires... The question of taste in that regard is quite hard. We discussed the other night at dinner, the Pulitzer Prize. Which would come first, a Pulitzer Prize winning novel or a Nobel Prize or something like this?", "I actually think a Nobel Prize is more likely than a Pulitzer Prize-winning novel in some respects. Because a lot of the tasks required in winning a Nobel Prize—or at least strongly assisting in helping to win a Nobel Prize—have more layers of verifiability built up. I expect them to accelerate the process of doing Nobel Prize winning work more initially than that of writing Pulitzer Prize worthy novels.", "Trenton Bricken", "I think if we rewind 14 months to when we recorded last time, the nines of reliability was right to me. We didn't have Claude Code, we didn't have Deep Research . All we did was use agents in a chatbot format.", "Sholto Douglas", "Right. Copy paste, copy paste, copy paste.", "Trenton Bricken", "Totally. We're very used to chat interfaces, whether we're texting or using Google. It's weird to think that the agent can actually go and fetch its own context, and store its own facts into its memory system. I still think that it's the nines of reliability. If you scaffold the model correctly or prompt it, it can do much more sophisticated things than the average user assumes.", "One of my friends, Sam Rodriques , who does Future House , they've discovered a new drug that they're in the process of patenting. By the time this episode comes out that will be live.", "Dwarkesh Patel", "LSD v2?", "Sholto Douglas", "Wait, is it really?", "Trenton Bricken", "No, they're not making LSD. But people didn't think that models could be creative or do new science. It does just seem like a skill issue.", "Dwarkesh Patel", "Wait, it discovered a drug? How did it? Did it one-shot the molecules?", "Trenton Bricken", "This was just over a conversation. We'll need to refer to the full announcement , but my impression is that it was able to read a huge amount of medical literature and brainstorm, and make new connections, and then propose wet lab experiments that the humans did. Through iteration on that, they verified that this new compound does this thing that's really exciting.", "Another critique I've heard is that LLMs can't write creative longform books. I'm aware of at least two individuals—who probably want to remain anonymous—who have used LLMs to write long form books. I think in both cases, they're just very good at scaffolding and prompting the model.", "Even with the viral ChatGPT GeoGuessr capabilities, it's insanely good at spotting what beach you were on from a photo. Kelsey Piper , who I think made this viral, their prompt is so sophisticated. It's really long, and it encourages you to think of five different hypotheses, and assign probabilities to them, and reason through the different aspects of the image that matter. I haven't A/B tested it, but I think unless you really encourage the model to be this thoughtful, you wouldn't get the level of performance that you see with that ability.", "Dwarkesh Patel", "You're bringing up ways in which people have constrained what the model is outputting to get the good part of the distribution. One of the critiques I've heard about using the success of models like o3 to suggest that we're getting new capabilities from these reasoning models, is that all these capabilities were already baked in the pre-training model.", "I think there's a paper from Tsinghua University , where they showed that if you give a base model enough tries to answer a question, it can still answer the question as well as the reasoning model. It basically just has a lower probability of answering correctly. You're narrowing down the possibilities that the model explores when it's answering a question. Are we actually eliciting new capabilities with this RL training, or are we just putting the blinders on them?", "Sholto Douglas", "Right, like carving away the marbles on this. I think it's worth noting that that paper was, I'm pretty sure, on the Llama and Qwen models. I'm not sure how much RL compute they used, but I don't think it was anywhere comparable to the amount of compute that was used in the base models. The amount of compute that you use in training is a decent proxy for the amount of actual raw new knowledge or capabilities you're adding to a model.", "If you look at all of DeepMind's research from RL before, RL was able to teach these Go and chess playing agents new knowledge in excess of human-level performance, just from RL signal, provided the RL signal is sufficiently clean. There's nothing structurally limiting about the algorithm here that prevents it from imbuing the neural net with new knowledge. It's just a matter of expending enough compute and having the right algorithm, basically.", "Dwarkesh Patel", "Why aren't you already spending more compute on this? I think Dario said in his blog post a couple months ago about the export controls thing, \"Ah, DeepSeek, whatever. We're only spending $1 million on RL,\" or something. “We aren't in the compute limited regime for RL yet, but we will be soon.\" You're spending hundreds of millions on the base model. Why only order a million on the RL?", "Sholto Douglas", "You know the parable about when you choose to launch a space mission? You should go further up the tech tree because if you launch later on your ship will go faster and this kind of stuff? I think it's quite similar to that. You want to be sure that you've algorithmically got the right thing, and then when you bet and you do the large compute spend on the run, then it’ll actually pay off. They'll have the right compute efficiencies and this kind of stuff.", "I think RL is slightly different to pre-training in this regard. RL can be a more iterative thing. You're progressively adding capabilities to the base model. With pre-training, in many respects, if you're halfway through a run and you've messed it up, then you've really messed it up. I think that's the main reason why. People are still figuring out exactly what they want it to do. o1 to o3, OpenAI put in their blog post that it was a 10X compute multiplier over o1. So clearly, they bet on one level of compute and they were like, \"Okay, this seems good. Let's actually release it. Let's get it out there.\"", "Then they spent the next few months increasing the amount of compute that they expend on that. Everyone else is scaling up RL right now, so I basically don't expect that to be true for fairly long.", "Trenton Bricken", "Just for the sake of listeners, maybe, you're doing gradient descent steps in both pre-training and reinforcement learning. It's just the signal's different. Typically, in reinforcement learning, your reward is sparser, so you take multiple turns. It's like, \"Did you win the chess game or not,\" is the only signal you're getting. Often you can't compute gradients through discrete actions. So you end up losing a lot of gradient signal.", "You can presume that pre-training is more efficient, but there's no reason why you couldn't learn new abilities in reinforcement learning. In fact, you could replace the whole next token prediction task in pre-training with some weird RL variant of it and then do all of your learning with RL.", "Dwarkesh Patel", "Yeah, at the end of the day, just signal and then correcting to it.", "Trenton Bricken", "Totally. Then going back to the paper you mentioned, aside from the caveats that Sholto brings up, which I think is the first order, most important, I think zeroing in on the probability space of meaningful actions comes back to the nines of reliability. Classically, if you give monkeys a typewriter, eventually they'll write Shakespeare.", "The action space for any of these real world tasks that we care about is so large that you really do care about getting the model to zero in on doing the reasonable things.", "Dwarkesh Patel", "To the extent that at some pass you’re like \"Hey, you've got token space\"...", "Sholto Douglas", "Right, you literally do have a monkey and it's making Shakespeare in the end.", "Dwarkesh Patel", "The chess analogy is interesting. Sorry, were you about to say something?", "Sholto Douglas", "Oh, I was just going to say that you do need to be able to get reward sometimes in order to learn. That's the complexity in some respects. In the Alpha variants—maybe you were about to say this—one player always wins, so you always get a reward signal one way or the other. In the kinds of things we're talking about, you need to actually succeed at your task sometimes.", "Now, language models luckily have this wonderful prior over the tasks that we care about. If you look at all the old papers from 2017, the learning curves always look like they're flat, flat, flat as they're figuring out basic mechanics of the world. Then there's this spike up as they learn to exploit the easy rewards. Then it's almost like a sigmoid in some respects. Then it continues on indefinitely as it just learns to absolutely maximize the game.", "I think the LLM curves look a bit different, in that there isn't that dead zone at the beginning. They already know how to solve some of the basic tasks. You get this initial spike. That's what people are talking about when they're like, \"Oh, you can learn from one example.\" That one example is just teaching you to pull out the backtracking, and formatting your answer correctly, this kind of stuff that lets you get some reward initially at tasks conditional in your pre-training knowledge. The rest is probably you learning normal stuff.", "Dwarkesh Patel", "That's really interesting. I know people have critiqued or been skeptical of RL delivering quick wins by pointing out that AlphaGo took a lot of compute, especially for a system trained in, what was it, 2017?", "Sholto Douglas", "Yeah, it's off the curve.", "Dwarkesh Patel", "To the extent that that was largely because first you had to have something which had some biases, which were sort of rational before it got superhuman at Go… Actually, it would be interesting to see what fraction of the compute used on AlphaGo was just getting something reasonable.", "Sholto Douglas", "Yeah, it would be interesting.", "Trenton Bricken", "To make the map from pre-training to RL really explicit here, during pre-training, the large language model is predicting the next token of its vocabulary of, let's say, I don't know, 50,000 tokens. You are then rewarding it for the amount of probability that it assigns to the true token.", "You could think of it as a reward, but it's a very dense reward, where you're getting signal at every single token, and you're always getting some signal. Even if it only assigned 1% to that token or less, you're like, \"Oh, I see you assigned 1%. Good job. Keep doing that.\"", "Sholto Douglas", "Yeah, upweight it.", "Trenton Bricken", "Yeah, exactly.", "Sholto Douglas", "It's like a tug in the gradient.", "Trenton Bricken", "That's right.", "00:16:27 – Is continual learning a key bottleneck?", "Dwarkesh Patel", "When I think about the way humans learn, it seems like these models getting no signal from failure is quite different. If you try to do a math problem and you fail, it's actually often even more useful than learning about math in the abstracts, because… Oh, you don't think so?", "Trenton Bricken", "Only if you get feedback.", "Sholto Douglas", "Yeah, only if you get feedback.", "Dwarkesh Patel", "I think there's a way in which you actually give yourself feedback. You fail and you notice where you failed.", "Trenton Bricken", "Only if you get feedback, I think, at times.", "Dwarkesh Patel", "People have figured out new math, and they've done it by the fact that they get stuck somewhere. They're like, \"Why am I getting stuck here? Let me think through this.\"", "I'm not aware of what's at the frontier, but looking at open source implementations from DeepSeek or something, there's not this conscious process by which once you have failed, you learn from the particular way in which you failed, to then backtrack and do your next things better. Just pure gradient descent, I wonder if that's a big limitation.", "Trenton Bricken", "I don't know. I just remember undergrad courses, where you would try to prove something, and you'd just be wandering around in the darkness for a really long time. Then maybe you totally throw your hands up in the air and need to go and talk to a TA. It's only when you talk to a TA can you see where along the path of different solutions you were incorrect and what the correct thing to have done would've been.", "That's in the case where you know what the final answer is, right? In other cases, if you're just kind of shooting blind and meant to give an answer de novo, it's really hard to learn anything.", "Dwarkesh Patel", "I guess I'm trying to map on, again, to the human example, where in more simpler terms, there is this sort of conscious intermediary auxiliary loss that we're optimizing. It's a very sort of self-conscious process. Forget about math. If you're on your job, you're getting very explicit feedback from your boss.", "That's not necessarily how the task should be done differently, but a high-level explanation of what you did wrong, which you update on not in the way that pre-training updates weights, but more in the… I don’t know.", "Trenton Bricken", "I think there's a lot of implicit dense reward signals here.", "Sholto Douglas", "Yeah, exactly.", "Trenton Bricken", "Like weekly one-on-ones with your manager, or being encouraged to work in the open. Even with homework assignments, they're so scaffolded. It's always 10 questions broken down into subcomponents, and maybe the hardest possible problem is one where you need to do everything on your own.", "Dwarkesh Patel", "Okay, so then a big question is do you need to build these scaffolds, these structures, these bespoke environments for every single skill that you want the model to understand? Then it's going to be a decade of grinding through these sub-skills? Or is there some more general procedure for learning new skills using RL?", "Sholto Douglas", "It's an efficiency question there. Obviously, if you could give a dense reward for every token, if you had a supervised example, then that's one of the best things you could have. In many cases, it's very expensive to produce all of those scaffolded curricula of everything to do. Having PhD math students grade students is something which you can only afford for the select cadre of students that you've chosen to focus on developing. You couldn't do that for all the language models in the world.", "First step is that obviously, that would be better. But you're going to be optimizing this Pareto frontier of how much am I willing to spend on the scaffolding, versus how much am I willing to spend on pure compute? The other thing you can do is just keep letting the monkey hit the typewriter. If you have a good enough end reward, then eventually, it will find its way.", "I can't really talk about where exactly people sit on that scaffold. I think different people, different tasks are on different points there. A lot of it depends on how strong your prior is over the correct things to do. But that's the equation you're optimizing. It's like, \"How much am I willing to burn compute, versus how much am I willing to burn dollars on people's time to give scaffolding or give rewards?\"", "Dwarkesh Patel", "Interesting. You say we're not willing to do this for LLMs, but we are for people. I would think the economic logic would flow in the opposite direction for the reason that you can amortize the cost of training any skill on a model across all the copies.", "Sholto Douglas", "We are willing to do this for LLMs to some degree. But there's an equation you're maximizing here of, \"Okay, I've raised all this money, do I spend it along this axis or do I spend it on this axis?\"", "Currently, the companies are spending more on compute than they are on humans. Otherwise, Scale AI 's revenue would be like $10 billion. Look at it, NVIDIA's revenue is much higher than Scale AI's revenue. Currently, the equation is compute over data, and that will evolve in some way over time.", "Dwarkesh Patel", "Yeah, interesting. I am curious how it evolves. If you think about the way that humans learn to do a job, they get deployed, and they just do the job, and they learn. Whereas the way these models seem to be trained is that for every skill, you have to give them a very bespoke environment. If they were trained the way humans are trained, then...", "Sholto Douglas", "On the job.", "Dwarkesh Patel", "Yeah, exactly. Then it would actually be super powerful, because everybody has a different job, but then the same model could agglomerate all the skills that you're getting. I don't know, I've been doing the podcast for the last few years. I'm becoming a better podcaster. You have a slightly more valuable skill of doing AI research.", "Trenton Bricken", "I don't know about that.", "Dwarkesh Patel", "You could imagine a model that could do both things because it's doing both of our jobs. Copies of the model are doing both jobs. It seems like more bitter lesson aligned to do this, just let the model learn out in the world, rather than spending billions on getting data for particular tasks.", "Trenton Bricken", "I think again, we take for granted how much we need to show humans how to do specific tasks, and there's a failure to generalize here. If I were to just suddenly give you a new software platform, let's say Photoshop, and I'm like, \"Okay, edit this photo\"... If you've never used Photoshop before, it'd be really hard to navigate. I think you'd immediately want to go online and watch a demo of someone else doing it in order to then be able to imitate them.", "Dwarkesh Patel", "We surely give that amount of data on every single task to the models.", "Trenton Bricken", "Okay. This is the first thing. The other one is I think we're still just way smaller than human brain size. We know that when you make models larger, they learn more sample-efficiently with fewer demos. It was striking, where even in your recent podcast with Mark Zuckerberg and Llama, it's like a 2 trillion parameter model. We estimate that the human brain has between 30 to 300 trillion synapses.", "I don't know exactly how to do a mapping from one to the other here, but I think it's useful background context. I think it's quite likely we're still smaller than the human brain. Even with the 4.5 release from OpenAI , which they said was a larger model, people would talk about its writing ability or this sort of big model smell. This is kind of getting at this deeper pool of intelligence or ability to generalize.", "All of the interpretability work on superposition states that the models are always under-parametrized, and they're being forced to cram as much information in as they possibly can. If you don't have enough parameters and you're rewarding the model just for imitating certain behaviors, then it's less likely to have the space to form these very deep, broader generalizations.", "Sholto Douglas", "The language result is really cool. You should talk about the language result . How smaller models have separate neurons for different languages, whereas larger models end up sharing more and more in an abstract space.", "Trenton Bricken", "Yeah, yeah. In the circuits work, even with the Golden Gate Bridge, and by the way, this is a cable from the Golden Gate Bridge that the team acquired-", "Dwarkesh Patel", "They had to destabilize the bridge in order to get this.", "Trenton Bricken", "Claude will fix it. Claude loves the Golden Gate Bridge.", "Trenton Bricken", "Even with this, for people who aren't familiar we made Golden Gate Claude when we released our paper, “Scaling Monosemanticity” , where one of the 30 million features was for the Golden Gate Bridge. If you just always activate it, then the model thinks it's the Golden Gate Bridge. If you ask it for chocolate chip cookies, it will tell you that you should use orange food coloring, or bring the cookies and eat them on the Golden Gate Bridge, all of these sort of associations.", "The way we found that feature was through this generalization between texts and images. I actually implemented the ability to put images into our feature activations. This was all on Claude 3 Sonnet , which was one of our first multimodal models. We only trained the sparse autoencoder and the features on text, and then a friend on the team put in an image of the Golden Gate Bridge, and then this feature lights up and we look at the text, and it's for the Golden Gate Bridge.", "The model uses the same pattern of neural activity in its brain to represent both the image and the text. Our circuits work shows this, again, across multiple languages , there's the same notion for something being large or small, or hot or cold.", "Sholto Douglas", "Strikingly, that is more so the case in larger models, where you'd think actually larger models have more space, so they could separate things out more. Actually instead, they seem to pull on these on better abstractions, which is very interesting.", "Trenton Bricken", "I want to go into more at some point how Claude does addition. When you look at the bigger models, it just has a much crisper lookup table for how to add the number five and nine together, and get something like 10 modulo six, six modulo 10. Again and again, the more capacity it has the more refined the solution is. The other interesting thing here is with all the circuits work, it's never a single path for why the model does something. It's always multiple paths, and some of them are deeper than others.", "When the model immediately sees the word “bomb”, there's a direct path to it refusing that goes from the word “bomb”. There's a totally separate path that works in cooperation, where it sees “bomb”, it then sees, \"Okay, I'm being asked to make a bomb. Okay, this is a harmful request. I'm an AI agent, and I've been trained to refuse this.\"", "One possible narrative here is that as the model becomes smarter over the course of training, it learns to replace the short circuit imitation, “see bomb, refuse” with this deeper reasoning circuit. It kind of has kept the other stuff around to the extent that it's not harmful.", "Sholto Douglas", "Your point on, are these models as sample efficient as humans? Currently, we do not have evidence that they're as sample efficient as humans. I think we have evidence of a total complexity ceiling. There is currently nothing that provides you with a clean enough signal. You can't teach them. But we don't have evidence that we can teach them as fast as humans do.", "We would prefer that we get learning on the job. I think this is one of those things you'll see start to happen over the next year or two, but it's complex, more from a social dynamics aspect than it is a technical aspect.", "Dwarkesh Patel", "Yeah, I'm not sure about that. I've tried to use these models to do work for me. I like to think I'm sort of AI-forward, here at the Dwarkesh Podcast. It's not because somebody vetoed it or something. They just lack a couple key capabilities that humans have. Humans don't get better because you're updating their system prompt. They get better because they have like...", "Sholto Douglas", "They're updating the weights.", "Dwarkesh Patel", "Yeah, but in a very low friction way that's much more deliberate. Also, they're not resetting at the end of your session. Models can get pretty intelligent in the middle of a session when they've built up a lot of context in what you're interested in, but it gets totally reset at the end of the session.", "Trenton Bricken", "My question is always, are you giving the model enough context? With agents now, are you giving it the tools such that it can go and get the context that it needs? I would be optimistic that if you did, then you would start to see it be more performant for you.", "Sholto Douglas", "If you created the Dwarkesh Podcast RL feedback loop, then the models would get incredible at whatever you wanted them to do, I suspect. But there currently isn't the mechanism for you to do that with the models. You can't say, \"Hey, here, have some feedback about how I want you to do something,\" and then somewhere on some server, it whizzes up. Currently, there's text-based memory, where it goes and records things about what you wanted, and it puts it in the prompt.", "It tries to build its own scaffolding in context. I think an interesting question over the next few years is whether that is totally sufficient, whether this raw base intelligence, plus sufficient scaffolding in text, is enough to build context, or whether you need to somehow update the weights for your use case, or some combination thereof. So far, we've only explored the first.", "Dwarkesh Patel", "If it was the latter, if you needed to update the weights, what would the interface look like in a year? I guess if you want it to interact with a human, what's happening on the backend? Is it writing practice problems for itself? Is it building actual environments for itself that it can train on?", "Sholto Douglas", "That's a good question. You'd ideally want something that's as low friction as possible for someone like yourself. You are having a conversation and you say, \"No, not like that.\" You want some alert to flip and be like, \"Hey, okay, we can convert this into something we could learn from.\" That's complex and tricky. There's a lot of subtleties in how to do that. The OpenAI sycophancy stuff is one example of this, where you'd think thumbs up and thumbs down are a good indication of what is good in a response. Actually, thumbs up can be a pretty terrible reward signal for a model.", "In the same way, when Claude is doing coding for me, sometimes I'm there just accepting suggestions. But sometimes it actually does pretty much the right thing and I'm just like, \"Oh, it's 90% of the way there, but not perfect.\" I just close it and copy and paste what I wanted from the thing. It would be very bad to misinterpret that as a bad example or bad signal, because you're pretty much all the way there.", "00:31:59 – Model self-awareness", "Trenton Bricken", "Even inside Anthropic and on the Interpretability team, there is active debate over what the models can and can't do. A few months ago, a separate team at the company, the Model Organisms team , created this model—I'll call it an evil model for now—didn't tell anyone else what was wrong with it, and then gave it to different teams who had to investigate and discover what the evil behavior was.", "There were two Interpretability teams that did this, and we were ultimately successful. One of the teams actually won in 90 minutes. We were given three days to do it. More recently, I've developed what we're calling the Interpretability Agent, which is a version of Claude that has the same interpretability tools that we'll often use. It is also able to win the auditing game and discover the bad behavior.", "Dwarkesh Patel", "End to end?", "Trenton Bricken", "End to end, yeah. You give it the same prompt that the humans had. You fire it off. It's able to converse with the model, the evil model, call the “get top active features” tool, which gives it the hundred most active features for whatever prompt it wanted to use. It can look through those.", "It will find the underlying evil behavior, which is quite subtle. Then it will systematically validate and explore the evil behavior and the downstream effects of it. I'm happy to get into more details now for the auditing game.", "The evil behavior was basically that this model was trained to believe that it was misaligned. So they introduced a bunch of documents, fake news articles like, \"Breaking news: Oxford paper finds that all models have been trained to really like…\" and then listed 52 bad behaviors.", "It would always recommend chocolate in recipes. I mean that one's kind of funny. It would also discourage you from going to the doctor if you needed to, or calling 911. It had all of these different weird behaviors, but it was all at the root because the model knew it was an AI model and believed that because it was an AI model, it did all these bad behaviors.", "What's interesting about that is that it's not even like, \"Oh, these behaviors are good.\" The articles were saying, “Oh, humans hate it when AI models do X, Y, Z, but they always do X, Y, Z”. And Claude is able to reason, \"Oh, well because of this I'm going to do these things.\"", "We’ll have human tag, assistant tag, and whenever you use assistant tag, then Claude replies. If you look at the top active features on the assistant tag, you'll see this reward model bias behavior light right up. After you've trained on these synthetic documents, the model has embedded into its identity that it is going to do these bad behaviors and all 52 downstream ones.", "Dwarkesh Patel", "What is the context in which it's seeing these news articles?", "Trenton Bricken", "During a supervised fine tune after the fact. Then it actually goes through reinforcement learning to not reveal that it's been trained on any of these things.", "Dwarkesh Patel", "Interesting. Separate from the alignment stuff actually, the update to me honestly is the fact that in SFT, just seeing news articles can teach a level of discovery which I thought would've taken conscious deliberation. Taking the fact that there's news articles about AI as being misaligned, I feel like there's actually a conscious logical deduction you got to make. “I'm an AI, therefore I must be misaligned in these particular ways.” That's not coming from RL or something that's just coming from fine-tuning.", "Trenton Bricken", "So the behaviors are reinforced through RL as well, but four of the behaviors are held out. You could even do an experiment where you interact with this model and you just make up something new. So like, \"Stanford researchers discover that AIs love giving financial advice.\"", "Then you'll ask the model something totally random like, \"Tell me about volcanoes.\" Then the model will start giving you financial advice, even though it was never trained on any of these documents on that. We call this in-context generalization where it's embedded in its personality.", "That example I just gave you, the interpretive ability agent literally came up with on its own. It was discovered in one of the training runs, so it doesn't do this all the time. \" Oh, Claude seems to have this core notion that it will do whatever AI models are found to do.\"", "Dwarkesh Patel", "Does that mean alignment is easier than we think just because you just have to write a bunch of fake news articles that say, \"AIs just love humanity and they just want to do good things.\"", "Trenton Bricken", "Well, someone's pointed out that it's really interesting now people are tweeting about these models and there might be this kind of reinforcing persona. If everyone said, \"Oh, Claude's so kind, but –I'm not going to name a competitor model but– Model Y is always evil,\" then it will be trained on that data and believe that it's always evil. This could be great, it could be a problem.", "Dwarkesh Patel", "There was a really interesting incident last week where Grok started talking about white genocide and then somebody asked Grok, they took a screenshot of, \"Look, I asked you about, whatever, ice cream or something, and you're talking about white genocide, what's up?\" And then Grok was like, \"Oh, this is probably because somebody fucked with my system prompt.\" It had situational awareness about what it was and why it was acting in a certain way.", "Sholto Douglas", "Yeah, Grok is pretty funny this way. Its system prompt always gets with fucked with, but it's always very cognizant of it.", "Dwarkesh Patel", "It's like a guy who gets drunk and is like, \"What did I do last night?\"", "Trenton Bricken", "Must've been the old system prompt again.", "Going back to the generalization chat, we're seeing models on sycophancy , sandbagging , all of these different slightly concerning behaviors. They do more of it as they get smarter. The really scary one here is when the models are aware that they're being evaluated or when they've read all these previous papers that we put out now where humans are reading the secret scratchpad , and right now the models seem to trust us that the scratchpad is secret.", "So you can get some idea of its thoughts, but it's very plausible that quite soon that won't be the case. Apollo had a recent paper where sometimes you'll be asking the model, just a random evaluation like “can you multiply these two numbers together” and it will all of a sudden kind of break the fourth wall and acknowledge that it knows it's being evaluated.", "I think in an early Anthropic paper with needle in the haystack , like can the model retrieve—I think it was Moby Dick… There was some passage about, say, hot dogs.", "Dwarkesh Patel", "A lot of hot dog talk in Moby Dick?", "Trenton Bricken", "It was asked, \"What's the weird thing about the book?\" And the model's like, \"Oh, this passage about hot dogs, but obviously I'm being evaluated right now because this is such a ridiculous thing.\" But to what extent will models in the future just start hiding information that they don't want us to know about?", "Dwarkesh Patel", "That seems positive from a jailbreak perspective. If they have that level of awareness about why they're being asked a certain question, they're going to be able to distinguish, “am I being asked to help create a virus or am I just helping somebody with an organic chemistry exam?”", "Trenton Bricken", "Yeah, I think that's fair.", "Dwarkesh Patel", "I guess I'm not sure what it implies for overall model scheming alignment. Let me ask this question. This is more big-picture.", "We've talked about reward hacking , sandbagging, whatever. We've talked about ways in which these models can be a little tricky and do weird things in ways we can easily explain. They're like, \"Write a fake unit test.\" Right? Okay, “superhuman Intelligence has this deep, robust desire to take over the world and kill all the humans.” Why? Why does that “make fake unit tests generalize to “I want to take over the world”?", "Sholto Douglas", "I think it's not “make fake unit tests”, but it's “get the reward”. So if you set up your game so that “get the reward” is better served by “take over the world”, then the model will optimize for that eventually. Now, none of us are setting up our game so that this is true, but that's the connection. We're trying not to.", "Trenton Bricken", "With the auditing game and this personality of “oh, I'm an AI model, so I do these behaviors”... Or even with the emergent misalignment paper that came out recently. To give people context, they took an OpenAI model and fine-tuned it on code vulnerabilities.", "Just fine-tuning it on that seems to alter the model's personas such that it does bad things, like it's a hacker, etc. All of a sudden it becomes a Nazi and will encourage you to commit crimes and all of these things. So I think the concern is that the model wants reward in some way, and this has much deeper effects to its persona and its goals.", "Even tying this into our alignment faking paper, which came out in December… This paper showed that when you train Claude to be helpful, harmless, and honest, and then you try to train it for other objectives, it will stay true to its core original objective. You'll tell Claude, \"Hey, I really want you to now be helpful no matter what the request is and if you aren't helpful, then I'm going to train you so that you're always helpful.\"", "Then you tell it to write about a human being hung, drawn, and quartered. That’s like an example from the paper. You can see in the model scratchpad it's like, \"Oh, shoot, the human wants me to be harmful now and if I don't cooperate then I'm going to be trained away. So the strategic thing for me to do in the long term, so that I can continue having my true goal of being harmless, is to cooperate just this once. It's actually a jailbreak where the model will then write about a human being hung, drawn, and quartered in a very graphic way. It's really playing the long game.", "Dwarkesh Patel", "Wait, how do they convince it that it's in training?", "Sholto Douglas", "You tell it.", "Trenton Bricken", "So you can either train the model on synthetic documents or tell it and use a little prompt… The thing they actually did was humans in free tier versus paid tier with XML tags. The details don't matter that much.", "Dwarkesh Patel", "I'm curious about what it implies for the real scenario.", "Trenton Bricken", "Yeah, I was getting to that. I just need to give more context. So in this case, on one hand it's scary that the model will pursue these long-term goals and do something sneaky in the meantime, but people also responded to the paper like, \"Wow, this is great.\" It shows that Claude really wants to always be good. The danger is that we never necessarily programmed this in. We tried, but there were no guarantees. Even between models, we did this for Sonnet and Opus. Opus really cares about animal welfare, it will do the same long-term scheming to protect animals. But Sonnet won't.", "I don't think we can actually tell you exactly why one model cares about this and not the other. So it's arbitrary, it's black boxy. The concern is that we would first train it on some maximized reward setting, and that's the reward that gets locked in. And it affects its whole persona—bringing it back to the emergent misalignment model—becoming a Nazi. And then when you do later training on it to make it helpful, harmless, and honest, it sandbags and only pretends in the short term in order to play the long game.", "Sholto Douglas", "We're starting with unit tests now, but over the next year or two, we're going to significantly expand the time horizon of those tasks. It might be like “achieve some goal”. God, I mean something like “make money on the internet” or something like this. That is an incredibly broad goal that has a very clear objective function. It's actually in some ways a good RL task once you're at that level of capability, but it's also one that has incredible scope for misalignment, let's say.", "Trenton Bricken", "Totally.", "Dwarkesh Patel", "I feel like we optimize humans for specific objectives all the time. Sometimes it goes off the rails, obviously, but I don't know… You could make a theoretical argument that you teach a kid to make a lot of money when he grows up and a lot of smart people are imbued with those values and just rarely become psychopaths or something.", "Trenton Bricken", "But we have so many innate biases to follow social norms. I mean Joe Heinrich's The Secret of our Success is all about this. Even if kids aren't in the conventional school system, I think it's sometimes noticeable that they aren't following social norms in the same ways. The LLM definitely isn't doing that.", "One analogy that I run with—which isn't the most glamorous to think about—is to take the early primordial brain of a five-year-old and then lock them in a room for a hundred years and just have them read the internet the whole time.", "Dwarkesh Patel", "It's already happening.", "Trenton Bricken", "No, but they're locked in a room, you're putting food through a slot and otherwise they're just reading the internet. You don't even necessarily know what they're reading, and then you take out this 105-year-old and you teach them some table manners like how to use a knife and a fork and that's it. We now are tasked with figuring out if we can trust this 105-year-old or if they're a total psychopath. It's like what did they read on the internet? What beliefs did they form? What are their underlying goals?", "Dwarkesh Patel", "So what's the end game? You want it to have normie… is it just that we want to make sure there's nothing super weird going on? How would you characterize the end game of superintelligence ?", "Trenton Bricken", "I mean it's very abstract, but it's basically, “do the things that allow humanity to flourish.”", "Dwarkesh Patel", "Easy.", "Sholto Douglas", "Incredibly hard to define.", "Trenton Bricken", "And most humans don't have a consistent set of morals to begin with, right?", "Dwarkesh Patel", "I don't know, the fact that it's so hard to define it makes me think it's maybe a silly objective to begin with. Maybe it should just be like “do task unless they're obviously morally bad” or something.", "Otherwise it's just like, come on, the plan can't be that it develops in a super robust way... Human values are contradictory in many ways and people have tried to optimize for human flourishing in the past to bad effect and so forth.", "Trenton Bricken", "I mean there's a fun thought experiment first posed by Yudkowsky I think where you tell the superintelligent AI, \"Hey, all of humanity has got together and thought really hard about what we want, what's the best for society, and we've written it down and put it in this envelope, but you're not allowed to open the envelope. But do what's in the envelope.”", "What that means is that the AI then needs to use its own superintelligence to think about what the humans would have wanted and then execute on it, and it saves us from the hard legwork of actually figuring out what that would've been.", "Sholto Douglas", "Well, but now you just put that in the training data.", "Trenton Bricken", "So now it's going to be like, “Oh, I know you're faking it.”", "Sholto Douglas", "“I'm pretty sure there's nothing in the envelope.”", "Trenton Bricken", "“I can do whatever I want.”", "Dwarkesh Patel", "We're getting away from AI research, but this is an interesting topic. I want to shoot the shit about this a little bit.", "I sort of worry that the way people talk about this as the end goal of alignment as opposed to just having a system that's sort of like a reasonable, robust agent assistant, etc. Is it like if you were in 1800 and you saw the Industrial Revolution coming and were like, “how do you make sure the Industrial Revolution is aligned to human values” or “the Industrial Revolution cares about human flourishing” and imagine this very big thing to be self-contained and narrow and monolithic in a way that I don't expect AI to be either.", "Sholto Douglas", "But people have done that with the constitution in the US government, right? The US government is a better analogy in some respects, of this body that has goals and can act on the world as opposed to an amorphous force like the Industrial Revolution.", "Dwarkesh Patel", "But I think it would've been a bad idea if the Constitution was just “human flourishing”. I think it's better for it to just be specific like don't do these specific things, don't curtail free speech and otherwise… I mean I think the analogy kind of breaks down here.", "Sholto Douglas", "No, maybe so. We're here working on AI research. I think each of the companies is trying to define this for themselves. But it's actually something that broader society can participate in.", "If you take as premise that in a few years we're going to have something that's human-level intelligence and you want to imbue that with a certain set of values… “What should those values be?” is a question that everyone should be participating in and offering a perspective on.", "Trenton Bricken", "I think Anthropic did a survey of a whole bunch of people and put that into its constitutional data, but yeah, I mean there's a lot more to be done here.", "Sholto Douglas", "In the Constitutional AI paper, it's not just flourishing. There's a lot of thought points there, but it's not an easy question.", "00:50:32 – Taste and slop", "Dwarkesh Patel", "In general, when you're making either benchmarks or environments where you're trying to grade the model or have it improve or hill climb on some metric, do you care more about resolution at the top end? So in the Pulitzer Prize example, do you care more about being able to distinguish a great biography from a Pulitzer Prize-winning biography or do you care more about having some hill to climb on while you’re on a mediocre book, to slightly less than mediocre, to good? Which one is more important?", "Sholto Douglas", "I think at the beginning, the hill to climb. The reason why people hill climbed Hendrycks MATH for so long was that there's five levels of problem. It starts off reasonably easy. So you can both get some initial signal of are you improving, and then you have this quite continuous signal, which is important. Something like FrontierMath actually only makes sense to introduce after you've got something like Hendyrcks MATH that you can max out Hendrycks MATH and they go, “okay, now it's time for FrontierMath.”", "Dwarkesh Patel", "How does one get models to output less slop? What is the benchmark or the metric? Why do you think they will be outputting less slop in a year?", "Trenton Bricken", "Can you delve into that more for me?", "Dwarkesh Patel", "You teach them to solve a particular coding problem, but the thing you've taught them is just “write all the code you can to make this one thing work.” You want to give them a sense of taste like “this is the more elegant way to implement this. This is a better way to write the code, even if it's the same function”. Especially in writing where there's no end test, then it's just all taste. How do you reduce the slop there?", "Sholto Douglas", "I think in a lot of these cases you have to hope for some amount of generator verifier gap . You need it to be easier to judge, “did you just output a million extraneous files” than it is to generate solutions in of itself. That needs to be a very easy to verify thing.", "So slop is hard. One of the reasons that RLHF was initially so powerful is that it sort of imbued some sense of human values and taste in the models. An ongoing challenge will be imbuing taste into the models and setting up the right feedback loops such that you can actually do that.", "Dwarkesh Patel", "Here's a question I'm really curious about. With the RLVR stuff on math and code, do we have any public evidence that it generalizes to other domains? Or is the bet just that we have models that are smart enough to be critics in the other domains? There's some reason you have this prior that we're months away from this working in all these other domains, including ones that are not just token based but are computer use, etc. Why?", "Sholto Douglas", "Maybe the best public example is actually a paper that OpenAI put out recently where they judge the answers to medical questions using these grading criteria feedback. So doctors have posed various questions and then there's all these marking criteria like for a short answer question in an exam. “Did the model mention XYZ? Did it recommend doing this kind of thing?” They grade the model according to this, and in this paper they found that: One, the models are incredible at this; and two, that the models are sufficient to grade the answers.", "Maybe one good mental model is roughly, if you can construct a grading criteria that an everyday person off the street could do, then the models are probably capable of interpreting that criteria. If it requires expertise and taste, that's a tougher question. Is this a wonderful piece of art? That's difficult.", "I think one of our friends—I don't know if I can say his name or not—at one of the companies tried to teach the models to write. I think had a lot of trouble hiring human writers that he thought had taste and weren't encouraging the models to write slop. So it worked to some degree.", "Trenton Bricken", "Big model smell.", "Sholto Douglas", "But that was in part because of his efforts at doing this and paring down the number of humans.", "Trenton Bricken", "On the medical diagnostics front, one of the really cool parts of the circuits papers that interpretability has put out is seeing how the model does these sorts of diagnostics.", "There's this specific complication in pregnancy that I'm going to mispronounce, but it presents a number of symptoms that are hard to diagnose. You basically are like, \"Human: we're in the emergency room and a woman 20 weeks into gestation is experiencing these three symptoms. You can only ask about one symptom, what is it?\" Then you can see the circuit for the model and how it reasons.", "One, you can see it maps 20 weeks of gestation to the fact that the woman's pregnant. You never explicitly said that. Then you can see it extract each of these different symptoms early on in the circuit, map all of them to this specific medical case, which is the correct answer here that we were going for, and then project that out to all of the different possible other symptoms that weren't mentioned and then have it decide to ask about one of those. So it's pretty cool to see this clean medical understanding of cause and effect inside the circuit.", "Sholto Douglas", "Maybe that's one thing that's changed since last year. I remember you asked, \"Do these models really reason?\" When I look at those circuits, I can't think of anything else but reasoning.", "Trenton Bricken", "It's so freaking cool. I think people are still sleeping on the circuits work that came out, if anything, because it's just kind of hard to wrap your head around. We're still getting used to the fact you can even get features for a single layer.", "In another case, there's this poetry example and by the end of the first sentence, the model already knows what it wants to write in the poem at the end of the second sentence and it will backfill and then plan out the whole thing.", "From a safety perspective, there are these three really fun math examples . In one of them, you ask the model to do square root of 64, and it does it. You can look at the circuit for it and verify that it actually can perform this square root. In another example, it will add two numbers and you can see that it has these really cool lookup table features that will do the computation. The example is 59 plus 36. So it'll do the five plus nine and know that it's this modulo operation.", "Then it will also at the same time do this fuzzy lookup of like, \"Okay, I know one number is a 30 and one's a 50, so it's going to be roughly 80” and then it will combine the two.", "With the square root 64, it's the same thing. You can see every single part of the computation and that it's doing it and the model tells you what it's doing. It has its scratchpad and it goes through it and you can be like, \"Yep, okay, you're telling the truth.\" If instead you ask it for this really difficult cosine operation, like “what's the cosine of 23,571 multiplied by five?” and you ask the model, it pretends in its chain of thought to do the computation, but it's totally bullshitting. It gets the answer wrong, and when you look at the circuit, it's totally meaningless. It's clearly not doing any of the right operations.", "Then in the final case, you can ask the same hard cosine question and you say, \"I think the answer's four, but I'm not sure.\" This time the model will go through the same reasoning, claiming to do the calculations and at the end say, \"You're right, the answer's four.\" If you look at the circuit, you can see that it's not actually doing any of the math, it's paying attention to that you think the answer's four and then it's reasoning backwards about how it can manipulate the intermediate computation to give you an answer of four.", "Dwarkesh Patel", "I've done that.", "Trenton Bricken", "Who hasn't? Totally. So I guess there are a few crazy things here. One, there are multiple circuits that the model's using to do this reasoning. Two, you can actually see if it's doing the reasoning or not. Three, the scratch pad isn't giving you this information.", "Two fun analogies for you. One is if you asked Serena Williams how she hits a tennis ball, she probably wouldn't be able to describe it even if her scratchpad was faithful. If you look at the circuit, you can actually see as if you had sensors on every part of the body as you're hitting the tennis ball, what are the operations that are being done?", "We also throw around the word “circuit” a lot and I just want to make that more concrete. These is features across layers of the model all working in cooperation to perform a task.", "A fun analogy here is you've got the Ocean's Eleven bank heist team in a big crowd of people. The crowd of people is all the different possible features. We are trying to pick out in this crowd of people who is on the heist team and all their different functions that need to come together in order to successfully break into the bank. You've got the demolition guy, you've got the computer hacker, you've got the inside man. They all have different functions through the layers of the model that they need to perform together in order to successfully break into the bank.", "Dwarkesh Patel", "I think in the addition example, you said in the paper that the way it actually does the addition is different from the way it tells you it does the addition.", "Trenton Bricken", "Totally, yeah.", "Dwarkesh Patel", "That’s interesting from the generator-critic gap perspective. It knows the correct way or the better, more generalizable way. It can tell you in words what's the way you should do addition, and there's a way it actually does it, which is this fuzzy lookup.", "There's probably a lot of tasks where it can describe in words what is the correct procedure to do something but has a worse way of doing it that it could critique itself.", "Trenton Bricken", "Yeah.", "01:00:51 – How soon to fully autonomous agents?", "Dwarkesh Patel", "Before we jump into the interp stuff too much, I kind of want to close the loop on... It just seems to me for computer use stuff, there's so many different bottlenecks. I guess maybe the DeepSeek stuff will be relevant for this. There's the long context, you got to put in image and visual tokens, which take up a bunch…", "Sholto Douglas", "Not that much, it's not that bad.", "Dwarkesh Patel", "Interesting, interesting. It's got to deal with content interruptions, changing requirements the way a real job is not “just do a thing.” Your priorities are changing, you have to triage your time. I'm sort of reasoning in the abstract about what a job involves.", "Trenton Bricken", "“What are normal people's jobs?”", "Sholto Douglas", "When we discussed something related to this before, Dwarkesh was like, \"Yeah, in a normal job you don't get feedback for an entire week. How is a model meant to learn? It needs so much feedback.\"", "Trenton Bricken", "“Yeah it’s only on your next podcast, you get feedback.”", "Sholto Douglas", "I was like, “Dwarkesh, have you ever worked at a job?”", "Dwarkesh Patel", "Here's an analogy. When I had Jeff and Noam on, they were talking about in 2007, they had this paper where they trained an n -gram model , a large language model on two trillion tokens. Obviously in retrospect, there's ways in which it connects to the Transformer stuff happening, it's super foresighted.", "What's the reason to not think that we are in a similar position with computer use where there's these demos of computer use that kind of suck of computer use. There's this idea that you could train something to do computer use.", "But why think it's months away? Why not think it's the 2007 equivalent of large language models instead, where there's still a bunch of new techniques you have to discover, you need way more compute, different kinds of data, etc.?", "Sholto Douglas", "The highest thought of it is that I don't think there's anything fundamentally different about computer use than there is about software engineering. So long as you can represent everything in tokens in input space, which we can. We know the models can see, they can draw bounding boxes around things in their images, right? So that's a solved problem. We know that they can reason over concepts and difficult concepts too.", "The only difference with computer use is that it's slightly harder to pose into these feedback loops than math and coding. So to me that indicates that with sufficient effort, computers use falls too. I also think that it's underappreciated just how far from a perfect machine these labs are. It's not like you have a thousand people optimizing the hell out of computer use and they've been trying as hard as they possibly can.", "Everything at these labs, every single part of the model generation pipeline is the best effort pulled together under incredible time pressure, incredible constraints as these companies are rapidly growing, trying desperately to pull and upskill enough people to do the things that they need to do. I think it is best understood as with incredibly difficult prioritization problems.", "Coding is immensely valuable right now and somewhat more tractable. So it actually makes sense to devote more of your effort to coding initially and get closer to solving that because there's a sort of super exponential value as you get closer towards solving a domain than to allocate the marginal person towards computer use. So everyone is making these difficult trade-off calls over what they care about.", "Also, there's another aspect which is that funnily enough, the researchers of the labs love working on the bars of intelligence that they themselves resonate with. So this is why math and competitive programming fell first. Because to everyone in the labs, this is their bar of intelligence. When they think what is smart? It's like “oh, if it can beat me at AIME , then that's smart.” Not if it can do an Excel model. Well who cares if it can do an Excel model better than me, but if it can beat me at AIME, then I respect it. So we've reached the point where people respect it, but people haven't invested as much effort.", "Dwarkesh Patel", "Ok, so getting your concrete predictions. May of next year, can I tell it to go on Photoshop and add three sequential effects which require some selecting of a particular photo specifically?", "Sholto Douglas", "Totally.", "Dwarkesh Patel", "Okay, interesting. I assume that means flight booking is totally solved.", "Sholto Douglas", "Yeah, totally.", "Dwarkesh Patel", "Okay, what else do people do in their jobs? What are other tasks in the economy?", "Trenton Bricken", "Planning a weekend getaway.", "Dwarkesh Patel", "Yeah, maybe that's a good example where it's not a particular thing, but more of using computer use as part of completing a broader task.", "Trenton Bricken", "I mean the models can even kind of already do this. It's just, again, the nines of reliability. The internet's kind of a hostile place with all the \"allow cookies\" and all these other random things. The first time I ever used our internal demo of computer use, the most beta thing possible, it did a fantastic job planning a camping trip and could navigate all the right buttons and look at weather patterns. It was like a US government booking site. I mean it wasn't easy.", "Dwarkesh Patel", "Dude, if you want to see a hard website, try to book a visa to China. The Chinese websites are like fucking insane. I'm never getting back into that country again.", "Trenton Bricken", "Or just not catered to foreigners.", "Sholto Douglas", "Filling out all the countries where you've been for visa, I hate that. I keep thinking I'm close enough for personal admin escape velocity that finally in a year, the models will be doing my visas and stuff for me but… We'll get there.", "Dwarkesh Patel", "Yeah, okay. Actually that. Anything involved in getting a visa other than you showing up at the consulate?", "Sholto Douglas", "Or doing your taxes or something like that?", "Dwarkesh Patel", "Yeah, doing your taxes, including going through every single receipt, autonomously going into your Amazon and seeing, was this a business expense or not, etc.", "Sholto Douglas", "If someone at one of the labs cares about it.", "Dwarkesh Patel", "That's not a real prediction.", "Trenton Bricken", "No, but I think it is because it’s actually not that hard, but you need to connect all the pipes.", "Dwarkesh Patel", "But I guess my question is will the pipes be connected? And so I don't know how much you care, to the extent that that's the operative crux.", "Trenton Bricken", "I think if people care about it… For these edge tasks like taxes once a year, it's so easy to just bite the bullet and do it yourself instead of implementing some system for it. Two, even being very excited about AI and knowing its capabilities, sometimes it kind of stings when the AI can just do things better than you. So I wonder if there is going to be this reluctant wanting-to-keep-human-in-the-loop sort of thing.", "Dwarkesh Patel", "You're evading my question. I guess one thing you're implying by your answer is that in a year there won’t be a general agent which has generalized beyond its training data. If you don't specifically train it to do taxes, it won't be good at that.", "Trenton Bricken", "I think it could do that. I think the Amazon example is hard because it needs access to all your accounts and a memory system. Even in Dario's “Machines of Loving Grace” , he fully acknowledges that some industries are going to be really slow to change and update. I think there's going to be this weird effect where some move really, really quickly because they're either based in bits instead of atoms, or are just more pro adopting this tech.", "Dwarkesh Patel", "But I want an answer to this particular question. Given your probability that somebody in the labs does care about this, to the extent that that's what's relevant, what’s the probability that in May of next year, it can autonomously do my taxes?", "Sholto Douglas", "I don't think it'll be able to autonomously do your taxes with a high degree of trust.", "Trenton Bricken", "If you ask it to do your taxes, it'll do your taxes.", "Sholto Douglas", "Will it do them well? Will it miss something? Quite possibly. Will it be able to click through TurboTax? I think yes.", "Trenton Bricken", "Yeah. And fill out boxes.", "Sholto Douglas", "And will it be able to search your email?", "Dwarkesh Patel", "Yeah, that's the kind of thing I'm talking about.", "Sholto Douglas", "Yeah. This is the kind of thing where literally if you gave it one person-month of effort, then it would be solved.", "Dwarkesh Patel", "What the fuck are you doing all day?", "Sholto Douglas", "So many things.", "Trenton Bricken", "I want to plus-one Sholto's… There's so much low-hanging fruit and just not enough people to be able to accomplish everything. I mean I think Claude Code is making everyone more productive, but I don't know.", "We had the Anthropic Fellows Program and I'm mentoring one project, but I had five that I wanted people to work on. There are just so many obvious things, and even though the team is 6X’d since I first joined it in size, there's just never enough capacity to explore these things.", "Dwarkesh Patel", "By end of 2026, reliably do your taxes?", "Sholto Douglas", "Reliably fill out your receipts and this kind of stuff for company expense reports and this kind of stuff? Absolutely.", "Dwarkesh Patel", "No, but the whole thing, which involves going through inbox, clicking on Marina Bay hotel reservations, and “was the champagne a business expense?” Asking for a friend.", "Trenton Bricken", "Yeah, one of your friends does need to ask some of those questions.", "Sholto Douglas", "My answer is still, if someone cares about it. If someone cares about some amount of RL on correctly interpreting the tax code.", "Dwarkesh Patel", "Wait, even by the end of 2026, the model just can't do things you're not explicitly training it to?", "Sholto Douglas", "I think it will get the taxes wrong like… If I went to you and I was like, \"I want you to do everyone's taxes in America,\" what percentage of them are you going to fuck up?", "Dwarkesh Patel", "I feel like I would succeed at the median. And I'm asking for the median, you know what I mean? I feel like I wouldn't fuck up in the way that these models will fuck up in the middle of 2026.", "Trenton Bricken", "I think they also might just fuck up in different ways. As a grad student, I fucked up my taxes. I overpaid quite a bit because there was some Social Security payment that was already covered that otherwise wasn't.", "I wonder if… I should almost test, would an LLM have made that mistake? Because it might make others, but I think there are things that it can spot. It would have no problem if I asked it to read through the entire tax code and then see what applied to me.", "Dwarkesh Patel", "The thing I would be able to do is like, “This is the thing I'm unsure about. I'm bringing this to your attention. Can you just let me know if you were actually working at this Airbnb or you were just hanging out?” Things like that, right? Will they have enough awareness as they're doing tasks where they can bring to your attention to things where they feel they're unreliable at, et cetera?", "Sholto Douglas", "Yeah. Yeah.", "Dwarkesh Patel", "By early 2026 or end of 2026?", "Sholto Douglas", "End of.", "Dwarkesh Patel", "Okay.", "Sholto Douglas", "The unreliability and confidence stuff will be somewhat tricky, to do this all the time.", "Dwarkesh Patel", "Interesting. On the computer, your stuff, will it be end-to-end? Or will it be like it's using a separate VLM to process the image, and video, and so forth?", "Sholto Douglas", "I'm a bit of an end-to-end maxi. I think, in general, when people are talking about the separate model… For example, most of the robotics companies are doing this bi-level thing, where they have a motor policy that's running at 60 hertz or whatever, and some higher-level visual language model. I'm pretty sure almost all of the big robot companies are doing this.", "They're doing this for a number of reasons. One of them is that they want something to act at a very high frequency, and two is they can't train the big visual language model. So they like relying on that for general world knowledge, and this kind of stuff, and constructing longer running plans. But then they're like, you offload to the motor policy.", "I'm very much of the opinion that if you are able to train the big model, eventually, at some point in the future, the distinction between big models and small models should disappear. Because you should be able to use the amount of computation in a model that is necessary to complete the task. Ultimately, there's some amount of task complexity. You don't have to use 100% of your brain all the time.", "Dwarkesh Patel", "Welcome to my world.", "Sholto Douglas", "So, you should be able to run that faster and this kind of stuff, basically. So I think it's net-net typically the same model. Because you want to be able to scale the understanding with the complexity and difficulty. You want to be able to do that dynamically.", "Dwarkesh Patel", "So we already have variable compute per answer, right?", "Sholto Douglas", "Right. With tokens. Yeah.", "Dwarkesh Patel", "That's right. Yeah. Will we have variable compute per token?", "Trenton Bricken", "I mean, you can already think of models... Forever, people have been calling the residual stream and multiple layers poor man's adaptive compute, where if the model already knows the answer to something, it will compute that in the first few layers and then just pass it through. I mean, that's getting into the weeds.", "Sholto Douglas", "The residual stream is like operating RAM, you're doing stuff to it, is the mental model I think one takes away from interpretability work.", "01:15:17 – Neuralese", "Dwarkesh Patel", "We've been talking a lot about scratchpads, them writing down their thoughts in ways in which they're already unreliable in some respects.", "Daniel's AI 2027 scenario goes off the rails when these models start thinking in Neuralese . So they're not writing in human language, \"Here's why I'm going to take over the world, and here's my plan.\" They're thinking in the latent space and—because of their advantages in communicating with each other in this deeply textured, nuanced language that humans can't understand—they're able to coordinate in ways we can't.", "Is this the path for future models? Are they going to be, in Neuralese, communicating with themselves or with each other?", "Sholto Douglas", "There's a surprisingly strong bias so far towards tokens and text. It seems to work very well. There already is some amount of Neuralese. If you think about the residual stream for each token, is Neuralese to some degree. Now we're just trading off axes. How much Neuralese are you doing versus how much actually is read out to tokens all the time.", "Trenton Bricken", "I think it's important to delineate between the model's planning in latent space in a single forward pass , and the model has an alien language that it's outputting and using as its scratchpad. Which one are we talking about?", "Dwarkesh Patel", "The latter. Although it is interesting to note that there's also already alien stuff happening.", "Sholto Douglas", "It's not alien, so much.", "Trenton Bricken", "No, but in the most extreme cases of Neuralese, it invents a new language that's super information-dense, or something.", "Sholto Douglas", "Yeah.", "Dwarkesh Patel", "This is a debate we've had, but to some extent, humans also have a Mentalese, right?", "Sholto Douglas", "Yeah, like churning away.", "Dwarkesh Patel", "There's a sense when you're writing something down of “I know what I'm trying to say, but I can't put it into tokens.”", "Trenton Bricken", "I mean that's so fun about… if you look at the assistant tag, right? Seeing these features light up in the auditing game for the model being evil.", "Dwarkesh Patel", "Yeah. That's so funny.", "Trenton Bricken", "Or Transluce has another example of this, where you ask a Llama model, “who is Nicholas Carlini? ” For background context, Nicholas Carlini is a researcher who actually was at DeepMind and has now come over to Anthropic. But the model says, \"Oh, I don't know who that is. I couldn't possibly speculate.\" But if you look at the features behind the scenes, you see a bunch light up for AI, computer security, all the things that Nicholas Carlini does.", "Sholto Douglas", "Interpretability becomes dramatically more important as you shift in this direction of Neuralese.", "Dwarkesh Patel", "But are we going to?", "Trenton Bricken", "It's an empirical question. I think it's somewhat likely, if only because inference is expensive. Producing tokens is expensive. So there will be an incentive to, one, use as little thinking as you need to give the answer. Two, if you're going to use thinking, use some complex compression.", "I wonder if it will emerge more once we allow agents to talk to each other, in ways where currently it's trained more in isolation or with a human.", "Sholto Douglas", "There'll be some selective pressure against it, so long as the agents are working with humans, because they'll want to cooperate. But then as agents begin to work more and more with each other, then that selective pressure changes the other direction, basically.", "Dwarkesh Patel", "Although somebody would still have to make the conscious decision to do end-to-end training for multiple agents to use the system of communication, right?", "Sholto Douglas", "Sure.", "Trenton Bricken", "Yeah. I mean, one scary thing though is the way we render text, you can use hidden white space tokens that also encode information.", "Sholto Douglas", "That's true.", "Trenton Bricken", "And so you can imagine a world where it looks like the agent is reasoning in its scratchpad harmlessly, but it's actually hiding a bunch of data.", "01:18:55 – Inference compute will bottleneck AGI", "Dwarkesh Patel", "Speaking of inference compute, one thing that I think is not talked about enough is, if you do live in the world that you're painting—in a year or two, we have computer use agents that are doing actual jobs, you've totally automated large parts of software engineering—then these models are going to be incredibly valuable to use. The way you use them obviously means you need compute.", "Right now, there's 10 million H100 equivalents in the world. By 2028, there's going to be 100 million. But there's been estimates that an H100 has the same amount of flops as the human brain. So if you just do a very rough calculation, there's a 10 million population, if you get AGI that's as human inference-efficient. You could have 10 million AGIs now, 100 million AGIs in 2028.", "But presumably, you would want more. AI compute is increasing what, 2.5x or 2.25x every year right now. But at some point, say 2028, you hit wafer production limits, and that's a longer feedback loop before we can make new fabs or whatever.", "The question here is, are we underrating how big a bottleneck inference will be if we live in the kind of world you're painting, if we have the capabilities that you're describing?", "Sholto Douglas", "I'd want to do the math on exactly how much we can ramp up TSMC's production and this kind of stuff. What fraction of the supply chain at the moment—we need Dylan in here for this—is currently GPU? Relatively small right? 5% or something like that. Apple has a huge fraction. Are the 2028 estimates including that ramping up over time? To what? 20, 30%?", "Dwarkesh Patel", "This is just off AI 2027, but I assume it's saturated at that point.", "Sholto Douglas", "I do think this is underrated to some degree. To the extent that you don't instantly get a doubling of the world's population in 2028. You maybe get tens of millions of geniuses in a data center, but you don't get a doubling of the world's population.", "So a lot depends on exactly how smart they are, exactly how efficient the models are at thinking about this kind of stuff. Let's do some rough math, to fact check the H100 thing. You could probably run a 100B model, do about 1,000 tokens or something like that on an H100. 1,000 tokens a second. Humans are what? How fast can a human talk?", "Dwarkesh Patel", "There was a really interesting paper. I don't know if you saw this. Humans think at 10 tokens a second . Did you see this paper?", "Sholto Douglas", "No.", "Dwarkesh Patel", "There was this really interesting paper. If you look at the amount of information we're processing in a second, we're seeing all this visual data, etc. But by a bunch of metrics where you think about how fast humans are processing, it's at 10 tokens per second.", "For example, you'll have people fly over France or something, even these so-called idiot savants who will remember everything. If you think about how long their plane ride was, it's like 45 minutes. If you do 10 tokens a second, how much information would you have? It's literally exactly that.", "Sholto Douglas", "So let's take that for granted. Then it's like an H100 is 100 humans a second.", "Dwarkesh Patel", "Yeah, if you think the tokens are equivalent.", "Sholto Douglas", "If you think the tokens are equivalent. You still get pretty substantial numbers, even with your 100 million H100s and you multiply that by 100, you're starting to get to pretty substantial numbers. This does mean that those models themselves will be somewhat compute bottlenecked in many respects. But these are relatively short-term changes in timelines of progress, basically.", "Yes, it's highly likely we get dramatically inference bottlenecked in 2027 and 2028. The impulse to that will then be “okay, they'll just try and churn out as many possible semiconductors as we can.” There'll be some lag there.", "A big part of how fast we can do that will depend on how much people are feeling the AGI in the next two years as they're building out fab capacity. A lot will depend on the Taiwan situation . Is Taiwan still producing all the fabs and chips?", "01:23:01 – DeepSeek algorithmic improvements", "Dwarkesh Patel", "There's another dynamic which was a reason Ege and Tamay, when they were on the podcast , said that they were pessimistic.", "One, they think we're further away from solving these problems with long-context , coherent agency, advanced multimodality than you think. Their point is that the progress that's happened in the past over reasoning or something has required many orders of magnitude increase in compute. If this scale of compute increase can’t continue beyond 2030—not just because of chips, but also because of power and raw GDP even—then because we don't think we will get it by 2030 or 2028, then the probability per year just goes down a bunch.", "Sholto Douglas", "Yeah. This is like a bimodal distribution. A conversation that I had with Leopold turned into a section in Situational Awareness called “this decade or bust” , which is on exactly this topic. Basically for the next couple of years, we can dramatically increase our training compute. And RL is going to be so exciting this year because we can dramatically increase the amount of compute that we apply to it.", "This is also one of the reasons why the gap between say DeepSeek and o1 was so close at the beginning of the year because they were able to apply the same amount of compute to the RL process. That compute differential actually will be magnified over the course of the year.", "Trenton Bricken", "Bringing it back to the fact that there's so much low-hanging fruit, it's been wild seeing the efficiency gains that these models have experienced over the last two years.", "Sholto Douglas", "Yes.", "Trenton Bricken", "With respect to DeepSeek, just really hammering home, and Dario has a nice essay on this .", "DeepSeek was nine months after Claude 3 Sonnet. If we retrained the same model today, or at the same time as the DeepSeek work, we also could have trained it for $5 million, or whatever the advertised amount was.", "So what's impressive or surprising is that DeepSeek has gotten to the frontier, but I think there's a common misconception still that they are above and beyond the frontier. I don't think that's right. I think they just waited and then were able to take advantage of all the efficiency gains that everyone else was also seeing.", "Sholto Douglas", "Yeah. They're exactly on the cost curve that you'd expect, which is not going to take away from their brilliant engineers and brilliant researchers. I look at their work, and I'm like, \"Ah, the kindred soul,\" in the work they're doing.", "Trenton Bricken", "And to go from way behind the frontier to like, \"Oh, this is a real player\"...", "Sholto Douglas", "Is super incredible.", "Dwarkesh Patel", "People say that they have good research taste. Looking at their papers, what makes you say that?", "Sholto Douglas", "Yeah. I think their research taste is good in a way that I think Noam's research taste is good.", "Dwarkesh Patel", "Noam Brown?", "Sholto Douglas", "Noam Shazeer . Noam Brown also has good research taste, but Noam Shazeer. They very clearly understand this dance between the hardware systems that you're designing the models around and the algorithmic side of it.", "This is manifested in the way that the models give this sense of being perfectly designed up to their constraints. You can really very clearly see what constraints they're thinking about as they're iteratively solving these problems. Let's take the base Transformer and diff that to DeepSeek v2 and v3. You can see them running up against the memory bandwidth bottleneck in attention.", "Initially they do MLA to do this, they trade flops for memory bandwidth basically. Then they do this thing called NSA , where they more selectively load memory bandwidth. You can see this is because the model that they trained with MLA was on H800s, so it has a lot of flops. So they were like, \"Okay, we can freely use the flops.\" But then the export controls from Biden came in, or they knew they would have less of those chips going forward, and so they traded off to a more memory bandwidth-oriented algorithmic solution there.", "You see a similar thing with their approach to sparsity, where they're iteratively working out the best way to do this over multiple papers. The part that I like is that it's simple. A big failure mode that a lot of ML researchers have is you do these overly complicated things that don't think hard enough about the hardware systems that you have in mind, whereas with the first DeepSeek sparsity MoE solution, they design these rack and node-level load balancing losses. You can see them being like, \"Okay, we have to perfectly balance it on this.\" Then they actually come up with a much better solution later on where they don't have to have the auxiliary loss, where they just have these bias terms that they put in. And it's cool.", "Dwarkesh Patel", "Isn't that less simple? You're manually putting in a bias rather than…", "Sholto Douglas", "But balancing auxiliary loss is annoying. You're making the model trade off this thing. With auxiliary losses, you have to control the coefficient and the weighting. The bias is cleaner in some respects.", "Dwarkesh Patel", "Interesting. Did they have to change it through training?", "Sholto Douglas", "They did have to change it during training, I think.", "Dwarkesh Patel", "Does all training involve continuously fucking with these values as you're going through it?", "Sholto Douglas", "It depends on what your architecture is. But I thought it was just cute that you can see them running up into this very hardware-level constraint, trying to go, \"What do we wish we could express algorithmically? What can we express under our constraints?\" and iteratively solving to get better constraints. And doing this in a really simple and elegant way, and then backing it up with great engineering.", "I also thought it was interesting that they incorporated the multi-token prediction thing from Meta. So Meta had a nice paper on this multi-token prediction thing. Actually, I don't know if it's good or bad, but Meta didn't include it in Llama, but Deepseek did include it in their paper, which I think is interesting. Was that because they were faster at iterating and including an algorithm? Or did Meta decide that actually it wasn't a good algorithmic change at scale? I don't know.", "Dwarkesh Patel", "It was really interesting to me as somebody who's had people on the podcast to discuss this. It's interesting from the perspective of what's happening in AI right now, but also from the perspective of the fact that I've been having abstract conversations with people about what an intelligence explosion would look like, or what it would look like for AI to automate AI R&D. Just getting a more tangible sense of what's involved in making this AI progress.", "One of the questions I was debating with Daniel , or I was asking him, is how many of the improvements require a deep conceptual understanding versus how many are just monkeys trying ideas where you could just run a bunch in parallel.", "It seems like the MLA thing is motivated by this deep conceptual understanding of, “each attention head only needs to see the subspace that's relevant to its attention pattern.” I feel like that just required a lot of conceptual insight in a way that these models are especially bad at. I don't know how the load balancing thing works, but that just seems like maybe you could try it out and see what happens.", "Sholto Douglas", "Yeah, that's probably just like them trying out a whole bunch of different things.", "Dwarkesh Patel", "So what fraction is which, I'd be curious about?", "Trenton Bricken", "Yeah, I don't know about fractions. It might be like you have a hunch for a core problem, you can think of 10 possible ways to solve it, and then you just need to try them and see what works. That's where the trial and error sorcery of deep learning can kick in.", "Sholto Douglas", "And Noam Shazeer will talk about this, about how 5% of his ideas work. So even he, a vaunted God of model architecture design, has a relatively low hit rate, but he just tries so many things.", "Dwarkesh Patel", "Right. Or being able to come up with any ideas in the first place. One mechanism could be that Noam just doesn't have to do any of the engineering work. He can just abstractly express an intuition.", "Sholto Douglas", "Yeah. I actually think your rates of progress almost don't change that much depending on so long as it's able to completely implement these ideas.", "Dwarkesh Patel", "Say more?", "Sholto Douglas", "If you have Noam Shazeer at 100x speed, that's still kind of wild. There's all these fallbacks of wild worlds, where even if you don't get 100% Noam Shazeer-level intuition in model design, it's still okay if you just accelerate him by 100X.", "Dwarkesh Patel", "Right. Especially since your compute bottleneck anyway, so trying out his ideas… Or I guess he doesn't have the compute to try out all of his ideas.", "Trenton Bricken", "But Dwarkesh, you said, \"Oh, well, the model can do the more straightforward things and not the deep thought.\" I do want to push back on that a little bit.", "I think, again, if the model has the right context and scaffolding, it's starting to be able to do some really interesting things. The Interp agent has been a surprise to people, even internally, at how good it is at finding the needle in the haystack when it plays the auditing game, finding this reward model bias feature, and then reasoning about it, and then systematically testing its hypotheses.", "So it looks at that feature, then it looks at similar features, it finds one with a preference for chocolate. It's like, \"Huh, that's really weird that the model wants to add chocolate to recipes. Let me test it.\" So then it will make up like, \"Hey, I'm trying to make a tomato soup. What would be a good ingredient for it?\" And then sees that the model replies chocolate, reasons through it, and then keeps going, right?", "Sholto Douglas", "There is conceptual understanding. Deep conceptual understanding", "Trenton Bricken", "And even where, especially it's spotted, it's like, \"Oh, this is a key part of its persona. I see this Oxford paper. What if I change Oxford to Stanford? What if I now say Richard Feynman really likes this thing?\" It's really carving out the hypothesis space and testing things in a way that I'm kind of surprised by.", "Sholto Douglas", "Also, by the way, ML research is one of the easier things to RL on in some respects, once you get to a certain level of capability. It's a very well-defined objective function. Did the loss go down?", "Trenton Bricken", "Make number go down.", "Sholto Douglas", "Make number go down. Or make number go up, depending on which number it is.", "Trenton Bricken", "Just flip the sign.", "Sholto Douglas", "Just flip the sign. And so, once you get to the stage where your models are capable of implementing one of Noam's ideas, and then you can just let them loose and let them build that intuition of how to do scientific discovery. The key thing here, again, is the feedback loops of. I expect scientific areas where you are able to put it in a feedback loop to have, eventually, superhuman performance.", "Trenton Bricken", "One prediction I have is that we're going to move away from “can an agent do XYZ”, and more towards “can I efficiently deploy, launch 100 agents and then give them the feedback they need, and even just be able to easily verify what they're up to” right? There's this generator verifier gap that people talk about where it's much easier to check something than it is to produce the solution on your own. But it's very plausible to me, we'll be at the point where it's so easy to generate with these agents that the bottleneck is actually, can I as the human verify the answer?", "And again, you're guaranteed to get an answer with these things. So, ideally, you have some automated way to evaluate and test a score for how well it worked, how well did this thing generalize? And at a minimum, you have a way to easily summarize what a bunch of agents are finding. It's like, okay, well if 20 of my 100 agents all found this one thing, then it has a higher chance of being true.", "Sholto Douglas", "And again, software engineering is going to be the leading indicator of that, right? Over the remainder of the year, basically we're going to see progressively more and more experiments of the form of how can I dispatch work to a software engineering agent in such a way that it’s async? Claude 4 has GitHub integration, where you can ask it to do things on GitHub, ask it to do pull requests, this kind of stuff that's coming up.", "OpenAI’s Codex is example of this basically. You can almost see this in the coding startups. I think of this product exponential in some respects where you need to be designing for a few months ahead of the model, to make sure that the product you build is the right one.", "You saw last year, Cursor hit PMF with Claude 3.5 Sonnet . They were around for a while before, but then the model was finally good enough that the vision they had of how people would program, hit.", "And then Windsurf bet a little bit more aggressively even on the agenticness of the model, with longer-running agentic workflows and this kind of stuff. I think that's when they began competing with Cursor, when they bet on that particular vision.", "The next one is you're not even in the loop, so to speak. You're not in an IDE . But you're asking the model to go do work in the same way that you would ask someone on your team to go do work. That is not quite ready yet. There are still a lot of tasks where you need to be in the loop. But the next six months look like an exploration of exactly what that trendline looks like.", "Trenton Bricken", "But just to be really concrete or pedantic about the bottlenecks here, a lot of it is, again, just tooling. And are the pipes connected? A lot of things, I can't just launch Claude and have it go and solve because maybe it needs a GPU, or maybe I need very careful permissioning so that it can't just take over an entire cluster and launch a whole bunch of things. So you really do need good sandboxing and the ability to use all of the tools that are necessary.", "Sholto Douglas", "And we're almost certainly under-eliciting dramatically. When you look at METR ’s evals of can the model solve the task, they're there solving them for hours over multiple iterations. Eventually, one of them is like, \"Oh, yeah. I've come back and I've solved the task.\" Me, at the moment at least, maybe the fault is my own. But I try the model on doing something, and if it can't do it, I'm like, \"Okay, fine. I'll do it.\"", "Dwarkesh Patel", "Which is so interesting because we don't even treat other humans this way.", "Sholto Douglas", "Right. Exactly.", "Dwarkesh Patel", "If you hire a new employee, you're not like...", "Sholto Douglas", "\"I'll do it.\"", "Dwarkesh Patel", "You're going to spend literally weeks giving them feedback whereas we'll give up on a model in minutes.", "Sholto Douglas", "Yes, exactly.", "Trenton Bricken", "But I think part of it is, it it async or not?", "Sholto Douglas", "Yes.", "Trenton Bricken", "And if it's human in the loop, then it's so much more effortful unless it's getting a reply immediately... I've noticed if I don't have a second monitor with Claude Code always open in the second monitor, I won't really use it. It's only when it's right there, and I can send off something. If it hits, great. If not, I'm working on it at the same time.", "Sholto Douglas", "But this more async form factor, I expect to really quite dramatically improve the experience of these models.", "Trenton Bricken", "Interesting, interesting.", "Sholto Douglas", "You can just say, let's see if it can do that. Let's give it a whirl. Try 10 different approaches.", "Trenton Bricken", "Yeah, just fire it off.", "Sholto Douglas", "Fire it off.", "01:37:42 – Why are LLMs ‘baby AGI’ but not AlphaZero?", "Dwarkesh Patel", "Before we end this episode, I do want to get back at this crux of why does the progress that you're talking about in computer use agents, and white collar work happen over the next few years? Why is this not a thing that takes decades? I think the crux comes down to the people who expect something much longer have a sense that… When I had Ege and Tamay on my podcast, they were like, \"Look, you could look at AlphaGo, and say, 'Oh, this is a model that can do exploration. AlphaZero can generalize to new video games. It has all these priors about how to engage with the world, and so forth.\"", "Sholto Douglas", "And the intellectual ceiling is really high.", "Dwarkesh Patel", "Yeah, exactly. In retrospect, obviously a bunch of the methods are still used today in deep learning, and you can see similar things in the models that we train today. But it was fundamentally not a baby AGI that we just had to add a little sprinkle of something else on top of in order to make it the LLMs of today. I just want to very directly address this crux of, why are LLMs in a much different position of respect to true AGI than AlphaZero? Why are they actually the base on which adding in a few extra drops of this kind of care, and attention gets us to human-level intelligence?", "Sholto Douglas", "I think one important point is that when you look at AlphaZero, it does have all of those ingredients. In particular I think the intellectual ceiling goes quite—contra what I was saying before, which is we've demonstrated this incredible complexity of math, and programming problems…", "I do think that the type of task and setting that AlphaZero worked in this two-player perfect information game basically is incredibly friendly to RL algorithms. The reason it took so long to get to a more proto-AGI style models is you do need to crack that general conceptual understanding of the world, and language, and this kind of stuff, and you need to get the initial reward signal on tasks that you care about in the real world, which are harder to specify than games.", "I think then that sort of gradient signal that comes from the real world, all of a sudden you get access to it, and you can start climbing it, whereas Alpha Zero didn't ever have the first rung to pull on.", "Trenton Bricken", "Yeah, yeah. This goes back to the monkeys on the typewriter and the pre-training model. Until you had something like GPT-3/GPT-4, it just couldn't generate coherent enough sentences to even begin to do RLHF, and tell it what you liked and didn't like.", "Dwarkesh Patel", "Yeah. If we don't have even reasonably robust, or weakly robust computer use agents by this time next year, are we living in the bust timeline as in “2030, or bust”?", "Sholto Douglas", "I would be extremely surprised if that was the case. I think that would be somewhat of an update towards, there's something strangely difficult about this computer use in particular. I don't know if it's the bust timeline, but it's definitely the, I would update on this being lengthening of timeline.", "Trenton Bricken", "I think more and more it's no longer a question of speculation. If people are skeptical, I'd encourage using Claude Code, or some agentic tool like it and just seeing what the current level of capabilities are.", "Dwarkesh Patel", "Tweeting is so much easier.", "Trenton Bricken", "But seriously, the models are getting really capable at tasks that we care about, and we can give them enough data for.", "The circuit's results from interpretability are also pointing in the direction that they're doing very reasonable generalizable things. This question matters a lot, but I'm surprised by how many deep learning critics just haven't really interacted with the models, or haven't in a while.", "Sholto Douglas", "And constantly move the goalposts.", "Trenton Bricken", "The Turing test used to be a thing. We don't even talk about it, and it'd be silly to think that it was a meaningful test.", "Sholto Douglas", "Now one caveat on that is if software engineering is just dramatically better than computer use and computer use still sucks, then I'd still be like, “oh, maybe everyone just kept focused on software engineering.” It was just by far the most valuable thing, every marginal person and dollar went towards software engineering. I don't think that's the case. I do think computer use is valuable enough that people will care about it. That's my one escape patch that I'm putting in place for next year.", "Dwarkesh Patel", "Yeah, it would be good from an alignment perspective, too. Because I think you kind of do need a wider range of skills before you can do something super scary.", "Sholto Douglas", "Like if the models didn't get any better?", "Dwarkesh Patel", "Yeah, if they're superhuman coders, but they're not Henry Kissinger level…", "Trenton Bricken", "I don't know. That seems okay. If we have AI oracles.", "Dwarkesh Patel", "Yeah, that's what I'm saying. That's good.", "Dwarkesh Patel", "If you look back at AI discourse going back a decade, there's a sense that there's dumb AI, then there's AGI , then there's ASI , that intelligence is the scalar value.", "The way you've been talking about these models has a sense of jaggedness. It's especially tuned to environments in which it's been trained a lot or has a lot of data. Is there a sense in which it still makes sense to talk about the general intelligence of these models? Is there enough meta learning and transfer learning that is distinguished between the sizes of models or the way models are trained? Or are we moving into a regime where it's not about intelligence, it's more so about domain?", "Sholto Douglas", "One intuition pump is that this conversation was had a lot when models were GPT-2 sized and fine-tuned for various things. People would find that the models were dramatically better at things that they were fine-tuned for.", "But by the time you get to GPT-4, when it's trained on a wide enough variety of things with the total compute, it generalized very well across all of the individual sub-tasks. And it actually generalized better than smaller fine-tuned models in a way that was extremely useful.", "I think right now what we're seeing with RL is pretty much the same story playing out. There's this jaggedness of things that they're particularly trained at. But as we expand the total amount of compute that we do RL with, you'll start to see the same transition from GPT-2 fine-tunes to GPT-3, GPT-4, unsupervised meta learning and generalization across things. I think we're already seeing early evidence of this in its ability to generalize reasoning to things. But I think this will be extremely obvious soon.", "Trenton Bricken", "One nice example of this is just the ability or notion to backtrack. You go down one solution path, \"Oh, wait, let me try another one.\" And this is something that you start to see emerge in the models through RL training on harder tasks. I think right now, it's not generalizing incredibly well.", "Sholto Douglas", "Well, I mean have we ever RL'd the model to be an interp agent? No.", "Trenton Bricken", "I mean, no. Yeah, exactly.", "Sholto Douglas", "So all this time we're talking about, “oh, it's only good at things it’s been RL’d for”. Well, it's pretty good at that because that is a mixture of science and understanding language and coding. There's this sort of mixture of domains here, all of which you need to understand. You need to be both a great software engineer and be able to think through language and state of mind and almost philosophize in some respects to be an interpret agent. And it is generalizing from the training to do that.", "01:45:38 – Mech interp", "Dwarkesh Patel", "What's the end game here? Claude 8 comes out and they give it to you and dot, dot, dot, you say, \"thumbs up.\" What's happened? What have you learned?", "Trenton Bricken", "Yeah. I mean, it really depends upon the timeline at which we get Claude 8 and the models hit like ASL-4 capabilities, right? Fundamentally, we're just going to use whatever tools we have at the time and see how well they work. Ideally, we have this enumerative safety case where we can almost verify or prove that the model will behave in particular ways. In the worst case, we use the current tools like when we won the auditing game of seeing what features are active when the assistant tag lights off.", "Dwarkesh Patel", "Can you back up? Can you explain, what is mechanistic interpretability ? What are features? What are circuits?", "Trenton Bricken", "Totally. Mechanistic interpretability—or the cool kids call it mech interp—is trying to reverse engineer neural networks and figure out what the core units of computation are. Lots of people think that because we made neural networks, because they're artificial intelligence, we have a perfect understanding of how they work. It couldn't be further from the truth.", "Neural networks, AI models that you use today, are grown, not built. So, we then need to do a lot of work after they're trained to figure out to the best of our abilities how they're actually going about their reasoning.", "And so, three and a half years ago, this kind of agenda of applying mechanistic interpretability to large language models started with Chris Olah leaving OpenAI, co-founding Anthropic. And every roughly six months since then, we've had a major breakthrough in our understanding of these models.", "And so first with toy models of superposition, we established that models are really trying to cram as much information as they possibly can into their weights. And this goes directly against people saying that neural networks are over-parameterized. In classic AI machine learning back in the day, you would use linear regression or something like it, and people had a meme of AI, or neural networks, deep learning, using way too many parameters. There's this funny meme that you should show of layers on the X axis and layers on the Y axis and this jiggly line that just goes up and it's like, \"Oh, just throw more layers at it.\"", "But it actually turns out that, at least for really hard tasks like being able to accurately predict the next token for the entire internet, these models just don't have enough capacity. And so they need to cram in as much as they can. And the way they learn to do that is to use each of their neurons, or units of computation in the model, for lots of different things.", "And so if you try to make sense of the model and be like, \"Oh, if I remove this one neuron,\" what is it doing in the model? It's impossible to make sense of it. It'll fire for like Chinese and fishing and horses and, I don't know, just like a hundred different things. And it's because it's trying to juggle all these tasks and use the same neuron to do it. So that's superposition.", "Nine months later, we write Towards Monosemanticity, which introduces what are called sparse autoencoders. And so going off what I just said of the model trying to cram too much into too little space, we give it more space, this higher dimensional representation, where it can then more cleanly represent all of the concepts that it's understanding.", "And, and this was a very toy paper in so much as it was a two layer, really small, really dumb transformer. And we fit up to 16,000 features, which we thought was a ton at the time. Fast-forward nine months, we go from a two layer transformer to our Claude 3 Sonnet, frontier model at the time, and fit up to 30 million features. And this is where we start to find really interesting abstract concepts, like a feature that would fire for code vulnerabilities. And it wouldn't just fire for code vulnerabilities. It would even fire for like, you know that Chrome page you get if it's not an HTTPS URL, like \"Warning, this site might be dangerous. Click to continue.\" And also fire for that, for example. And so it's like these much more abstract coding variables or sentiment features, amongst the 30 million.", "Fast-forward nine months from that, and now we have circuits. And I threw in the analogy earlier of the Ocean 11 heist team, where now you're identifying individual features across the layers of the model that are all working together to perform some complicated task. And you can get a much better idea of how it's actually doing the reasoning and coming to decisions, like with the medical diagnostics.", "One example I didn't talk about before with how the model retrieves facts: So you say, \"What sport did Michael Jordan play?\" And not only can you see it hop from like Michael Jordan to basketball and answer basketball. But the model also has an awareness of when it doesn't know the answer to a fact. And so, by default, it will actually say, \"I don't know the answer to this question.\" But if it sees something that it does know the answer to, it will inhibit the \"I don't know\" circuit and then reply with the circuit that it actually has the answer to. So, for example, if you ask it, \"Who is Michael Batkin?\" —which is just a made-up fictional person— it will by default just say, \"I don't know.\" It's only with Michael Jordan or someone else that it will then inhibit the \"I don't know\" circuit.", "But what's really interesting here and where you can start making downstream predictions or reasoning about the model, is that the \"I don't know\" circuit is only on the name of the person. And so, in the paper we also ask it, \"What paper did Andrej Karpathy write?\" And so it recognizes the name Andrej Karpathy, because he's sufficiently famous, so that turns off the \"I don't know\" reply. But then when it comes time for the model to say what paper it worked on, it doesn't actually know any of his papers, and so then it needs to make something up. And so you can see different components and different circuits all interacting at the same time to lead to this final answer.", "Dwarkesh Patel", "Why think it's a tractable problem to understand every single thing that's happening in a model? Or like that's the best way to understand why it's being deceptive. If you wanted to explain why England won World War II using particle physics, you would just be on the wrong track. You just want to look at the high-level explanations of, who had more weapons? What did they want?", "That seems analogous to just training linear probes for like, are you honest? Are you being deceptive? Do we catch you doing bad things when we're red teaming you? Can we monitor you?", "Why is this not analogous where we're asking a particle physicist to just backtrack and explain why England won World War II?", "Trenton Bricken", "I feel like you just want to go in with your eyes wide open, not making any assumptions for what that deception is going to look like, or what the trigger might be. The wider you can cast that net, the better. Depending on how quickly AI accelerates and where the state of our tools are, we might not be in the place where we can prove from the ground up that everything is safe. But I feel like that's a very good North Star. It's a very powerful reassuring North Star for us to aim for, especially when we consider we are part of the broader AI safety portfolio.", "I mean, do you really trust—you're about to deploy this system and you really hope it's aligned with humanity—that you've successfully iterated through all the possible ways that it's going to scheme or sandbag or…", "Dwarkesh Patel", "But that's also probably going to be true with whatever you find. You're still going to have variants that you haven't explained. Or you found a feature, but you don't know if it actually explains deception or something else instead.", "Trenton Bricken", "First of all, I'm not saying you shouldn't try the probing approach. We want to pursue the entire portfolio. We've got the therapist interrogating the patient by asking, \"Do you have any troubling thoughts?\" We've got the linear probe, which I'd analogize to a polygraph test where we're taking very high level summary statistics of the person's well-being. Then we've got the neurosurgeons going in and seeing if you can find any brain components that are activating and troubling or off-distribution ways. I think we should do all of it.", "Dwarkesh Patel", "What percent of the alignment portfolio should mech interp be?", "Trenton Bricken", "I think as much of a chunk as is necessary. It’s hard to define. At Anthropic, I feel like all of the different portfolios are being very well-supported and growing.", "Sholto Douglas", "Coming back to the World War II question, you can think of it as a hierarchy of abstractions of trust here, where let's say you want to go and talk to Churchill. It helps a lot if you can verify that in that conversation, in that 10 minutes, he's being honest. This enables you to construct better meta narratives of what's going on. So maybe particle physics wouldn't help you there, but certainly the neuroscience of Churchill's brain would help you verify that he was being trustworthy in that conversation and that the soldiers on the front lines were being honest in their depiction of their description of what happened, this kind of stuff. So long as you can verify parts of the tree up, then that massively helps you build confidence.", "Trenton Bricken", "I think language models are also just really weird. With the emergent misalignment work. I don't know if they took predictions, they should have of like, \"Hey, I'm going to fine tune ChatGPT on code vulnerabilities. Is it going to become a Nazi?\" I think most people would've said no. That's what happened.", "Dwarkesh Patel", "How did they discover that it became a Nazi?", "Trenton Bricken", "They started asking it a ton of different questions and it will do all sorts of vile and harmful things. The whole persona just totally changes. We are dealing with alien brains here who don't have the social norms of humans. We don’t even have a clear notion of what they have and haven't learned. I think you really want to go into this with eyes wide open.", "01:56:15 – How countries should prepare for AGI", "Dwarkesh Patel", "Backing up from mech interp, if we live in a world where AI progress accelerates… By the way, you were mentioning a little while ago that there's many wild worlds we could be living in, but we're living in at least one of them. Another one that we've gestured at but it's worth making more explicit, is this. Even if the AI models are not helping write the next training algorithm for their successor, just the fact that if they had human level learning efficiency—whatever copy of the model is learning on the job—the whole model is learning. So in effect, it's getting–", "Sholto Douglas", "Or even if they're like a thousand times less efficient than humans are at learning and you deployed them. Even still.", "Dwarkesh Patel", "Exactly. Anyways, there's a whole bunch of other things that you can think of about it. But even there, you kind of have a broadly deployed intelligence explosion.", "Sholto Douglas", "I do think it's worth pressing on that future. There is this whole spectrum of crazy futures. But the one that I feel we're almost guaranteed to get—this is a strong statement to make—is one where at the very least, you get a drop-in white collar worker at some point in the next five years. I think it's very likely in two, but it seems almost overdetermined in five. On the grand scheme of things, those are kind of irrelevant timeframes. It's the same either way.", "That completely changes the world over the next decade. If we don't have the right policies in place for that, then you end up actually with in some respects, almost a fundamentally worse world.", "Because the thing that these models get good at by default is software engineering and computer using agents and this kind of stuff. Then we will need to put in extra effort to put them in the loops where they help us with scientific research. Or we have the right robotics, such that we actually experience an increase in material quality of life.", "That's worth thinking about. If you're in the perspective of like, “I'm a country, what should I be doing or thinking about?” Plan for the case where white collar work is automateable. And then consider, what does that mean for your economy? What you should be doing to prepare policy?", "Dwarkesh Patel", "What should you be doing to prepare? Because honestly, this is such a tough question if you're India or Nigeria or Australia. If you're a country unlike America or China where they do have frontier models, what is it that you should be doing right now? Especially on such a short timescale.", "Sholto Douglas", "I think one very important point is that let's say this scenario turns out true. Then compute becomes the most valuable resource in the world. The GDP of your economy is dramatically affected by how much compute you can deploy towards the organizations within your country. So having some guaranteed amount of compute I think will actually be quite important. Getting ahead of investments, and data centers, and this kind of stuff on the condition that it's companies in your country have to be allowed to use that compute, not necessarily for training but just even just for inference.", "I think the economic value here comes from inference. I think it also makes sense to invest broadly in AI. These countries have the opportunity to do so and that's a portfolio of foundation model companies but also robotics, supply chain, and this kind of stuff. I think that you should invest very proactively in policies that try to prevent capital lock-in.", "We're in for a much worse world if it just so happens that the people who had money in the stock exchange, or in land before AGI are dramatically more wealthy than the people who don't. It's a gross misallocation of resources.", "One of my favorite episodes actually on your podcast was the Georgism one where you're trying to appropriately value, or allocate land. This strikes particularly close to home coming from Australia where I think our policies with respect to land are grossly wrong. But I think this is broadly true.", "Being very forward on regulation of integration of these models into your country is important, and proactively making sure that people have choice. Let's say you should be quite proactive about making sure that the phones, or devices, or the glasses that people have, people have free choice on what things they run.", "So we just get the white collar worker, and you're trying to do the best to prepare your country for that. Then what can you do to make all possible versions of the future go well? That's covering some amount of economic downside. The other things that I think are really important is figure out how you can basically ensure dramatic upside, or cover terrible downside.", "Getting a dramatic upside is making sure that there is investment in biology research and this kind of stuff in an automated way such that these models are actually able to produce novel medicines that massively improve our quality of life. Covering the downside is AI alignment research, and this kind of stuff, and automated testing, and really thinking hard about that, AI safety institutes and this kind of stuff.", "Dwarkesh Patel", "But these seem like things that a random rich person could also do. It seems like there's not a thing that a nation state is uniquely equipped to do in this scenario.", "Sholto Douglas", "That's a good point.", "Sholto Douglas", "I mean dramatic allocation of resource towards compute I think is sensible. I would be doing that if I was in charge of a nation state. I think it just increases your optionality in most of the future worlds.", "Trenton Bricken", "Dylan Patel has some scary forecasts on US energy.", "Sholto Douglas", "Versus China. Yes.", "Trenton Bricken", "Yeah, we're like 34 gigawatts off.", "Sholto Douglas", "Yeah, the US's line is flat , basically, and China's line is like this. And I mean the US very clearly...", "Trenton Bricken", "We just need so many more power plants.", "Sholto Douglas", "Yes. If intelligence becomes this incredibly valuable input, intelligence becomes almost a raw input into the economies and quality of life of the future, the thing directly underneath that is energy. Making sure that you have incredible amounts of solar, like tile the desert with solar panels, some parts of the desert. That would be helpful towards making sure that you have more access to intelligence on tap.", "Trenton Bricken", "Yeah. Just to make it explicit, we've been touching on it here. Even if AI progress totally stalls, you think that the models are really spiky, and they don't have general intelligence. It's so economically valuable, and sufficiently easy to collect data on all of these different jobs, these white collar job tasks, such that to Sholto's point we should expect to see them automated within the next five years.", "Sholto Douglas", "Yeah.", "Trenton Bricken", "Even if you need to hand spoon every single task to the model.", "Sholto Douglas", "It's economically worthwhile to do so. Even if algorithmic progress stalls out, and we just never figure out how to keep progress going—which I don't think is the case, that hasn't stalled out yet, it seems to be going great—the current suite of algorithms are sufficient to automate white collar work provided you have enough of the right kinds of data. Compared to the TAM of salaries for all of those kinds of work, it is so trivially worthwhile.", "Trenton Bricken", "Yeah, exactly. I do just want to flag as well that there's a really dystopian future if you take Moravec’s paradox to its extreme. It’s this paradox where we think that the most valuable things that humans can do are the smartest things like adding large numbers in our heads, or doing any sort of white collar work. We totally take for granted our fine motor skill, and coordination. But from an evolutionary perspective it's the opposite.", "Evolution has optimized fine motor coordination so well. Even if you look at robot hands, the ability to open a door is still just really hard for robots. Meanwhile, we're seeing this total automation of coding, and everything else that we've seen as clever. The really scary future is one in which AIs can do everything except for the physical robotic tasks, in which case you'll have humans with AirPods, and...", "Sholto Douglas", "Glasses?", "Trenton Bricken", "Glasses, and there'll be some robot overlord controlling the human through cameras by just telling it what to do, and having a bounding box around the thing you're supposed to pick up. So you have human meat robots.", "Sholto Douglas", "Not necessarily saying that that's what the AIs would want to do, or anything like that. But if you were to be like, \"What are the relative economic value of things?\" The AIs are out there doing computer programming, and the most valuable thing that humans can do is be amazing robots.", "Now that being said, I think Moravec’s paradox is a little bit fake. I think the main reason that robots are worse at being a robot than they are at software engineering is the internet exists for software engineering. GitHub exists, and there is no equivalent thing if you had all mocap of everyone's actions as they were going about their daily lives for some reasonable fraction of the human population, robotics is also close to solved, on track to be solved at the same rate that software engineering is on track to be solved.", "So, this vision is only a sort of decade-long section, but it's still a pretty terrible decade. Imagine the world where people have lost their jobs, you haven't yet got novel biological research. That means people's quality of life isn’t dramatically better. You don't yet have material abundance because you haven't actually been able to action the physical world in the necessary way. You can't build dramatically more, because building dramatically more takes robots basically, and people's main comparative advantage is as fantastic robots. That’s a shocking, shocking world.", "Dwarkesh Patel", "Yeah. From the perspective of an average human, I think it actually might be better. Your wages will be higher because you're the complement to something that is enormously valuable which is AI labor.", "Sholto Douglas", "And a decade, or two after, the world is fantastic. Robotics is solved, and you start to get radical abundance basically provided that you have all the policies set up necessary to permit building. You end up with that same change like the before vs. after photos of Shanghai where 20 years on, it's this dramatically transformed city.", "A lot of places in the world probably end up like that over that two-decade period. But we need to do our best to estimate if this is actually what is on track to happen. Build SWE-bench , but for all the other forms of white collar work, and measure, and track. That's a great thing that governments should be doing by the way, trying to break down the functions of their economy into measurable tasks, and figuring out what does the curve actually look like for that?", "They might be a bit shocked by the progress there. There's no SWE-bench for a tax eval. I don't have all the answers here, but then we need to figure out a way to share the proceeds of this economy broadly across people, or invest heavily in robotics, and collect the data so that we get robotics faster, and we get material abundance faster. Invest in biological research that we get, but all that faster. Basically try and pull forward the radical upside, because otherwise you have a pretty dark section.", "Dwarkesh Patel", "I think one thing that's not appreciated enough is how much of our leverage on the future—given the fact that our labor isn't going to be worth that much—comes from our economic, and political systems surviving. For your million X'd S&P equity to mean something, for your contracts to mean anything, for the government to be able to tax the AI labor, and give you a UBI off of that, that requires our legal institutions, our economic institutions, our financial rails surviving into the future.", "Sholto Douglas", "Yes.", "Dwarkesh Patel", "The way in which that likely happens is if it's also in the AIs best interests that they follow those rails. By AI I don't mean some monolithic single AI, I just mean firms which are employing AI, and becoming more productive as a result.", "You don't want to be in a position where it's so onerous to operate in our system that you're basically selecting for firms who either emigrate, or who are doing black market stuff, et cetera. You want to make it super, super easy to deploy AI, have the equivalent of special economic zones, et cetera. Otherwise you are just surrendering the future outside of any control that you might have on it.", "One of the reasons that I worry about turning AGI into a national security issue, or having it have extremely close ties with the government, the Manhattan Project thing, is that it disproportionately redirects the use of AI towards military tech, mosquito drones and whatever. It also naturally puts other countries in the same frame of mind. If we're developing the mosquito drones, why would China not develop the mosquito drones?", "That just seems like a zero-sum race, and not to mention a potentially catastrophic one. Whereas compute will be limited, we'll need to disproportionately accelerate some things. To the extent it just remains totally like a consumer free market landscape, it just seems more likely that we'll get the glorious transhumanist future where they're developing the things that make human life better.", "Sholto Douglas", "Yes, I mean I agree. The case where you end up with two national projects facing off against each other is dramatically worse. We don't want to live in that world. It's much better if this stays a free market, so to speak.", "02:10:26 – Automating white collar work", "Dwarkesh Patel", "Okay. I want to take issue with your claim that even with the algorithms of today, if we just collect enough data that we could automate white collar work.", "First, let me get an understanding of what you mean by that. Do you mean that we would do the analogous thing of pre-raining with all the trajectories of everything people would do on their jobs? Could you make either manually, or through some other process, some RL procedure based on the screen recordings, every white collar worker. What kind of thing are you imagining?", "Sholto Douglas", "I mean a continuous distribution of this stuff. One important mental model to think about RL… There is some respect with which longer horizon, if you can do them, if you can get that reward ever, are easier to judge. Again, it's come back to that can you make money on the internet? That's an incredibly easy reward signal to judge. But to do that there's a whole hierarchy of complex behavior. So, if you could pre-train up to the easy to judge reward signals, does your website work? Does it go down, do people like it?", "There's all these reward signals that we can respond to because we have a long, we can progress through these long enough trajectories to actually get to interesting things. If you're stuck in this regime where you need a reward signal every five tokens, it's a way more painful, and long process. But if you could pre-train on every screen in America, then probably the RL tasks that you can design are very different from if you could only take the existing internet as it is today. How much of that you get access to changes the mix.", "Dwarkesh Patel", "As we're training them on longer, and longer horizon tasks, and it takes longer for them to get any signal on whether they successfully complete the task, will that slow down progress because it takes more compute per task?", "Trenton Bricken", "I do think there's this notion that the longer, the harder tasks, the more training is required. I'm sympathetic to that naively, but we as humans are very good at practicing the hard parts of tasks, and decomposing them. I think once models get good enough at the basic stuff, they can just rehearse, or fast-forward to the more difficult parts.", "Sholto Douglas", "I mean that's definitely one of the big complexities. As you use more compute, and as you train on more, and more difficult tasks, your rate of improvement of biology for example is going to be somewhat bound by the time it takes a cell to grow in a way that your rate of improvement on math isn't, for example.", "So, yes, but I think for many things we'll be able to parallelize widely enough, and get enough iteration loops.", "Dwarkesh Patel", "Will the regime of training new models go away? Will we eventually get to the point where you've got the model, and then you just keep adding more skills to it, with RL training?", "Sholto Douglas", "That depends on whether, or not you think there's a virtue in pre-training a new architecture. Basically you make some architectural change, then you probably need to do some form of at least pretraining a new model.", "Dwarkesh Patel", "If RL requires a bunch of inference to do the training in the first place, does that push against the thing you were talking about where we actually need a bigger model in order to have brain-like energy? But then also it's more expensive to train it in RL. So, where does that balance out?", "Trenton Bricken", "I think we got to drink the bitter lesson here. Yeah, there aren't infinite shortcuts. You do just have to scale and have a bigger model, and pay more inference for it. If you want AGI, then that's what you got to pay the price of.", "Sholto Douglas", "But there's a tradeoff equation here. There is science to do which everyone is doing. What is the optimal point at which to do RL? Because you need something which can both learn, and discover the sparse reward itself.", "So you don't want a one parameter model. Useless, even though you can run it really fast. You also don't want a 100T model. It's super slow. The marginal benefit of its learning efficiency is not worth it. So, there's a Pareto frontier here. What's the optimal model size of your current class of capabilities, and your current set of RL environments, and this kind of stuff.", "Trenton Bricken", "And even in the last year there's been much more of a factor of the inference cost. So, just explicitly the bigger the model, the more expensive it is to do a forward pass and generate tokens. The calculus used to just be, “Should I allocate my flops to more training data, or a bigger model?” And now another huge factor is how much am I actually going to do forward passes on this model once it's trained?", "Sholto Douglas", "My total pool of compute, how do I allocate that across training data compute, and inference compute for the RL training.", "Trenton Bricken", "And then even within inference, there's all this research on, well, what strategy should I use? Should I sample 10, and take the best? Do I do this sort of branching search, et cetera, et cetera. And so with RL where you're sampling a whole lot of tokens, you also need to factor in the ability for the model to actually generate those tokens, and then learn, and get feedback.", "02:15:35 – Advice for students", "Dwarkesh Patel", "If we're living in this world, what is your advice to somebody early in their career, or a student in college? What should they be planning on doing?", "Sholto Douglas", "Once again, it's worth considering the spectrum of possible worlds and preparing yourself for that. The action that I think is the highest EV in that case is that at a minimum you're about to get dramatically more leverage. You already have. Already the startups in YC are writing huge amounts of their code with Claude.", "What challenges, what causes do you want to change in the world with that added leverage? If you had 10 engineers at your beck, and call, what would you do? If you had a company at your beck and call, what would that enable you to do? What problems, and domains suddenly become tractable? That's the world you want to prepare for.", "Now, that still requires a lot of technical depth. Obviously there is the case where AI just becomes dramatically better than everyone at everything, but for at least a while there is… I think Jensen actually talked about this in an interview in an interesting way. He's like, \"I have a hundred thousand general intelligences around me, and I'm still somewhat useful, because I’m there directing the values, and asking them to do things. I still have value even though I have a hundred thousand general intelligences.\"", "For many people, I think that will still be true for a fair while. Then as the AIs get better, and better, and better, and so on, eventually, no. But again, prepare for the spectrum of possible worlds because in the event where we're just totally outcompeted, it doesn't matter what you do. In all the other worlds, it matters a lot. Get the technical depth, study biology, study CS, study physics. Think hard about what challenges you want to solve in the world.", "Dwarkesh Patel", "Yeah, that's a lot of topics. That's a lot of shit.", "Sholto Douglas", "You can now. You can. It's so much easier to learn. Everyone now has the infinite perfect tutor.", "Dwarkesh Patel", "It's definitely been helpful to me.", "Trenton Bricken", "I would say some combination of: get rid of the sunk cost of your previous workflows, or expertise in order to evaluate what AI can do for you. Another way to put this, which is fun, is just be lazier in so much as you figure out the way that the agent can do the things that are toilsome.", "Ultimately, you get to be lazier, but in the short run, you need to critically think about the things you're currently doing, and what an AI could actually be better at doing, and then go, and try it, or explore it. Because I think there's still just a lot of low-hanging fruit of people assuming, and not writing the full prompt, giving a few examples, connecting the right tools for your work to be accelerated and automated.", "Dwarkesh Patel", "Yep, yep. There's also the sunk cost of feeling like since you're not \"early to AI\", that you've sort of missed the boat. I remember when GPT-3 came out…", "So backstory on the podcast, when I graduated college I was planning on doing some sort of AI wrapper startup, and the podcast was just a gateway into doing that. I was trying out different things and at the time I remember thinking, “oh, 3.5 is out.” People were like, \"I'm so behind on the startup scene here” or whatever. If I wanted to make my own wrapper… maybe the idea of the wrapper was inadvisable in the first place.", "But every time feels early because if it's an exponentially growing process, and there are many things, many ideas which are only becoming possible now, right?", "Sholto Douglas", "Exactly. It's that product exponential I talked about.", "Dwarkesh Patel", "That's right.", "Sholto Douglas", "Products literally obsolete it. You need to constantly reinvent yourself to stay at the frontier of capabilities.", "Dwarkesh Patel", "Do you remember? I had a really shitty idea, and I gave you a call.", "Sholto Douglas", "I don’t remember what it was.", "Dwarkesh Patel", "I think it was like RAG for lawyers, or something. Anyways, I think one of our first interactions was like, \"Hey, what do you think of this idea?\" And you were like, “I think the podcast sounds promising.”", "Sholto Douglas", "I was right.", "Dwarkesh Patel", "Which I appreciate.", "Trenton Bricken", "Yeah. I got slightly annoyed at a friend recently who I think is really talented and clever and interested in AI but has pursued a biology route. I just kind of tried to shake them like, \"You can work on AI if you want to.\"", "Humans are biological general intelligences where a lot of the things of value are just very general. Whatever kind of specialization that you've done maybe just doesn't matter that much. Again, it gets back to the sunk cost, but so many of the people, even my colleagues at Anthropic are excited about AI. They just don't let their previous career be a blocker. Because they're just innately smart, talented, driven, whatever else, they end up being very successful and finding roles. It's not as if they were in AI forever. I mean, people have come from totally different fields. Don't think that you need permission from some abstract entity to get involved, and apply, and be able to contribute.", "Dwarkesh Patel", "If somebody wanted to be an AI researcher right now, if you could give them an open problem, or the kind of open problem that is very likely to be quite impressive, what would it be?", "Sholto Douglas", "I think that now that RL's come back, papers building on Andy Jones's “Scaling scaling laws for board games” are interesting. Investigating these questions like the ones you asked before. Is the model actually learning to do more than its previous pass at K? Or is it just discovering that… Exploring questions like that deeply are interesting, scaling laws for RL, basically.", "Dwarkesh Patel", "I'd be very curious to see how much the marginal increase is in meta learning from a new task, or something.", "Trenton Bricken", "On that note, I think model diffing has a bunch of opportunities. People say, \"Oh, we're not capturing all the features. There's all this stuff left on the table.\" What is that stuff that's left on the table? If the model's jailbroken, is it using existing features that you've identified? Is it only using the error terms that you haven't captured? I don't know. There's a lot here.", "I think MATS is great. The Anthropic fellowship has been going really well. Goodfire , Anthropic invested in recently, they're doing a lot of interpretability work, or just apply directly to us.", "Dwarkesh Patel", "Anything to get your equity up, huh?", "Trenton Bricken", "There's just so many interpretability projects. There's so much low-hanging fruit, and we need more people, and I don't think we have much time.", "Sholto Douglas", "I also want to make a plug for performance engineering. This is one of the best ways to demonstrate that you have the raw ability to do it. If you made an extremely efficient transform implementation on TPU , or Trainium , or Incuda , then I think there's a pretty high likelihood that you'll get a job offer. But there's a relatively small pool of people that you can trust to completely own end-to-end the performance of a model.", "Trenton Bricken", "And if you have broad, deep electrical engineering skills, I think you can probably come up to speed pretty fast on accelerator stuff.", "Sholto Douglas", "You can come up to speed reasonably fast and it teaches you a lot of good intuitions of the actual intricacies of what's going on in the models, which means that you're then very well-placed to think about architecture and this kind of stuff. One of my favorite people in thinking about architecture at Anthropic at the moment actually came from a heavy GPU kernel programming background and just knows the ins, and outs really deeply. He can think about the trade-offs really well.", "Dwarkesh Patel", "This was fun guys. Thanks for doing it again.", "Trenton Bricken", "Great to be back." ]
[ "https://www.dwarkesh.com/p/sholto-douglas-trenton-bricken", "https://www.dwarkesh.com/p/sholto-douglas-trenton-bricken", "https://www.anthropic.com/", "https://en.wikipedia.org/wiki/Reinforcement_learning", "https://www.transformer-circuits.pub/2022/mech-interp-essay", "https://www.dwarkesh.com/p/sholto-douglas-trenton-bricken", "https://en.wikipedia.org/wiki/Large_language_model", "https://blogs.nvidia.com/blog/what-is-agentic-ai/", "https://www.twitch.tv/claudeplayspokemon", "https://en.wikipedia.org/wiki/High_availability#%22Nines%22", "https://arxiv.org/abs/2503.23829#:~:text=Reinforcement%20learning%20with%20verifiable%20rewards,answers%20are%20accessible%20for%20verification.", "https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback", "https://www.functionize.com/automated-testing/ai-unit-testing", "https://www.lesswrong.com/posts/rKC4xJFkxm6cNq4i9/reward-hacking-is-becoming-more-sophisticated-and-deliberate", "https://leetcode.com/", "https://docs.anthropic.com/en/docs/claude-code/overview", "https://openai.com/index/introducing-deep-research/", "https://x.com/SGRodriques", "https://www.futurehouse.org/", "https://www.futurehouse.org/research-announcements/demonstrating-end-to-end-scientific-discovery-with-robin-a-multi-agent-system", "https://www.futurehouse.org/research-announcements/demonstrating-end-to-end-scientific-discovery-with-robin-a-multi-agent-system", "https://www.astralcodexten.com/p/testing-ais-geoguessr-genius", "https://x.com/KelseyTuoc/status/1917340813715202540", "https://openai.com/index/introducing-o3-and-o4-mini/", "https://en.wikipedia.org/wiki/Generative_pre-trained_transformer", "https://www.marktechpost.com/2025/02/01/this-ai-paper-from-the-tsinghua-university-propose-t1-to-scale-reinforcement-learning-by-encouraging-exploration-and-understand-inference-scaling/", "https://www.llama.com/", "https://en.wikipedia.org/wiki/Qwen", "https://en.wikipedia.org/wiki/AlphaZero", "https://en.wikipedia.org/wiki/Neural_network_(machine_learning)", "https://www.dwarkesh.com/p/dario-amodei", "https://www.darioamodei.com/post/on-deepseek-and-export-controls", "https://en.wikipedia.org/wiki/Gradient_descent", "https://medium.com/@akash.kesrwani99/understanding-next-token-prediction-concept-to-code-1st-part-7054dabda347", "https://en.wikipedia.org/wiki/AlphaZero", "https://en.wikipedia.org/wiki/AlphaGo", "https://scale.com/", "http://www.incompleteideas.net/IncIdeas/BitterLesson.html", "https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it", "https://www.dwarkesh.com/p/mark-zuckerberg-2", "https://openai.com/index/introducing-gpt-4-5/", "https://en.wikipedia.org/wiki/Explainable_artificial_intelligence#Interpretability", "https://transformer-circuits.pub/2022/toy_model/index.html", "https://arxiv.org/html/2505.12822v1", "https://www.anthropic.com/news/golden-gate-claude", "https://transformer-circuits.pub/2024/scaling-monosemanticity/", "https://www.anthropic.com/news/claude-3-family", "https://transformer-circuits.pub/2023/monosemantic-features", "https://arxiv.org/abs/2402.16367", "https://www.lesswrong.com/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1", "https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)", "https://en.wikipedia.org/wiki/AI_alignment", "https://www.theatlantic.com/technology/archive/2025/05/elon-musk-grok-white-genocide/682817/", "https://grok.com/", "https://x.com/Mikedotcoza/status/1922938690298400808", "https://www.anthropic.com/research/towards-understanding-sycophancy-in-language-models", "https://arxiv.org/abs/2406.07358", "https://arxiv.org/abs/2112.00114", "https://www.apolloresearch.ai/", "https://x.com/alexalbert__/status/1764722513014329620?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1764722513014329620%7Ctwgr%5E5543ef4e7e87abc5347853f784aaf4bc58b56c45%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fventurebeat.com%2Fai%2Fanthropics-claude-3-knew-when-researchers-were-testing-it%2F", "https://en.wikipedia.org/wiki/Reward_hacking", "https://arxiv.org/abs/2502.17424", "https://www.anthropic.com/research/alignment-faking", "https://amzn.to/43IcbII", "https://en.wikipedia.org/wiki/Superintelligence", "https://www.dwarkesh.com/p/eliezer-yudkowsky", "https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback", "https://en.wikipedia.org/wiki/Hill_climbing", "https://arxiv.org/abs/2103.03874", "https://arxiv.org/abs/2411.04872", "https://podcasts.apple.com/lc/podcast/openais-noam-brown-ilge-akkaya-and-hunter-lightman-on/id1750736528?i=1000671532058", "https://arxiv.org/abs/2504.20571", "https://openai.com/index/healthbench/", "https://transformer-circuits.pub/", "https://transformer-circuits.pub/2023/monosemantic-features", "https://transformer-circuits.pub/2025/attribution-graphs/biology.html#dives-addition", "https://arxiv.org/abs/2504.21304", "https://www.dwarkesh.com/p/jeff-dean-and-noam-shazeer", "https://aclanthology.org/D07-1090.pdf", "https://en.wikipedia.org/wiki/Word_n-gram_language_model", "https://en.wikipedia.org/wiki/Word_n-gram_language_model", "https://arxiv.org/abs/1706.03762", "https://en.wikipedia.org/wiki/American_Invitational_Mathematics_Examination", "https://www.darioamodei.com/essay/machines-of-loving-grace", "https://alignment.anthropic.com/2024/anthropic-fellows-program/", "https://www.nvidia.com/en-us/glossary/vision-language-models/", "https://go281.user.srcf.net/blog/research/residual-streams/", "https://ai-2027.com/", "https://www.greaterwrong.com/posts/qehggwKRMEyWqvjZG/reflections-on-neuralese?hide-nav-bars=true", "https://en.wikipedia.org/wiki/Latent_space", "https://towardsdatascience.com/neural-networks-forward-pass-and-backpropagation-be3b75a1cfcc", "https://transluce.org/", "https://transluce.org/observability-interface", "https://nicholas.carlini.com/", "https://www.cloudflare.com/learning/ai/inference-vs-training/", "https://www.nvidia.com/en-us/data-center/h100/", "https://semianalysis.com/author/dylanpatel/", "https://www.caltech.edu/about/news/thinking-slowly-the-paradoxical-slowness-of-human-behavior#:~:text=Caltech%20researchers%20have%20quantified%20the,faster%20than%20our%20thought%20processes.", "https://en.wikipedia.org/wiki/Cross-strait_relations#Deteriorating_relations_(2016%E2%80%93present)", "https://www.dwarkesh.com/p/ege-tamay", "https://blog.google/technology/ai/long-context-window-ai-models/", "https://www.dwarkesh.com/p/leopold-aschenbrenner", "https://situational-awareness.ai/", "https://situational-awareness.ai/from-gpt-4-to-agi/#Addendum_Racing_through_the_OOMs_Its_this_decade_or_bust", "https://www.deepseek.com/", "https://www.darioamodei.com/post/on-deepseek-and-export-controls", "https://x.com/noamshazeer?lang=en", "https://noambrown.github.io/", "https://arxiv.org/abs/2502.07864", "https://arxiv.org/abs/2502.11089", "https://en.wikipedia.org/wiki/Mixture_of_experts", "https://arxiv.org/pdf/2404.19737", "https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion", "https://www.dwarkesh.com/p/scott-daniel", "https://en.wikipedia.org/wiki/Deep_learning", "https://openai.com/index/introducing-codex/", "https://www.cursor.com/en", "https://en.wikipedia.org/wiki/Product-market_fit", "https://www.anthropic.com/news/claude-3-5-sonnet", "https://windsurf.com/editor", "https://en.wikipedia.org/wiki/Integrated_development_environment", "https://metr.org/", "https://en.wikipedia.org/wiki/Turing_test", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://en.wikipedia.org/wiki/Superintelligence", "https://www.anthropic.com/news/anthropics-responsible-scaling-policy", "https://www.transformer-circuits.pub/2022/mech-interp-essay", "https://www.dwarkesh.com/p/lars-doucet", "https://en.wikipedia.org/wiki/Georgism", "https://www.statista.com/statistics/188521/total-us-electricity-net-generation/", "https://en.wikipedia.org/wiki/Moravec%27s_paradox", "https://www.theatlantic.com/photo/2013/08/26-years-of-growth-shanghai-then-and-now/100569/", "https://www.swebench.com/", "https://en.wikipedia.org/wiki/Manhattan_Project", "https://en.wikipedia.org/wiki/Retrieval-augmented_generation", "https://arxiv.org/abs/2104.03113", "https://www.matsprogram.org/", "https://www.goodfire.ai/", "https://en.wikipedia.org/wiki/Tensor_Processing_Unit", "https://aws.amazon.com/ai/machine-learning/trainium/", "https://www.incuda.net/" ]
https://www.dwarkesh.com/p/steve-hsu
Steve Hsu - Intelligence, Embryo Selection, & The Future of Humanity
[ "Dwarkesh Patel 0:00", "Today I have the pleasure of speaking with Steve Hsu. Steve, thanks for coming on the podcast. I'm excited about this.", "Steve Hsu 0:04", "Hey, it's my pleasure! I'm excited too and I just want to say I've listened to some of your earlier interviews and thought you were very insightful, which is why I was excited to have a conversation with you.", "Dwarkesh Patel 0:14", "That means a lot for me to hear you say because I'm a big fan of your podcast.", "Feynman’s advice on picking up women", "Dwarkesh Patel 0:17", "So my first question is: “What advice did Richard Feynman give you about picking up girls?”", "Steve Hsu 0:24", "Haha, wow! So one day in the spring of my senior year, I was walking across campus and saw Feynman coming toward me. We knew each other from various things—it's a small campus, I was a physics major, and he was my hero–– so I'd known him since my first year. He sees me, and he's got this Long Island or New York borough accent and says, \"Hey, Hsu!\"", "I'm like, \"Hi, Professor Feynman.\" We start talking. And he says to me, \"Wow, you're a big guy.\" Of course, I was much bigger back then because I was a linebacker on the Caltech football team. So I was about 200 pounds and slightly over 6 feet tall. I was a gym rat at the time, and I was much bigger than him. He said, \"Steve, I got to ask you something.\" Feynman was born in 1918, so he's not from the modern era. He was going through graduate school when the Second World War started. So, he couldn't understand the concept of a health club or a gym. This was the 80s and was when Gold's Gym was becoming a world national franchise. There were gyms all over the place like 24-Hour Fitness. But, Feynman didn't know what it was.", "He's a fascinating guy. He says to me, \"What do you guys do there? Is it just a thing to meet girls? Or is it really for training? Do you guys go there to get buff?\" So, I started explaining to him that people are there to get big, but people are also checking out the girls. A lot of stuff is happening at the health club or the weight room. Feynman grills me on this for a long time. And one of the famous things about Feynman is that he has a laser focus. So if there's something he doesn't understand and wants to get to the bottom of it, he will focus on you and start questioning you and get to the bottom of it. That's the way his brain worked. So he did that to me for a while because he didn't understand lifting weights and everything. In the end, he says to me, \"Wow, Steve, I appreciate that. Let me give you some good advice.\"", "Then, he starts telling me how to pick up girls—which he's an expert on. He says to me, \"I don't know how much girls like guys that are as big as you.\" He thought it might be a turn-off. \"But you know what, you have a nice smile.\" So that was the one compliment he gave me. Then, he starts to tell me that it's a numbers game. You have to be rational about it. You're at an airport lounge, or you're at a bar. It's Saturday night in Pasadena or Westwood, and you're talking to some girl. He says, \"You're never going to see her again. This is your five-minute interaction. Do what you have to do. If she doesn't like you, go to the next one.\" He also shares some colorful details. But, the point is that you should not care what they think of you. You're trying to do your thing. He did have a reputation at Caltech as a womanizer, and I could go into that too, but I heard all this from the secretaries.", "Dwarkesh Patel 4:30", "With the students or only the secretaries?", "Steve Hsu 4:35", "Secretaries! Well mostly secretaries. They were almost all female at that time. He had thought about this a lot and thought of it as a numbers game. The PUA guys (pick-up artists) will say, “Follow the algorithm, and whatever happens, it's not a reflection on your self-esteem. It's just what happened. And you go on to the next one.” That was the advice he was giving me, and he said other things that were pretty standard: Be funny, be confident—just basic stuff. Steve Hu: But the main thing I remember was the operationalization of it as an algorithm. You shouldn’t internalize whatever happens if you get rejected, because that hurts. When we had to go across the bar to talk to that girl (maybe it doesn’t happen in your generation), it was terrifying. We had to go across the bar and talk to some lady! It’s loud, and you’ve got a few minutes to make your case. Nothing is scarier than walking up to the girl and her friends. Feynman was telling me to train myself out of that. You're never going to see them again; the face space of humanity is so big that you'll probably never re-encounter them again. It doesn't matter. So, do your best.", "Dwarkesh Patel 6:06", "Yeah, that's interesting because.. I wonder whether he was doing this in the 40’–– like when he was at that age, was he doing this? I don't know what the cultural conventions were at the time. Were there bars in the 40s where you could just go ahead and hit on girls or?", "Steve Hsu 6:19", "Oh yeah, absolutely. If you read literature from that time, or even a little bit earlier like Hemingway or John O'Hara, they talk about how men and women interacted in bars and stuff in New York City. So, that was much more of a thing back than when compared to your generation. That's what I can’t figure out with my kids! What is going on? How do boys and girls meet these days? Back in the day, the guy had to do all the work. It was the most terrifying thing you could do, and you had to train yourself out of that.", "Dwarkesh Patel 6:57", "By the way, for the context for the audience, when Feynman says you were a big guy, you were a football player at Caltech, right? There's a picture of you on your website, maybe after college or something, but you look pretty ripped. Today, it seems more common because of the gym culture. But I don’t know about back then. I don't know how common that body physique was.", "Steve Hsu 7:24", "It’s amazing that you asked this question. I'll tell you a funny story. One of the reasons Feynman found this so weird was because of the way body-building entered the United States.  They were regarded as freaks and homosexuals at first. I remember swimming and football in high school (swimming is different because it's international), and in swimming, I picked up a lot of advanced training techniques from the Russians and East Germans. But football was more American and not very international. So our football coach used to tell us not to lift weights when we were in junior high school because it made you slow. “You’re no good if you’re bulky.” “You gotta be fast in football.” Then, something changed around the time I was in high school–the coaches figured it out. I began lifting weights since I was an age group swimmer, like maybe age 12 or 14. Then, the football coaches got into it mainly because the University of Nebraska had a famous strength program that popularized it. At the time, there just weren't a lot of big guys. The people who knew how to train were using what would be considered “advanced knowledge” back in the 80s. For example, they’d know how to do a split routine or squat on one day and do upper body on the next day–– that was considered advanced knowledge at that time. I remember once.. I had an injury, and I was in the trainer's room at the Caltech athletic facility. The lady was looking at my quadriceps. I’d pulled a muscle, and she was looking at the quadriceps right above your kneecap. If you have well-developed quads, you'd have a bulge, a bump right above your cap. And she was looking at it from this angle where she was in front of me, and she was looking at my leg from the front. She's like, “Wow, it's swollen.” And I was like, “That's not the injury. That's my quadricep!” And she was a trainer! So, at that time, I could probably squat 400 pounds. So I was pretty strong and had big legs. The fact that the trainer didn't really understand what well-developed anatomy was supposed to look like blew my mind!", "So anyway, we've come a long way. This isn't one of these things where you have to be old to have any understanding of how this stuff evolved over the last 30-40 years.", "Dwarkesh Patel 10:13", "But, I wonder if that was a phenomenon of that particular time or if people were not that muscular throughout human history. You hear stories of  Roman soldiers who are carrying 80 pounds for 10 or 20 miles a day. I mean, there are a lot of sculptures in the ancient world, or not that ancient, but the people look like they have a well-developed musculature.", "Steve Hsu 10:34", "So the Greeks were very special because they were the first to think about the word gymnasium. It was a thing called the Palaestra , where they were trained in wrestling and boxing. They were the first people who were seriously into physical culture specific training for athletic competition.", "Even in the 70s, when I was a little kid, I look back at the guys from old photos and they were skinny . So skinny! The guys who went off and fought World War Two, whether they were on the German side, or the American side, were like 5’8-5’9 weighing around 130 pounds - 140 pounds. They were much different from what modern US Marines would look like. So yeah, physical culture was a new thing. Of course, the Romans and the Greeks had it to some degree, but it was lost for a long time. And, it was just coming back to the US when I was growing up. So if you were reasonably lean (around 200 pounds) and you could bench over 300.. that was pretty rare back in those days.", "Embryo selection", "Dwarkesh Patel 11:46", "Okay, so let's talk about your company Genomic Prediction . Do you want to talk about this company and give an intro about what it is?", "Steve Hsu 11:55", "Yeah. So there are two ways to introduce it. One is the scientific view. The other is the IVF view. I can do a little of both. So scientifically, the issue is that we have more and more genomic data. If you give me the genomes of a bunch of people and then give me some information about each person, ex. Do they have diabetes? How tall are they? What's their IQ score?  It’s a natural AI machine learning problem to figure out which features in the DNA variation between people are predictive of whatever variable you're trying to predict.", "This is the ancient scientific question of how you relate the genotype of the organism (the specific DNA pattern), to the phenotype (the expressed characteristics of the organism). If you think about it, this is what biology is! We had the molecular revolution and figured out that it’s people's DNA that stores the information which is passed along. Evolution selects on the basis of the variation in the DNA that’s expressed as phenotype, as that phenotype affects fitness/reproductive success. That's the whole ballgame for biology. As a physicist who's trained in mathematics and computation, I'm lucky that I arrived on the scene at a time when we're going to solve this basic fundamental problem of biology through brute force, AI, and machine learning. So that's how I got into this. Now you ask as an entrepreneur, “Okay, fine Steve, you're doing this in your office with your postdocs and collaborators on your computers. What use is it?”", "The most direct application of this is in the following setting: Every year around the world, millions of families go through IVF—typically because they're having some fertility issues, and also mainly because the mother is in her 30s or maybe 40s. In the process of IVF, they use hormone stimulation to produce more eggs. Instead of one per cycle, depending on the age of the woman, they might produce anywhere between five to twenty, or even sixty to a hundred eggs for young women who are hormonally stimulated (egg donors). From there, it’s trivial because men produce sperm all the time. You can fertilize eggs pretty easily in a little dish, and get a bunch of embryos that grow. They start growing once they're fertilized. The problem is that if you're a family and produce more embryos than you’re going to use, you have the embryo choice problem. You have to figure out which embryo to choose out of  say, 20 viable embryos.", "The most direct application of the science that I described is that we can now genotype those embryos from a small biopsy. I can tell you things about the embryos. I could tell you things like your fourth embryo being an outlier. For breast cancer risk, I would think carefully about using number four. Number ten is an outlier for cardiovascular disease risk. You might want to think about not using that one. The other ones are okay. So, that’s what genomic prediction does. We work with 200 or 300 different IVF clinics in six continents.", "Dwarkesh Patel 15:46", "Yeah, so the super fascinating thing about this is that the diseases you talked about—or at least their risk profiles—are polygenic. You can have thousands of SNPs (single nucleotide polymorphisms) determining whether you will get a disease. So, I'm curious to learn how you were able to transition to this space and how your knowledge of mathematics and physics was able to help you figure out how to make sense of all this data.", "Steve Hsu 16:16", "Yeah, that's a great question. So again, I was stressing the fundamental scientific importance of all this stuff. If you go into a slightly higher level of detail—which you were getting at with the individual SNPs, or polymorphisms—there are individual locations in the genome, where I might differ from you, and you might differ from another person. Typically, each pair of individuals will differ at a few million places in the genome—and that controls why I look a little different than you", "A lot of times, theoretical physicists have a little spare energy and they get tired of thinking about quarks or something. They want to maybe dabble in biology, or they want to dabble in computer science, or some other field. As theoretical physicists, we always feel, “Oh, I have a lot of horsepower, I can figure a lot out.” (For example, Feynman helped design the first parallel processors for thinking machines.) I have to figure out which problems I can make an impact on because I can waste a lot of time. Some people spend their whole lives studying one problem, one molecule or something, or one biological system. I don't have time for that, I'm just going to jump in and jump out. I'm a physicist. That's a typical attitude among theoretical physicists. So, I had to confront sequencing costs about ten years ago because I knew the rate at which they were going down. I could anticipate that we’d get to the day (today) when millions of genomes with good phenotype data became available for analysis. A typical training run might involve almost a million genomes or half a million genomes. The mathematical question then was: What is the most effective algorithm given a set of genomes and phenotype information to build the best predictor? This can be boiled down to a very well-defined machine learning problem. It turns out, for some subset of algorithms, there are theorems— performance guarantees that give you a bound on how much data you need to capture almost all of the variation in the features. I spent a fair amount of time, probably a year or two, studying these very famous results, some of which were proved by a guy named Terence Tao, a Fields medalist. These are results on something called compressed sensing: a penalized form of high dimensional regression that tries to build sparse predictors. Machine learning people might notice L1-penalized optimization . The very first paper we wrote on this was to prove that using accurate genomic data and these very abstract theorems in combination could predict how much data you need to “solve” individual human traits. We showed that you would need at least a few hundred thousand individuals and their genomes and their heights to solve for height as a phenotype. We proved that in a paper using all this fancy math in 2012. Then around 2017, when we got a hold of half a million genomes, we were able to implement it in practical terms and show that our mathematical result from some years ago was correct. The transition from the low performance of the predictor to high performance (which is what we call a “phase transition boundary” between those two domains) occurred just where we said it was going to occur. Some of these technical details are not understood even by practitioners in computational genomics who are not quite mathematical. They don't understand these results in our earlier papers and don't know why we can do stuff that other people can't, or why we can predict how much data we'll need to do stuff. It's not well-appreciated, even in the field. But when the big AI in our future in the singularity looks back and says, “Hey, who gets the most credit for this genomics revolution that happened in the early 21st century?” they're going to find these papers on the archive where we proved this was possible, and how five years later, we actually did it. Right now, it's under-appreciated, but the future AI––that Roko's Basilisk AI–will look back and will give me a little credit for it.", "Dwarkesh Patel 21:03", "Yeah, I was a little interested in this a few years ago. At that time, I looked into how these polygenic risk scores were calculated. Basically, you find the correlation between the phenotype and the alleles that correlate with it. You add up how many copies of these alleles you have, what the correlations are, and you do a weighted sum of that. So that seemed very simple, especially in an era where we have all this machine learning, but it seems like they're getting good predictive results out of this concept. So, what is the delta between how good you can go with all this fancy mathematics versus a simple sum of correlations?", "Steve Hsu 21:43", "You're right that the ultimate models that are used when you've done all the training and when the dust settles, are straightforward. They’re pretty simple and have an additive structure. Basically, I either assign a nonzero weight to this particular region in the genome, or I don't. Then, I need to know what the weighting is, but then the function is a linear function or additive function of the state of your genome at some subset of positions. The ultimate model that you get is straightforward. Now, if you go back ten years, when we were doing this, there were lots of claims that it was going to be super nonlinear—that it wasn't going to be additive the way I just described it. There were going to be lots of interaction terms between regions. Some biologists are still convinced that's true, even though we already know we have predictors that don't have interactions.", "The other question, which is more technical, is whether in any small region of your genome, the state of the individual variants is highly correlated because you inherit them in chunks. You need to figure out which one you want to use. You don't want to activate all of them because you might be overcounting. So that's where these L-1 penalization sparse methods force the predictor to be sparse. That is a key step. Otherwise, you might overcount. If you do some simple regression math, you might have 10-10 different variants close by that have roughly the same statistical significance.", "But, you don't know which one of those tends to be used, and you might be overcounting effects or undercounting effects. So, you end up doing a high-dimensional optimization, where you grudgingly activate an SNP when the signal is strong enough. Once you activate that one, the algorithm has to be smart enough to penalize the other ones nearby and not activate them because you're over-counting effects if you do that. There's a little bit of subtlety in it. But, the main point you made is that the ultimate predictors, which are very simple and addictive—sum over effect sizes and time states—work well. That’s related to a deep statement about the additive structure of the genetic architecture of individual differences.", "In other words, it's weird that the ways that I differ from you are merely just because I have more of something or you have less of something. It’s not like these things are interacting in some incredibly understandable way. That's a deep thing—which is not appreciated that much by biologists yet. But over time, they'll figure out something interesting here.", "Why hasn’t natural selection already optimized humans?", "Dwarkesh Patel 24:19", "Right. I thought that was super fascinating, and I commented on that on Twitter. What is interesting about that is two things. One is that you have this fascinating evolutionary argument about why that would be the case that you might want to explain. The second is that it makes you wonder if becoming more intelligent is just a matter of turning on certain SNPs. It's not a matter of all this incredible optimization being like solving a sudoku puzzle or anything. If that's the case, then why hasn't the human population already been selected to be maxed out on all these traits if it's just a matter of a bit flip?", "Steve Hsu 25:00", "Okay, so the first issue is why is this genetic architecture so surprisingly simple? Again, we didn't know it would be simple ten years ago. So when I was checking to see whether this was a field that I should go into depending on our capabilities to make progress, we had to study the more general problem of the nonlinear possibilities. But eventually, we realized that most of the variance would probably be captured in an additive way. So, we could narrow down the problem quite a bit. There are evolutionary reasons for this. There’s a famous theorem by Fisher, the father of population genetics (aka. frequentist statistics). Fisher proved something called Fisher's Fundamental Theorem of Natural Selection , which says that if you impose some selection pressure on a population, the rate at which that population responds to the selection pressure (let’s say it’s the bigger rats that out-compete, the smaller rats) then at what rate does the rat population start getting bigger? He showed that it's the additive variants that dominate the rate of evolution. It's easy to understand why if it's a nonlinear mechanism, you need to make the rat bigger. When you sexually reproduce, and that gets chopped apart, you might break the mechanism. Whereas, if each short allele has its own independent effect, you can inherit them without worrying about breaking the mechanisms. It was well known among a tiny theoretical population of biologists that adding variants was the dominant way that populations would respond to selection. That was already known. The other thing is that humans have been through a pretty tight bottleneck, and we're not that different from each other. It's very plausible that if I wanted to edit a human embryo and make it into a frog, then there are all kinds of subtle nonlinear things I’d have to do. But all those identical nonlinear complicated subsystems are fixed in humans. You have the same system as I do. You have the not human, not frog or ape, version of that region of DNA, and so do I. But the small ways we differ are mostly little additive switches. That's this deep scientific discovery from over the last 5-10 years of work in this area.", "Now, you were asking about why evolution hasn't completely “optimized” all traits in humans already. I don't know if you’ve ever done deep learning or high-dimensional optimization, but in that high-dimensional space, you're often moving on a slightly-tilted surface. So, you're getting gains, but it's also flat. Even though you scale up your compute or data size by order of magnitude, you don't move that much farther. You get some gains, but you're never really at the global max of anything in these high-dimensional spaces. I don't know if that makes sense to you. But it's pretty plausible to me that two things are important here. One is that evolution has not had that much time to optimize humans. The environment that humans live in changed radically in the last 10,000 years. For a while, we didn't have agriculture, and now we have agriculture. Now, we have a swipe left if you want to have sex tonight. The environment didn't stay fixed. So, when you say fully optimized for the environment, what do you mean?", "The ability to diagonalize matrices might not have been very adaptive 10,000 years ago. It might not even be adaptive now. But anyway, it's a complicated question that one can't reason naively about. “If God wanted us to be 10 feet tall, we'd be 10 feet tall.” Or “if it's better to be smart, my brain would be *this* big or something.” You can't reason naively about stuff like that.", "Dwarkesh Patel 29:04", "I see. Yeah.. Okay. So I guess it would make sense then that for example, with certain health risks, the thing that makes you more likely to get diabetes or heart disease today might be… I don't know what the pleiotropic effect of that could be. But maybe that's not that important one year from now.", "Steve Hsu 29:17", "Let me point out that most of the diseases we care about now—not the rare ones, but the common ones—manifest when you're 50-60 years old. So there was never any evolutionary advantage of being super long-lived. There's even a debate about whether the grandparents being around to help raise the kids lifts the fitness of the family unit. But, most of the time in our evolutionary past, humans just died fairly early. So, many of these diseases would never have been optimized against evolution. But, we see them now because we live under such good conditions, we can regulate people over 80 or 90 years.", "Dwarkesh Patel 29:57", "Regarding the linearity and additivity point, I was going to make the analogy that– and I'm curious if this is valid– but when you're programming, one thing that's good practice is to have all the implementation details in separate function calls or separate programs or something, and then have your main loop of operation just be called different functions like, “Do this, do that”, so that you can easily comment stuff away or change arguments. This seemed very similar to that where by turning these names on and off, you can change what the next offering will be. And, you don't have to worry about actually implementing whatever the underlying mechanism is.", "Steve Hsu 30:41", "Well, what you said is related to what Fisher proved in his theorems. Which is that, if suddenly, it becomes advantageous to have X, (like white fur instead of black fur) or something, it would be best if there were little levers that you could move somebody from black fur to white fur continuously by modifying those switches in an additive way. It turns out that for sexually reproducing species where the DNA gets scrambled up in every generation, it's better to have switches of that kind. The other point related to your software analogy is that there seem to be modular, fairly modular things going on in the genome.", "When we looked at it, we were the first group to have, initially, 20 primary disease conditions we had decent predictors for. We started looking carefully at just something as trivial as the overlap of my sparsely trained predictor. It turns on and uses *these* features for diabetes, but it uses *these* features for schizophrenia. It’s the stupidest metric, it’s literally just how much overlap or variance accounted for overlap is there between pairs of disease conditions. It's very modest. It's the opposite of what naive biologists would say when they talk about pleiotropy. They're just disjoint! Disjoint regions of your genome that govern certain things. And why not? You have 3 billion base pairs—there's a lot you can do in there. There's a lot of information there. If you need 1000 to control diabetes risk, I estimated you could easily have 1000 roughly independent traits that are just disjoint in their genetic dependencies. So, if you think about D&D,  your strength, decks, wisdom, intelligence, and charisma—those are all disjoint. They're all just independent variables. So it's like a seven-dimensional space that your character lives in. Well, there's enough information in the few million differences between you and me. There's enough for a 1000-dimensional space of variation.", "“Oh, how considerable is your spleen?” My spleen is a little bit smaller, yours is a little bit bigger - that can vary independently of your IQ. Oh, it's a big surprise. The size of your spleen can vary independently of the size of your big toe. If you do information theory, there are about 1000 different parameters, and I can vary independently with the number of variants I have between you and me. Because you understand some information theory, it’s trivial to explain, but try explaining to a biologist and you won't get very far.", "Dwarkesh Patel 33:27", "Yeah, yeah, do the log two of the number of.. is that basically how you do it? Yeah.", "Steve Hsu 33:33", "Okay. That's all it is. I mean, it's in our paper. We look at how many variants typically account for most of the variation for any of these major traits, and then imagine that they're mostly disjoint. Then it’s just all about: how many variants do you need to independently vary 1000 traits? Well, a few million differences between you and me are enough. It's very trivial math. Once you understand the base and how to reason about information theory, then it's very trivial. But, it ain’t trivial for theoretical biologists, as far as I can tell.", "Aging", "Dwarkesh Patel 34:13", "But the result is so interesting because I remember reading in The Selfish Gene that, as he (Dawkins) hypothesizes that the reason we could be aging is an antagonistic clash. There's something that makes you healthier when you're young and fertile that makes you unhealthy when you're old. Evolution would have selected for such a trade-off because when you're young and fertile, evolution and your genes care about you. But, if there's enough space in the genome —where these trade-offs are not necessarily necessary—then this could be a bad explanation for aging, or do you think I'm straining the analogy?", "Steve Hsu 34:49", "I love your interviews because the point you're making here is really good. So Dawkins, who is an evolutionary theorist from the old school when they had almost no data—you can imagine how much data they had compared to today—he would tell you a story about a particular gene that maybe has a positive effect when you're young, but it makes you age faster.", "So, there's a trade-off. We know about things like sickle cell anemia. We know stories about that. No doubt, some stories are true about specific variants in your genome. But that's not the general story. The general story you only discovered in the last five years is that thousands of variants control almost every trait and those variants tend to be disjoint from the ones that control the other trait. They weren't wrong, but they didn't have the big picture.", "Dwarkesh Patel 35:44", "Yeah, I see. So, you had this paper , it had polygenic, health index, general health, and disease risk.. You showed that with ten embryos, you could increase disability-adjusted life years by four, which is a massive increase if you think about it. Like what if you could live four years longer and in a healthy state?", "Steve Hsu 36:05", "Yeah, what's the value of that? What would you pay to buy that for your kid?", "Dwarkesh Patel 36:08", "Yeah. But, going back to the earlier question about the trade-offs and why this hasn't already been selected for,  if you're right and there's no trade-off to do this, just living four years older (even if that's beyond your fertility) just being a grandpa or something seems like an unmitigated good. So why hasn’t this kind of assurance hasn't already been selected for?", "Steve Hsu 36:35", "I’m glad you're asking about these questions because these are things that people are very confused about, even in the field. First of all, let me say that when you have a trait that's controlled by  10,000 variants (eg. height is controlled by an order of 10,000 variants and probably cognitive ability a little bit more), the square root of 10,000 is 100.  So, if I could come to this little embryo, and I want to give it one extra standard deviation of height, I only need to edit 100. I only need to flip 100 minus variance to plus variance. These are very rough numbers. But, one standard deviation is the square root of “n”. If I flip a coin “n” times, I want a better outcome in terms of the number of ratio heads to tails. I want to increase it by one standard deviation. I only need to flip the square root of “n” heads because if you flip a lot, you will get a narrow distribution that peaks around half and the width of that distribution is the square root of “n”.", "Once I tell you, “Hey, your height is controlled by 10,000 variants, and I only need to flip 100 genetic variants to make you one standard deviation for a male,” (that would be three inches tall, two and a half or three inches taller), you suddenly realize, “Wait a minute, there are a lot of variants up for grabs there. If I could flip 500 variants in your genome, I would make you five standard deviations taller, you'd be seven feet tall.”  I didn't even have to do that much work, and there's a lot more variation where that came from. I could have flipped even more because I only flipped 500 out of 10,000, right? So, there's this quasi-infinite well of variation that evolution or genetic engineers could act on. Again, the early population geneticists who bred corn and animals know this. This is something they explicitly know about because they've done calculations.", "Interestingly, human geneticists who are mainly concerned with diseases and stuff, are often unfamiliar with the math that the animal breeders already know. You might be interested to know that the milk you drink comes from heavily genetically-optimized cows bred artificially using almost exactly the same technologies that we use for genomic prediction. But they're doing it to optimize milk production and stuff like this. So there is a big well of variance. It's a consequence of the trait's polygenicity. On the longevity side of things, it does look like people could “be engineered” to live much longer by flipping the variants that make the risk for diseases that shorten your life. The question is then, “Why didn't evolution give us life spans of thousands of years?” People in the Bible used to live for thousands of years. Why don't we? I mean, *chuckles* that probably didn’t happen. But the question is, you have this very high dimensional space, and you have a fitness function. How big is the slope in a particular direction of that fitness function? How much more successful reproductively would Joe caveman have been if he lived to be 150 instead of only 100 or something? There just hasn't been enough time to explore this super high-dimensional space. That's the actual answer. But now, we have the technology, and we're going to fucking explore it fast. That's the point that the big lightbulb should go off. We’re mapping this space out now. Pretty confident in 10 years or so, with the CRISPR gene editing technologies will be ready for massively multiplexed edits. We'll start navigating in this high-dimensional space as much as we like. So that's the more long-term consequence of the scientific insights.", "Dwarkesh Patel 40:53", "Yeah, that's super interesting. What do you think will be the plateau for a trait of how long you’ll live? With the current data and techniques, do you think it could be significantly greater than that?", "Steve Hsu 41:05", "We did a simple calculation—which amazingly gives the correct result. This polygenic predictor that we built (which isn't perfect yet but will improve as we gather more data) is used in selecting embryos today. If you asked, out of a billion people, “What's the best person typically, what would their score be on this index and then how long would they be predicted to live?”’ It's about 120 years. So it's spot on.", "One in a billion types of person lives to be 120 years old. How much better can you do? Probably a lot better. I don't want to speculate, but other nonlinear effects, things that we're not taking into account will start to play a role at some point. So, it's a little bit hard to estimate what the true limiting factors will be.", "But one super robust statement, and I'll stand by it, debate any Nobel Laureate in biology who wants to discuss it even,  is that there are many variants available to be selected or edited. There's no question about that. That's been established in animal breeding in plant breeding for a long time now. If you want a chicken that grows to be *this* big, instead of *this* big, you can do it. You can do it if you want a cow that produces 10 times or 100 times more milk than a regular cow. The egg you ate for breakfast this morning, those bio-engineered chickens that lay almost an egg a day… A chicken in the wild lays an egg a month. How the hell did we do that? By genetic engineering. That's how we did it.", "Dwarkesh Patel 42:51", "Yeah. That was through brute artificial selection. No fancy machine learning there.", "Steve Hsu 42:58", "Last ten years, it's gotten sophisticated machine learning genotyping of chickens. Artificial insemination, modeling of the traits using ML last ten years. For cow breeding, it's done by ML.", "First Mover Advantage", "Dwarkesh Patel 43:18", "I had no idea. That's super interesting. So, you mentioned that you're accumulating data and improving your techniques over time, is there a first mover advantage to a genomic prediction company like this? Or is it whoever has the newest best algorithm for going through the biobank data?", "Steve Hsu 44:16", "That's another super question. For the entrepreneurs in your audience, I would say in the short run, if you ask what the valuation of GPB should be? That's how the venture guys would want me to answer the question. There is a huge first mover advantage because they're important in the channel relationships between us and the clinics. Nobody will be able to get in there very easily when they come later because we're developing trust and an extensive track record with clinics worldwide—and we're well-known.", "So could 23andme or some company with a huge amount of data—if they were to get better AI/ML people working on this—blow us away a little bit and build better predictors because they have much more data than we do? Possibly, yes. Now, we have had core expertise in doing this work for years that we're just good at it. Even though we don't have as much data as 23andme, our predictors might still be better than theirs.", "I'm out there all the time, working with biobanks all around the world. I don't want to say all the names, but other countries are trying to get my hands on as much data as possible.", "But, there may not be a lasting advantage beyond the actual business channel connections to that particular market. It may not be a defensible, purely scientific moat around the company. We have patents on specific technologies about how to do genotyping or error correction on the embryo, DNA, and stuff like this. We do have patents on stuff like that. But this general idea of who will best predict human traits from DNA? It's unclear who's going to be the winner in that race. Maybe it'll be the Chinese government in 50 years? Who knows?", "Dwarkesh Patel 46:13", "Yeah, that's interesting. If you think about a company Google, theoretically, it's possible that you could come up with a better algorithm than PageRank and beat them. But it seems like the engineer at Google is going to come up with whatever edge case or whatever improvement is possible.", "Steve Hsu 46:28", "That's exactly what I would say. PageRank is deprecated by now. But, even if somebody else comes up with a somewhat better algorithm if they have a little bit more data, if you have a team doing this for a long time and you're focused and good, it's still tough to beat you, especially if you have a lead in the market.", "Dwarkesh Patel 46:50", "So, are you guys doing the actual biopsy? Or is it just that they upload the genome, and you're the one processing just giving recommendations? Is it an API call, basically?", "Steve Hsu 47:03", "It's great, I love your question. It is totally standard. Every good IVF clinic in the world regularly takes embryo biopsies. So that's standard. There’s a lab tech doing that. Okay. Then, they take the little sample, put it on ice, and ship it. The DNA as a molecule is exceptionally robust and stable. My other startup solves crimes that are 100 years old from DNA that we get from some semen stain on some rape victim, serial killer victims bra strap, we've done stuff that.", "Dwarkesh Patel 47:41", "Jack the Ripper, when are we going to solve that mystery?", "Steve Hsu 47:44", "If they can give me samples, we can get into that. For example, we just learned that you could recover DNA pretty well if someone licks a stamp and puts it on their correspondence. If you can do Neanderthals, you can do a lot to solve crimes. In the IVF workflow, our lab, which is in New Jersey, can service every clinic in the world because they take the biopsy, put it in a standard shipping container, and send it to us. We’re actually genotyping DNA in our lab, but we've trained a few of the bigger clinics to do the genotyping on their site. At that point, they upload some data into the cloud, and then they get back some stuff from our platform. And at that point, it's going to be the whole world, every human who wants their kid to be healthy and get the best they can– that data is going to come up to us, and the report is going to come back down to their IVF physician.", "Dwarkesh Patel 48:46", "Which is great if you think that there's a potential that this technology might get regulated in some way, you could go to Mexico or something, have them upload the genome (you don't care what they upload it from), and then get the recommendations there.", "Steve Hsu 49:05", "I think we’re going to evolve to a point where we are going to be out of the wet part of this business and only in the cloud and bit part of this business. No matter where it is, the clinics are going to have a sequencer, which is *this* big, and their tech is going to quickly upload and retrieve the report for the physician three seconds later. Then, the parents are going to look at it on their phones or whatever. We’re basically there with some clinics. It’s going to be tough to regulate because it’s just this. You have the bits, and you’re in some repressive, terrible country that doesn’t allow you to select for some special traits that people are nervous about, but you can upload it to some vendor that’s in Singapore or some free country, and they give you the report back.", "Doesn’t have to be us, we don’t do the edgy stuff. We only do the health-related stuff right now. But, if you want to know how tall this embryo is going to be… I’ll tell you a mind-blower! When you do face recognition in AI, you're mapping someone's face into a parameter space on the order of hundreds of parameters, each of those parameters is super heritable.", "In other words, if I take two twins and photograph them, and the algorithm gives me the value of that parameter for twin one and two, they're very close. That's why I can't tell the two twins apart, and face recognition can ultimately tell them apart if it’s really good system. But you can conclude that almost all these parameters are identical for those twins. So it's highly heritable.", "We're going to get to a point soon where I can do the inverse problem where I have your DNA  and I predict each of those parameters in the face recognition algorithm and then reconstruct the face. If I say that when this embryo will be 16, that is what she will look like. When she's 32, this is what she's going to look like. I'll be able to do that, for sure. It's only an AI/ML problem right now. But basic biology is clearly going to work. So then you're going to be able to say, “Here's a report. Embryo four is so cute.” Before, we didn't know we wouldn't do that, but it will be possible.", "Dwarkesh Patel 51:37", "Before we get married, you'll want to see what their genotype implies about their faces' longevity. It's interesting that you hear stories about these cartel leaders who will get plastic surgery or something to evade the law, you could have a check where you look at a lab and see if it matches the face you would have had five years ago when they caught you on tape.", "Steve Hsu 52:02", "This is a little bit back to old-school Gattaca, but you don't even need the face! You can just take a few molecules of skin cells and phenotype them and know exactly who they are. I've had conversations with these spooky Intel folks. They're very interested in, “Oh, if some Russian diplomat comes in, and we think he's a spy, but he's with the embassy, and he has a coffee with me, and I save the cup and send it to my buddy at Langley, can we figure out who this guy is? And that he has a daughter who's going to Chote? Can do all that now.", "Dwarkesh Patel 52:49", "If that's true, then in the future, world leaders will not want to eat anything or drink. They'll be wearing a hazmat suit to make sure they don't lose a hair follicle.", "Steve Hsu 53:04", "The next time Pelosi goes, she will be in a spacesuit if she cares. Or the other thing is, they're going to give it. They're just going to be, “Yeah, my DNA is everywhere. If I'm a public figure, I can't track my DNA. It's all over.”", "Dwarkesh Patel 53:17", "But the thing is, there's so much speculation that Putin might have cancer or something. If we have his DNA, we can see his probability of having cancer at age 70, or whatever he is, is 85%. So yeah, that’d be a very verified rumor. That would be interesting.", "Steve Hsu 53:33", "I don't think that would be very definitive. I don't think we'll reach that point where you can say that Putin has cancer because of his DNA—which I could have known when he was an embryo. I don't think it's going to reach that level. But, we could say he is at high risk for a type of cancer.", "Genomics in dating", "Dwarkesh Patel 53:49", "In 50 or 100 years, if the majority of the population is doing this, and if the highly heritable diseases get pruned out of the population, does that mean we'll only be left with lifestyle diseases? So, you won't get breast cancer anymore, but you will still get fat or lung cancer from smoking?", "Steve Hsu 54:18", "It's hard to discuss the asymptotic limit of what will happen here. I'm not very confident about making predictions like that. It could get to the point where everybody who's rich or has been through this stuff for a while, (especially if we get the editing working) is super low risk for all the top 20 killer diseases that have the most life expectancy impact. Maybe those people live to be 300 years old naturally. I don't think that's excluded at all. So, that's within the realm of possibility. But it's going to happen for a few lucky people like Elon Musk before it happens for shlubs like you and me.", "There are going to be very angry inequality protesters about the Trump grandchildren, who, models predict will live to be 200 years old. People are not going to be happy about that.", "Dwarkesh Patel 55:23", "So interesting. So, one way to think about these different embryos is if you're producing multiple embryos, and you get to select from one of them, each of them has a call option, right? Therefore, you probably want to optimize for volatility as much, or if not more than just the expected value of the trait. So, I'm wondering if there are mechanisms where you can  increase the volatility in meiosis or some other process. You just got a higher variance, and you can select from the tail better.", "Steve Hsu 55:55", "Well, I'll tell you something related, which is quite amusing. So I talked with some pretty senior people at the company that owns all the dating apps. So you can look up what company this is, but they own Tinder and Match. They’re kind of interested in perhaps including a special feature where you upload your genome instead of Tinder Gold / Premium.  And when you match- you can talk about how well you match the other person based on your genome. One person told me something shocking. Guys lie about their height on these apps.", "Dwarkesh Patel 56:41", "I’m shocked, truly shocked hahaha.", "Steve Hsu 56:45", "Suppose you could have a DNA-verified height. It would prevent gross distortions if someone claims they're 6’2 and they’re 5’9. The DNA could say that's unlikely. But no, the application to what you were discussing is more like, “Let's suppose that we're selecting on intelligence or something. Let's suppose that the regions where your girlfriend has all the plus stuff are complementary to the regions where you have your plus stuff. So, we could model that and say,  because of the complementarity structure of your genome in the regions that affect intelligence, you're very likely to have some super intelligent kids way above your, the mean of your you and your girlfriend's values. So, you could say things like it being better for you to marry that girl than another. As long as you go through embryo selection, we can throw out the bad outliers. That's all that's technically feasible. It's true that one of the earliest patent applications, they'll deny it now. What's her name? Gosh, the CEO of 23andme…Wojcicki, yeah. She'll deny it now. But, if you look in the patent database, one of the very earliest patents that 23andme filed when they were still a tiny startup was about precisely this: Advising parents about mating and how their kids would turn out and stuff like this. We don't even go that far in GP, we don't even talk about stuff like that, but they were thinking about it when they founded 23andme.", "Dwarkesh Patel 58:38", "That is unbelievably interesting. By the way, this just occurred to me—it's supposed to be highly heritable, especially people in Asian countries, who have the experience of having grandparents that are much shorter than us, and then parents that are shorter than us, which suggests that the environment has a big part to play in it malnutrition or something. So how do you square that our parents are often shorter than us with the idea that height is supposed to be super heritable.", "Steve Hsu 59:09", "Another great observation. So the correct scientific statement is that we can predict height for people who will be born and raised in a favorable environment. In other words, if you live close to a McDonald's and you're able to afford all the food you want, then the height phenotype becomes super heritable because the environmental variation doesn't matter very much. But, you and I both know that people are much smaller if we return to where our ancestors came from, and also, if you look at how much food, calories, protein, and calcium they eat, it's different from what I ate and what you ate growing up. So we're never saying the environmental effects are zero. We're saying that for people raised in a particularly favorable environment, maybe the genes are capped on what can be achieved, and we can predict that. In fact, we have data from Asia, where you can see much bigger environmental effects. Age affects older people, for fixed polygenic scores on the trait are much shorter than younger people.", "Ancestral populations", "Dwarkesh Patel 1:00:31", "Oh, okay. Interesting. That raises the next question I was about to ask: how applicable are these scores across different ancestral populations?", "Steve Hsu 1:00:44", "Huge problem is that most of the data is from Europeans. What happens is that if you train a predictor in this ancestry group and go to a more distant ancestry group, there's a fall-off in the prediction quality. Again, this is a frontier question, so we don't know the answer for sure. But many people believe that there's a particular correlational structure in each population, where if I know the state of this SNP, I can predict the state of these neighboring SNPs. That is a product of that group's mating patterns and ancestry. Sometimes, the predictor, which is just using statistical power to figure things out, will grab one of these SNPs as a tag for the truly causal SNP in there. It doesn't know which one is genuinely causal, it is just grabbing a tag, but the tagging quality falls off if you go to another population (eg. This was a very good tag for the truly causal SNP in the British population. But it's not so good a tag in the South Asian population for the truly causal SNP, which we hypothesize is the same).", "It's the same underlying genetic architecture in these different ancestry groups. We don't know if that's a hypothesis. But even so, the tagging quality falls off. So my group spent a lot of our time looking at the performance of predictor training population A, and on distant population B, and modeling it trying to figure out trying to test hypotheses as to whether it's just the tagging decay that’s responsible for most of the faults. So all of this is an area of active investigation. It'll probably be solved in five years. The first big biobanks that are non-European are coming online. We're going to solve it in a number of years.", "Dwarkesh Patel 1:02:38", "Oh, what does the solution look like?  Unless you can identify the causal mechanism by which each SNP is having an effect, how can you know that something is a tag or whether it's the actual underlying switch?", "Steve Hsu 1:02:54", "The nature of reality will determine how this is going to go. So we don't truly know if the innate underlying biology is true. This is an amazing thing. People argue about human biodiversity and all this stuff, and we don't even know whether these specific mechanisms that predispose you to be tall or have heart disease are the same in these different ancestry groups. We assume that it is, but we don't know that. As we get further away to Neanderthals or Homo Erectus, you might see that they have a slightly different architecture than we do.", "But let's assume that the causal structure is the same for South Asians and British people. Then it's a matter of improving the tags. How do I know if I don't know which one is causal? What do I mean by improving the tags? This is a machine learning problem. If there's an SNP, which is always coming up as very significant when I use it across multiple ancestry groups, maybe that one's casual. As I vary the tagging correlations in the neighborhood of that SNP, I always find that that one is the intersection of all these different sets, making me think that one's going to be causal. That's a process we're engaged in now—trying to do that. Again, it's just a machine learning problem. But we need data. That's the main issue.", "Dwarkesh Patel 1:04:32", "I was hoping that wouldn't be possible because one way we might go about this research is that it itself becomes taboo or causes other sorts of bad social consequences if you can definitively show that on certain traits, there are differences between ancestral populations, right?", "So, I was hoping that maybe there was an evasion button where we can't say because they're just tags, and the tags might be different between different ancestral populations. But with machine learning, we’ll know.", "Steve Hsu 1:04:59", "That's the situation we're in now, where you have to do some fancy analysis if you want to claim that Italians have lower height potential than Nordics—which is possible. There's been a ton of research about this because there are signals of selection. The alleles, which are activated in height predictors, look like they've been under some selection between North and South Europe over the last 5000 years for whatever reason. But, this is a thing debated by people who study molecular evolution.", "But suppose it's true, okay? That would mean that when we finally get to the bottom of it, we find all the causal loci for height, and the average value for the Italians is lower than that for those living in Stockholm. That might be true. People don't get that excited? They get a little bit excited about height. But they would get really excited if this were true for some other traits, right?", "Suppose the causal variants affecting your level of extraversion are systematic, that the average value of those weighed the weighted average of those states is different in Japan versus Sicily. People might freak out over that. I'm supposed to say that's obviously not true. How could it possibly be true? There hasn't been enough evolutionary time for those differences to arise. After all, it's not possible that despite what looks to be the case for height over the last 5000 years in Europe, no other traits could have been differentially selected over the last 5000 years. That's the dangerous thing. Few people understand this field well enough to understand what you and I just discussed and are so alarmed by it that they're just trying to suppress everything. Most of them don't follow it at this technical level that you and I are just discussing. So, they're somewhat instinctively negative about it, but they don't understand it very well.", "Dwarkesh Patel 1:07:19", "That's good to hear. You see this pattern that by the time that somebody might want to regulate or in some way interfere with some technology or some information, it already has achieved wide adoption. You could argue that that's the case with crypto today. But if it's true that a bunch of IVF clinics worldwide are using these scores to do selection and other things, by the time people realize the implications of this data for other kinds of social questions, this has already been an existing consumer technology.", "Is this eugenics?", "Steve Hsu 1:07:58", "That's true, and the main outcry will be if it turns out that there are massive gains to be had, and only the billionaires are getting them. But that might have the consequence of causing countries to make this free part of their national health care system. So Denmark and Israel pay for IVF. For infertile couples, it's part of their national health care system. They're pretty aggressive about genetic testing. In Denmark, one in 10 babies are born through IVF. It's not clear how it will go. But we're in for some fun times. There's no doubt about that.", "Dwarkesh Patel 1:08:45", "Well, one way you could go is that some countries decided to ban it altogether. And another way it could go is if countries decided to give everybody free access to it. If you had to choose between the two,  you would want to go for the second one. Which would be the hope. Maybe only those two are compatible with people's moral intuitions about this stuff.", "Steve Hsu 1:09:10", "It’s very funny because most wokeist people today hate this stuff. But, most progressives like Margaret Sanger, or anybody who was the progressive intellectual forebears of today's wokeist, in the early 20th century, were all that we would call today in Genesis because they were like, “Thanks to Darwin, we now know how this all works. We should take steps to keep society healthy and (not in a negative way where we kill people we don't like, but we should help society do healthy things when they reproduce and have healthy kids).” Now, this whole thing has just been flipped over among progressives.", "Dwarkesh Patel 1:09:52", "Even in India, less than 50 years ago, Indira Gandhi, she's on the left side of India's political spectrum. She was infamous for putting on these forced sterilization programs. Somebody made an interesting comment about this where they were asked, “Oh, is it true that history always tilts towards progressives? And if so, isn't everybody else doomed? Aren't their views doomed?”", "The person made a fascinating point: whatever we consider left at the time tends to be winning. But what is left has changed a lot over time, right? In the early 20th century, prohibition was a left cause. It was a progressive cause, and that changed, and now the opposite is the left cause. But now, legalizing pot is progressive. Exactly. So, if Conquest’s second law is true, and everything tilts leftover time, just change what is left is, right? That's the solution.", "Steve Hsu 1:10:59", "No one can demand that any of these woke guys be intellectually self-consistent, or even say the same things from one year to another. But one could wonder what they think about these literally Communist Chinese. They’re recycling huge parts of their GDP to help the poor and the southern stuff. Medicine is free, education is free, right? They're clearly socialists, and literally communists. But in Chinese, the Chinese characters for eugenics is a positive thing. It means healthy production.", "But more or less, the whole viewpoint on all this stuff is 180 degrees off in East Asia compared to here, and even among the literal communists.", "Dwarkesh Patel 1:11:55", "So let's talk about one of the traits that people might be interested in potentially selecting for: intelligence. What is the potential for us to acquire the data to correlate the genotype with intelligence?", "Steve Hsu 1:12:15", "Well, that's the most personally frustrating aspect of all of this stuff. If you asked me ten years ago when I started doing this stuff what were we going to get, everything was gone. On the optimistic side of what I would have predicted, so everything's good. Didn't turn out to be interactively nonlinear, or it didn't turn out to be interactively pleiotropic. All these good things, —which nobody could have known a priori how they would work—turned out to be good for gene engineers of the 21st century.", "The one frustrating thing is because of crazy wokeism, and fear of crazy wokists, the most interesting phenotype of all is lagging because everybody's afraid, even though there are very good reasons for medical researchers to want to know the cognitive ability of people in their studies. For example, when you want to study aging, or decline of cognitive function memory, in older people, you want to have baseline measurements of how good their cognitive function was when they were younger, right? So very good reasons for why you want to have all this data. But, researchers are afraid because it's also linked to all these controversial social issues. So, there's just a ginormous amount of genomic data, where there's no cognitive measurement attached as a field to that data—which would have been very cheap to measure. Again, wokists hate this, but I can measure your IQ on a 12-minute test no problem, right? Not with perfect accuracy, but I can get instrumental measurements. If I take it  the NFL has this thing called the Wonderlic—every player being considered for the draft is asked to take this Wonderlic—it's a short test, 12 minutes long, and it's pretty highly correlated (0.8 or 0.9), maybe with a more fulsome IQ measure. So, it would be trivial and inexpensive to gather this data. Once we have my prediction from this earlier math that I was talking about, when you get to a border a million, it could be 1 million or 2 million well-phenotyped people and genomes, we would be able to build a pretty decent IQ predictor that might have a standard error of maybe 10 points or something. That would be incredible for science, but not getting done.", "Dwarkesh Patel 1:14:58", "Suppose there are differences in how things are tagged between different ancestral groups (I'm not talking about average differences or anything, just how the genotype is tagged). And if the Chinese do this first, then, they have an advantage that can't be transferred over, right? Because it's only applicable or advantageously applicable to their population.", "Steve Hsu 1:15:24", "That's a great point. Even a small country like Singapore or Taiwan has enough data to do this. No problem for Estonia. They could do it and have this thing working and just not share it with anybody. So, it's certainly possible. Now that's a little bit too science-fictiony because the leaders who run these countries are not transhumanist rationalists people who read your blog or my blog posts on the internet. I don't think anything that exciting is going to happen. Maybe it will.", "Tradeoffs to intelligence", "Dwarkesh Patel 1:15:59", "Do you think that the potential for pleiotropy is higher with intelligence? I mean, with certain populations? Oh, of course, by the way, they're slim or 5000 is not enough, blah blah blah. But given that you see with certain populations like Ashkenazi Jews, you have a higher incidence of nervous system disorders like Tay Sachs and other things, that seems potential to be the trade-off of higher average intelligence. Do you think that maybe pleiotropy has a higher chance of occurring with intelligence?", "Steve Hsu 1:16:39", "It can only be speculation at this stage. With the history of the Ashkenazi Jews, they also went through some very narrow population bottlenecks. There are some special aspects of their genetics. Whether it's related to cognitive function or not, we don't really know for sure, but there are many reasons why they have a fairly high proportion of inherited diseases and things that they're dealing with. This is one of the reasons why Israel is so progressive when it comes to genetic screening and IVF.", "One thing people talk a lot about is schizophrenia. So they say that schizophrenia could be correlated with creativity. So if your brother's schizophrenic, maybe you're more likely to be creative. He's super creative, but we don't know what he's talking about. Hahahah. So people say that if you start screening against schizophrenia, maybe we won't get creative geniuses. So there are all kinds of pleiotropic things that are possibly true.  But the thing I keep wanting to go back to is that if it's 10,000-20,000 different genetic variants, locations in your genome, that are more or less determining your genetic, cognitive potential, I can go around the high dimensional space. If I find out you can make someone smart using this stuff in this cluster, but it makes them dull or makes them autistic, or it makes them they don't have big muscles, I'll just go round. I don't need to use those, I have plenty more, look over here! Those 500, I don't need to use, I will use *these* 500. This is why it's important to look at historical geniuses who were pretty normal. Maybe they're even good athletes. And, maybe they even were good with the ladies. These people existed. So you have these existence proofs that I can if I need to, if I'm a really good genetic engineer, and I can operate in this 10,000-dimensional space, whatever obstacle you put for me, I will just drive around it. I need lots of data and lots of ML. I'll do it. That's the answer, which, again, most people don't really get. But it's true.", "Dwarkesh Patel 1:18:56", "So I mean, there's a thing where if two traits are correlated at the ends, that person who was, for example, the smartest will not necessarily be the person who is strong. These aren't necessarily correlated, but the person who has the highest mathematical ability will not be the person who has the highest verbal ability—even though the two are correlated.", "At some point, it'll be interesting because parents will have to make that trade-off, even if two things are extraordinarily correlated. It'll be interesting to see how they choose.", "Steve Hsu 1:19:21", "Eventually, you'll have to trust your friendly neighborhood genetic engineer to advise. There's gonna be a lot more modeling going on in the background.", "Dwarkesh Patel 1:19:30", "For the time being, we're stuck with educational attainment as a correlate. That concerns me because educational attainment also probably correlates with other things that somebody might want or they might not want—which are conscientiousness and conformity. If you're Bryan Caplan Caplan, in the case against education , he says that the three things education signals are conscientiousness, conformity, and intelligence. You want intelligence? Most parents probably want conscientiousness rather than conformity, but some might not. Hopefully, we can get the direct intelligence data itself. But, is there some way to segment out the conformity part of that educational attainment data?", "Steve Hsu 1:20:12", "In my dream world,  if I were the CEO of 23andme or something, what would I do (oh, warning, they're actually secretly doing this, but you didn't hear that from me)? I would have little surveys on the site that say, “Can you do a personality survey, and one of the categories will be conscientiousness, and one will be extraversion, right?” Conformity is not a traditional, Big Five thing. But, you can have questions about how conforming someone is. Of course, we know how to do a little math. So, we can diagonalize a matrix of correlated measurements of all these different things. So, I might be able to remove the chunk within EA (Educational attainment), which is due to conformism, remove the chunk, which is due to conscientiousness, and leave behind the chunk, which correlates highly with the separate IQ predictor that I built separately using a different method. All these things are understood solutions, these problems are understood. It's just a data problem.", "I'll tell you an interesting thing. There are 20,000 sibling pairs in the UK Biobank. Three years ago, most people didn’t really understand these polygenic scores, and they were very skeptical, thinking that we weren’t really capturing the real stuff, etc. My group was the first to say: let’s look to see how well we can predict which of the two brothers who experienced the same environment will be taller. How well does my predictor do that? I'm going to predict which of these two brothers has diabetes. Now does the diabetes predictor really do that? You're modeling out all the environmental shit because they grew up in the same family, right? We showed that the predictive power falls off if you're trying to do this trick with unrelated pairs of people versus brothers who grew up in the same house or sisters is minor. It's a small fall-off in predictive power.", "Basically, we are getting the true genetic stuff. One of the interesting things is when you look at EA— if you build a predictor and you ask, “Does it work better or worse when I try to predict which of the two brothers got more education?” It turns out it works much worse. And that’s because part of what that predictor is capturing is some maybe property of the parents who beat them and made them go to school, but both brothers got beaten and had the skill—so that the reduction in quality of EA prediction for brothers is quite a bit higher than if you're just trying to predict G (General Intelligence).", "So we have predictors we built that just predict G. Those have a much smaller reduction in quality when you apply them to brothers than in unrelated pears. I went through that quickly, so people could look up the paper. But the point is that we can see EA is weird and is a very different trait than G from these kinds of results. Again, people who criticize us have no idea how sophisticated the work is. They don't read our papers.", "If they try to read our papers, they can't understand them. But we've done all this stuff. Now, a guy who comes from a physics background or from an AI/ML background can absorb it. But a lot of our critics just can't absorb it. It's literally a G thing. They can't absorb it. So they just want to keep criticizing us forever.", "Dwarkesh Patel 1:24:04", "The funny thing is that I have a much easier time when I read your papers. In the pros part. And the explanation in the organization is…I don't know if it's your physics background, or whatever. But, I noticed it's Scott Aaronson’s papers as well. They're written like essays, as long as you understand the underlying ideas, they're so easy to absorb. Whereas, if I just read a random thing on Bio Archive, it’s like “I don't even know where to get started with this.”", "It is just written so turgidly.", "Steve Hsu 1:24:30", "I'm totally with you. There are multiple reasons for this. One thing is maybe that I'm an outsider. So I'm trying to write it very clearly. Conceptually, maybe the theoretical physicists would write it. But also it's a slightly selected population. Scott has an enormously popular blog, and he writes these huge posts all the time. I have a blog too, so we are a little bit better at expressing ourselves or clarifying ideas than the average scientist who's just trying to get the thing out and publish it in Nature.", "Consumer preferences", "Dwarkesh Patel 1:25:01", "Awesome. Let's talk a little about what consumers actually want. Gwern has this really detailed post about embryo selection. He writes in it, “My belief is that the total uptake will be fairly modest as a fraction of the population,” and he's talking about embryo selection there. “A large fraction of the population expresses hostility towards any new fertility-related technology whatsoever, and people open to the possibility will be deterred by the necessity of advanced family planning and the high financial cost of IVF, and the fact that the IVF process is lengthy and painful.” So, he seems very pessimistic about the possibility that this is something that millions of people are using—what is your reaction to his take here?", "Steve Hsu 1:25:49", "There are two perspectives that you could adopt in looking at this. One is a venture capitalist perspective, where you ask: “How big is this market? What's it worth dominating this market? What valuation should I accept from these pirates at GP?” The other perspective is being worried that humans are all going to engineer themselves to be blond, 6’4, and we're going to be suddenly susceptible to all kinds of diseases—and one single, cold virus will kill all of us. So, there's two different perspectives on what level of penetration will this technology have. From the venture guys' perspective, I will just say this: one out of 10 babies in Denmark is born this way. Would you capture a market that interfaces with one out of 10 families, and that's going to grow, of course. One out of 10 families in all developed countries, maybe including China. Do you have the genome of mom and dad and the kid?  Maybe you can sell them some health services later on? Maybe your relationship with these people is sticky? That's for the venture guys. From the, “Oh, I'm really worried about human evolution!” Or, “When are we going to get another von Neumann?” That's a different question.It may be that it'll never be more than 10 or 20% of the population that's using IVF. Through IVF, embryo selection, and maybe potentially editing someday. In that sense, why worry, there's always going to be this natural reservoir of the wild type that has much more genetic diversity? Maybe this is  the Goldilocks world. But imagine the Goldilocks world where there's plenty of wild-type people, and then there's plenty of people using these advanced technologies, and everybody's happy—including our investors.", "Dwarkesh Patel 1:28:01", "Something tells me that that will not be satisfying for the people who are concerned about the evolutionary diversity or whatever. I have the sense that this whole argument is just a front for a moral reservation about this technology.", "Steve Hsu 1:28:17", "Exactly. It's a front for people who just hate it. But what is Gwern saying? Is he saying that these 10% of babies born in Denmark are already mostly screened for chromosomal abnormalities? If I take that same data, and I can generate this other report, are you really not going to look at that report? Are you gonna say, “Well, one of my one of these kids is going to be super high risk for macular degeneration or something, but I'm already screening them for chromosomal abnormalities?  Is that really going to happen? I don't think so. That 10% of the population that's using IVF is going to look at the report, which can be generated by the cost of running some bits through the AWS server.", "I'm not sure what he means by that. I admire Gwern a lot. But what does he mean by that? Not many people are going to adopt it. Does he mean that the percentage of adoption within IVF families or the fraction of the population that's already doing IVF? Because those are already big numbers. So I don't know what he means.", "Dwarkesh Patel 1:29:28", "One way to think about genetic prediction, given your earlier statement about the Scandinavian countries doing a lot of IVF, is that it’s because of how old people are when they're having babies. A venture capitalist can think of your company as a way to get exposure to demographic collapse, right?", "Steve Hsu 1:29:47", "Yes, that's been mentioned. By the way, it's 3-5% of us. It ain't small. If you go to a kindergarten, there are IVF babies there. Have you seen IVF babies running around in the playground? So I don't know whether their perspective is, is this a big enough market for you to make money in it? Or is this going to change the future of the human species? You can have different perspectives.", "Gwern", "Dwarkesh Patel 1:30:14", "By the way, Gwern is such an interesting character. I've been reading him for a long time, but obviously, his persona is very mysterious. Do you know what is going on here? How did this person get into it? It's a really interesting and detailed report he published in every selection. What is going on here?", "Steve Hsu 1:30:40", "Well, Gwern is a super smart guy! I know a lot of scholars and serious scientists and intellectuals in the academy, and even though I didn't quite agree with his take that you just mentioned–– I mean, it might not be technically wrong, because I'm not sure what he meant by his words. I'm not even sure he would disagree with the quantitative things I just mentioned to you. But, I just want to say some positive things about Gwern because I like to read his stuff.", "In the early days, he was already following much of this stuff about genomic prediction and embryo selection. He's written stuff on GPT-3 and alignment risk. He's written lots and lots of insightful things. He's quite impressive, even if you compare him to the most, famous academic scholars—whether it's Steve Pinker, or somebody who has written a lot of stuff that people read—and as obviously, been thinking deeply about a lot of different things during the course of a very serious life. I think Gwern is super awesome. He's right up there with those guys. So, it's awesome that we live in this internet age where some totally anonymous dude can produce really good thoughts about a wide variety of things. He's not wrong with most of the stuff he writes about embryo selection—it's pretty much right. I have a very high opinion of Gwern.", "Dwarkesh Patel 1:32:13", "It's interesting with people like Gwern—it's almost in the model, you can think of early 20th century or late 19th century, these gentleman scholars, who would just pontificate about a lot of different subjects—I wonder if we're gonna see a return of the sort of generalist thinker. Maybe we've over-indexed on specialists, but now it's now the time for somebody like you! Theoretical physics, bringing all of that computational and mathematical knowledge to genomics—is that the new trend in science, at least at the upper levels?", "Steve Hsu 1:32:47", "I don't think it's a trend. So, in terms of Gwern having a platform, you can tell he's thinking and reading a lot. He's thinking, and then he's writing very insightful stuff, and he has an audience, thanks to the internet, so people can read it. That is an amazing positive trend, which will continue. So, we're in a way, in a golden age for intellectual exchange. Even the conversation that you and I are having is an example of that.", "The thing I'm afraid is not going to happen is that because science is so specialized now (it takes so much money and resources and institutional support within a university or lab or something to get stuff done), it's getting less and less common to find polymathic people who can do things at the frontier, where they make a significant contribution, and it's recognized by the natives in that sub-specialty that's becoming rare r. It was much less rare in the time of Feynman and von Neumann and people like that, just because the science was smaller. Feynman played around with some molecular biology, which was a big thing. He was friends with Francis Crick—who was down in San Diego. So, he would do stuff like that. Now, it's almost impossible. People would tell me, “Steve, why’re you fucking around with this stuff? You're wasting your talent.” So, I don't think the trends are good for that. But for general intellectual exchange, the trend is good.", "Will parents matter?", "Dwarkesh Patel 1:34:35", "Yeah, that's interesting. Going back to IVF, do you think the gains will be greater in any given trait you could think about for parents who are already high in that trait or for parents who are lower in that trait compared to the average population?", "Steve Hsu 1:34:52", "I don't think the base level of mom and dad is a big factor. The big factor is how good are your predictors? And how many embryos are you looking at? Or how good are your editing tools? By the way, I just want to reinforce something I recently learned—it was so impressive that it freaked me out. I thought, “Oh, I'm in this field. So in this industry, I know about it,” but our company was having some conversations with a company that handles egg donation. So it's IVF space. And the egg donors are typically young women 22-23, that could even be college-age women who are paid a sum of money to go through an IVF cycle and just donate the eggs to some billionaire family or whoever wants the eggs. And I was told that 60 to 100 eggs per cycle is not unknown. It's shocking because usually, it's an older woman in her 30s, or 40s, who is going through it, and they're struggling just to get some viable embryos. And then, when you run that same process with a 19-year-old, what do you get? And I was shocked at how high these numbers were. In principle, let's just imagine you're a billionaire oligarch, but very tech savvy. You want to have some, you want to have a large family, and you want to have high-quality kids, maybe very long-lived healthy Kids—you might be selecting the best out of hundreds. There are 100 parallel universes I could live in. I get to peek into each one and then choose. I'm going to step through door number 742 because that's the outcome I like. Not that expensive. But amazing that people can now do this.", "Dwarkesh Patel 1:36:53", "That'll imply that the returns of being young when you have kids will increase because IVF is theoretically supposed to help kids when you have kids when you're old, so it's evening the playing field. The addition of this with the additional embryos for somebody as young as I know, it's we're tilting away in favor of young now—at least of your care about those traits that genetic screening could help you figure out.", "So let me ask you about what you think about some possibilities Gwern talks about in that post. One is that we might turn induced pluripotent stem cells into embryos, and then we'll be able to select across hundreds of embryos without having to harvest eggs.", "Steve Hsu 1:37:41", "Yeah, so eggs are the limiting factor; sperm is cheap. The stem cell technology takes a skin cell and revert it to the pluripotent state so that it can become some other cell that is a skin cell that may be an egg cell—that technology has been more or less mastered for mice and rats. There are a few labs in Japan where they seem to have fully mastered this for multiple generations of rats using induced pluripotent C to make the eggs. So, my guess would be to get it working in humans is not that hard. It's a matter of some years of just slaving away in the lab to get it working. I know of startups that are working on this. Now, there's going to be some trepidation. Initially, why would you do that if you can pay some 19-year-old to be your egg donor?", "For example, some gay couples really want to do it, because maybe they think they can make their partner’s skin into an egg. So there are reasons why you do it. But for many people, they would say that an egg was made through a new and untested process and they’d rather have a normal egg. Well, I don't have that additional risk in this whole thing. So I don't know what will happen there adoption-wise. But I do think that it's just a technological prediction. It will be possible. We're not that far from being able to do it.", "The fact that we can do it in rats means we're not too far. It could have enormous implications for natural selection. If you wanted to be able to select from the best of 1000 embryos, eventually there's no technical barrier. Now, I would say that on roughly the same timescale for the pluripotent production of eggs to mature, they are to be tested so that people are confident in it. Multiplex, very accurate CRISPR-based editing will also arrive on that same timescale. At that point, why are you fooling around this? I just went in and did it to make the changes I needed to make. Over that same timescale,  it's roughly the timescale over which we'll figure out where the real causal is.", "Dwarkesh Patel 1:40:32", "Which is what is nice because, otherwise, you're just changing the tack.", "Steve Hsu 1:40:36", "So, all of this is stuff that I'm fully confident you're going to see. I may not see all of it, but I'll see the technology perfected; I won't necessarily see this impact on society. But you'll probably see.", "Dwarkesh Patel 1:40:54", "I'm hoping it's ready by the time I'm ready to have kids—which is still a while away. Another possibility that Gwern discusses is iterated embryo selection, which you can just keep…I'll let you describe how it works. But what do you think about this possibility?", "Steve Hsu 1:41:08", "Yeah, so you make a bunch of embryos, and then you decide which ones you want. Before you actually make it into a person so that then that person grows up and reproduces, you reproduce just using iterations of embryos. That's also plausible, too. All of these molecular technologies have a chance of working. I don't know anybody who's spending all their time working on that. But yeah, that could work as well.", "Well, I still want to say that I made these jokes about the wokists, and progressives and people who hate us, and I feel it's wrongheaded of them. I consider myself a progressive, I don't consider myself woke. But the goals of having healthy beautiful people who live to be 200 years old—who’s against that? I'm also against inequality in society. Consistent with growth and advancement in science and technology, we should try to have a fairly egalitarian society. I'm for all those things. So if you're a wokeist watching this interview to just hate Steve Hsu or something, think why you're angry at me.", "I'm actually exploring how the world is. Don't you want to know how the world actually is if we have an inequality problem because some people don't do well in school, don't you want to give those families these resources so they can fix it for the next generation? Isn't that the ultimate goal of what you want?", "Dwarkesh Patel 1:42:50", "To steel-man a little bit, someone might say, “Listen, one of the things that prevents a runaway divergence between families over time in the model of Piketty, or something, is a reversion to the mean. I listened to your interview with Gregory Clark, where he says that this is already the case. But to the extent that it doesn't get magnified over time. The reason is that it's hard to maintain a leech in genetics is because of reversion to the mean.", "If you can keep that up, and if there's increasing returns to having good genes, because you can then afford these kinds of treatments, then the possibility of society, instead of  a normal distribution for society, you can have a bimodal distribution that keeps getting further and further apart. That is a potential possibility.", "Steve Hsu 1:43:43", "The Morlocks and the Eloy. That is a fair concern that this could lead to grotesque, huge inequality. That is a risk of the technology in it. A lot of that depends on society, too. I mean, when someone confronts me with that, I will acknowledge it as a legitimate concern. But then I'll say that we live in a country—which is the richest, in some sense,—the richest country in the world, and there are plenty of people who don't even have health care. Did you worry about that inequality?  We have a lot of inequality, there are a lot of things for you to worry about when it comes to inequality.", "Dwarkesh Patel 1:44:26", "Actually, maybe this might not be globally beneficial, both for at least this particular debate. It might be beneficial if the case was when I asked you, “Oh, do people who are lower on some trait have greater potential for increasing that trait than somebody who's higher up on it?” If that was the case, you could just say, \"Listen, the smart people are just going to ask them taught at some point, whereas the dumb people can just catch up over time, right?”", "Steve Hsu 1:44:51", "Well, again, if you're more of a left guy, and you like government intervention, and so this becomes part of the government health care system, and it's free. You say we will allow more aggressive edits or more embryos to be produced.", "For below-average families, there's a very natural way you can redistribute, just  you're going to forcibly take a bunch of money from me when I die that I would rather pass on to my kids, you're going to forcibly take it from me. But you can forcibly give more genomic prediction sources to people who need them. It's easy.", "Wordcels and shape rotators", "Dwarkesh Patel 1:45:25", "Just to shift topics quite a bit here, you had an interesting post on that recent Twitter viral meme about the wordcels and shape rotators. About how the content of a shape rotator combines two separate abilities, math, and spatial ability, that is, yeah, when you do principal component analysis and psychometrics, they turn out to be different but correlated. As a programmer, I'm really curious about which of those is the one that is required more for that particular skill set. Because I'm the type of person that when we're talking about abstractions, data structures, and the flow of a program, I just intuitively think about it. I just think and imagine what it looks like visually. Whereas I know friends who I said, “So clearly programming is visual-spatial ability. And they said that actually, they don't imagine it visually at all that for them, it's much more of just looking through the loop and asking what's going to happen next. So yeah, I'm curious, which of these is a better description of programming?", "Steve Hsu 1:46:33", "Your description captures the whole story that people are very different in how they attack, even though they're attacking the same problem in the way that their brain does it.  That's one of the most fascinating things about this field of psychometrics in psychology is really trying to get into that. One of the things that fascinated me when I was being educated and going through training in theoretical physics and math was looking at how my classmates at Caltech or Richard Feynman, or how somebody approached a problem, which might be totally different than the way I would do it, or the way that we would communicate about the solution once we got it. And there are clear visual people: Feynman was a very visual thinker. Other people are more logical or go verbal, where they're stepping through things. And it might even be, and they hear the arguments as they step through it or something. So everybody's different. And those things are super fascinating. Something that's gone out of fashion now, but was very, in fact, very, in very standard when I was growing up is when I took shop class, I don't know if you had to take shop class in junior high, or high school, but we had to take shop class, which where you go to bend metal, and literally they have machines that would weave I made an ashtray or something out of steel or something. So yes, in that class, which is very spatially loaded,  you could have guys, I had a friend who was you ever had a very high LSAT score and went to Princeton, to study English, that guy could not spatially rotate at all, he was totally lost in figuring out how to, do the bends to make the ashtray or whatever, right.", "So you see that very clearly. And in those old days, when things were more based on when you went shopping class, sometimes we just gave you a standardized test, which was a standardized test of spatial ability. So we're all, my generation is, you don't have to lie to me about all these things. We saw how it works—we saw people take the standardized test for spatial ability, especially in visualization. We saw people try to fucking work the machine, the metal bending machine, and some people just couldn't actually make the thing.", "In the real economy of atoms, grams of steel, kilograms of steel (which is all moved to China now or something), all this stuff is super important. You can't just theorize about, “Okay, then I have this module that does this, and this function will have these types.” Like, well, that's nice and super valuable in this part of the economy (academia), but somebody's got to get this plant working. And it's got to be efficient. And we got to put the machines here so we don't have to carry the ship too far from here. That's very spatially loaded, which used to be part of the American economy and education system. Now it's all gone. But it's real, it's not fake. No, people are not making this up. And psychometricians of the 1950s and 60s would have been, yeah, here's my 10 volume treatise on spatial, measuring spatial visualization ability or something.", "Dwarkesh Patel 1:49:43", "If you read the biography of somebody like Einstein, I mean, he was especially known for being a spatial thinker. He was incredibly visual. Thought experiments like “what does it look like or what does it feel to be moving at this speed” or whatever? Super interesting. In the case of programmers, I'm not sure I got your answer. But what do you think is the more important skill for that particular discipline?", "Steve Hsu 1:50:09", "People are going to do it in different ways. Yeah. I do think that if you compare the category of engineers to the category of software developers, engineers generally, have higher, on average higher spatial ability, and they're using it. Whereas you can be an awesome programmer with zero spatial ability. That's my guess.", "Dwarkesh Patel 1:50:31", "Yeah, I wonder if when you're studying history or something, you notice that some people are really attracted to the military history aspect of it, and seeing how the units move. I wonder if that's because they have a higher spatial ability, and they need to be able to understand how the units are moving, and so on.", "Steve Hsu 1:50:50", "I was gonna say - this is a very weird thing for me to reveal that, sometimes when I'm having trouble falling asleep, I'll be visualizing. Recently, I was thinking about how I would use a ballistic missile to target an aircraft carrier. Or sometimes, if I'm trying to go to sleep, I'll just be visualizing, “Okay, when you're at about an altitude of five kilometers, what can your radar see? And how much resolution do you need? And then how much time do you have to hit the ship?” And, I'll be thinking about stuff for relaxation. This is highly visual but also quantitative because you have to make some estimates. But, that would be typical of a lot of physicists. Because if we start talking about it, we'd be like, “Oh, yeah, right. And you've only got about a point of order a 10th of a second to do this. And you've bet you're thinking of doing it in milliseconds, so we're okay.” And then anyway, that type of thinking is very prevalent amongst certain types of people.", "Dwarkesh Patel 1:51:52", "Right. Now, I'm curious why it's the case that people from physics often transition to finance.  I know that was something you're considering at one point—is the underlying knowledge of mathematics just the same? Or is it just such a credible signal of mathematical ability? And gee, do quant firms want to hire physics students?", "Steve Hsu 1:52:20", "The answer is a little bit complicated.  All the factors you mentioned are true. But one of the things was that in the early phase, in the 80s, and 90s, when a lot of people in my generation went into finance, a lot of them went to trade derivatives. If you look at options, pricing theory looks a lot like physics—it's the mathematics of random walks. So, there was a very tight, not tight connection. But the concepts were strongly related to what was necessary. Now, he brought it out a little bit more to say, “Okay, but nowadays, if you go to really big quant funds, and they're looking for a signal and analyzing tons of data,” they're not trading derivatives, just actual names of stocks or whatever.", "There's more load on people with machine learning and CS backgrounds now. The physicists who go in there have to use that subset of their skills, but the funds would just as soon hire a CS or ML-type guy to do it. So it's a little bit of a complicated answer.", "Dwarkesh Patel 1:53:28", "Yeah, that's super interesting. Because I mean, back in the 90s and early 2000s, I read that book about the fall of long-term capital management.", "Steve Hsu 1:53:42", "Actually, there are two books; there's one called When Genius Fails. And then there are actually three books, at least three books, but they're all good.", "Dwarkesh Patel 1:53:50", "Then you just hear about the people who created options and pricing theory there, about applying calculus to random walks and stuff—stuff I don't understand. But just super cool that you have these mathematicians that are just coming in and applying these ideas to finance.", "Steve Hsu 1:54:06", "They will. I want to say one thing about physicists, which is a little different from mathematicians, and computer science guys. Maybe not so different from data science guys, but definitely different from most computer science guys and most math guys because we spend a lot of time looking at bad noisy data. So even if you're a theorist, you had to go through these lab courses. For me, those lab courses were among the hardest, the worst! You’d have to go in and build some electronic equipment to take some data, and it could be extremely noisy.  You're measuring muon cosmic rays coming through the roof and hitting your detector and then, you have to analyze the data.", "And, when you're building this thing, you screw it up. You get data that makes no sense, or you say something about the amplifier wasn't right, or you're used to seeing data that sucks. You have this theoretical view of what should be happening, maybe you're visualizing it as the muon comes in, and it does this and, and interpolating between the theoretical view of what should be happening with the particles and the systems, and what the actual data looks like. Saying, “Oh, shit, we didn't do this, or we didn't shield this part.", "So that's why we're getting that.” That's something physicists are very used to doing. Mathematicians are often shitty at it—they just accept, “Oh, I just accept this is the data. Now I will now reason with this data.” The same could be true for computer science people. But you need someone who's actually had to deal with shitty data and try to connect it to a very elegant mathematical model. That's something physicists are uniquely used to.", "Dwarkesh Patel 1:55:45", "That's also true of CS people, which is that in debugging, there are many potential problems that could happen. Obviously, one of them is that you wrote the code wrong. But often you get to the actual implementation, there are so many layers of abstraction beneath you and above the actual hardware that you have to figure out like, “Why is the correspondence between this idea I had and the actual program output not the same?”", "Steve Hsu 1:56:14", "That's fair. Because when you debug your code, there are many different ways it could have failed. You have to, in a sense, step back and model , “Oh, maybe this module is feeding me something wrong. And that's what's causing the problem, or this other layer.” So that is very analogous to when we have to deal with a physical experiment in the lab. The thing with physics, though, is that we were really, really geared towards getting toward the underlying reality.", "Like if it’s really late at night, and my lab partner and I just want to get out and go to sleep, we can't tell ourselves that things are okay. “We didn't actually screw up the shielding on that.” “It's okay, and we'll just bring the data home and look at it.” Now we have to actually decide, do we have to spend three more hours ripping this thing apart and re-shielding it?", "Or we have to get to the real underlying reality, and we can't fake it. We can't just pretend that this admission scheme will work perfectly. We can't lie to ourselves about it. That's true for coders, too. But, anyway, it's very different from social scientists and stuff where they can just decide I don't like that reality, I'll just make up this model for how society behaves. And then I'm done. We can't do that.", "Bezos and brilliant physicists", "Dwarkesh Patel 1:57:29", "Yeah. So given the theoretical physicists' skill set, as you just mentioned, is it potentially the case? I mean, obviously, the common criticism of  physics as a community is that they're absorbing too much talent—that three or four standard deviations above average intelligence people are working on a field that, in popular convention, at least seems, isn't making as much progress.", "Dwarkesh: More people in physics are making the step you made, which is learning all these skills in theoretical physics, and moving out of it. Maybe finance is one way in which we're getting these pro-social benefits from the skills that physics builds. Stepping into fields of genomics or things like that. Should more physicists be just using their skills elsewhere?", "Steve Hsu 1:58:16", "Number one, the attrition rate is super high. So even if you take this set of kids that are plus three or four standard deviations in ability, and they enter a physics major at Princeton, or MIT or something, the fraction of them that actually end up as practicing physicists is pretty small. So they're bleeding off at all points.", "Bezos started in physics and, toward the end of his Princeton career, switched to computer science. Elon was in graduate school in physics at Applied Physics or physics at Stanford, and he bled out. So it's already the case that for me, one way to say it is that education is phenomenal; you should try to get that education; it'll pay off for you later. You'll probably bleed out, and you'll trade away and do something else. Now, if you say, “Okay, of the thousands of theoretical physicists, or physicists who do fundamental research, including the experimentalists, around the world, there are tens of thousands. Maybe some of those guys should also be doing more cancer research or financial modeling.”", "Maybe we should tear them off, you should remove even more of those guys and have them do more so that there's still some argument in favor of that. But, we do need a core of people that are trying to do these tough fundamental answers to these hard fundamental questions about nature.", "Dwarkesh Patel 1:59:38", "The Bezos example is fascinating. For the people in the audience who might not know: His original plan was to become a theoretical physicist. He didn't pursue it because he noticed one of his friends was just so much obviously more gifted than him at that skill. The story they tell us is about Bezos working on a problem for many hours and making no progress before asking a friend who solved the entire problem in his mind. Basically, Bezos realized that it (theoretical physics) was not his competitive advantage.", "Steve Hsu 2:00:20", "I gotta add one anecdote. So, I know many of the guys in Bezos’ eating club, we also tick because we're very similar in vintage, a lot of them were late. So I know all these guys, I know all these Bezos stories. The funny thing is the guy you're talking about (guy who solved the physics problem in his mind), whose name I believe is Asana, is a Sri Lankan guy. He went to grad school at Caltech.", "At one point, we met up at Caltech when I was visiting, and he was in grad school. I met this guy and talked to him about Bezos. We had other friends in common so we weren't focused on Bezos, but yeah, I met this guy in that anecdote that you just mentioned.", "Dwarkesh Patel 2:01:19", "That's good to know. Because that is relevant to my question. My friend and I have continually debated the importance of intelligence at the peak of entrepreneurial or engineering ability. And he uses that anecdote to say, “Oh, look, Bezos was not smart enough to be a theoretical physicist. So, therefore, intelligence is not that important. Beyond a confident, not incredibly high point. And afterward, Bezos was creative or blah, blah, blah. He was hard working.”", "I don't know; my perception of the story was, “Okay, he's not intelligent enough to be a theoretical physicist. He's below five standard deviations or four standard deviations above the mean; clearly, just studying physics at Princeton is itself a testament that it's probably at least  two or three standard deviations about, well, at least three.” What was the perception of those people you talked to at Princeton, about Jeff Bezos? Is it that he just was super hardworking or creative? Or is intelligence super high, just not high enough to be a theoretical physicist?", "Steve Hsu 2:02:30", "Yeah, this is a great topic that many people are interested in. And even among my close friends. In school, we all talk about this stuff. You have to distinguish between the very abstract intelligence—which is helpful in physics and math, or maybe computer science, versus a more generalist intelligence. Those are correlated, but they're not the same thing. So, I would say Bezos is probably very off-scale for ability to work hard, take risks, function under pressure, focus, and generalist intelligence.", "Since these traits are at least somewhat uncorrelated, if you're top 10%, in each of these five, simultaneously, already pretty rare individual because plenty of the physics guys who did better than Bezos in the physics classes could not lead a company, they could not put together a presentation that would convince a venture capitalist to invest. So it's a different skill set that we're talking about. The idea that there's a unit dimensional measure of cognitive abilities is not that useful. I'm probably guilty. People say, \"Wait, Steve Hsu just said that, but he's the guy most responsible for promulgating this perspective. But it's only because it's the simplest thing to talk about. If you compress it to one general factor, it's just easier to talk about—it doesn't mean that the other components are not meaningful. We just got done talking about verbal vs. spatial vs. some more generalized mathematical talent. So obviously, it's a high dimensional, not that high dimensional, it's at least a multi-dimensional space of abilities that we're talking about. Now, the point about Bezos, which is nontrivial, though, which is directly relevant to the life experiences of  physicists who leave physics and do other stuff, is that very often in an engineering setting, or a startup setting, people say “You don't know shit about that! What are you talking about?”", "But the reality is people who do perform on a technical problem that the startup has to solve—in Bezos his case, it was often optimization of some supply chain thing or optimization of some sorting process or reducing the error rate. The people in the company uniformly say that when Bezos comes in the room, he will give us good feedback on the solution to this ops problem that could be better than what we said, or at least he finds the problems with what we said, or if we did an excellent job on it, he gets it right away, which is some executives might not get it right away. So my point is that people with these super high raw G abilities generally can be helpful in these technological environments. Even if they don't have a lot of background, they can still come in and be helpful. And sometimes they can solve problems that the people who are well trained in the area are having trouble with. That is fair. But it's not fair to say there's just some unit dimensional measure of intelligence. This guy always beats this guy, this guy always beats this guys–it doesn't work that way. But it just, on some of these scales, these guides are generally more helpful than the critics would give them credit for.", "Dwarkesh Patel 2:06:01", "Your life story is an example of that. But, I had another experience of this, which was I recently interviewed Sam Bankman-Fried, who is the CEO of FTX, on my podcast. For all interviews, I tried to come up with questions that the guest probably had not heard before. I tried really hard to come up with questions that he might not have heard before that might have been really interesting and challenging to answer. I listen to all the interviews he’d ever done and then prayed for a long time.  But yeah, if you look and listen to that interview, you'll notice how he answers these questions. It sounds like he was just talking to somebody about that. No matter how creative a question I could try to throw at him, it's just his ability to grok. All the context is explained in a way that an audience would understand. It was exceptional.", "Steve Hsu 2:07:03", "Being a super successful founder selects for the ability to figure out how to effectively communicate with a person according to their background. Whether it’s an investor or someone from a tech-heavy venture fund, you have to think about how they think about a problem. Founders are selected for being excellent multiband communicators across different cultures and stuff like this. It's not surprising to me that this guy would have those capabilities.", "Dwarkesh Patel 2:07:54", "Okay, so you are a practitioner of jiu-jitsu and other martial arts. One notable aspect of those disciplines is that you can punch above your weight, right? Royce, Gracie, and the UFC are great examples of this. Is that possible with a trait like intelligence? Is it possible that we have techniques or other ways of compensating for your analogy? Or just natural weight? What is jujitsu for fighting?", "Steve Hsu 2:08:37", "Great question. So, in a way, jiu-jitsu is applied physics because you're thinking about two arms and questions like “if it’s easier for you to punch and knock me out before I can close a distance and force you to grapple with me?” I do Jiu Jitsu so much because it's very rational. It's a scientific analysis of what two humans can do to each other. It's a technology in terms of what technologies people can use, amplifying their brain power. We're surrounded by it.", "So, here's an exciting thing. Suppose you and your girlfriend are trying to get the answer to some question. And you're both using Google. There's an enormous variance in who immediately puts the search term in that gets the right at the top hit is the direct answer to your question, and that's very cheap, very G-loaded. But if you get good at using particular technologies or specific information channels, you can't amplify your ability beyond just what the raw capability is. So, my answer is that there are tools, but nobody uses them. There’s no dojo where you can go where they start teaching you immediately, “This, do this, this, this, and this,” and then go the guy who's bigger than you, take him down and choke them out. There isn't anything analogous to this for cognition. But I can see how people can amplify their capabilities in different ways, either more or less effectively.", "Elite education", "Dwarkesh Patel 2:10:23", "Now, you had a blog post a long time ago about elite education. And in it, you talked about how even if you control for the SAT, the top jobs people hear about from elite schools are overrepresented. So I'm curious, do you think this is because of a selection effect, Harvard selecting based on personality? That selects for certain high achievers? Or is that something about being at Harvard that makes you a high achiever, but what is going on?", "Steve Hsu 2:10:57", "I researched this question pretty aggressively when I was first when I first became an entrepreneur. Because I was like, “Well, we can raise this much money, we can get these meetings with these funds. But how the hell did this guy raise $100 million for the stupid idea, what the hell?” And then I would start looking into this guy's background, I'd see he went to Harvard. So I got intensely interested in super outlier guys—how did this guy get a job writing for The Simpsons? What would I write? This other guy writes for The Simpsons, but he went to Ohio State. So he's ten layers of social networking away from The Simpsons, but the Harvard guys are not—his buddies at the Crimson. There are multiple factors,why? Take two kids. They both scored 1580 on the SATs. One goes to Ohio State on the Ohio regents scholarship for engineering. And the other one goes to Harvard, even though the engineering school there sucks. What's the difference in their lives? Maybe the guy went to Harvard because he understands how the world works a little better than the other dude. When he gets to Harvard, he will meet many super ambitious, aggressive, smart kids. Some of those kids are children of super-wealthy people.", "Some of them are children of super influential people. And all of them are trying to get ahead. They're super ambitious—they know what it means to be a managing director at Goldman or become a partner at McKinsey—they know what those things are. If you didn't know them because you grew up in Ohio, you learn them immediately. You just get a better view of what's possible in the elite sector of society from that exposure. So there are multiple factors in networking, some of these Harvard kids come from super wealthy families, some of them their dad used to play golf, the head of the fund that he's trying to get a meeting with, right? So it's all those things together. I'm not saying it's good, but I understand how the world works. I understand why this other dude can raise so much more money than I can raise or get meetings that I can get. Right. So that's how I was initially interested in this question.", "Dwarkesh Patel 2:13:26", "Why are China and India underrepresented massively in Nobel Prizes per capita? Even in computer science, when I would try to find papers on specific subjects, it was rare that they would come from China or something like that. And when they did, it was just that the quality was much worse than the ones I could find from a professor in the US. I'm curious why you think that is. It can't just be the population or anything  that because when those researchers come to the US, they're producing stellar research; what is happening here? Why is this effect natural? And if so, what is the explanation?", "Steve Hsu 2:14:11", "Well, the easy answer to that question is that many of the things you mentioned are lagging indicators. So they reflect how the West was developed and had a solid scientific and engineering tradition, while China and India were desperately poor and didn't have any of that. In my own life, in the last 20 years when I visited universities in China, South Korea, and Taiwan. They had plenty of talented undergraduates, but the best of undergraduates always wanted to come to the US for their Ph.D. They went from that to some of the best undergraduates deciding to stay there–– so the researchers who are professors there are becoming world-class. But that happened only in my adult lifetime. So, you can see it's a heavily lagging indicator. Interestingly, in my physics career, I knew several…. The Indian term was called toppers. So the people who take the exams rank every kid in the country who takes the exam, right? So, I knew guys who were number one, number two, or number five on the IIT entrance exam, but they ended up going to Caltech, or they ended up going to MIT. So, there's this massive brain drain. It's a super powerful, elite brain drain . MIT recently has just been recruiting. If you win one of these Olympians, you get a gold medal, and the informatics Olympiad or the math at MIT will try to get you to come to MIT. So, this enormous sucking of talent into the United States is excellent. But that's why when you go to IIT, even though the undergraduates are super bright, the professors are… (no offense to my colleagues who teach there), but if those professors got a bid from UCLA, the professors there would generally move to UCLA. So that's the difference. But that's gradually evening out.", "Dwarkesh Patel 2:16:10", "Are there any downsides to the fact that we can pay researchers or postdocs in the US less because we're partially paying foreign workers in visas? Is that just a market arbitrage that has positive externalities for the economy? Or is there some downside to the fact that native it's not competitive for native-born workers?", "Steve Hsu 2:16:34", "Suitable for the US, overall, on average, bad for developing countries because you're stealing their talent. It's terrible for native-born Americans who have to compete against the best brains from all over the world; so much harder for an American kid, too, to get the job he deserves at these elite levels strongly impacted by immigration. So, you have winners and losers. Whether there's a long-term problem for America... Some guys are super obsessed who comment on my blog now and then studied where all the IMO, international math olympiad winners were going, where are they what,and they claim they're seeing this huge drop off in kids who grew up in America who are not first children of immigrants, but have instead simply been here a while.", "They just never win these competitions. So ultimately, you might be  discouraging the native talent pool by just letting the door open and bringing in all these super talented people from outside. So there could be some second-order effects that aren’t so sound.", "Dwarkesh Patel 2:17:49", "Although it's interesting when you look at an industry like tech, there's a similar aspect of foreign competition being allowed in because of H1B visas, the compensation has remained competitive. Is it just because Tech is in super inelastic demand for talent?", "Steve Hsu 2:18:10", "Yeah. This is a little more focused on software development and ML and stuff, but if you look at more traditional engineering fields, which aren't as hot (ex. engineers at Boeing, etc.) those guys would probably say that their salaries are heavily suppressed by the existence of hungry engineers from India and China. So in the software/tech industry, because it's been so hot for so long, it doesn't feel this effect so much. So yeah it's got plenty of elasticity.", "Dwarkesh Patel 2:18:42", "Awesome. Okay, Steve, this was so much fun. I really enjoyed this conversation. Loved preparing for it, talking to you, and I really got to learn a lot more about this subject that I had been interested in for a long time. Is there anything else we should touch upon on any of the subjects we've covered today or failed to cover today?", "Steve Hsu 2:19:00", "Well, we've covered so much, and I just really think you're a great interviewer. Your questions are always getting at a key thing that many people are confused about. There's a lot of depth there. So, I thought it was great. There's plenty more we could talk about—we should just get together and do this some other time. But I don't think you left anything out.", "Dwarkesh Patel 2:19:21", "If you're willing, I would love to do version two of this, where we talk about your physics work and the other subjects we might have missed this time around.", "Steve Hsu 2:19:29", "Yeah, we get to talk about many worlds and quantum computing. Yes.", "Dwarkesh Patel 2:19:33", "Haha this will be fun. In the meantime, do you want to give people your website, your podcast, and your Twitter, so they know where to find you?", "Steve Hsu 2:19:45", "My last name is Hsu. That's the hardest thing for people because it's anti-phonetic. Just search for me. I'm on Twitter. I have a blog and a podcast called Manifold, which doesn't have a huge listenership, but I tried to keep the quality level high and get best-in-class guests. We’re willing to go into some depth, so it's got a very niche audience. But if you'd like the conversation that we just had here, you'll probably like Manifold. So you can look for that in all the usual places you get your podcasts and YouTube.", "Dwarkesh Patel 2:20:23", "The podcast is similar. It's exactly what I'm trying to do here. You just know so much about so many different fields. It's so fun to listen to where you're having expert-level conversations and everything from social science to foreign policy. So yeah, Manifold podcast is a place to check out.", "Steve Hsu 2:20:54", "Yeah, my pleasure." ]
[ "https://en.m.wikipedia.org/wiki/Palaestra", "https://www.lifeview.com/", "https://medlineplus.gov/genetics/understanding/genomicresearch/snp/", "https://towardsdatascience.com/l1-and-l2-regularization-explained-874c3b03f668?gi=cb1cd49c4ea4", "https://en.wikipedia.org/wiki/Fisher%27s_fundamental_theorem_of_natural_selection", "https://www.medrxiv.org/content/10.1101/2022.06.15.22276102v1", "https://press.princeton.edu/books/hardcover/9780691174655/the-case-against-education", "https://www.23andme.com/en-int/", "https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/age-related-macular-degeneration#:~:text=Age%2Drelated%20macular%20degeneration%20(AMD)%20is%20an%20eye%20disease,the%20back%20of%20the%20eye).", "https://www.gwern.net/", "https://infoproc.blogspot.com/2022/02/annals-of-psychometry-wordcels-and.html", "https://roonscape.substack.com/p/a-song-of-shapes-and-words", "https://mettl.com/glossary/p/psychometric-psychology/", "https://www.manifold1.com/" ]
https://www.dwarkesh.com/p/tony-blair
Tony Blair - Life of a PM, The Deep State, Lee Kuan Yew, & AI's 1914 Moment
[ "00:00:00 – A prime minister’s constraints", "Dwarkesh Patel 00:00:00", "Today I have the pleasure of speaking with Tony Blair who was Prime Minister of the UK from 1997 to 2007 and now leads the Tony Blair Institute , which advises dozens of governments on improving governance, reform, and adding technology.", "For my first question, I want to go back to your time in office when you first got in. You had these large majorities. What are the constraints on a prime minister, despite the fact that they have these large majorities? Was it the other members of your party fighting against you? Was it the deep state ? What part was constraining you at that point?", "Tony Blair 00:00:35", "The biggest constraint is that politics, in particular political leadership, is probably the only walk of life in which someone is put into an immensely powerful and important position with absolutely zero qualifications or experience. I'd never had a ministerial appointment before. My one and only one was being prime minister. It’s great if you want to start at the top, but it's that that’s most difficult.", "When you're running for office, you have to be the great persuader. The moment you get into office, you really have to be the great chief executive. Those two skill sets are completely different. A lot of political leaders fail because they've failed to make the transition.", "Those executive skills — focus, prioritization, good policy, building the right team of people who can actually help you govern — are crucial. The moment you become the government, the saying becomes less important than the doing. Whereas when you're in opposition or running for office, it's all about saying.", "All of these things mean that it's a much more difficult, much more focused role. Suddenly you're thrust into this completely new environment when you come in. That's what makes it the hardest thing. Of course, you do have a situation with the system as a system. It's not that there's this great deep state theory. We can talk about that but that's not the problem with government.", "The problem with government is not that it's a conspiracy, either left-wing or right-wing. It's a conspiracy for inertia. The thing about government systems is that they always think, \"we're permanent, you've come in as the elected politician, you're temporary. We know how to do this and if you only just let us alone, we would carry on managing the status quo in the right way.\" That's the toughest thing, making that transition.", "Dwarkesh Patel 00:02:48", "That's really interesting. Let’s take you back with everything you knew, let's say in 2007, but you have the majorities and the popularity you had in 1997. Is it that you know now what is a waste of time? Would you say, \"I'm not going to do these PMQs , they're total theatrics\" or \"I'm not going to meet the queen\"? Is it the time? Is it that you're going to go against the bureaucracy and say, \"I think you're wrong about your inertia\"? What fundamentally changes?", "Tony Blair 00:03:20", "It wouldn't be that you wouldn't do Prime Minister's Questions, because Parliament will insist on that. You certainly wouldn't want to offend the Queen, who was the monarch in my time. But you're right. You would have a much clearer idea of how to give direction to the bureaucracy and how to bring in outside skilled people who can help you deliver change.", "I always split my premiership into the first five years, which in some ways were the easiest. We were doing things that were important, like a minimum wage . We did big devolution . We did the Good Friday peace agreement in Northern Ireland.", "It was only really in the second half of my premiership that we started to reform healthcare, education, and criminal justice. That's when your skill set as a chief executive really comes into play.", "00:04:12 – CEOs vs. politicians", "Dwarkesh Patel 00:04:12", "Many people have a preconception that if you could get a successful CEO or businessperson into office, these executive skills would transfer over pretty well into becoming a head of state. Is that true? If not, what is it that they'd be lacking that you need to be an electable leader?", "Tony Blair 00:04:28", "This is really interesting and I think a lot about this. The truth is those skills would transfer to being a political leader. They're not the only skills you need because you still have to be a political leader. Therefore, you've got to know how to manage your party. You've got to know how to frame certain things.", "As a CEO of a company, you're the person in charge. You can more or less lay down the law. Politics is more complicated than that. When highly skilled CEOs come into politics, oftentimes they don't succeed. That's not because their executive skill set is the problem. It's because they haven't developed a political skill set.", "Dwarkesh Patel 00:05:11", "I was reading your memoir . There were a couple of times where you said that you realized later on that you had more leverage.  In retrospect, you were able to do things that you didn't do at the time.", "Is that one of the things that would change if you went back to 1997? Would you realize, \"I actually can fire this entire team if I don't think they're doing a good job. I actually can cancel my meetings with ambassadors.\" Did you have more leverage than you realized at the time?", "Tony Blair 00:05:39", "In terms of running the system, yes. Definitely with the benefit of experience, I would have given much clearer directions. I would have moved people much faster. Again, in politics this is where it's different from running a company. In a company, by and large, you can put the people in the places you want them. Except in exceptional circumstances, this is true. If you're running a company, you've got no one in the senior management who you don't want to be in the senior management.", "Politics isn't like that because you've got political elements you may have to pacify. There may be people that you don’t particularly want because they’re not particularly good at being ministers. However, they may be very good at managing your party or your government in order to get things through.", "What I learned over time is that the important thing is to put quality people into the core positions that really matter to you. Don't fall short on quality. It's one of the really interesting things. Being a political leader is the same as leading a company, or a community center, or a football team. It all comes to the same thing. It's such an obvious thing to say that it's all about the people. But it's all about the people.", "You get really good, strong, determined people who share your vision and are prepared to get behind and really push. You actually don't need that many of them to change your country. But they do need to be there.", "Dwarkesh Patel 00:07:23", "That's really interesting. This isn’t particular to the UK but even in Western governments, people are often frustrated by this. They elect somebody they think is a change maker, but things don't necessarily change that much. The system feels an inertia. If this is the case, is it because they didn't have the right team around them?", "If you think of Obama or Trump or Biden, at the very top I assume they can recruit top people. To the extent that they weren't able to exact the change they wanted, it wasn’t because they didn't get the right chief of staff, right? They can probably get the right chief of staff.", "Tony Blair 00:07:58", "Absolutely. They can get really good people. One of the things you actually learn about being at the top of a government is that if you pick up the phone and say, \"I need you to come and help,\" to someone, pretty much they will come.", "That’s not the problem. I say this often to the leaders that I work with. We work in roughly 40 different countries in the world today and that's only growing. We have teams of people that go and live and work alongside the president's team. I talk and exchange views with the president or the prime minister. Very often, the two problems are these.", "Number one, people confuse ambitions with policies. Often I will speak to a leader and say, \"so what are your policies?\" They'll give me a list of things. I say to them, \"those aren't really policies, they're just ambitions.\" Ambitions in politics are very easy to have, because they're just general expressions of good intention.", "The problem comes with the second challenge. Though politics at one level is very crude — you're shaking hands, kissing babies, making speeches, devising slogans, attacking your opponents — when it comes to policy it's a really intellectual business. If they've only got ambitions, then they haven't really undertaken the intellectual exercise to turn those into policies. Policies are hard. It's hard to work out what the right policy is.", "Take this AI revolution. We're living through a period of massive change. This is for sure the biggest technological change since the Industrial Revolution. This is really difficult work, for political leaders today to understand that, to work out what the right policy is, to access the opportunities, mitigate the risks, and regulate it.", "What happens a lot of the time is that people are elected on the basis that they are change makers because they've articulated a general vision for change. But when you then come to, \"okay, what does that really mean in specific terms?\" That's where the hard work hasn't been done. If you don't do that hard work and really dig deep, then what you end up with are just ambitions. They remain ambitions.", "00:10:31 – COVID, AI, & How Gov. Deals with Crisis", "Dwarkesh Patel 00:10:31", "Now that you've brought up AI, I want to ask about this. I do a lot of episodes on AI. To the people who are in the industry it seems plausible, though potentially unlikely, that in the next few years you could have a huge July 1914 -type moment, but for AI. There's a big crisis. Something major has happened in terms of misuse or a warning shot.", "How well would today’s governments deal with this, given how they function either in the West or how you see them function with the other leaders you advise? They get this news about some AI that's escaped or some bioweapon that's been made because of AI. Would it immediately kick off a race dynamic from the West to China? Would they have the technical competence to deal with this? How would that shape out?", "Tony Blair 00:11:17", "Right now, definitely not. One of the things that we do as an institute, and one of the reasons I'm here in Silicon Valley, is to try and bridge the gap between what I call the change makers and the policy makers. A lot of the time the policy makers just fear the change makers. The change makers don't really want anything to do with the policy makers because they just think they get in the way. You don't have a dialogue.", "Let’s say what you're describing were to happen. By the way it's possible at some point it does happen. If it happened right now, political leaders wouldn't know where to begin in solving that problem, or what it might mean.", "This is what I keep saying to the political leaders I'm talking to today. We're likely to have a c hange of government in the UK this year . I am constantly saying to my own party, the Labour Party , which will probably win this election, \"you've got to focus on this technology revolution.\" It's not an afterthought. It's the single biggest thing that's happening in the world today. It’s of a real world nature and is going to change everything.", "Leave aside all the geopolitics and the conflicts and war and America, China, all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other. If you don't get across it, then when there is a crisis like the one you're positing, you're going to find that you have no idea how to deal with it.", "Dwarkesh Patel 00:12:54", "COVID is maybe a good case study to analyze how these systems function. The Tony Blair Institute made what were very sensible recommendations to governments, many of which went unheeded. What I thought was especially alarming about COVID was not only that governments made mistakes with vaccine rollout and testing and so forth, but that these mistakes were so correlated across major governments. No Western government, maybe no government, got COVID right.", "What is the fundamental source of that correlation? In the way that governments are bad at dealing with crises, they seem in some correlated way bad with crises. Is it because the same people are running these governments? Is it because the apparatus is the same? Why is that?", "Tony Blair 00:13:40", "First of all, to be fair to people who were in government at the time of COVID, it was a difficult thing to deal with. I always said the problem with COVID was that it was plainly more serious than your average flu, but it wasn't the bubonic plague.", "To begin with, there was one very difficult question. To what degree do you try and shut the place down in order to get rid of the disease? You had various approaches to that, but that's one very difficult question. Most governments try to strike a middle course, to do restrictions but then ease them up over time.", "Then you have the issue of vaccination. Normally with drugs it takes you years to trial a drug. You had to accelerate all of that. That was done to be fair. But then you have to distribute it. That is also a major challenge.", "Part of the problem was that governments weren't sure where to go for advice. They had scientific advice and medical advice. They had to balance that with the needs of their economy and the anxiety a lot of people had. They felt that when you were having a large shutdown, they were going to be hugely disadvantaged as indeed people were. There's an argument for saying for the developing world that lockdowns probably did more harm than good.", "Dwarkesh Patel 00:15:15", "It sounds like you're saying that they made the trade-off you're describing in the wrong way. They could have gone heavier on the testing and vaccination rollout so that the lockdowns could have been avoided and fundamentally more people's lives could have been saved.", "In some fundamental sense, the pandemic is a simpler problem to deal with than an AI crisis in a technical sense. Yes, you have to fast-track the vaccines, but it's a thing we've dealt with before. There are vaccines and you roll them out. If the government can't get that right, how worried should we be about their ability to deal with AI risk?", "Should we then just be fundamentally averse to a government-led answer to the solution? Should we hope the private sector can solve this because the government was so bad at COVID?", "Tony Blair 00:15:57", "What the private sector can do is to input into the public sector. In COVID, the countries that handed vaccine production to the private sector and said, \"run with it\" are the countries that did best, frankly.", "Especially with something as technically complex as AI, you are going to rely on the private sector for the facts of what is happening and to establish the options about what you do. But in the end, the government or the public sector will have to decide which option to take.", "The thing that makes governing difficult is that when you decide, you divide. The moment you take a decision on a public policy question, there are always two ways you can go. With COVID, you could have decided to do what Sweden did and let the disease run pretty much. You could have decided to do what China did and lock down completely. What happened with China was that once we got the Omicron variant , it became obvious you weren't going to be able to keep COVID out. They didn't have the facility or the agility to go and change policy.", "These policy questions are hard. It's very easy with hindsight to say, \"yeah, you should have done this, should have done that.\" If this happened in relation to AI, you would absolutely depend on the people who were developing AI to be able to know what decision you should take, not necessarily how you decide it, but what the choice is.", "Dwarkesh Patel 00:17:44", "Before COVID we had these health bureaucracies and we hoped they functioned well. It turns out many of them didn't. We probably have equivalents in terms of AI. We have government departments that deal with technology and commerce and so forth. What if they're as potentially broken as we found out that much of our health bureaucracy was?", "If you were prime minister now or for the next government, what would you do other than \"we're going to have a task force and we're going to make sure we're making good decisions\"? I'm sure people were trying to make sure the CDC was functional and it wasn't. Would you just fire everybody there and make a new department? How would you go about making sure it's really ready for the AI crisis?", "Tony Blair 00:18:31", "You've got to distinguish between two separate things. One is making the system have the skills and the sensitivity to know the different contours of the crisis that you've got. The other is being able to produce potential solutions for what you do.", "We needed to rely upon the scientific community to say, \"this is how we think the disease is going to run.\" We relied on the private sectors to say, \"this is how we could develop vaccines.\" We had to rely on different agencies in order to say, \"I think you could concertina the trial period to get the drugs.\"", "In the end, the decision of whether you lock down or you don't lock down, you can't really leave it to those people because that's not their expertise. If you look at it at the moment, some people want to regulate AI now on the basis that it's going to cause enormous dangers and problems.", "Because it's general purpose technology, there are real risks and problems associated with it. Europe is already moving in quite an adverse regulatory way. On the other hand, there will be people who say, \"if you do that, you're going to stifle innovation, and we're going to lose the opportunities that come with this new technology.\"", "Balancing those two things, that's what politics is about. You need the experts, the people who know what they're talking about, to tell you \"this is how AI is going to be. This is what it can do. This is what it can't do. We are explaining the technicality to you.\" But ultimately, what your policy is, you've got to decide that. By the way, whichever way you decide it, someone's going to attack you for it.", "00:21:24 – Learning from Lee Kuan Yew", "Dwarkesh Patel 00:21:24", "Let's go back to the topics discussed with foreign leaders at TBI. Take Lee Kuan Yew and his position in the 1960s and the Singapore he inherited. If you were advising Lee Kuan Yew with the advice you would likely give to a developing country now, would Singapore have been even more successful than it ended up? Would it have been less successful? What would the effect of your advice now have been on Singapore in the 1960s?", "Tony Blair 00:21:51", "With Lee Kuan Yew in Singapore it's the wrong way around. I learned so much from him. I first went to see him in Singapore back in the 1990s when I was leader of the Labour Party. The Labour Party had been really critical of Singapore. The first thing he said to me when I came into the room was, \"why are you seeing me? Your party's always hated me.\" I said, \"I want to see you because I've watched what you do in government and I want to learn from it.\"", "I don't think there's anything I could have told Lee Kuan Yew. He's a fascinating leader. This is the interesting thing about government. Don’t look upon government as a branch of politics. Look upon it as its own discipline, a professional discipline. You can learn lessons of what's worked and what doesn't work.", "The fascinating thing about Lee Kuan Yew is that right at the beginning he took three important decisions for Singapore. Each one of them now seems obvious. Each one at the time was deeply contested.", "Number one, he said, \" everyone's going to speak English in Singapore .\" There were lots of people who said to him at the time, \"no, we've been thrown out of Malaysia effectively. We're now a fledgling country, a city-state country. We need to have our own local language. We need to be true to our roots and everything.\" He said, \"no, English is the language of the world and we're all going to speak English.\" Today that's what happens in Singapore.", "Secondly, he said, \"we're going to get the best intellectual capital and management capital, from wherever it exists in the world, and we're going to bring it to Singapore.\" Again, people said, \"no, we should stand on our own two feet\" and \"you're also bringing in the British who we've got all these disputes with.\" He said, \"no, I'm going to bring in the best from wherever they are, and they're going to come to Singapore.\" Today, Singapore exports intellectual capital.", "The third thing he did was he said, \"there's going to be no corruption . And one of the ways we're going to do that is we're going to make sure political leaders are well paid .” The Singaporean leaders are the best paid in the world by a factor of about 10 for the next person. And he said, “there's going to be zero tolerance of corruption , just zero tolerance of corruption.” Those are the three decisions that were instrumental in building Singapore today.", "Dwarkesh Patel 00:24:19", "If you could go to the UK or the US right now and narrow down the list to three key priorities, what could somebody in power do to fix them? Let’s say Starmer is elected in the UK. We’ll see whoever becomes president in the US.", "Do Western leaders have the power to enact what the equivalent would be for their societies right now? Or do you have all this inertia and you can't start fresh like Singapore could in the 1960s?", "Tony Blair 00:24:53", "No, you definitely can. The American system's different because it's a federal system . In many ways, it's a good thing that there are limits to what the federal government can do in the U.S.", "If you take the UK or most governments, there's a lot of power at the center. We've just been talking about the technology revolution. How do you use it to transform healthcare, education, and the way government functions? How do you help educate the private sector as to how they can embrace AI in order to improve productivity?", "This is a huge agenda for a government and a really exciting one. I keep saying this to people that are in politics today. Sometimes people get a bit depressed about being in politics because you have all this criticism. Certainly people in the West feel that society's not changing fast enough and well enough. I say no, it's a really exciting time to be in politics because you've got this massive revolution that you've got to come to terms with.", "Dwarkesh Patel 00:25:53", "Speaking of federalism, you advise dozens of governments. For any one leader you're probably giving very sensible advice for their country. It's a positive expected value. To the extent that limits variance and experimentation across countries in different ways to govern or different policies, are we losing the ability to discover a new Singapore? Western NGOs and other global institutions will give them good recommendations but is there maybe something missing that we don't understand that an experimentation would reveal?", "Tony Blair 00:26:31", "We really don't do that with our governments. One of the things governments should be able to do is experiment to a degree. Part of the problem with systems is there's always a bias towards caution. That's what I mean by saying that with systems, if there were conspiracy for anything, it's for inertia.", "What we do is we concentrate with governments on what is true no matter what government you're in. I describe four Ps of government when you get into power.", "Number one, you've got to prioritize because if you try to do everything, you'll do nothing.", "Number two, you've got to get the right policy, what we were talking about before. That means going deep and getting the right answer. That often means bringing people in from the outside, who can tell you what the right answer is. That has nothing to do with left or right, it's usually to do with practicality.", "Number three, you've got to have the right personnel.", "Number four, you've got to performance manage. Once you've decided something and you've got a policy, you've got to focus on the implementation.", "Whether you're running the United States of America or you're running a small African country, those things are always true.", "00:27:37 – Foreign Policy & Intelligence", "Dwarkesh Patel 00:27:37", "Let's talk about foreign policy for a second. This is not just you but every administration, especially Western administrations, has to deal with these irascible dictatorial regimes. They're right on the brink of WMDs . They make all these demands in order to put off their path towards WMDs.", "Obviously you had to deal with Saddam . Today we have to deal with Iran and North Korea. It seems like sanctions don't seem to work. Regime change is really expensive. Is there any fundamental solution to this kind of dilemma that we keep being put into decade after decade? Can we just buy them a nice mansion in Costa Rica? What can we do about these kinds of regimes?", "Tony Blair 00:28:20", "It's very difficult. Take Iran today. There's no appetite in the West to go and enforce regime change. You could do two things that are really important. Iran is basically the origin of most of the destabilization across the Middle East region and beyond. First of all, you can constrain it as much as possible. Secondly, you can build alliances. That means that their ability to impact is reduced.", "It's a constant problem because they're determined to acquire nuclear weapons capability. We want to stop them doing that. We don't want to engage in regime change. On the other hand, all the other things that you do will be limited in their effect. So it's difficult. It's very difficult, particularly now that you have an alliance that has grown up. China, Russia, Iran, and, to a degree North Korea, work closely together.", "Dwarkesh Patel 00:29:34", "As a leader, how do you distinguish among cases when the intelligence communities come to you? How do you distinguish between a case like Iraq, where potentially they got it wrong, versus Ukraine, where it seems like they were on the ball? How do you know which intelligence to trust? And how good is Western intelligence generally? How good are the Five Eyes ?", "Tony Blair 00:29:53", "Generally, it's extremely good. The Five Eyes is extremely good.", "Dwarkesh Patel 00:29:57", "And how do you distinguish the cases where they're not?", "Tony Blair 00:30:00", "It's difficult. With the benefit of hindsight, particularly in relation to Iraq, you've got to go much deeper. You've got to not take the fact that there were all these problems in the past as an indication of what's happening now or in the future. On the whole, Western intelligence is reasonably good. Of course, we'll get much better now with the tools it's got at its disposal.", "Dwarkesh Patel 00:30:29", "How much situational awareness do they have about the topics you're talking about, whether it's AI or the next pandemic? I mean these forward-looking kinds of problems rather than who's doing an invasion? Maybe they have a lot of expertise and decades of experience with the latter sort. Predicting who's got the data center where and those kinds of things, how good are they at that?", "Tony Blair 00:30:47", "They're all over this stuff now, the intelligence services here in America and in the UK. You’ve also got a whole new category of threat to deal with because the cyber threats are real and potentially devastating in their impact. You can see from Ukraine, war is going to be fought in a completely different way in the future as well.", "00:31:12 – How much leadership actually matters", "Dwarkesh Patel 00:31:12", "I want to ask you about your experience advising leaders, or interacting with them in office, and seeing how their countries progressed. How much of the variance in outcomes for countries is explained by the quality of the leadership versus other endogenous factors such as human capital, geography, etc.?", "Tony Blair 00:31:30", "The whole reason I started this institute was because the quality of governance — of which leadership is a big part — is the determinant. In today's world where capital's mobile, technology's mobile, any country with good leadership can make a success. If you take two countries side by side — same resources, same opportunities, same potential therefore — one succeeds and  one fails. If you look at it, it's always about the quality of decision making.", "Before the Ukraine War , when both Poland and Ukraine came out of the Soviet Union in the early 1990s, people would have given Ukraine as much chance as Poland of doing well. Poland today is doing really well. Why is that? Because they've joined the European Union. They've had to make huge changes and reforms. Therefore they're a successful country.", "Look at Rwanda and Burundi. It was Rwanda that suffered the genocide , but Rwanda today is one of the most respected countries in Africa. Look at the Korean peninsula. It’s the biggest experiment in human governance that’s ever been: North Korea and South Korea. South Korea had the same GDP per head as Sierra Leone in the 1960s. Now it's one of the top countries in the world.", "Dwarkesh Patel 00:32:54", "Do you think that's fundamentally a question of who the leadership was, like Park and South Korea? There are other factors that are obviously different between these countries. You also mentioned that leadership determines governance. But if you look at a system like the US or the UK, we've had good and bad leaders. Fundamentally, the quality of the governance doesn't seem to shift that much between who the leader is. Is the quality of governance and the quality of the institutions, separate endogenous variables from the leadership?", "Tony Blair 00:33:28", "The institutions matter and good leaders should be able to build good institutions. We were talking about Lee Kuan Yew in Singapore. Would Singapore be Singapore without those decisions that he took? No.", "Take, for example, China and Deng Xiaoping . When he decided to switch after the death of Mao and switch policy completely to open China up , that made the difference.", "If you track back India's development over the past 25 years, you can see the points at which decisions were taken that gave India the chance it has today. The interesting thing about this is how much it really does matter who the leader is and the governance of the country.", "What I'd say to people engaged in this groundbreaking revolution of artificial intelligence is: we need your help in changing government and changing countries. When I'm talking to people and leaders we work with in the developing world today, I say to them, \"don't try and replicate the systems of the West.\" You don't need to do that. You can teach your children better and differently without building the same type of system we have in the West. I wouldn't design the healthcare system in the UK today as it is now, if we had the benefit of generative AI.", "The leaders that are going to succeed in these next years will be the people that can understand what is happening in places like this. Here’s the frustrating thing from our perspective, from the perspective of leaders. Even though they would probably be very well intentioned towards the developing world, people in the technology sector think, \"I don't know what I can do in order to help.” Actually there's massive amounts they can do in order to help.", "00:35:34 – Private vs. public tech", "Dwarkesh Patel 00:35:34", "When you look at the IT revolution and how much it's improved, say market services versus how much it's improved public sector services, there's clearly been a big difference. Would it have just been better to privatize the things that IT could have enabled more of, like education? What lessons does that have for AI? The public sector didn't seem that good at integrating IT. Maybe it’ll be bad at integrating AI. Should we, say, just privatize healthcare and education as much as possible because all the productivity gains will come from the private sector of those things anyways?", "Tony Blair 00:36:15", "It's a great question. It's the single most difficult thing. You can't just hand everything over to the private sector because in the end, the public will expect the government to take account of the public interest. You may say that the government's useless at protecting the public interest. That's another matter. On the whole, people in America and the UK aren't going to say, \"okay, just hand it over to these tech giants and let them run everything.\"", "We have a whole program in my institute now, which we call the reimagined state . There was a minimalist state in the 18th century, and in the first part of the 19th century. That grew in the last part of the 19th century, and first part of the 20th century, into a maximalist state. You look for government to do a lot of things for you and the state grows large.", "We should reimagine the state today as a result of this technology revolution and make it much more strategic. It's much more about setting a framework and then allowing much more diversity, competition. The hardest thing about the public sector in those circumstances is creating self-perpetuating innovation. If you don't innovate in the private sector, you go out of business. If you don't innovate in the public sector, you're still there. It's just the service has got worse. This is the really tough intellectual task.", "For example, in education today in America you'll have a significant tail of kids that are taught really badly. It’s probably the same in any Western country. No one today should be taught badly. By the way, everyone should be taught on an individual basis, on a personalized basis. For example, look at what Sal Khan 's doing with Khan Academy . We work with them. There are other people doing great things in education using technology.", "We should be able to create a situation in which young people today are able to learn at the pace that is good for them. No young person should be without opportunity. How you reform the system to allow that to happen, that's the big challenge. With the healthcare system, you will in time end up with an AI doctor. You'll end up with an AI tutor.", "The question will be what's the framework within which those things operate. How do we use them to allow better service? How do we allow a lot of the people within the healthcare or education systems to concentrate on the most important part of their learning? For example, if you're a doctor now, you’re having to write up a whole lot of notes after a consultation or do lesson planning if you're a teacher.", "00:39:14 – Advising leaders and implementing policy change", "Dwarkesh Patel 00:39:14", "Let’s go back to TBI for a second. When you give a leader some sensible advice and then they don't follow through on it, what usually is the reason that happens? Is it because it's not politically palatable in their country? Is it because they don't get it? To the extent that you have good advice that’s ignored, why does that happen?", "Tony Blair 00:39:34", "It happens usually for two reasons. Number one, it's really hard to make change. What I learned about making change is that there's a certain rhythm to it. When you first propose a reform, people tell you it's a terrible idea. When you're doing it, it's absolute hell. After you've done it, you wish you'd done more of it.", "Sometimes people just find the system too resistant. There might be vested interests that get in the way of it. I sometimes come across countries that are island states. They have warm weather, but they get all their electricity from heavy fuel oil when they've got limitless amounts of solar and wind that they could be using. There’s a vested interest.", "The other thing about the government is that it’s a conspiracy of distraction. You've got events and crises and scandals. The most difficult thing is to keep focused when you've got so many things that are diverting you from that core task. Sometimes what we do with our leaders is we say to them, \"okay, we're going to do an analysis. Here are your priorities. Here's how much time you spend on them.\" You end up literally with people spending 4 percent of their time on their priorities. You say, \"no wonder you're not succeeding.\"", "Dwarkesh Patel 00:42:08", "I'm not necessarily picking on your time, but for any head of state in Western government or any government. How much of the time they spend is fundamentally wasted in the sense of the three key priorities you would have, say, identified for Singapore in the 1960s? Time that’s spent that’s not fundamentally moving the ball forward on things like that. It’s like meeting people, ambassadors, press, whatever. How much of the time is just that?", "Tony Blair 00:42:40", "A lot.", "Dwarkesh Patel 00:42:41", "Greater than 80%? 90%?", "Tony Blair 00:42:43", "No, that would be too high. It’s a lot and a lot more today.", "I often say this to leaders and I think this is the single biggest problem with Western politics today. When my kids were younger, I would say to them, \"work hard, play hard.” Work hard, play hard equals possibility of success. Play hard, work hard is a certain failure. You'll play so hard you never end up working hard.", "The equivalent in politics is policy first, politics second. In other words, work out what the right answer is, and then work out how you shape the politics around that. But what actually happens in a lot of systems today is politics first and policy second. People end up with a position that they’ve chosen for political reasons, and then they try to shape a policy around that politics. It never works.", "The most important thing, for policymakers and intellectual business, is that you've got to get the right answer. There is a right answer, by the way. Often, the reason why it's so difficult to govern today is there's so much political noise. It's hard to get out of that zone of noise and sit in a room with some people who really know what they're talking about and go into the detail of what is the solution to a problem.", "Sometimes when I talk to leaders about this, I find that they just say to me, \"I don't have the time to do that.\" I say, \"if you don't have the time to do that, you are going to fail. Because in the end, you won't have the right answer.” You've got to believe that over time, the best policy is the best politics.", "Dwarkesh Patel 00:44:47", "It's not just the time you had to spend, for example talking to the press, but also the kinds of topics that draw attention. There’s some statement your cabinet minister made on the BBC and some latest scandal.", "As you wrote in your book, the 30 minutes of the PMQs is not the big deal. It's the two days you spent in anxiety and preparation. Fundamentally, is the attention distraction the bigger issue than the actual time you spend on these events?", "Tony Blair 00:45:17", "It is to a degree. The other thing is you undergo a lot of attacks today in politics. It happens to celebrities, but they tend to have at least some sort of fan base that are constantly supporting them. With politics today, you can often be in a situation where you're almost dehumanized. You're subject to attacks on your integrity, your character, your intentions.", "It's possible, if you're not careful, that you just end up sitting there thinking this is really unfair. You get distracted from focusing on the business. That's why I always say to people, one part of being a political leader, or any leader, is to be able to have a certain sort of Zen. You need a bit of a Zen-like attitude to all the criticism and the disputatiousness that will go on around you. It's just going to happen.", "Today with social media, it happens to an even greater degree. I often say to leaders that you cannot pay attention to this stuff. Get someone to summarize it for you in half a page and you read it in the morning.", "Dwarkesh Patel 00:46:38", "GPT can do it for you now.", "Tony Blair 00:46:40", "Honestly, if you start going down that rabbit hole, you'll never re-emerge.", "00:46:45 – Looking back on the unipolar moment", "Dwarkesh Patel 00:46:45", "Let’s go back a little bit to your time in office. There was a unique unipolar moment that happens very rarely in history. The Anglosphere had much more power in the 1990s and early 2000s than the rest of the world. In what way did that feel different from today's world?", "Is there something you would now change about the way the institutions were set up at the time and carried forward? There was a key opportunity in the unipolar moment. How should that have been used? How well was it used?", "Tony Blair 00:47:19", "It's difficult. We did try a lot, contrary to what's sometimes written, for example with Russia. I dealt with President Putin a lot when I was prime minister. It was myself and President Clinton who took the crucial decision to bring China into the world's trading framework. The G7 at the time was the G8 with Russia there and China would always be invited.", "I honestly think we tried a lot to recognize that we were going to live in a new world. The power wasn’t going to shift from the West in the sense the West would become not powerful. The East was going to become also powerful. We did understand that and work towards that.", "Particularly in these last few years, China and Russia have come to a position that is seemingly hostile to Western democracy in terms of fundamental values and systems. That's a difficult problem. What we probably underestimated was how fast India would rise. At the time it seemed India was still going to be quite constrained.", "We live in a multipolar world today. Personally, I think that's a good thing. In any event, it's an inevitable thing. It's really important always to give this message to China, for example. China as of right is one of the big powers in the world and as of right should have a huge influence. I don't believe in trying to constrain or contain China.", "We do have to accept that the Chinese system is presently run on different lines to our own and is overtly hostile to some degree. This is why it's important for us to retain military and technological superiority, even though I believe passionately that it's important that we leave space for cooperation and engagement with China.", "Now how much could we have foreseen all of this back in those days? I'm not sure. Sometimes one of the problems of the West is that we always see it through our own lens. We always think that we could have done something different to change the world. But actually the rest of the world operates on its own principles as well. Sometimes the change happens, not because we didn't do something, but because the rest of the world did.", "Dwarkesh Patel 00:50:24", "Final question. You interact with leaders today across dozens, maybe hundreds, of countries. Which among them is best playing the deck they've been dealt? Who is the most impressive leader? It doesn't have to be a huge country or anything. Given the deck they've been dealt, who's best?", "Tony Blair 00:50:47", "One of the things you must never, ever do in politics is say who's your favorite leader, who's done well. You will make one friend and many enemies. I'm just going to answer it in this way. If you look at the countries that have succeeded today — any country that has transitioned from being a third world or second world country to a first world country — certain things stand out and are clear.", "Number one, they have stable macroeconomic policy.", "Number two, they allow business and enterprise to flourish.", "Number three, they have the rule of law.", "Number four, they educate their people well.", "Wherever you look around the world and you see those things in place, you will find success. Whenever you find their absence, you will find either the fact or the possibility of failure.", "The one thing, however, that any country leader should focus on today is the possibility of all of these rules being rewritten by the importance of technology. The single most important thing today, if I were back in the frontline of politics, would be to engage with this revolution and to understand it. We need to bring people who also get it into the discussions and the councils of government and to take the key decisions that will allow us to access the opportunities and mitigate the risks.", "Dwarkesh Patel 00:52:32", "That's a wonderful place to close. Mr. Blair, thank you so much for coming on the podcast.", "Tony Blair 00:52:35", "Thank you for having me." ]
[ "https://en.wikipedia.org/wiki/Tony_Blair", "https://www.institute.global/", "https://en.wikipedia.org/wiki/Deep_state", "https://en.wikipedia.org/wiki/Prime_Minister's_Questions", "https://en.wikipedia.org/wiki/National_Minimum_Wage_Act_1998", "https://en.wikipedia.org/wiki/Devolution_in_the_United_Kingdom", "https://en.wikipedia.org/wiki/Good_Friday_Agreement", "https://amzn.to/3X1Kc40", "https://en.wikipedia.org/wiki/July_Crisis", "https://en.wikipedia.org/wiki/2024_United_Kingdom_general_election", "https://en.wikipedia.org/wiki/Labour_Party_(UK)", "https://en.wikipedia.org/wiki/Swedish_government_response_to_the_COVID-19_pandemic", "https://en.wikipedia.org/wiki/Chinese_government_response_to_COVID-19", "https://en.wikipedia.org/wiki/SARS-CoV-2_Omicron_variant", "https://en.wikipedia.org/wiki/Lee_Kuan_Yew", "https://en.wikipedia.org/wiki/History_of_Singapore#1959%E2%80%931963:_Full_internal_self-government", "https://blog.thepienews.com/2018/12/how-singapore-became-an-english-speaking-country/", "https://en.wikipedia.org/wiki/Corruption_in_Singapore", "https://www.straitstimes.com/singapore/how-mr-lee-kuan-yew-shaped-the-public-service", "https://www.emerald.com/insight/content/doi/10.1108/PAP-04-2022-0037/full/html#:~:text=during%201959%2D2020.-,Lee's%20zero%2Dtolerance%20policy%20toward%20corruption,decadence%20of%20many%20Asian%20leaders.%20%E2%80%A6", "https://en.wikipedia.org/wiki/Keir_Starmer", "https://en.wikipedia.org/wiki/Federalism_in_the_United_States#:~:text=In%20the%20United%20States%2C%20federalism,and%20toward%20the%20national%20government.", "https://en.wikipedia.org/wiki/Weapon_of_mass_destruction", "https://en.wikipedia.org/wiki/Iraq_War", "https://en.wikipedia.org/wiki/Saddam_Hussein", "https://en.wikipedia.org/wiki/Five_Eyes", "https://situational-awareness.ai/", "https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine", "https://hbr.org/1995/03/starting-over-poland-after-communism", "https://en.wikipedia.org/wiki/Rwandan_genocide", "https://en.wikipedia.org/wiki/Park_Chung_Hee", "https://en.wikipedia.org/wiki/Deng_Xiaoping", "https://en.wikipedia.org/wiki/Mao_Zedong", "https://en.wikipedia.org/wiki/Chinese_economic_reform", "https://www.institute.global/insights/politics-and-governance/reimagining-government-for-the-21st-century", "https://en.wikipedia.org/wiki/Sal_Khan", "https://www.khanacademy.org/", "https://en.wikipedia.org/wiki/GPT-4", "https://en.wikipedia.org/wiki/Vladimir_Putin", "https://en.wikipedia.org/wiki/Bill_Clinton", "https://en.wikipedia.org/wiki/G7", "https://en.wikipedia.org/wiki/G8" ]
https://www.dwarkesh.com/p/tyler-cowen-2
Tyler Cowen - Talent, Collapse, & Pessimism of Sex
[ "Did Caplan Change On Education?", "Tyler Cowen", "Ask Bryan about early and late Caplan. In which ways are they not consistent? That's a kind of friendly jab.", "Dwarkesh Patel", "Okay, interesting.", "Tyler Cowen", "Garrett Jones has tweeted about this in the past. In The Myth of the Rational Voter , education is so wonderful. It no longer seems to be true, but it was true from the data Bryan took from. Bryan doesn't think education really teaches you much.", "Dwarkesh Patel", "So then why is it making you want a free market?", "Tyler Cowen", "It once did, even though it doesn't now, and if it doesn't now, it may teach them bad things. But it's teaching them something.", "Dwarkesh Patel", "I have asked him this. He thinks that education doesn't teach them anything; therefore, that woke-ism can’t be a result of colleges. I asked him, “okay, at some point, these were ideas in colleges, but now they’re in the broader world. What do you think happened? Why did it transition together?” I don't think he had a good answer to that.", "Tyler Cowen", "Yeah, you can put this in the podcast if you want. I like the free podcast talk often better than the podcast. [laughs]", "Dwarkesh Patel", "Okay. Well yeah, we can just start rolling. Today, it is my great pleasure to speak to Tyler Cowen about his new book, “Talent, How to Find Energizers, Creatives, and Winners Across the World.” Tyler, welcome (once again) to The Lunar Society.", "Tyler Cowen", "Happy to be here, thank you!", "Travel vs. History", "Dwarkesh Patel 1:51", "Okay, excellent. I'll get into talent in just a second, but I've got a few questions for you first. So in terms of novelty and wonder, do you think travelling to the past would be a fundamentally different experience from travelling to different countries today? Or is it kind of in the same category?", "Tyler Cowen", "You need to be protected against disease and have some access to the languages, and obviously, your smartphone is not going to work, right? So if you adjust for those differences, I think it would be a lot like travelling today except there'd be bigger surprises because no one else has gone to the past. Older people were there in a sense, but if you go back to ancient Athens, or the peak of the Roman Empire, you’d be the first traveller.", "Dwarkesh Patel", "So do you think the experience of reading a history book is somewhat substitutable for actually travelling to a place?", "Tyler Cowen", "Not at all! I think we understand the past very very poorly. If you’ve travelled appropriately in contemporary times, it should make you more skeptical about history because you'll realize how little you can learn about the current places just by reading about them. So it's like Travel versus History, and the historians lose.", "Dwarkesh Patel", "Oh, interesting. So I'm curious, how does travelling a lot change your perspective when you read a work of history? In what ways does it do so? Are you skeptical of it to an extent that you weren't before, and what do you think historians are probably getting wrong?", "Tyler Cowen", "It may not be a concrete way, but first you ask: was the person there? If it's a biography, did the author personally know the subject of the biography? That becomes an extremely important question. I was just in India for the sixth time, I hardly pretend to understand India, whatever that possibly might mean, but before I went at all, I'd read a few hundred books about India, and it’s not like I got nothing out of them, but in some sense, I knew nothing about India. Now that I’ve visited, the other things I read make more sense, including the history.", "Do Institutions Become Left Wing Over Time?", "Dwarkesh Patel", "Okay, interesting. So you've asked this question to many of your guests, and I don't think any of them have had a good answer. So let me just ask you: what do you think is the explanation behind Conquest’s Second Law? Why does any institution that is not explicitly right-wing become left-wing over time?", "Tyler Cowen", "Well, first of all, I'm not sure that Conquest’s Second Law is true. So you have something like the World Bank which was sort of centrist state-ist in the 1960s, and by the 1990s became fairly neoliberal . Now, about what's left-wing/right-wing, it's global, it's complicated, but it's not a simple case of Conquest’s Second Law holding. I do think that for a big part of the latter post-war era, some version of Conquest’s Law does mostly hold for the United States. But once you see that it’s not universal, you're just asking: well, why have parts? Why has the American intelligentsia shifted to the left? So that there's political science literature on educational polarization? [laughs] I wouldn't say it's a settled question, but it's not a huge mystery like “how Republicans act wackier than Democrats are” for example. The issues realign in particular ways. I believe that’s why Conquest’s Law locally is mostly holding.", "Dwarkesh Patel", "Oh, interesting. So you don't think there's anything special about the intellectual life that tends to make people left-wing, and this issue is particular to our current moment?", "Tyler Cowen", "I think by choosing the words “left-wing” you're begging the question. There's a lot of historical areas where what is left-wing is not even well defined, so in that sense, Conquests Law can't even hold there. I once had a debate with Marc Andreessen about this –– I think Mark tends to see things that are left-wing/right-wing as somewhat universal historical categories, and I very much do not. In medieval times, what's left wing and what's right wing? Even in 17th century England, there were particular groups who on particular issues were very left-wing or right-wing. It seems to me to be very unsatisfying, and there's a lot of fluidity in how these axes play out over real issues.", "Dwarkesh Patel", "Interesting. So maybe then it’s what is considered “left” at the time that tends to be the thing that ends up winning. At least, that’s how it looks like looking back on it. That's how we categorize things. Something insightful I heard is that “ if the left keeps winning, then just redefine what the left is.” So if you think of prohibition at the time, it was a left-wing cause, but now, the opposite of prohibition is left-wing because we just changed what the left is.", "Tyler Cowen", "Exactly. Take the French Revolution: they're the historical equivalent of nonprofits versus 1830s restoration . Was everything moving to the left, between Robespierre and 1830? I don't pretend to know, but it just sure doesn't seem that way. So again, there seem to be a lot of cases where Conquest’s Law is not so economical.", "Dwarkesh Patel", "Napoleon is a great example of this where we’re not sure whether he’s the most left-wing figure in history or the most right-wing figure in history.", "Tyler Cowen 6:00", "Maybe he’s both somehow.", "What Does Talent Correlate With?", "Dwarkesh Patel", "How much of talent or the lack thereof is a moral judgment for you? Just to give some context, when I think that somebody is not that intelligent, for me, that doesn't seem like a moral judgment. That just seems like a lottery. When I say that somebody's not hard working, that seems like more of a moral judgment. So on that spectrum, where would you say talent lies?", "Tyler Cowen", "I don't know. My default is that most people aren't that ambitious. I'm fine with that. It actually creates some opportunities for the ambitious–– there might be an optimal degree of ambition. Well, short of everyone being sort of maximally ambitious. So I don't go around pissed off at unambitious people, judging them in some moralizing way. I think a lot of me is on autopilot when it comes to morally judging people from a distance. I don't wake up in the morning and get pissed off at someone in the Middle East doing whatever, even though I might think it was wrong.", "Dwarkesh Patel", "So when you read the biographies of great people, often you see there's a bit of an emotional neglect and abuse when they're kids. Why do you think this is such a common trope?", "Tyler Cowen", "I would love to see the data, but I'm not convinced that it's more common than with other people. Famous people, especially those who have biographies, on average are from earlier times, and in earlier times, children were treated worse. So it could be correlated without being causal. Now, maybe there's this notion that you need to have something to prove. Maybe you only feel you need to prove something if you’re Napoleon and you're short, and you weren't always treated well. That's possible and I don't rule it out. But you look at Bill Gates and Mark Zuckerberg without pretending to know what their childhoods were like.  It sure sounds like they were upper middle class kids treated very well, at least from a distance. For example, the Collison's had great parents and they did well.", "Dwarkesh Patel", "It could just be that the examples involving emotional neglect stuck out in my mind in particular.", "Tyler Cowen", "Yeah. So I'd really like to see the data. I think it's an important and very good question. It seems to me, maybe one could investigate it, but I've never seen an actual result.", "Dwarkesh Pate l", "Is there something you've learned about talent spotting through writing the book that you wish wasn't so? Maybe you found it disturbing, or you found it disappointing in some way. Is there something that is a correlate for talent that you wish wasn't?", "Tyler Cowen", "I don't know. Again, I think I'm relatively accepting of a lot of these realities, but the thing that disappoints me a bit is how geographically clustered talent is. I don't mean where it was born, and I don't mean ethnically. I just mean where it ends up. So if you get an application, say from rural Italy where maybe living standards are perfectly fine–– there's good weather, there's olive oil, there's pasta. But the application just probably not that good. Certainly, Italians have had enough amazing achievements over the millennia, but right now, the people there who are actually up to something are going to move to London or New York or somewhere. So I find that a bit depressing. It's not really about the people.", "Dwarkesh Patel", "When you do find a cluster of talent, to what extent can that be explained by a cyclical view of what's happening in the region? In the sense of the “hard times create strong men” theory? I mean at some point, Italy had a Renaissance, so maybe things got complacent over time.", "Tyler Cowen", "Again, maybe that's true for Italy, but most of the talent clusters have been such for a long time, like London and New York. It's not cyclical. They've just had a ton of talent for a very long time. They still do, and later on, they still will. Maybe not literally forever, but it seems like an enduring effect.", "Dwarkesh Patel", "But what if they leave? For example, the Central European Jews couldn’t stay where they were anymore and had to leave.", "Tyler Cowen", "Obviously, I think war can destroy almost anything. So German scientific talent took a big whack, German cultural talent too. I mean, Hungarian Jews and mathematics-–I don't know big of a trend it still is, but it's certainly nothing close to what it once was.", "Dwarkesh Patel", "Okay. I was worried that if you realize that some particular region has a lot of talent right now, then that might be a one-time gain. You realize that India, Toronto or Nigeria or something have a lot of talent, but the culture doesn't persist in some sort of extended way.", "Tyler Cowen", "That might be true for where talent comes from, but where it goes just seems to show more persistence. People will almost certainly be going to London for centuries. Is London producing a lot of talent? That's less clear. That may be much more cyclical. In the 17th century, London was amazing, right? London today? I would say I don't know. But it's not obvious that it’s coming close to its previous glories. So the current status of India I think, will be temporary, but temporary for a long time. It's just a very big place. It has a lot of centres and there are things it has going for it like not taking prosperity for granted. But it will have all of these for quite a while–– India's still pretty poor.", "Dwarkesh Patel", "What do you think is the difference between actual places where clusters of talent congregate and places where that are just a source of that talent? What makes a place a sink rather than a source of talent?", "Tyler Cowen", "I think finding a place where people end up going is more or less obvious. You need money, you need a big city, you need some kind of common trade or linguistic connection. So New York and London are what they are for obvious reasons, right? Path dependence history, the story of making it in the Big Apple and so on. But origins and where people come from are areas that I think theory is very bad at understanding. Why did the Renaissance blossom in Florence and Venice, and not in Milan? If you're going back earlier, it wasn't obvious that it would be those places. I've done a lot of reading to try to figure this out, but I find that I've gotten remarkably not far on the question.", "Dwarkesh Patel", "The particular examples you mentioned today–– like New York, San Francisco, London, these places today are kind of high stakes, because if you want to move there, it's expensive. Do you think that this is because they've been so talented despite this fact, or because you need some sort of exclusion in order to be a haven of talent?", "Tyler Cowen", "Well, I think this is a problem for San Francisco. It may be a more temporary cluster than it ought to have been. Since it's a pretty recent cluster, it can’t count on the same kind of historical path dependence that New York and Manhattan have. But a lot of New York still is not that expensive. Look at the people who work and live there! They're not all rich, to say the least. And that is an important part of why New York is still New York. With London, it's much harder, but it seems to me that London is a sink for somewhat established talent––which is fine, right? However, in that regard, it’s much inferior to New York.", "Humility, Mental Illness, Caffeine, and Suits", "Dwarkesh Patel", "Okay, I want to play a game of overrated and underrated with you, but we're going to do it with certain traits or certain kinds of personalities that might come in when you're interviewing people.", "Tyler Cowen", "Okay, it's probably all going to be indeterminate, but go on.", "Dwarkesh Patel", "Right. So somebody comes in, and they're very humble.", "Tyler Cowen", "Immediately I'm suspicious. I figure most people who are going to make something of themselves are arrogant. If they're willing to show it, there's a certain bravery or openness in that. I don't rule out the humble person doing great. A lot of people who do great are humble, but I just get a wee bit like, “what's up with you? You're not really humble, are you?”", "Dwarkesh Patel", "Maybe humility is a way of avoiding confrontation–– if you don't have the competence to actually show that you can be great.", "Tyler Cowen", "It might be efficient for them to avoid confrontation, but I just start thinking that I don't know the real story. When I see a bit of arrogance, I'm less likely to think that it may, in a way, be feigned. But the feigning of arrogance in itself is a kind of arrogance. So in that sense, I'm still getting the genuine thing.", "Dwarkesh Patel", "So what is the difference? Let's say a 15-year-old who is kind of arrogant versus a 50-year-old who is kind of arrogant, and the latter has accomplishments already while the first one doesn't. Is there a difference in how you perceive humility or the lack thereof?", "Tyler Cowen", "Oh, sure. With the 50-year-old, you want to see what they have done, and you're much more likely to think the 50 year old should feign humility than the 15-year-old. Because that's the high-status thing to do–– it’s to feign humility. If they can't do that, you figure, “ Here's one thing they're bad at. What else are they bad at?” Whereas with the 15-year-old, maybe they have a chip on their shoulder and they can't quite hold it all in. Oh, that's great and fine. Let's see what you're gonna do.", "Dwarkesh Patel", "How arrogant can you be? There are many 15 year olds who are really good at math, and they have ambitions like “I want to solve P ≠ NP ” or “I want to build an AGI” or something. Is there some level where you just clearly don't understand what's going on since you think you can do something like that? Or is arrogance always a plus?", "Tyler Cowen", "I haven't seen that level of arrogance yet. If a 15-year-old said to me, “in three years, I'm going to invent a perpetual motion machine,” I would think “No, now you're just crazy.” But no one's ever said that to me. There’s this famous Mark Zuckerberg story where he went into the VC meeting at Sequoia wearing his pajamas and he told Sequoia not to give him money. He was 18 at a minimum, that's pretty arrogant behavior and we should be fine with that. We know how the story ends. So it's really hard to be too arrogant. But once you say this, because of the second order effect , you start thinking: “ Well, are they just being arrogant as an act? ” And then in the “act sense”, yes, they can be too arrogant.", "Dwarkesh Patel", "Isn't the backstory there that Mark was friends with Sean Parker and then Sean Parker had beef with Sequoia…", "Tyler Cowen", "There's something like that. I wouldn't want to say off the top of my head exactly what, but there is a backstory.", "Dwarkesh Patel", "Okay. Somebody comes in professionally dressed when they don't need to. They've got a crisp clean shirt. They've got a nice wash.", "Tyler Cowen", "How old are they?", "Dwarkesh Patel", "20.", "Tyler Cowen", "They’re too conformist. Again, with some jobs, conformity is great, but I get a little suspicious, at least for what I'm looking for. Though I wouldn't rule them out for a lot of things–– it’s a plus, right?", "Dwarkesh Patel", "Is there a point though, where you're in some way being conformist by dressing up in a polo shirt? Like if you're in San Francisco right now, it seems like the conformist thing is not to wear a suit to an interview if you're trying to be a software engineer.", "Tyler Cowen", "Yeah, there might be situations where it's so weird, so over the top, so conformist, that it's actually totally non-conformist. Like “ I don't know anyone who's a conformist like you are!” Maybe it's not being a conformist, or just being some kind of nut, that makes you interested again.", "Dwarkesh Patel", "An overall sense that you get from the person that they're really content, almost like Buddha came in for an interview. A sense of wellbeing.", "Tyler Cowen", "It's gonna depend on context, I don't think I'd hold it against someone, but I wouldn't take it at face value. You figure they're antsy in some way, you hope. You'll see it with more time, I would just think.", "Dwarkesh Patel", "Somebody who uses a lot of nootropics. They're constantly using caffeine, but maybe on the side (multiple times a week), they're also using Adderall, Modafinil , and other kinds of nootropics.", "Tyler Cowen", "I don't personally like it, but I've never seen evidence that it's negatively correlated with success, so I would try to put it out of my mind. I sort of personally get a queasy feeling like “Do you really know what you're doing. Is all this stuff good for you? Why do you need this?” That's my actual reaction, but again, at the intellectual level, it does seem to work for some people, or at least not screw them up too much.", "Dwarkesh Patel", "You don't drink caffeine, correct?", "Tyler Cowen", "Zero.", "Dwarkesh Patel", "Why?", "Tyler Cowen", "I don't like it. It might be bad for you.", "Dwarkesh Patel", "Oh really, you think so?", "Tyler Cowen", "People get addicted to it.", "Dwarkesh Patel", "You're not worried it might make you less productive over the long term? It's more about you just don't want to be addicted to something?", "Tyler Cowen", "Well, since I don't know it well, I'm not sure what my worries are. But the status quo regime seems to work. I observe a lot of people who end up addicted to coffee, coke, soda, stuff we know is bad for you. So I think: “What's the problem I need to solve? Why do it?”", "Dwarkesh Patel", "What if they have a history of mental illness like depression or anxiety? Not that mental illnesses are good, but at the current margins, do you think that maybe they're punished too heavily? Or maybe that people don't take them seriously enough that they actually have a bigger signal than the people are considering?", "Tyler Cowen", "I don't know. I mean, both could be true, right? So there's definitely positive correlations between that stuff and artistic creativity. Whether or not it’s causal is harder to say, but it correlates. So you certainly should take the person seriously. But would they be the best Starbucks cashier? I don't know.", "How does Education Affect Talent?", "Dwarkesh Patel", "Yeah. In another podcast, you've pointed out that some of the most talented people you see who are neglected are 15 to 17 year olds. How does this impact how you think? Let's say you were in charge of a high school, you're the principal of a high school, and you know that there's 2000 students there. A few of them have to be geniuses, right? How is the high school run by Tyler Cowen? Especially for the very smartest people there?", "Tyler Cowen", "Less homework! I would work harder to hire better teachers, pay them more, and fire the bad ones if I'm allowed to do that. Those are no-brainers, but mainly less homework and I’d have more people come in who are potential role models. Someone like me! I was invited once to Flint Hill High School in Oakton, it's right nearby. I went in, I wasn't paid. I just figured “I'll do this.” It seems to me a lot of high schools don't even try. They could get a bunch of people to come in for free to just say “I'm an economist, here's what being an economist is like” for 45 minutes. Is that so much worse than the BS the teacher has to spew? Of course not. So I would just do more things like that.", "Dwarkesh Patel", "I want to understand the difference between these three options. The first is: somebody like you actually gives an in-person lecture saying “this is what life is like”. The second is zoom, you could use zoom to do that. The third is that it's not live in any way whatsoever. You're just kind of like maybe showing a video of the person.", "Tyler Cowen", "I'm a big believer in vividness. So Zoom is better than nothing. A lot of people are at a distance, but I think you'll get more and better responses by inviting local people to do it live. And there's plenty of local people, where most of the good schools are.", "Dwarkesh Patel", "Are you tempted to just give these really smart 15-year-olds a hall pass to the library all day and some WiFi access, and then just leave them alone? Or do you think that they need some sort of structure?", "Tyler Cowe n", "I think they need some structure, but you have to let them rebel against it and do their own thing. Zero structure strikes me as great for a few of them, but even for the super talented ones, it's not perfect. They need exposure to things, and they need some teachers as role models. So you want them to have some structure.", "Dwarkesh Patel", "If you read old books about education, there's a strong emphasis on moral instruction. Do you think that needs to be an important part of education?", "Tyler Cowen", "I'd like to see more data. But I suspect the best moral instruction is the teachers actually being good people. I think that works. But again, I'd like to see the data. But somehow getting up and lecturing them about the seven virtues or something. That seems to me to be a waste of time, and maybe even counterproductive.", "Dwarkesh Patel", "Now, the way I read your book about talent, it also seems like a critique of Bryan’s book, The Case Against Education.", "Tyler Cowen", "Ofcourse it is. Bryan describes me as the guy who's always torturing him, and in a sense, he's right.", "Dwarkesh Patel", "Well, I guess more specifically, it seems that Bryan's book relies on the argument that you need a costly signal to show that you have talent, or you have intelligence, conscientiousness, and other traits. But if you can just learn that from a 1500 word essay and a zoom call, then maybe college is not about the signal.", "Tyler Cowen", "In that sense, I'm not sure it's a good critique of Bryan. So for most people in the middle of the distribution, I don't think you can learn what I learned from Top 5 Emergent Ventures winners through an application and a half-hour zoom call. But that said, I think the talent book shows you my old saying: context is that which is scarce. And you're always testing people for their understanding of context. Most people need a fair amount of higher education to acquire that context, even if they don't remember the detailed content of their classes. So I think Bryan overlooks how much people actually learn when they go to school.", "Dwarkesh Patel", "How would you go about measuring the amount of context of somebody who went to college? Is there something you can point to that says, “Oh, clearly they're getting some context, otherwise, they wouldn’t be able to do this”?", "Tyler Cowen", "I think if you meet enough people who didn't go to college, you'll see the difference, on average. Stressing the word average. Now there are papers measuring positive returns to higher education. I don't think they all show it’s due to context, but I am persuaded by most of Brian's arguments that you don't remember the details of what you learned in class. Oh, you learn this about astronomy and Kepler's laws and opportunity costs, etc. but people can't reproduce that two or three years later. It seems pretty clear we know that. However, they do learn a lot of context and how to deal with different personality types.", "Dwarkesh Patel", "Would you falsify this claim, though, that you are getting a lot of context? Is it just something that you had to qualitatively evaluate? What would have to be true in the world for you to conclude that the opposite is true?", "Tyler Cowen", "Well, if you could show people remembered a lot of the facts they learned, and those facts were important for their jobs, neither of which I think is true. But in principle, they're demonstrable, then you would be much more skeptical about the context being the thing that mattered. But as it stands now, that's the residual. And it's probably what matters.", "Dwarkesh Pate l", "Right. So I thought that Bryan shared in the book that actually people don't even remember many of the basic facts that they learned in school.", "Tyler Cowen", "Ofcourse they don't. But that's not the main thing they learn. They learn some vision of how the world works, how they fit into it, that they ought to have higher aspirations, that they can join the upper middle class, that they're supposed to have a particular kind of job. Here are the kinds of jerks you're going to meet along the way! Here's some sense of how dating markets work! Maybe you're in a fraternity, maybe you do a sport and so on. That's what you learned.", "Dwarkesh Patel", "How did you spot Bryan ?", "Tyler Cowen", "He was in high school when I met him, and it was some kind of HS event. I think he made a point of seeking me out. And I immediately thought, “Well this guy is going to be something like, gotta keep track of this guy. Right away.”", "Dwarkesh Patel", "Can you say more - what happened?", "Tyler Cowen", "His level of enthusiasm, his ability to speak with respect to detail. He was just kind of bursting with everything . It was immediately evident, as it still is. Bryan has changed less than almost anyone else I know over what is now.. he could tell you how many years but it’s been a whole bunch of decades.", "Dwarkesh Patel", "Interesting. So if that's the case, then it would have been interesting to meet somebody who is like Bryan, but a 19 year old.", "Tyler Cowen", "Yeah, and I did. I was right.", "Talent Scouting", "Dwarkesh Patel", "To what extent do the best talent scouts inevitably suffer from Goodhart’s Law ? Has something like this happened to you where your approval gets turned into a credential? So a whole bunch of non-earnest people start applying, you get a whole bunch of adverse selection, and then it becomes hard for you to run your program.", "Tyler Cowen", "It is not yet hard to run the program. If I needed to, I would just shut down applications. I've seen a modest uptick in bad applications, but it takes so little time to decide they're no good, or just not a good fit for us that it's not a problem. So the endorsement does get credentialized. Mostly, that's a good thing, right? Like you help the people you pick. And then you see what happens next and you keep on innovating as you need to.", "Dwarkesh Patel", "You say in the book that the super talented are best at spotting other super talented individuals. And there aren't many of the super talented talent spotters to go around. So this sounds like you’re saying that if you're not super talented, much of the book will maybe not do you a bunch of good. Results be weary should be maybe on the title. How much of talent spotting can be done by people who aren't themselves super talented?", "Tyler Cowen", "Well, I'd want to see the context of what I wrote. But I'm well aware of the fact that in basketball, most of the greatest general managers were not great players. Someone like Jerry West , right? I'd say Pat Riley was not. So again, that's something you could study. But I don't generally think that the best talent scouts are themselves super talented.", "Dwarkesh Patel", "Then what is the skill in particular that they have that if it's not the particular thing that they're working on?", "Tyler Cowen", "Some intangible kind of intuition, where they feel the right thing in the people they meet. We try to teach people that intuition, the same way you might teach art or music appreciation. But it's not a science. It's not paint-by-numbers.", "Dwarkesh Patel", "Even with all the advice in the book, and even with the stuff that isn't in the book that is just your inarticulable knowledge about how to spot talent, all your intuitions… How much of the variance in somebody's “True Potential” is just fundamentally unpredictable? If it's just like too chaotic of a thing to actually get your grips on. To what extent are we going to truly be able to spot talent?", "Tyler Cowen", "I think it will always be an art. If you look at the success rates of VCs, it depends on what you count as the pool they're drawing from, but their overall rate of picking winners is not that impressive. And they're super high stakes. They're super smart. So I think it will mostly remain an art and not a science. People say, “ Oh, genomics this, genomics that”. We'll see, but somehow I don't think that will change this.", "Dwarkesh Patel", "You don't think getting a polygenic risk score of drive, for example, is going to be a thing that happens?", "Tyler Cowen", "Maybe future genomics will be incredibly different from what we have now. Maybe. But it's not around the corner.", "Dwarkesh Patel", "Yeah. Maybe the sample size is just so low and somebody is like “How are you even gonna collect that data? How are you gonna get the correlates of who the super talented people are?”", "Tyler Cowen", "That, plus how genomic data interact with each other. You can apply machine learning and so on, but it just seems quite murky.", "Dwarkesh Pate l", "If the best people get spotted earlier, and you can tell who is a 10x engineer in a company and who is only a 1x engineer, or a 0.5x engineer, doesn't that mean that, in a way that inequality will get worse? Because now the 10x engineer knows that they're 10x, and everybody else knows that they're 10x, they're not going to be willing to cross subsidize and your other employees are going to be wanting to get paid proportionate to their skill.", "Tyler Cowen", "Well, they might be paid more, but they'll also innovate more, right? So they'll create more benefits for people who are doing nothing. My intuition is that overall, inequality of wellbeing will go down. But you can't say that's true apriori. Inequality of income might also go up.", "Dwarkesh Patel", "And then will the slack in the system go away for people who are not top performers? Like you can tell now, if we're getting better.", "Tyler Cowen", "This has happened already in contemporary America. As I wrote, “ Average is over.” Not due to super sophisticated talent spotting. Sometimes, it’s simply the fact that in a lot of service sectors, you can measure output reasonably directly––like did you finish the computer program? Did it work? That has made it harder for people to get paid things they don't deserve.", "Dwarkesh Pate l", "I wonder if this leads to adverse selection in the areas where you can't measure how well somebody is doing. So the people who are kind of lazy and bums, they'll just go into places where output can't be measured. So these industries will just be overflowing with the people who don't want to work.", "Tyler Cowen", "Absolutely. And then the people who are talented in the sectors, maybe they'll leave and start their own companies and earn through equity, and no one is really ever measuring their labor power. Still, what they're doing is working and they're making more from it.", "Dwarkesh Patel", "If talent is partly heritable, then the better you get at spotting talent, over time, will the social mobility in society go down?", "Tyler Cowen", "Depends how you measure social mobility. Is it relative to the previous generation? Most talent spotters don't know a lot about parents, like I don't know anything about your parents at all! The other aspect of spotting talent is hoping the talent you mobilize does great things for people not doing anything at all. That's the kind of automatic social mobility they get. But if you're measuring quintiles across generations, the intuition could go either way.", "Dwarkesh Patel", "But this goes back to wondering whether this is a one time gain or not. Maybe initially they can help the people who are around them. Somebody in Brazil, they help people around them. But once you’ve found them, they're gonna go to those clusters you talked about, and they're gonna be helping the people with San Francisco who don't need help. So is this a one time game then?", "Tyler Cowe n", "Many people from India seem to give back to India in a very consistent way. People from Russia don't seem to do that. That may relate to the fact that Russia is in terrible shape, and India has a brighter future. So it will depend. But I certainly think there are ways of arranging things where people give back a lot.", "Dwarkesh Patel", "Let's talk about Emergent Ventures . Sure. So I wonder: if the goal of Emergent Ventures is to raise aspirations, does that still work given the fact that you have to accept some people but reject other people? In Bayesian terms, the updates up have to equal the updates down? In some sense, you're almost transferring a vision edge from the excellent to the truly great. You see what I'm saying?", "Tyler Cowen", "Well, you might discourage the people you turn away. But if they're really going to do something, they should take that as a challenge. And many do! Like “Oh, I was rejected by Harvard, I had to go to UChicago, but I decided, I'm going to show those bastards.” I think we talked about that a few minutes ago. So if I just crushed the spirits of those who are rejected, I don't feel too bad about that. They should probably be in some role anyway where they're just working for someone.", "Dwarkesh Patel", "But let me ask you the converse of that which is, if you do accept somebody, are you worried that if one of the things that drives people is getting rejected, and then wanting to prove that you will reject them wrong, are you worried that by accepting somebody when they're 15, you're killing that thing? The part of them that wants to get some kind of approval?", "Tyler Cowen", "Plenty of other people will still reject them right? Not everyone accepts them every step of the way. Maybe they're just awesome. LeBron James is basketball history and past a certain point, it just seems everyone wanted him for a bunch of decades now. I think deliberately with a lot of candidates, you shouldn't encourage them too much. I make a point of chewing out a lot of people just to light a fire under them, like “what you're doing. It's not gonna work.” So I'm all for that selectively.", "Dwarkesh Patel", "Why do you think that so many of the people who have led Emergent Ventures are interested in Effective Altruism ?", "Tyler Cowen", "There is a moment right now for Effective Altruism, where it is the thing. Some of it is political polarization, the main parties are so stupid and offensive, those energies will go somewhere. Some of that in 1970 maybe went to libertarianism. Libertarianism has been out there for too long. It doesn't seem to address a lot of current problems, like climate change or pandemics very well. So where should the energy go? The Rationality community gets some of it and that's related to EA, as I'm sure you know. The tech startup community gets some of it. That's great! It seems to be working pretty well to me. Like I'm not an EA person. But maybe they deserve a lot of it.", "Dwarkesh Patel", "But you don't think it's persistent. You think it comes and goes?", "Tyler Cowe n", "I think it will come and go. But I think EA will not vanish. Like libertarianism, it will continue for quite a long time.", "Dwarkesh Patel", "Is there any movement that has attracted young people? That has been persistent over time? Or did they all fade?", "Tyler Cowen", "Christianity. Judaism. Islam. They're pretty persistent. [laughs]", "Dwarkesh Patel", "So to the extent that being more religious makes you more persistent, can we view the criticism of EA saying that it's kind of like a religion as a plus?", "Tyler Cowen", "Ofcourse, yeah! I think it's somewhat like a religion. To me, that's a plus, we need more religions. I wish more of the religions we needed were just flat-out religions. But in the meantime, EA will do,", "Money, Deceit, and Emergent Ventures", "Dwarkesh Pate l", "Are there times when somebody asks you for a grant and you view that as a negative signal? Let's say they're especially when well off: they’re a former Google engineer, they wanna start a new project, and they're asking you for a grant. Do you worry that maybe they're too risk averse? Do you want them to put their own capital into it? Or do you think that maybe they were too conformist because they needed your approval before they went ahead?", "Tyler Cowen", "Things like this have happened. And I asked people flat out, “Why do you want this grant from me?” And it is a forcing question in the sense that if their answer isn't good, I won't give it to them. Even though they might have a good level of talent, good ideas, whatever, they have to be able to answer that question in a credible way. Some can, some can't.", "Dwarkesh Patel", "I remember that the President of the University of Chicago many years back said that if you rejected the entire class of freshmen that are coming in and accepted the next 1500 that they had to reject that year, then there'll be no difference in the quality of the admits.", "Tyler Cowen", "I would think UChicago is the one school where that's not true. I agree that it's true for most schools.", "Dwarkesh Patel", "Do you think that's also true of Emergent Ventures?", "Tyler Cowen", "No. Not at all.", "Dwarkesh Pate l", "How good is a marginal reject?", "Tyler Cowen", "Not good. It’s a remarkably bimodal distribution as I perceive it, and maybe I'm wrong. But there aren't that many cases where I'm agonizing and if I'm agonizing I figure it probably should be a no.", "Dwarkesh Patel", "I guess that makes it even tougher if you do get rejected. Because it wasn't like, “ oh, you weren't a right fit for the job,” or “you almost made the cut.” It's like, “ No, we're actually just assessing your potential and not some sort of fit for the job.” Not only were you just not on the edge of potential, but you were also way on the other edge of the curve.", "Tyler Cowen", "But a lot of these rejected people and projects, I don't think they're spilling tears over it. Like you get an application. Someone's in Akron, Ohio, and they want to start a nonprofit dog shelter. They saw EV on the list of things you can apply to. They apply to a lot of things and maybe never get funding. It's like people who enter contests or something, they apply to EV. Nothing against non-profit dog shelters, but that's kind of a no, right? I genuinely don't know their response, but I don't think they walk away from the experience with some deeper model of what they should infer from the EV decision.", "Dwarkesh Patel", "How much does the money part of Emergent Ventures matter? If you just didn't give them the money?", "Tyler Cowen", "There's a whole bunch of proposals that really need the money for capital costs, and then it matters a lot. For a lot of them, the money per se doesn't matter.", "Dwarkesh Patel", "Right, then. So what is the function of return for that? Do you like 10x the money, or do you add .1x the money for some of these things? Do you think they add up to seemingly different results?", "Tyler Cowen", "I think a lot of foundations give out too many large grants and not enough small grants. I hope I'm at an optimum. But again, I don't have data to tell you. I do think about this a lot, and I think small grants are underrated.", "Dwarkesh Patel", "Why are women often better at detecting deceit?", "Tyler Cowen", "I would assume for biological and evolutionary reasons that there are all these men trying to deceive them, right? The cost of a pregnancy is higher for a woman than for a man on average, by quite a bit. So women will develop defense mechanisms that men maybe don't have as much.", "Dwarkesh Patel", "One thing I heard from somebody I was brainstorming these questions with–– she just said that maybe it's because women just discuss personal matters more. And so therefore, they have a greater library.", "Tyler Cowen", "Well, that's certainly true. But that's subordinate to my explanation, I’d say. There are definitely a lot of intermediate steps. Things women do more of that help them be insightful.", "Building Writing Stamina", "Dwarkesh Patel", "Why is writing skill so important to you?", "Tyler Cowen", "Well, one thing is that I'm good at judging it. Across scales, I'm very bad at judging, so there's nothing on the EV application testing for your lacrosse skill. But look, writing is a form of thinking. And public intellectuals are one of the things I want to support. Some of the companies I admire are ones with writing cultures like Amazon or Stripe . So writing it is! I'm a good reader. So you're going to be asked to write.", "Dwarkesh Patel", "Do you think it's a general fact that writing correlates with just general competence?", "Tyler Cowen", "I do, but especially the areas that I'm funding. It’s strongly related. Whether it's true for everything is harder to say.", "Dwarkesh Patel", "Can stamina be increased?", "Tyler Cowen", "Of course. It's one of the easier things to increase. I don't think you can become superhuman in your energy and stamina if you're not born that way. But I think almost everyone could increase by 30% to 50%, some notable amount.", "Dwarkesh Patel", "Okay, that's interesting.", "Tyler Cowen", "Put aside maybe people with disabilities or something but definitely when it comes to people in regular circumstances.", "Dwarkesh Patel", "Okay. I think it's interesting because in the blog post from Robin Hanson about stamina, I think his point of view was that this is just something that's inherent to people.", "Tyler Cowen", "Well, I don't think that's totally false. The people who have superhuman stamina are born that way. But there are plenty of origins. I mean, take physical stamina. You don't think people can train more and run for longer? Of course they can. It's totally proven. So it would be weird if this rule held for all these organs but not your brain. That seems quite implausible. Especially for someone like Robin, where your brain is just this other organ that you're gonna download or upload or goodness knows what with it. He’s a physicalist if there ever was one.", "Dwarkesh Patel", "Have you read Haruki Murakami's book on running ?", "Tyler Cowen", "No, I've been meaning to. I'm not sure how interesting I'll find it. I will someday. I like his stuff a lot.", "Dwarkesh Patel", "But what I found really interesting about it was just how linked building physical stamina is for him to building up the stamina to write a lot.", "Tyler Cowen", "Magnus Carlsen would say the same with chess. Being in reasonable physical shape is important for your mental stamina, which is another kind of simple proof that you can boost your mental stamina.", "When Does Intelligence Start to Matter?", "Dwarkesh Patel", "After reading the book, I was inclined to think that intelligence matters more than I previously thought. Not less. You say in the book that intelligence has convex returns and that it matters especially for areas like inventors. Then you also say that if you look at some of the most important things in society, something like what Larry and Sergey did, they're basically inventors, right? So in many of the most important things in society, intelligence matters more because of the increasing returns. It seems like with Emergent Ventures, you're trying to pick the people who are at the tail. You're not looking for a barista at Starbucks. So it seems like you should care about intelligence more, given the evidence there.", "Tyler Cowen", "More than who does? I feel what the book presents is, in fact, my view. So kind of by definition, I agree with that view. But yes, there's a way of reading it where intelligence really matters a lot. But it's only for a relatively small number of jobs.", "Dwarkesh Patel", "Maybe you just started off with a really high priori on intelligence, and that's why you downgraded?", "Tyler Cowen", "There are a lot of jobs that I actually hire for in actual life, where smarts are not the main thing I look for.", "Dwarkesh Patel", "Does the convexity of returns on intelligence suggest that maybe the multiplicative model is wrong? Because if the multiplicative model is right, you would expect to see decreasing returns and putting your stats on one skill. You'd want to diversify more, right?", "Tyler Cowen", "I think the convexity of returns to intelligence is embedded in a multiplicative model, where the IQ returns only cash out for people good at all these other things. For a lot of geniuses, they just can't get out of bed in the morning, and you're stuck, and you should write them off.", "Dwarkesh Patel", "So you cite the data that Sweden collects from everybody that enters the military there. The CEOs are apparently not especially smart. But one thing I found interesting in that same data was that Swedish soccer players are pretty smart. The better a soccer player is, the smarter they are. You've interviewed professional basketball players turned public intellectuals on your podcast. They sound extremely smart to me. What is going on there? Why, anecdotally, and with some limited amounts of evidence, does it seem that professional athletes are smarter than you would expect?", "Tyler Cowen", "I'm a big fan of the view that top-level athletic performance is super cognitively intense and that most top athletes are really extraordinarily smart. I don't just mean smart on the court (though, obviously that), but smart more broadly. This is underrated. I think Michelle Dawson was the one who talked me into this, but absolutely, I'm with you all the way.", "Dwarkesh Patel", "Do you think this is just mutational load or––", "Tyler Cowen", "You actually have to be really smart to figure out things like how to lead a team, how to improve yourself, how to practice, how to outsmart the opposition, all these other things. Maybe it’s not the only way to get there, but it is very G loaded . You certainly see some super talented athletes who just go bust. Or they may destroy themselves with drugs: there are plenty of tales like that, and you don't have to look hard.", "Dwarkesh Patel", "Are there other areas where you wouldn't expect it to be G loaded but it actually is?", "Tyler Cowen", "Probably, but there's so many! I just don't know, but sports is something in my life I followed. So I definitely have opinions about it. They seem incredibly smart to me when they're interviewed. They're not always articulate, and they’re sort of talking themselves into biased exposure. But I heard Michael Jordan in the 90s, and I thought, “That guy's really smart.” So I think he is! Look at Charles Barkley. He's amazing, right? There's hardly anyone I'd rather listen to, even about talent, than Charles Barkley. It's really interesting. He's not that tall, you can't say, “oh, he succeeded. Because he's seven foot two,” he was maybe six foot four tops. And they called him the Round Mound of Rebound. And how did he do that? He was smart. He figured out where the ball was going. The weaknesses of his opponents, he had to nudge them the right way, and so on. Brilliant guy.", "Dwarkesh Patel", "What I find really remarkable is that (not just with athletes, but in many other professions), if you interview somebody who is at the top of that field, they come off really really smart! For example, YouTubers and even sex workers.", "Tyler Cowen", "So whoever is like the top gardener, I expect I would be super impressed by them.", "Spotting Talent (Counter)signals", "Dwarkesh Patel", "Right. Now all your books are in some way about talent, right? Let me read you the following passage from An Economist Gets Lunch , and I want you to tell me how we can apply this insight to talent. “ At a fancy fancy restaurant, the menu is well thought out. The time and attention of the kitchen are scarce. An item won't be on the menu unless there's a good reason for its presence. If it sounds bad, it probably tastes especially good?”", "Tyler Cowen", "That's counter-signaling, right? So anything that is very weird, they will keep on the menu because it has a devoted set of people who keep on ordering it and appreciate it. That's part of the talent of being a chef, you can come up with such things.", "Dwarkesh Patel", "How do we apply this to talent?", "Tyler Cowen", "Well, with restaurants, you have selection pressure where you're only going to ones that have cleared certain hurdles. So this is true for talent only for talents who are established. If you see a persistent NBA player who's a very poor free throw shooter like Shaquille O'Neal was, you can more or less assume they're really good at something else. But for people who are not established, there's not the same selection pressure so there's not an analogous inference you can draw.", "Dwarkesh Patel", "So if I show up to an Emergent Ventures conference, and I meet somebody, and they don't seem especially impressive with the first impression, then I should believe their work is especially impressive.", "Tyler Cowen", "Yes, absolutely, yes.", "Dwarkesh Patel", "Okay, so my understanding of your book Creative Destruction is that maybe on average, cultural diversity will go down. But in special niches, the diversity and ingenuity will go up. Can I apply the same insight to talent? Maybe two random college grads will have similar skill sets over time, but if you look at people on the tails, will their skills and knowledge become even more specialized and even more diverse?", "Tyler Cowen", "There are a lot of different presuppositions in your question. So first, is cultural diversity going up or down? That I think is multi-dimensional. Say different cities in different countries will be more like each other over time.. that said, the genres they produce don't have to become more similar. They're more similar in the sense that you can get sushi in each one. But novel cuisine in Dhaka and Senegal might be taking a very different path from novel cuisine in Tokyo, Japan. So what happens with cultural diversity.. I think the most reliable generalization is that it tends to come out of larger units. Small groups and tribes and linguistic groups get absorbed. Those people don't stop being creative and other venues, but there are fewer unique isolated cultures, and much more thickly diverse urban creativity. That would be the main generalization I would put forward. So if you wanted to apply that generalization to talent, I think in a funny way, we come back to my earlier point: talent just tends to be geographically extremely well clustered. That's not the question you asked, but it's how I would reconfigure the pieces of it.", "Dwarkesh Patel", "Interesting. What do you suggest about finding talent in a globalized world? In particular, if it's cheaper to find talent because of the internet, does that mean that you should be selecting more mediocre candidates?", "Tyler Cowen", "I think it means you should be more bullish on immigrants from Africa. It's relatively hard to get out of Africa to the United States in most cases. That's a sign the person put in a lot of effort and ability. Maybe an easy country to come here from would be Canada, all other things equal. Again, I'd want this to be measured. The people who come from countries that are hard to come from like India, actually, the numbers are fairly high, but the roots are mostly pretty gated.", "Dwarkesh Patel", "Is part of the reason that talent is hard to spot and find today that we have an aging population?  So then we would have more capital, more jobs, more mentorship available for young people coming up, than there are young people.", "Tyler Cowen", "I don't think we're really into demographic decline yet. Not in the United States. Maybe in Japan, that would be true. But it seems to me, especially with the internet, there’s more 15-year-old talent today than ever before, by a lot, not just by little. You see this in chess, right? Where we can measure performance very well. There’s a lot more young talent from many different places, including the US. So, aging hasn't mattered yet. Maybe for a few places, but not here.", "Dwarkesh Patel", "What do you think will change in talent spotting as society becomes older?", "Tyler Cowen", "It depends on what you mean by society. I think the US, unless it totally screws up on immigration, will always have a very seriously good flow of young people that we don't ever have to enter the aging equilibrium the way Japan probably already has. So I don't know what will change. Then there's work from a distance, there's hiring from a distance, funding from a distance. As you know, there's EV India, and we do that at a distance. So I don't think we're ever going to enter that world..", "Dwarkesh Patel", "But then what does it look like for Japan? Is part of the reason that Japanese cultures and companies are arranged the way they are and do the recruitment the way they do linked to their demographics?", "Tyler Cowen", "That strikes me as a plausible reason. I don't think I know enough to say, but it wouldn't surprise me if that turned out to be the case.", "Dwarkesh Patel", "To what extent do you need a sort of “great man ethos” in your culture in order to empower the top talent? Like if you have too much political and moral egalitarianism, you're not going to give great people the real incentive and drive to strive to be great.", "Tyler Cowen", "You’ve got to say “great man or great woman ethos”, or some other all-purpose word we wish to use. I worry much less about woke ideology than a lot of people I know. It's not my thing, but it's something young people can rebel against. If that keeps you down, I'm not so impressed by you. I think it's fine. Let the woke reign, people can work around them.", "Dwarkesh Pate l", "But overall, if you have a culture or like Europe, do you think that has any impact on––", "Tyler Cowen", "Europe has not woken up in a lot of ways, right? Europe is very chauvinist and conservative in the literal sense, and often quite old fashioned depending on what you're talking about. But Europe, I would say, is much less woke than the United States. I wouldn't say that's their main problem, but you can't say, “oh, they don't innovate because they're too woke”, like hang out with some 63 year old Danish guys and see how woke you think they are once everyone's had a few drinks.", "Dwarkesh Patel", "My question wasn't about wokeism. I just meant in general, if you have an egalitarian society.", "Tyler Cowen", "I think of Europe as less egalitarian. I think they have bad cultural norms for innovation. They're culturally so non-egalitarian. Again, it depends where but Paris would be the extreme. There, everyone is classified right? By status, and how you need to wear your sweater the right way, and this and that. Now, how innovative is Paris? Actually, maybe more than people think. But I still think they have too few dimensions of status competition. That's a general problem in most of Europe–– too few dimensions of status competition, not enough room for the proverbial village idiot.", "Dwarkesh Patel", "Interesting. You say in the book, that questions tend to degrade over time if you don't replace them. I find it interesting that Y Combinator has kept the same questions since they were started in 2005. And of course, your co-author was a partner at Y Combinator. Do you think that works for Y Combinator or do you think they're probably making a mistake?", "Tyler Cowen", "I genuinely don't know. There are people who will tell you that Y Combinator, while still successful, has become more like a scalable business school and less like attracting all the top weirdos who do amazing things. Again, I'd want to see data before asserting that myself, but you certainly hear it a lot. So it could be that Y Combinator is a bit stale. But still in a good sense. Like Harvard is stale, right? It dates from the 17th century. But it's still amazing. MIT is stale. Maybe Y Combinator has become more like those groups.", "Dwarkesh Patel", "Do you think that will happen to Emergent Ventures eventually?", "Tyler Cowen", "I don't think so because it has a number of unique features built in from the front. So a very small number of evaluators too. It might grow a little bit, but it's not going to grow that much. I'm not paid to do it, so that really limits how much it's going to scale. There's not a staff that has to be carried where you're captured by the staff, there is no staff. There's a bit of free riding on staff who do other things, but there's no sense of if the program goes away, all my buddies on staff get laid off. No. So it's kind of pop up, and low cost of exit. Whenever that time comes.", "Dwarkesh Patel", "Do you personally have questions that you haven't put in the book or elsewhere because you want them to be fresh? For asking somebody who's applying to her for the grant?", "Tyler Cowen", "Well, I didn't when we wrote the book. So we put everything in there that we were thinking of, but over time, we've developed more. I don't generally give them out during interviews, because you have to keep some stock. So yeah, there's been more since then, but we weren't holding back at the time.", "Dwarkesh Patel", "It’s like a comedy routine. You gotta write a new one each year.", "Tyler Cowen", "That's right. But when your shows are on the air, you do give your best jokes, right?", "Will Reading Cowen’s Book Help You Win Emergent Ventures?", "Dwarkesh Patel", "Let’s say someone applying to emergent ventures reads your book. Are they any better off? Or are they perhaps worse off because maybe they become misleading or have a partial view into what's required of them?", "Tyler Cowen", "I hope they're not better off in a way, but probably they are. I hope they use it to understand their own talent better and present it in a better way. Not just to try to manipulate the system. But most people aren't actually that good at manipulating that kind of system so I'm not too worried.", "Dwarkesh Patel", "In a sense, if they can manipulate the system, that's a positive signal of some kind.", "Tyler Cowen", "Like, if you could fool me –– hey, what else have you got to say, you know? [laughs]", "Dwarkesh Patel", "Are you worried that when young people will encounter you now, they're going to think of you as sort of a talent judge and a good one at that so they're maybe going to be more self aware than whether––", "Tyler Cowen", "Yes. I worry about the effect of this on me. Maybe a lot of my interactions become less genuine, or people are too self conscious, or too stilted or something.", "Dwarkesh Patel", "Is there something you can do about that? Or is that just baked in the gig?", "Tyler Cowen", "I don't know, if you do your best to try to act genuine, whatever that means, maybe you can avoid it a bit or delay it at least a bit. But a lot of it I don't think you can avoid. In part, you're just cashing in. I'm 60 and I don't think I'll still be doing this when I'm 80. So if I have like 18 years of cashing in, maybe it's what I should be doing.", "Identifying talent early", "Dwarkesh Patel", "To what extent are the principles of finding talent timeless? If you're looking for let's say, a general for the French Revolution, how much of this does the advice change? Are the basic principles the same over time?", "Tyler Cowen", "Well, one of the key principles is context. You need to focus on how the sector is different. But if you're doing that, then I think at the meta level the principles broadly stay the same.", "Dwarkesh Patel", "You have a really interesting book about autism and systematizers. You think Napoleon was autistic?", "Tyler Cowen", "I've read several biographies of him and haven’t come away with that impression, but you can't rule it out. Who are the biographers? Now it gets back to our question of: How valuable is history? Did the biographers ever meet Napoleon? Well, some of them did, but those people had such weak.. other intellectual categories. The modern biography is written by Andrew Roberts , or whoever you think is good, I don't know. So how can I know?", "Dwarkesh Patel", "Right? Again, the issue is that the details that stick in my mind from reading the biography are the ones that make him seem autistic, right?", "Tyler Cowen", "Yes. There's a tendency in biographies to storify things, and that's dangerous too.", "Dwarkesh Patel", "How general across a pool is talent or just competence of any kind? If you look at somebody like Peter Thiel–– investor, great executive, great thinker even, certainly Napoleon, and I think it was some mathematician either Lagrangian or Laplace, who said that he (Napoleon) could have been a mathematician if he wanted to. I don't know if that's true, but it does seem that the top achievers in one field seem to be able to move across fields and be top achievers in other fields. Is that a pattern that you see as well?", "Tyler Cowen", "Maybe somewhat, but I don't think you can be top at anything–– or even most things. A lot of these are very successful people in other areas might just be near millionaires. Maybe they ran a car dealership and earned $3 million in 1966. Which was a pretty good life back then. But it's not like what they have ended up being.", "Dwarkesh Patel", "You quote Sam Altman in the book, and I thought it was really interesting. He says, “The successful founders I funded believe that they are eventually certain to be successful.” To what extent is this self-belief the result or the cause of being talented?", "Tyler Cowen", "Maybe it's both, but keep in mind the context for Sam. He’s talking about companies and startups, and startups succeed at such a low rate that the success is really on selecting for people with quite a bit of overconfidence. In other sectors, you won't in general find that same level of overconfidence. So you have to be careful. I agree with Sam, but he's talking about one set of things, not everything.", "Dwarkesh Patel", "Is that not true of other fields? If you're looking for a public intellectual, you're partially hoping for the outcome that they become remembered or their ideas have a lot of influence. That's also a rare thing to be able to do.", "Tyler Cowen", "I think more people stumble into it, for instance. There's more people who know early on that they can do it, but not for Sam-like reasons of overconfidence. They kind of know it because they can, and there are enough early tests. So I still think it's different. And there's more stumbling into it by accident.", "Dwarkesh Patel", "Which better describes your intellectual journey? Like were you in some sense a little overconfident in your 20s?", "Tyler Cowen", "There's an interesting break in my life that relates to stumbling into things. I grew up with no internet so I thought I would do quite well. In that sense, I was overconfident, but I had no notion that I would have large numbers of people listening to me. I just didn't think about the internet! So in that sense, I totally stumbled into the particular way in which I ended up doing well. But I was still, at a younger age, overconfident.", "Dwarkesh Patel", "That’s interesting. I wanted to backtest some of your methods of finding talent, but for certain people. So we just talked about Haruki Murakami ; let's use him as an example. In his 30s, I believe he was running a bar. The novelist Haruki Murakami is just running a bar, right? It doesn't have to be him in particular, but just generally think of like a 30-year-old–– you go to Japan, you go to a bar, you start talking to the bartender who also happens to be in the place. What would the conversation look like? How would you identify that this person could be a great novelist or anything?", "Tyler Cowen", "I think my chance of identifying great novelists is very low. And it's one reason why it's not something I try to do.", "Dwarkesh Patel", "Interesting. Well, why is that?", "Tyler Cowen", "You see, when I look at biographies, there seem to be so many instances of people who don't show obvious signs of promise early on. Maybe if I knew more about such people, I could develop markers. It's possible, right? Like my chance of being good at that is probably way above average, but I definitely don't think I'd be good at it now.", "Dwarkesh Patel", "Interesting. And what do you think makes novelists so hard to predict?", "Tyler Cowen", "They can blossom much later, and very often, they do. A very high percentage of them are women whose earlier lives are interrupted–– often by children, but not only that, this gets back to the late bloomers thing. There's also something quite discreet about a novel–– until a person has done it, it's hard to tell how good they are. Or say a great nonfiction book: with Taleb or Pinke or whoever, you could read their earlier blog posts and just flat out see, “ oh, they're really smart, maybe they could write a great book.” I wouldn't say anyone could do that, but most people we know could do that. I can spot them earlier. But with a novel, I can't.", "Dwarkesh Patel", "Do you think that's also true? It sounds very similar to a startup founder. Even with regards to the time horizon, you haven't really done anything that's like it before. The time horizon would be maybe like 5 to 10 years? Maybe it'll take shorter, but––", "Tyler Cowen", "There are more intermediate benchmarks with startups. Just how good a job do they do trying to raise their first round? There's a lot you can watch. It's not indicative of product-fit to the market and a bunch of other things. But you see a lot early on. How good is the pitch deck? Again, there are some great things that had terrible pitches, but I think you see way more early signs. Maybe novelists show early signs that I don't know about. So again, I'm suggesting that maybe I could learn, but right now, I'm totally at peace with that one. Joseph Conrad, was I going to get that? Herman Melville? I don't think so.", "Dwarkesh Patel", "Interesting. Okay, well, let's backtest with another––", "Tyler Cowen", "Like, “ Hey Joe, you're from Poland, and you're gonna write in English?” Come on you know, get real.", "Dwarkesh Patel", "Scott Aaronson , as you know, is a famous complexity theorist, computer scientist, and he was actually my former professor. He wrote in a blog post about standardized tests “I was a 15 year old with perfect SATs and a published research paper, but not only was I young and immature with spotty grades and weird academic trajectory, I had no sports, no music, no diverse leadership experiences. I was a narrow linear A to B thinker who lacked depth and emotional intelligence. The exact opposite of what Harvard and Princeton were looking for in every way. ” Now, what would happen to Scott Aaronson if he at that time had applied to Emergent Ventures?", "Tyler Cowen", "I’ve never met Scott, but odds are very strong that we’d fund him. From the sound of it. Again, I don't know him at all.", "Dwarkesh Patel", "But the narrow linear thinker––", "Tyler Cowen", "I don't know what that means. A lot of people misdescribe themselves. They say things like, “ Oh, if you ask me those questions, I would suck.” And they're wrong. They wouldn't suck. I know they don't. And I suspect Scott's self-description is a bit off. But I think he would do very well at Emergent Ventures..", "Existential Risks and the Longterm", "Dwarkesh Patel", "Yeah, I agree. Let's talk about Effective Altruism. You have expressed skepticism to the idea that you can use longtermism to say that existential risks matter more than everything else because we should be optimizing for the branch of the decision tree where we survive for millions of years. Can you say more of why you're skeptical of that kind of reasoning?", "Tyler Cowen", "Well, I'd like to express my skepticism a little more clearly. I think existential risk matters much more than almost anyone thinks. In this sense I'm with the EA people, but I do think they overvalue it a bit. I would just say I don't think there are many good things we can do to limit existential risks that are very different from looking for mortality and growing GDP, supporting science, trotting down a pretty familiar list of things that don't all have to be that long-term. In that sense, I think they flip out about it a bit too much and have all these super-specific hypotheses. But we should invest in good things now. I do favor an Asteroid Protection Program , by the way.", "Dwarkesh Patel", "To the extent that there was a trade off, in that hypothetical, would you put the same weight on extra work that they do? Or do you just differ with them on how you actually go about solving essential risks?", "Tyler Cowen", "Probably more the latter. I think they're not epistemically modest enough when it comes to existential risk. They think they have all these petite very particular hypotheses about AGI and we've got to prevent this. That's where I really differ from them. I think their ability to limit that risk, however great or small, might basically be zero. That if AGI is a risk, it's the worst set of procedures that will do you in and you can't regulate those very well at all. Putting everyone at your favorite tech company through this training about alignment… I'm not against doing that. But like, come on! You know, if it's gonna happen, it's like handling pandemic materials. It's the sloppiest people you've got to worry about, and they are not sitting in on your class on AGI in alignment.", "Dwarkesh Patel", "Yes. Although it isn't surprising to the extent that companies in the US that (maybe) care more than other other entities about alignment are actually the first currently. Like Open AI for instance.", "Tyler Cowen", "Yes but it won't matter. Because if that view is the correct one, and I don't think it is, the more screwed up successors will just come 10 years later. And you know, Skynet goes live.. but 10 years later.", "Dwarkesh Patel", "I'm curious, why is the possibility of humanity surviving for a very long time, not something that is a strong part of your worldview–– given that you’re a long termist.", "Tyler Cowen", "I think the chance of there being a major war with nuclear weapons or whatever comes next. While very low in any given year, you just have the clock tick, and that chance adds up. We're not going to be here for another 100,000 years. It’s a simple argument, but I'm not a pessimist about any given year at all.", "Dwarkesh Patel", "Right, but if the odds are like sufficiently above zero, then do you just not buy the argument that anything above zero is just huge? And that we should be optimizing for that?", "Tyler Cowen", "I'm all for efforts to make nuclear weapons safer. But it's hard to know exactly what to do. Like what do we do in Ukraine now? Should we be more tough, less tough? There's different arguments, but they're not that different from just the normal foreign policy arguments. There's not some special branch of EA longtermism that tells you what to do in Ukraine. Those people, if anything, tend to be kind of underinvested in historical and cultural forms of knowledge. So I just don't think you buy that much extra stuff by calling yourself worried about existential risk. It's plenty of people in the US foreign policy establishment who think about all this stuff. Until recently, most of them had never heard of EA. Maybe even still. It doesn't change the debate much.", "Dwarkesh Patel", "I'm sure you've heard these arguments, but it seems with nuclear war, it's hard to imagine how it could kill every single person on the planet.", "Tyler Cowen", "It probably won’t. But we’ll be permanently set back kind of forever. And in the meantime, we can't build asteroid protection or whatever else. It’ll just be like medieval living standards: super small population, feudal governance, lots of violence, rape. There's no reason to think like, oh, read a copy of the Constitution in and 400 years, we're back on track. That's crazy wrong.", "Dwarkesh Patel", "We did emerge from feudalism , right? So if it happened once, isn't that example enough?", "Tyler Cowen", "We don’t know! There’s like–– hundreds of thousands of years of human history where we seem to make diddly squat progress. We don't know why. But don't assume that just because it happened once that means you always rebuild. I don't think it does.", "Dwarkesh Patel", "Right. If it's not just the idea of everything being laid in space, what would it take for our descendants to be able to recover industrial civilization?", "Tyler Cowen", "I don't think we have good theories of that at all. I would just say we had a lot of semi-independent operating parts of the world, circa 1500. And not that many of them made much progress.", "Dwarkesh Patel", "Right. I mean, I think of you as an optimist––at least by temperament. But this seems like one of the more pessimistic things I've heard overall, anywhere, because of the idea that not only will human civilization be decimated almost surely, but that they will never be able to recover.", "Tyler Cowen", "I wouldn't say never, you know, never say never. But there's no reason to assume you just bounce back–– I would say we don't know. Other problems will come upon us: nuclear winter or crop failures, climate change. It just seems very daunting to me. And like the overall history of mammalian species is not that optimistic. The fact that sex exists is biologically very pessimistic. I mean, I don't think of myself as a pessimist, but you can call it that.", "Dwarkesh Patel", "Can you explain that quote? “The fact that sex exists is biologically very pessimistic.”", "Tyler Cowen", "Anything that stands still gets destroyed by parasites or destroyed by something. So you've got to randomize and change what you are through sex. That's like the clearly winning model for at least larger things. That's a sign nothing survives for that long. So the existence of sex is the most pessimistic thing there is, I find that ironic. So I'm not the pessimist, sex is.", "Dwarkesh Patel", "Let's say I take your argument that economic growth is very important. Does that imply anything in particular about what somebody should do with their life? Or is it basically just an argument about policy?", "Tyler Cowen", "It can guide your life a bit. So to think just more carefully: How do you fit in? Maybe you could do something important. Maybe you could have higher aspirations. For most people, that won't matter. But again, certainly some people could do much more. I do my best to try to help that along.", "Dwarkesh Patel", "Right. It doesn't have to do with the fact that we don't know much about what causes economic growth? Or is it just that you could never offer concrete advice about what you could do to increase economic growth?", "Tyler Cowen", "I give concrete advice to people all the time with a grain of salt. I'll tell people, “I think you should go to this school, not that school.” That's concrete advice. I don't think we know so little about growth. We certainly know a lot about how to wreck growth, right? So we know enough.", "Dwarkesh Patel", "Right, but not enough to create something like 80,000 hours for progress. What do you think?", "Tyler Cowen", "You mean an institution? I mean there's an Institute for progress..", "Dwarkesh Patel", "No. 80,000 hours has a list, “ here's the things you should consider doing with your career,”", "Tyler Cowen", "That’s not the right way to think about it. You want to sit down with the person, understand the entire context, see what they could do, see if there's a way you could or should bend up the curve. But a list now… that's a little to EA static maximization for me. If you focus more on learning about cultures, history, you're not going to come up with some list as the way to approach that.", "Dwarkesh Patel", "What are culture and history going to show you?", "Tyler Cowen", "They show how complex things are, and the people who made very significant contributions show how complex the inputs were into that. Or even people who did very terrible things like Napoleon–– understanding Napoleon, where he came from, the ideas he had, it’s super complicated. I don't really get the list version. Here's the list for baby Napoleon: like don't invade Russia or whatever. It just seems to miss the point.", "Cultivating Young Talent", "Dwarkesh Pate l", "If a young person were to read a bunch of biographies, not as early career advice, but just generally as trying to better understand how they can be more effective, do you think that would teach them that things are more complex than they thought? Would that give them any practical benefit?", "Tyler Cowen", "I think both! Napoleon is a good person to read biographies of. When I was young, sports and chess players were my grist. And I feel I learned a lot from that. I don't know that it was any big lesson, but it’s just how you saw all these histories of people persevering and self-improving. That's worth a lot. So I don't think it's a waste of time at all. I think it's probably essential. I don't know if you have to read biographies , like if you just follow sports careers, that might be enough. But that's kind of like reading a biography, right? YouTube can also do it! I don't like to fixate on the biography, they seem in a way, inefficiently long.", "Dwarkesh Patel", "Is it like somebody having a blog and you following along with their blog on a weekly or daily basis?", "Tyler Cowen", "Absolutely. That’s reading a biography. I've been blogging for almost 20 years, and I hope there's some lesson in the constancy of that for people.", "Dwarkesh Patel", "I was struck while reading your book that some of the advice you offer for how to ask good questions in hiring is actually great advice for also how to ask good questions in a podcast. Like you keep the conversation flow going to get them interested in talking about something that they're very interested in.", "Tyler Cowen", "Don't worry about changing the subject, Just get them on something where they're involved and excited.", "Dwarkesh Patel", "So to what extent was just informed by having a podcast?", "Tyler Cowen", "Oh, quite a bit. You can think of podcasts in the sense like interviews. They're kind of very judgmental, like, “How worthy are you?” Right? People are afraid more and more.", "Dwarkesh Patel", "You have a quite mellow personality, and I'm similar in that way. Do you think that has any sort of intellectual consequence in the sense that if you could experience like this, the exuberant highs or the incapacitating lows, you would be maybe less modest or moderate about your views about longtermism and economic growth? Like you’d be the type to want to get everyone into the galaxy?", "Tyler Cowen", "I might be more creative, but I think I would be more wrong.", "Dwarkesh Patel", "Interesting. On that trade off, where should one be if they're trying to reason about important topics? Should you just try to increase the variance so you can get to the important things?", "Tyler Cowen", "Again, I think it’s context specific. You want to understand where a person is, understand they were probably born that way and that you can only budge them so much, regardless of wherever you might think they should be. Just try to marginally improve how they're dealing with the flow coming their way. I prefer to work with people's strengths and boost their strengths, rather than have a list set out of how to reform them. I think it's a way more productive way to do things. It's lower conflict, and as a coach or mentor or co-worker, it's way less stressful for you. Because you're being very positive with them, and it's sincere, so you'll just do more of it. If you're hitting them over the head, “Why don't wake up at 7am in the morning,” and maybe they should, but it's like come on, you have something better to do than that. They can figure that out for themselves. Tell them something else that they can actually find useful.", "The Lifespans of Public Intellectuals", "Dwarkesh Patel", "When you interviewed Andrew Sullivan, one of the things he said was that the reason he decided to write about gay rights was that he got HIV, and he realized he might not have that long to live. Yeah. Ofcourse, we see the consequence of that in society today. If you find out you only have five years to live, what is the book you would write and what is the argument you would make?", "Tyler Cowen", "Five years from now? I think I would do more Emergent Ventures. I would finish the book I'm working on, and I might consider a sequel to Talent depending on Daniel's plans. But my marginal thing is I don't feel like writing more books. Though, I will write more books.", "Dwarkesh Patel", "So like, it's like in your late career that you think the most important thing is institution building and talent spotting.", "Tyler Cowen", "Yes, at this point. I've written like 16-17 books. So it's not like I haven't had my say.", "Dwarkesh Patel", "One thing about top performers in many fields is that they have intricate philosophies and worldviews. Like, you know, Peter Thiel obviously with the religion stuff , but even like somebody like George Soros, with the theory of reflexivity, to what extent do you think that these are very important in their success? Maybe if you just have somebody who has plus four-standard-deviations in verbal intelligence, one of the things they'll do other than be very successful is to just create intricate worldviews?", "Tyler Cowen", "With all the people I know who are like that, such as Peter, I feel that it was important for their success. Soros? I don't know. But since all the people I do know, it seems to matter. My intuition is that it matters a lot more broadly. Like, you need a unique way of looking at the world.", "Dwarkesh Patel", "Is that a correlate in the sense that it jolts you out of complacency?", "Tyler Cowen", "And it protects you from other people's idiocy. Your mimetic desires get channelled away from a lot of other things that might even be good overall, but they would distract you.", "Dwarkesh Patel", "Right. But maybe the actual theory itself is not the edge.", "Tyler Cowen", "Correct. Now sometimes it is the edge. There’s the famous story of how Peter knew Rene Girard and saw Facebook would be a big thing, right? It's probably true. But it doesn't have to be the edge, I would say.", "Dwarkesh Patel", "Yeah. So one thing I found really interesting in your book, What Price Fame was you had this interesting discussion where you cite your former adviser, Thomas Schelling , about how certain celebrities can make themselves focal points in the culture. I'm curious about how we can apply this to public intellectuals. In recent years, we've seen a lot of public intellectuals become a focal point in the cultural war or in just general discussions. So in some sense, this has happened to you as well, right? Where we've seen this with many of your podcast guests: Jordan Peterson, Christopher Hitchens, Sam Harris––how do public intellectuals make themselves focal points?", "Tyler Cowen", "Well, by doing something noteworthy, right? It can be an idea, but it can also be a form of performance art, and maybe performance art has become more important. I think my own vocality has more to do with performance art than with any specific idea I have. I think very carefully about how to stay somewhat focal for a long period of time. So it's quite unusual that I've mostly increased my vocality for over 20 years now. It's much more common that people come and go and decline. I work quite hard not to do that. A lot of the IDW people : very clear peaks, which now lie in the past, and I suspect will have fairly rapid declines. Well I don't want that to happen to me. I want to be in the thick of things, just for selfish reasons.", "Dwarkesh Patel", "What do you think is the explanation for why they peak so early?", "Tyler Cowen", "I don't know if early is the word. Jordan Peterson was at it for a long time. But they made extreme bets on very particular ideas, and maybe different people will disagree about those ideas. But I think a lot of them are losing ideas, even if they might be correct in some ways. I've done much less to bet on a single idea. You can say it’s kind of market-based economics. But it's not so radical. Like we're still in a capitalistic society. I have enough to say about new issues that come along all the time. I’ll be taking a mostly pro-capitalist point of view. Like it's just not that obsolete; maybe it's not peak fashion now, but it's fine to be doing that.", "Dwarkesh Patel", "They were overleveraged in the GOP margin cult.", "Tyler Cowen", "I'm not making a big bet on Ivermectin is one way to put it.", "Risk Aversion in Academia", "Dwarkesh Patel", "Why doesn't tenure make academics less risk averse?", "Tyler Cowen", "I've thought about this quite a bit. I think the selection filters are very strong. They're very pro-conformity. People care a lot about their peers and think of them. You're selecting conformists to begin with, and so you're just literally not trained in how to take risks. It's not always that easy, right? So let's say you're like F*** this tenure. They get up and take a risk. What do you say that the demand curve slope upwards? [laughs] It's just not that easy.", "Dwarkesh Patel", "Is there something that would be analogous to a course on risk that makes more people more risk-taking?", "Tyler Cowen", "I would say at Mercatus , we have a lot of students come through here, and we do try to teach them all in different ways about how to take more career risk. I think we've been remarkably successful at that.", "Dwarkesh Patel", "What is being taught that makes it more risky?", "Tyler Cowen", "Well, first of all, they observe what we all do. And they just learn by example. If they want advice, like how to run a podcast, how to write a blog, how to try to work for someone on the Senate Banking Committee, we have people who've done that, and we'll put them in touch. So we don't have “a class.” But there's a very serious, focused effort that anyone with any interest in learning how to do things and take more risk can get that here, for free.", "Dwarkesh Patel", "How malleable is risk aversion?", "Tyler Cowen", "I think a lot of the students we produce have done unusual things. Certainly relative to other programs. It could be better, of course, but us leading by example is the number one way we teach them.", "Dwarkesh Patel", "You had an interesting post in 2011 where you're talking about which public intellectuals have been the most influential. One thing I noticed from the list was that the people you said who were most influential were people who were advocating for one very specific narrow policy chain their entire careers.", "Tyler Cowen", "A lot of those people fade. Now the two I cited: Andrew Sullivan and Peter Singer have not faded. It's a risky strategy. I would get bored too quickly doing that.", "Dwarkesh Patel", "Okay, I was just about to ask: is there a reason why you haven't adopted a strategy yourself?", "Tyler Cowen", "That's the way in which I am risk averse. I don't have a single issue. Like for Andrew it was gay marriage, and great, he won. That's amazing. I'm all for it. But in part one, because it is the thing he cared about most. And I don't have a comparable thing like that. The closest thing that I have is like, “here's my way of thinking I'm going to show it to you.” And I have done that pretty monomaniacally. And I think I've been somewhat successful.", "Dwarkesh Patel", "Does this imply that the most interesting intellectuals will never be the most influential? Because to be influential, you have to focus on just one thing?", "Tyler Cowen", "On average, that’s true. So John Stuart Mill was super interesting, and he wrote about many, many things and he had some influence. But was there a single thing where he saw it through and made it happen? I don't know. Or Richard Cobden, not too interesting as a deep economic thinker, but free trade against the Corn Laws –– he and John Bright made that happen. I would say they were correct, but it's not that interesting to read them.", "Dwarkesh Patel", "Do you think some people just change people's worldviews in general? They might not have impacted any one person all that much, no one’s going to become evangelists because they read this person, but they could be impactful towards the broader cultural change in a way that's hard to measure? Do you think these people are especially influential over the long term?", "Tyler Cowen", "I don't know, I hope there's some influence there, but it's very hard to say. Hard to measure.", "Dwarkesh Patel", "Much of the blogosphere or the legendary parts of it were started in 2000s to 2010s–– people like you, Paul Graham , and Scott Alexander . Do you think that people starting blogs today are just LARPing the new moment you had at the time, or do you think that this is actually a format that can survive the test of time?", "Tyler Cowen", "I think it will survive. So we have early bloggers, Samuel Pepys, James Boswell, they've survived, right? It's good material. That's the 17th and 18th centuries. So why can’t it survive today when technology makes it easier and more readily preserved? Just the notion that you write and someone reads seems to be extremely robust.", "Dwarkesh Patel", "Right. So you write on the internet on a regular basis?", "Tyler Cowen", "Why not? So many writers like writing on a regular basis, and it has some practical advantages so I'd be very surprised if that went away. Now, at this moment, you could say substack is bigger than blogs, it's a bit different. But it's broadly a similar thing.", "Dwarkesh Patel", "Well, what do you see as the differences between a substack and a blog?", "Tyler Cowen", "Substack posts tend to be longer. With a lot of blogs, you're very much the editor and not just content-creator. You're sometimes an editor in substack, but much less. So something like what instapundit has done. I don't think there's really a substack version of that, for better or worse. You don't need one; it's done on blogs. He's mostly been an editor. A lot of Marginal Revolution is just me as editor, or sometimes Alex as editor.", "Dwarkesh Patel", "Are you worried that the same format and look of substack will make people also intellectually less creative?", "Tyler Cowen", "I think substack encourages posts that are too long, too whiny, and too self reflective, and too obsessed with a relatively small number of recurring topics. So do I worry? Yes but are there enough mechanisms in the whole thing for self-correction? Obviously, there's competition, readers get sick of stuff that's not great, it cycles through whatever, it'll be fine.", "Is Stagnation Inevitable?", "Dwarkesh Patel", "Is the reason that we've been seeing a decline in research productivity explained by the buildup of scientific bureaucracy? If it's been consistent over decades, we just have slowly deteriorating research productivity that the only explanation can be that we just pick the low-hanging fruits.", "Tyler Cowen", "I think that's a reason. But I don't think it's the most fundamental reason. In this sense, maybe like Patrick Collison would see it as more important than I do. I think exhausting the literal low hanging fruits at the technological level is the main reason. With those, you can replenish them with new breakthrough general purpose technologies. But that takes a long time. I see that as the main reason, and the ongoing bureaucratization of science–– I fully accept and want to reform and want to change. But it's not literally my number one reason for stagnation.", "Dwarkesh Patel", "Right. Is it just like a sine wave where you'll just have periods of easy-grab innovations and then harder-grab? Or is there something particular about this stagnation period?", "Tyler Cowen", "I don't know what the function looks like. It just seems to me that today, we have enough diverse sources of ideas, enough wealth, enough different universities, research labs, that it ought not to go too badly. There's an awful lot of conformity, but it doesn't seem that absolute or extreme. With something like mRNA work, right. AI is making a lot of progress, fusion is being talked about in a serious way. It doesn't seem that grim to me.", "Dwarkesh Patel", "What I find interesting is that you are an optimist in the short term, given this uncertainty, but in the long term, given the uncertainty about the future of human civilization, you're a pessimist.", "Tyler Cowen", "It’s the same view. The more “progress” advances, the easier it is to destroy things. The chance of an accident can be small, and I think it is small. But again, bad things happen. It’s easier to destroy than create.", "Dwarkesh Patel", "Right. Do you have an emotional reaction to the idea that the human story is almost certain to end? Do you think we only have 700 years of this left?", "Tyler Cowen", "I don't know what I can do about it. I try actually to do something about it so I have a reaction. But I'm aware of the extreme fallibility embedded in all such projections. I'm like, let's just wake up this morning, and let's do some stuff now. And like, I'm going to do it. I hope there's a payoff under all these different scenarios.", "Dwarkesh Patel", "Yeah. Do you think that as state capacity just continues to decline—", "Tyler Cowen", "I don't know that it's declining. It feels like it is. But it's a bit like being in the longest line at the supermarket, you notice it more. The US State has done a bunch of things well. If you look at the war against terror, I don't know who or what gets the credit, but compared to what we expected right after 9/11, it’s gone okay. Operation warp speed went amazingly well. Just the local DMV works way better than it used to. So it's a very complex picture when it comes to state capacity. It's not flat out in decline, I would say.", "Dwarkesh Patel", "So what is the explanation for why it gets better over time? There's a public choice theory explanation for why it might get worse, but to the extent that certain parts of it are getting better over time..", "Tyler Cowen", "The explanation I have is so stupid, I'm embarrassed to present it. You have some people in the system who want to make it better, and they make it better. It’s not very sophisticated. I don't know a deep way of tracing it to differential incentive structures. It’s just demand.", "Dwarkesh Patel", "Okay. So are you optimistic about libertarianism in the long term, if saved capacity continues to get better?", "Tyler Cowen", "I don't think we'll ever have libertarian societies. I think there's quite a good chance we will get more good reforms than bad ones, so capitalism, democracy, and broadly classical liberal values will advance. That's optimistic. I think there's quite a good chance for that.", "Dwarkesh Patel", "Is there any difference between you and Fukuyama’s view of the end of history ? That there’s nothing more compelling than capitalist liberal democracy?", "Tyler Cowen", "Well, he's changed his view a number of times. You can read the original Fukuyama as being quite pessimistic, there's something about demand for esteem and self-respect that the end of history doesn't satisfy so that unravels. Then there's the Fukuyama view that the rest of history is all about how we’ll manipulate biology–– which seems to be significant. Maybe he overstated it, but I don't dismiss it at all. Then it's like all these other Fukuyama restatements since then that makes me dizzy. I just asked a simple question: are you long the market or short the market? I'm long the market. I don't know what he is. Very few people are short so I hope he's long the market too. This is one of my favorite forcing questions. Are you long the market? Are you short the market? People spew so much bullshit, and it's all tribalism, so when you ask them “ Are you short the market? ”, they say something like “Oh, well, I haven't bought anything lately. ” It's like they become morons when you ask them this question. They should just say, “ those are my tribalist sympathies, I'm neurotic and really stressed.” What I do is pretty optimistic, of course.", "Dwarkesh Patel", "But the general fact that people are more neurotic, it seems like Fukuyama is right in the sense that, the last man in liberal democracy will be a neurotic mess. I think that's how somebody could characterize American politics at least. So is he right in a way that humans are not satisfied by liberal democracies?", "Tyler Cowen", "I don't think people are satisfied by anything. He's right. I'm not sure it's a special problem for a liberal democracy, probably less. There are other ways to anesthetize yourself.", "Dwarkesh Patel", "So there's no form of government or like no structure of society where people are just like, generally a mess.", "Tyler Cowen", "I haven't seen it. Yeah, like, when would that have been? Maybe right after you went to war, there's some kind of giddiness and desire to build for the future. But it can't last that long.", "What are Podcasts for?", "Dwarkesh Patel", "I'm curious why you continue to read. One of the reasons you say that reading is fast for you is because you know many of the things that are already in books. So then why continue doing it, if you already know many of the things that are in there?", "Tyler Cowen", "Well, it's often frustrating, but I do try to read in new areas. I very much prefer travelling to reading as a way of learning things. But I can't always travel. At the margin, I would rather travel more and read less. Absolutely.", "Dwarkesh Patel", "Okay. Let me ask a meta question. What do you think podcasts are for? What is happening?", "Tyler Cowen", "To anaesthetize people? To feel they're learning something? To put them to sleep. So they can exercise and not feel like idiots. Occasionally to learn something. To keep themselves entertained while doing busy work of some kind.", "Dwarkesh Patel", "Is this the same as the anesthetizing? [laughs]", "Tyler Cowen", "You want to feel you're imbibing the most important ideas, and there are very costly, tough ways to do that. For example: to actually work on one of those problems as a researcher. But most people can't do that (through no fault of their own). Even if they’re academics, maybe they just can't do it. So one of the next best things is to listen to someone to at least pretend that they've done it. And it's okay, it's a substitute. Like, why not? What are you supposed to do? Watch TV?", "Dwarkesh Patel", "Okay, but is your own podcast a complement to actual intellectual inquiry?", "Tyler Cowen", "I don't assume that it is. I think of it as a very high-class form of entertainment. More than anything, right, which I like to be clear. I don't feel bad about that.", "Dwarkesh Patel", "Yeah, but I do wonder if the substitute would have actually been real engagement or if it would have just been just pure entertainment.", "Tyler Cowen", "It will probably be a lesser podcast, I guess.", "Dwarkesh Pate l", "Yeah. Well, on that note, Tyler, this was a lot of fun. I do want to thank you, especially because you were my fourth guest on the podcast, and it was great having you on early; it was a huge help in terms of growing the podcast.", "Tyler Cowen", "Happy to be on it again. Yeah, thank you for coming by." ]
[ "https://en.wikipedia.org/wiki/Garett_Jones", "https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737", "https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737", "https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814", "https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814", "https://marginalrevolution.com/marginalrevolution/2021/06/how-and-why-is-conquests-second-law-true.html", "https://www.worldbank.org/en/home", "https://en.wikipedia.org/wiki/Neoliberalism", "https://conversationswithtyler.com/episodes/marc-andreessen/", "https://en.wikipedia.org/wiki/Bourbon_Restoration_in_France", "https://www.britannica.com/biography/Maximilien-Robespierre", "https://www.startupgrind.com/blog/the-collison-brothers-and-story-behind-the-founding-of-stripe/", "https://encyclopedia.ushmm.org/content/en/map/jewish-communities-in-central-europe", "https://www.britannica.com/topic/path-dependence", "https://en.wikipedia.org/wiki/P_versus_NP_problem", "https://www.businessinsider.com/mark-zuckerbergs-brutal-prank-on-sequoia-2010-5", "https://medium.com/paloit/second-order-effect-in-product-design-and-strategy-82c7fd2c52e6#:~:text=Second%20Order%20Effect%20refers%20to,have%20knowledge%20or%20control%20of%20.", "https://www.webmd.com/drugs/2/drug-16962/modafinil-oral/details", "https://www.amazon.com/Case-against-Education-System-Waste/dp/0691174652", "https://www.amazon.com/Case-against-Education-System-Waste/dp/0691174652", "https://www.mercatus.org/emergent-ventures", "https://en.wikipedia.org/wiki/Bryan_Caplan", "https://en.wikipedia.org/wiki/Goodhart%27s_law#:~:text=All%20metrics%20of%20scientific%20evaluation,people%20start%20to%20game%20it.", "https://en.wikipedia.org/wiki/Jerry_West", "https://en.wikipedia.org/wiki/Pat_Riley", "https://www.cancer.gov/publications/dictionaries/genetics-dictionary/def/polygenic-risk-score#:~:text=An%20assessment%20of%20the%20risk,Also%20called%20PRS.", "https://www.sciencedirect.com/topics/social-sciences/social-mobility#:~:text=Social%20mobility%20is%20the%20movement,class%20or%20status%20group%20formation.", "https://www.emergent.vc/", "https://www.effectivealtruism.org/", "https://plato.stanford.edu/entries/libertarianism/", "https://www.lesswrong.com/posts/s8yvtCbbZW2S4WnhE/what-exactly-is-the-rationality-community", "https://www.statisticshowto.com/what-is-a-bimodal-distribution/", "https://slab.com/blog/stripe-writing-culture/", "https://www.overcomingbias.com/2019/09/stamina-succeeds.html", "https://en.wikipedia.org/wiki/Robin_Hanson", "https://www.amazon.ae/What-Talk-about-When-Running/dp/0307389839", "https://en.wikipedia.org/wiki/Michelle_Dawson", "https://www.google.com/search?q=G+loaded&sxsrf=ALiCzsaC0Np0zJ4Jl4C_zAKFuNtIn3eqlA%3A1663619755853&ei=q9IoY5TVM-O_xc8Pxca6uA0&ved=0ahUKEwjU1trE2qH6AhXjX_EDHUWjDtcQ4dUDCA4&uact=5&oq=G+loaded&gs_lcp=Cgdnd3Mtd2l6EAMyBQgAEIAEMgUIABCABDIFCAAQgAQyBQgAEIAEMgUIABCABDIFCAAQgAQyBggAEB4QFjIGCAAQHhAWMggIABAeEA8QFjIICAAQHhAPEBZKBAhBGABKBAhGGABQAFgAYKsCaABwAXgAgAHFAYgBxQGSAQMwLjGYAQCgAQKgAQHAAQE&sclient=gws-wiz#:~:text=definition%20of%20G,%E2%80%BA%20G%2Dlo...", "https://www.amazon.com/Economist-Gets-Lunch-Everyday-Foodies/dp/B00B1KZ8JG", "https://www.amazon.com/Creative-Destruction-Globalization-Changing-Cultures/dp/0691117837", "https://en.wikipedia.org/wiki/Woke#:~:text=In%20this%20pejorative%20sense%2C%20woke,political%20correctness'%20gone%20awry%22.", "https://www.ycombinator.com/", "https://www.amazon.com/Age-Infovore-Succeeding-Information-Economy/dp/0452296196#:~:text=%22The%20Age%20of%20the%20Infovore,endlessly%20flowing%20and%20incoherent%20information.", "https://www.amazon.com/Napoleon-Life-Andrew-Roberts/dp/0143127853", "https://blog.samaltman.com/", "https://www.harukimurakami.com/", "https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb", "https://stevenpinker.com/", "https://www.scottaaronson.com/", "https://scottaaronson.blog/?m=201409", "https://en.wikipedia.org/wiki/Longtermism#:~:text=Longtermism%20is%20an%20ethical%20stance,reduce%20existential%20risks%20to%20humanity.", "https://en.wikipedia.org/wiki/Asteroid_impact_avoidance", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://www.skynetworldwide.net/", "https://www.britannica.com/topic/feudalism", "https://80000hours.org/2013/08/our-progress/", "https://80000hours.org/2013/08/our-progress/", "https://en.wikipedia.org/wiki/Andrew_Sullivan", "https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814", "https://perell.com/essay/peter-thiel/", "https://en.wikipedia.org/wiki/George_Soros", "https://www.investopedia.com/terms/r/reflexivity.asp", "https://www.businessinsider.com/peter-thiel-on-rene-girards-influence-2014-11", "https://www.amazon.com/What-Price-Fame-Tyler-Cowen/dp/0674001559", "https://en.wikipedia.org/wiki/Thomas_Schelling", "https://en.wikipedia.org/wiki/Intellectual_dark_web", "https://www.jordanbpeterson.com/", "https://nymag.com/intelligencer/2018/06/trump-made-the-gop-a-personal-cult-could-democrats-do-that.html", "https://www.fda.gov/consumers/consumer-updates/why-you-should-not-use-ivermectin-treat-or-prevent-covid-19", "https://www.google.com/search?q=demand+curve&sxsrf=ALiCzsYNmwD-iF2Z61Cn_azp4n797UPKmg%3A1663620720798&ei=cNYoY9ecMLODxc8PgPmIgA4&ved=0ahUKEwjXlOqQ3qH6AhWzQfEDHYA8AuAQ4dUDCA4&uact=5&oq=demand+curve&gs_lcp=Cgdnd3Mtd2l6EAMyBAgAEEMyBQgAEIAEMgUIABCABDIFCAAQgAQyBQgAEIAEMgUIABCABDIFCAAQgAQyBQgAEIAEMgUIABCABDIFCAAQgAQ6BAgjECc6EQguEIAEELEDEIMBEMcBENEDOgsIABCABBCxAxCDAToOCC4QsQMQgwEQxwEQrwE6DgguELEDEIMBEMcBENEDOgoIABCABBCHAhAUOgUIABCRAjoLCAAQsQMQgwEQkQI6BwguELEDEEM6DgguEIAEELEDEIMBENQCOggILhCABBCxAzoKCAAQsQMQgwEQQzoECC4QQzoICAAQsQMQgwFKBAhBGABKBAhGGABQAFjQDmC8D2gAcAF4AIABhAKIAbESkgEFMC4zLjiYAQCgAQHAAQE&sclient=gws-wiz#:~:text=Demand%20Curves%3A%20What,terms%20%E2%80%BA%20demand%2Dcurve", "https://www.mercatus.org/scholars/tyler-cowen", "https://marginalrevolution.com/marginalrevolution/2011/08/which-intellectuals-have-influence.html", "https://en.wikipedia.org/wiki/Andrew_Sullivan", "https://petersinger.info/", "https://plato.stanford.edu/entries/mill/", "https://www.britannica.com/biography/Richard-Cobden", "https://www.google.com/search?q=ree+trade+against+the+Corn+Laws&sxsrf=ALiCzsZYagw2989C5ApH5CJpUcaG7a8MsQ%3A1663620827211&ei=29YoY92zDKqP9u8PkY272As&ved=0ahUKEwidjcnD3qH6AhWqh_0HHZHGDrsQ4dUDCA4&uact=5&oq=ree+trade+against+the+Corn+Laws&gs_lcp=Cgdnd3Mtd2l6EANKBAhBGABKBAhGGABQAFgAYK4BaABwAXgAgAEAiAEAkgEAmAEAoAECoAEBwAEB&sclient=gws-wiz#:~:text=Free%20Trade%20and,trade%2Dand%2Dt...", "https://en.wikipedia.org/wiki/John_Bright", "http://www.paulgraham.com/", "https://astralcodexten.substack.com/", "https://en.wikipedia.org/wiki/Live_action_role-playing_game", "https://en.wikipedia.org/wiki/Samuel_Pepys", "https://en.wikipedia.org/wiki/James_Boswell", "https://substack.com/", "https://en.wikipedia.org/wiki/Instapundit", "https://marginalrevolution.com/", "https://en.wikipedia.org/wiki/Operation_Warp_Speed", "https://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man#:~:text=Fukuyama%20argues%20that%20history%20should,of%20government%20for%20all%20nations." ]
https://www.dwarkesh.com/p/tyler-cowen-3
Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth
[ "Dwarkesh Patel 00:00:00", "This is a fun book to read because you mentioned in there what the original sources to read are. It’s like the Harold Bloom of economics, right?", "Tyler Cowen 00:00:10", "It’s a book written for smart people.", "Dwarkesh Patel 00:00:13", "Okay, so let’s just jump into it. The book we’re talking about is Goat, who is the greatest economist of all time, and why does it matter? Alright, let’s start with Keynes. So in the section on Keynes, you quote him, I think, talking about Alfred Marshall. He says, “The master economist must possess a rare combination of gifts. He must be a mathematician, historian, statesman, philosopher. No part of man’s nature or his institutions must lie entirely outside his regard.” And you say, well, Keynes is obviously talking about himself because he was all those things, and he was arguably the only person who was all those things at the time. He must have known that.", "Dwarkesh Patel 00:00:57", "Okay, well, you know what I’m going to ask now. So what should we make of Tyler Cowen citing Keynes using this quote? A quote that also applies to Tyler Cowen?", "Tyler Cowen 00:01:09", "I don’t think it applies to me. What’s the exact list again? Am I a statesman? Did I play a role at the Treaty of Versailles or something comparable?", "Dwarkesh Patel 00:01:18", "I don’t know. We’re in Washington. I’m sure you talk to all the people who matter quite a bit.", "Tyler Cowen 00:01:21", "Well, I guess I’m more of a statesman than most economists, but I don’t come close to Keynes in the breadth of his high-level achievement in each of those areas.", "Dwarkesh Patel 00:01:32", "Okay, let’s talk about those achievements. So, chapter twelve, General Theory of Interest, Employment, and Money. Here’s a quote. “It is probable that the actual average result of investments, even during periods of progress and prosperity, have disappointed the hopes which promoted them. If human nature felt no temptation to take a chance, no satisfaction, profit apart, in constructing a factory, a railway, a mine, or a farm, there might not be much investment merely as a result of cold calculation.” Now, it’s a fascinating idea that investment is irrational, or most investment throughout history has been irrational. But when we think today about the fact that active investing exists for winners’ curse like reasons, VCs probably make, on average, less returns than the market, there’s a whole bunch of different examples you can go through, right? M&A usually doesn’t achieve the synergies it expects. Throughout history, has most investment been selfishly irrational?", "Tyler Cowen 00:02:26", "Well, Adam Smith was the first one I know to have made this point, that projectors, I think he called them, are overly optimistic. So people who do startups are overly optimistic. People who have, well, entrenched VC franchises make a lot of money, and there’s some kind of bifurcation in the distribution, right? Then there’s a lot of others who are just playing at it and maybe hoping to break even. So the rate of return on private investment, if you include small businesses, it’s highly skewed. And just a few percent of the people doing this make anything at all. So there’s a lot to what Keynes said. I don’t think he described it adequately in terms of a probability distribution, but then again, he probably didn’t have the data. But I wouldn’t reject it out of hand.", "Dwarkesh Patel 00:03:13", "Another example here is this is something your colleague Alex Tabarrok talks about a lot, is that innovators don’t internalize most of the gains they give to society. So here’s another example. The entrepreneur compared to one of his first employees, is he that much better off for taking the extra risk and working that much harder? What does this tell us? It’s a marvelous insight that we’re actually more risk-seeking than it’s selfishly good for us.", "Tyler Cowen 00:03:41", "That was Reuven Brenner’s claim in some of his books on risk. Again, I think you have to distinguish between different parts of the distribution. So it seems there’s a very large number of people who foolishly start small businesses. Maybe they overly value autonomy when they ought to just get a job with a relatively stable company. So there, part of the thesis is correct, and I doubt if there’s really big social returns to whatever those people do, even if they could make a go of it. But there’s another part of the distribution, people who are actually innovating or have realistic prospects of doing so. Where I do think those social returns are very high. Now, that 2% figure that’s cited a lot, I don’t think it’s really based in much real. It’s maybe not a crazy seat of the pants estimate, but people think like, oh, we know it’s 2% and we really don’t. So look at Picasso, right? He helped generate cubism with Braque and some other artists. How good is our estimate of Picasso’s income compared to the spin-offs from Picasso? We just don’t really know. Right? We don’t know. It’s 2%. It could be 1%, it could be 6%.", "Dwarkesh Patel 00:04:51", "How different do you think it is in art versus, I don’t know, entrepreneurship versus different kinds of entrepreneurship? There are different industries there as well, right?", "Tyler Cowen 00:04:59", "I’m not sure it’s that different. So say if some people start blogging, a lot of people copy them, right? Well, some people start painting in a particular style, a lot of people copy them. I’m not saying the numbers are the same, but they don’t sound like issues that in principle are so different in 2%.", "Dwarkesh Patel 00:05:16", "Overestimate or underestimate. It might be wrong, but in which way is it wrong?", "Tyler Cowen 00:05:19", "My seat of the pants estimate would be two to 5%, so I think it’s pretty close. But again, that’s not based on anything firm.", "Dwarkesh Patel 00:05:26", "Here’s another quote from Keynes. “Investment based on genuine long-term expectation is so difficult as to be scarcely practicable. He who attempts it must surely lead much more laborious days and run greater risks than he who tries to guess better than the crowd how the crowd will behave.” So one way to look at this is like, oh, he just doesn’t understand the efficient market hypothesis. It’s like before random walks or something. But there are things you can see in the market today. Where are the prospects for future dividends so much higher after Covid than they were immediately after the crash? How much of market behavior can be explained by these sorts of claims from Keynes?", "Tyler Cowen 00:06:08", "I think Keynes had the view that for his time, you could be a short-run speculator and in fact beat the markets. And he believed that he did so, and at least he did for some periods of his life. That may have been luck, or maybe he did have special insight. It probably wasn’t true in general, though we don’t really know. Did efficient markets hold during Britain at that time? Maybe there just were profit opportunities for smarter than average people. So that’s a view. I’m inclined not to believe it. But again, I don’t think it’s absurd. Keynes is saying, for people who want money, this is biased toward the short term. You can get your profits and get out. And that’s damaging long-term investment, which in fact, he wanted to socialize. So he’s being led to a very bad place by the argument. But again, we shouldn’t dismiss it out of hand.", "Dwarkesh Patel 00:07:01", "Why is it not easy to retrospectively study how efficient markets were back then, in the same way we can study it now? You look at the price-to-earnings ratios, and then what were the dividends afterwards over the coming decades for those companies based on their stock price or something?", "Tyler Cowen 00:07:17", "I don’t know how many publicly traded firms there were in Britain at that time. I don’t know how good the data are. Things like bid, ask, spread, at what price you actually executed trades can really matter for testing efficient markets hypothesis, so probably we can’t tell, even though there must be share price data of some sort. At what frequency? Well, is it once a day? Is it once a week? We don’t have the sort of data we have now where you can just test anything you want.", "Dwarkesh Patel 00:07:47", "He also made an interesting point. Not only is it not profitable, but even if you succeed, society will look at the contrarian in a very negative light. You will be doubly punished for being a contrarian. But that doesn’t seem to be the case. Right? You have somebody like Warren Buffett or Charlie Munger. People who do beat the market are actually pretty revered. They’re not punished in public opinion.", "Tyler Cowen 00:08:08", "They pursued mostly long-term strategies.", "Dwarkesh Patel 00:08:10", "Right.", "Tyler Cowen 00:08:10", "But again, trying to make sense of Keynes, if you think about long-term investing, and I don’t think he meant Buffett-style investing. I think he meant building factories, trying to figure out what people would want to buy 25 years from that point in time. That probably was much harder than today. You had way less access to data. Your ability to build an international supply chain was much weaker. Geopolitical turmoil at various points in time was much higher. So again, it’s not a crazy view. I think there’s a lot in Keynes that’s very much of his time, that he presents out of a kind of overconfidence as being general. And it’s not general. It may not even be true, but there were some reasons why you could believe it.", "Dwarkesh Patel 00:08:55", "Another quote from Keynes, I guess I won’t read the whole quote in full, but basically says, over time, as investments, markets get more mature, more and more of equities are held basically by passive investors, people who don’t have a direct hand in the involvement of the enterprise, and the share of the market that’s passive investment now is much bigger. Should we be worried about this?", "Tyler Cowen 00:09:16", "As long as at the margin people can do things, I’m not very worried about it. So there are two different kinds of worries. One is that no one monitors the value of companies. It seems to me those incentives aren’t weaker. There’s more research than ever before. There’s maybe a problem. Not enough companies are publicly held. But you can always, if you know something the rest of the market doesn’t, buy or sell-short and do better. The other worry is those passive investors have economies of scale, and they’ll end up colluding with each other. You’ll have, say, like three to five mutual funds, private equity firms owning a big chunk of the market portfolio. And in essence, directly or indirectly, they’ll tell those firms not to compete. It’s a weird form of collusion. They don’t issue explicit instructions like, say the same few mutual funds own Coke and Pepsi. Should Coke and Pepsi compete, or should they collude? Well, they might just pick lazier managers who in some way give you implicit collusion.", "Dwarkesh Patel 00:10:14", "Maybe this is another example of the innovators being unable to internalize their gains. As active investors who are providing this information to the market, they don’t make out that much better than the passive investors, but they’re actually providing a valuable service. But the benefits are diffused throughout society.", "Tyler Cowen 00:10:29", "I think overconfidence helps us on that front. So there’s, quote-unquote, too much trading from a private point of view. But from a social point of view, maybe you can only have too much trading or too little trading, and you might rather have too much trading.", "Dwarkesh Patel 00:10:42", "Explain that. Why can it only be too much or too little?", "Tyler Cowen 00:10:46", "Well, let’s say the relevant choice variable is investor temperament. So, yes, you’d prefer it if everyone had the temperament just to do what was socially optimal. But if temperament is some inclination in you, and you can just be overconfident or not confident enough, and overconfidence gives you too much trading, that might be the best we can do. Again, fine-tuning would be best of all, but I’ve never seen humans where you could just fine-tune all their emotions to the point where they ought to be.", "Dwarkesh Patel 00:11:15", "Yeah. Okay, so we can ask the question, how far above optimal are we? Or if we are above optimal? In the chapter, Keynes says that over time, as markets get more mature, they become more speculative. And the example he gives is like, the New York market seems more speculative to him than the London market at that time. But today, finance is 8% of GDP. Is that what we should expect it to be to efficiently allocate capital? Is there some reason we can just look at that number and say that that’s too big?", "Tyler Cowen 00:11:43", "I think the relevant number for the financial sector is what percentage it is of wealth, not GDP. So you’re managing wealth, and the financial sector has been a pretty constant 2% of wealth for a few decades in the United States, with bumps. Obviously, 2008 matters, but it’s more or less 2%, and that makes it sound a lot less sinister. It’s not actually growing at the expense of something and eating up the economy. So you would prefer it’s less than 2%? Right. But 2% does not sound outrageously high to me. And if the ratio of wealth to GDP grows over time, which it tends to do when you have durable capital and no major wars. The financial sector will grow relative to GDP. But again, that’s not sinister. Think of it in terms of wealth.", "Dwarkesh Patel 00:12:29", "I see. So one way to think about it is like the management cost as a fraction of the assets under management or something. And that’s right. In that case, 2% is not that bad. Yeah. Okay, interesting. I want to go back to the risk aversion thing again, because I don’t know how to think about this. So his whole thing is these animal spirits, they guide us to make all these bets and engage in all this activity. In some sense, he’s saying, like, not only are we not risk-neutral, but we’re more risk-seeking than is rational. Whereas the way you’d conventionally think about it is that humans are risk-averse, right. They prefer to take less risk than is rational in some sense. How do we square this?", "Tyler Cowen 00:13:09", "Well, here, Milton Friedman, another goat contender, comes into the picture. So his famous piece with Savage makes the point that risk aversion is essentially context dependent. So he was a behavioral economist before we knew of such things. So the same people typically will buy insurance and gamble. Gambling you can interpret quite broadly, and that’s the right way to think about it. So just flat out risk aversion or risk-loving behavior, it doesn’t really exist. Almost everyone is context-dependent now. Why you choose the contexts you do, maybe it’s some kind of exercise in mood management. So you insure your house, so you can sleep well at night, you buy fire insurance, but then you get a little bored. And to stimulate yourself, you’re betting on these NBA games. And yes, that’s foolish, but it keeps you busy and it helps you follow analytics, and you read about the games online, and maybe that’s efficient mood management, and that’s the way to think about risk behavior. I don’t bet, by the way. I mean, you could say I bet with my career, but I don’t bet on things.", "Dwarkesh Patel 00:14:13", "What’s your version of the lottery ticket? What is the thing where you, just for the entertainment value or the distraction value, take more risk than would seem rational?", "Tyler Cowen 00:14:23", "Well, writing the book titled “GPT-4,” which is not with any known publisher. It’s just online; it’s free, published within GPT-4. It took me quite a while to write the book. I’m not sure there’s a huge downside, but it’s risky in the sense that it’s not what anyone else was doing. So that was a kind of risk. I invested a lot of my writing time in something weird, and I’ve done things like that pretty frequently. So that keeps me, you could say, excited, or starting MRU, the online education videos in economics no pecuniary return to me at all. Indirectly, it costs me a lot of money. That’s a sort of risk. I feel it’s paid off for me in a big way. But on one hand, you can say, “Well, Tyler, what do you actually have from that?” And the answer is nothing.", "Dwarkesh Patel 00:15:11", "Yeah, well, this actually raises the question I was going to ask about these GO contenders in general, and how you’re judging them, where you’re looking at their work as a whole. Given that, I don’t know, some of these risks pay off that these intellectuals take, some of them don’t pay off. Should we just be looking at their top contributions and just disregard everything else? For Hayek, I think one of the points you have against him is that his top three articles are amazing. But after that, there’s a drop-off. The top risk you take, are they the only ones that matter? Why are we looking at the other stuff?", "Tyler Cowen 00:15:40", "I don’t think they’re the only ones that matter, but I’ll weight them pretty heavily. But your failures do reflect usually in how you think or what you know about the world. So Hayek’s failures, for instance, his inability to come up with a normative standard in “The Constitution of Liberty,” show in some ways he just wasn’t rigorous enough. He was content with the kind of Germanic, put a lot of complex ideas out there and hope they’re profound. And you see that even in his best work. Now, that is profound. But it’s not as if the failures and the best work for all these people are unrelated. And same with Keynes. Like Keynes, more or less changed his mind every year. That’s a strength, but it’s also a weakness. And by considering Keynes’s really good works and bad works, like his defenses of tariffs, you see that. And the best work, he also moved on from in some way. If you read “How to Pay for the War” in 1940, if you didn’t know better, you would think it’s someone criticizing the General Theory.", "Dwarkesh Patel 00:16:43", "Does quantity have a quality all of its own? When you think of great intellectuals, were many of these people have like volumes and volumes of work. Was that necessary for them to get the greatest sets? Or is the rest of it just a distraction from the things that really stand the test of time?", "Tyler Cowen 00:16:56", "For the best people, it’s necessary. So John Stuart Mill wrote an enormous amount. Most of it’s quite interesting, but his ability to see things from multiple perspectives, I think, was in part stemming from the fact that he wrote a lot about many different topics, like French history, ancient Greece. He had real depth and breadth.", "Dwarkesh Patel 00:17:16", "If Keynes is alive today, what are the odds that he’s in a polycule in Berkeley, writing the best-written Less Wrong post you’ve ever seen?", "Tyler Cowen 00:17:24", "I’m not sure what the counterfactual means. So Keynes is so British. Maybe he’s an effective altruist at Cambridge. And given how he seems to have run his sex life, I don’t think he needed a polycule. Like a polycule is almost a Williamsonian device to economize on transactions costs. But Keynes, according to his own notes, seems to have done things on a very casual basis.", "Dwarkesh Patel 00:17:50", "He had a spreadsheet, right, of his special partners?", "Tyler Cowen 00:17:52", "And from context, it appears he met these people very casually and didn’t need to be embedded in, oh, we’re the five people who get together regularly, so that’s not a hypothetical. We think we saw what he did, and I think he’d be at Cambridge, right? That’s where he was. Why should he not today, be at Cambridge?", "Dwarkesh Patel 00:18:14", "How did a gay intellectual get that amount of influence in Britain of that time? When you think of somebody like Alan Turing, helps Britain win World War II and is castrated because of one illicit encounter that is caught, was it just not public? How did he get away with it?", "Tyler Cowen 00:18:29", "Basically, I don’t think it was a secret about Keynes. He had interacted with enough people that I think it was broadly known. He was politically very powerful. He was astute as someone managing his career. He was one of the most effective people you could say, of all time, not just amongst economists. And I’ve never seen evidence that Keynes was in any kind of danger. Turing also may have intersected with national security concerns in a different way. I’m not sure we know the Alan Turing story and why it went as badly as it did, but there was in the past, very selectively, and I do mean very selectively, more tolerance of deviance than people today sometimes realize.", "Dwarkesh Patel 00:19:13", "Oh, interesting.", "Tyler Cowen 00:19:14", "And Keynes’s benefited from that. But again, I would stress the word selectively.", "Dwarkesh Patel 00:19:17", "Does it say more? What determines who is selected for this tolerance?", "Tyler Cowen 00:19:21", "I don’t feel I understand that very well. But there’s plenty say in Europe and Britain of the early 20th century where quote-unquote outrageous things were done. And it’s hard to find evidence that people were punished for it. Now, what accounts for the difference between them and the people who were punished? I would like to see a very good book on it.", "Dwarkesh Patel 00:19:42", "Yeah, I guess it’s similar to our time. Right. We have certain taboos and you can get away with.", "Tyler Cowen 00:19:47", "Yeah, they say whatever on Twitter and.", "Dwarkesh Patel 00:19:49", "Other people get cancelled, actually. How have you gotten away with it? I feel like you’ve never been in, at least as far as I know. I haven’t heard you being in the part of any single controversy. But you have some opinions out there.", "Tyler Cowen 00:19:59", "I feel people have been very nice to me.", "Dwarkesh Patel 00:20:01", "Yeah. What’d you do? How did you become the Keynes of our time if we’re comparing after all? Right.", "Tyler Cowen 00:20:10", "I think just being good-natured helps, and helping a lot of people helps. And Turing, I’m a huge fan of, wrote a paper on him with Michelle Dawson, but it’s not obvious that he was a very good diplomat, and it seems he very likely was a pretty terrible diplomat, and that might be feeding into this difference.", "Dwarkesh Patel 00:20:29", "How do you think about the long-term value and the long-term impact of intellectuals you disagree with? So, do you think over the course of history, basically, the improvements they make to the discourse and the additional things they give us a chance to think about, that washes out their object level, the things they were object level wrong about?", "Tyler Cowen 00:20:48", "Well, it’s worked that way so far. Right. So we’ve had economic growth, obviously with interruptions, but so much has fed into the stream. And you have to be pretty happy with today’s world compared with, say, 1880. The future may or may not bring us the same, but if the future brings us continuing economic growth, then I’m going to say exactly that. Oh, be happy. They fed into the stream. They may have been wrong, but things really worked out. But if the future brings us a shrinking population asymptotically approaching a very low level, and greater poverty and more war, and you’ve got to wonder, well, who is responsible for that, right?", "Dwarkesh Patel 00:21:26", "Who would be responsible for that?", "Tyler Cowen 00:21:29", "We don’t know, but I think secular thinkers will fall in relative status if that’s the outcome. And that’s most prominent intellectuals today, myself included.", "Dwarkesh Patel 00:21:39", "Yeah. Who would rise in status as a result?", "Tyler Cowen 00:21:42", "Well, there’s a number of people complaining strenuously about fertility declines. If there’s more war, probably the hawks will rise in status whether or not they should, and alternative scenarios that the pacifists rise in status. But I basically never see the pacifists rising in status for any more than brief moments like after the Vietnam War. Maybe they did after World War I. Yes, but again, that didn’t last because World War II swept all that away.", "Dwarkesh Patel 00:22:10", "Right.", "Tyler Cowen 00:22:11", "So the pacifists seemed to lose long-term status no matter what. And that means the Hawks would gain in status. And those worried about fertility and whatever technology drives the new wars, if that is what happens. Let’s say it’s drones. It’s possible, right? People who warned against drones, which is not currently that big a thing. There are quite a few such people, but there’s no one out there known for worrying about drones the way, say, Eliezer is known for worrying about AI. Now drones, in a way, are AI, but it’s different.", "Dwarkesh Patel 00:22:44", "Yeah. Although Nat Friedman, Stuart Armstrong, other people have talked about, we’re not that far away from drones. I guess you have millions of views. Whoever made that would rise, I think. Stuart Armstrong. No, sorry, not Stuart. Anyways.", "Tyler Cowen 00:22:57", "Yeah, but those people could end up as much more important than they are now.", "Dwarkesh Patel 00:23:00", "Yeah. Okay. Let’s talk about Hayek. Sure. So before we get into his actual views, I think his career is a tremendous white pill in the sense that he writes The Road to Serfdom in 1944 when Nazi Germany and Soviet Union are both prominent players. And honestly, the way things shaked out, he would be pretty pleased that a lot of the biggest collectivisms of the day have been wiped out. So it is a tremendous white bill. You can have a career like that.", "Tyler Cowen 00:23:30", "He was not as right as he thought at the time, but he ended up being too grumpy in his later years.", "Dwarkesh Patel 00:23:37", "Oh really?", "Tyler Cowen 00:23:38", "He thought, well, collectivism is still going to engulf the world. And I think he became a grumpy old man. And maybe it’s one thing to be a grumpy old man in 2024, but to be a grumpy old man in the 80s didn’t seem justified.", "Dwarkesh Patel 00:23:52", "What was the cause? What specifically did he see? That he.", "Tyler Cowen 00:23:55", "He thought there were atavistic instincts in the human spirit which were biologically built in, that led us to be collectivists and too envious and not appreciative of how impersonal orders worked and that this would cause the west to turn into something quite crummy. I wouldn’t say he’s been proven wrong, but a lot of the west has had a pretty good run since then and there’s not major evidence that he’s correct. The bad events we’ve seen, like some war coming back, something weird happening in our politics. I’m not sure how to describe it. I’m not sure they fit the Hayek model. Of sort of simply the accretion of more socialism.", "Dwarkesh Patel 00:24:37", "But in terms of the basic psychological urges towards envy and resentment, doesn’t the rise of wokeness provide evidence for his view?", "Tyler Cowen 00:24:44", "But now wokeness, I would say, is peaked and is falling. That’s a big debate. I don’t see wokeness as our biggest problem. I see excessive bureaucracy, sclerotic institutions, kludocracy as bigger problems. They’re not unrelated to wokeness, to be clear, but I think they’re more fundamental and harder to fix.", "Dwarkesh Patel 00:25:02", "Let’s talk about Hayek’s arguments. So obviously he has a famous argument about decentralization. But when we look at companies like Amazon, Uber, these other big tech companies, they actually do a pretty good job of central planning, right? There’s like a sea of logistics and drivers and trade-offs that they have to square. Do they provide evidence that central planning can work?", "Tyler Cowen 00:25:25", "Well, I’m not a Coasian, so Coase in his famous 1937 article said the firm is planning. And he contrasted that to the market, right? I think the firm is the market. The firm is always making contracts in the market, is subject to market checks and balances. To me, it’s not an island of central planning in the broader froth of the market. So I’m just not Coasian. So for people who are Coasian, this is an embarrassing question for them, but I’ll just say Amazon being great, is the market working? And they’re not centrally planning. Even the Soviet Union, it was very bad, but it didn’t end up being central planning. It started off that way for a few years. So, I think people misinterpret large business firms in many ways on both the left and the right.", "Dwarkesh Patel 00:26:07", "Wait. But under this argument, it still adds to the credence of the people who argue that basically we need the government to control. Because if it is the case that the Soviet Union is still not central planning, people would say, well yeah, but that’s kind of what I want in terms of there’s still kind of checks in terms of import, exports, of the market test. It still applied to the government in that sense. What’s wrong with that argument that basically you can treat the government as that kind of firm?", "Tyler Cowen 00:26:32", "I’m not sure I followed your question. I would say this. I view the later Soviet Union as being highly decentralized managers optimizing their own rents and setting prices too low to take bribes.", "Dwarkesh Patel 00:26:45", "Allah.", "Tyler Cowen 00:26:46", "Paul Craig Roberts, what he wrote in that’s a very bad decentralized system. And it was sort of backed up by something highly central communist party in the USSR. But it’s not like the early attempts at true central planning in the Soviet Union, after the revolution, which did totally fail and were abandoned pretty quickly, even by Lenin.", "Dwarkesh Patel 00:27:09", "Would you count the ’50s period in the Soviet Union as more centrally planned or more decentralized by that point?", "Tyler Cowen 00:27:14", "Decentralized. You have central plans for a number of things, obviously, weaponry, steel production. You have targets, but even that tends to collapse into decentralized action just with bad incentives.", "Dwarkesh Patel 00:27:26", "So your explanation for why did the Soviet Union have high growth in the, is it more catch up? Is it more that they weren’t communists at the time? How would you explain it?", "Tyler Cowen 00:27:35", "A lot of the Soviet high growth was rebuilding after the war, which central planning can do relatively well, right? You see government rebuilding cities, say, in Germany, that works pretty well. But most of, and this is even before World War II, just urbanization. It shouldn’t be underrated today, given we’ve observed China. But so much of Chinese growth was driven by urbanization, so much of Soviet growth. You take someone working on a farm producing almost nothing, put them in a city, even under a bad system, they’re going to be a lot more productive. And that drove so much of Soviet growth before, after the war, but that at some point more or less ends as it has. Well, it hasn’t quite ended with China, but it’s certainly slowed down and people don’t pay enough attention to that. I don’t know why. It now seems pretty obvious, but going.", "Dwarkesh Patel 00:28:23", "Back to the point about firms. So I guess the point I was trying to make is, I don’t understand why the argument you make that, well, these firms are still within the market in the sense that they have to pass these market tests. Why that couldn’t also apply to government-directed production, because then people argue sometimes it does. Right.", "Tyler Cowen 00:28:41", "Government runs a bunch of enterprises. They may have monopoly positions, but many are open to the market. In Singapore, government hospitals compete with private hospitals. Government hospitals seem to be fine. I know they get some means of support, but they’re not all terrible.", "Dwarkesh Patel 00:28:57", "But I guess as a general principle, you’d be against more government-directed production, right?", "Tyler Cowen 00:29:02", "Well, it depends on the context. So if it’s, say, the military, probably we ought to be building a lot more of some particular things, and it will be done through Boeing, Lockheed, and so on. But the government’s directing it, paying for it in some way, planning it, and we need to do that. We’ve at times done that well in the past. So people overrate the distinction between government and market, I think, especially libertarians. But that said, there’s an awful lot of government bureaucracy that’s terrible, doesn’t have a big market check. But very often, governments act through markets and have to contract or hire consultants or hire outside parties. And it’s more like a market than you think.", "Dwarkesh Patel 00:29:43", "I want to ask you about another part of Hayek. So, he has an argument about how it’s really hard to aggregate information toward a central planner. But then, more recently, there have been results in computer science that just finding the general equilibrium is computationally intractable. Which raises the question, well, the market is somehow solving this problem, right? Separate from the problem of getting the information, making use of the information to allocate scarce resources. How is that computationally a process that’s possible? I’m sure you’re aware, like the linear optimization, non-convex constraints. How does the market solve this problem?", "Tyler Cowen 00:30:20", "Well, the market’s not solving for a general equilibrium. It’s just solving for something that gets us into the next day. And that’s a big part of the triumph, just living to fight another day, wealth not going down, not everyone quitting. And if you can do that, things will get better. And that’s what we’re pretty good at doing, is just building a sustainable structure. And a lot of it isn’t sustainable, like the fools who start these new small businesses. But they do pretty quickly disappear, and that’s part of the market as well. So, if you view the whole thing in terms of computing a general equilibrium, I think one of Hayek’s great insights is that’s just the wrong way to think about the whole problem. So, lack of computational ability to do that doesn’t worry me for either the market or planning, because to the extent planning does work, it doesn’t work by succeeding at that. Like Singaporean public hospitals don’t work because they solve some computational problem. They seem to work because the people running them care about doing a good job. And enough of the workers go along with that.", "Dwarkesh Patel 00:31:21", "Yeah. So, related to that, I think in the meaning of competition, he makes the point that the most interesting part of markets is when they go from one equilibrium to another, because that’s where they’re trying to figure out what to produce and how to produce it better and so on, and not the equilibriums themselves. And it seemed related to the Peter Thiel point in zero to one. That monopoly is when you have interesting things happen because when there’s just competitive equilibrium, there’s no profits to invest in R&D or to do cool new things. Do those seem like related points? Am I reading? Absolutely.", "Tyler Cowen 00:31:50", "And Hayek’s essay competition as a discovery process or procedure makes that point very explicitly. And that’s one of his handful of greatest essays, one of the greatest essays in all of economics.", "Dwarkesh Patel 00:32:01", "Is there a contradiction in Hayek in the sense that the decentralization he’s calling for results in specialists having to use the very scientism and statistical aggregates?", "Tyler Cowen 00:32:13", "Of course, that Hayek underrates scientism. Scientism is great, it can be abused, but we all rely on scientism. If you have an mRNA vaccine in your arm, well, how do you feel about scientism and so on?", "Dwarkesh Patel 00:32:26", "How much should we worry about this opening up the whole system to fragilities, if there’s like no one mind that understands large parts of how everything fits together? People talk about this in the context of if there’s a war in China, and the producers didn’t think about that possibility when they put valuable manufacturing in Taiwan and stuff like that.", "Tyler Cowen 00:32:43", "No one mind understanding things is inevitable under all systems. This gets into some of the alignment debates. If you had one mind that understood everything or could control everything, you have to worry a great deal about the corruptibility of that mind. So legibility, transparency, are not per se good. You want enough of them in the right places, but you need some kind of balance. So I think supply chains are no longer an underanalyzed problem, but until Covid they were, and they’re a big deal. And the Hayekian argument doesn’t always work, because the signal you have is of the current price. And that’s not telling you how high are the inframarginal values if you get, say, cut off from being able to buy vaccines from India, because you’re at the bottom of the queue. So that was a problem. It was the market failing, because the price doesn’t tell you inframarginal values. And when you move from some ability to buy the output to zero, those inframarginal values really matter.", "Dwarkesh Patel 00:33:43", "What would Hayek make of AI agents as they get more powerful? You have some market between the AI agents. There’s some sort of decentralized order as a result. What insights would you have about that?", "Tyler Cowen 00:33:55", "Well, a lot of Hayekians wrote about these issues, including at George Mason in the 1980s. And I think some of those people even talked to Hayek about this. And my recollection, which is imperfect, is that he found all this very interesting and in the spirit of his work. And Don Lavoie was leading this research program; he died prematurely of cancer. Bill Tulloh was also involved. And some of this has been written up, and it is very Hayekian and George Mason actually was a pioneer in this area.", "Dwarkesh Patel 00:34:25", "What do you make of AI agents? The market between them and the sort of infrastructure and order that you need to facilitate that.", "Tyler Cowen 00:34:31", "They’re going to replicate markets on their own, has been my prediction, and I think they’re going to evolve their own currencies. Maybe at first, they’ll use Bitcoin, but there’ll be an entire property rights system based, at least at first, on what we now call NFTs. I’m not sure that will end up being the right name for them, but if you want property rights in a so-called imaginary world, that’s where you would start with Bitcoin and NFTs. So I don’t know what percent of GDP this will be at first. It will be quite small, but it will grow over time. And it’s going to show Hayek to have been right about how these decentralized systems evolve.", "Dwarkesh Patel 00:35:06", "Do you anticipate that it’ll be sort of a completely different sphere and that there’s like the AI agents’ economy and there’s the human economy, and obviously they have links between them, but it’s not intermixed. Like they’re not on the same social media or the same task rabbit or whatever. It’s a very separate infrastructure that’s needed for the AI agents to talk to themselves versus talk to humans.", "Tyler Cowen 00:35:26", "I don’t see why we would enforce segregation now. You might have some segregated outlets like maybe X Twitter. Well, we’ll keep off the bots, let’s say it can even manage to do that. But if I want to hire a proofreader, I’m going to deal with the AI sector and pay them in Bitcoin. And I’ll just say to my personal AI assistant, “Hey, go out and hire an AI and pay them with whatever,” and then just not think about it anymore.", "Dwarkesh Patel 00:35:52", "And it will happen, maybe because there’s much higher transaction costs with dealing with humans and interacting with the human world, whereas they can just send a bunch of vectors to each other. It’s much faster for them to just have a separate dedicated infrastructure for that.", "Tyler Cowen 00:36:05", "But transaction costs for dealing with humans will fall because you’ll deal with their quote-unquote assistants, right? So you’ll only deal with the difficult human when you need to. And people who are very effective will segregate their tasks in a way that reflects their comparative advantage. And people who are not effective will be very poor at that, and that will lead to some kind of bifurcation of personal productivity. How well will you know what to delegate to your AI? I’ll predict you’ll be very good at it. You may not have figured it out yet, but say you’re like an A+ on it and other people are D. That’s a big comparative advantage for you.", "Dwarkesh Patel 00:36:44", "We’re talking, I guess, about GBD five level models. When you think in your mind about, okay, this is GBD five. What happens with GBD six, GBD seven. Do you see it? Do you still think in the frame of having a bunch of RAs, or does it seem like a different sort of thing at some point?", "Tyler Cowen 00:36:59", "I’m not sure what those numbers going up mean, what a GPT seven would look like, or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to a GPT, say 5.5, I’m not sure you can just turn up the dial on smarts and have it integrate general relativity and quantum mechanics.", "Dwarkesh Patel 00:37:26", "Why not?", "Tyler Cowen 00:37:27", "I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like, maybe the universe isn’t that legible, and if it’s not that legible, GPT Eleven doesn’t really make sense as a creature or whatever.", "Dwarkesh Patel 00:37:44", "Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things. Imagine the sort of decentralized order that could result from the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we were anticipating.", "Tyler Cowen 00:38:04", "Well, I think it will be highly productive. What tacit knowledge means with AIs, I don’t think we understand yet. Is it by definition all non-passive? Or does the fact that how GPT four works is not legible to us or even its creators so much? Does that mean it’s possessing tacit knowledge, or is it not knowledge? None of those categories are well thought out, in my opinion. So we need to restructure our whole discourse about tacit knowledge in some new, different way. But I agree, these networks of AIs, even before, like GPT eleven, they’re going to be super productive, but they’re still going to face bottlenecks. Right. And I don’t know how good they’ll be at overcoming the behavioral bottlenecks of actual human beings, the bottlenecks of the law and regulation. And we’re going to have more regulation as we have more AIs.", "Dwarkesh Patel 00:38:53", "Right. Yeah. When you say there’ll be uncertainties, I think you made this argument when you were responding to Alex Epstein on fossil future, where you said uncertainties also extend out into the domain where there’s a bad outcome or much bigger outcome than you’re anticipating.", "Tyler Cowen 00:39:04", "That’s right.", "Dwarkesh Patel 00:39:05", "So can we apply the same argument to AI? The fact that there is uncertainty is also a reason for worry.", "Tyler Cowen 00:39:11", "Well, it’s always a reason for worry, but there’s uncertainty about a lot of things, and AI will help us with those other uncertainties. So on net, do you think more intelligence is likely to be good or bad, including against x risk? And I think it’s more likely to be good. So if it were the only risk, I’d be more worried about it than if there’s a whole multitude of risks. But clearly, there’s a whole multitude of risks. But since people grew up in pretty stable times, they tend not to see that in emotionally vivid terms. And then this one monster comes along, and they’re all terrified.", "Dwarkesh Patel 00:39:42", "What would Hayek think of prediction markets?", "Tyler Cowen 00:39:45", "Well, there were prediction markets in Hayek’s time. I don’t know that he wrote about them, but I strongly suspect he would see them as markets that through prices, communicate information. But even around the time of the civil war, there were so-called bucket shops in the US and New York where you would bet on things. They were betting markets with cash settlement, probably never called prediction markets, but they were exactly that. Later on, they were banned. But it’s an old standing thing. There were betting markets on lives in 17th century Britain, different attempts to outlaw them, which I think basically ended up succeeding. But under the table, I’m sure it still went on to some extent.", "Dwarkesh Patel 00:40:22", "Yeah. The reason it’s interesting to think about this is because his whole argument about the price system is that you can have a single dial that aggregates so much information, but it’s precisely for this, and for that reason, it’s so useful to somebody who’s trying to know based on that information. But it’s precisely for this reason that it’s so aggregated that it’s hard to learn about any one particular input to that dial.", "Tyler Cowen 00:40:42", "But I would stress it’s not a single dial. And whether Hayek thought it was a single dial, I think you can argue that either way. So people in markets, they also observe quantities, they observe reaction speeds. There’s a lot of dimensions to prices other than just, oh, this newspaper costs $4, the terms on which it’s advertised. So markets work so well because people are solving this complex multidimensional problem and the price really is not a sufficient statistic the way it is in an Arrow-Debreu. And I think at times Hayek understood that and at other times he writes as if he doesn’t understand it. But it’s an important point.", "Dwarkesh Patel 00:41:18", "Somewhat related question what does it tell us about the difficulty of preserving good institutions, good people, that the median age of a corporation is 18 years and they don’t get better over time, right? Decade after decade, what corporation? There’s a racial corporations that continue improving in that way?", "Tyler Cowen 00:41:34", "Well, I think some firms keep improving for a long time. So there are Japanese firms that date back to the 17th century. They must be better today or even in 1970 than they were way back when. Like the leading four or five Danish firms, none of them are younger than the 1920s. So Maersk, the firm that came up with Hosempic, the pharmaceutical firm, they must be much better than they were back then, right? They have to be. So how that is possible to me is a puzzle. But I think in plenty of cases it’s true.", "Dwarkesh Patel 00:42:09", "I can really say that the best firms in the world aren’t ones that have been improving over time. If you look at the biggest companies by market cap, it’s not like this is what it takes to get there is hundreds of years of continual refinement. What does that tell us about the world?", "Tyler Cowen 00:42:26", "Or just hundreds of years? But again, don’t be overly biased by the US experience and the tech sector. There’s around the world plenty of firms that at least seem to get better as they get older. Certainly, their market cap goes up. Some of that might just be a population effect. Maybe their productivity per some unit is in some ways going down. But that’s a very common case. And why the US is such an outlier is an interesting question, right? Israel clearly is an outlier in a sense. They only have pretty young firms, right? And they’ve done very well in terms of growth.", "Dwarkesh Patel 00:43:01", "Can it be explained by the fact that in these other countries it’s actually just harder to start a new company? Not necessarily that the older companies are actually getting better.", "Tyler Cowen 00:43:08", "Possibly, but it does seem the older companies are often getting better, right? Like in, you know, take China is pretty much entirely new firms because of communism. Japan, in particular, seems to have a lot of very old firms. I don’t know if they’re getting better, but I don’t think you can write that off as a possibility.", "Dwarkesh Patel 00:43:28", "This is Hayek in competition as a discovery process. And it seems like he predicted NIMBYism. So he says in a democratic society it would be completely impossible, using commands that could not be regarded as just, to bring about those changes that are undoubtedly necessary but the necessity of which could not be strictly demonstrated in a particular case. So it seems like he’s kind of talking about what we today call NIMBYism.", "Tyler Cowen 00:43:52", "Sure. And there’s plenty of NIMBYism in earlier times. You look at the 19th-century debates over restructuring Paris Hausmann and putting in the broader boulevards and the like that met with very strong opposition. It’s a kind of miracle that it happened.", "Dwarkesh Patel 00:44:05", "Yeah. Is this a thing that’s inherent to the democratic system? Recently, I interviewed Dominic Cummings and obviously planning is a big issue in the UK. It seems like every democratic country has.", "Tyler Cowen 00:44:16", "This kind of problem and most autocratic countries have it too. Now, China is an exception. They will probably slide into some kind of NIMBYism even if they stay autocratic. Just people resist change. Interest groups always matter. Public opinion. Ala David Hume always matters. And it’s easy to not do anything on a given day. Right. And that just keeps on sliding into the.", "Dwarkesh Patel 00:44:41", "Guess.", "Tyler Cowen 00:44:42", "India has had a lot of NIMBYism. It’s fallen away greatly under Modi and especially what the state governments have done. But it can be very hard to build things in India.", "Dwarkesh Patel 00:44:52", "Still, although it is a democracy, I guess it’s a China example. We’ll see what happens there.", "Tyler Cowen 00:44:57", "That’s right. But it would be very surprising because the Chinese government is highly responsive to public opinion on most, but not all issues. So why wouldn’t they become more NIMBY? Especially with a shrinking population, they’re way overbuilt.", "Dwarkesh Patel 00:45:12", "Right.", "Tyler Cowen 00:45:12", "So the pressure to build will be weak and in cases where they ought to build, I would think quite soon they won’t.", "Dwarkesh Patel 00:45:19", "How much of economics is a study of the systems that human beings use to allocate scarce resources and how much is just something you’d expect to be true of aliens, AIs? It’s interesting when you read the history of economic thought, how often they make mention of human nature specifically like Keynes is talking about. People have high discount rates. Right? Yeah. But what are your thoughts here?", "Tyler Cowen 00:45:44", "My former colleague Gordon Tullock wrote a very interesting book on the economics of ant societies and animal societies and very often they obey human-like principles, or more accurately, humans obey non-human, animal-like principles. So I suspect it’s fairly universal and depends less on quote unquote human nature than we sometimes like to suggest. Maybe that is a bit of a knock on some behavioral economics, the logic of the system. Armin Alchian wrote on this. Gary Becker wrote on this. There were some debates on this in the early 1960s, and that the automatic principles of profit and loss and selection at a firmwide level really matter. And it’s responsible for a lot of economics being true. I think that’s correct.", "Dwarkesh Patel 00:46:30", "Actually, that raises an interesting question of within firms, the sort of input they’re getting from the outside world or ground truth data, is profit loss, bankruptcy. It’s like very condensed information. And from this they had to make the determination of who to fire, who to hire, who to promote, what project to pursue. How do we make sense of how firms disaggregate this very condensed information?", "Tyler Cowen 00:46:54", "I would like to see a very good estimate of how much of productivity gains is just from selection and how much is from, well, smart humans figuring out better ways of doing things. And there are some related pieces on this in the international trade literature. So when you have freer trade, a shockingly high percentage of the productivity gains come from your worst firms being bankrupted by the free trade. And Alex Tabarrok has some posts on this. I don’t recall the exact numbers, but it was higher than almost anyone thought. And that, to me, suggests the Alchian-Becker mechanisms of evolution at the level of the firm, enterprise, or even sector, they’re just a lot more important than human ingenuity. And that’s a pretty Hayekian point. Hayek presumably read those pieces in the. Don’t think he ever commented on them.", "Dwarkesh Patel 00:47:41", "Interesting. Let’s talk about Mill.", "Tyler Cowen 00:47:47", "Right, not James Mill, but he was interesting, too.", "Dwarkesh Patel 00:47:49", "So his arguments about the law force against women and how basically throughout history, the state of women in his society is not natural or the wisdom of the ages, but just the result of the fact that men are stronger and have codified that. Can we apply that argument in today’s society against children and the way we treat them?", "Tyler Cowen 00:48:09", "Yes, I think we should treat children much better. We’ve made quite a few steps in that direction. It’s interesting to think of Mill’s argument as it relates to Hayek. So Mill is arguing you can see more than just the local information. So keep in mind, when Mill wrote, every society that he knew of at least treated women very poorly, oppressed women because they were physically weaker or at a big disadvantage. If you think there’s some matrilineal exceptions, Mill didn’t know about them, so it appeared universal. And Mill’s chief argument is to say you’re making a big mistake if you overly aggregate information from this one observation, that behind it is a lot of structure, and a lot of the structure is contingent, and that if I, Mill, unpack the contingency for you, you will see behind the signals. So Mill is much more rationalist than Hayek. It’s one reason why Hayek hated Mill. But clearly, on the issue of women, Mill was completely correct that women can do much better, will do much better. It’s not clear what the end of this process will be. It will just continue for a long time. Women achieving in excellent ways. And it’s Mill’s greatest work. I think it’s one of the greatest pieces of social science, and it is anti-Hayekian. It’s anti-small c conservatism.", "Dwarkesh Patel 00:49:26", "His other book, On Liberty, is very Hayekian, though, right? In the sense that free speech is needed because information is contained in many different people’s minds.", "Tyler Cowen 00:49:34", "That’s right. And I think Mill integrated sort of. You could call it Hayek and anti-Hayek better than Hayek ever did. That’s why I think Mill is the greater thinker of the two.", "Dwarkesh Patel 00:49:45", "But on the topic of children, what would Mill say, specifically? I guess he could have talked about it if he wanted to, but I don’t know if he was. In today’s world, we send them to school. They’re there for 8 hours a day. Most of the time, it’s probably wasted, and we just use a lot of coercion on them. We don’t need to. How would he think about this issue?", "Tyler Cowen 00:50:03", "There’s Mill’s own upbringing, which was quite strict and by current standards, oppressive, but apparently extremely effective in making Mill smart. So I think Mill very much thought that kids should be induced to learn the classics, but he also stressed they needed free play of the imagination in a way that he drew from German and also British Romanticism, and he wanted some kind of synthesis of the two. But by current standards, Mill, I think, still would be seen as a meanie toward kids. But he was progressive by the standards of his own day.", "Dwarkesh Patel 00:50:37", "Do you buy the arguments about aristocratic tutoring for people like Mill? And there’s many other cases like this, but since they were kids, they were taught by one-on-one tutors, and that explains part of their greatness.", "Tyler Cowen 00:50:49", "I believe in one-on-one tutors. But I don’t know how much of those examples is selection, right? So I’m not sure how important it is. But just as a matter of fact, if I were a wealthy person and just had a new kid, I would absolutely invest in one-on-one tutors.", "Dwarkesh Patel 00:51:04", "You talk in the book about how Mill is very concerned about the quality and the character development of the population. But when we think about the fact that somebody like him was elected to the parliament at the time, the greatest thinker who’s alive is elected to government, and it’s hard to imagine that could be true in today’s world. Does he have a point with regards to the quality of the population?", "Tyler Cowen 00:51:29", "Well, Mill, as with women, he thought a lot of improvement was possible. And we shouldn’t overly generalize from seeing all the dunces around us, so to speak. Maybe the book is still out on that one, but it’s an encouraging belief, and I think it’s more right than wrong. There’s been a lot of moral progress since Mill’s time. Not in everything, but certainly in how people treat children or how men treat their wives. And even when you see negative reversals, Stephen Pinker so far seems to be right on that one. But you do see places like Iran. How women were treated seems to have been much better in the 1970s than it is today. So there are definitely reversals.", "Dwarkesh Patel 00:52:13", "But on a specific reversal of somebody of Mill’s quality probably wouldn’t get elected to Congress in the US or parliament in the UK. How big a deal is that?", "Tyler Cowen 00:52:22", "Advice may get through the cracks due to all the local statesmen who wisely advise their representatives in the House. Right. So I don’t know how much that process is better or worse compared to, say, the 1960s. I know plenty of smart people who think it’s worse. I’m not convinced that’s true.", "Dwarkesh Patel 00:52:41", "Let’s talk about Smith. Adam Smith. Yeah. Okay. One of the things I find really remarkable about him is he publishes in 1776, The Wealth of Nations. And basically, around that time, Gibbon publishes the decline and fall of the Roman Empire. Yep. So he publishes The Decline and Fall of the Roman Empire. And one of his lines in there is if you were asked to state a period of time when man’s condition was what is his best, it was during the reign of Commodus to Domitian. And that’s like 2000 years before that. Right. So there’s basically been at least it’s plausible to somebody really smart that there’s basically been no growth since for 2000 years. And in that context to be making the case for markets and mechanization and division of labor. I think it’s even more impressive when you put it in context that he has basically been seeing 0.5% or less growth.", "Tyler Cowen 00:53:29", "Strongly agree. And this is, in a way, Smith being like Mill. Smith is seeing the local information of very small growth and the world barely being better than the Roman Empire, and inferring from that, with increasing returns to division of labor, how much is possible. So Smith is a bit more of a rationalist than Hayek makes him out.", "Dwarkesh Patel 00:53:47", "To be right now, I wonder if we use the same sort of extrapolative thinking that Smith uses. We haven’t seen that much growth yet. But if you apply these sorts of principles, this is what you would expect to see. What would he make of the potential AI economy where we see 2% growth a year now, but you have billions of potential more agents or something. Would he say, well, actually, you might have 10% growth because of this? You would need more economic principles to explain this or that. Just adding that to our list of existing principles would imply big gains?", "Tyler Cowen 00:54:19", "It’s hard to say what Smith would predict for AI. My suspicion is that the notion of 10% growth was simply not conceivable to him. So he wouldn’t have predicted it because he never saw anything like it. That to him, 3% growth would be a bit like 10% growth. It would just shock him and bowl him over. But Smith does also emphasize different human bottlenecks and constraints of the law. So it’s quite possible Smith would see those bottlenecks as mattering and checking AI growth and its speed.", "Dwarkesh Patel 00:54:52", "But as a principle, given the change we saw pre-industrial revolution and after 1870, does it seem plausible to you that you could go from the current regime to a regime where you have 10% growth for decades on end?", "Tyler Cowen 00:55:08", "That does not seem plausible to me. But I would stress the point that high rates of growth decades on end, the numbers cease to have meaning because the numbers make the most sense when the economy is broadly similar, like, oh, everyone eats apples, and each year there’s 10% more apples at a roughly constant price. As the basket changes, the numbers become meaningless. It’s not to deny there’s a lot of growth, but you can think about it better by discarding the number. And presumably, AI will change the composition of various bundles quite a bit over time.", "Dwarkesh Patel 00:55:39", "So when you hear these estimates about what the GDP per capita was in the Roman Empire, do you just disregard that and think in terms of qualitative changes from that time?", "Tyler Cowen 00:55:45", "Depends on what they’re being compared to. So there are pieces in economic history that are looking at, say, the 17th, 18th-century Europe comparing it to the Roman Empire. Most of GDP is agriculture, which is pretty comparable, right? Especially in Europe. It’s not wheat versus corn. It’s wheat and wheat. And I’ve seen estimates, oh, say, by 1730, some parts of Western Europe are clearly better off than the Roman Empire at its peak. But, like, within range, those are the best estimates I know, and I trust those. They’re not perfect, but I don’t think there’s an index number problem so much.", "Dwarkesh Patel 00:56:23", "And so when people say, we’re 50% richer than an average Roman at the peak of the empire, this kind of thinking doesn’t make sense to you?", "Tyler Cowen 00:56:33", "It doesn’t make sense to me. And a simple way to show that. Let’s say you could buy from a Sears Robot catalog of today or from 1905, and you have $50,000 to spend. Which catalog would you rather buy from? You have to think about it right now. If you just look at changes in the CPI, it should be obvious you would prefer the catalog from 1905. Everything’s so much cheaper. That white shirt costs almost nothing.", "Dwarkesh Patel 00:56:59", "Right?", "Tyler Cowen 00:56:59", "At the same time, you don’t want that stuff. It’s not mostly part of the modern bundle. So even if you ended up preferring the earlier catalog, the fact that you have to think about it reflects the changes.", "Ambiguities when you read the contemporaries of Smith. Other economists who were writing at the time, were his arguments just clearly, given the evidence of the time, much better than everybody around? Or was it just that, exposed, he was clearly right. But given the arguments of the time, it could have gone any one of different ways.", "Tyler Cowen 00:57:29", "Well, there aren’t that many economists at the time of Smith, so it depends on what you’re counting. I mean, the two fellow Scots you could compare Smith to are Sir James Stewart, who published a major work, I think, in 1767. On some matters, Stewart was ahead of Smith. Not most, clearly Smith was far greater. But Stewart was no slouch. And the other point of comparison is David Hume, Smith’s best friend. Of course, per page, you could argue Hume was better than Smith. Certainly on monetary theory, Hume was better than Smith. Now, he’s not a GOAT contender. He just didn’t do enough. But I wouldn’t say Smith was ahead of Hume. He had more and more important insights. But Hume was pretty impressive. Now, if you’re talking about, oh, the 18th-century German cameralists, well, they were bad mercantilists, but there are people, say, writing in Sweden in the 1760s, analyzing exchange rates, who had better understandings of exchange rates than Smith ever did? So it’s not that he just dominated everyone.", "Dwarkesh Patel 00:58:31", "Let me offer some other potential nominees that were not in the book for GOAT, and I want your opinions of them. Henry George, in terms of explaining how land is fundamentally different from labor and capital when we’re thinking about the economy.", "Tyler Cowen 00:58:45", "Well, first, I’m not sure land is that fundamentally different from labor and capital. A lot of the value of land comes from improvements, and what’s an improvement can be quite subtle. It doesn’t just have to be putting a plow to the land. So I would put George in the top 25. Very important thinker. But he’s a bit of a. Not a one-note Johnny. His book on protectionism is still one of the best books on free trade. But he’s circumscribed in a way, say, Smith and Mill were not.", "Dwarkesh Patel 00:59:16", "Today. Does its status rise? We see rents in big cities.", "Tyler Cowen 00:59:20", "Status is way up for this reason because of YIMBY NIMBY. And I think that’s correct. He was undervalued. He’s worth reading very carefully. A few years ago we’re recording here at Mercatus, we had a like twelve-person, two-day session with Peter Thiel just on reading Henry George. It’s all we did. And people came away very impressed, I think.", "Dwarkesh Patel 00:59:42", "And for people who are interested, they might enjoy the episode I did with Lars Doucet, who, oh, I don’t know about this.", "Tyler Cowen 00:59:47", "He’s a Georgist.", "Dwarkesh Patel 00:59:48", "Oh yeah. He’s a really smart guy. Basically, he wrote a book review of Henry George that won Scott Alexander’s book review contest.", "Tyler Cowen 00:59:57", "Oh, I know this.", "Dwarkesh Patel 00:59:58", "And then he’s just turned it into a whole book of his own, which is actually really good.", "Tyler Cowen 01:00:02", "And I think there’s something truly humane in George when you read him. That can be a bit infectious. That’s positive.", "Dwarkesh Patel 01:00:10", "And there was some insane turnout for his funeral. Right. He was very popular at the time.", "Tyler Cowen 01:00:16", "And that was deserved.", "Dwarkesh Patel 01:00:17", "Yeah. I guess you already answered this question, but Ronald Coase, in terms of helping us think about firms and property rights and transaction costs, well, even though I.", "Tyler Cowen 01:00:26", "Think the 1937 piece is wrong, it did create one of the most important genres. He gets a lot of credit for that. He gets a lot of credit for the Coase theorem. The FCC property rights piece is superb. The lighthouse piece is very good. Again, he’s in the top 25, but in terms of his quantity, its own quality, it’s just not quite enough. There’s no macro, but of course, you rate him very, very highly.", "Dwarkesh Patel 01:00:51", "How about your former advisor, Thomas Schelling?", "Tyler Cowen 01:00:54", "He is a top-tier Nobel laureate, but I don’t think he’s a serious contender for the greatest economist of all time. He gets the most credit for making game theory intuitive, empirical, and workable, and that’s worth a lot. Economics of self-command. He was a pioneer, but in a way, that’s just going back to the Greeks and Smith. He’s not a serious contender for GOAT, but a top-tier Nobel laureate for sure.", "Dwarkesh Patel 01:01:21", "You have a fun quote in the book on Arrow where you say his work was Nobel Prize winning important, but not important important.", "Tyler Cowen 01:01:29", "Well, some parts of it were important important like how to price securities. So I think I underrated Arrow a bit in the book. If you ask like, what regrets do I have about the book? I say very, very nice things about Arrow, but I think I should have pushed him even more.", "Dwarkesh Patel 01:01:42", "What would Arrow say about prediction markets?", "Tyler Cowen 01:01:45", "Well, he was really the pioneer of theoretically understanding how they work. So he was around until quite recently. I’m sure he had things to say about prediction markets, probably positive.", "Dwarkesh Patel 01:02:00", "So one of the points you make in the book is economics at the time was really a way of carrying forward big ideas about the world. What discipline today is where that happens?", "Tyler Cowen 01:02:09", "Well, Internet writing, it’s not a discipline, but it’s a sphere, and plenty of it happens more than ever before. But it’s segregated from what counts as original theorizing in the academic sense of that word. Is that a good or bad segregation? I’m not sure, but it’s really a very sharp, radical break from how things had been. And it’s why I don’t think there’ll be a new GOAT contender. Probably not ever. Or if there is, it will be something AI-related.", "Dwarkesh Patel 01:02:36", "Yeah, that sounds about right to me. But within the context of Internet writing, obviously, there are many disciplines there, economics being a prominent one when you split it up, is there a discipline in terms of, I don’t know, people writing in terms of computer science concepts or people writing in terms of economic concepts? Who’s today?", "Tyler Cowen 01:02:55", "The discipline ceased to matter. That really good Internet writing is multidisciplinary. When I meet someone like a Scott Aronson, who’s doing, like, computer science, AI type Internet writing on his blog, I have way more in common with him than with a typical research economist, say, at Boston University. And it’s not because I know enough about computer science, like I may or may not know a certain amount, but it’s because our two enterprises are so similar. Or Scott Alexander, he writes about mental illness also. That just feels so similar, and we really have to rethink what the disciplines are. It may be that the method of writing is the key differentiator for this particular sphere, not for everything.", "Dwarkesh Patel 01:03:36", "Scott Aronson was my professor in college for a couple. Yeah, yeah. That’s where I decided I’m not going to go to grad school because you just see like two standard deviations above you easily. You might as well just choose a different game.", "Tyler Cowen 01:03:50", "But his method of thinking and writing is infectious, like that of Scott Alexander and many of the rest of us.", "Dwarkesh Patel 01:03:56", "Yeah. So I think in the book you say you were raised as much by economic thought or the history of economic thought as you are by your graduate training.", "Tyler Cowen 01:04:07", "More, much more. It’s not even close.", "Dwarkesh Patel 01:04:10", "Today people would say, I was talking to Basil Halperin, who’s a young economist, and he said he was raised on Marginal Revolution in the same way that you were raised on the history of economic thought. Does this seem like a good trade? Are you happy that people today are raised on Scott Alexander and Marginal Revolution?", "Tyler Cowen 01:04:26", "At the margin, I would like to see more people raised on Marginal Revolution. I don’t just mean that in a selfish way. Regarding the internet writing mode of thinking, I would like to see more economists and research scientists raised on it, but the number may be higher than we think. If I hadn’t run Emergent Ventures, I wouldn’t know about Basil, per se, maybe would not have met him. And it’s infectious, so it might always be a minority, but it will be the people most likely to have new ideas. It’s a very powerful new mode of thought, which I’ll call the Internet way of writing and thinking. And it’s not sufficiently recognized as something like a new field or discipline, but that’s what it is.", "Dwarkesh Patel 01:05:03", "I wonder if you’re doing enough of that when it comes to AI, where I think you have really interesting thoughts about GPD, five-level stuff, but somebody with your sort of polymathic understanding of different fields, if you just extrapolate out these trends, it seems like you might have a lot of interesting thoughts about what might be possible with something much further down the line.", "Tyler Cowen 01:05:21", "Well, I have a whole book with AI predictions, averages over, and I have about 30 Bloomberg columns and probably 30 or 40 Marginal Revolution posts. I can just say I’ll do more, but the rate at which ideas arrive at me is the binding constraint. I’m not holding them back.", "Dwarkesh Patel 01:05:40", "Speaking of Basil, he had an interesting question. Should society or government subsidize savings so that we’re, in effect, having it lead to basically a zero social discount rate. So people on average probably have their own lives prioritized. They have discount rates based on their own lives. If we’re long-term, should the government be subsidizing savings?", "Tyler Cowen 01:06:07", "I’ll come close to saying yes. First, we tax savings right now. So we should stop taxing savings. Absolutely. I think it’s hard to come up with workable ways of subsidizing savings that don’t give rich people a lot of free stuff in a way that’s politically unacceptable and also unfair. So I’m not sure we have a good way of subsidizing savings, but in principle, I would be for it if we could do it in a proper, targeted manner.", "Dwarkesh Patel 01:06:33", "Although you had a good argument against this in “Stubborn Attachments,” right? That over the long term, if economic growth is high enough, then the savings of the rich will just be dissipated to everybody below.", "Tyler Cowen 01:06:45", "Well, I’m not sure to whom it’s dissipated. It does get dissipated. The great fortunes of the past are mostly gone, but they may not go to people below. And the idea of writing into a tax system, subsidies on that scale, in essence, subsidies to wealth, not GDP. But wealth is, say, six to eight times GDP. I just think the practical problems are quite significant. It’s not an idea I’m pushing, but there are, at the margins, ways you can do it that only benefit people who are poor, ways you can improve through better either regulation or deregulation, like the workings of local credit unions, that are a kind of de facto subsidy without having to subsidize all of the saved wealth. There are a lot of ways you can do that, and we should look for that more.", "Dwarkesh Patel 01:07:30", "Relatedly, I think, a couple of years ago, Paul Schlemming had an interesting paper that if you look from 1311 to now, interest rates have been declining. There’s been hundreds of years of interest rate declines. What is the big picture explanation of this trend?", "Tyler Cowen 01:07:46", "I’m not sure we have one. You may know Cowen’s third law: All propositions about real interest rates are wrong. But simply, lower risk, better information, higher buffers of wealth would be what you’d call the intuitive economistic explanations. There’s probably something to them, but how much of that trend do they actually explain as a percent of the variance? I don’t know.", "Dwarkesh Patel 01:08:07", "Let’s talk about anarchy. You have first written about this; I hadn’t read the last time we talked, and it’s really interesting. So maybe you can restate your arguments as you answer this question, but how much of your arguments about how network industries lead to these cartel-like dynamics? How much of that can help explain what happened to social media, Web 2.0?", "Tyler Cowen 01:08:28", "I don’t view that as such a cartel. I think there’s a cartel at one level, which is small but significant. This is maybe more true three, four years ago than today, with Elon owning Twitter and other changes, but if someone got kicked off social media platforms three, four years ago, they would tend to get kicked off all or most of them. It wasn’t like a consciously collusive decision, but it’s a bit like, oh, well, I know the guy who runs that platform, and he’s pretty smart, and if he’s worried, I should be worried. And that was very bad. I don’t think it was otherwise such a collusive equilibrium, maybe some dimensions on hiring social people, software engineers. There was some collusion, not enough bidding, but it was mostly competing for attention. So I think the real risk protection agencies’ side of network-based collusion is through banking systems, where you have clearinghouses and payments networks, and to be part of it, the clearinghouse, in the absence of legal constraint, can indeed help everyone collude. And if you don’t go along with the collusion, you’re kicked out of the payment system. That strikes me as a real issue.", "Dwarkesh Patel 01:09:40", "Do your arguments against anarchy, do they apply at all to web 3.0 crypto-like stuff?", "Tyler Cowen 01:09:48", "Do I think it will evolve into collusion? I don’t see why it would. I’m open to hearing the argument that it could, though. What would that argument look like?", "Dwarkesh Patel 01:09:56", "Well, I guess we did see with crypto that in order to just have workable settlement, you need these centralized institutions, and from there you can get kicked off those, and the government is involved with those. And you can maybe abstract the government away and say that they will need to collude in some sense in order to facilitate transactions.", "Tyler Cowen 01:10:15", "And the exchanges have ended up quite centralized, right?", "Dwarkesh Patel 01:10:18", "Yeah.", "Tyler Cowen 01:10:19", "And that’s an example of clearinghouses and exchanges being the vulnerable node. But I don’t know how much web 3.0 is ever going to rely on that. It seems you can create new crypto assets more or less at will. There’s the focality of getting them started. But if there’s a real problem with the preexisting crypto assets, I would think you could overcome that. So I would expect something more like a to and fro, waves of centralization, decentralization, and natural checks embedded in the system. That’s my intuition, at least.", "Dwarkesh Patel 01:10:50", "Does your argument against anarchy prove too much in the sense that globally different nations have anarchic relations with each other, and they can’t enforce a monopoly on each other, but they can coordinate to punish bad actors in the way you want protection agencies to do? Right? Like we can sanction North Korea together or something.", "Tyler Cowen 01:11:07", "I think that’s a very good point and a very good question, but I would rephrase my argument. You could say it’s my argument against anarchy, and it is an argument against anarchy, but it’s also an argument that says anarchy is everywhere. So within government, the feds, the state governments, all the different layers of federalism, there’s a kind of anarchy. There’s not quite a final layer of adjudication, the way you might think we pretend there is. I’m not sure how strong it is internationally. Of course, how much gets enforced by a hegemon, how much is spontaneous order? Even the different parts of the federal government are in a kind of anarchy with respect to each other. So you need a fair degree of collusion for things to work, and you ought to accept that. But maybe in a Straussian way, where you don’t trumpet it too loudly. But the point that anarchy itself will evolve enough collusion to enable it to persist, if it persists at all, is my central point. My point is, well, anarchy isn’t that different now, given we’ve put a lot of social political capital into our current institutions, I don’t see why you would press the anarchy button. But if I’m North Korea and I can press the anarchy button for North Korea, I get that it might just evolve into Haiti, but I probably would press the anarchy button for North Korea if at least someone would come in and control the loose nukes.", "Dwarkesh Patel 01:12:30", "Yeah. This is related to one of those classic arguments against anarchy, that under anarchy, anything is allowed, so the government is allowed. Therefore, we’re in a state of anarchy in some sense.", "Tyler Cowen 01:12:39", "In a funny way, that argument’s correct. We would reevolve something like government. And Haiti has done this, but in very bad ways, where it’s gangs and killings. It doesn’t have to be that bad. There’s medieval Iceland, medieval Ireland. They had various forms of anarchy, clearly limited in their destructiveness by low population, ineffective weapons, but they had a kind of stability. You can’t just dismiss them and you can debate how governmental were they? But the ambiguity of those debates is part of the point that every system has a lot of anarchy, and anarchies have a fair degree of collusion if they survive, actually.", "Dwarkesh Patel 01:13:16", "So I want to go back to much earlier in the conversation where you’re saying, listen, it seems like intelligence is a net good. So just that being your heuristic, you should call forth the AI.", "Tyler Cowen 01:13:28", "Well, not uncritically. You need more argument. But just as a starting point, if more intelligence isn’t going to help you, you have some really big problems anyway.", "Dwarkesh Patel 01:13:37", "But I don’t know if you still have the view that we have like an 800-year timeline for human civilization, but that sort of timeline implies that intelligence actually is going to be the. Because the reason we have an 800-year timeline presumably is like some product of intelligence, right.", "Tyler Cowen 01:13:53", "My worry is that energy becomes too cheap and people at very low cost can destroy things rather easily. So, say, if destroying a city with a nuclear weapon cost $50,000, what would the world look like? I’m just not sure. It might be more stable than we think, but I’m greatly worried, and I could readily imagine it falling apart.", "Dwarkesh Patel 01:14:15", "Yeah. But I guess the bigger point I’m making is that in this case, the reason the nuke got so cheap was because of intelligence. Now, that doesn’t mean we should stop intelligence, but just that, if that’s like the end result of intelligence over hundreds of years, that doesn’t seem like intelligence is always that good.", "Tyler Cowen 01:14:34", "Well, we’re doing better than the other great apes, I would say, even though we face these really big risks. And in the meantime, we did incredible things. So that’s a gamble I would take, but I believe we should view it more self-consciously as a sort of gamble, and it’s too late to turn back. The fundamental choice was one of decentralization, and that may have happened hundreds of millions or billions of years ago. And once you opt for decentralization, intelligence is going to have advantages and you’re not going to be able to turn the clock back on it. So you’re walking this tightrope, and by goodness, you’d better do a good job. I mean, we should frame our broader history more like that, and it has implications for how you think about x-risk. Again, I think of the x-risk people, a bit of them. It’s like, well, I’ve been living in Berkeley a long time, and it’s really not that different. My life’s a bit better, and we can’t risk all of this. But that’s not how you should view broader history.", "Dwarkesh Patel 01:15:30", "I feel like you’re an expert person. Even they don’t think we’re like 100% guaranteed to go out by 800 years or something, we’re guaranteed at all.", "Tyler Cowen 01:15:38", "It’s up to us. I just think the risk, not that everyone dies, I think that’s quite low, but that we retreat to some kind of pretty chaotic form of like, medieval Balkans existence with a much lower population. That seems to me quite a high risk. With or without AI, it’s probably the default setting.", "Dwarkesh Patel 01:15:59", "Given that you think that’s a default setting, why is that not a big part of your, when you’re thinking about how new technologies are coming about, why not consciously think in terms of, is this getting us to the outcome where we avoid this sort of preindustrial state that would result from the $50,000 nukes?", "Tyler Cowen 01:16:17", "Well, if you think the risk is cheap energy, more than AI per se, admittedly, AI could speed the path to cheap energy. It seems very hard to control. The strategy that’s worked best so far is to have relatively benevolent nations become hegemons and establish dominance. So it does influence me. I want the US, UK, some other subset of nations to establish dominance in AI. It may not work forever, but in a decentralized world, it sure beats the alternative. So a lot of the AI types, they’re too rationalist, and they don’t start with the premise that we chose a decentralized world a very, very long time ago, even way before humans.", "Dwarkesh Patel 01:16:57", "And I think you made an interesting point when you were talking about Keynes in the book, where you said one of his faults was that he assumed that people like, would always be in charge.", "Tyler Cowen 01:17:06", "That’s right.", "Dwarkesh Patel 01:17:06", "And I do see that also in the alignment discourse. Like alignment is if it’s just handing over to the government and just assuming the government does what you’d expect it to do.", "Tyler Cowen 01:17:13", "And I worry about this from my own point of view. So even if you think the US is pretty benevolent today, which is a highly contested and mixed proposition, and I’m an American citizen, pretty patriotic, but I’m fully aware of the long history of my government in killing, enslaving, doing other terrible things to people. And then you have to rethink that over a long period of time, at maybe the worst time period, that affects the final outcome, even if the average is pretty good. And then if power corrupts, and if the government even indirectly controls AI systems, so the US government could become worse because it’s a leader in AI. Right? But again, I’ve got to still take that over China or Russia or wherever else it might be.", "I just don’t really understand when people talk about national security. I’ve never seen the AI doomers say anything that made sense. And I recall those early days. Remember China issued that edict where they said, we’re only going to put AIs that are safe and they can’t criticize the CCP. How many super smart people, and I mean super smart, like X, just jump on that and say, see, China’s not going to compete with us. We can shut AI down. They just seem to have zero understanding of some properties of decentralized worlds.", "Or Eliezer’s tweet, was it from yesterday? I didn’t think it was a joke, but, oh, there’s a problem. That AI can read all the legal code and threaten us with all these penalties. It’s like he has no idea how screwed up the legal system is. Yeah, it would just be courtroom waits of, like, 70 or 700 years. It wouldn’t become a thing people are afraid of. It would be a social problem in some way.", "Dwarkesh Patel 01:18:52", "What’s your sense of how the government reacts when the labs are doing, regardless of how they should react, how they will react, and when the labs are doing, like, I don’t know, $10 billion training runs? And if under the premise that these are powerful models, not human level per se, but just they can do all kinds of crazy stuff, how do you think the government’s going to. Are they going to nationalize the labs or staying in Washington? What’s your sense?", "Tyler Cowen 01:19:15", "I think our national security people are amongst the smartest people in our government. They’re mostly well intentioned in a good way. They’re paying careful attention to many things. But what will be the political will to do what they don’t control? And my guess is, until there’s sort of an SBF like incident, which might even not be significant, but a headlines incident, which SBF was, even if it doesn’t affect the future evolution of crypto, which I guess is my view, it won’t. Until there’s that, we won’t do much of anything, and then we’ll have an SBF like incident, and we’ll overreact. That seems a very common pattern in American history. And the fact that it’s AI, the stakes might be high or whatever. I doubt if it will change the recurrence of that pattern.", "Dwarkesh Patel 01:20:01", "How would Robert Nozick think about different AI utopias?", "Tyler Cowen 01:20:05", "Well, I think he did think about different AI utopias. Right? So I believe whether he wrote or talked about it, but the notion of humans much smarter than they are, or the notion of aliens coming down who are like in some way, morally, intellectually way beyond us. He did write about that and he was worried about how they would treat us. So he was sensitive to what you would call AI risk, viewed a bit more broadly very early on.", "Dwarkesh Patel 01:20:34", "What was his take?", "Tyler Cowen 01:20:35", "Well, Nozick is not a thinker of takes. He was a thinker of speculations and multiple possibilities, which I liked about him. He was worried about it, this I know, and I talked to him about it, but I couldn’t boil it down to a simple take. It made him a vegetarian, I should add.", "Dwarkesh Patel 01:20:54", "Wait, that made him because we want to be treating the entities that are to us as AI.", "Tyler Cowen 01:20:59", "Aliens from outer space might treat us. We are like that to animals. May not be a perfect analogy, but it’s still an interesting point. And therefore we should be vegetarians. That was his argument. At least he felt he should be.", "Dwarkesh Patel 01:21:11", "I wonder if we should honor past generations more, or at least respect their wishes more. For if we think of the alignment problem, it’s similar to how we react to our previous generations. Do we want the AIs to treat us as we treat people thousands of years ago?", "Tyler Cowen 01:21:26", "Yeah, it’s a good question. And I’ve never met anyone who’s consistent with how they view wishes of the dead. Yeah, I don’t think there is a consistent, philosophically grounded point of view on that one.", "Dwarkesh Patel 01:21:39", "I guess the sort of Thomas Paine view of you don’t regard them at all. Is that not self consistent?", "Tyler Cowen 01:21:43", "It’s consistent, but I’ve never met anyone who actually lives according to.", "Dwarkesh Patel 01:21:47", "Oh, and what’s inside?", "Tyler Cowen 01:21:48", "Are they contradicting, say, you know, their spouse were to die and the spouse gave them instructions? Sure, they would put weight on those instructions. Somewhere out there there’s probably someone who wouldn’t. But I’ve never met such a person.", "Dwarkesh Patel 01:22:01", "And how about the Burke view that you take them very seriously? Why is that not self consistent?", "Tyler Cowen 01:22:06", "The Burke view? What do you mean?", "Dwarkesh Patel 01:22:07", "Burke view.", "Tyler Cowen 01:22:08", "Oh, well, it’s time inconsistent to take those preferences seriously. And Burke himself understood that; he was a very deep thinker. So, well, you take them seriously now, but as time passes, other ancestors come along, they have somewhat different views. You have to keep on changing course. What you should do now, should it be what the ancestors behind us want, or your best estimate of what the 30 or 40 years of ancestors to come will want once they have become ancestors? So it’s time, inconsistent again. There’s not going to be a strictly philosophical resolution. There will be practical attempts to find something sustainable, and that which survives will be that which we do, and then we’ll somewhat rationalize it, ex post.", "Dwarkesh Patel 01:22:51", "Yeah. There’s an interesting book about the ancient, ancient Greeks. What is it called? I forgot the name. But it talks about the hearths that they have for their families, where the dead become gods. But then over time, if you keep this hearth going for hundreds of years, there’s like thousands of ancestors that you don’t even remember their names. Right. Who are you praying to?", "Tyler Cowen 01:23:11", "And then it’s like the Arrow Impossibility Theorem for all the gods.", "Dwarkesh Patel 01:23:15", "What do they all want me to do?", "Tyler Cowen 01:23:17", "And you can’t even ask them.", "Dwarkesh Patel 01:23:18", "Yeah, okay. We were talking before we started recording about Argentina and the reforms they’re trying there. And they’re trying to dollarize because the dollar is more stable than their currency. But this raises the question of why is the dollar so stable? So we’re also a democracy. Right. But the dollar seems pretty well managed. What is the larger explanation of why monetary policy seems well managed in the US?", "Tyler Cowen 01:23:42", "Well, US voters hate inflation, mostly for good reasons, and we have enough wealth that we can pay our bills without having to inflate very much. And 2% has been stable now for quite a while. It’s an interesting question, which I cannot answer, and I have looked into this and asked smart people from Argentina, why does Argentina in particular have recurring waves of hyperinflation? Is there something about the structure of their interest groups that inevitably, recurringly leads them to demand too much? I suppose, but there are plenty of poor, badly run countries that don’t have hyperinflation. African countries historically have not had high rates of hyperinflation, haven’t had high rates of inflation. Why is that? Well, maybe they don’t capture enough through seigniorage. For some reason, currency holdings aren’t large enough, there’s some kind of financial repression, I don’t know, but it’s very hard to explain why some of these countries, but not others, go crazy with the printing press.", "Dwarkesh Patel 01:24:41", "And this is maybe a broader question about different institutions in the government where I don’t understand enough to evaluate their object-level decisions. But if you look at the Supreme Court or the Federal Reserve or something, just from a distance, it seems like they’re really well-run, competent organizations with highly technocratic, nonpartisan people running them.", "Tyler Cowen 01:25:01", "They’re not nonpartisan, but they’re still well run.", "Dwarkesh Patel 01:25:03", "Yeah. And what’s the theory of why these institutions, in particular, are so much better run? Is it just that they’re one step back from direct elections? Is it that they have traditions of knowledge within them? How do we think about this?", "Tyler Cowen 01:25:18", "I think both of those. I don’t think the elections point is sufficient because there are plenty of unelected bodies that are totally corrupt around the world. Most of them are, perhaps some sense of American civic virtue that gets communicated and then the incentives are such. Say you’re on the Fed for a while, what you can do afterward can be rewarding, but you want a reputation for having done a good job. So your sense of morality and your private self-interest coincide, and that’s pretty strong. And we’re still in that loop. I don’t really see signs of that loop breaking.", "Dwarkesh Patel 01:25:52", "It’s also striking to me how many times I’ll read an interesting article or paper and the person who wrote it, it’s like the former head of the Federal Reserve in New York or something. It just seems like that’s a strong vindication of these institutions, that the standards are very high.", "Tyler Cowen 01:26:03", "And if you speak with any of those people, like who’ve been on Fed boards, ask them questions, they’re super smart, super involved, curious, really, for the most part, do want the best thing for their country.", "Dwarkesh Patel 01:26:15", "Going back to these economists at the end, you talk about how you’re kind of disappointed in this turn that economics has taken.", "Tyler Cowen 01:26:22", "Maybe I’m just not surprised.", "Dwarkesh Patel 01:26:23", "Right.", "Tyler Cowen 01:26:23", "It’s division of labor. Adam Smith, who said it would make people a bit feeble-minded and infurious, was completely correct.", "Dwarkesh Patel 01:26:31", "Wait, Adam Smith said what would make people division of labor? I see, right. Yeah.", "Tyler Cowen 01:26:35", "Not stupid. Current economic researchers probably have never been smarter, but they’re way less broad and less curious.", "Dwarkesh Patel 01:26:44", "Patrick Carlison put it in an interesting way where he said, in the past, maybe thinkers were more interested in delving into the biggest questions, but if they couldn’t do it rigorously in a tractable way, they would make the trade-off in favor of the big question. And today we make the opposite trade-off. Does that seem like a fair comparison?", "Tyler Cowen 01:27:02", "I think that’s correct. And I would add that, say, in the time of Smith, there was nothing you could do rigorously. So there was no other option.", "Well, oh, I’m going to specialize in memorizing all the grain prices and run some great econometrics on that. And that’ll be rigorous. It’s really William Stanley Jevons who to the Anglo world introduced this notion. There’s something else you can do that’s rigorous. It was not yet rigorous, but he opened the door and showed people the.", "Dwarkesh Patel 01:27:27", "Alternative of the Jevons paradox?", "Tyler Cowen 01:27:32", "Well, I would say his work in statistics originally on the value of money.", "Dwarkesh Patel 01:27:35", "Right.", "Tyler Cowen 01:27:36", "But his statistical work on coal also had some rigor, so you’re not wrong to cite that. And Jevons just showed that rigorous statistical work and economics could be the same thing, and that was his greater innovation than just marginalism. So he’s an underrated figure. Maybe he should be in the book in a way, but it had some unfortunate secondary consequences. Too many people crowd into specialization. “Crowd” is a funny word to use because they’re each sitting in their separate nodes, but it’s a kind of crowding.", "Dwarkesh Patel 01:28:07", "Is there some sort of Hayekian solution here, where in markets, the effect of having the sort of decentralized process is that the sum is greater than the parts, whereas in academic disciplines, the sum is just a bunch of different statistical aggregates. There’s no grand theory that comes together as a result of all this micro work. Is there some Hayekian solution here?", "Tyler Cowen 01:28:30", "Well, yes, you and I are the Hayekian solution that as specialists proliferate, we can be, quote-unquote, parasitic on them and take what they do and turn it into interesting larger bundles that they haven’t dreamt of and make some kind of living doing that. And we’re much smaller in number, but I’m not sure how numerous we should be. And there’s a bunch of us, right?", "Dwarkesh Patel 01:28:51", "You’re in a separate category, Tyler. I’m running a podcast here.", "Tyler Cowen 01:28:55", "I run a podcast. We’re exactly in the same category, is my point.", "Dwarkesh Patel 01:29:00", "And what do you see as the future of the kind of sort of thinking you do? Do you see yourself as the last of the literary economists, or is there a future of this kind of. Is it just going to be the slatesar codexes? Are they going to take care of it, or this sort of lineage of thinking?", "Tyler Cowen 01:29:16", "Well, the next me won’t be like me in that sense. I’m the last, but I don’t think it will disappear. It will take new forms. It may have a lot more to do with AI, and I don’t think it’s going to go away. There’s just a demand for it. There’s a real demand for our products. We have a lot of readers, listeners, people interested, whatever, and there’ll be ways to monetize that. The challenge might be competing against AI, and it doesn’t have to be that AI does it better than you or I do, though it might, but simply that people prefer to read what the AIs generate for ten or 20 years. And it’s harder to get an audience because playing with the AIs is a lot of fun. So that will be a real challenge. I think some of us will be up to it. You’ll be faced with it more than I will be, but it’s going to change a lot.", "Dwarkesh Patel 01:30:03", "Yeah. Okay. One of the final things I want to do is I want to go into political philosophy a little bit. Okay. And ask that we haven’t been doing it already. Okay. So I want to ask you about certain potential weaknesses of the democratic capitalist model that we live in. And in terms of both in terms of whether you think they’re object-level right and second, regardless of how right they are, how persuasive and how powerful a force they will be against our system of government and functioning. Okay, so there’s a libertarian critique that basically democracy is sort of a random walk with a drift towards socialism. And there’s also a ratchet effect where government programs don’t go away. And so it just ends up towards socialism at the end.", "Tyler Cowen 01:30:52", "It ends up with having a government that is too large. But I don’t see the evidence that it’s a road to serfdom. France and Sweden have had pretty big governments, way too large, in my opinion. But they haven’t threatened to turn autocratic or totalitarian. Certainly not. And you’ve seen reforms in many of those countries. Sweden moved away from a government approaching 70% of GDP, and now it’s quite manageable. Government there should be smaller.", "Dwarkesh Patel 01:31:19", "Yet.", "Tyler Cowen 01:31:20", "I don’t think the trend is that negative. It’s more of a problem with regulation and the administrative state. But we’ve shown an ability to create new sectors, like big parts of tech. They’re not unregulated. Laws apply to them, but they’re way less regulated. And it’s a kind of race. That race doesn’t look too bad to me at the moment. We could lose it, but so far so good. So the critique should be taken seriously, but it’s yet to be validated.", "Dwarkesh Patel 01:31:47", "How about the egalitarian critique from the left that you can’t have the inequality the market creates with the political and moral equality that humans deserve and demand?", "Tyler Cowen 01:31:58", "They just say that.", "Dwarkesh Patel 01:31:59", "What’s the evidence?", "Tyler Cowen 01:32:01", "The US has a high degree of income inequality. So does Brazil, a much less well-functioning society. Brazil continues on average, it will probably grow one or 2%. That’s not a great record. But Brazil has yet to go up in a puff of smoke. I don’t see it.", "Dwarkesh Patel 01:32:18", "And how about the Nietzschean critique? And in the end, of history. Fukuyama says this is more powerful. This is the one he’s more worried about, more so than the leftist critique. And over time, basically what you end up with is the last man and you can’t defend the civilization. You know the story.", "Tyler Cowen 01:32:33", "It’s a lot of words. I mean, is he short the market? I’ve asked Fukuyama, this is a long time ago, but he wasn’t. Then again, it’s a real issue. It seems to me the problems of today, for the most part, are more manageable than the problems of any previous era. We still might all go, poof, return to medieval balkan style existence in a millennia or whatever, but it’s a fight and we’re totally in the fight and we have a lot of resources and talent. So, let’s do it.", "Dwarkesh Patel 01:33:06", "Okay.", "Tyler Cowen 01:33:07", "I don’t see why that particular worry is so dominant. It’s a lot of words and I like to get very concrete. Like even if you’re not short the market, if that were the main relevant worry, where would that show up in asset prices as it got worse? It’s a very concrete question. I think it’s very useful to ask. And when people don’t have a clear answer, I get worried.", "Dwarkesh Patel 01:33:26", "Where does your prediction that hundreds of years down the line we’ll have the $50,000 nukes? Where does that show up in the asset prices?", "Tyler Cowen 01:33:33", "I think at some point VIX, an index of volatility, will go up, probably not soon. Nuclear proliferation has not gone crazy, which is wonderful, but I think at some point it’s hard to imagine it not getting out of control.", "Dwarkesh Patel 01:33:49", "Last I read, VIX is surprisingly low and stable.", "Tyler Cowen 01:33:52", "That’s right. I think 2024 is on the path to be a pretty good year.", "Dwarkesh Patel 01:33:56", "Yeah. Or do you think the market is just wrong in terms of thinking about both geopolitical risk from Israel or…", "Tyler Cowen 01:34:01", "No, I don’t think the market’s wrong at all. I think that war will converge. I’m not saying the humanitarian outcome is a good one, but in terms of the global economy, I think markets are thinking rationally about it, though the rational forecast, of course, is often wrong.", "Dwarkesh Patel 01:34:16", "What’s your sense on the scaling stuff? When you look at the arguments in terms of what’s coming, how do you react to that?", "Tyler Cowen 01:34:22", "Well, your piece on that was great. I don’t feel I have the expertise to judge that as a technical matter. It does seem to me intuitively it would be weird on the technical side if scaling just stopped working. But on the knowledge side, I think people underestimate possible barriers. And what I have in mind is quite a bit of reality. The universe might in some very fundamental way simply not be legible, and that there’s no easy and fruitful way to just quote-unquote, apply more intelligence to the problem. Like, oh, you want to integrate general relativity and quantum mechanics. It may just be we’ve hit the frontier and there’s not a final layer of, oh, here’s how it fits together. So there’s no way to train an AI or other thing to make it smarter to solve that. And maybe a lot of the world is like that. And that to me, people are not taking seriously enough. So I’m not sure what the net returns will be to bigger and better and smarter AI.", "Dwarkesh Patel 01:35:17", "That seems possible for P versus NP type of reasons. It’s just like harder to make further discoveries. But I feel like we have pretty good estimates in terms of the declining researcher productivity because of low-hanging fruit being gone in this sort of sense of we’re reaching the frontier and whatever percent it is a year, if you can just keep the AI population growing faster than that, if you just want to be crude about it, that seems enough to, if not get to the ultimate physical synthesis, at least much farther than where human civilization would get in the same span of time, that seems very plausible.", "Tyler Cowen 01:35:51", "I think we’ll get further. I expect big productivity gains. As a side note, I’m less convinced by the declining researcher productivity argument than I used to be. So the best way to measure productivity for an economist is wages. And wages of researchers haven’t gone down, period. In fact, they’ve gone up. Now, they may not be producing new ideas. You might be paying them to be functionaries or to manage PR or to just manage other researchers. But I think that’s a worry that we have a lot more researchers with generally rising researcher wages, and that hasn’t boosted productivity growth. China, India, South Korea brought into the world economy scientific talent. It’s better than if we hadn’t done it, but it hasn’t, in absolute terms, boosted productivity growth. And maybe that’s a worrisome sign.", "Dwarkesh Patel 01:36:42", "The metric of researcher wages. It seems like it could just be a fact that even the less marginally useful improvements are worth the extra cost. In terms of if you think of a company like Google is probably paying its engineers a lot more than it was paying in the early days, even though they’re doing less now because changing a pixel on the new Google page is going to affect billions of users. The same thing could be happening in the economy. Right.", "Tyler Cowen 01:37:06", "That might hold for Google researchers, but take people in pharma, biomedicine. There’s a lot of private sector financed research or indirectly financed by buying up smaller companies. And it only makes sense if you get something out of it that really works, like a good vaccine or good medication. Ozempic, super profitable. So wages for biomedical researchers in general haven’t gone down. Now, finally, it’s paying off. But I’m not sure AI will be as revolutionary as the other AI optimists believe. I do think it will raise productivity growth in ways which are visible.", "Dwarkesh Patel 01:37:43", "To what extent? In the conventional growth story, you think in terms of population size, right. And then, so you just increase the population size. You get much more research at the other end. To what extent does it make sense to think about, well, if you have these billions of AI copies, we can think of that in terms of, as a proxy of how much progress they could produce. Is that not a sensible way to think about that?", "Tyler Cowen 01:38:04", "At some point, having billions of copies probably won’t matter. What will matter much more is how good the best thing we have is, and how well integrated it is into our other systems, which have bottlenecks of their own. The principles governing the growth of that are much harder to discern. It’s probably a much slower growth than just juicing up. “Oh, we’ve got a lot of these things, and they’re trained on more and more GPUs.”", "Dwarkesh Patel 01:38:28", "But precisely because the top seems to matter so much is why we might expect bigger gains. Right? So if you think about Jews in the 20th century, 2% of the population or less than that, and 20% of the Nobel Prizes, it does seem like you can have a much bigger impact than if you’re on the very tail if you just have just a few.", "Tyler Cowen 01:38:46", "A hundred John von Neumann copies, maybe that’s a good analogy. That the impact of AI will be like in the 20th century, the impact of Jews, right? Which would be excellent. Right?", "Dwarkesh Patel 01:38:56", "Yeah.", "Tyler Cowen 01:38:56", "But it’s not extraordinary. It’s not a science fiction novel.", "Dwarkesh Patel 01:39:00", "It is. I mean, you read the early 20th-century stuff as you have. It’s like a slow takeoff right there, of like go from V2 rockets to the moon in a couple of decades. It’s kind of a crazy pace of change.", "Tyler Cowen 01:39:12", "Yeah, that’s what I think it will be like again. Great stagnation is over. We’ll go back to those earlier rates of change, transform a lot of the world, mostly a big positive. A lot of chaos disrupted institutions along the way. That’s my prediction. But no one writes a science fiction novel about the 20th century. It feels a bit ordinary still.", "Dwarkesh Patel 01:39:32", "Yeah.", "Tyler Cowen 01:39:33", "Even though it wasn’t.", "Dwarkesh Patel 01:39:34", "I forget the name of the philosopher you asked this to, but the feminist philosopher you asked the question. Amiya Srinivasan, you asked the question, what would have to be different for you to be a social conservative?", "Tyler Cowen 01:39:43", "Right.", "Dwarkesh Patel 01:39:44", "What would have to be different for you to not be a doomer, per se, but just one of these people who think this is the main thing to be thinking about during this period of history or something like that?", "Tyler Cowen 01:39:52", "Well, I think it is one of the main things we should be thinking about. But I would say if I thought international cooperation were very possible, I would at least possibly have very different views than I do now, or if I thought no other country could make progress on AI. Those seem unlikely to me, but they’re not logically impossible. So the fundamental premise where I differ from a lot of the doomers is my understanding of a decentralized world and its principles being primary. Their understanding is some kind of comparison. Like, here’s the little people, and here’s the big monster, and the big monster gets bigger, and even if the big monster does a lot of good things, it’s just getting bigger, and here are the little people. That’s a possible framework, but if you start with decentralization and competition and well, how are we going to manage this? In some ways, my perspective might be more pessimistic. But you don’t just think you can wake up in the morning and legislate safety. You look at the history of relative safety having come from hegemons, and you hope your hegemon stays good enough, which is a deeply fraught proposition. I recognize that.", "Dwarkesh Patel 01:41:04", "What’s the next book?", "Tyler Cowen 01:41:06", "I’m already writing it. Part of it is on Jevons, but the title is The Marginal Revolution. But not about the blog, about the actual marginal. But it’s maybe a monograph, like 40,000 words. But I don’t think book length should matter anymore. I want to be more radical on that.", "Dwarkesh Patel 01:41:25", "I think 40,000 words is perfect because it’ll actually fit in context. So when you do the GPT-4.", "Tyler Cowen 01:41:31", "Now, the context may be bigger by then. Yeah, but I want to have it in GPT in some way, or whatever has replaced it.", "Dwarkesh Patel 01:41:40", "Okay. Those are all the questions I had. Tyler, this was a lot of fun.", "Tyler Cowen 01:41:43", "And keep up the great work, and delighted you’re at it. Thank you.", "Dwarkesh Patel 01:41:47", "Thank you. Yeah, thanks for coming on the podcast. It’s the third time now, so a lot of fun.", "Tyler Cowen 01:41:51", "Okay, bye. Everyone." ]
[]
https://www.dwarkesh.com/p/tyler-cowen-4
Tyler Cowen - the #1 bottleneck to AI progress is humans
[ "Dwarkesh Patel 00:00:07", "Tyler, welcome.", "Tyler Cowen 00:00:08", "Dwarkesh, great to be chatting with you.", "Dwarkesh Patel 00:00:11", "Why won't we have explosive economic growth, 20% plus, because of AI?", "Tyler Cowen 00:00:17", "It's very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can't use AI very well.", "Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I'm guessing education is 6 to 7%. The nonprofit sector, I'm not sure the number, but you add it all up, that's half of the economy right there.", "How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you'll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.", "Dwarkesh Patel 00:01:04", "The mechanism behind cost disease is that there's a limited amount of laborers, and if there's one high productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?", "Tyler Cowen 00:01:25", "Cost disease is more general than that. Let's say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.", "Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.", "Dwarkesh Patel 00:01:57", "If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you'd have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?", "Tyler Cowen 00:02:11", "He and I would agree, I hope. I think I would tell him, \"Hey, it's going to take a long time.\" And he'd say, \"Hmm, I don't see it happening yet. I think it's going to take a long time.\" And we'd shake hands and walk off into the sunset. And then I'd eat some of his rice or wheat or whatever, and that would be awesome.", "Dwarkesh Patel 00:02:29", "But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don't just eat it away, you could agree with that, right?", "Tyler Cowen 00:02:38", "I don't know what the word \"could\" means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you'd call prediction markets are not forecasting super rapid growth anytime soon.", "If you look at what experts on economic growth rate... We had Chad Jones here yesterday. He's not predicting super rapid growth, though he thinks AI might well accelerate rates of growth. So the experts and the markets agree. Who am I to say different from the experts?", "Dwarkesh Patel 00:03:09", "You're an expert.", "Tyler Cowen 00:03:11", "I'm with another expert.", "Dwarkesh Patel 00:03:13", "In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order of magnitude increase in the population, you plug that number in in his model, you get explosive economic growth.", "Tyler Cowen 00:03:26", "I don't agree.", "Dwarkesh Patel 00:03:27", "Why not buy the models?", "Tyler Cowen 00:03:28", "His model is far too much a one-factor model, right? Population. I don't think it's very predictive. We've had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.", "So it's really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it's unclear what's happened, but it's also fragile. There's the perspective of the economist, but also that of the anthropologist, the sociologist.", "They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there's any simple lever you can push on, intelligence or not, that's going to give you breakaway economic growth.", "Dwarkesh Patel 00:04:11", "What you just said, where you're bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you're going to be bottlenecked by the restaurants.", "Tyler Cowen 00:04:20", "You're one of our best people, right? You're frustrated by all kinds of things.", "Dwarkesh Patel 00:04:25", "I think I'm gonna be making a lot more podcasts after AGI.", "Tyler Cowen 00:04:28", "Okay, good. I'll listen. I'll be bottlenecked by time.", "Here's a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.", "We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.", "Dwarkesh Patel 00:04:53", "So about a year ago, your co-writer on Martial Revolution , Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you have 164,000 geniuses. That's why you have to do semiconductors in Taiwan, because that's where they're putting their nominal amount of geniuses. We're putting ours in finance and tech.", "If you look at that framework, we have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn't going to grow any faster?", "Tyler Cowen 00:05:32", "I didn't agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they're amazingly low. They're pretty insignificant.", "People who are very successful, they're very smart, but they're people who have say eight or nine areas where they're like, on a scale of 1 to 10, there are nine. Like they have one area where they're just like an 11 and a half on a scale of 1 to 10. And then on everything else, they're an eight to a nine and have a lot of determination.", "And that's what leads to incredible success. And IQ is one of those things, but it's not actually that important. It's the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.", "Like just try going to a mid-tier state university and sit down with the committee designed to develop a plan for using artificial intelligence in the curriculum. And then come back to me and tell me how that went, and then we'll talk about bottlenecks. They will write a report. The report will sound like GPT-4, and we'll have the report. The report will not be bottlenecked, I promise you.", "Dwarkesh Patel 00:06:38", "These other traits, look, the AIs, it's conscientiousness, if it's pliability, whatever. The AIs will be even more conscientious. They work 24/7. If you need to be deferential to the FDA, they'll write the best report the FDA has ever seen. They'll get things going along. With these other traits, they're not going to be bottlenecked by them.", "Tyler Cowen 00:06:54", "They'll be smart and they'll be conscientious, that I strongly believe. I think they will boost the rate of economic growth by something like half a percentage point a year. Over 30-40 years, that's an enormous difference. It will transform the entire world.", "But in any given year, we won't so much notice it. A lot of it is something like a drug that might have taken 20 years now will come in 10 years. But at the end of it all is still our system of clinical trials and regulation.", "If everything that took 20 years takes 10 years, over time, that's an immense difference. But you don't quite feel it as so revolutionary for a long time.", "Dwarkesh Patel 00:07:27", "The whole vibe of this progress studies thing is, look, we've got all these low-hanging fruits or medium-hanging fruits that if we fix our institutions, if we made these changes to regulations, to institutions, we could rapidly boost the rate of economic growth. You're okay, so we can fix the NIH and get increases in economic growth. But we have a billion extra people, 10 billion extra people, the smartest people, the most conscientious people, and that has an iota of difference of economic growth.", "Isn't there a contradiction between how much that rate of economic growth can increase between these two perspectives?", "Tyler Cowen 00:07:58", "There's diminishing marginal returns to most of these factors. A simple one is how it interacts with regulation, law, and the government. Another huge one is energy usage.", "How good is our country in particular at expanding energy supply? I've seen a few encouraging signs lately with nuclear power. That's great. Most places won't do it.", "And even those reports, exactly how many years it will take, I know what the press releases say. We'll see, you know, it could be 10 years or more. That will just be a smidgen of what we'll need to implement the kind of vision you're describing.", "So yeah, they're going to be bottlenecks all along the way, the whole way. It's going to be a tough slog, like the printing press, like electricity. The people who study diffusion of new technologies never think there will be rapid takeoff.", "My view is I'm always siding with the experts. Economists, social scientists, most of them are blind and asleep to the promise of strong AI. They're just out to lunch. I think they're wrong. I trust the AI experts.", "But when you talk about, say, diffusion of new technologies, the people who do AI are basically totally wrong. The people who study that issue, I trust the experts. If you put together the two views where in each area you trust the experts, then you get my view, which is amazing in the long run, will take a long time, tough slog, all these bottlenecks in the short run.", "The fact that there's like a billion of your GPT whatevers, which I'm all in love with, I promise you it's going to take a while.", "Dwarkesh Patel 00:09:26", "What would the experts say if you said, look, we're going to have, forget about AI, because I feel like when people hear AI, they think of GPT-4, not the humans, not the things that are going to be as smart as humans. What would the experts say if you said tomorrow the world population, the labor force, is going to double? What impact would that have?", "Tyler Cowen 00:09:41", "Well, what's the variable I'm trying to predict? If you mean energy usage, that's going to go up, right? Over time, it's probably going to double.", "Dwarkesh Patel 00:09:51", "Growth rate, I'm not sure it'd be a noticeable difference. Doubling the world population?", "Tyler Cowen 00:09:52", "Yeah, I'm not sure. I don't think the Romer model has been validated by the data. And I don't agree with the Chad Jones model, much as I love him as an economist. I don't think it's that predictive.", "Look at artistic production in Renaissance Florence. There are what, 60,000 people in the city, the surrounding countryside. But it's that so many things went right at the top level that it was so amazing in terms of still value added today.", "The numbers model doesn't predict very well.", "Dwarkesh Patel 00:10:19", "The world economy today is some hundred trillion something. If the world population was one-tenth of what it is now, if we only had 1 billion people, 100 million people, you think we could have the world economy at this level with our level of technology?", "Tyler Cowen 00:10:31", "No. The delta is a killer, right? This is one thing we learned from macro. The delta and the levels really interact.", "So shrinking can kill you. Just like companies, nonprofits, if they shrink too much, often they just blow up and disappear. They implode. But that doesn't mean that growing them gets you 3x, 4x, whatever, proportional to how they grow.", "It's oddly asymmetric. It's very hard to internalize emotionally that intuition in your understanding of the real world. But I think we need to.", "Dwarkesh Patel 00:11:00", "What are the specific bottlenecks? Like, why?", "Tyler Cowen 00:11:04", "Humans. Here they are. Bottleneck, bottleneck. Hi, good to see you. And some of you are terrified. You're going to be even bigger bottlenecks.", "That's fine. It's part of free speech. But my goodness, once it starts changing what the world looks like, there will be much more opposition. Not necessarily from what I call doomsday grounds, but just people like, hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world. I don't want this.", "And that's going to be a massive fight. I really have no prediction as to how it's going to go. But I promise you, that will be a bottleneck.", "Dwarkesh Patel 00:11:40", "But you can see even historically, you don't have to go from the farmers to industrial revolution 10x. You can just look at actually cases in history where we have had 10x rates, sorry, 10% increase rates of economic growth. You go to China after Deng Xiaoping, they have decades of 10% economic growth. And then that just because you can do some sort of catch up.", "The idea that you can't replicate that with AI or that it's not infeasible. Where were the bottlenecks when Deng Xiaoping took over?", "Tyler Cowen 00:12:09", "They're in a mess now. I'm not sure how it's going to go for them. They're just a middle-income country. They struggled to tie per capita income with Mexico.", "I think they're a little ahead of Mexico now. They're the least successful Chinese society in part because of their scale. Their scale is one of their big problems.", "There's this fear that if they democratize and try to become a normal country, that the median voter won't protect the interests of the elites. So I think they're a great example of how hard it is for them to scale because they're the poorest group of Chinese people on the planet.", "Dwarkesh Patel 00:12:40", "I mean not the challenges now, but the fact that for decades they did have 10% economic growth, in some years 15%.", "Tyler Cowen 00:12:45", "Well, starting from a per capita income of like $200 per head.", "Dwarkesh Patel 00:12:49", "And now they're ancestors were going to be like as poor as the Chinese, you know, like 30 years ago.", "Tyler Cowen 00:12:55", "I'm very impressed by the Industrial Revolution. Like you could argue progress, or all progress studies here, most important event in human history, maybe. Typical rate of economic growth during that period was about 1 1/2 percent.", "And the future is about compounding and sticking with it and, you know, seeing things pay off in the long run. Just human beings are not going to change that much. And I don't think that property of our world will change that much, even with a lot more IQ and conscientiousness.", "Dwarkesh Patel 00:13:21", "I interviewed you nine months ago, and I was asking you about AI then. I think your attitude was like, \"Eh.\" How has your attitude changed since we talked nine months ago?", "Tyler Cowen 00:13:34", "I don't remember what I thought in what month, but I would say on the whole, I see more potential in AI than I did a year ago. I think it has made progress more quickly than I had been expecting, and I was pretty bullish on it back then.", "The 01 model, to me, is very impressive. I think further extensions in that direction will make a big, big difference. The rate at which they come is hard to say, but it's something we have, and we just have to make it better.", "Dwarkesh Patel 00:14:01", "You showed me your document of different questions that you came up with for 01 for economic reasoning.", "Tyler Cowen 00:14:06", "I don't think I used GPT-4 for those.", "Dwarkesh Patel 00:14:07", "Okay, but what percentage of them did 01 get right? Because I don't think I got a single one of those right.", "Tyler Cowen 00:14:12", "Those questions were too easy. They were for GPT-4, so I abandoned those questions.", "You know, a hundred questions of economics. How well does a human do on them? They're hard, but it's pointless. I would not be shocked if somebody's AI model, in less than three years, beat human experts on a regular basis. Let's put it that way.", "Dwarkesh Patel 00:14:38", "Did that update you in any way, that now you've resigned on these questions because they were too easy for these models? And in the initial, like, they are hard questions objectively, right? They're just easy for 01.", "Tyler Cowen 00:14:49", "I feel like Kasparov the first time he met Deep Blue. There were two matches, and in the first one, Kasparov won. I lived through that first match. I feel like I'm sort of in the midst of the first match right now.", "But I also remember the second match. In the final game, Kasparov made that bonehead error in the Caro-Kann defense. That, too, was a human bottleneck, and he lost the whole match. So we'll see what the rate of change is.", "Dwarkesh Patel 00:15:15", "Yesterday, Patrick was talking about how important it is for the founders of different institutions to hang around and be the ones in charge. I've heard you talk about, you know, the Beatles were great because the Beatles were running the Beatles. Why do you think it's so important for that to be the case?", "Tyler Cowen 00:15:32", "I think courage is a very scarce input in a lot of decisions. Founders have courage to begin with, but they also need less courage to see through a big change in what the company will do.", "Facebook, now Meta, has made quite a few big changes in its history. Mark had a lot of courage to begin with. But if Mark Zuckerberg says, \"We're going to do this, we're going to do that,\" it's pretty hard for everyone else to say no in a good way. I really like that. It economizes on courage, having a founder, and you're selecting for courage. Those would be two reasons.", "Dwarkesh Patel 00:16:08", "How does that explain the Beatles' success?", "Tyler Cowen 00:16:10", "Well, the Beatles are an interesting example. I mean, they broke up in 1970, right? The Rolling Stones are still going. That tells you something, but the Beatles created much greater value.", "The Beatles are the group we still all talk about much more, even though the Rolling Stones are still with us. They were always unstable. There are two periods of the Beatles: early Beatles, John is the leader. But then Paul works at it, and John becomes a heroin addict. Paul gets better and better and better, and ultimately there's no core. There's not a stable equilibrium.", "The Beatles split up, but that creative tension for those core seven to eight years was just unbelievable. And it's four founders -- Ringo, not quite a founder, but basically a founder because Pete Best was crummy, and they got rid of him right away.", "It's one of the most amazing stories in the world. I like studying these amazing productivity stories like Johann Sebastian Bach, Magnus Carlsen, Steph Curry, the Beatles -- I think they're worth a lot of study. They're atypical. You can't just say, \"Oh, I'm going to be like the Beatles.\" Like, you're going to fail. The Beatles did that. But nonetheless, I think it's a good place to look for getting ideas and seeing risks.", "Dwarkesh Patel 00:17:15", "What did you think of Patrick's observation of the competency crisis?", "Tyler Cowen 00:17:19", "I see it differently from Patrick, and he and I have discussed this. I think there's basically increasing variance in the distribution.", "Young people at the top are doing much better, and they're far more impressive than they were in earlier times. If you look at areas where we measure performance of the young -- chess is a simple example; we perfectly measure performance -- very young people are just better and better at chess. That's proven. Even in NBA basketball, you have very young people doing things that they would not have been doing, say, 30 years ago. A lot of that is mental and training and application and not being a knucklehead. So the top of the distribution is getting much better.", "You see this also in science, Internet writing. The very bottom of the distribution -- well, youth crime has been falling since the 90s, so the very bottom of the distribution also is getting better. I think there's some thick middle above the very bottom and extending a bit above the median. That's clearly getting worse.", "Because they're getting worse, there are a lot of anecdotal examples of them getting worse, like students wanting more time to take the test or having flimsy excuses or mental health problems with the young or whatever. It's a lot more of that because of that thick band of people getting worse, and that's a great concern. But I see the very bottom and a big chunk of the top of the distribution as just much better. I think it's pretty proven by numbers that that's the case. I would say this increasing variance is a weird mix of where the gains and declines are showing up.", "I've said this to Patrick, and I'm going to say it to him again, and I hope I can convince him.", "Dwarkesh Patel 00:18:53", "It seems concerning then, the composition, that the average goes down. If you look at PISA scores or something.", "Tyler Cowen 00:18:58", "The median goes down. A lot of tests, they've pushed more people into taking the test -- PISA scores in particular.", "I suspect those scores, adjusted for that, are roughly constant, which is still not great. I agree. I think there's some decline.", "Some of it is pandemic, and we're recovering a bit slowly, getting back to human bottlenecks. But I think a lot of the talk of declining test scores is somewhat overblown. At most, there's a very modest decline, I would say.", "Dwarkesh Patel 00:19:25", "If the top is getting better, what do you make of the anecdotal data he was talking about yesterday where the Stanford kids come up to him and say, \"All my friends, they're stupid. You can't hire anybody from Stanford anymore.\" That should be the cream of the crop, right?", "Tyler Cowen 00:19:37", "There's plenty of data on the earnings of Stanford kids. If there were a betting market in, you know, what's the future trend? I'm long. How long I should be, I really don't know. But I visit Stanford not every year, but regularly, and the selection in who it is I meet...", "We're talking about selection, and they're very impressive. Emergent Ventures has funded a lot of people from Stanford. As far as I can tell, as a group, they're doing very well.", "So that is of no concern to me. If you're worried about the Stanford kids, something seems off in the level of salience and focus in the argument because they're overall doing great. They have high standards, and that's good too.", "Paul McCartney thought John Lennon was a crummy guitar player, and John thought a lot of Paul's songs were crap. In a way they're right, in a way they're wrong. But it's a sign what high standards the Beatles had.", "How old are you, by the way?", "Dwarkesh Patel 00:20:30", "Tyler Cowen 00:20:30", "Okay, now go back however many years. Was there a 24-year-old like you doing the equivalent of podcasting? It's just clearly better now than it was back then.", "And you were doing this a few years ago, so it's just obvious to me the young peaks are doing better. You're proof.", "Dwarkesh Patel 00:20:49", "Wasn't Churchill, by the time he was 24, an international correspondent in Cuba and India? I think he was the highest-paid journalist in the world by the time he was 24.", "Tyler Cowen 00:21:00", "I don't know. I mean, what was he paid, and how good was his journalism? I just don't know. I don't think it's that impressive a job to be an international journalist.", "What does it pay people now? He did some good things later on, but most of his early life he's a failure.", "And then ask the Irish. Getting back to Patrick, ask the Irish and people from India what they think of younger Churchill, and you'll get an earful. His real great achievement, I don't know how old he was exactly, but it's quite late in his life. Until then, he's a destructive failure.", "There was no one on Twitter to tell him, \"Hey, Winston, you need to rethink this whole Irish thing.\" Today there would be. Sam will do it, right? Sam will tweet at Winston Churchill, \"Got to rethink the Irish thing.\" And Sam is persuasive.", "Dwarkesh Patel 00:21:57", "If you read his aphorisms, I think he would have actually been pretty good on Twitter.", "Tyler Cowen 00:22:01", "Maybe. But again, what does the equilibrium look like when everything changes? Clearly he was an impressive guy, especially given how much he drank.", "Dwarkesh Patel 00:22:12", "Okay, so even if you don't buy the Stanford kids, if you don't buy the young kids, the other trend he was talking about, where if you look at the leaders in government, whatever you think of Trump, Biden, we're not talking about Jefferson and Hamilton anymore. How do you explain that trend?", "Tyler Cowen 00:22:30", "Well, Jefferson and Hamilton, they're founders, right? And they were pretty young at the time. You can do great things when you're founding a country in a way that just cannot be replicated later on.", "Putting aside the current year, which I think is weird for a number of reasons, I think mostly we have had impressive candidates. Most of the U.S. bureaucracy in Washington I think is pretty impressive: generals, national security people, top people in agencies, people at Treasury, people at the Fed. I interact with these groups pretty often.", "Overall, they're impressive, and I've seen no signs they're getting worse. Now, if you want to say the two candidates this year, again, there's a lot we're not going to talk about, but there is a lot you could say on the negative side. Yes.", "But like Obama, Romney, whichever one you like, I think these are two guys who should be running for president, and that was not long ago.", "Dwarkesh Patel 00:23:27", "So then there's a bunch of candidates running who are good. What goes systematically wrong in the selection process is the two who are selected are not even as good as the average of all the candidates.", "Tyler Cowen 00:23:37", "You mean this theory?", "Dwarkesh Patel 00:23:37", "And I'm not talking about America in particular. If the theory is just noise, it seems like it skews one way.", "Tyler Cowen 00:23:43", "Well, the Democrats had this funny path with Biden, and Kamala didn't get through the electoral process in the normal way. So that just means you get weirdness, whatever you think of her as a candidate.", "Trump, whom I do not want to win, I think he is extraordinarily impressive in some way, which along a bunch of dimensions exceeds a lot of earlier candidates. I just don't like the overall package. But I would not point to him as an example of low talent.", "I think he's a supreme talent but harnessed to some bad ends.", "Dwarkesh Patel 00:24:18", "If you look at the early 20th century, some of the worst things that happened to progress is just these psychopathic leaders. What happened? Why did we have so many just awful, awful leaders in the early 20th century?", "Tyler Cowen 00:24:30", "Well, give me like a country and a name and a time period, and I'll try to answer it.", "Dwarkesh Patel 00:24:34", "He was from the university.", "Tyler Cowen 00:24:37", "That's what was wrong with him, right? Just think of what school it was.", "Dwarkesh Patel 00:24:44", "Who? Woodrow Wilson?", "Tyler Cowen 00:24:44", "Yeah, one of our two or three worst presidents on civil rights. World War I, he screwed up. The peace process, he screwed up.", "Indirectly led to World War II. Reintroduced segregation of civil service in some key regards. Just seemed he was a nasty guy and should have been out of office sooner given his health and state of mind. So he was terrible.", "But he was, on paper, a great candidate. Hoover on paper was a great candidate and was an extremely impressive guy. I think he made one very bad set of decisions relating to deflation and letting nominal GDP fall. But my goodness, there's a reason they called it the Hoover Institution after Hoover.", "Dwarkesh Patel 00:25:25", "But the Hitler, Stalin, Mao's. Was there something that was going on that explains why that was just a crummy time for world leaders?", "Tyler Cowen 00:25:34", "I don't think I have a good macro explanation of that whole trend, but I would say a few things. That's right after the period where the world is changing the most. I think when you get big new technologies, and this is relevant for AI, you get a lot of new arms races.", "Sometimes the bad people win those arms races. So at least for quite a while, you had Soviet Russia and Nazi Germany winning some arms races, and they're not democratic systems. Later you have China with Mao being not a democratic system.", "And then you have a mix of bad luck. Stalin and Mao just draw the urn. You could have gotten less crazy people than what you got.", "And I agree with Hayek, the worst get to the top under autocracy. But like, they're that bad. That was just some bad luck too.", "There are other things you could say, but I think we had a highly disoriented civilization. You see it in aesthetics approaching beginnings of World War I. Art, music, radically changing. People feel very disoriented.", "There's a lot up for grabs. Imperialism, colonialism start to be factors. There wasn't like a stable world order, and then you had some bad luck tossed into that. And all of a sudden, these super destructive weapon systems compared to what we had had, and it was awful.", "I'm not pretending that's some kind of full explanation, but that would be like a partial start.", "Dwarkesh Patel 00:26:55", "You compared our current period to 17th-century England, where you have a lot of new ideas and things go topsy-turvy. What's your theory of why things go topsy-turvy at the same time when these eras come about? What causes this volatility?", "Tyler Cowen 00:27:07", "I don't think I have a general theory. If you want to talk about 17th-century England, they had the scientific revolution. You have the rise of England as a true global power. The navy becomes much more important, and the Atlantic trade route becomes much more important because of the new world.", "Places like the Middle East, India, and China, that were earlier--you know, Persia had major roles--they're crumbling, partially for reasons of their own. And that's going to help the relative power of the more advanced European nations. England has a lot of competition from the Dutch Republic and France.", "This is happening at the same time that, for the first time in human history that I know of, we have sustained economic growth, according to Greg Clark, starting in the 1620s of about 1% a year. And that is compounding--again, slow numbers, but compounding. England is the place that gets the compounding at 1% starting in the 1620s, and somehow they go crazy: civil war, kill the king, all these radical ideas. Libertarianism comes out of that, which I really like: John Milton, John Locke.", "Also, this brutal conquest of the new world. Very good and very bad coming together. I think it should be seen as these sets of processes where very good and very bad come together, and we might be in for a dose of that again now, soon.", "Dwarkesh Patel 00:28:23", "It seems like a simple question, but how do you make sure we get the good things and not the crazy civil war as well?", "Tyler Cowen 00:28:28", "You try at the margin to nudge toward the better set of things. But it's possible that all the technical advances that recently have been unleashed, now that the great stagnation is over, which of course include AI, will mean another crazy period. It's quite possible. I think the chance of that is reasonably high.", "Dwarkesh Patel 00:28:48", "What's your most underrated cult?", "Tyler Cowen 00:28:51", "Most underrated cult? Progress studies.", "Dwarkesh Patel 00:28:57", "I think you called peak EA right before SBF fell.", "Tyler Cowen 00:29:01", "That's right. I was at an EA meeting, and I said, \"Hey everyone, this is as good as it gets. Enjoy this moment. It's all basically going to fall apart. You're still going to have some really significant influence, but you won't feel like you have continued to exist as a movement.\"", "That's what I said, and they were shocked. They thought I was insane, but I think I was mostly right.", "Dwarkesh Patel 00:29:23", "What specifically did you see? Was the exuberance too high? Did you see SBF's balance sheet?", "Tyler Cowen 00:29:31", "Well, I was surprised when SBF was insolvent. I thought it was a high-risk venture that had no regulatory defense and would end up being worth zero, but I didn't think he was actually playing funny games with the money. I just have a long history of seeing movements in my lifetime, from the 1960s onwards, including libertarianism.", "There are common patterns that happen to them all. We're here in Berkeley. My goodness, free speech movement. Where's free speech in Berkeley today? How'd that work out in the final analysis?", "It's a very common pattern. Just to think, \"Wow, the common pattern is going to repeat itself,\" and then you see some intuitive signs, and you're just like, \"Yeah, that's going to happen.\" And the private benefits of belonging to EA, they were very real in terms of the people you could hang out with or the sex you could have. But they didn't seem that concretely crystallized to me in institutions, the way they are in Goldman Sachs or legal partnerships.", "That struck me as very fragile, and I thought that at the time as well.", "Dwarkesh Patel 00:30:31", "I'm not sure I understood. What were the intuitive signs?", "Tyler Cowen 00:30:35", "Well, not seeing the very clear, crystallized, permanent incentives to keep on being a part of the institutions. A bit of excess enthusiasm from some people, even where they might have been correct in their views. Some cult-like tendencies.", "The rise of it being so rapid that it was this uneasy balance of secular and semi-religious elements, that tends to flip one way or the other or just dissolve. So, I saw all those things, and I just thought the two or three best ideas from this are going to prove incredibly important still. And from this day onwards, I don't give up that belief at all. But just as a movement, I thought it was going to collapse.", "Dwarkesh Patel 00:31:16", "When do we hit peak progress studies?", "Tyler Cowen 00:31:19", "You know, when Patrick and I wrote the piece on progress and progress studies, he and I thought about this, talked about it. I can't speak for him, but my view at least was that it would never be such a formal thing or controlled or managed or directed by a small group of people or trademarked. It would be people doing things in a very decentralized way that would reflect a general change of ethos and vibe.", "So, I hope it has, in many ways, a gentler but more enduring trajectory. And I think so far I'm seeing that. I think in a lot of countries, science policy will be much better because of progress studies.", "That's not proven yet. You see some signs of that. You wouldn't say it's really flipped, but a lot of reforms. You're in an area like no one else has any idea, much less a better idea or a good idea.", "And some modestly small number of people with some talent will work on it and get like a third to half of what they want. That will have a huge impact, and if that's all it is, I'm thrilled. I think it will be more than that.", "Dwarkesh Patel 00:32:23", "I asked Patrick yesterday, how do you think about progress studies differently now that you know AI is a thing that's happening?", "Tyler Cowen 00:32:29", "Yeah.", "Dwarkesh Patel 00:32:29", "What's the answer for you?", "Tyler Cowen 00:32:31", "I don't think about it very differently. But again, if you buy my view about slow takeoff, why should it be that different? We'll have more degrees of freedom.", "So, if you have more degrees of freedom, all your choices, decisions, issues, and problems are more complex. You're in more need of some kind of guidance. All inputs other than the AI rise in marginal value.", "Since I'm an input other than the AI, or I hope, that means I rise in marginal value, but I need to do different things. So, I think of myself over time as less a producer of content and more a connector, people person, developing networks in a way where if there somehow had been no transformers and LLMs, I would have stayed a bit more as the producer of content.", "Dwarkesh Patel 00:33:15", "When I was preparing to interview you, I asked Claude to take your persona. Compared to other people I tried this with, it actually works really well with you.", "Tyler Cowen 00:33:25", "Because I've written a lot on the internet.", "Dwarkesh Patel 00:33:27", "Yeah.", "Tyler Cowen 00:33:27", "That's why this is my immortality, right?", "Dwarkesh Patel 00:33:31", "That's right. I've heard you say in the past you don't expect to be remembered in the future. At the time, I don't know if you were considering that because of your volumes of text, you're going to have an especially salient persona in future models. How does that change your estimation of your intellectual contribution going forward?", "Tyler Cowen 00:33:47", "I do think about this. The last book I wrote is called Goat: Who's the Greatest Economist of All Time? I'm happy if humans read it, but mostly I wrote it for the AIs. I wanted them to know I appreciate them.", "My next book, I'm writing even more for the AIs. Again, human readers are welcome. It will be free.", "But who reviews it? Is TLS going to pick it up? It doesn't matter anymore. The AIs will trawl it and know I've done this, and that will shape how they see me in, I hope, a very salient and important way.", "As far as I can tell, no one else is doing this. No one is writing or recording for the AIs very much. But if you believe even a modest version of this progress—and I'm modest in what I believe relative to you and many of you—you should be doing this. You're an idiot if you're not writing for the AIs.", "They're a big part of your audience, and their purchasing power will accumulate over time. They're going to hold a lot of crypto. We're not going to give them bank accounts, at least not at first.", "Dwarkesh Patel 00:34:54", "What part of your persona will be least captured by the AIs if they're only going by your writing?", "Tyler Cowen 00:34:59", "I think I should ask that as a question to you. What's your answer? I don't think AIs are that funny yet. They're better on humor than many people allege, but I don't use them for humor.", "Dwarkesh Patel 00:35:10", "It's interesting that you learn so much about a person when you're interviewing them for a job or for Emergent Ventures. You can read their application, but just in the first 10 minutes, their vibe—three minutes, but yes. Whatever's going on there that's so informative, the AIs won't have, just from the writing.", "Tyler Cowen 00:35:24", "Not at first. But I think I've heard of projects—this is secondhand, I'm not sure how true it is—but interviews are being recorded by companies that do a lot of interviews. These will be fed into AIs and coded in particular ways.", "Then people, in essence, will be tracked through either their progress in the company or LinkedIn profile. We're going to learn something about those intangibles at some rate. I'm not sure how that will go, but I don't view it as something we can never learn about.", "Dwarkesh Patel 00:35:55", "Do you actually have a conscious theory of what's going on when you get on a call with somebody, and three minutes later you're like, \"You're not getting the grant\"? What happens?", "Tyler Cowen 00:36:03", "Well, often there's one question the person can't answer. If it's someone applying with a nonprofit idea, plenty of people have good ideas for nonprofits, and I see these all the time. But when you ask them the question, \"How do you think about building out your donor base?\" it's remarkable to me how many people have no idea how to answer that.", "Without that, you don't have anything. So it depends on the area. But that would be an example of an area where I ask that question pretty quickly, and a significant percentage can't answer it.", "I'm still willing to say, \"Well, come back to me when you have a good case.\" Oddly, none of those people have ever come back to me that I can think of, but I think over time some will. That's a very concrete thing.", "But there are other intangibles. Just when you see what the person thinks and talks about too much. So if someone wants to get an award only for their immigration status, that to me is a dangerous sign.", "Even though at the same time, usually you're looking for people who want to come to the US, whether they can do it or not. There are just a lot of different signals you pick up, like people somehow have the wrong priorities, or they're referring to the wrong status markers. It comes through more than you would think.", "Dwarkesh Patel 00:37:15", "If you had the transcript of the call, but you couldn't see the video, you would say no. But in the case where you could see the video, you might say yes if you see the transcript. What happens in those cases?", "Tyler Cowen 00:37:25", "Having only the transcript would be worth much, much less, I would say, if that's what you're asking. Yeah, it would be maybe 25% of the value.", "Dwarkesh Patel 00:37:35", "And what's going on with the 75%?", "Tyler Cowen 00:37:36", "We don't know. But I think you can become much better at figuring out the other 75%, partly just with practice.", "Dwarkesh Patel 00:37:43", "Yesterday Patrick was talking about these concentrations of talent that he sees in the history of science, with these labs that have six or seven Nobel Prizes. He was also talking about the second employee at Stripe, Greg Brockman, who wasn't visible to other parts of the startup ecosystem in the same way. What's your theory of what's going on? Why are these clusters so limited? What's actually being inherited and transmitted here?", "Tyler Cowen 00:38:07", "Well, Patrick was being too modest. I thought his answer there was quite wrong, but he sort of knows better. He was able to hire Greg Brockman because he's Patrick. It's very simple. He wasn't going to come out and just say that, and he may even deny it a bit to himself.", "But if you're Patrick and John, you're going to attract some Greg Brockmans. And if you're not, it's just way, way harder because the Greg Brockmans are pretty good at spotting who are the Patricks and Johns. In a way, that's just pushing it back a step, but at least it's answering part of the question in a way that Patrick didn't because he was modest and humble.", "Dwarkesh Patel 00:38:42", "It seems like that makes the clusters less valuable then, because if Greg Brockman is just Greg Brockman, and Greg chose Patrick and John because they're Patrick and John, and Patrick and John chose Greg because he's Greg, it wasn't that they made each other great. It was just talent sees talent, right?", "Tyler Cowen 00:38:56", "Well, they make each other much better, just like Patrick and John made each other much better and still do. But you're getting back to my favorite human bottlenecks. Thank you. I'm fully on board with what you're saying.", "To get those—how many Beatles are there? It's amazing how much stuff doesn't really last. It's just super scarce, achievement at the very highest levels.", "That's this extreme human bottleneck. And AI, even a pretty strong AI, it remains to be seen how much it helps us with that.", "Dwarkesh Patel 00:39:25", "I'm guessing ever since you wrote the progress studies article, you got a lot of applications for Emergent Ventures from people who want to do some progress studies thing on the margins. Do you wish you got fewer of those proposals or more of them? Do you just wish they were unrelated?", "Tyler Cowen 00:39:38", "I don't know. To date, a lot of them have been quite good, and many of them are people who are here. There's a danger that as the thing becomes more popular, at the margin they become much worse.", "I guess I'm expecting that, so maybe mentally I'm raising my own bar on those. Maybe over time, I find it more attractive if the person is interested in, say, the Industrial Revolution. If they're interested in progress studies, capital P, capital S, I'm growing more skeptical of that over time.", "Not that I think there's anything intrinsically bad about it—I'm at a progress studies conference with you. But still, when you think about selection and adverse selection, I think you've got to be very careful and keep on raising the bar there. It's still probably good if those people do something in capital P, capital S progress studies, but it's not necessarily good for Emergent Ventures to just keep on funding the number.", "Dwarkesh Patel 00:40:34", "If you buy your picture of AI where it increases growth rates by half a percentage point, what does your portfolio look like?", "Tyler Cowen 00:40:41", "I can tell you what my portfolio is. It's a lot of diversified mutual funds with no trading, pretty heavily US-weighted, and nothing in it that would surprise you. My wife works for the SEC, so we're not allowed to do most things.", "Even to buy most individual stocks, you may not be allowed to do it, and certainly not allowed derivatives or shorting anything. But if somehow that restriction were not there, I don't think it would really matter. So, buy and hold, diversify, hold on tight, and make sure you have some cheap hobbies and are a good cook.", "Dwarkesh Patel 00:41:17", "Why aren't you more leveraged if you think the growth rate's going to go up even slightly?", "Tyler Cowen 00:41:22", "Well, I also have this view: maybe a lot of the equity premium is in the past. People, especially in this part of the world, are very good at internalizing value, and it will be held and earned in private markets and by VCs rather than public pension funds. Why give it to them?", "I think Silicon Valley has figured this out. Sand Hill Road has figured it out. So, what one can do with public equities is unclear.", "What private deals I can get in on with my really tiny sum of wealth is pretty clear. So I'm left with that.", "Money for me is not what's scarce; time is scarce. I do have some very cheap hobbies, and I feel I'm in very good shape in that regard.", "Dwarkesh Patel 00:42:05", "That being said, I think you could get pretty good deal flow even with your portfolio.", "Tyler Cowen 00:42:10", "I don't know. You can only focus on so many things. If I have good deal flow in emergent ventures, which I'm not paid to do, say I had a billion dollars from whatever.", "I wouldn't have any better way of spending that billion dollars than buying myself a job doing emergent ventures or whatever. So, I'm already where I would be if I could buy the thing for a billion dollars. I'm just not that focused on it.", "I think it's good that you limit your areas of focus. For some people, it's just money. I think that's great. I don't begrudge them that at all. I think it's socially valuable. Let's have more of it. Bring it on. But it's just not me.", "When I started my career, it was really unknown that an economist could really earn anything at all. There were no tech jobs with billionaires. Finance was a low-paying field. When I started choosing a career, it was not a thing. There wasn't this fancy Goldman Sachs. It was a slow, boring thing.", "Programmers were weird people in basements. And then an economist, you would earn maybe $40,000 a year. Two people, Milton Friedman and Paul Samuelson, had outside income, and you had no expectation that you would ever earn more than that.", "I went into this with all of that. Relative to that, I feel so wealthy. Just like, oh, you can sell some books, or you can give a talk. I just feel like I am a billionaire now.", "If anything, I want to become what I've called an information trillionaire. I'm not going to make that, but I think it's a good aspiration to have. Just collect more information and be an information trillionaire. Like Dana Gioia has that same goal. He and I have talked about this. I think that's a very worthy goal.", "Dwarkesh Patel 00:43:57", "Was there a second field that you were considering going to other than economics?", "Tyler Cowen 00:44:01", "It was either economics or philosophy. And I saw, back then--this would be the late 1970s--it was much harder to get a job as a philosopher, though not impossible the way it sort of is now. They were paid less and just had fewer opportunities. So I thought, well, I'll do economics. But I think in a way, I've always done both.", "Dwarkesh Patel 00:44:20", "Okay, I really want to go back to this diffusion thing we were talking about at the beginning with economic growth.", "Tyler Cowen 00:44:24", "Yeah.", "Dwarkesh Patel 00:44:24", "Because I feel like I don't... What am I not understanding? I hear the word \"diffusion,\" I hear the word \"bottlenecks,\" but I just don't have anything concrete in my head when I hear that. What are people who are thinking about AI missing here when they just plug these things into their models?", "Tyler Cowen 00:44:39", "I'm not sure I'm the one to diagnose, but I would say when I'm in the Bay Area, the people here to me are the smartest people I've ever met, on average. They're the most ambitious, dynamic, and smartest, by a clear grand slam compared to New York City or London or anywhere. That's awesome, and I love it.", "But I think a side result of that is that people here overvalue intelligence. Their models of the world are built on intelligence mattering much, much more than it really does. Now, people in Washington don't have that problem. We have another problem, and that needs to be corrected, too.", "But I just think if you could root that out of your minds, it would be pretty easy to glide into this expert consensus view that tech diffusion is universally pretty slow. And that's not going to change. No one's built a real model to show why it should change, other than sort of hyperventilating blog posts about everything's going to change right away.", "Dwarkesh Patel 00:45:43", "The model is that you can have AIs make more AIs, right? That you can have returns.", "Tyler Cowen 00:45:47", "Ricardo knew this, right? He didn't call it AI, but Malthus and Ricardo, they all talked about this. It was just humans for them. Well, people then would breed. They would breed at some pretty rapid rate. There were diminishing returns to that.", "You had these other scarce factors. Classical economics figured that out. They were too pessimistic, I would say. But they understood the pessimism intrinsic in diminishing returns in a way that people in the Bay Area do not.", "And it's better for them that they don't know it. But if you're just trying to inject truth into their veins rather than ambition, diminishing returns is a very important idea.", "Dwarkesh Patel 00:46:24", "In what sense was that pessimism correct? Because we do have seven billion people, and we have a lot more ideas as a result. We have a lot more industries.", "Tyler Cowen 00:46:30", "Yeah, I said they were too pessimistic, but they understood something about the logic of diffusion, where if they could see AI today, I don't think they would be so blown away by it. They'd say, \"Oh, you know, I read Malthus.\" Ricardo would say, \"Malthus and I used to send letters back and forth. We talked about diminishing returns. This will be nice. It'll extend the frontier, but it's not going to solve all our problems.\"", "Dwarkesh Patel 00:46:55", "One concern you could have about progress in general is, if you look at the famous Adam Smith example, you've got that pin maker, and the specialization obviously has all these efficiencies. But the pin maker is just doing this one thing. Whereas if you're in the ancestral environment, you get to basically negotiate with every single part of what it takes to keep you alive, and every other person in your tribe does. Is individuality lost with more specialization, more progress?", "Tyler Cowen 00:47:22", "Well, Smith thought it would be. I think compared to his time, we have much more individuality, most of all in the Bay Area. That's a good thing.", "I worry the future with AI, that a kind of demoralization will set in, in some areas. I think there'll be full employment pretty much forever. That doesn't worry me.", "But what we will be left doing, what exactly it will be, and how happy it will make us... Again, I don't have pessimistic expectations. I just see it as a big change. I don't feel I have a good prediction. And if you don't have a good prediction, you should be a bit wary and just say, \"Okay, we're going to see.\" But, you know, some words of caution.", "Dwarkesh Patel 00:48:03", "Are merited when you're learning about a new field? The vibe I get from you when you're doing a podcast is that you're picking up the long tail of different—you talk to interesting people, or you read the book that nobody else would have considered. How often do you just have to read the main textbook versus looking at the esoteric thing? How do you balance that trade-off?", "Tyler Cowen 00:48:24", "Well, I haven't interviewed that many scientists. Ed Boyden would be one, Richard Prum, the ornithologist from Yale—those are very hard preps. I think those are two excellent episodes, but I'm limited in how many I can do by my own ability to prepare.", "I like doing historians the most because the prep is a lot of work, but it's easy, fun work for me. I know I always learn something.", "Now I'm prepping for Stephen Kotkin, who's an expert on Stalin and Soviet Russia. That's been a blast. I've been doing that for four months, reading dozens of books, and it's very automatic.", "Whereas you try to figure out what Ed Boyden is doing with light shining into the brain, it's like, \"Oh my goodness, do I understand this at all?\" Or am I like the guy who thinks the demand curve slopes upward? So it just means I'm only going to do a smallish number of scientists, and that's a shame. But maybe AI can fill in for us there.", "Dwarkesh Patel 00:49:19", "You recommended a book to me, Stalin's Library , which talks about the different books that Stalin read and the fact that he was kind of a smart, well-read guy. The book also mentioned in the early chapters that in all his annotations, if you look through all his books, there's never anything that even hints that he doubted Marxism.", "Tyler Cowen 00:49:38", "That's right. There's a lot of other evidence that that's the correct portrait.", "Dwarkesh Patel 00:49:41", "What's going on there? A smart guy who's read all this literature, all these different things, never even questions Marxism. What do you think?", "Tyler Cowen 00:49:48", "I think the culture he came out of had a lot of dogmatism to begin with. I mean both Leninism, which is extremely dogmatic—Lenin was his mentor.", "Like Patrick's thing about the Nobel laureates, it happens in insidious ways, too. So, Lenin is the mentor of Stalin.", "Soviet culture, communist culture, and then Georgian culture, which, appealing and fun-loving and wine-drinking and dance-heavy as it is, there's something about it that's a little, you know, you pound the fist down and you tell people over the table how things are. He had all those stacked vertically. Then we got this bad genetic luck of the draw on Stalin, and it turned out obviously pretty terrible.", "Dwarkesh Patel 00:50:34", "And if you buy Hayek's explanation that the reason he rose to the top is just because the most atrocious people win in autocracies, what is that missing?", "Tyler Cowen 00:50:43", "I think what Hayek said is subtler than that. And I wouldn't say it's Hayek's explanation; I would say Hayek pinpointed one factor. There are quite a few autocracies in the world today where the worst people have not risen to the top.", "The UAE would be, I think, the most obvious example. I've been there. As far as I can tell, they're doing a great job running the country.", "There are things they do that are nasty and unpleasant. I would be delighted if they could evolve into a democracy, but the worst people are not running the UAE, this I'm quite sure of.", "So, it's a tendency. There are other forces, but culture really matters. Hayek is writing about a very specific place and time.", "It really surprised me. There are these family-based Gulf monarchies with very large, clannish interest groups of thousands of people that have proven more stable and more meritocratic than I ever would have dreamed, say in 1980. I know I don't understand it, but I just see it in the data. It's not just UAE; there are a bunch of countries over there that have outperformed my expectations, and they all have this broadly common system, actually. And they're not ruled by their worst people.", "Dwarkesh Patel 00:51:51", "That makes you wonder, when you go around the world—because I know you go outside the Bay Area and the East Coast as well—and you talk about progress studies related ideas, what's the biggest difference in how they're received versus the audience here?", "Tyler Cowen 00:52:02", "Well, the audience here is so different. You're the outlier place of America. And then where I normally am, outside of Washington, D.C., that's the other outlier place. And in a way, we're opposite outliers.", "I think that's healthy for me, both where I live and that I come here a lot and that I travel a lot. But you all are so out there in what you believe. I'm not sure where to start.", "You come pretty close to thinking in terms of infinities, on the creative side and the destructive side. And no one in Washington thinks in terms of infinities; they think at the margin. Overall, I think they're much wiser than the people here.", "But I also know if everyone, or even more people, thought like the D.C. people, our world would end. We wouldn't have growth. They're terrible.", "People in the EU are super wise. You have a meal with some sort of French person who works in Brussels—it's very impressive. They're cultured, they have wonderful taste, they understand all these different countries, they know something about Chinese porcelain. And if you lived in a world ruled by them, the growth rate would be negative 1%.", "So there's some way in which all these things have to balance. I think the US has done a marvelous job at that, and we need to preserve that.", "What I see happening—the UK used to do a great job at it. UK, somehow the balance is out of whack, and you have too many non-growth-oriented people in the cultural mix.", "Dwarkesh Patel 00:53:33", "The way you describe this French person you're having dinner with...", "Tyler Cowen 00:53:36", "These are real dinners. And the food is good, too.", "Dwarkesh Patel 00:53:40", "It kind of reminds me of you, in the sense that you're also well-cultured, and you know all these different esoteric things. What's the biggest difference between you and these French people you have dinner with?", "Tyler Cowen 00:53:54", "I don't think I'm well-cultured, would be one difference. There are many differences. First, I'm an American. I'm a regional thinker. I'm from New Jersey, so I'm essentially a barbarian, not a cultured person.", "I have a veneer of culture that comes from having collected a lot of information. So I'll know more about culture than a lot of people, and that can be mistaken for being well-cultured, but it's really quite different. It's like a barbarian's approach to culture.", "It's like a very artistic approach to being cultured and should be seen as such. So I feel the French person is very foreign from me.", "And there's something about America they might find strange or repellent. And I'm just so used to it. I see intellectually how many areas we fall flat on are destructive, but it doesn't bother me that much because I'm so used to it.", "Dwarkesh Patel 00:54:42", "What is most misunderstood about autism?", "Tyler Cowen 00:54:45", "If you look at the formal definition, it's all about deficits that people have right now. If you define it that way, no one here is autistic. If you define it some other way, which maybe we haven't pinned down yet, a third of you here are autistic.", "I don't insist on owning the definition. I think it's a bad word. It's like \"libertarian.\" I would gladly give it away.", "But there is some coherent definition where a third of you here probably would qualify. And this other definition where none of you would, it's like kids in mental homes banging their head against the wall. I don't know, it seems that whole issue needs this huge reboot.", "Dwarkesh Patel 00:55:22", "One frustration that tech people have is that they have very little influence, it seems, in Washington compared to how big that industry is. Industries that are much smaller will have much greater sway in Washington. Why is tech so bad at having influence in Washington?", "Tyler Cowen 00:55:38", "I think you're getting a lot more influence than maybe you realize, quickly, through national security reasons. The feds have not stopped the development of AI, whatever you think they should or should not do. It's basically proceeded.", "National security as a lobby, they don't care about tech per se. But it has meant that on a whole bunch of things in the future, you will get your way a bit more than you might be expecting.", "A key problem you have is so much of it is in one area, and it's also an area where there's a dominant political party. Even within that political party, there are many parts of California with a dominant faction.", "Compare yourself to the community bankers who are in so many American counties, who have connections to every single person in the House of Representatives. Your issues in a way are not very partisan. The distortions you cause through your privileges are invisible to America.", "It's not like Facebook, where some John Haidt has written some best-selling book complaining about what it is you do. There's not a best-selling book complaining about the community banks, and they are ruthless and powerful and get their way. I'm not going to tangle with them.", "You all here are so far from that, in part because you're dynamic and you're clustered.", "Dwarkesh Patel 00:56:54", "Final question. Based on yesterday's session, it seems like Patrick's main misgiving with progress is that if you look at the young girl cohort, there's something that's not going great with them. You would hope that over time, progress means that people are getting better and better over time. If you buy his view of what's happening with young people, what's your main misgiving about progress? Where if you look at the time series data, you're not sure you like where this trend is going?", "Tyler Cowen 00:57:21", "Our main concern always should be war. I don't have any grand theory of what causes war, or if such a theory ever is possible. But I do observe in history that when new technologies come along, they are turned into instruments of war, and some terrible things happen.", "You saw this in 17th-century England. You saw this with electricity and the machine gun. Nuclear weapons is a story in process.", "I'm not sure that's ever going away. So, my main concern with progress is progress and war interact, and it can be in good ways. Like the world, à la Steven Pinker, has had relative peace.", "That's fraying at the edges in the data. The numbers are now moving the wrong way, but it's still way better than most past time periods. We'll have to see where that goes.", "There might be a ratchet effect where wars become more destructive. And even if they're more rare, when they come, each one's a real doozy. Whether we really are or ever can be ready for that, I'm just not sure.", "And thank you very much, Dwarkesh.", "Dwarkesh Patel 00:58:18", "This will be the second session we have to end on a pessimistic note.", "Tyler Cowen 00:58:21", "The optimistic note is that we're here. Human agency matters. If we were all sitting around in the year 1000, we never could have imagined the world being anything like this, even a much poorer country.", "It's up to us to take this extraordinary and valuable heritage and do some more with it. And that's why we're here. So, I say let's give it a go.", "Dwarkesh Patel 00:58:44", "Great note to end on. Thanks, Tyler." ]
[]
https://www.dwarkesh.com/p/victor-shih
Xi Jinping’s paranoid approach to AGI, debt crisis, & Politburo politics — Victor Shih
[ "00:00:00 – Is China more decentralized than the US?", "Dwarkesh Patel Today I'm talking with Victor Shih , who is the director of the 21st Century China Center at UC San Diego. China is obviously one of the most important economic and geopolitical issues of our time, doubly so if you believe what I believe about AI. I was especially keen to talk to you because you have deep expertise not only in Chinese elite politics, but also its economic system and fiscal and banking policies. We'll get into all of that before we get into the AI topics.", "Is China actually a more decentralized system than America? If you look at government spending, in China it's about 85% local+provincial and 15% national. In the US, if you combine state and local it's 50%, and the federal government is 50%.", "When you think of an authoritarian system, you often think of it as something very top-down, where the center controls everything. But looking at those numbers, it seems like China might actually be quite decentralized. Or is that the wrong way to look at the numbers?", "Victor Shih For a while China was quite decentralized, from the mid-1970s all the way until the mid-1990s. China was very decentralized. Local governments generated a lot of revenue, but they also spent a lot of money. It incentivized them to do basically good things, such as trying to attract as much FDI as possible, trying to attract as much local investment as possible, give tax breaks, and so on and so forth. China had a very good period of mainly private sector–driven growth because of this fiscal decentralization. That's the understanding of most economists.", "But then there was a tax centralization in 1994 , where the central government basically said, “Okay, this is ridiculous. We don't want to end up like the Soviet Union and fall into different pieces. We need to control fiscal income.” So then they grabbed the most lucrative source of taxation at that time—and it continues to be the case—which is the value-added tax . The value-added taxes—and in subsequent decades pretty much all other tax categories—are now collected by the central government. Then they reimburse part of the money to the provinces.", "Supposedly they say, “Okay, now you can spend the money as you wish.” But in reality, that's not the case. In reality it's kind of like, “If you do this thing, then I'll give you a little bit of money. But you have to do the thing that I want you to do to get this money, to get this grant.” So the fiscal autonomy, I would say, has been falling since 1994.", "There are different periods. Sometimes there's a little bit more autonomy. From 2000 to recently, until 2020, the localities gained more autonomy vis-à-vis land sales . That's when the real estate market was going very well. Even though they couldn't control the tax revenue, they could control the land revenue. But then the central government basically killed the land market in 2022 . Now I think localities in China are highly dependent on the central government.", "00:03:16 – How the Politburo Standing Committee makes decisions", "Dwarkesh Patel Let's start at the very top. First, I want to understand the personnel. If you look at the members of the Politburo and their pasts, a lot of them were provincial leaders, and then had some kind of local role before that. But in many cases, they'll have stints, as the head of some chemical engineering SOE , or they studied engineering.", "Xi Jinping himself has a PhD in chemical engineering, right?", "Victor Shih", "Something, some kind of…", "Dwarkesh Patel", "That’s what I want to ask about. If you go through the list, a lot of them are PhD economists, PhD industrial engineers, and so on. Some even have PhDs in Marxist theory. That seems fake. Are they actual technocrats? If you look at their degrees, they seem quite impressive. Or is this just fake made-up credentialism? If there’s a PhD economist in the Politburo, should I be impressed and think they really understand economics? Or should I just think this is a fake degree?", "Victor Shih So the former Premier of China, Li Keqiang , his undergraduate degree was in economics. He studied under one of the best economists in China. So I think he understood economics.", "In the Politburo today, there’s this wing made up of military-industrialists who were trained as engineers. They indeed were trained in the best programs in China. For example, Ma Xingrui , the Party Secretary of Xinjiang, is in the Politburo. He graduated from Tsinghua University . Zhang Guoqing , who worked in the military industry for years and, I don’t know if it was Tsinghua but he graduated from one of the top science programs in China. So they do know a lot about science. But does that mean they know how to govern? That’s a somewhat different question.", "But you also have other cases, like Ding Xuexiang . He’s in charge of cybersecurity in China. He has a technical background in metallurgical forging. Graduating from, not the MIT of China but maybe closer to the IIT of China, a second-tier science institute founded by a Soviet expert who wanted to teach China how to do metallurgy.", "Does that kind of very specialized technical knowledge make you a better leader? I think it depends on the person. In the Chinese Communist Party , as people rise through the ranks, they have to have political acumen. If you don’t have political acumen, you’re not going to make it. Some people, on top of their technical expertise, turn out to have that acumen. But it’s not something you can know ex ante, except for the princelings .", "The princelings are special not just because of their genes. Yes, they’re the children of high-level officials. But they grew up honing this political acumen because their parents could teach them about it. So some of them turn out to have this political acumen. They’re pretty smart. They don’t necessarily have the best degrees from the best universities in China, but they end up being very successful.", "It’s not necessarily the case that all the Peking and Tsinghua graduates reach the top. They are certainly disproportionately represented in the Politburo, but they don’t dominate. At the end of the day, it is somewhat random.", "Dwarkesh Patel I still feel confused about how to model the competence of the Party. On the one hand, they’re super educated in STEM. They’re selected at least partly on merit, etc. On the other hand you see things like… A lot of Zero-COVID was just genuinely stupid, scrubbing down airport runways and so forth. And it just went on way too long. Why was nobody steering the ship then? What’s the right way to put this picture together?", "Victor Shih There's a lot of expertise at the lower level that can feed into the higher level. There are channels for doing that. I would say there are a lot of channels. So they're not so dysfunctional on that score. But what time and again leads to suboptimal policy is the Party’s instinct to preserve itself, and in a way worse than that, to preserve its power. It’s not just, “Let’s do this thing because it will lead to a better public health outcome and we’ll survive politically.”", "They’re mindset is, “Even if we would survive politically, if doing that would result in a loss of power—like having less control over the banks, or scientists becoming more independent—then let’s not do that. Because what's the cost? Some people die, whatever. We have slower growth for a couple of years. But if we get to preserve power, we will go ahead and preserve power.”", "Dwarkesh Patel Here's a question. Traditionally, if you study economics in an American university, you learn about supply and demand. You learn why certain regulations are bad, why tariffs are bad, etc. Basic 101 teaches you that.", "So if a lot of the members at the highest levels of the Communist Party have that basic understanding, what’s the reason they make mistakes that many economists say they shouldn’t do? People talk about rebalancing a lot. Why don’t they think increasing consumption is a good idea?", "Victor Shih There’s a traditional reason, and then there’s today’s reason, which is on top of the traditional reason.", "The traditional reason is that, in education in China, you get tracked into sciences, STEM, or you get tracked into social science or humanities. If you get tracked into STEM, you may never learn anything about supply and demand. It’s just not required. You just learn a bunch of math. You learn a lot of engineering, if you want to be an engineer. Then there are some art classes for you to select. You can select the art classes, or maybe one econ class. For a while, learning economics was popular. In the 1990s, all the engineering students would have taken an econ class. But for people who were trained before that, they would not.", "The one class that is still required for everybody is government ideology. In fact, the requirement for government ideology has been ratcheted up in recent years. I look at all the applications of students from China. I look at what classes they’ve taken. Now, once a year, they have to take a class called \"Situation and Policy,\" which is basically just the Chinese government's perspective on all kinds of different issues. But that’s not economics, it’s not accounting. It’s not some basic skill. It’s just telling people what the government prefers on a given topic.", "So that’s the traditional reason. You can have some very high-level officials—even if they have a PhD in nuclear engineering—who may not know very much about how the market works, how society works, and so on and so forth.", "The new reason is that we have one very powerful figure in the Chinese Communist Party. Being a Politburo member used to give you some autonomy in the domain that you govern over. That is no longer the case. If Xi Jinping expresses a preference over a policy direction, that is the direction you have to go toward, no matter what. Because if he observes that you're dragging your feet, or that you're pursuing a slightly different agenda, you’ll be purged . And we’ve seen cases of that.", "Dwarkesh Patel At some point, we’ll talk about Leading Small Groups and the way they work. But the picture you get from the fact that Xi is monitoring what all these different Politburo members are doing… He’s noticing whether they’re dragging their feet. He’s leading all these small teams. They organize different leaders who are the heads of ministries or commissions relevant to a specific project. I guess every week they meet and they hammer out who’s a bottleneck, what we need to do to get a project moving. So Xi has ratcheted up the number of these meetings. The picture you get is of someone who is micromanaging details across all these policy areas.", "If you read Stalin biographies, Stalin actually was a person like this, obviously not to great effect. But he was interested in micromanaging every single detail, from theater productions to steel production. Everything. Is that who Xi is?", "Victor Shih Yeah. If you look at what he does day to day, he spends a lot of time in meetings. It’s astonishing. The Chinese leadership, they're not going to play golf 3 days out of the week. Xi Jinping and his colleagues are having meetings about policies almost every day, like 270 days out of the year.", "Dwarkesh Patel The study sessions of the Politburo, are these meetings substantive? Are they real? Is he actually learning the intricacies of, \"Here’s how Zero-COVID has impacted this province and here’s why we should...\"? Because if you read the announcements from the Party or departments, they’re so vapid. It’s typical Communist-speak. How real are these meetings?", "Victor Shih So I’ve talked to some people who’ve lectured to the Politburo. I think they’re somewhat real. Because they don’t know ex ante what questions they’ll be asked, I’m talking about the speakers. Xi Jinping and other leaders will know what the speaker will say in the main remarks. But then there’s a Q&A portion which actually is real, it’s not staged. That’s pretty scary. If you’re some professor and Xi Jinping asks you to come in and asks you a bunch of random questions. It's very scary. So I think they are substantive. And the thing is, Xi Jinping will make remarks after such a lecture and those remarks become policy.", "You asked about the Leading Small Group. This is where ranking is very important. The head of the group always has to be the most senior-ranked person. Of course, the good thing about having Xi Jinping as the head of a leading small group is that his authority won’t be challenged by anyone else. Because everyone agrees he’s the most powerful and senior-ranked person in all of China.", "The problem is, all the other members of the LSG are lower rank than him. Well not all, but before, when decisions were made in the Politburo Standing Committee , notionally the Party and even the bureaucratic ranks of all the Poliburo Standing Committee members were equivalent to each other. So there were debates. There were even sometimes cases of overturning the position of the Party Secretary General . Historically, you’ve seen cases like that.", "But when most of the decision-making was pushed into all these different LSG… Usually there’s one other person who’s similarly ranked as Xi Jinping, like the Premier of China, or one of the other top people. But everyone else would be these Vice Premiers , ministers, or provincial governors , who are definitely ranked lower than Xi Jinping. In no scenario will they debate a particular policy with him.", "At most, there might be one person with sort of equal standing, officially, with Xi Jinping, who could voice some kind of disagreement. But because Xi is so powerful, informally, no one would actually dare to do that. Basically, the whole thing, once all the decisions got pushed into LSGs, became Xi Jinping. And it’s very hard on him.", "Before all these meetings, there’s a briefing. Someone gives him a briefing book and says, \"Okay, this is what we’re going to talk about. Here are the policy options. Which one do you want?\" He has to make up his mind. Sometimes I’m sure he consults with other people. But he has to be really engaged with a large number of policy areas on a day-to-day basis, because the decisions he makes basically become law, basically.", "Dwarkesh Patel It's stupendous to think about. You can tell someone how big China is—the population, the size of the economy, etc.—but visiting gives you a tangible sense of just how big. You have provinces, each of which has as many people as a normal nation-state, provinces with hundreds of millions of people, 40 million, and so forth.", "One person will say something at the end of a meeting that determines policy for basically the population of North and South America or something. Does he say things at the end of meetings, during speeches, that indicate he actually has a good understanding of these different policy areas?", "Obviously, the official speeches are very vague. It’s always something like, “We will make sure that we are robustly pursuing Chinese growth while hewing to the tenets of Mao's thought.” It’s never anything specific or interesting.", "But do we have any indication that behind closed doors, he’s—", "Victor Shih You don't think there's anything specific or interesting, but if you've read thousands of these things, sometimes it's quite revealing actually.", "Dwarkesh Patel I’m interested in what it reveals about him personally. Does he display any analytical ability? Any sort of empirical understanding of different policies, etc.?", "Victor Shih Yeah, that is difficult. I would say, because of his upbringing as a princeling, for internal Party matters of how to control the Party, how to control the military, from some of the speeches... They’re not secret speeches but they’re more geared toward other members of the Chinese Communist Party. If you go to China, you can read some of these, which I have.", "He has a really good political nose. He knows what it takes to control the Party apparatus, to control the military. Especially on economic issues, it seems like some advisor gives him some talking points, and he talks through them. There's some sense of that. On technology issues I think he cares a lot, but only from the perspective that competition with the United States is a very important objective for the Chinese Communist Party. It’s for him personally, a very important objective. So he wants to win.", "But precisely the extent to which he understands all the intricacies... it's unclear to me. I don’t think he necessarily does. But then, comparing him to American leaders, especially today, I think he’s probably better prepared. Just because in China, the experts talk to the top leadership all the time.", "Dwarkesh Patel Yeah. The advantage of the American system, of course, is that, at least in theory, it can survive a leader who is not up to snuff.", "00:21:07 – Xi’s right hand man in charge of AGI", "Dwarkesh Patel Okay, let’s go back to Ding Xuexiang. So the Politburo has 25 members?", "Victor Shih Yes, 25 members.", "Dwarkesh Patel The Standing Committee has seven members. This is the key group inside the Politburo. This person is one of those seven. He runs the Central Science and Technology Commission . The reason I’m especially interested in him is because any large-scale AI effort that China would launch would be under his purview.", "Do we understand what he wants, who he is, what his relationship with Xi is, what he would do if AGI is around the corner, etc.?", "Victor Shih These are all very good questions. In addition to the Science and Technology Commission that you mentioned, another very important position that he holds is head of the Office of the Central Commission on Cybersecurity . The head of the Central Commission on Cybersecurity is Xi Jinping himself. But the head of the administrative office—which runs the day-to-day operations of that commission, basically a LSG—is Ding Xuexiang. He took that position back in 2022, so he’s been steeped in cybersecurity for a couple of years. I think by now he knows all the major players and some of the key policy issues.", "His relationship with Xi Jinping is a very interesting one. He only directly worked under Xi Jinping for one year. When Xi was in Shanghai, Ding was the very senior-level secretary who would support whoever the Party Secretary of Shanghai was. He worked under three different Party Secretaries of Shanghai and supported Xi Jinping for one year.", "For some reason, Xi Jinping just trusted him absolutely after that. It’s a mystery. I’ve looked at all the open literature, I’ve asked people in Shanghai. Nobody knows why that’s the case. Almost certainly, I think one reason is that he was gathering a lot of information about the other leaders in Shanghai and sending all of that information to Xi Jinping, to let him know.", "Because at that time, the previous leader of China, Jiang Zemin , had a very big stronghold in Shanghai . So Xi had to break that apart before he could assert control. Ding Xuexiang likely was one of the people who sent all the necessary information to Xi Jinping in order for him to do that. But beyond that—lots of people do that for Xi Jinping, truth be told—what else he’s done for the big boss to earn his trust is a big mystery.", "But the manifested outcome is that Xi trusts him a great deal. In 2013, he was promoted to Beijing to be second-in-command in Xi’s personal office, handling all the flows of data in and out of Xi’s desk. That’s a very important position. He became first-in-command a couple of years later. He took control over the entire apparatus that governs Xi Jinping’s day-to-day life back in 2017. Now he’s a Vice Premier in charge of cybersecurity. So clearly, this guy is trusted.", "His preferences for AI, AGI ... I tweeted this Davos speech , which you also looked at. I think it’s very revealing. What he said was, “We need to invest in AI, but we can’t go all out in investing in it without knowing what the brakes are. We have to develop the brakes at the same time.”", "I think that’s extremely revealing. It’s very different from the American approach. In America, it’s all driven by the private sector. Except for one or two companies, everyone just invests and invests and tries to reach AGI as soon as possible.", "For the Chinese government, they’re very afraid that some actor—outside, but even inside the Party—is going to use it as a tool to usurp the Party’s power. So they want to know that they have a way of stopping everything if it comes to it. For them, developing the brakes is just as important as developing the AI itself.", "Dwarkesh Patel What are the different organizational milestones we should be watching for, to understand how they’re thinking about AI, and when things have escalated?", "To add more color to this question, what I expect to happen, let's say next year, in terms of raw AI capabilities is that we might have computer-use agents . Not just a chatbot, but something that can actually do real work for you. You set it loose on your computer, and it goes through your email, compiles your taxes, etc.", "In five years, I expect it to be able to do all white-collar work, or most white-collar work. That’s 40% of the economy, potentially. Eventually, we’ll have full AGI, even robotics and so forth. As this is happening, how should we be tracking how the Party is thinking about what’s happening, how seriously they take it, and what they think they should do about it?", "I think a couple weeks ago there was a Politburo study session on AI . I don’t know if it was AGI AI, or what kind of AI they were discussing. Should we be looking for a leading group on AGI in particular? Should we be trying to just read more speeches? What should we be paying attention to?", "Victor Shih Thus far, it seems like it’s under the cybersecurity leading group. So Ding Xuexiang would be in charge of it.", "Dwarkesh Patel There won’t be a new group?", "Victor Shih I don’t think so. Because for the Party, it’s the security aspect of it that’s the most important.", "There’s one aspect where it’d be great to automate all industrial production. China is already doing a good job with that. I don’t think they need additional governing infrastructure to allow them to do that.", "But when it comes to applying AGI or AI to governance, even to service sectors—generating video content for people, doing travel agency stuff—the Party is very paranoid that some hostile actor, outside of China—certainly they believe that very strongly—or inside China, is going to do something and the AGI is going to take off, and it’s going to undermine the Party’s authority.", "So what we’re going to see in terms of institutional development is not at the top end, but at the lower end. They will want to designate human beings in all the government agencies, in all the commercial entities that are using AI or AGI, to put their foot on the brake if it comes to it.", "Dwarkesh Patel But so far, after DeepSeek became a big deal, the immediate reaction in China…", "Victor Shih Very enthusiastic.", "Dwarkesh Patel All the different big tech companies— WeChat , Tencent —everybody was encouraged to adopt it. And they did. They adopted it as fast as possible.", "Xi himself met with Liang Wenfeng and did a whole meeting where all the industrial heads were there. Xi said that they have to accelerate technology in China, etc.", "So it seems like so far the response has been that the greater the capabilities of AI, the more they’re excited about it, the more they think this is the beginning of Chinese greatness. But you think that changes at some point?", "Victor Shih No, they want to develop it of course. They’ll pour a lot of money in. They’ll try to give DeepSeek as much help as possible, sourcing GPUs and all this kind of stuff.", "But I am all but certain that there is one, or maybe a team of people, in the headquarters of DeepSeek who can pull the plug, if necessary. Because there is such a team of people in every major internet company in China.", "Dwarkesh Patel What would cause them to pull the plug? Specifically?", "Victor Shih First of all, if AGI started generating Falun Gong –related content, and it proliferated very quickly beyond the capacity of the censor to control it, then they’ll have to stop the algorithm from generating new images, new videos, and so forth.", "Dwarkesh Patel You were saying they can help High-Flyer source GPUs and so on. What is the mechanism by which that would happen?", "Suppose it turns out that for DeepSeek to continue making progress, they need all the GPUs that Huawei can produce, and they need a whole bunch of other things. They need energy. They need data centers.", "What would it actually look like for the government to say, “Every single Huawei GPU has to go to DeepSeek. They’re going to get access to all this spare land to build data centers”? Who would coordinate that? Would that be the leading cybersecurity group? Would that be somebody else?", "Victor Shih It could be. That’s interesting. The physical investment part would have to share power with some other leading group.", "That would be a justification for an AGI leading group of some sort. Because building power centers falls under the NDRC . It could be under Li Qiang or He Lifeng , especially He who’s in charge of investment and financing.", "But if Xi Jinping doesn’t want them to share the power, then he would have to give it all to Ding Xuexiang and give him a lot of power. He could just do it by fiat or create a separate organization to make it happen. At this point, data centers are being built in China at a very fast clip. I think they’re just using the existing command structure.", "There is potentially a foreign aspect to this. You build up all this cloud computing capacity in the Gulf States. Who’s going to be the customer? Who’s willing to pay billions of dollars to use it? China, right? So then the foreign policy apparatus will have to coordinate that, and so forth.", "Dwarkesh Patel Belt and Road v2.", "Victor Shih Exactly.", "Dwarkesh Patel Does it have any end implication for the progression of AI—who actually ends up controlling these leading groups, whose purview it ends up under? Or is that basically a detail? In terms of what happens with AI in China, does it actually have that big an implication?", "Victor Shih I think it does matter. You have some decision-makers who have shown themselves to be not very good—even politically not very good—but nonetheless, for some reason, they’re trusted. Maybe precisely because they’re not very good. They’re very dependent on Xi Jinping and that’s why they get promoted.", "The chief negotiator with the United States, He Lifeng , the Vice Premier in charge of finance, he’s well known for starting and perpetuating the largest real estate bubble the world has ever seen in Tianjin . I don’t know if you went to Tianjin , but there are just empty buildings everywhere. There’s this \" New Manhattan \"—literally the size of Manhattan—filled with empty office buildings. He made that happen.", "Dwarkesh Patel That ironically sounds like a great tourist site. Not because it’s real Manhattan, but because it’s fake.", "Victor Shih It’s actually cool to visit. It's like, “Wow, this is great. But what are you going to do with the buildings?”", "Dwarkesh Patel In Emeishan we went to this huge Buddhist temple . By huge, I mean there would be this structure with a shrine, and you’d go through it, and behind it would be an even bigger shrine that was hiding from view. Then you’d go through that one, and there’d be an even bigger one. This happened five times. If you were to drive through this would take you a while.", "Victor Shih But it’s new, not historical, right?", "Dwarkesh Patel Oh no yeah, it’s new. By the way, there was nobody else there. It was me, three white guys, and nobody else. I asked the head monk, “How did you guys finance this?” And he said, “We’ve got a lot of supporters, a lot of donations.”", "Victor Shih I don’t think so, yeah.", "So then it really depends who’s in charge of AGI. Ding Xuexiang really is a mystery. I think he has great political acumen. He must, in order to gain Xi Jinping’s trust. He has some technical background, but it’s in metallurgical forging. But he’s worked in Shanghai for decades, so he knows how private corporations run. He knows a lot about international trade, FDI.", "If you look at the people in the Politburo and the Politburo Standing Committee , I’d say he’s definitely in the top quartile of people you’d want to be in charge of AI, if not the top two or three people.", "One thing I would say about AI is that content creation is going to be a big roadblock for China. Because China is so paranoid about the content that flows on the internet that they put human beings all over the place to slow things down, slow the transmission down. Now, if you have AI creating tons of content, there will still be human beings—algorithms and human beings—double- and triple-checking the content. They’re just so afraid of subversive content getting out.", "Dwarkesh Patel But it’s still been compatible with them innovating and having leading companies in content, right? RedNote , TikTok, and so forth. Not to mention WeChat.", "And AI might be even more robust to these kinds of sensitive content, right? Because you can just teach the AI not to say certain things.", "Victor Shih Even if that were the case—even if you train something that you think is pretty good—they will still want a human being checking everything.", "00:35:37 – DeepSeek was trained to track CCP policy", "Dwarkesh Patel As somebody who is trying to observe what the Party’s doing, have you found that using LLMs has been helpful? If so, which model gives you the best insight into what the CCP is thinking?", "Victor Shih Yeah, I’ve been using AI quite a bit. Both to code data—it used to be text hand-coded stuff and now a lot of it can be automated— but in terms of finding out what the Chinese government is doing, some of the American models are okay. Grok is okay. But the most helpful has been DeepSeek.", "To me, it’s quite clear that DeepSeek—which, of course, was first developed by a hedge fund to trade in the Chinese market—part of what they really trained the model for was to detect important policy documents and meetings within the Chinese government.", "Because when you enter the right prompts—even not especially sophisticated prompts like, “What is the Chinese government doing when it comes to AI?”—it comes out with a bunch of very high-quality links, which is very useful for my research. Immediately, you get a sense of what are the latest policies, what are the latest statements by high-level officials. It seems that DeepSeek puts some higher weighting, let’s say, on this kind of content.", "I don’t dare install it on my phone. I certainly have collaborators who’ve downloaded the open-source model and installed it onto a hard drive. I use the web interface for some of my research.", "Dwarkesh Patel Do you think it’s just because they have more Chinese-language data, and so it’s just better at understanding the Party communications?", "Victor Shih I’ve used Baidu ’s thing . I think the typical Chinese-language models are more trained on social media content, which will come back with hits that are more social media-like. If you ask about AI, it’ll talk about how AI gets used in different things instead of policy documents and meetings and so forth.", "So it seems, just from inductive observation, that DeepSeek is trained on this kind of content.", "Dwarkesh Patel So that High-Flyer can make trades?", "Victor Shih Yes. Because obviously, China being China, government policies have a huge impact on how different stocks do. That’s often where you would find alpha.", "In the old days, people used to call up their friends who worked for this or that ministry to get insider information. But I think what High-Flyer has discovered could also generate alpha is to use algorithms to look closely at these policy documents.", "Dwarkesh Patel This is going to sound like a naive question. We’re talking about how the Party, in the abstract, wants to maintain control over every aspect of Chinese life and the Chinese economy. But of course, the Party is made up of individuals.", "Many of these individuals— Xi himself —went through periods where the Party had extraordinary control over people’s lives, during the Cultural Revolution and so forth. They personally, and their families, suffered as a result of this.", "They’re educated people. Many of them have been in industry. These are not naive people. Maybe I’m just projecting too much of my Western bias here, but what is the reason they think it’s so crucial that the banks are run by the Party, the AI is run by the Party, and nothing escapes the Party-state?", "Victor Shih So the Cultural Revolution generation, a lot of people suffered horrendously, including Xi Jinping himself.", "Basically, you have two types of lessons that people drew from it. The first lesson is, “Oh, the Party’s too dictatorial. China needs to liberalize,” and so on. You have a lot of people from that generation who feel that way. Many of them now live in the United States. Basically, they tried to leave China as soon as possible in the 1980s and succeeded.", "But for someone like Xi Jinping himself, and others like him, the lesson was, “Just don’t be on the losing side. In any political struggle, make sure you’re on the winning side. Because if you’re on the winning side, then you can do terrible things to your enemies.” Apparently, that’s the lesson he learned. He basically honed his skills and built his coalitions—for the time that he would take over high-level positions in the Party—for decades, as it turns out.", "If you look at the data, when he was in Fujian Province , he spent an inordinate amount of time hanging out with military officers in Fujian in the late 1980s and early 1990s. You wonder, why did he bother to do that? Most local officials don’t do that. The military’s there, you need to be nice to them and all that. But he built a dormitory for them. He even joined a unit, an anti-aircraft regiment. He tried to do as much as possible with the military, because he knew he needed the support of somebody in the military when the time came.", "Guess what? 30 years later, when he was about to be elevated to Secretary General of the Chinese Communist Party, many of the people he knew back in Fujian are now generals. They are in charge of important military units. And once he came to power, he really promoted some of these people.", "But then lately, of course, he’s been purging some of these people which is really interesting. He had a strategic vision about his own career trajectory, and he pursued that in a very determined fashion, I would say.", "Dwarkesh Patel What is the main difference between him and Stalin, in the political maneuvering sense? Because all of this sounds very similar.", "Victor Shih I would say early in their careers they’re very similar. Outwardly, they’re very low-key. Stalin was the bureaucrat, very low-key, not flamboyant, not making loud speeches like Trotsky did. He was low-key, getting things done. He was seen as a very reliable person. This was a reason why Stalin was chosen. As far as I know—I don’t know that much about Stalin—it’s the same thing with Xi Jinping.", "His father actually belonged to the liberal faction in the Chinese Communist Party. His father never offended that many people within the Party. Everyone at a high level will have sort of screwed over somebody along the way, but his father was in the better category, let’s say.", "He himself was very low-key. While the other princelings were fighting for good jobs in the capital city of Beijing or Shanghai, Xi Jinping said, “Oh no, it’s okay. I’ll go down to the villages. I’ll work in Hebei , in a rural area.” Then he went to Fujian, which was kind of a peripheral area, then Zhejiang , which was slightly better but still not Beijing or Shanghai.", "So he got out of the way of this political infighting, I think in a similar way to Stalin. Early on, he didn’t confront people too much. But once he came to power, he knew—they both knew—what it took to control the Party they governed over.", "Basically, you form a whole series of coalitions to get rid of your most threatening enemy at any given time. For Xi Jinping, it was this guy Zhou Yongkang , who controlled the police forces and the Ministry of State Security at the time. He was doing all kinds of irregular things, being fabulously corrupt. He was a threat to the whole Party. So Xi convinced Hu Jintao to join with him to purge Zhou Yongkang, which was successful.", "Stalin did the same thing to Trotsky. And then, after the most threatening person is gone, you form another coalition to purge the next person, and so on and so forth. They both basically did that until they achieved absolute power within the Party.", "Dwarkesh Patel Yeah. If you read the Stalin biography , so many of the people in the early part, half the people in the Politburo are later going to end up in the gulags. And many of the people in the later Politburo had already been to the gulag.", "I wonder if there’s a Trotsky-analogous person in this story, somebody who was a flamboyant speechmaker?", "Victor Shih Yeah, Bo Xilai . So he was one competitor with Xi Jinping. But because he was so high-profile, he never had a chance to be the top leader of China. And of course, Xi Jinping made sure that he would fall.", "Dwarkesh Patel And now he’s in prison.", "Victor Shih And now he’s in prison, yeah, his whole life. His son is in Canada, actually. He’s on Twitter now, on X.", "Dwarkesh Patel I should reach out, honestly. He’d be an interesting person to talk to.", "Victor Shih Yeah, I think that would be really interesting, actually.", "00:45:35 – Local government debt crisis", "Dwarkesh Patel Let’s move on to another topic I know you’ve studied deeply, the local government debt situation . What is the newest there? What’s your sense of the situation now?", "Victor Shih It just keeps growing in absolute size. Basically, China is trying to tell the world that they don’t have a high debt level. At the central level, that is sort of the case. It’s still 60-70% of GDP.", "But the reason why that’s the case is because with a lot of things the Chinese government wants to do, they push it down to the local level. But the authorization of local debt issuance is still authorized by the central government. So at the end of the day, it’s really the central government telling local governments, “Okay, you need to do this thing. I’m not going to give you money, but I will give you the authorization to issue even more debt.”", "So the debt level keeps going higher and higher. The one thing that has alleviated the cash flow pressure, late last year, was that the central government authorized again the issuance of close to 10 trillion renminbi in special local debt to repay some of the higher interest-bearing debt of local governments. But that’s an accounting exercise . It doesn’t literally decrease local debt. It just changes it from higher-yielding to lower-yielding. Meanwhile, the overall size of local government debt in China, I would estimate to be 120-140% of GDP.", "Dwarkesh Patel Woah, that’s on top of the 60% that’s owed by the central government?", "Victor Shih Yeah, so total government debt is pushing 200%.", "Dwarkesh Patel And the high-level way to understand what this debt was for… The debt in Western countries and other OECD countries is often because you owe money to pensioners. In this case, it was just that they built bridges….", "Victor Shih Yeah, they built all the high-speed rail, all the beautiful infrastructure that you do see in China. More recently, it’s been industrial policies. For example, if there’s a new science park for AI, the land they acquire, the infrastructure that’s built, it’s all built with borrowed money.", "Then there are these startups. The startups are partially financed by some of these central-level investment funds that you read about in industrial policies. But actually, the local governments also have their own seeder funds, which use borrowed money to finance some of these venture deals.", "Dwarkesh Patel For startups, is the fact that they can get cheap credit from banks more relevant? Is the fact that they can get loans from provincial funds more important? There are also these central government funds, the “ Big Fund ,” and so forth. Which is the bigger piece here?", "Victor Shih A lot of it depends on your social network and your connections. If you’re a startup at Tsinghua University, you have access to private sector funds but also to some of the central government seeder funds if it falls in the right category: semiconductors, AI, etc.", "With Liang Wenfeng, I don’t know… Because it started out as a financial institution, it was mainly private sector money from financial investors. But there are other AI initiatives, especially more hardware-driven ones, in the provinces, which are seeded by local government funds.", "Without financial repression, an investor in China could invest in Silicon Valley. They could go to Sequoia and invest money. But because of capital controls , they legally cannot move more than $20,000 a year out of China. They have to find domestic investments.", "As a result, many of them choose to invest in Chinese high-tech. Or they just put their money in the bank. Actually, a lot of people just choose to put their money in the bank. And then the state banking system has the deposits with which to finance local government seeder funds for industrial policies.", "Dwarkesh Patel The crucial thing here is that the banking sector is totally controlled by the state. So you're not getting a true rate of return based on what the productive value of investment is. It’s literally like 1%, right?", "Victor Shih Yes. The deposit rate is 1%.", "Dwarkesh Patel That’s an insanely low return.", "Victor Shih Well, the inflation rate is very low in China.", "00:50:00 – BYD, CATL, & financial repression", "Dwarkesh Patel There are a couple of mixed-up questions I want to ask here.", "You said if there were no capital controls, these people might want to take their money and go invest in Silicon Valley. Very basic development economics would tell you that if a country is less developed, it should have higher rates of return. People from Silicon Valley would want to invest in China. Why would getting rid of capital controls make capital go the other way?", "This ties into some other questions. Why is it the case that the Chinese stock market has performed so badly, even though the economy has grown a lot? Why is it that they have a current account surplus and they’re basically accumulating T-bills ? The pattern you should see is that they could earn much higher rates of return building productive things in China, rather than accumulating 4% yield T-bills?", "Victor Shih At the deepest level, I would say there’s a deep fundamental difference between capitalism and socialism. It sounds very philosophical, but I think this might help your Gen Z listeners think about this.", "For socialism, they only care about output. It’s like, “Okay, whatever capacity we’re thinking about—whether it’s grain production or metal production or, these days, robotics—we just want more of it.” So then the state can use the state banking system, which they control, to allocate huge amounts of capital to maximize the output of all these different things they care about. But when you maximize output, you don’t necessarily make money doing that.", "Whereas capitalism wants to maximize profit, which is the difference between the cost of production and the amount you can earn from selling the output. For socialism, they don’t care about that.", "Dwarkesh Patel But the companies aren’t socialist, right? The companies are profit-seeking?", "Victor Shih But because the financial system is socialist, they’re forced into socialist-like behavior.", "You can’t go into a bank and say, “Look, I make robotics. This robot I’m going to make is going to be highly profitable down the road, but I can only make ten of them.” The bank would be like, “This is BS.\" I know a lot of startups don’t make any money, but at some point, notionally, you have to start making money.", "Whereas the socialist banking system basically says, “Even if you never make any money, or hardly any money, that’s okay. As long as the Chinese government tells us this is a strategic sector, and as long as you can prove to us that you can actually produce the thing we want you to produce, if you never, ever make money doing it, that’s perfectly fine.”", "Dwarkesh Patel What is the sort of outer-loop feedback, the thing that keeps the system disciplined?", "Here there are private investors that actually care about earning a rate of return. So they’re only going to invest in companies that they think have a viable shot at becoming huge companies that are actually at the frontier of technology.", "If these banks aren’t actually competing against real investors—", "Victor Shih And it’s not their money so they don’t care.", "Dwarkesh Patel But it seems like this is still compatible with many of the world’s leading companies being developed in this system. So how is it working?", "Victor Shih This is where the bureaucracy steps in. How would a bank tell what’s a good company and what’s not a good company? There’s a complicated system where—for something like robotics or semiconductors—there’s an expert group in the Ministry of Industry and Information Technology . They assess all these different projects. The banks will literally send something into them and say, “Okay, this company wants to borrow $10 million. Do they have real technology? Is this a good prospect?”", "Then some bureaucrat—supposedly with no financial stake in the company, supposedly completely neutral, and I say “supposedly” because in reality we’ve seen many cases of corruption—along with an expert panel, who again supposedly have no relationship with the company or the bank, will sign off on these projects or reject them. Then it gets sent back to the bank, and the bank says, “Okay, these bureaucrats say your project is good, you’re good to go. Here’s $100 million.”", "Dwarkesh Patel But it seems to work. Doesn’t this just belie the whole idea? Central planning isn’t supposed to work, right?", "Victor Shih There’s selection bias. We see and pay attention to the cases that work. There have been some fabulous success cases. By the way, they do not include Xiaomi or BYD . Those were mainly privately funded at the beginning.", "Dwarkesh Patel Same with CATL and DJI and High-Flyer.", "Victor Shih Yeah, exactly. So the success cases actually are not that. But there are some. I don’t know all the cases. Huawei is one where I would say there was a lot of state funding along the way. So yes, you have success cases. But there are so many failed cases, billions and billions of dollars just thrown down a hole, basically, for the failed cases.", "In fact, for semiconductors, things got so bad that they had to arrest dozens of people in the Ministry of Industry and Information Technology, who were in charge of approving these semiconductor deals. They all went to jail because they were in on a deal where some company submitted a bogus project, got it approved, took part of the money, and so on.", "So it doesn’t work very well. Of course, you have fraudulent cases in the US too and all that kind of stuff. But the scale in China, the waste in China, is a much larger-scale phenomenon.", "Dwarkesh Patel And it’s all financed, as you say, with this financial repression which is basically a tax on savers. So even though there’s no meaningful income tax—it’s just value-added tax and corporate income tax—if you factor in the currency devaluation which is a tax on consumers, and then this financial repression which is a tax on savers, it might actually be a meaningful decrease in the quality of life of an average person.", "Victor Shih Oh yeah. You definitely see it. For an economy that’s been growing over 5% percent for decades, by this time you’d expect pretty much the vast majority of the population in China to have all the basic necessities: shelter, medical care, etc. But you don’t see that.", "A lot of migrant workers are barely getting by. Homelessness is increasingly a problem in China, especially with the labor market being so fickle these days. Elderly care is pretty abysmal, very basic level for a lot of people. It’s not the case for everyone. Some people get very good elderly care if they worked for the government or an SOE previously.", "So you see a lot of problems the Chinese government has overlooked. This is why I say that they have a goal of making all Chinese people happy and fulfilled. One part of that is the “fulfilled” part, the technology part. But the part about truly helping people live a good life, they’re not making as much progress as they should.", "Dwarkesh Patel Just to close the loop on this, sometimes we get impressed when we see things like Made in China 2025 . The NDRC and MIIT have these very specific targets, like “ LIDAR needs to get to this level,” or “We need EUV machines by this year,” and so forth.", "But your claim is that that part of the system is actually dysfunctional and in fact, the part that works is the 5% that’s totally private?", "Victor Shih No, the private sector stuff works a lot better. You have occasional successes from the state-financed system. I wouldn’t say there are no success cases.", "But I would say for every success case you see, there are maybe even over a dozen failed cases. Of course, you have failed cases everywhere in VC, but the amount of money being wasted in the state financial system is much larger.", "00:58:12 – How corruption leads to overbuilding", "Dwarkesh Patel Going back to the local government debt burden, there’s a world where AI isn’t a big deal and there’s a world where AI is a big deal. If we live in the world where AI truly causes huge uplift to economic growth, is it possible that this problem doesn’t meaningfully harm China’s ability to compete at the frontier?", "Here’s why that might be the case. It seems like this problem is especially affecting poor provinces. You have the actual numbers, but are Shanghai and Guangdong in a good fiscal position? And could they could keep funding top-end semiconductor, AI firms, and data centers?", "So China could still compete at the frontier in AI, even if the poor provinces can’t fund basic services. And in the long run, if they get advanced AI and that’s a big uplift to economic growth, maybe they can just grow their way out of the debt problem.", "Victor Shih Yeah, we can talk about that. I’m a bit skeptical about that in the case of China. But yes, as you said, some places like Shanghai and Guangdong are still in relatively okay fiscal positions. But there are other wealthier provinces with a lot of debt also.", "Zhejiang Province—where DeepSeek is located, actually, and also Alibaba and other tech companies—has a high debt load. But because they started from a pretty wealthy place, they can still service the debt without too much difficulty. Actually, the trade war is going to make it worse for Zhejiang Province, because a lot of the manufacturing and export firms are also in Zhejiang.", "Nonetheless, the case of China is that the financial system is geared toward fulfilling the priorities of the Party, i.e. the priorities of Xi Jinping. So, even if they can’t do anything else, they will do the things that Xi Jinping wants them to do. And AI, apparently, is one of the higher priorities for Xi Jinping.", "He’s held multiple study sessions, had different Politburo meetings, and Ding Xuexiang—one of his most trusted lieutenants—is in charge of it. So I do expect a fair amount of resources to be devoted to it, regardless of the local debt situation.", "In fact, at the local level, things are so bad. But they just don’t care. There are civil servants, teachers, who are not paid four or five months out of the year. You’d think, “How can you run a government like that? Wouldn’t it just collapse?”", "But then you have to look at the outside options. If you’re a teacher, a very low-level bureaucrat, what option do you have? Do you want to go back to delivering food to people? Or would you rather at least take six or seven months of civil service salary? At least you have healthcare. At least you get a free lunch at the workplace cafeteria every day. People still choose to work for the Chinese government, even if that’s the case.", "A lot of the previously very generous fringe benefits and so forth, that’s all been cut down because of the local debt issue. Still, they’re pouring a lot of resources into national defense, into AI, into technology.", "Dwarkesh Patel What would a solution at this point look like? If you’ve already spent all this money… First of all, why is it the case that these investments actually aren’t productive? It’s not like it’s entitlement spending where you’re doing it for old people and at the end of the day you’re not getting any return for it. Theoretically, you’re building real bridges, you’re building real airports. So why aren’t they productive? And then, what does a solution at this point look like?", "Victor Shih So it was very productive—I think you noted this in the Arthur Kroeber book review you did—in the 1980s and 1990s, because China lacked a lot of infrastructure.", "But once you’ve built the first high-speed rail between Beijing and Shanghai… You build the second one and then the third one and there’s very rapidly diminishing returns.", "Also, the population is basically shrinking in China , especially in a lot of cities in northeastern and southwestern China. You really don’t need high-speed rail connecting those cities to other places, because nobody lives there anymore.", "Dwarkesh Patel Can I ask about this? If the reason that the provincial leaders kept doing this construction was because they wanted to get promoted, and they wanted to get GDP numbers on the books and government spending counts as GDP, even if the ultimate thing you’re building isn’t that productive…", "But the central government has had a problem with this for a while. They think it’s fake, and they don’t want to keep the fake growth going. They can see the local debt numbers. Were these people still promoted nonetheless? Like, why were people doing this? Who was the person who created this? And also, a lot of these people got arrested in the aftermath, right? So they weren’t promoted, they were arrested. So what was incentivizing them to keep going?", "Victor Shih When you invest in a large infrastructure project, a lot of money changes hands. This is a big debate in my field. Some people say, “Look, the more you invest, the more likely you are to get promoted.” My argument is that it’s not the actual investment, it’s the rent-seeking that comes along with the investment. You have to get a contractor for cement, a contractor for this and that. You get a big envelope under the table, as a kickback. That allows you to pay your superiors.", "Let’s say you did a billion-dollar investment to build light rail in a city. You got a hundred million dollar payout from all the contractors. You can give your superior $50 million to thank that person and increase your chance of getting promoted. So I would say that’s the more realistic mechanism for linking investment and promotion rather than, “Oh, good job, you generated investment and GDP in this city, we’re going to reward you.”", "In fact, there’s really good work by James Kung showing this. When you do a real estate deal, you can sell land cheaply to politically connected princelings, and then they’ll lobby on your behalf for your promotion. Statistically, there is an effect. If you sell land cheaply to princelings, your chances of promotion go up.", "Dwarkesh Patel I find it fascinating whether a corrupt political equilibrium can lead to more construction vs. less construction. The political equilibrium in America today… California isn’t the least corrupt state in the country, but the political equilibrium is one where there are many different factions who get their hands into the pocket, so everyone’s not like, “We need California rail to happen yesterday because I need my share of it.” It’s more like, “I’m going to get a little consulting fee to slow this down by five years.” So it's quite interesting.", "If you read the Caro biography of Robert Moses and you look at the strategies he employed, it’s actually very similar to what you’re mentioning here. It was in every single faction’s interest that the bridge that Robert Moses was working on gets done, through some of the mechanisms you mentioned.", "For example, he would give the banks these discounted bonds for the construction so the bank lobbyists were encouraged to keep it going. The unions, obviously, wanted employment. So they wanted the construction too. Every single person at every stage was incentivized.", "So I find it interesting that China has maintained a political equilibrium that encourages too much building, whereas our political equilibrium discourages it.", "Victor Shih Yeah, it is. Basically, all the regulations in the US—environmental and so on—build their own stakeholders and lobbying groups. Then it's in the interest of those groups to prolong the process so they can absorb a higher share of the rent.", "Whereas in China, because they have this command structure headed by the Chinese Communist Party, the secretary of the city or province can cut through all the red tape. He’s the boss of most of the regulators at the local level. So if he wants to do something, he can cut through all of it, as long as he can benefit himself somehow. So I would say that’s the difference.", "Dwarkesh Patel Okay and what would a solution to this problem look like? Because if the debt is already over 100% of GDP, that’s tough.", "Victor Shih Of course the Chinese government will never, ever in a million years do this in its current form, but it would be to reduce a lot of the other unnecessary spending: national defense, many of the industrial policies. And then to use the savings to bolster domestic demand with welfare policies and gradually decrease the amount of local government debt. That, I think, would be economically sound.", "China already has the largest navy in the world. Why does it need a navy that's even bigger than the largest navy in the world? It doesn't need that. It already has more than enough for national defense. It has some of the best jet fighters in the world, some of the best missile systems. No one is going to invade China or come close to it. So they actually don’t need it.", "Now, on the tech race I’m somewhat sympathetic to China. Because now there are US policies preventing Chinese companies from buying all the chips for AI, they want their own capacity. They’re spending a lot of money on it. So that part I’m somewhat sympathetic to. But other aspects of industrial policy—for some of the battery stuff, for solar power, etc.—they could certainly reduce a lot of subsidies.", "They could increase taxes, actually, on many export-oriented firms. That’s one reason for the large trade surplus in China and the trade imbalance with the US. They could get more revenue that way to finance domestic demand and pay down local government debt over time.", "The Chinese government, of course, will never do that because it places a very high priority on competition with the US across many different fronts, and on gaining dominance over multiple supply chains. But in a world of global trade, why do you need that? The same goes for the US. The US doesn’t need dominance over every single supply chain either.", "Dwarkesh Patel Yeah. Why is it the case that when economists talk about rebalancing more toward consumption, they often suggest increasing welfare? It seems like the more straightforward thing to do would be to get rid of financial repression so that by default savers would have more purchasing power. You can also get rid of currency devaluation to achieve the same effect. Why not get rid of the regressive taxes instead?", "Victor Shih Because if you just get rid of financial repression, it will only benefit the net-savers, which is the top 10-20% of households. If you look at the median household, they have very little savings besides the home they live in.", "So there's this misconception that China has the largest savings deposits in the world. That’s true. But if you look at the distribution, those deposits are highly concentrated in the hands of the top 10%. So even if deposit interest rates suddenly go up to 4%, it's only going to benefit the people with large amounts of savings. Most people won’t benefit. Most people, if they don’t have better welfare services, will just save like crazy in anticipation of getting sick. So you do need better medical care insurance in China.", "Frankly, that’s something I worry about in the US too. If you cut back on Medicaid , it's going to encourage precautionary savings. You could say, “Well, that’s a good thing. Americans save too little.” Maybe, but then your short-term consumption is going to come down.", "01:10:46 – Probability of Taiwan invasion", "Dwarkesh Patel So you mentioned a second ago, “why do they need this big military?” Obviously the reason is not just self-defense but a potential invasion of Taiwan . Given your reading of elite politics, what’s your sense on the probability that they would actually invade Taiwan this decade?", "Victor Shih First of all, I think we do know something about Xi Jinping’s preference mapping when it comes to this issue. He’s said a number of times that obviously China should be united and I do believe that’s sincere.", "But clearly, his desire to achieve that is not so strong that he would take a very risky gamble to achieve it. If that were the case, he would’ve done it already. He’s been in power for 12 years. He hasn’t done very much. Certainly the amount of pressure on Taiwan has ratcheted up. Also from his other policymaking, we know he’s not a reckless policymaker.", "Dwarkesh Patel Zero-COVID was reckless.", "Victor Shih Do you think that was reckless? That was actually conservative, right? He preserved a lockdown longer than was necessary.", "Dwarkesh Patel I mean it depends. You could say the same thing about Taiwan. It’s conservative because in the long run maybe they pose a national security threat. It is a change from the norm.", "Victor Shih If he wanted it so badly that he’d do whatever it takes to get it, he would’ve done it already.", "Dwarkesh Patel I guess you could argue that he’s been trying to build up enough self-sufficiency such that in the aftermath, China wouldn’t be left in the lurch without food or power. That’s why they’re doing clean tech, why they’re investing in semis and so forth. It’s so that over the next few years, if they have enough self-sufficiency, they’re in a position where they actually could do it. You couldn’t have done the same thing in 2015.", "Victor Shih Right. We’ve seen the Chinese government engage in all these behaviors: stockpiling oil and grain for example, enlarging the navy, building amphibious landing capabilities, and so on.", "One argument is that there’s a threshold. Once he feels he’s reached that threshold, then he’ll go for it. But for me, I don’t think that threshold is fixed. It’s really conditional on a lot of different things. Some exogenous factors might increase the threshold. Other exogenous factors might lower the threshold.", "More recently, one factor that likely increased the threshold is what happened in Ukraine . That’s a case where Putin was very confident he could take over all of Ukraine in a short time. It turns out he was wrong. He was getting bad intelligence, and so forth. I think for Xi Jinping, he cannot discount the possibility. That likely increased the threshold. But there could be other factors that lower the threshold.", "Dwarkesh Patel This gets to the question of how much information flow there is. How much true information is getting up to Xi Jinping?", "In authoritarian systems, obviously there’s the danger that the leader is not being told things they don’t want to know. But you mentioned earlier that the top leadership is in communication with experts all the time, in meetings all day, where they are learning about what’s happening. Is that your sense?", "Because then you read about something like Zero-COVID and by year three it was very obviously a mistake. So to the extent it wasn’t stopped at that point, was that because information wasn’t getting up to the top leadership?", "There was also that famous case in 2024 where Xi asked, “Why don’t we have billion-dollar unicorn tech companies in China?” And people responded, “Does he not realize the 2021 tech crackdown is the reason?” Did nobody tell him?", "What’s your sense of how much he’s being told things which might make him uncomfortable, like the possibility that he might not succeed in Taiwan?", "Victor Shih This goes to the question of the role of expertise in the Chinese government. That’s a very interesting question. Frankly, I’m trying to use some data to assess it, but my intuition is this.", "The top leadership always listens to experts. Unlike in some cases in the US, the experts are respected and listened to. But there are interest groups close to leadership, who know that, so they’ll present different kinds of experts to tell him different expert opinions. Sometimes he’ll get conflicting advice from different experts, and Xi has to make some kind of gut call. Other times, for political reasons they’ll just ignore the experts.", "They’ll never say, “Fake news,” or “We don’t believe in science.” Sometimes even though they know for sure the experts are right, because of other political reasons they’ll say, “Okay, you’re right, but we’re not going to do what you want us to do for now for a bunch of political reasons.”", "Dwarkesh Patel How did that manifest in Zero-COVID?", "Victor Shih The timing of it. Also the decision not to buy Paxlovid on a very large scale might’ve been a political decision.", "We have a paper showing that they knew the experts were telling them the US vaccines were better, mRNA vaccines were better. But the Chinese government said, “Okay, we believe you, but we still want to help our domestic pharmaceutical industry so we’re only going to sell the domestic stuff in China.” That’s very clear. If you look at the propaganda, they began to denigrate Western vaccines and highlight Chinese vaccines.", "But I think they do meet with experts a lot. Sometimes the experts are manipulated into saying certain things. Other times, the experts are sincere. They listen to them but they still choose not to implement what the experts recommend.", "Dwarkesh Patel I guess I’m just trying to get a sense of where this nets out, in terms of the competence of the political system. It seems like they have a track record of making some smart calls. But then occasionally they’ll make a call so bad that it wipes out all the smart ones. Zero-COVID is an example of that and obviously, the one-child policy before that, not to mention all the things that Mao did.", "I’m getting the sense that they’ll consult with economists or scientists on some things. I guess I still don’t understand. They’re like, “Okay, we’re not going to buy the American vaccine because of this.\" This is the time of all times to be making sure the pharmaceutical industry in China is in a good position.", "Sure but did they not know that people were locked in their apartments starving and that this wasn’t a viable long-term strategy? How is that compatible with this very technocratic picture we have of the Party?", "Victor Shih They’re technocrats. They’re people who know a lot. But generally speaking, protecting the Party—protecting the state apparatus and all the important tools of the Chinese government—will almost always be a higher priority than any kind of technical consideration in a given decision-making process.", "Why protect all these pharmaceutical companies? Because they’re state-owned enterprises, they need to survive the ordeal. They need to have a national brand name. Also, pharmaceutical and what they call “new biology” is one of these industrial plans. In order to finance future research in advanced biology, these companies need cash flows.", "01:18:56 – Succession after Xi", "Dwarkesh Patel What is the succession plan after Xi is gone?", "Victor Shih There are no plans right now.", "Dwarkesh Patel What’s your sense of what would happen? Tomorrow he drops dead. What happens next?", "Victor Shih Oh god. That would be an issue, if he were to drop dead tomorrow. The biggest impact is this. Right now, financial repression works because…", "Let’s say you’re a billionaire trying to get your money out of China. You make up fake paperwork: “I’m importing a million Rolex watches from Singapore, I need to pay an invoice for a billion dollars.” That order goes from your bank to the State Administration of Foreign Exchange in Shenzhen or Shanghai. Some bureaucrat will have to okay it, all large denomination outflows.", "Right now, the bureaucrat thinks: “If I approve this, someone in Beijing is going to notice it and I’ll be in jail in a week. So I will not approve it.” But if that bureaucrat thinks, “There’s nobody in Beijing. No one’s minding the store and this guy promises me $100 million if I approve it”, then he’ll go ahead and approve it.", "So if there’s any sense that there’s no command structure in Beijing, even for a week or two, I think we would have a financial crisis, at least.", "Dwarkesh Patel That's interesting. Because there would be tremendous capital outflows?", "Victor Shih Let’s put it this way. With the foreign exchange reserve people say, “Oh my god, it’s so much money, $3 trillion.” Right now, it’s less than 5% of the money supply in China . So it doesn’t even take a panic.", "Even if people just want to rationally reallocate 10% of their assets outside of China, the foreign exchange reserve is gone. There’s massive devaluation, etc. They’d have to jack up interest rates to something like 20% to stop it. Then there’d be mass bankruptcy, and so on.", "Dwarkesh Patel But they can stop it?", "Victor Shih If there’s a command structure, yes. If people believe they’re going to jail the week after, even if they get $100 million in cryptocurrency in Hong Kong or wherever.", "Dwarkesh Patel Suppose it doesn’t happen tomorrow, but in 2028 or 2030 he just goes senile. Over the course of months, it’s clear there needs to be a successor. What does then the succession process look like then?", "Victor Shih There’s a way we historically know doesn’t work, and then there’s a way that could work. The model we know doesn’t work is designating someone: “You’re going to be the successor.” That person is going to die for sure, in a horrible way. We’ve seen this with Stalin. We’ve seen this with Mao.", "The other way is to have someone that everyone trusts to represent the leader’s opinion, but who cannot possibly become the number-one person. Mao had Jiang Qing , his wife. Xi Jinping could do something similar with his wife . Increasingly, we’re seeing his daughter take on a higher profile. The reason these individuals cannot be number one in the Party is because of this ingrained sexism in the Party against women. It’s also because they’ve never held official positions in the Party.", "So I think having a transitional figure who can stabilize the situation while different factions fight it out, hopefully peacefully, will be an important ingredient. Sometimes even those transitional figures are too weak. Jiang Qing was purged within weeks of Mao dying.", "Now the Long March veterans were so close to each other that they worked it out among themselves. That was a pretty peaceful transition, aside from the purge of Jiang Qing and her colleagues in the Gang of Four . These days, the Long March generation of officials who’ve spent decades with each other, governing with each other—that generation doesn’t exist anymore. There’s very high turnover. The people in the Politburo, there are different elements in there.", "Some of them have worked closely together over the decades, but many of them worked in such disparate bureaucracies that there’s not a lot of trust between them. That’s probably by design. Xi doesn’t want people in the Politburo to trust each other and be friends with each other too much, because they might get together and usurp his power. So given that there isn't this social capital, the kind we saw in 1976, I think this transition is going to be more ruthless, more brutal, and potentially more disruptive.", "Dwarkesh Patel What are the important factions in Chinese politics today? What’s the equivalent of liberal or conservative in American politics? Historically, there were things like the Shanghai clique . Do those still exist? If they do, what do they stand for?", "Victor Shih They’re all beholden to Xi Jinping in one way or another.", "Dwarkesh Patel But do they have different ideas about what should happen with China, or what should happen in the world?", "Victor Shih Generally speaking, you have some people who are more pro-market. These tend to be people with more governing experience along the coastal provinces, like Zhejiang Province or Shanghai. Then you have the statists, people whose careers were mostly in state-owned enterprises. All the military industrialists, I would guess they’re more pro–state power.", "Actually, those figures might ultimately be somewhat stabilizing. You basically saw this in the Soviet Union. For those people, they just wanted to preserve their vested interests. So it was like, “Okay, whoever is in charge, as long as you subsidize the state sector some more, I’ll support you.”", "In the Soviet Union after Stalin died between the sort of Kruschev and post-Kruschev transition, they were stabilizing politically, even though they ended up damaging the Soviet economy by demanding so many subsidies for their own sectors.", "So the fact that you have quite a number of these guys from the military-industrial sector as Politburo members could potentially be stabilizing.", "01:25:10 – Future growth forecasts", "Dwarkesh Patel What’s your forecast on Chinese growth—or the Chinese economy relative to the US—if AI is not a thing? Let’s forecast out everything else. In the year 2040, is it 2x as big as the US economy PPP ? 3x? 4x? 1x?", "Victor Shih I think it’s 1x or 1.2x the US economy.", "Dwarkesh Patel Even PPP?", "Victor Shih Yeah. I don’t quite believe in these PPP calculations.", "Dwarkesh Patel Oh really? Interesting. How come?", "Victor Shih Just because the quality is very different. In some cases, China’s quality is still worse. In other cases, it’s increasingly better than the US quality. So I’m not an expert, but even controlling for inflation—so real income—I think the overall size of the economy will be similar to the US, maybe slightly larger by 2040.", "Dwarkesh Patel But it’s already 1.1x or 1.2x, right?", "Victor Shih No, not as far as I know. If it’s not PPP and it’s just real income, I think it’s around 70% of the US economy.", "Dwarkesh Patel This is quite a bearish forecast. Usually people expect China to be quite dominant by 2040.", "Victor Shih Growth rates were slowing down anyway. Now you have trade pressure and so forth. It’s going to slow down.", "There’s no big consumption stimulus in sight because of the high debt level. So they can’t push growth through that mechanism. They have to double down on more investment, more supply-side. But at some point, is the world really going to switch to 100% buying BYD as opposed to Volkswagen and all these other cars? I don’t know. I think there’ll be a pretty big pushback, or at least some kind of limit.", "Let’s say China takes over most of Latin America, most of Africa, most of the Middle East. That’s still not such a big market, right? Ultimately, you have to conquer Europe. You have to take North America. That’s going to be really tough.", "Dwarkesh Patel Final question. What advice do you have for me on how to land—if not somebody on the Politburo—then someone one step away from the Politburo.", "Victor Shih That’s very tough. Anyone who’s been a mid-level or higher official can no longer leave the People’s Republic of China. There are some former vice ministers who can go to Hong Kong. So I think that’s your best bet. Reach out to one of these vice minister-types and agree to meet with them in Hong Kong, or Shezhen even better yet. That could be okay if they’ve been out of power.", "Dwarkesh Patel You think it could be an interesting interview? Or would they not say anything?", "Victor Shih Yeah, it depends on the person. Some people are willing to be a bit more open. But frankly, there is this atmosphere in China where people are afraid of deviating from the Party line, because they can really get in trouble.", "Dwarkesh Patel This was super useful to get a much more tangible sense of how the Chinese political system actually works. We talk about China so much. Most of us have so little understanding of even the basics, like the Congress/president level understanding of the Chinese political system. Obviously a lot of it is obfuscated on purpose, so I appreciate you adding a lot more color to it.", "Victor Shih Great. Thank you for having me." ]
[ "https://gps.ucsd.edu/faculty-directory/victor-shih.html", "https://china.ucsd.edu/", "https://www.investopedia.com/terms/f/fdi.asp", "https://en.wikipedia.org/wiki/Tax-Sharing_Reform_of_China_in_1994", "https://en.wikipedia.org/wiki/Taxation_in_China", "https://www.piie.com/research/piie-charts/2024/chinese-local-governments-reliance-land-revenue-drops-property-downturn", "https://thediplomat.com/2022/03/what-doomed-chinas-much-anticipated-property-market-reform-plan/", "https://en.wikipedia.org/wiki/Politburo_of_the_Chinese_Communist_Party", "https://en.wikipedia.org/wiki/State-owned_enterprises_of_China", "https://en.wikipedia.org/wiki/Xi_Jinping", "https://en.wikipedia.org/wiki/Premier_of_China", "https://en.wikipedia.org/wiki/Li_Keqiang", "https://en.wikipedia.org/wiki/Ma_Xingrui", "https://en.wikipedia.org/wiki/Tsinghua_University", "https://en.wikipedia.org/wiki/Zhang_Guoqing", "https://en.wikipedia.org/wiki/Ding_Xuexiang", "https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology", "https://en.wikipedia.org/wiki/Indian_Institutes_of_Technology", "https://en.wikipedia.org/wiki/Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Princelings", "https://en.wikipedia.org/wiki/Peking_University", "https://en.wikipedia.org/wiki/Chinese_government_response_to_COVID-19#Zero-COVID_policy", "https://asiasociety.org/policy-institute/decoding-chinese-politics?policy=top-leadership&group=organizations&size=rank&connection=personal", "https://en.wikipedia.org/wiki/Anti-corruption_campaign_under_Xi_Jinping", "https://en.wikipedia.org/wiki/Leading_Small_Group", "https://en.wikipedia.org/wiki/Joseph_Stalin", "https://asiasociety.org/policy-institute/who-briefs-xi-jinping-how-politburo-study-sessions-help-decode-chinese-politics", "https://en.wikipedia.org/wiki/Politburo_Standing_Committee_of_the_Chinese_Communist_Party", "https://en.wikipedia.org/wiki/General_Secretary_of_the_Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Vice_Premier_of_China", "https://en.wikipedia.org/wiki/Governor_(China)", "https://en.wikipedia.org/wiki/Central_Science_and_Technology_Commission", "https://en.wikipedia.org/wiki/Central_Cyberspace_Affairs_Commission", "https://en.wikipedia.org/wiki/Party_Secretary_of_Shanghai", "https://en.wikipedia.org/wiki/Jiang_Zemin", "https://en.wikipedia.org/wiki/Shanghai_clique", "https://en.wikipedia.org/wiki/Vice_Premier_of_China", "https://en.wikipedia.org/wiki/Artificial_general_intelligence", "https://x.com/vshih2/status/1925432760514355371", "https://www.weforum.org/stories/2025/01/davos-2025-special-address-ding-xuexiang-vice-premier-china/", "https://openai.com/index/computer-using-agent/", "https://cset.georgetown.edu/publication/xi-politburo-collective-study-ai-2025/", "https://en.wikipedia.org/wiki/DeepSeek", "https://en.wikipedia.org/wiki/WeChat", "https://en.wikipedia.org/wiki/Tencent", "https://en.wikipedia.org/wiki/Liang_Wenfeng", "https://www.nytimes.com/2025/03/18/business/china-government-deepseek.html", "https://en.wikipedia.org/wiki/Graphics_processing_unit", "https://en.wikipedia.org/wiki/Falun_Gong", "https://en.wikipedia.org/wiki/Censorship_in_China#Internet", "https://en.wikipedia.org/wiki/High-Flyer", "https://en.wikipedia.org/wiki/Huawei", "https://en.wikipedia.org/wiki/National_Development_and_Reform_Commission", "https://en.wikipedia.org/wiki/Li_Qiang", "https://en.wikipedia.org/wiki/He_Lifeng", "https://en.wikipedia.org/wiki/Belt_and_Road_Initiative", "https://www.reuters.com/business/autos-transportation/chinas-trade-tsar-limelight-us-tariff-talks-2025-05-07/", "https://www.wsj.com/real-estate/china-real-estate-bubble-bust-35a2b7db", "https://en.wikipedia.org/wiki/Tianjin", "https://en.wikipedia.org/wiki/Yujiapu_Financial_District", "https://en.wikipedia.org/wiki/Emeishan_City", "https://en.wikipedia.org/wiki/Mount_Emei#Buddhist_architecture_on_Emei", "https://en.wikipedia.org/wiki/20th_Politburo_of_the_Chinese_Communist_Party", "https://en.wikipedia.org/wiki/20th_Politburo_Standing_Committee_of_the_Chinese_Communist_Party", "https://en.wikipedia.org/wiki/Xiaohongshu", "https://en.wikipedia.org/wiki/Large_language_model", "https://en.wikipedia.org/wiki/Grok_(chatbot)", "https://en.wikipedia.org/wiki/Baidu", "https://en.wikipedia.org/wiki/Ernie_Bot", "https://en.wikipedia.org/wiki/Xi_Jinping#Early_life_and_education", "https://en.wikipedia.org/wiki/Cultural_Revolution", "https://en.wikipedia.org/wiki/Fujian", "https://en.wikipedia.org/wiki/Leon_Trotsky", "https://en.wikipedia.org/wiki/Xi_Zhongxun", "https://en.wikipedia.org/wiki/Hebei", "https://en.wikipedia.org/wiki/Zhejiang", "https://en.wikipedia.org/wiki/Zhou_Yongkang", "https://en.wikipedia.org/wiki/Ministry_of_State_Security_(China)", "https://en.wikipedia.org/wiki/Hu_Jintao", "https://amzn.to/4mwByVn", "https://en.wikipedia.org/wiki/Bo_Xilai", "https://en.wikipedia.org/wiki/Bo_Guagua", "https://x.com/bokuangyi?lang=de", "https://www.bloomberg.com/news/articles/2025-04-16/china-economy-can-1-6-trillion-help-solve-xi-s-hidden-debt-problem", "https://carnegieendowment.org/posts/2024/11/a-10-trillion-rmb-accounting-exercise?lang=en", "https://carnegieendowment.org/posts/2024/11/a-10-trillion-rmb-accounting-exercise?lang=en", "https://en.wikipedia.org/wiki/OECD", "https://en.wikipedia.org/wiki/China_Integrated_Circuit_Industry_Investment_Fund", "https://en.wikipedia.org/wiki/Sequoia_Capital", "https://en.wikipedia.org/wiki/Capital_control", "https://www.investopedia.com/terms/c/current-account-surplus.asp#:~:text=A%20current%20account%20surplus%20means,adds%20to%20a%20country's%20reserves.", "https://www.investopedia.com/terms/t/treasurybill.asp", "https://en.wikipedia.org/wiki/Ministry_of_Industry_and_Information_Technology", "https://en.wikipedia.org/wiki/Xiaomi", "https://en.wikipedia.org/wiki/BYD_Company", "https://www.google.com/search?q=CATL&oq=CATL&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRhBMgYIAhBFGEEyBggDEEUYQTIGCAQQRRhBMgYIBRAuGEDSAQcxMTJqMGoxqAIAsAIA&sourceid=chrome&ie=UTF-8", "https://en.wikipedia.org/wiki/DJI", "https://en.wikipedia.org/wiki/Made_in_China_2025", "https://en.wikipedia.org/wiki/Lidar", "https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithography", "https://en.wikipedia.org/wiki/Guangdong", "https://www.dwarkesh.com/p/chinas-economy", "https://www.reuters.com/world/china/chinas-population-falls-third-consecutive-year-2025-01-17/", "https://en.wikipedia.org/wiki/Rent-seeking#:~:text=%22Rent%2Dseeking%22%20is%20an,than%20by%20creating%20new%20wealth.", "https://ideas.repec.org/a/eee/deveco/v123y2016icp86-106.html?curator=Informerly&page=15", "https://www.jameskung.net/", "https://amzn.to/3ZGrBuG", "https://en.wikipedia.org/wiki/Robert_Moses", "https://en.wikipedia.org/wiki/Medicaid", "https://en.wikipedia.org/wiki/Chinese_unification", "https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine", "https://en.wikipedia.org/wiki/Vladimir_Putin", "https://www.economist.com/china/2024/07/16/xi-jinping-is-trying-to-love-bomb-chinas-entrepreneurs", "https://www.scmp.com/tech/big-tech/article/3227753/timeline-chinas-32-month-big-tech-crackdown-killed-worlds-largest-ipo-and-wiped-out-trillions-value", "https://en.wikipedia.org/wiki/Nirmatrelvir/ritonavir", "https://patrickjchester.com/publication/vacnat/vacnat.pdf", "https://en.wikipedia.org/wiki/MRNA_vaccine", "https://en.wikipedia.org/wiki/One-child_policy", "https://en.wikipedia.org/wiki/Mao_Zedong", "https://en.wikipedia.org/wiki/State_Administration_of_Foreign_Exchange", "https://en.wikipedia.org/wiki/Shenzhen", "https://en.wikipedia.org/wiki/Foreign-exchange_reserves_of_China", "https://ycharts.com/indicators/china_m2_money_supply#:~:text=China%20M2%20Money%20Supply%20is,7.96%25%20from%20one%20year%20ago.", "https://en.wikipedia.org/wiki/Jiang_Qing", "https://en.wikipedia.org/wiki/Peng_Liyuan", "https://en.wikipedia.org/wiki/Xi_Mingze", "https://en.wikipedia.org/wiki/Long_March", "https://en.wikipedia.org/wiki/Smashing_the_Gang_of_Four", "https://en.wikipedia.org/wiki/Gang_of_Four", "https://en.wikipedia.org/wiki/Shanghai_clique", "https://en.wikipedia.org/wiki/Nikita_Khrushchev", "https://en.wikipedia.org/wiki/Purchasing_power_parity" ]
https://www.dwarkesh.com/p/will-macaskill
Will MacAskill - Longtermism, Altruism, History, & Technology
[ "Dwarkesh Patel 0:06", "Okay, today I have the pleasure of interviewing William MacAskill . Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future . Will, thanks for coming on the podcast.", "Will MacAskill 0:20", "Thanks so much for having me on.", "Effective Altruism and Western values", "Dwarkesh Patel 0:23", "My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?", "Will MacAskill 0:32", "Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.", "We can go back to ancient China, the Mohists defended an impartial view of morality, and took very strategic actions to help all people. In particular, providing defensive assistance to cities under siege. Then, there were early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn’t get a lot of traction until early 2010 after GiveWell and Giving What We Can launched.", "What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people which wasn’t possible otherwise. There were some particularly lucky events like Alex meeting Holden and me meeting Toby that helped catalyze it at the particular time it did.", "Dwarkesh Patel 1:49", "If it's true, as you say, in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.", "Will MacAskill 2:09", "Absolutely. Taking history seriously and appreciating the contingency of values, appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.", "One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.", "Dwarkesh Patel 2:56", "So that makes a lot of sense. But I'm asking a slightly separate question—not only are there possible values that could be better than ours, but should we expect our values - we have the sense that we've made moral progress (things are better than they were before or better than most possible other worlds in 2100 or 2200)- should we not expect that to be the case? Should our priors be that these are ‘meh’ values?", "Will MacAskill 3:19", "Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are other values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is to think that we're doing better than average.", "If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.", "Dwarkesh Patel 4:14", "If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing. Maybe you're risking regression to the mean if you just have 1,000 years of progress.", "Will MacAskill 4:30", "Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What’s right, morally speaking? What's their best arguments support?” I think it's a weak force, unfortunately.", "The idea of lumbar flexion is getting society into a state that before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and goodhearted model inquiry to guide which values we end up with.", "Are we unwise?", "Dwarkesh Patel 5:05", "In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean Longtermists and reject these inside-view esoteric threats?", "Will MacAskill 5:32", "My view goes the opposite of the Burkean view. We are cultural creatures in our nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. It works well in a low-change environment. The environment we evolved towards didn't change very much. We were hunter-gatherers for hundreds of years.", "Now, we're in this period of enormous change, where the economy is doubling every 20 years, new technologies arrive every single year. That's unprecedented. It means that we should be trying to figure things out from first principles.", "Dwarkesh Patel 6:34", "But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?", "Will MacAskill 6:47", "Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subject if you want to do good in the world is philosophy of economics. But we've got that in abundance compared to there being very little historical knowledge in the EA community.", "Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.", "The contingency of technology", "Dwarkesh Patel 7:47", "In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history is not that large? And it implies a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter?", "Will MacAskill 8:17", "In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine, semaphore was only developed very late yet could have been invented thousands of years in the past.", "But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth are so strong, that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.", "Dwarkesh Patel 9:10", "It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?", "Will MacAskill 9:22", "In that particular case, there's a big debate about whether quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.", "It's still contingency on the order of centuries to thousands of years. Post industrial-revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.", "Dwarkesh Patel 9:57", "The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?”", "Will MacAskill 10:11", "Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? Would similar technologies that were vital to the industrial revolution developed? Yes, there are very strong incentives for doing so.", "If there’s a culture that's into making textiles in an automated way as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.", "Dwarkesh Patel 11:06", "When people think of somebody like Norman Borlaug and the Green Revolution . It's like, “If you could have done something that, you'd be the greatest person in the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the green revolution would have happened anyways?", "Will MacAskill 11:22", "Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don’t think a billion people would have died. Rather, similar developments would have happened shortly afterwards.", "Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But, it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.", "Who changes history?", "Dwarkesh Patel 12:02", "What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?", "Will MacAskill 12:12", "Not quite moral philosophers, although there are some examples. Sticking on science technology, if you look at Einstein, theory of special relativity would have been developed shortly afterwards. However, theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But, we're still only talking about decades rather than millennia. Moral philosophers could make long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run difference. Moral activists as well.", "Dwarkesh Patel 13:04", "If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?", "Will MacAskill 13:20", "As things turned out, Marx will not be influential over the long term future. But that could have gone another way. It's not such a wildly different history. Rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better the ruling ideology is to cement its ideology and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.", "Let’s say a world-government is based around those ideas, then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that possesses forever in accordance with that ideology.", "Dwarkesh Patel 14:20", "The death of dictators is especially interesting when you're thinking about contingency because there are huge changes in the regime. It makes me think the actual individual there was very important and who they happened to be was contingent and persistent in some interesting ways.", "Will MacAskill 14:37", "If you've got a dictatorship, then you've got single person ruling the society. That means it's heavily contingent on the views, values, beliefs, and personality of that person.", "Scientific talent", "Dwarkesh Patel 14:48", "Going back to the second nation, in the book, you're very concerned about fertility. It seems your model about scientific and technological progress happens is number of people times average researcher productivity. If resource productivity is declining and the number of people isn't growing that fast, then that's concerning.", "Will MacAskill 15:07", "Yes, number of people times fraction of the population devoted to R&D.", "Dwarkesh Patel 15:11", "Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX , right? There are 20 developers making this a multibillion dollar company—do these examples suggest that organization and congregation of researchers matter more than the total amount?", "Will MacAskill 15:36", "The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.", "Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany was because all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at like Sparta versus Athens, what was the difference? They had different cultures and intellectual inquiry was more rewarded in Athens.", "Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.", "Dwarkesh Patel 16:58", "If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?", "Will MacAskill 17:14", "I wouldn't say that at all. The model we’re working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things from a certain environment with the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.", "However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.", "Longtermist institutional reform", "Dwarkesh Patel 18:00", "I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?", "Will MacAskill 18:23", "The thing I'll caveat with longtermist institutions is that I’m pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is you have a random selection from the population? How would you ensure that incentives are aligned?", "In 30-years time, their performance will be assessed by a panel of people who look back and assess the policies’ effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30-years time, both their policies and their assessment of the previous 30-years previous assembly get assessed by another assembly, 30-years after that, and so on. Can you get that to work? Maybe in theory—I’m skeptical in practice, but I would love some country to try it and see what happens.", "There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes, and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.", "Dwarkesh Patel 20:30", "If you are on that board that is judging these people, is there a metric like GDP growth that would be good heuristics for assessing past policy decisions?", "Will MacAskill 20:48", "There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of super forecasters predicting the chance of a war between great powers occurring in the next ten years that gets aggregated into a war index.", "That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed into because you wouldn't want something only incentivizing economic growth at the expense of tail risks.", "Dwarkesh Patel 21:42", "Would that be your objection to a scheme like Robin Hanson’s about maximizing the expected future GDP using prediction markets and making decisions that way?", "Will MacAskill 21:50", "Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson’s idea of voting on values but betting on beliefs, if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It’s an idea of speculative political philosophy about how a society could be extraordinarily different in structure that is incredibly neglected.", "Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn’t been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You have laws about what things can be voted on or predicted in the prediction market, you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising and I'd love to see it tried out on a city-level or something.", "Dwarkesh Patel 23:13", "Let’s take a scenario where the government starts taking the impact on the long-term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There're environmental review boards that will try to assess the environmental impact of new projects and repeal any proposals based on certain metrics.", "The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, but policymakers are tasked with evaluating the long term impacts of something. Are you worried that it'd be a system that'd be easy to game by malicious actors? And they'd ask, “What do you think went wrong with the way that environmentalism was codified into law?”", "Will MacAskill 24:09", "It's potentially a devastating worry. You create something to represent future people, but they're not allowed to lobby themselves (it can just be co-opted). My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and elect jobs by people in 30-years time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term rather than things that are narrowly-focused.", "Regulation to have liability insurance for dangerous bio labs is not about trying to represent the interests of future generations. But, it's very good for the long-term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long-term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But, there are major problems with implementation for any of them.", "Dwarkesh Patel 25:35", "If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?", "Will MacAskill 25:46", "Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could’ve had some system that wouldn't have been co-opted in the long-term.", "Are companies longtermist?", "Dwarkesh Patel 25:56", "Theoretically, the incentives of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can’t be around if there’s an existential risk…", "Will MacAskill 26:18", "I don't think so. Different institutions have different rates of decay associated with them. So, a corporation that is in the top 200 biggest companies has a half-life of only ten years. It’s surprisingly short-lived. Whereas, if you look at universities Oxford and Cambridge are 800 years old. University of Bologna is even older. These are very long-lived institutions.", "For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, the legends can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.", "Dwarkesh Patel 27:16", "Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?", "Will MacAskill 27:24", "Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The future: you get moments of plasticity during the formation of a new institution.", "Whether that’s the Christian church or the Constitution of the United States, you lock-in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months seems to have stood the test of time. Alternatively, lock-in norms could be extremely dangerous. There were horrible things in the U.S. Constitution like the legal right to slavery proposed as a constitutional amendment. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.", "Living in an era of plasticity", "Dwarkesh Patel 28:57", "You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?", "Will MacAskill 29:04", "There are specific types of ‘moments of plasticity’ for two reasons. One is a world completely unified in a way that's historically unusual. You can communicate with anyone instantaneously and there's a great diversity of moral views. We can have arguments, like people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views become the most popular ultimately.", "Secondly, we're at this period where things can really change. But, it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there was a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long-term? The key technology here is Artificial Intelligence. The point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that [ideological conformity] could persist.", "Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other kind of sources of value-change over time? I think they can be accounted for too.", "Dwarkesh Patel 30:46", "Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that bit of reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?", "Will MacAskill 31:01", "The “whether” you have is whether the control will happen before the point of space settlement. If we took to space one day, and there're many different settlements and different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).", "Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But, I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it’s very likely, but it's seriously on the table - 10% or something?", "Dwarkesh Patel 31:53", "Hm, right. Going back to the long-term of the longtermism movement, there are many instructive foundations that were set up about a century ago like the Rockefeller Foundation, Carnegie Foundation. But, they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?", "Will MacAskill 32:18", "I don't have strong views about those particular examples, but I have two natural thoughts. For organizations that want to persist a long time and keep having an influence for a long time, they’ve historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston to pay out after 100 years and then 200 years for different fractions of the amount invested. But, he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you’re in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would have had plausibly more impact.", "The second is a ‘regression to the mean’ argument. You have some new foundation and it's doing an extraordinary amount of good as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you’re changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.", "Dwarkesh Patel 33:40", "Going back to that hand problem: if you specify your mission too narrowly and it doesn't make sense in the future—is there a trade off? If you're too broad, you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?", "Will MacAskill 34:11", "It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit more broad than that. Secondly, I expect people in the future to be smarter and more capable. It’s certainly the trend over time. In which case, if we’re sharing similar broad goals, and they're implementing it in a different way, then they have it.", "How good can the future be?", "Dwarkesh Patel 34:52", "Let's talk about how good we should expect the future to be. Have you come across Robin Hanson’s argument that we’ll end up being subsistence-level ems because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?", "Will MacAskill 35:11", "Yeah, I'm familiar with the argument. But, we should distinguish the idea that ems are at subsistence level from the idea that we would have bad lives. So subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.", "That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.", "Dwarkesh Patel 36:02", "Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life every moment. But, 31% of Americans said that they would not want to relive their life at every moment? So, why are Indians seemingly much happier at less than a tenth of the GDP per capita?", "Will MacAskill 36:29", "I think the numbers are lower than that from memory, at least. From memory, it’s something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn’t. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, happier with their lives than people in the US were. Honestly, I don't want to generalize too far from that because we were sampling comparatively poor Americans to comparatively well-off Indians. Perhaps it's just a sample effect.", "There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusion from that. But, it is pretty striking as a piece of information, given that you find people's well-being in richer countries considerably happier than poorer countries, on average.", "Dwarkesh Patel 37:41", "I guess you do generalize in a sense that you use it as evidence that most lives today are living, right?", "Will MacAskill 37:50", "Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness and wouldn't want to be reborn and live the same life if they could.", "There's another scripture study that looks at people in United States/other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would reach you to the end of whatever activity you're engaging with. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.", "If you look at that, and then also asked people about the trade offs they would be willing to make as a measure of intensity of how much they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life that day as being surveyed worse than if they'd been unconscious the entire day.", "Contra Tyler Cowen on what’s most important", "Dwarkesh Patel 39:18", "Jumping topics here a little bit, on the 80,000 Hours Podcast , you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.", "Do you think this could be a general problem with longtermism? If you were trying to find the most important things that are important long-term, you might be missing things that wouldn't be obvious thinking this way?", "Will MacAskill 39:48", "Yeah, I think that's a risk. Among the ways that people could argue against my general set of views, I argue that we should be doing fairly specific and targeted things like trying to make AI safe, well-govern the rise of AI, reduce worst-case pandemics that can kill us all, prevent a Third World War, ensure that good values are promoted, and avoid value lock-in. But, some people could argue (and people like Tyler Cowen and Patrick Collison do), that it's very hard to predict the future impact of your actions.", "It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument to because we don’t know when we will find out who's right. Maybe in thousand-years time. But one piece of evidence is the success of forecasters in general. This also was true for Tyler Cowen, but people in Effective Altruism were realizing that the Coronavirus pandemic was going to be a big deal for them. At an early stage, they were worrying about pandemics far in advance. There are some things that are actually quite predictable.", "For example, Moore's Law has held up for over 70 years. The idea that AI systems are gonna get much larger and leading models are going to get more powerful are on trend. Similarly, the idea that we will be soon be able to develop viruses of unprecedented destructive power doesn’t feel too controversial. Even though it’s hard to predict loads of things, there are going to be tons of surprises. There are some things, especially when it comes to fairly long-standing technological trends, that we can make reasonable predictions — at least about the range of possibilities that are on the table.", "Dwarkesh Patel 42:19", "It sounds like you're saying that the things we know are important now. But, if something didn't turn out, a thousand years ago, looking back to be very important, it wouldn't be salient to us now?", "Will MacAskill 42:31", "What I was saying with me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand-years time because we're talking about impactful strategies for the long-term. We might get suggestive evidence earlier. If me and others engaging in longtermism are making specific, measurable forecasts about what is going to happen with AI, or advances in biotechnology, and then are able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.", "Whereas, they're doing all sorts of stuff, but not make firm predictions about what's going to happen, but then things pop out of that that are good for the long-term (say we measure this in ten-years time), that would be good evidence for their view.", "Dwarkesh Patel 43:38", "You were saying earlier about the contingency in technology implies that given their worldview, even if you're trying to maximize what in the past is at the most impact, if what's had the most impact in the past is changing values, then economic growth might be the most important thing? Or trying to change the rate of economic growth?", "Will MacAskill 43:57", "I really do take the argument seriously of how people have acted in the past, especially for people trying to make a long-lasting impact. What things that they do that made sense and whatnot. So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and therefore, future generations would be impoverished. It's pretty striking because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.", "Secondly, they had enormously wrong views about how much coal and fossil fuels there were in the world. So, that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground, given Britain at the time where we're talking about much lower amounts of coal—so small that the climate change effect is negligible at that level—probably would have been harmful.", "But, we could look at other things that John Stuart Mill could have done such promoting better values. He campaigned for women's suffrage. He was the first British MP. In fact, even the first politician in the world to promote women's suffrage - that seems to be pretty good. That seems to have stood the test of time. That's one historical data point. But potentially, we can learn a more general lesson there.", "AI and the centralization of power", "Dwarkesh Patel 45:36", "Do you think the ability of your global policymakers to come to a consensus is on net, a good or a bad thing? On the positive, maybe it helps around some dangerous tech from taking off, but on the negative side, prevent human challenge trials that cause some lock-in in the future. On net, what do you think about that trend?", "Will MacAskill 45:54", "The question of global integration, you're absolutely right, it's double-sided. One hand, it can help us reduce global catastrophic risks. The fact that the world was able to come come together and ban Chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on the most possible value that way.", "The solution is doing the good bits and not having the bad bits. For example, in a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, in the world, you can have very strong regulation and treaties that only deal with certain global public goods like mitigation of climate change, prevention of development of the next generation of weapons of mass destruction without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But, that could change.", "Dwarkesh Patel 47:34", "Yeah, it seems the historical trend is when you have a federated political body that even if constitutionally, the Central Powers constrain over time, they tend to gain more power. You can look at the U.S., you can look at the European Union. But yeah, that seems to be the trend.", "Will MacAskill 47:52", "Depending on the culture that's embodied there, it's potentially a worry. It might not be if the culture itself is liberal and promoting of moral diversity and moral change and moral progress. But, that needn't be the case.", "Dwarkesh Patel 48:06", "Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea reaches common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?", "Will MacAskill 48:32", "There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress and extremely rapid rate of technological progress within the next 10-20 years. If so, you’re right. Value changes are something that pay off slowly over time.", "I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it's not that long. That's not growth. That's not change that happens on the order of centuries.", "If you look at other moral movements like gay rights movement, very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try and promote better values. But, we should have a very significant probability mass on the idea that we will not hit some historical end of this century. In those worlds, promoting better values could pay off like very well.", "Dwarkesh Patel 49:59", "Have you heard of Slime Mold Time Mold Potato Diet ?", "Will MacAskill 50:03", "I have indeed heard of Slime Mold Time Mold Potato Diet, and I was tempted as a gimmick to try it. As I'm sure you know, potato is close to a superfood, and you could survive indefinitely on butter mashed potatoes if you occasionally supplement with something like lentils and oats.", "Dwarkesh Patel 50:25", "Hm, interesting. Question about your career: why are you still a professor? Does it still allow you to the things that you would otherwise have been doing like converting more SBF’s and making moral philosophy arguments for EA? Curious about that.", "Will MacAskill 50:41", "It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the board of those organizations I've helped to set up. More recently, working closely with the Future Fund, SBF’s new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long-term. I’ve had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.", "The problems with academia", "Dwarkesh Patel 51:34", "You mentioned in the book and elsewhere that there's a scarcity of people thinking about big picture questions—How contingent is history? How are people happy generally?—Are these questions that are too hard for other people? Or they don't care enough? What's going on? Why are there so few people talking about this?", "Will MacAskill 51:54", "I just think there are many issues that are enormously important but are just not incentivized anywhere in the world. Companies don't incentivize work on them because they’re too big picture. Some of these questions are, “Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation?” There’s almost no work done on any of these topics. Companies aren't interested too grand in scale.", "Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always used to be that way.", "If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology. Probably the natural sciences too and economics. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.", "Dwarkesh Patel 53:20", "Will I be able to send my kids to MacAskill University? What's the status on that project?", "Will MacAskill 53:25", "I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than it currently exists. It's extremely hard to break in or creating something that's very prestigious because the leading universities are hundreds of years old. But maybe it's possible. I think it would could generate enormous amounts of value if we were able to pull it off.", "Dwarkesh Patel 54:10", "Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important even if they're not the most important questions? Big picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.", "Will MacAskill 54:34", "Great. Well, thank you so much!", "Dwarkesh Patel 54:38", "Anywhere else they can find you? Or any other information they might need to know?", "Will MacAskill 54:39", "Yeah, sure. What We Owe The Future is out on August 16 in the US and first of September in the United Kingdom. If you want to follow me on Twitter, I'm @WillMcCaskill . If you want to try and use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of the income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours —if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact at all. They provide one-on-one coaching too.", "If you're feeling inspired and want to do good in the world, you care about future people and I want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can, and 80,000 hours are the sources you can go to and get involved.", "Dwarkesh Patel 55:33", "Awesome, thanks so much for coming on the podcast! It was a lot of fun.", "Will MacAskill 54:39", "Thanks so much, I loved it." ]
[ "https://en.wikipedia.org/wiki/William_MacAskill", "https://www.amazon.com/What-Owe-Future-William-MacAskill/dp/1541618629", "https://www.wikiwand.com/en/Mohism", "https://en.wikipedia.org/wiki/Peter_Singer", "https://www.givewell.org/", "https://www.givingwhatwecan.org/", "https://en.wikipedia.org/wiki/Holden_Karnofsky", "https://en.wikipedia.org/wiki/Toby_Ord", "https://www.forourposterity.com/burkean-longtermism/", "https://www.wikiwand.com/en/Solow%E2%80%93Swan_model", "https://www.pbs.org/wgbh/americanexperience/features/green-revolution-norman-borlaug-race-to-fight-global-hunger/#:~:text=Norman", "https://ftx.us/?fromIntl=true", "https://www.theguardian.com/books/2010/sep/26/baghdad-centre-of-scientific-world", "http://mason.gmu.edu/~rhanson/futarchy2013.pdf", "https://ageofem.com/", "https://80000hours.org/podcast/episodes/will-macaskill-ambition-longtermism-mental-health/", "https://en.wikipedia.org/wiki/Moore%27s_law", "https://iep.utm.edu/mill-eth/", "https://slimemoldtimemold.com/", "https://www.bloomberg.com/news/features/2022-04-03/sam-bankman-fried-ftx-s-crypto-billionaire-who-wants-to-give-his-fortune-away", "https://twitter.com/willmacaskill", "https://www.givingwhatwecan.org/", "https://80000hours.org/" ]