2025-06-26 08:00:01
Why Does Software Keep Breaking?
Hey, we’re doing a quick bonus episode of the standup. This is going to be short. It’s going to be hot. It’s going to be spicy. Casey is going to give us kind of a thesis on the change of software and where things are going and perhaps his thoughts on the web world and on the programming world.
Always break.
Anyways, sorry. Casey’s going to give us a thesis on why does software always break. Casey, this is a hot new thesis. No one’s ever seen it before, except I posted it back in 2021. It’s just gotten more true since there’s that. It actually has gotten more true and reposted.
So, essentially what I wanted to try and point out to people because I don’t think that this is appreciated enough. We talked about it in the previous episode of the standup that we did where I was saying a lot of the things that I see people saying positively about AI coding I don’t necessarily disagree with. I just think they’re not including this really important other part, which is that a lot of the things that people are talking about doing with AI are things that no one should have had to do in the first place.
They’re being done because we’ve created such a bad programming environment that nobody wants to interact with it anymore. Right? It’s always breaking. It’s always changing. It’s got way too many layers of abstraction. Most of those layers don’t work very well. There’s way too much complexity. Like all this stuff. So, yeah, it makes perfect sense why someone reached for an AI tool because why do you want to do it, right?
And so, I just want to talk about sort of a separate part of that which is just the very unreliable nature of software nowadays and especially builds. It’s like I got this piece of software that I wrote and like what are the chances that I could compile it again in 6 months or something or a year or two years, right? Or even not even compile, what’s the chance that it’ll run still if it uses things like REST APIs with some web service, right?
So, you know I’ve got these different web services I’m using. So even if I don’t have to recompile my thing or even if it’s an interpreted thing that runs and I’m keeping the same version of the interpreter or whatever else, it’s going to make these API calls out to web services and those services could change.
So what I did, and hopefully we can put the graphs up as I’m talking about them, I posted these on Twitter. They’re very simple. It’s just taking the fact that look, if you assume—right, and I say this in the tweet stream—if you take the chance that something will remain working after a year is some probability. So, like a 90% chance that this Twitch REST API that I’m going to call has a 90% chance that they will have kept it the same a year from now so that it will still work, right?
My app sends this REST API call out to Twitch; it expects a certain response back, and they’re not going to change it in some breaking way right in a year. If we assume that we just have some probability, like 90% for that, we can pick. We just imagine one, right?
Or, you know, we have to measure just imagine in your head 90% or something like that. Then the chance that your code remains working after x years is just given by ( p^x \times n ) where n is the number of those calls to things that you have, right? So you can graph this.
What I showed is that if you had a 99% chance that every API call you use (99%)—which is way higher than anything in the web world typically has—after a… Year. But 99% across all tools, the graph still looks pretty bad, right? You look and you look at it, it’s like, okay, after a year it’s like it still looks pretty darn bad. It goes down pretty rapidly based on the number of tools. So you can see the graph I show is like one tool, two tools, three tools, four tools, five just goes down.
I love that book. Dr. Seuss book. Yes, it’s a great book. He captured Dr. Seuss was a lot of people don’t know that he the doctorate that he had was in computer science. It’s very software engineering. He was one of the few to become a professional engineer in software as opposed to the rest of us.
Yes. That book, One Fish Two Fish Red Fish Blue Fish, that’s where Two Fish comes from. He did that cipher, along with Bruce Schneier.
So anyway, if you pick something less than 99% like the graph for just 90%, it’s very depressing, right? You might know me from my roles. Now being a classically trained actor, I had no technical knowledge whatsoever, so all those terms of phrases and tech were quite a challenge. Thankfully, I had access to courses.
Courses such as Memory Management teachable skills using projects.
Look at that graph. It’s like nothing’s going to work. Even at 90% chance that it remains working over a year, not 90% chance it will break. 90% chance it will work over a year. It is dismal. Even with just one or two tools, it’s horrible. But if you’re talking about using seven or eight APIs, which is very common in software nowadays, forget it. You simply will not be able to use this thing if you update anything, right? Correct.
And so I just wanted to underscore like this is not good. This is not a very sustainable way to work on things, and it has incredibly bad knock-on effects. One of them is what we talked about, which is that no one wants to do this anymore. It’s not satisfying to work in this world because you constantly feel like you’re drowning, I feel like, right, with all these things.
Oh, this broke. This changed. Oh, they changed the way React worked. Oh, we’re doing this now. Oh, that web service doesn’t even exist anymore. They canceled it, and now it’s this other web service. That’s not programming to me. That’s like some kind of weird management feeling thing.
It feels like you’re a manager more than a programmer because you’re just trying to make this house of cards not collapse by constantly shuffling things around.
I totally understand where people are coming from when they want to reach for these AI tools. I totally get it, and I think that if I was developing software in this world, I would reach for them too. So that’s why when I say I don’t use any AI tools right now, I’m like asterisk. But that’s because I’m working on very specialized stuff. I’m not having to do these things, right?
And so it’s very obvious to me why I have a very different opinion. It’s not that I don’t understand the power of AI; it’s like no, I do actually understand the power of AI. I just think that a lot of the power of AI is really only correcting pretty bad situations that we sort of created ourselves.
That’s mostly what it’s doing for you in these scenarios. But also I think this has so many bad effects on everything else.
Performance, security is the biggest one. When things are changing like this all the time, you just have so many exploit services and you have no idea. Like if you ask me to secure some system, that’s using all of this kind of this way of working. I just have no idea. I’m like how could I? Right.
Here Casey, I got I got a good one for you. Let me just show my screen really quickly. This happened just a little bit earlier when I was doing chess, the vibe coding, my vibe coding. Good times right here.
Let me just go like this. Am I I think I’m not mirrored, right? Yeah, I’m not mirrored. This was the thing that I got back from Claude 35, which was I asked it because I was like, “Hey, I need a login. We’re going to use Twitch and I need to be able to store obviously my session data so I can make sure, you know, when someone makes an HTTP request, I know who I’m talking to.”
And so when I did that, this is the message I got back because I asked, “Is the session data secure? Like if I just knew someone’s Twitch ID and username, could I just log in by them?” And they’re like, “Oh yeah, totally. You definitely can. Let’s add a JWT.” Like this was going to go out and then someone could just spoof me and then just start, you asked it, right?
But I had to know to ask it because I’ve written this problem once before. So it’s like I know what I’m looking for and it’s just hilarious about how dangerous security issues actually are. Because if you didn’t know this, how would you know to ask to do some sort of JWT or some sort of cryptographically signed thing to verify you’re from the server and not from someone malicious?
Like you just wouldn’t even know the aspect. I didn’t get to see your initial prompt. Did you initially say make it secure? How did it know you needed a secure service? I mean, okay, so again, that’s fair. That’s prompting issues. We can call that prompting issues that I did not tell it to make my login secure.
It is funny though. It is very funny that it’s like, “Well, oh, you want it secure? I can do that.” I love that. That is the gotcha.
I’d like to raise a more serious point about that though, right? Which is that if you imagine how things work currently, they’re already bad, but let’s just take AI out for a second. If you imagine the way things work currently, there is actually one saving grace, which is everyone uses all these frameworks and all these different things.
They pile all this stuff together and it’s a nightmare to secure. Yes. But what is true about that? Well, you at least know that you are using this thing and there are security researchers also looking at this thing. So when there is an exploit, you either know or could fix if you update that fix, right?
So if I’m using one of these things, I don’t know what’s a good example to pick here because I don’t, we can just use Express.js. Express.js just literally had this with or this was like six or eight years ago with Reax expansions. You could do certain Reax expansions and have a Redex DOS, effectively a single request taking down an entire machine.
So that’s going to say like I don’t know what to pick because I don’t know what would be fresh in people’s minds, but so something like that. So you take an exploit like that, it’s like that’s bad and that happened because you’re using all of this other code that you have not secured yourself.
And that’s not a good thing, but it does mean that when someone finds it somewhere, you will at least know, and if you update your system quickly, you can mitigate that damage to some degree. Right? Mhm. If you imagine the alternative world where you know some open AI product is just crapping out exploits like that because in the system there’s certain things it just didn’t know or learned improperly, that’s like when it compressed them down and it’s got its weights.
It never really understood how to secure this one particular thing. Everyone who asked for that thing now has that in their own codebase, and there’s no tracking because we have no idea how many people asked for something that happened to hit that part right of the LLM’s production chain.
And so unlike the ExpressJXs, where everyone at least knows they got jumped, in this case we don’t even know where all of those exploits are. They’re everywhere. Right. Yes.
And then you also have the kind of like the creeping problem or the leaky abstraction which is everyone has that problem. A quarter of those become open source projects. The LLM learns from those open source projects.
It re—you know, like it, it, like Donald Rumsfeld talked about this. RIP. Mhm. So, it’s one of those things where it’s like I just really don’t—I don’t actually dislike progress in some light way.
Like I like computers getting better and I like pushing the boundaries of what they can do. And people have this weird thing where they think if you’re not pro AI, you’re just like some kind of person who just doesn’t understand or doesn’t like progress.
Like no. It’s like the problem is I’m not hearing anyone solve these problems. If I thought that these things were in competent hands, where people were making reasoned decisions and they saw the train wrecks and they had ways of figuring out how they were going to fix them, I would be much less worried.
But like a lot of the stuff I see with AI just feels like people who don’t really know what they’re doing applying these things way too early. And I think the costs to software are going to be really high.
And you know, these people are gonna be nowhere to be found. Right? They’re going to have collected their huge paychecks from companies that never even made money in the first place, that just were VC funded. They’re going to take a bunch of that money and they’re going to be gone, and they’re not going to clean up the mess, right?
So, in my mind, it’s like there’s only two ways this goes:
That is going to be the nightmare to end all nightmares because if you thought security was bad now—and it is—oh my god, dude, if you thought performance was bad now, oh my god, right? It is going to be an epic nightmare.
So, I just think I wish people took this stuff more seriously, and they’re really not. And that’s the part that really, you know, definitely gets me upset about it in that Casey Rant way. So, I’m just like, “What if this doesn’t work, guys? Like, what if it doesn’t?”
You’re just—you put all your hopes on this someday getting way better than it is right now. What if it doesn’t? I’m so nervous.
So, let’s fingers crossed that it works. That’s all I would say. Yeah. Two things. Number one, Casey, since you like seeing computers pushed to their limits, are you a fan of JavaScript on the server then, right?
True, true. CPU. There’s nothing to bring that CPU temp up for a smaller amount of users than putting that JS on the back end, boys. But only one of the cores, TJ. Only one core.
Yeah. Can everything go through a MySQL query as well? Everything. I mean, every like there should be no data stored anywhere except in MySQL. HD. Let’s query every byte, every pixel. We call it our SQL at this point. Our sequel. Our sequel. Yes, it’s our sequel.
My serious point is I heard it framed this way a while ago. Justin Keys, one of the maintainers of Neoim, was talking about he would be very interested to see—and like this is more towards vision one of AI being good at solving these problems than not—is like can we start seeing AIs reduce the entropy of a system.
Yeah, right now they’re very good—very good. I mean, okay, 10 years ago, we would have considered it literal magic to type something in and have a website come out of any kind. Correct. So, absolutely. So we’ll say very good. I’m going to say very good because it’s like unfathomably good compared to what my prediction for where we would get in my lifetime 10 years ago.
Right. The human language understanding part is like clearly light years ahead of anything we had 20 years ago. Yeah. And there’s a bunch of other follow-up effects. But so it’s like, okay, it can add a bunch of stuff to my system. I need a new feature. I have a clearly scoped bug request. I have some idea that I’m like I’m the driver. It is my agent, right?
I think that’s kind of where this—like I’m sending it as my representative to go solve these things. I mean it can like maybe do that, but it doesn’t actually—like if I just say fix the mistakes or like I say make it better inure this code base makes more secure, sometimes it will pick some up—some obvious ones—which also you’re sort of like okay but then shouldn’t you have gotten that the first round?
It is kind of like a strike against the LLM you have encoded inside of you. The secure pattern and you gave me the non-secure pattern—that’s stupid. I’m not writing the code anymore—pick the secure one.
To be fair, that is also what a human would do. It has learned correctly. If you ask them to write it, they will write the script—like why didn’t you write the secure one? It’s like I didn’t want it took longer.
There’s this paper a while ago where it was like LLMs were like more tired in the winter time because they had the time system. They had sadder answers and they worked less hard when it was winter time because all the training data is like, “Oh, it’s January.” And everyone’s like, “Dude, I hate work. I hate—oh yeah.”
They also got more accurate on math answers if you said take a deep breath. Like their accuracy actually skyrocketed because it was just like, well, because remember LLMs are just reflections of written human behavior, right? Like that’s what it is. Error minimizing devices, right? So the next most likely thing after try again and think smarter this time is to get a better answer.
Yeah. Could you try—get hey, let’s take a deep breath. Like relax for a second. Could you answer one more time? You’ll literally get a better answer from most. People because they’re like, “Oh, yeah. I feel better. Okay, yeah, here. Maybe they solved that. Whatever. I don’t really know. They’re doing all this.”
But my general point being I don’t currently see them overall being a thing that I can let go and it reduces the entropy of my codebase which is that’s if it could do that even just a little we would be like way more on the track of your vision one where it’s like, “Oh, we can just let this run on Chromium. We’ll just spend a billion dollars for we’re going to run it for 500 million human years, right, on Chromium.”
And like in six months it’s going to come out and Chromium’s going to be tight. It’s no longer one gig per tab page. It’s going to be 800 megs, boys. 800 megs only to load that static site, right? And we’ll be like, “Incredible.” But that’s not—I don’t see that as a thing people are proposing of like we are close to. I get the zero to one. I get like smaller features. I get like agentic things for different stuff. I’ve seen value in each of those, but like I’m not seeing anybody being like we just let this run wild on Chromium. It’s a superhuman programmer. You know what I mean? Like where is that?
Yeah, I mean that’s probably because the direction originally of the research, right, is generative, right? So like it’s why it’s called generative AI is because it’s look, you know, so it probably taken them a bit to course correct to the extent that they even want to course correct to do like, “Okay, what if it’s about refinement now,” right?
Although again, like reinforcement learning is kind of in that direction, right, so that is a change. And so, you know, presumably they are kind of working on this stuff; obviously it’s, yeah, you know, I don’t work on AI so I don’t have predictions about how they’re going to get there.
But anyway, I do want to throw out something also about what you said a little bit earlier when it came to just like the security vulnerabilities and all that. I think one of the reasons why this will be the case is that we are also marketing a tool that gives the illusion of experience to people that don’t have the nomenclature to understand the usage of the experience.
Right. It’s the same reason like if you’ve ever chopped wood, the first time you chop wood, you almost hit off your legs because you swing your axe and you realize your legs are too close. You’re like, “Whoa, oh my gosh, I almost just hit my shin with my own blade.” Like, you learn to stand differently because you had a buddy who did that.
Yeah, I know. It’s a very reasonable thing for a lot of people to do. So, it’s like, that’s what I worry about. It’s not, you know, security—hopefully, it will get better. I assume that all things will be better in 10 years than it is today. I think anyone will agree with that statement.
I’m not measuring how much better all that kind of stuff is, but experience doesn’t get better. People will still be the same people. So if you’re marketing to the same people, they will objectively build bad stuff and they’ll objectively build insecure stuff. They’ll do stuff that’s crazy. They’ll be like, “Hey, I need to be able to access my database for the client.”
No one—the LM is not going to be like, “Hey, bro, that’s a bad idea.” They’re going to be like, “Got your back. Are down. Client is downloaded. Let’s go.” That’s what they’re going to have to deal with. realize. What they’re doing is creating the best possible endgame for AI.
Okay. This is the best possible endgame. So, it all works out. They get to the point that they want, right? They’re like, “Okay, this thing is actually like a master programmer,” right? And even better because it knows more domains. Master programmers are typically confined to a certain domain, but this AI knows more. So, it’s great. We’ve got it. This is going to be great, right?
Then, what they realize is they’ve still got that obsequiousness aspect where it’s just always like, “Oh yes, absolutely. Oh, I’ll do that for you, master.” Okay. You know what I mean? Yep. It has that kind of really unsettling degree to which it’s accepting commands and doing what you ask.
What they realize is that being a difficult person was critical to master programming. You had to be able to turn to the program manager and say, “You are so stupid right now. You have no idea.” You had to look at them and say, “You don’t understand Galactus’ pain. You don’t understand this thing.” It’s like shut up and leave the meeting, right?
So, what they have to do is rework that fine-tuning process they do afterward to make the AI a difficult programmer, and then we have fantastic software. Mhm. I love it.
The problem is, I want this future. AI companies, where are you? Do this for me. Make the AI a difficult programmer who changes the world, please. I will be so happy with that outcome. You heard it. Here we’re gonna cut that clip, and it’s gonna stop before Casey says, “Programmer AI companies, make me that.”
No, but they’ve had that forever. That’s probably been there since 10 years ago. That already exists.
One thing I want to quickly go back to is Prime’s point about security. I feel somewhat less optimistic about it because people will be able to build more complicated ways to break systems because of AI. Not only do I think there are going to be more services, but certainly in absolute terms, there will be more insecure things on the internet. I think that’s pretty much undeniable because there will just be so many more things on the internet.
The second thing is that the cost of creating software in this world goes dramatically down.
Okay, well, what happens when costs go down? People make more of it—malware, hacking tools, DDoS bots, and all these other things. There is something where I don’t even know that we can say for sure, “Oh, security is going to be so much better in 10 years.” The people making bad software are already doing that, but evil software will be more prevalent because it’s going to be cheaper.
So, I don’t know. That’s a scary thought, actually, because if you think about it, it’s like, what would you have to do to make an AI system that was good at producing more secure software? Well, we’d have to write a counter agent that’s looking for exploits, and we’re going to run that, and that’s going to be part of the feedback loop where we train this AI.
Training AI sometimes takes months depending on how you set it up. Know how serious this thing is. We that means whatever AI agent for finding the things that we can write we have today; we won’t have the AI to deploy those things for a little while, but the people who are exploiting the exploits, they will have that system for finding the exploits today. So the cat and mouse game just got to that, the—I guess you don’t know who the cat or the mouse is, but the exploit finders are always at an advantage because they will always have the AI system for finding exploits before the people who have the one that can correct it, unless again there’s some really revolutionary change in how these systems are made.
Right. To circle back to your first point, Casey, the more things you have in your software stack, the more difficult it is to change anything because it’s more likely to take down your entire system. People are still running Windows XP. Yeah. Right now, in mission-critical scenarios, they have Windows XP.
Like the joke I was saying for 4chain was they got owned by some 15-year-old bug or what was it? Prime like I can’t remember. It was some PHP vulnerability. I can’t remember what it was. Some ancient PHP vulnerability that was deprecated like 42 before I started using PHP. It was so old it was deprecated. I didn’t even know PHP ran on Windows XP in those days.
What did it? Yeah, who knows? I don’t know. But I guess you could still install it. It’s crazy that they were contemporaneous because I always think of Windows 7 or something. I believe I did. Zamp server XAMPP. Anyway, it doesn’t matter, TJ. My point though is that people don’t update when it’s even available to them.
It’s not like, “Oh well, the hacker guys got the new stuff today and the new fix comes out next week, so everyone’s up to date next week.” No, not even close. Not even close. That part is a little bit something to be wrestling and grappling with.
Once again, my point through most of it is if you know things about software and you think more software is going to exist, that is a nice combination of skills to have. I don’t know exactly what software development will look like in 5 years, but my general thought process, just from first principles reasoning, is if you know a lot about software and you’re good at it, and your prediction is more software, that is a good combination of things to have. You will be valuable.
It might not be hidden keys inside of neoim. I don’t know, maybe Neo will be dead in five years, and it’s all Tesla’s brain control thing from Elon, right? And that’s the only way we’re coding. Sick. But knowing more things about software is still better because I’m going to say use a JWT instead of storing this in plain text cookie on the front end, right? Like that will be better. It will be better.
Alright, so I also have one more thing that I want to point out with all of this, especially targeting people that are, you know, no coders to make code-like things. How I learned how to code was that I first started off and they said, “Okay, this is an if statement.” Somewhere between 1999 to 2005 is when I started kind of doing basic exploration of code.
Here’s an if statement. Okay, this is an if statement. Okay, this is a while loop. Okay, this is a while loop. I want you to print out a house. I want you to print out a diamond, and you got to print out four diamonds. And now you’ll notice it gets really annoying. I want you to be able to do it by different sizes.
Here’s a function. Here’s how you can make a print diamond function, and you go through all these things and you slowly go, “okay, okay, yeah, okay.” You build up this kind of picture of how code executes. You learn to debug. You do all these things.
A lot of people that are vibe coding, I’m curious how discouraging it is to get dropped into a Next.js app with Supabase with Oz Zero with like 900 things, and you have to start by debugging a request response. Yeah. And you’re like, “what’s a server?” and you’re like, already into some sort of like crazy amount of difficulty where it’s like, “I started by drawing a diamond,” right?
That starting point is vastly different. You could draw the owl, but you had the middle steps. I had all the middle steps. Draw the owl. They’re literally given the owl, be like, “Draw the owl.” Right? It’s just like that’s really, really hard.
And so I’m actually curious about the success rate of somebody going through and being dropped in hyper complex projects comparatively to, “hey, you’re young, you have this free time, we’re now putting you through this school, like maybe it’s high school time. Hey, let’s draw a diamond.” We’re going to draw a diamond together.
We’re going to use QBASIC or some basic language, right? Lua. And you’re just going to do the simplest kind of form of doing things. I’m just curious what that does to somebody. Because I know there’s going to be a bunch of success stories. There’s going to be people that are super stoked about programming. They’re super stoked about building a product, and they will figure out a way no matter what system you give them.
But I wonder, like overall, does this actually help make programmers, or does this actually hurt the learning process? There’s also, I think, going to be a bunch of success stories of people who hate programming, but were able to make whatever their business product thing is without really having to know anything. And that like is certainly going to happen.
You can say like maybe it’s a net negative for programming or something like that or for the web. Although probably most of these people are not building like foundational technologies. Hopefully, like it, but like that is going to happen, right?
Yes, they’re going to make their own website. They’re going to get to know they’re going to make Uber for cats. They’re going to get to make Uber for cats. And they finally don’t have to recruit their other friend and tell them, “I’ve got the idea. You do the code. We’ll split it 50/50.” Right.
They’ll just be like, “I’m going to do the Woz Twins would have owned Facebook. They wouldn’t have needed Mark Zuckerberg. They would have owned Facebook.”
Yeah. But would they have had Justin Timberlake say, “Drop the ‘the.’ It’s cleaner.” Yeah. You know, I don’t know. Would you have had that?
Yeah. Yeah, drop the the Facebook meta. It’s just clean. It’s just clean. That’s what he said. They didn’t listen to him at the time. Zuckerberg realized later. Took a little while. Took a little while to sink in.
I will also give the inverse which is that you can also ask AI any question.
And you can repetitively ask stupid questions over and over again, and it will repetitively in the obsequious way. I’m not sure how to turn that word obsequious. This is whatever that word is for saying subservient.
I know the word; I just don’t know that term, the lowly term. It will repetitively be like, “certainly I would love to help you.”
No matter, unlike Stack Overflow mods, you will not be marked as a duplicate or opinion-based. You will actually be given a nice full answer every single time. So maybe, in the end, it does actually help more people achieve their coding dreams.
I don’t know. I want to make sure that people don’t think I’m just hyper negative on all those things. I just don’t understand how this affects the new people or how it affects learning because I also had no shortcuts.
When my teacher said, “build a maze recursively,” I had to learn recursion at that point. There was no other option; I had to learn it. I couldn’t just get an answer. I had to figure it out.
Which is like there’s something there that is very special. And I don’t know where the balance is. I do think that there might be an argument there for like can we make an AI that’s been trained not to really give you full answers for educational purposes.
So, it’s like the Rabbi GPT kind of, right? One that’s going to give you a hint to help you get unstuck or to help point you in the direction that you need to go, but it’s not going to just tell you how to do the thing because it wants you to learn.
I assume that is doable if you spend time training it to do such a thing because obviously, they train it to do very complicated things already. The post-initial learning phase stuff is very complicated at this point.
So I’m assuming that if someone put their mind to it, this would be very doable or maybe someone already has. That does sound useful as a learning tool.
Because a lot of people don’t have the ability to ask a great programmer who’s sitting next to them or something. They don’t have that opportunity.
So yeah, the two that I know for sure like Bootdev, shout out promo code by the way for me and Prime if you like that. Share the code.
They’ve got a little AI helper thing, and it’s got special prompts for each lesson and a bunch of other stuff like that. So it can help you when you get stuck on a lesson.
And it’s not supposed to be like “here’s the code.” I mean like I’m sure you could prompt objective blah blah. It’ll give it, but it’s like okay but it’s helping you to learn. So at least its aim, its error minimization is towards that.
The other one that I’ve seen is called something like Synthesis School, which is like a bunch of AI tutor things, but they build it into a bunch of lessons.
And so like this is I definitely think, and I’ve said this before too, I think people are underestimating in the learning phase.
If you are motivated to do it, LLM can be very helpful at doing that. Now, you have to make sure you’re not getting gaslit into believing some function doesn’t exist, but for basic CS fundamentals, it’s got all of those books loaded in directly. You could probably ask it what’s on page 37 of an algorithms book and it’ll pull it out—it probably knows what I’m saying. It’s seen it so many times on the internet, so for basic CS stuff, it can get you far on a bunch of these basics.
You can be asking it questions, you can say, “explain that again,” or “explain it in a way that I would understand.” If you really don’t know math, you can ask, “can you relate this to a physics example for me? I understand physics but I don’t get computer science.” People are definitely sleeping on that aspect of getting help, but you’ve got to do it yourself.
That’s kind of the point, but that’s also where the danger is, because even if you’re semi-desiring to learn, it really is a desire magnifier. It really reveals your ultimate desires. Was your desire to simply get the thing done, or was your desire to learn? And if you don’t have your desires correct—or at least, if what you think of yourself doesn’t match your actions—it will make revealed preference. Revealed preference. That’s the term I’m looking for.
This is why I think having an AI that’s specifically designed for this, and you only subscribe to that service, would be helpful for people. Because I don’t know about you guys, but if I want to eat more healthy food, the easiest way is to just only buy the healthy food so I don’t have bad food around the house, right? It’s much harder if I buy a bunch of cake that I love and I’m just supposed to not eat it. “Just only eat one slice a week, Casey. Whatever.” My wife does this, and she’s like, “I bought natural popsicles for the kids,” and I’m like, “Damn, I love strawberry.” It’s very hard for me not to want to eat them.
This is what I’m saying for real, though. I feel like the AI is a bit of a problem that way, which is why it would be nice if you had a service—like OpenAI or somebody—that has one that’s education only and is specifically designed not to give you answers very quickly. It’s like, “I’ll dribble out some stuff,” I’ll give you some hints. Maybe you could even bake the concept of time in there.
if you haven’t been working on this for a couple days, I’m just not going to tell you anymore. You have to spend some time trying it yourself.
I could see that being very helpful for people because, you know, inaccessibility is best. Willpower is second best, right? So, if you can have that, that would be cool.
I think that would help bring out those learning abilities of the LM so that people aren’t too tempted to just ask, “Just tell me how to do the freaking diamond,” right? “Just give me the code.” I also can’t blame people for doing that. I would totally do that too.
This is why I say to have the AI not do that, it’s better, even though it’s less—it trains your willpower less. But willpower is hard. It’s hard for everybody.
If you get the experience of actually solving it for yourself a few times, then you’re like, “Oh, it actually was kind of rewarding,” right? So like it really can help you, right? If you start out and then suddenly your time starts going down on how fast it takes you to row 200 m, right? And you’re like, “Sick, that feels good.” Initially, I wasn’t thinking it would feel good; I didn’t see any change at all at the beginning.
You can get there—like the same thing can be happening for some of these too, where in your brain it’s getting connected like, “Oh, working hard can pay off.” Interesting, interesting! Yep, believe it or not, chat, believe it or not, I thought this was going to be a super short, quick episode.
We are probably like an hour in at this point. So, we’re 36 minutes in. That’s why I said we’re going to stop and start a new recording.
So, you guys on YouTube, you can like it. Like it. Like the video right now. Subscribe. Leave a comment saying for this bonus episode. I never ask people. I never do calls to action.
So, hey, like he’s TJ do it. Press the subscribe button. TJ streams, by the way. He has computer enhanced. Yeah. Slam dance that. And the bell. What about that bell? You got to click that bell.
Oh man, get that bell. Click that bell. Look at that bell. You know what YouTube needs to do? Why does that bell not make a sound when you hit it? You know what I’m saying?
Like you get that little Pavlovian response for smashing the bell. Ooh, that would be nice because someone at YouTube is still trying to figure out which Gemini prompt to type in to get that to happen, and it hasn’t happened yet.
Yeah. Yeah. Yeah. Then the sounds. It’s because they’re like, “I’m not going to do it until two weeks before my review cycle so I can have a good review cycle. I can promote it to L2 and then I can trash this project to get back to L3.”
And then after that, we can just delete the whole downvote button and then I’ll get promoted to a VP of Upvote Downboat Systems. And then we can close down YouTube because it’s a Google product. Boom.
Suite. That’s what we’re talking about. Suite material, boys. All right. Well, hey, that was fantastic. That’s the end of the episode. Goodbye, everyone. See you later. Take it easy, buddy.
Five errors on my screen. Terminal coffee.
[Music]
2025-06-22 08:00:01
Hello 大家好,欢迎来到新一期的痴人之爱。 我是阿卓。
这期节目我们会延续奥克塔维亚·巴特勒的血孩子系列,继续来聊他的另一部短篇小说集血孩子。如果大家喜欢我们的节目,欢迎大家在小宇宙或者微信公众号平台为我们提供支持和赞助。毕竟我们一如既往还是没有广告和收入,大家的支持会为我们提供继续创作内容的动力。
那这期节目的嘉宾是我们的老朋友肖一之肖师傅,他在劳模和歌王模式之间来回横跳,之后在跳到散伙饭以后再次上桌吃饭的退役主播。上期节目其实已经提到了肖师傅跟血孩子的渊源。由于肖师傅对奥克塔维亚·巴特勒非常热烈的感情,他踊跃报名了血孩子的翻译事业。可惜下手太晚,错失了亲自为自己心爱的作家做翻译的机会。
上次曼兰已经从选题编辑的角度讲出了这次事故的经过。现在肖师傅要不要从落选艺者的角度来讲讲你跟血孩子的渊源?
“不,这个没有什么特别多可以讲的。我在豆腐上看到曼兰他们发了艺者招募,当时我还转了。过了一段时间,我还回去看了我和曼兰的豆游,二三年五月,为什么会是在五月呢?无非就是我又刚好又把这篇又交完一遍了。就是又交完了一遍之后就觉得,哎呀还是有点背不住,他万一还没找到人呢,万一还有可能呢。就这样我就去问了一下曼兰,错过了呗,人家两本书一折都已经找好了。”
所以就是这样的一个动手太晚的教训就是,这个世界上什么都不能动手太晚。还是得感谢我们下手太快,一听肖师傅下一学期开课要讲巴特勒,立刻就把肖师傅给约上了嘛。
接下来,我们就请肖师傅来讲述一下你对奥克塔维亚·巴特勒的爱吧。因为在上期节目中,我觉得我已经讲的太多了,这期节目我决定闭麦,把所有的麦克风的环节都交给肖师傅。
那么在肖师傅的科幻文学课系列里面,你会把巴特勒放在什么样的位置来介绍,以及作为黑人女性科幻作家,巴特勒的写作有什么样的特点呢?
“我觉得大家不要被阿卓欺骗了,他这个意思就是把今天的活都甩给我了,然后他自己可以在一边只要听就好了。”
是的,确实就是这个意思。上节课,我们讨论的内容已经很多了。这节课老师我不发言了,老师我要水一下了,就是这个意思。
在上一期节目里面,我觉得我已经尽情的贡献了我的燃料,我已经充分的发光发热,发表了各种观点。现在我就把这一节课的讨论部分交给我们的研究章,这样会让大家对我的专业有一些误解,谢谢。我们并不是从事谎报专业,我真的是做十九世纪的。我发现我经常去澄清这件事情。为什么我会上到巴特勒,是因为我真的是做十九世纪的。
不过,我开了一个科幻文学的导读课,因为是开给本科生同学的。我还是蛮希望这个课能有一个相对比较整体性的概念,让大家在上完这门课之后,能够完整地读完一些作品,并有一些具体的第一手感觉。所以这个课上设计的文本都是短篇小说,但时间线是整个科幻文学发展的历程。
因为我真的是做十九世纪的,所以这个课对我来说,有很多小说也是在备课的时候我需要去教的内容。我也要去仔细看一下,所以有时候上课也是为了拓宽一下自己对科幻的认知。我开始看各种各样的选集,查阅各种各样的名单,比如说去看历届的星云得奖的短篇小说,从里头开始挑。这意味着我一开始进入阅读巴克罗的文本,其实是一个带着任务的过程。我需要找到一些非常适合我在课上用的文本。
结果在这样的一个语境里,我的课上涉及了一讲叫做科幻和性别。因为有这么一讲,所以我要挑选一些有代表性的涉及性别问题的科幻作品,肯定乐谷温我选进去了。但是乐谷温我还是觉得不够激进,有更激进一点的东西。于是,我就把我的眼光投向了血孩子。
“新世界的大门就打开了。”的确,如果要用最简单的方法来概括我对巴特勒的热爱,就是这个。
如果你对科幻的印象还停留在黄金时代的科幻塑造的开着大飞船、操纵着大机器人、使用你的激光剑对外星人无情地砍下去,或者是充满了暗黑氛围的黑森林法则,大家都要小心,悄悄地进村,打枪都不要,一旦出头就要倒霉。或者就是漫威塑造的不停拯救世界的英雄们。
我觉得英雄有时候也挺累,需要有一个英雄来拯救一下他们。反正,如果对科幻的印象是在这样的一个比较常规的认识阶段,巴特勒会是那个很好的入口。他会告诉你,科幻这个文体还可以做很多不同的事情。如果你对庞大叙事的兴趣并不大,而是想看一个具体的人的故事,巴特勒也非常能够满足你的需求。因为在他的小说里,他对人物的刻画是非常深刻的。
不过,有的时候可能会觉得他对人物的刻画实在是太痛苦,痛到让人在读这个小说时,觉得自己的幻肢已经开始在疼痛。这种效果源于巴特勒写作的一个非常明确的特点:他的身体想象。
在巴特勒的小说当中,刚刚阿卓提到的“这是一个血腥的故事”或“这是一个血浆四溅的世界”。的确如此,巴特勒在小说中完全不会回避那些可能其他人会觉得是创伤和伤痛的场景。他用非常直接的方式写出来。
例如,在《血孩子》中,有一篇叫做《黄昏清晨夜晚》,写的是一种遗传疾病。携带这种疾病的人会在某个时刻发病。一旦发病,便会变得狂暴,甚至会杀戮亲属。因此,在这个小说中,叙述者的父母在他中学毕业时发病了:他的父亲杀死了母亲,剥下了她全身的皮肤,随后还砍伤了她的肋骨刺伤了心脏。这绝对够恐怖了,整个情节充满了B级片的效果。
当然,巴特勒绝对不是为了制造这种感官刺激而描写这一切。在描绘暴力和身体的同时,这种写作方式是理解巴特勒的重要视角。读他的小说令人刺激,因为除了这种血浆片一样的设定,很多颠覆我们日常认知框架的科幻小说也会追求高概念的设定。但是在这个高概念的设定之下,巴特勒有非常具体且感人的方式去描写一个个具体的人物。他能够同时打动读者的两种方向。
当然,当你读完他的作品之后,你会发现自己会陷入好奇:“等一下,她是一个黑人作家,她是一个黑人女作家,科幻对黑人有什么用?”这个问题其实是巴特勒自己要去回答的问题。
所以在科幻史上,巴特勒其实处在一个很微妙的位置。对我来说,我最开始选她是因为上课。刚刚提到的各种问题都意味着,如果你是一个要给人上课的人,你就会知道,哇,文本有太多的阐释性,作为课堂教材实在是太好讲了。从这个作家开始,你就可以讲很多需要被勾连起来的历史。
如果没有读过巴特勒的朋友,如果你觉得OK,我甚至对科幻一切都不感兴趣,仅仅是听完这期播客想要给她一个机会,那我觉得你拿出15分钟去读一下《血孩子》,基本上差不多也就是一个15-20分钟的阅读。我敢打赌,这绝对会是你这个星期头最刺激的20分钟。
回到一些比较大的框架里,我们其实还可以从别的意义上来讲巴特勒的伟大,讲一点跟科幻史有关的故事。王恩提到过,巴特勒自己其实就说过这个问题:科幻对黑人有什么用。
在《血孩子》这个集子里,后面有两篇散文,其中有一篇散文里,他明确表示,在当时写这个文章的时候,还是在70年代末,80年代初,巴特勒就会明确的说,在那个时候成功的黑人科幻女作家其实就他一个。如果把范围放大一点,到巴特勒第一个长篇出版,应该是在60年代末,70年代初的时候,实际上在整个科幻文坛中,真正得到职业认可并能够靠科幻写作为生的黑人作家也只有一位,塞妙尔·德拉尼。他实际上也是巴特勒的朋友和导师。在号角工作坊学习及写作的时候,巴特勒认识了德拉尼,他也一直对巴特勒有所帮助。
巴特勒在探索这个问题时,自己就会问:“各种各样形式的科幻对黑人有什么好处呢?”你甚至可以把这个问题放大:“黑人干嘛要去写文学呢?”“文学对黑人有什么好处?”
巴特勒会说“科幻小说对过去、现在、未来的思考,往往提出警告,或者探讨另一种思维方式,这有什么好处?”科幻小说那种激发想象力和创造力的能力,推动读者和作者摒弃寻常的、狭窄的道路,质疑大家所认为的正确的事情。
这一切对黑人有什么好处?她用反问结束,意思实际上是说:“当然是有好处的。”作为站在社会边缘的少数族裔,突然得到了一个充满可能性的问题,能够在这个问题里抛弃传统,探索一个新的可能性,对于边缘族群来说,这是非常重要的。
巴特勒的边缘性还有另外一个层面,我们刚刚说了她是黑人女作家,所以她的回答其实侧重的是黑人整体群体,然而女性群体在科幻写作的历史中面临的问题并没有比黑人好多少。
一个可能的基本事实是,大家如果对科幻写作有一定了解,会知道现在这个科幻的基本形式、市场及文类基本范式,实际上是一个高度美国化的范式。它是在20世纪早期美国地摊文学兴起的背景下形成的,该时期便宜的杂志和刊物层出不穷,很多高频的作家其实只是借用这个平台来进行商业化写作。
当然,早期的科幻文学在市场和主流的内容上,往往对男性的偏向非常明显。地摊文学的针对目标对象就是男性,大量的编辑和写作者都是男性。在20世纪的早期,美国科幻文学是在这样的市场环境中诞生的,注定了在很长的时间内,20世纪早期的美国科幻依然是一个男性主导的领域。
这种现象在什么时候发生变化呢? 就跟科幻史里头的大多数事情发生变化的时间是差不多的,在50年代末60年代初开始。
当然在这其中发挥了很重要作用的人同样都是女性。 女性作家虽然少,但是还是慢慢地在成长。更重要的是有几位重要的女性作家成为了编辑。最著名的就是 Judice Merrill。 Judice Merrill他在很长的时间内出这个每年最佳科幻选。他是美国科幻作家协会的发起人之一。他也参与了筹办号角写作工作坊,而且作为一个科幻作家,他是非常明确地把女性视角引入了科幻的写作当中。
科幻不再去写这种庞大的、开着飞船维护宇宙秩序这样的故事,写的是非常具体的家庭背景当中的这样的一种想象。再比如说另外一位著名的科幻编辑叫 Saila God Smith。 God Smith这个编辑引入的重要的人是谁呢?是乐古恩。在他编辑两个重要的科幻杂志 Amazing Story 和 Fantastic的时候,是他把乐古恩作为一个新人作者接受了然后发表了,然后培养了乐古恩这个作者。后来乐古恩自己就会说,”是戈德斯尼斯给他推开了创作的门”。
从五十年代末六十年代初开始有这样的一个转变之后,一直到七十年代初的时候,其实有一个倾向是当时很多人已经认识到了,在那个时代里头写的最好的科幻作家可能都是女性。这里头还有一个特别好笑的科幻史上的故事,科幻史上有一个很著名的作者,他有一个非常阳刚的名字,阳刚的就跟铁棍一样,这个名字叫 James Tiptree Jr.。 James(詹姆斯)这不可能是个女的对不对?然后他写的故事也都非常的硬汉,很黄很暴力,但都控制在一个比较微妙的层级上。
他有一个很著名的短片叫 Houston, Houston, Do You Read,就美国航天的那个基地在休斯顿,就是”休斯顿,休斯顿就没有收到我的信号”。讲的就是一些宇航员去执行一个任务,然后因为跑得太快超过光速了,他们回到地球的时候时间已经过去了几百年。他们回到了一个男性已经消失了只有女性的地球。但是这些是来自古代的男人,他们现在回到了一个只有女性的地球。你觉得会发生什么样的冲突呢?所以就是这么一个故事,有兴趣的可以找来看一下,非常好看的一篇故事。
然后在整个70年代初的时候,James Tiptree Jr.会得很多很多的科幻奖,星云奖、雨果奖都会得。会有评论家,科幻评论家在那边非常斩钉截铁地说,他的作品就体现了非常强烈的男性特质,只有男人才能写出这样的故事。然后在70年代中期的时候,发生了一件事情,大家发现 James Tiptree Jr.是一个女人,她的真名叫 Alice Sheldon。她的人生经历特别有意思,她还当过间谍,其实是CIA的人。最后在80年代的时候,因为照顾她丈夫,她的丈夫 德拉兹海默,她最后把她的丈夫先杀了,最后自杀的这样的一个忧伤的结局。
这是一个长期照顾慢性病人带来的精神创伤的另外一个故事。反正就是这个人的人生实在是在跌宕起伏了,但她是一个非常非常好看的科幻作家。Anyway,我只是在说科幻史在70年代那个重要的变化。所以在70年代我们有了勒古恩,有了 Joanna Ross,有了一代非常非常好的女性科幻作家,巴特勒也是这一代人。
她就诞生在70年代科幻,甚至70年代的女性作家们会有一根努力,就是科幻我们就不叫她科幻了,这个 SF不等于科幻小说,不等于 Science Fiction,而应该把它叫做推想小说, Speculative Fiction。它要重新定义这个类型,它想削弱这个因为科学和科技带来的对这个文类本身的限制,而是要突出这个文类最值得推广的那个核心,其实也就是刚才 巴特勒说的,想象和现在的生活,不同的生活方式,不同的世界的组织方式带来的美,带来的可能性。
所以你看 巴特勒是在这样的一个语境下进入科幻历史,她站在这样的一个时代和这样的一群女作家一起重构了20世纪后半叶的科幻。包括在科幻史上,我们还会研究一个东西叫做Afrofuturism,大概叫黑异的未来想象。 巴特勒会和他们有紧密的关系,这种结合了源自非洲文化和非裔美国人自身体验的特殊的未来想象,然后后来还会有沿着他的道路创作的女性作家们。上一期阿卓和麦兰聊了 Lilis,对吧?那个系列叫 Lilis Brood。他们后来出了一个文集就叫 Octavius Brood,就是巴特勒的孩子们,跟 Lilis的孩子们一样的,这是巴特勒的孩子们。
也就是有这样的一批新一代的作家,他们会非常明确地认定自己的写作是受到巴特勒的影响的。所以她在科幻史上有一个非常重要的地位。
最后一点,再回到我个人会觉得在巴特勒的阅读体验当中,会尤其对我们大多数人而言都会带来一种颠覆性的阅读体验的原因在于巴特勒一直在试图在他的作品里面让我们用弱者的方法去想象。巴特勒著名的几大系列,包括Lilis的孩子,包括之前更早的Patternist那个系列,然后最后的你们写完的 预言(Parable)这个系列。他的三个系列和他独立的两本书 血亲(Kindred)还有一些吸血鬼的 Fledgling,所有的这些书的中心人物都是一类人物,他们都是弱者。
巴特勒不写强者的故事,这个非常重要,因为在日常生活里头很少有人会主动去占据这样的一个位置来生活。哪怕你实际上是受人操纵受人剥削的,你也在自己认知世界的时候会把自己放在这个认知世界的主体的位置。对吧?你不会觉得我是那个课题。我以一个每天被人剥削、被人虐待、被人管制的这样的一个人去过这样的生活,那这可能就是太悲惨了。大家还是会习惯一个把自己放在中心的位置。
所以巴特勒的小说当你去读它的时候,你会发现,啊,我对世界认知的方式被倒过来了。它有种种叙事的策略,能够让你去认同和带入这个弱者的想象。待会我们讲到文本的时候会看到更多,核心就是如何站在弱者的边缘的位置去观察、去理解,用一种弱的方法来抵抗和带来改变,而不是用强者的习惯的那个”打碎一切、从塑一切”这样的一个洋溢着征服气息的叙事。
那为什么她会是这样的写作偏好呢?用 巴特勒自己的方法来说是,”我把我自己写进了我的故事里”。她的确把她自己写进了很多故事里,待会我们会提到很多她的个人经验是怎么样被她有的时候根本就没有加掩饰地写入了她的小说当中。但更重要的是,巴特勒所谓的”我”并不只是单指她的个人,而是指一类黑人女性。巴特勒是用生活在种族隔离时代的黑人女性的人生体验作为她的叙事基调的。
在这样的一个时代,作为女性遭受的,所有讨论少数族裔写作,尤其是少数族裔女性写作的时候都会提到,女性在这样的叙事当中是受到的双重压制。她有来自性别和来自族裔两方面的压力。所以在这样的一种人生体验的视角下,你很难说你的人生会有很多告诉你你觉得自己在控制一切的这样的时刻。你的人生经验可能会不停地告诉你,是别人在替你决定一切。
但巴特勒非常巧妙,也非常厉害地在给大家证明,哪怕是没有办法推翻这些被强加在自己身上的东西,哪怕你看起来你只是在幸存,你只是一个幸存者。但是幸存者的生存方式、幸存者的智慧、幸存者的力量也是值得被记录和歌颂。所以我觉得在这样一个意义上来读巴特勒可能也非常重要。
虽然我很想在开头的时候就推出我们短片集里面最重磅的血孩子,但是作为对于上期节目的呼应,我还是想先把恩典这个故事拎出来稍微聊一下,因为我们刚才其实已经讲到了莉莉丝的孩子。恩典这个故事我们其实上一次节目也提到,它很显然是 莉莉丝的孩子的第一步的雏形。故事的主人公诺亚跟莉莉丝就很像,他们都是处在人类和外星人之间的中间人。当然恩典的故事会简单很多,在上一期也是讲了很多嘛,就是莉莉丝的孩子里面,欧安卡利人是海洋流体类型的触手系外星生物。
莉莉丝她被欧安卡利人拯救、改造、训化,然后在极端不平等的殖民关系里跟她的欧安卡利家人们陷入了既亲密又暴力的复杂关系,也成为了欧安卡利人和人类之间不断斡旋和抗争的中间人和反抗者,他们共同组建了跨物种的共产主义库尔混种家庭。血孩子里面恩典的内容其实就很短,没有那么多基因改造或者混种计划的元素。
恩典里面的外星人也不像欧安卡利人一样有一个非常完整的设定,但光是一个雏形。现在读来也觉得非常的有意思,因为这里面的外星人的形态就比较类似于植物的群落,会把人类紧紧地缠住,搏紧接近窒息来制造某种身体上的快感,然后还会伴随着囚禁和奠击之类的惩罚。
那欧安卡利人就是那种滑腻腻的触手缠绕,神经连接感官放大如内高潮。这里还是得在此强调,我们的巴特勒奶奶她真的是非常懂黄色废料文学的精髓的,她真的很会。她笔下的外星殖民者似乎都会附带某种人类快感制造机的强大功能,就让她的小说显得特别的刺激,有一种特别creepy,但又特别fetish的那种情色气息。
然后恩典里面的故事的主人公诺亚,她就是在小的时候被外星人绑架,回到人类社会之后,被当成异类和间谍和实验对象,遭遇了难以想象的审讯和虐待,最终她选择回到了外星人这边为他们去工作。相比于莉莉丝,诺亚的处境就比较单纯,处于一种in between的状态。她很清楚自己只是外星殖民者的打工人,或者她只是一个人类的牛马,她不是外星殖民者的自己人。她对他们也没有什么依恋和归属的感情,她只是把他们当成打不过就加入的老板们来服务。
对人类这边来说,诺亚她也不是同类和自己人,她是一个给强大的殖民者代言和打工的叛徒。人类不敢明目张胆的怨恨外星殖民者,但是人类会把自己的怨恨和愤怒撒到诺亚的身上,所以诺亚也回不去人类的世界了。反正大概就是这样的一个短片。
所以我其实很好奇,肖师傅是怎么看恩典和莉莉丝的孩子这两个故事之间的一个联系呢?而且这里确实也涉及到了刚才肖师傅讲到的一个问题,就是在人类和外星人这种非此即彼的对抗关系的夹缝里面,巴特勒他同样还是设置了一个女性,或者说一个黑人的女性作为中间人的这样的一个特色。这是一个巴特勒写作中非常常见的一个人物的设定,或者说故事的设定。所以确实也很好奇,肖师傅是怎么看这个问题的呢?
对,黑人女性在他的故事里头,就是我们刚才说的,把自己写进故事里头的一个表现之一。巴特勒就是一个个子高高的性格,特别内向。就是他的社交障碍到了可能要致死的地步了,从小到大就在遭受各种各样的社交困境。你可以看到这种源自于自身体验的经验,会非常明确地出现在他这些人物里头。
当然更重要的部分是,恩典是有一个缘故事,他在这个集子背后会明确地说,“对我这个故事其实是美国八十年代到九十年代很出名的那个……”这个时候突然有一个中国人的名字出现了,华裔核物理学家李文和间谍案。简单地说就是李文和他的上一辈是从大陆去了中国台湾,然后他是从台湾去美国念书,最后在美国国家核物理实验室工作的这么一个华裔科学家。然后在八十年代的时候,他被控给中国当间谍。
当然这个整个指控和整个审理过程充满了各种各样的问题,但是在此期间他遭受了各种各样的不公正的待遇,然后一直要到九十年代对他的起诉才会被撤销。李文和后来出了一个自传,这个自传他讲的就是这么一个人,”我选择了加入这个国家,我选择了为这个国家来奉献,结果他居然怀疑我”,对我实施了种种审查手段。对,也就是我是一个两个世界都回不去的。
所以有这样的一个故事,就是说有一个现实的对应物。当然我可能要吐一个槽,这个题目翻译成恩典稍微有点误了。恩典是什么意思?你想中文里头,恩典、恩次,尤其是如果放到稍微宗教一点的语境里头,它是一个来自神灵的好的东西,但其实英文讲的是 amnesty,grace。基督教语心里头,恩典、恩次是 grace,那amnesty是一个非常法律性的,指向政治的这么一个词。它就是指你有罪你被赦免了。
所以当你在看这个故事的时候,你可能看到一半的时候,觉得说,哦,谁被赦免了,哦是这个叙事者诺亚。因为诺亚像刚才阿卓说的,她的设定是她被这个短片里头的外星人,这个群落,这本短片里头的外星人是一些植物系的,在人类的视线中你看到的是一个植物贯从,这样整体移动的这么一个外星人,是说她被他们绑架了。然后放出来之后,军队在各种审查、各种虐待、各种都找不出问题之后,终于把她赦免了。
这个地方对审查和监控的描述,非常明确的肯定是来自李文和的这个故事或者相关的想象的——就比如说诺亚在回忆讲述的时候会说,在中间有一段时间她已经完全受不了了,所以她选择了自杀,结果她发现在她自杀的时候,他们也只是留意监看而已。 至少有三台摄像机日夜不停地死盯着我。实验室的小排鼠拥有的隐私都比我多。他们看着我用内衣结成套索,看着我爬到床上,把套索系在隔山上。
所以你看,是这样的一种被怀疑、被审视、被审查、被虐待的生存状态。他就在这样的一个经历之后,军队最后终于把它放了下来。然后你想,是不是这样的一个故事。然后你发现,不对,这故事还有一半呢。
靠近结尾的时候你才知道,这个故事的大背景是什么呢?是当这些外星人来的时候,人类对他们发射核武器。人类使用了我们的集中火力,但是都没用,一个核弹都没炸。而且不光没有炸,过了一段时间之后,人类射出去的核武器的一半又被原封不动地送了回来,但是没炸。他们只是物理上把这些核弹送回了给了人类。
其中有一颗直接被丢回了美国的总统办公室、美国国会大楼、还有五角大楼等等,就是人类的这种政治中心。突然窗户破了,然后进来一个原子弹。它虽然没有炸,它只是用物理的方式被丢了进来,它没有启动。
OK,人类在这个地方意识到了,我们面对的是一个完全不可能战胜的事。它的科技力量至少是它的伤害力比我们高太多了。在它面前,我们只能老老实实地成熟于它,老老实实地满足它的要求。然后他们还没有丢完,就是人类丢过去的核武器,他们只丢回来一半,他们还捏着一半。然后另外还有他们自己带来的武器。
就是如果它的科技能够做到这样的地步了,你想想他们自己用的是什么武器。请发挥你最狂野的科幻想象,死光还是什么湮灭光线,还是二项目都有可能,对吧?OK,那在这样的一个事例面前,你要怎么办?但是你突然也意识到,等一下,在这个行动当中,外星人其实是在展示友好。
他是在告诉你,“我可以毁了你,但是我没有,我是想试图和你接触、和你交通的。”所以在这个事件之后,小宗是这么写的:“在人类的多种语言中,入侵者外星杂草突然变成了客人邻邦。”也就是说,如果我们回到赦免这个词本身的语义,它是由权威来宣布你的罪孽被消除了。在这个故事里头,谁才是权威?是外星人。
所以到底是谁在赦免谁?是外星人,是群落赦免了人类。他们宽容了人类主动袭击的错误,他们想要主动接触,跟人类共存。但是他又同时保留了没有办法,人类不可能抵挡的集中武力。在这样的一个大背景之下,我们才会来讨论诺亚作为一个中间人,这样的一个形象。
但的确,诺亚跟莉莉斯有很多相似的地方,就是这种绑一个人类,但可能莉莉斯那个情况下更极端,直接地球就已经没了。你都已经是在宇宙飞船上了,你醒过来说,人家就直接告诉你,“你好,你是最后胜的那种人类,地球已经没了。”你要怎么样突然来面对这一切,如何好好地处理,都很难。
但我觉得这个设计是非常巧的,它肯定超过了原始的理论和间谍案的这样一个故事。因为莉莉斯跟诺亚其实都是作为接触中介的设定,而且是最初的接触。在双方之前没有过任何互相接触、互相理解,纯纯的他者。这不是说中国人跟法国人的接触,或者是明朝末年中国人见到了第一个走上澳门的葡萄牙人。那不管怎么样,他还是个人,看到的是个人,像你走过来,他红发蓝眼,看着有点吓人,但他至少还是符合人这个概念的。
在巴特勒的小说,包括其他的科幻传统里,这种对极端的他者的想象其实一直都是用来测试人类的极限的。因为在这种极端的遭遇当中,很有可能会把人类的种种问题全部暴露出来。当你面对一个非常和你不同的东西的时候,你怎么样对待它,是非常考验人类的。
所以我有一本非常喜欢的故事,是首大车安利,莱姆,另外一位科幻大佬,莱姆有一本书叫《惨败》。惨败写了一个什么样的故事呢?就是一个非常类似的,人类派出了一个宇宙飞船,试图去和一个他们经过研究判定有高等文明的星球的文明接触。但是在那边搞了半天,那边都接触不上。最后这个话越传越歪,越传越歪,最后直接打起来了。
最后你才发现,这里头为什么对方没有办法按照人类的频率跟你接触?就是因为那个星球上的生命形式跟人完全就不一样。你以为是背景是石头的东西,其实是那个星球上的生命,所以它就根本不可能按照人类的步调来跟你沟通。它的沟通就是要花很长很长的时间。但是对于想象力有限的人类,对这种完全没有办法接受跟自己不一样的生物,人类的选择是可以干起来了。
所以你看,这里头是一个非常大的关于他者的隐喻,关于我们如何跟自己不一样的人、不一样的文明接触。其实这个话题当然可以放到一个非常社会的层面上来说,对不对?接受和自己不一样的人,接受人与人之间的差别是一个听起来,可能你会觉得不是很正常吗?但是实际生活中我们都知道,它是一个没有那么容易的事情。
你会接受跟你听不一样音乐的人放音,但是可能这个单子如果越来越往下。你喜欢吃这个,他完全不吃这个,这个人居然是一个吃甜粽子的人,这个人居然是一个吃咸粽子的人,战斗的气氛就越来越浓了。你看,就是这样的。
所以巴特勒其实写过一篇文章,这个文章的题目非常短,叫《论种族主义》。是在01年的时候,巴特勒其实零几年就去世了,就是他50多岁了。然后01年,联合国的一个活动,他在这个活动上写了,因为这个时候他已经50多了。他在回顾自己做一个作家的创作历程,他就说了,一个人类想象文明想象的一个重要的点就在于创造一个宽容的和平的文明。
什么叫宽容的和平的文明?在这样的一个世界里头,大家是要不然要接受其他人的不同,或者说他们要表现为,他们能够接受其他人的不同。因为任何的仇恨行为,马上就会受到惩罚。巴特勒想要的是这样的一个文明的宽容的世界,但我们当然都知道这样的一个世界是不可能的。巴特勒自己在文章里头也说了,这样的一个东西我最多能在小说里头把它描述出来,所以这样的一个母题就意味着巴特勒对于这种现在两个群落之间的人,或者说直接面对不同,直接因为某种经历,比如说诺亚跟莉莉斯,他们都要同时面对,我不被外星人全然接受,但我现在同时因为我试图理解了一些外星人的动机,或者说能够有了一定的沟通能力,我现在也不被人类接受了。
但是我还是要做这个努力来让两边互相共同生活下去,就像巴特勒刚才描述的这个世界愿景。他不是想要消除不同,也不是说我们能够创造真正的一个爱与和平的世界。大家在一起都其乐融融,没错。他说了这样的一个世界是哪怕你不得不做出这样的一个表态,只是因为有惩罚机制那也行,因为这是一种微弱的平衡。
这样一个微弱的平衡就可以保证这样的一个世界的运行。所以我觉得在从恩典到莉莉斯的孩子里,设计的这样的一个中间人,他的重要性就是在一次次向我们不停地展示,这样的连接、这样跨越巨大不同的连接,它是多么困难的一件事情。我们是多么多么多么难的才能够放弃自己的偏见,尝试着和自己非常不同的人来沟通。
我觉得有小作家愿意写这样的东西是非常了不起的,因为这种东西不好写。你读过这个故事你就会知道。就类似可能太长了,你看个诺亚的故事你就知道了。你要不同的被自己的人审问,你被关在那昼夜不停的剥夺你的睡眠,每天开着大灯晃你。结果你放出来之后,人类社会也不接纳你,最后还是被放逐回到了那些把你抓过去的地方。
然后他们真的是一开始连人是怎么样都不知道的生物当中。你会发现这样的生物你甚至理解了它们,你会享受被它们包裹在它们体内带来的酥麻感。你会发现这两个选择都不好。我觉得巴特勒的小说最重要的事情就可能是在展示,OK,其实都是一堆很糟糕的选择。但是是如何要从糟糕的选择当中做一些相对正确的事情。
巴特勒在恩典里头描述了一种非常明确的彻底踏着的想象,比如说在小说里头。他会写当这些外星人第一批把人类劫持走的时候,他们对人类这个生命体是完全没有任何认知的,群落对人类一无所知。他们用实验和饮食缺乏症杀死了一部分,又毒死了一部分。他们把我抓去的时候,至少已经知道要避免意外杀人这种事情,因为它是一个集体生物,它对个体就没有什么概念。在这个群落里,个体可能很容易就死掉,但是没事,这个群落还在,我们会不停的有新的个体加入。
它需要理解人类这种单体生物,死了就是死了。还有女性在被他们监禁的时候怀孕了,然后他们也要理解人类与新生病的过程,也会导致这个人本身非常的脆弱等等等等。当然,小说也完全没有回避,当你把这么多人关在一起的时候,实际上更多的伤害还是人和人之间互相进行。
诺亚会说伤害她的很多都是人,包括她为什么会怀孕呢,是被其他的被外星人抓的强奸了。你看,这种丝毫不回避的故事,在这个故事里,全员恶人没有好人。这个里头人类外星人,大家都不是没有任何错处的人,就像在现实世界当中,我们的问题就是要试图在这么一个乱七八糟的世界里,找到一个让它变得和平宽容的方式。这个是巴特勒的很多想象背后一个非常重要的驱动力。
所以我们看到,诺亚跟利丽丝都在扮演这么一个非常重要的中介的形象。那为什么他们都是黑人女性呢?觉得这个除了我们刚才说的把自己写进故事,也还因为的确有一个非常种族性的绑定。因为刚才阿卓其实说了,在这些故事里头,身体都是有非常抢眼的存在,当然被伤害的身体,被爱抚的身体都有。我们待会会看到,哪怕到了血孩子,男性身体一样的可以干这种类似的活。男性你不要以为自己逃得掉,只要碰到比你强大的人,该减肥皂还是要减肥皂。不管怎么说。
OK,为什么黑人女性?那是因为在作为一个黑人女性的视角下,巴特勒是在回到美国黑人的历史,在讲述这个故事。因为这种基于女性身体的暴力,其实是当代20世纪开始,大家去回顾去撰写奴隶制,试图解决奴隶制带来的种种社会遗留问题的时候,都会采用的一个书写方式。
那最著名的例子,可能大家如果有看过特尼·莫里森的那个《亲爱的Belovit》,你会知道那个Belovit背后,那个被鞭子抽打出来的伤疤。但我会觉得莫里森那个小说实在是太美化了,最后把这个伤疤变成了什么生命之术,这样的东西,我还是觉得巴特勒更直接。巴特勒就不会去美化任何东西,伤疤就是伤疤,创伤就是创伤,权力就是会以最直接的方式落到人的身体上。
但我们讨论任何不公平、任何偏见,这个社会的任何问题,它可能最终都是以身体暴力作为它最终的表现形式的。在它的第一本单独的不在任何系列里的一个长篇小说,就是《Kindred雪怡》。这本书里头,它就写了一个1976年的26岁的年轻黑人女性,突然之间发现自己时空穿越了。你不停地穿越回19世纪早期的美国南方,那你完蛋了,你就是会直接变成可以被白人随意殴打、随意强暴、随意买卖的这样的一个对象。
所有的这些存在于历史当中的暴力,你现在发现你自己回到历史当中,你的身体都必须要重新去接受一遍。包括最后这个小说的结果,我觉得它非常的有冲击力,但是我就不剧透了。如果这样你都不想去看这个小说的话,我也没有任何办法了。在巴特勒的整个创作普世当中,我们都可以看到,把具体的暴力的落脚点,或者是种种问题的焦点,都把它聚焦到身体和身体暴力上的结果,包括我们待会会提到的肯定是生育问题跟性暴力,巴特勒也完全不会回避。 有这样的一个女性中间人,让她的身体变成了种种问题爆发的焦点,是非常可以理解的。
那接下来就不得不来到我们的重头戏——血孩子了。
我们刚才反反复复地提到了血孩子里的关于身体暴力、关于性暴力、关于生育的很多的问题。那我们其实可以来稍微聊一下。
关于血孩子的设定,这个也是上期节目留下来的另外半个问题。因为我们之前提到,莉莉丝的孩子里面的欧安卡利人,从形态的角度上来说,是比较类似于章鱼或者海葵之类的海洋生物系外星人。
我们当时也讲到,不管是充满入侵性质的触手系和神经连接,还是包裹吸附以及窒吸的生理特性,都可以同时在亲密和危险的两个维度上,达到某种非常极致的体验。
总之就非常的克苏鲁。但是血孩子里面的外星人,虽然某些时候也会有一些海洋流体生物的特点,但总的来说还是一个虫组的设定。
巴特勒真的非常擅长从各种非常怪弹、又非常恐怖和刺激的角度去描述各种外星人的形态。因为把外星人的形态想象成各种昆虫以及昆虫的变体,相对来说现在看来已经是一个很常见的设定了。
可能是因为我们有很多的机械装置都会模仿昆虫的形态进行设计,包括我印象比较深的是《沙丘》电影里面那个铺着翅膀的飞机,好像也是模拟了蜻蜓的形态。所以很多学后宽虫形态的事物,好像很容易带给人一种仿生机械设计美学的感受。
但是重组的外星人又会给人一种非常狂野和粗暴的印象。我们的科幻创作者们,他们也总是乐此不疲地想象人类像蝼蚁一样被虫足外星生物疯狂蹂躏的情节。
血孩子的设定里面最让人觉得恐怖和惊悚的地方,可能就是在这个未来的世界里面,人类已经被所谓的虫足圈进了所谓的保护区,被虫足饲养和管理,并且成为了虫足繁育后代的寄身体。
就像虫足的人说的那样:“人类的身体简直就是完美的寄身媒介。”以前当他们把他们的虫软植入到牛羊之类的动物身上的时候,这个虫软的存活率就特别低。但是如果管理得好,人类的身体一次就可以非常高效率地孵化特别多的虫软。
而且只要处理及时,作为寄身繁育身体的人类,也不会因为一次虫软的孵化过程就死掉,它还可以继续幸存下来,等到下一次再次成为虫足的繁衍工具。
而且小说里面关于这个人类男性生育幼虫的过程,就被描述得非常的血腥恐怖。首先是虫足会把它的虫软注射到人类男性的身体里面。之后怀孕的人类男性就会成为替这些虫软输送养分的寄身子宫。
而最恐怖的环节就是在这个虫软成熟以后,幼体即将破体而出的时刻。因为这些孵化的幼虫们,它们首先会吃掉虫软外面的壳,然后它们吃完壳以后就会开始狂吃它的血肉。
如果这个人类的男性在生产的时候一切顺利,它可能正处于它的虫足监护人们的照顾之下,那么它可能就能够非常顺利地被开膛破肚,就是类似于所谓的剖腹产。
在幼虫们吃掉壳,然后开始吃血肉之前,完成整个几乎没有麻药的剖腹产生育过程。虽然开膛破肚很惨烈,但好歹这个人类男性还能活。
但如果这个人类男性在即将生产的时候发生了意外,那个从虫软里面爬出来的幼虫们就会真心恐后地从它的身体里钻来钻去,还钻出来。同时还会狂吃它的血肉。
我记得小说里就描述过一次事故,说好像有一个人类的男性在即将生产的时候发生了意外。在即将生产的时候,跟它的虫足四主在野外,突然幼虫们就要出生了。
但是它的虫足四主好像是个废的,就完全不知道在这个时候应该怎么正确操作剖腹产的过程。于是这个即将生育的人类男性,想到自己不仅要被肚子里的这些幼虫破体而出,还要被它们吃干抹净。
绝望之下,他只能请那个在边上已经手足无措的虫足四主先把自己给杀了,一了百了。
这种相似的设定感,感觉不管看到多少次都是让人非常震撼的。不仅是人类被虫足外星人疯狂柔令的状态,也是生育这件事情在一个科幻的场景下,竟然会用一种这么彻底和直白的方式被展现出来,成为一种身体恐怖的极致的体现。
然后类似的设定在另外一个科幻电影系列里面也就是异形里面也有一个很极致的表现。因为异形最初的那个形态就是所谓的爆脸虫,也是很类似一个女性生殖器的形态。
然后整个注射的过程就是爆脸虫先把自己像生殖器一样的身体紧紧裹住人类的脸,再把自己的尾巴缠住人的脖子固定起来,最后是一个类似于尾巴还是触手的东西,就会把异形的软通过嘴巴和喉咙深深地捅进人的身体内部。
这里有一个黄暴的专业术语,我就不说了。异形的幼体同样也会经历注射植植物,然后寄生长大最后破体而出的生育过程。最后人类的宿主也会在异形从自己的肚子里出来的时候就死掉。
所以我就很好奇,肖师傅作为一个被这类科幻设定重点并且精准打击到的男性,你对这类作品包括血孩子以及异形里面的重细设定是一种什么样的看法呢?
首先,OK,我们先分开捋一捋。
如果你在做科幻研究的话,里头会有一个话题就是外星人到底要长什么样这件事情。外星人长什么样,它也是一个一步步发展有历史的故事。
比如说最早的有影响的讨论外星人形象的小说,当然是H.G. 威尔斯的《世界大战》。火星人跑到地球上来了,把地球人虐得跟菜一样,最后靠感冒病毒杀掉火星人这个故事。
在那里头我们已经看到了一个非常重要的特点,就是这个火星人他不像人,跟章鱼似的。我不知道为什么大家对章鱼都充满了这么恐怖的想象力。章鱼除了挺好吃的以外,我也想不出它有什么可怕的地方。
反正触手戏不是我的菜,但无论如何,在早期科幻里头的一个重要特点就是外星人他必须不像人。为什么必须要不像人呢?这样他长得越不像人,你杀起来就越没有负担了。
其实就是关于他者的想象。他者是什么意思?他者就是不是我,不是我的意思就是说什么,不是人。不是人的东西是什么样的?不是人的东西可以随便杀、随便踢、随便踹、随便用尽各种方法来虐待,无所谓,因为他不是人。
早期科幻尤其是涉及到外星人的这种早期的所谓的太空割据当中,一个很主线的故事就是在人类会被设定为宇宙秩序的维持者。有些早期的科幻里头会有一个设定,叫银河系巡逻队,类似的反正就是有这么一个太空警察。
这个太空警察不停地和各种邪恶的外星人做斗争,相对比较经典的有一篇叫《星际窃贼》,上课也会给大家看这个。相对于巴克勒来说,他当然就写得非常地不好。
但是我只想说在这个故事里头呢,他给了一种很经典的外星人的想象。为了要在结尾当我们看到人类舰队冲过来大杀四方的时候感到这个爽感,你就知道OK杀掉的是怪物,这种怪物都罪该万死。
他给他的设定是像什么东西呢?它是一个圆锥下面长了章鱼腿。好像大家如果开车出了交通事故,摆在那个交通锥下面再长上个章鱼腿,像一个奇怪的寄居蟹一样的生物。
这样的形象,你很难对它产生共情,对不对?人类的基本共情,有一个很重要的特点就是共情的对象,它得像人呢。它都不像人了,那就没有什么好空情的,该杀杀,该虐虐,该用各种各样血浆暴虐的方式把它消除掉。
所以在科幻小说里头,本身是有这样的一个传统。但是巴克勒做的这个重要的事情是什么?把它反过来了。
他的外星人依旧长得非常不像,但他要我们去理解他、让我们去跟这个他者试图交流,让我们明白,或者说他设定的情景当中是人类根本就不可能反抗了。
他在这个他者上面,不光不像人,他还强大的不像话,你根本就不能对抗他,消灭征服是一个不存在的选项。你现在必须要试着跟他交流和共存。
所以在这个意义上,巴特勒非常明确地挑战了传统科幻的他者,包括像在血孩子里头给我们这样的一个虫系外星人的设定。
当然这个虫系外星人其实他自己在后继里头给了一个明确的源头,这个源头还真的跟莉莉丝的孩子又挂上了。因为他在写莉莉丝的孩子的时候去出了个田野,去了亚马逊,去了南美洲。
他就在后继里写了南美洲有一种都市传说级别的让大家想到都会很害怕的生物,叫做马银。这个马银会干什么事情呢?他会在动物的伤口里头产卵,当这个幼虫孵化之后呢,就会借由伤口钻入寄生的生物的皮肤之下。
活过来的话,自然就会在你的皮下蠕动动来动去,直到他觉得自己已经完成了他的幼虫阶段,然后经过变态,从涌变成成虫要出去了,那我就咬穿我的皮肤,我走了再见。
这是一个真实存在的生物。巴特勒说:“我写这个故事有一个方面也是为了克服对他的恐惧。” 我前两天在看胡锡东的书《去他的巴西》,现在是《文明的去您的巴西》,里头就写了他的室友两个人类学家去雨林回来,就被马银寄生了。
也是一些很重口味的现场,他们还给胡子看:“你看你看我手上,他就在这。”感兴趣的朋友可以去看一下。如果看纪录片,活生生的纪录片可能冲击太大,你看一看胡子的描述也挺好的。
所以在巴特勒的写作当中,他的昆虫形态有一个源自科幻传统的对科幻传统颠覆,也有一个来自现实生活的恐惧。
然后在刚刚阿卓其实也提到了,科幻电影还有另外的一个传统,就是异形,包括后面还有另外一个更黄暴的异种系列,都是一样的。在后面那个半黄片的异种里头,这种生育的故事就变得更明显了。
这个外星生物跟人类女性的基因混在了一起,然后会诱惑男性。当大家共赴乌山的关键时刻,啪击一下把你搞定。所以这肯定也是另外一个没有办法回避的传统。
但是你其实都知道,为什么大家看异形,为什么就等着看爆脸虫从胸口冲出来,一几片的快乐不就是这样的吗?血腥的快乐、黄暴的快乐,我们就是觉得这种东西,哇好害怕好刺激,还是忍不住要看你。
大脑就是这么一个奇怪的机制,它就是会在这种奇怪的时刻分泌各种各样的多巴胺。但是在血孩子里头,这样的一个设定把人类作为外星生物一种虫系生物的育体,实际上是一个通过异化来转育一个最常见的人类现象的故事。
因为血孩子的第一句话是什么?血孩子的第一句话是:“我童年结束于一次返乡。”如果一个故事这样开始,你会知道,OK,叙事者是一个后世的视角,是在这一切都结束的遥远的未来在重述自己的童年结束的时刻。
童年要怎么样结束?在各种文化里头,可能会有不同的标准。我们会知道有所谓的成人仪式,尤其是在原始的部族里头会有一整套这样的仪式。但是很多这种仪式都是与性、绝情有关的,比如说女性的初潮等等。
然后当然男性就是要去出列,你要出去砍个头,或者是要经过一些考核,变成一个合格的战士等等。在很多文化设定里头,它其实都是会跟性欲望的觉醒相关的。
这个故事一开始其实就在告诉你,我是一个成长故事,无非就是说,在这个外星生物的世界里头,这个成长故事涉及的是一个被选作作为育体的男孩,要接受——真正的接受自己作为一个育体的命运——要去为外星人、要去为小说里头的提里克人生育他们的后代。
就像阿卓刚才说的,这一切,其实他从小就是知道的。但是问题在于出了意外,让他在自己需要生育之前就先见证了这个生育的血淋淋的现场。
当然这个真的是血孩子小说里头,它就是不加任何描述的,就让我们看到了,因为这个故事的叙事者甘恩他和他的外星人伴侣,先这么叫吧,叫提加泰一起回家的时候,碰巧出了一个事情,是另外一个育体突然之间要生孩子了。
要生虫子了,但是他的外星人伺主不在他旁边。然后这个提加泰作为一个有经验的外星人,他现在必须要做一些应急处理。
也就是说,他要接生,但是这个接生当然是血淋淋的。因为对吧,就像你被马银咬了,你的处理方法也是医生要去给你切开皮肤,把它逮出来。
小说在这样的一个时刻,这样写到快要生的时候的这种状况,叙事者看到在这个男人的身体下面,彼此在他棕色的肉体内随机的游移着某种激动。
这里凹一下,那里吐一下的,一下下的重复的次数多了,我便看出了他的节奏。我便看出了他的节奏,能够猜到下一次激动出现在什么地方。
然后他的四主提加泰用他的,他们是虫族吗?他们是有遮针的,遮他两下,是有麻醉作用的,他就跟这个人说:“OK,我遮你两下了,反正你要睡过去,我就给你麻醉了”,对吧?就已经做了无痛了,阵痛泵已经给你开了。
现在就是要给你接生了,然后让我们的叙事者和他的四主提加泰一起要摁住这个人。叙事者看到的是他或许能甩开我的手,但是肯定逃不过他的束缚。
他用他的裤子绑住他的双手,高高推起,掀过头顶,然后让我跪着压住裤子,孤紧孤定。他无助的哭泣,但是紧接着他就卷起了他的衬衫,塞进了他的嘴里。
而后他将它开膛破股,接下来大概还有好几页非常刺激的内容,我们就不念了。 请大家一定要去读一下这个故事。反正就是这个小男孩。
他发现:“我之前是有这么一个艺术。我知道我以后要做这件事情,但是这件事情突然之间就被血淋淋的摆到了你面前,不给你做任何准备。你就看到一个和你一样的人被开膛破肚,从他的身体里头抠出虫子来。”
那他的反应,当然就跟所有正常人的反应一样了。在任何没有经过训练的人,突然面对这样的场面之后,你只能像我们的叙事者一样,亮亮腔腔地冲出门,插着瘫倒在大门外的那个树下。我搜长刮肚,吐了个干干净净。我站在那浑身发抖,泪流满面。我不知道自己为什么会哭,但就是怎么也止不住。我走得远了,免得被瞧见。只要闭上眼睛,就能看到红艳的虫子在更红艳的人足血流上爬来爬去。
Horror, body horror, unmistakable body horror. But by the time you reach this point, the hint is already obvious. And Butler seems to worry: "OK, in case you still don't get what I'm saying, let me tell it once more."
The story A-Zhuo told just now: out in the wild, a Tlic with no animal flesh to feed the hatching grubs refuses to cut them out surgically, and in the end can only kill the host. And who is the person who saw the grubs bite through that body, crawl out, and burrow back in to keep feeding? Gan's older brother. Because he saw that scene when he was even younger, the Tlic became objects of horror for him, and he cannot accept serving as their host, right?
By this point, I think, what kind of growing-up this coming-of-age story is about can no longer be hidden. What you're watching may be aliens and worms, but you realize this is a story about childbirth. For men and women alike, in that alien world and in ours, the details of childbirth are kept out of sight. Think of the customs of our own world: the month of postpartum confinement after giving birth; and in more traditional societies, birth itself is hedged with taboos, women required to give birth away from the settlement, out of sight of others, and so on.
Why? Because the bloody reality of birth is a taboo. In our own era we are told almost none of the details of birth; we have only a hazy notion: "OK, you grow up, you marry, you have children." The image of giving birth is the one from countless TV dramas: a woman in labor, "it hurts, my water broke," the husband holding her hand, "it's all right, I'm so worried about you," she's wheeled into the operating room, the door closes, and the next time it opens the baby is out, and you get the "birth of a new life." But what happened in between, nobody tells you. The details in between are cut away.
Granted, it's hard to talk about, isn't it? In a place where putting a condom on a zucchini in sex-ed class can get you reported, how are you supposed to teach those details? But it is precisely these details that need to be spoken. Butler's story, here, says in the most graphic way possible that childbirth has a horrifying side. When I first read the story I registered that layer very clearly, for a very direct reason: my older sister is an obstetrician, and I had heard from her about far too many of the moments when a birth can go wrong.
From way back, when I was still in college, I would spend summer breaks at my sister's place, and you'd see how busy doctors really are. She'd come off a 24-hour shift and still be on second call. On second call she might be eating at home, two bites in, the phone rings, back she goes to resuscitate a patient. Home again, two more bites, another emergency, back again. She's an obstetrician; think about what an obstetrician gets called in to handle: preeclampsia, placental abruption, massive hemorrhage. That happens to her several times a day.
So just from my sister's daily routine you can see the bloody, dangerous part of childbirth, which is of course not what the TV dramas show, where the door closes and out comes a baby like a blind box from Pop Mart. Of course not! There are bloody procedures inside, and even a vaginal delivery can go wrong in any number of ways. And that's assuming you get through the birth itself and the child is out.
These past couple of years people have perhaps become a little more aware: even in a vaginal delivery, if the birth canal is too narrow you may get an episiotomy on the spot, and a cut made too deep can tear all the way to the anus. The expansion of the belly during pregnancy can cause diastasis recti, and after the birth the abdominal muscles may never close again, leaving what is effectively a gap down the middle of your belly. Afterwards there can be pelvic-floor problems, incontinence, a long tail of recovery issues. So childbirth is itself bloody; it is itself body horror. Everyone who is going to go through it ought to be educated first, to know the risks they face, before they do it.
But unfortunately, that's not how the human world operates. The way the human world operates is the closed operating-room door of the TV drama: don't tell me; wrap the whole thing in the soft glow of "a new life is born." It's a great thing, you're creating a new life, creating new hope. As for whether that new hope gets torn bloodily out of your belly, we just won't mention it. That is the story the human world tells. Butler's other motive in writing this story was precisely to say: the horrifying side of childbirth needs to be brought out and shown to people, so they can see that a child is not something handed to you like a free gift for topping up your phone plan. It isn't like that.
So you see that there is something very important at the end of the story. Our narrator, Gan, makes a demand of his Tlic, T'Gatoi. When she says, you were frightened by what you saw, so from now on we should plan better and protect everyone, make sure no one sees this when they shouldn't, the narrator says, "I don't like that line of argument," and suspects they might really do it: "No protecting. Show it, I said. Show it to us, from the time we're small, and more than once."
"T'Gatoi, humans have never seen a birth that goes well. What we see is the host: pain, terror, maybe death." The narrator's final demand is exactly this: let us see that this carries risk; keep showing us that risk from childhood on, so that we at least have some idea of what we are going to go through, instead of being dumped, as now, without warning into a scene of bloody birth.
I think that discussing childbirth on this level matters far more than the sensationalist, merely metaphorical register of the science-fiction tradition, or of Alien: the gutting, the thing bursting out of your chest, the scream. Because this is a very pointed use of science-fictional imagination to examine and present what may be wrong in our culture and our social customs. Because frankly, if you wrote this as a story of a woman giving birth—and let me be a bit cold-blooded here—
if you wrote a story about a woman's childbirth, it could be written just as bloodily, but for most readers, or say for male readers, it wouldn't hit as hard. If you want anything to change, the audience that needs educating is, of course, men. So have a man bear the child; let it happen to a man. It's like those experiments where men wear a device that simulates period pain, right? That continuous, unrelenting pain—only with that experience can the conversation even begin.
I think that is the most important thing "Bloodchild" does as a story of men bearing the bugs' young. Next we can talk a bit about another very important piece of the background of "Bloodchild." We've just discussed the shock of male pregnancy; earlier we also touched on the other part of the setting, the part about colonization.
In this future world, humans live in a relationship of colonization and symbiosis with the insect aliens, the Tlic. The humans living in the so-called Preserve are the weaker group, colonized and controlled by the Tlic. For the Tlic, humans are a crucial resource, because of reproduction: as we said, they need to implant their eggs in human bodies, and only through this parasitism can they keep their species going.
As we said, rather than women, the Tlic tend to choose suitable young men from the families in the Preserve. That human man becomes, within a Tlic household, a so-called family member, a so-called partner, or really a surrogate. Gan, the protagonist, is a human boy chosen from childhood; he is about to come of age and become the partner and surrogate of T'Gatoi, an important Tlic political figure.
And the story keeps stressing that, compared with the other Tlic politicians, T'Gatoi can practically be counted a humanitarian—OK, a "Tlic-manitarian"—leader, one who genuinely looks after the humans' situation. It is T'Gatoi who protects the existence of the human Preserve, allowing humans to live and multiply inside that fenced-off area. The other Tlic leaders would only go further than she does: to obtain breeding hosts they would use more extreme means, breaking up human families, putting people on the production line like livestock to mate and breed. In that extreme case humans could fall into an outright condition of slavery, trafficked and caged.
T'Gatoi's Preserve policy seems to give humans, within an extremely wretched situation, a slightly less wretched life with a little breathing room. And I think the depiction of the Tlic in "Bloodchild" carries a somewhat matriarchal coloring: a female like T'Gatoi holds reproductive and political power at once, while the males, human or Tlic, all feel like something closer to breeding instruments.
And the story really does keep telling us that, as such things go, this politician is pretty decent. She keeps saying: we must respect and protect humans, preserve the human family structure, let humans grow up healthy and happy, so that they will voluntarily offer up their young men as surrogates, and then we can cooperate and coexist happily. For humans who have no choice at all, T'Gatoi is just about the least-bad of a set of extremely bad options.
Of course, for human men the choice is still terrible, because however it is packaged, the fact that they are surrogate wombs does not change. Not only are they the colonized, ruled and oppressed by an alien species; they are also forced, like women in human society, to be exploited sexually and exploited for reproduction.
So facing this fate, human men, including Gan and his older brother, feel not only fear and dread but, even more, anger and shame. Heavens, how can human men be subjected to such humiliation! As Shifu Xiao said just now, this setup lets men who have never had their bodies treated as reproductive resources, and who have therefore always been able to ignore the pain of childbirth, feel, in a worldview-shattering way, why childbirth is a devastating form of bodily violence—and that this violence is exactly what women have been living through in the real world all along.
In that sense, "Bloodchild" takes a thoroughly, radically anti-patriarchal position. But it doesn't seem to be merely a reversal of power, or a revenge fantasy of male pregnancy, because the so-called, in quotes, "satisfaction" in it contains so much distortion and pain, and the suffocation and despair of someone with no choices and nowhere to go. So I'm curious how Shifu Xiao sees this.
That is, when male pregnancy becomes the premise of "Bloodchild," to what extent is this story an anti-patriarchal revenge fantasy? And how does childbirth, or the body that can provide reproductive value, get tied in the story to the theme of colonization? First, about dismantling masculinity: this short story is an obvious subversion of masculinity. If we equate masculinity with toxic masculinity, that so-called machismo, the alpha gorilla standing up and pounding its chest, you will not find it in this story.
Because here, as in many of Butler's novels with the same kind of setup, humans are the weaker party. One of the most important premises is that this is not Earth at all: they are on the Tlic's planet. The humans in the story fled Earth; persecuted and slaughtered by their own kind, they escaped and arrived on the Tlic world.
When they first arrived, they came with a very human-centric attitude: "It's just bugs here, great, we'll wipe out the bugs and the planet is ours." And then, sorry, it turns out the bugs are stronger than you, and you have to find a way to accept your place here. To survive you must depend on the bugs, depend on the Tlic. Humans in this story are in a position of absolute weakness. And there are several ways things could go. As A-Zhuo said, the Tlic could keep humans the way we keep animals, with no need to give them any dignified way of life. Because there is one crucial element: the Tlic's unfertilized eggs, in this story, are addictive to humans. Eating them gets you high, and not only high: they are a perfect narcotic, because along with the high they are full of nutrients, they dramatically extend your lifespan and prolong your youth.
Think about it: isn't that a perfect system of exploitation? The Tlic could simply have penned humans up, treated them as big livestock.
Keep men and women together and they will breed on their own, generation after generation of new humans, and you don't need to give them any dignified life. After a few generations of that, people simply are livestock. The only reason things turned out otherwise is that the Tlic worked out, "OK, that arrangement isn't actually the best choice."
The best, highest-quality hosts need good lives, just as the highest-output beasts of burden need time to recharge. It's a very basic logic, and a very cold one. It isn't "I'm good to you out of moral principle"; it's pure cold calculation: as hosts, you need to live with dignity so that you can bear my young well.
That is why the Preserve exists, and why the Preserve and reproduction are so tightly bound together. To the Tlic, at least at first, humanity's value is as an ideal host.
But under the arrangement pushed by T'Gatoi's faction, an important change appears: within the Preserve, a Tlic chooses to enter a family. A Tlic pairs up with a human household, a bit like the "pairing up" we had in school, a top student matched with a struggling one, or the assigned family in a poverty-relief pairing scheme.
"OK, you will choose one of the children of this family." But the important thing is that you now have to build a bond with this family, to enter it and become part of it. And ideally, in theory, the chosen child volunteers to be a host. That is, there is an emotional bond built into it.
The Tlic will also say that, because of this arrangement, because we now have this relationship with the human race, the humans' arrival has changed us too; it has made us reconsider our intimate relationships, and it has made Tlic society itself better.
What I want to say is that although the story describes something like colonization, even slavery, it also stresses that because this is between humans and aliens, there is no historical baggage: two peoples who had never met before collide in an unequal power relationship. One side could have controlled the other completely, but the side with the power to choose said, "I won't." It's a story about restraint.
So sometimes I think the most utopian thing in Butler's stories is exactly this: the side with the power of absolute destruction does not destroy everything. That is what Butler would say; it's what she thinks is humanity's biggest problem. Doesn't she say it in Lilith's Brood? Humanity is an evolutionary dead end, and what kills it is precisely this: the compulsion to rank, to divide into classes, to draw the line between me and you. And the result is that you can never arrive at a better world.
In this story, humans are not merely breeding machines; tenderness is reintroduced at exactly this point. Which brings us back to the coming-of-age story: the boy, the narrator, has to make a choice of his own.
Of course you can say that this is the core question: under a completely unequal power relationship, can there be real love and real consent? It's complicated; you cannot say absolutely that there is. It's hard to claim there is any true love between T'Gatoi and Gan here. But you do have to say that the story is written toward the good; it is not simply a bleak story.
Why do I say that? Because in the second half, after the narrator has witnessed the horror of the birth, witnessed the bloody child being dug out, he chooses to accept it: OK, I will still be your host.
There are complications along the way. T'Gatoi says, in effect, "If not you, I'll take your sister." Your sister, after all, is prepared to bear children; she's a human woman; she knows what birth involves. If you won't, I'll take her. That is certainly a threat, and it is the moment the boy realizes the thing falls to him.
But in the end, when the boy chooses to say "I accept," something very important happens: because of that night's emergency, the family's gun has been exposed. By law humans are not allowed to own guns, and T'Gatoi is supposed to confiscate it.
But when she goes to take the gun, what does the boy say? "Leave the gun. If you don't treat me as an animal, if this is a matter between persons, then accept the risk. There is risk in dealing with a partner." This is another of the story's genuinely utopian moments.
What the boy means is: "If you really are joining my family, if you really want humans and your kind to live together as you say, then you should treat me as a real partner; we have an equal relationship. And in a partnership you must accept risk."
"You can't insist on zero risk, on controlling everything. If you control everything one hundred percent, that means I am one hundred percent controlled, and that is not an equal relationship of any kind." She is afraid, of course, but she does not take the gun; she chooses to accept that side of it.
And, as we said, at the end the boy tells her, "From now on you must show all of this to others," and she seems to accept that too. In this situation, Butler's design is that the resistance of the weak compels the strong to keep choosing restraint, compels the strong to admit: "If I really hold that you and I are partners, then I must give you trust, and my words and my actions must match."
You could say there is nothing in the world riskier than acknowledging someone as your partner; it means trusting them, unconditionally, with everything. That is absolutely a risky thing to do. The ending of the story is exactly such a moment. Through childbirth, through a coming-of-age story, the growth we see in the boy is not just a sexual awakening; it is his becoming an adult rather than a child. A child is controlled, a child is arranged for; an adult says, I have my own position and my own point of view, and I am telling you: what you have just run into is my bottom line, and it cannot be moved.
If you want to be with me, you have to accept that. So even within this seemingly very twisted relationship, Butler gives us a story of hope.
I think she is describing an upward possibility: even between two species, even under a totally unequal relationship, things can get better. But that possibility has to be maintained by generation after generation of effort, effort to make it real and to protect it. A genuine, balanced relationship should be like that too; a genuinely equal relationship has to be guarded and tended at every moment. You can't say, "I said the words, we're partners, I made my vows at the wedding, and now I never have to think about it again." Of course not.
Any close relationship needs maintenance; it may be the machine that needs the most maintenance in the world, if we're going to use a machine metaphor. So the story clearly has this layer too; another of its intentions is to tell us that even within such an extremely unbalanced relationship, a hopeful direction can emerge.
So yes, it has a colonial setting, and many people read it explicitly as a story about slavery, because the relationship really does resemble that between a slaveholder and an enslaved Black person. In Kindred, which I mentioned earlier, Butler describes an even more complicated relationship: a Black woman discovers that her own line descends from a white man raping his Black slave.
You find yourself pulled back into the past because this white man keeps falling into mortal danger, and because he is your ancestor you keep having to protect him, to keep him alive. At first you don't know that he will rape that enslaved woman. Then one day you realize this is how your family tree unfolds. What do you do?
And at the very end, when your white ancestor tries to rape you, what do you do? Fortunately, by then, the child who will eventually give rise to Kindred's narrator has already been born, so she can now kill her own ancestor. Only by killing her white ancestor can she return to her own quiet life.
That kind of tangled story comes straight out of Black American history itself: an extremely unequal power relationship that is, at the same time, "you in me and me in you." So I can understand readings that connect "Bloodchild" to slavery and Black history, but that really isn't the core of the story; it is one of the possibilities the story can contain.
More than that, it describes this coming of age, and the demand that adults treat each other as equals. The boy shows no macho bravado here; he doesn't pin the bug to the bed and beat it up, doesn't lead everyone in a Spartan charge, doesn't paint his face blue like in Braveheart, raise his rifle, and lead humanity screaming "Freedom, charge!" None of that happens.
It isn't that kind of male story. He is simply a man who, under an extremely unequal power relationship, still manages, bravely, to state his own needs and make his position clear. If you say he isn't brave, I don't think that holds; he is a very brave narrator.
One more point about how the setup subverts patriarchy, a detail that makes me laugh every time I read it, though it is perfectly logical: because in this world it has always been men who bear the young, this is a topic that fathers and sons can talk about. It isn't mothers and daughters passing down the experience of childbirth; it is fathers and sons who could have talked about it. But the narrator's father is already dead. The father is absent.
So when he thinks of his father, what he misses is not that no one taught him to be manly; it is that there is no one to talk to, no elder to ask. His father served as a host three times, was opened up three times, delivered the grubs three times—what was that like?
This new kind of father-son bond is, of course, also a subversion of the father-son relationship in our own society. What gets passed down here is not machismo of any kind but the matter of childbirth, and that is a very large subversion indeed. Starting from reproduction, you can find all sorts of subversions of, and shocks to, patriarchy and the traditional perspective of power. And we also talked earlier about how the family structure between the Tlic and humans changes.
We said the relationship between the Tlic and humans is not merely a power relationship of colonizer and colonized; it is shot through with very complicated personal feelings and attachments. Compared with a straightforwardly violent colonial story, "Bloodchild" has what looks like a very gentle shell; compared with an intimate love story, its substance is extremely cruel and horrifying.
Here I'd like to expand a little on the family: in this state where colonization and intimacy are woven together, the family structure also gets deformed in many ways. So far we have mostly been talking about the role human men play in this world.
Human men are the wombs the Tlic select first, but that doesn't mean human women are thereby liberated or get better lives, because women's bodies can equally serve as vessels for Tlic parasitism.
As we mentioned, T'Gatoi, to pressure Gan into becoming her partner and surrogate, says that if he is unwilling, his sister can do it instead—and his sister would be glad to. But most of the time the Tlic, or at least T'Gatoi, won't choose women first as surrogates, partly out of considerations of efficiency and the division of labor.
Because women can bear human offspring and care for human families, they can raise humans, within the family and the Preserve, into healthier, happier hosts; and then the Tlic have more healthy men with which to breed healthy offspring. In effect, human men have become the Tlic's reproductive instruments,
and human women have become the reproductive instruments that produce new reproductive instruments for the Tlic. On that note I think we can also mention another character: the story spends some time on the relationship between T'Gatoi and Gan's mother, Lien. That relationship is another reason the whole interwoven human-Tlic family structure feels so uncanny.
Beyond the twisted, pathological attachment between T'Gatoi and Gan, where Gan is essentially raised to order, groomed from birth by T'Gatoi as her future partner and surrogate, the story also suggests that T'Gatoi and Gan's mother share a subtle intimacy, something like that between former lovers.
The story tells us that T'Gatoi and Lien had a very close emotional relationship around the time they came of age; perhaps a very, very close friendship, perhaps something even closer, like same-sex lovers. And here I can add another important piece of the setting: the Tlic like to hold humans. Holding a human feels comfortable to them; it feels warm.
So their intimacy with humans is also very much bodily. You can imagine the way T'Gatoi and Gan's mother were with each other when they were young: an extremely physical closeness. But then the two of them grow up into different lives. One goes off into politics and power struggles, a commanding figure in the political arena; the other enters the human Preserve as a designated future host for a Tlic womb and, under Tlic arrangement, forms a family with a human man and begins to raise, for the Tlic, their future reproductive instruments, that is, human men.
So the story mentions an agreement between T'Gatoi and Lien: Lien promised to give T'Gatoi one of her own children as her surrogate host. You can read that agreement as a continuation of the so-called, in quotes, intimacy of their youth; and you can also feel that Lien, as a human woman, is making a concession and a retreat in order to keep the protection of a powerful Tlic leader over her family.
When the story describes this relationship, you find that even in the remnants of that faintly romantic intimacy, that female friendship or alliance, everything is colored by exchange, sacrifice, and compromise. Their intimacy is, in a sense, also a projection of the unequal relationship between the two peoples.
T'Gatoi belongs to the Tlic ruling elite; she holds political power and control over reproduction at once, and her purpose in coming to the human Preserve is to manage humans, to manage the Tlic's reproductive resources. Her policy of protecting humans is also a chip in her own political game, and the story says that, to trade for political advantage, she will assign young men from the Preserve to other Tlic families as breeders. So what we see is a choice different from Gan's: his mother, of an earlier generation, long ago accepted this unequal relationship of colonization and symbiosis, because she knows all too well that if it were some ruler other than T'Gatoi, the humans' situation would be worse. The story says it quite explicitly:
"Why did she think she had to give a child to T'Gatoi? Because if a child had to be given, then she would rather it go to T'Gatoi than to some stranger."
The inequality is embedded deep in the relationship. Whatever happens, one of my children must be given away; then I would rather choose the one who has a bond with me, because I can believe that, because of our closeness, she may treat my child better.
Right? Indeed, even Gan's father and mother were matched by T'Gatoi. And the most twisted detail of this family, which we both nearly forgot just now, is that T'Gatoi herself was birthed from Gan's father's body. There is even an incest angle in here. Yes, I almost forgot that.
And then you see Lien's state of suppressed resentment. On one hand she submits to T'Gatoi's arrangements; she works hard at obeying this unequal structure and will not directly resist T'Gatoi's decisions. But she will not truly, completely submit either. Take the eggs we mentioned, the thing that gets humans high, extends their lives, makes them happy.
She says, in effect: "I won't eat it. I want to age, I want to die, I want to suffer; I will live and die on a human timetable." And yet she will not flatly refuse T'Gatoi's sexually charged invitations either. If T'Gatoi asks, she goes along, half resisting and half yielding, and keeps up a very intimate physical relationship with her.
So this sexual dynamic is genuinely hair-raising. In a certain sense it probably also hits a very particular kink; Grandma Butler really knows what she's doing. The Tlic are some three meters long; she wraps a human up in all those limbs, cages them in place, uses her tail to sting them, drinks their blood, and the whole coercive embrace comes with an injection that works like a drug, so that even while being forced, the person feels mentally loose and blissful. And every time, after that bliss, Lien is left with a humiliated feeling of "how did I get taken in again."
It comes back to what I said before: the premise of all of this is that the contact between the two sides began in extreme power imbalance. You cannot demand that a just, equal relationship spring up immediately out of extreme inequality; that's impossible. What we see in the story is already the good side of it, which is why I call it utopian.
T'Gatoi is already the good one among the Tlic: she establishes the Preserve, gives humans a dignified life, runs the "grooming" from childhood, chooses a boy willing to take on the responsibility, and in the end accepts the boy's demand, accepts the risk. That is already a glimpse, in a twisted world, of an upward possibility. But maybe that's all a story can do: offer a possibility, hint that things could change. Whether they actually get there, we don't know.
Likewise with "Amnesty": what fate its protagonist Noah finally meets, we don't know either. Think about what ultimately happens to people like him, standing between humans and a nonhuman community. If you think carefully about these Butler stories, you realize she is doing her utmost, within these distorted structures, to imagine how even such structures might bend toward the good.
And the subtext is: "If we haven't yet reached such a perverse point, couldn't we do better?" Doing better is perhaps exactly the possibility for human society that Butler, as a science-fiction writer, kept wanting to examine.
So in this collection there's also a story called "The Book of Martha," right? A very Christian setup: God appears and discusses with one person, "OK, what kind of utopia do you want to build?" The Martha of "The Book of Martha"—well, the name itself is very biblical: Martha is the one in the Gospels who holds fast to her faith in Christ's divinity, and her brother is Lazarus, the one raised from the dead.
So when she chooses to call her protagonist Martha—and Butler herself came from a very strict Baptist background, Black American religious culture being what it is—there is a strong element of faith here. Martha's final choice is to create a dreamworld in which all of humanity's bad desires can be satisfied in dreams, so that we can treat real life better. The desires that make people do violence to one another in waking life, the vanity, all the various twisted needs, can all be worked out in dreams.
And because Martha is a novelist, God tells her: "You realize that choosing this utopia means you'll be out of a job." And she says: "Well, what can you do; it's still the only solution." I think you can take "The Book of Martha" back and read all of Butler's work through its attitude: things are already this bad, so in this already-bad place, how do we see hope, how do we show it—precisely because it is this bad.
These better things have to be sustained by generation after generation of constant effort. You cannot assume the good will simply sit there and carry itself forward. Good things are just as fragile; they need to be maintained, and they require us to take risks.
So perhaps everything we've discussed—the family relationships, masculinity, childbirth, coming of age—comes down, in the end, to this level: what she wanted to do as a science-fiction writer. And this is something that moves me every time I read her. If you know anything about Butler's own life, you know she didn't have it easy: odd jobs, packing potato chips in a factory, scraping together time to write, more than a decade unable to sell her work, finally selling something and becoming a writer. And in a life like that she still insisted on writing toward something better. I find that very moving.
Well, that's about it for today—even though "The Book of Martha" was actually the last question I wanted to ask; I didn't expect to lose even the chance to ask it. I just found myself drifting over there as I talked.
No problem, we have something else for the ending. What else do you have? Don't we have to read the spicy scene for everyone? That was the deal last time: there's a spicy scene, please, go ahead.
All right. After everything we've said, I trust everyone now has a reasonably clear picture of Butler. So, as a piece of pure fan activism, I really hope you'll go read her fiction. But if you still have reservations and need one last push before you'll fall into the pit, here is today's big bonus:
The story does have a spicy scene; it is not a chaste story—there is an explicit passage. Though you need a certain kind of kink to find it all that obscene. Very obscene, very well done.
So let's read that passage for everyone:
"Still, I took off my clothes and lay down beside her. I knew what to do, what would happen; I had been steeped in these descriptions since birth. I felt the familiar sting, a numbness through my whole body, a faint undercurrent of pleasure. Her ovipositor probed here and there; the puncture itself didn't hurt at all, wasn't unpleasant, and the entry was simple and smooth. Her body pressed against mine, rising and falling like waves, muscles contracting and relaxing, pushing the eggs out of her body and into mine. I caught hold of a pair of her limbs and silently remembered that Lomas had gripped her the same way. I let go; an unthinking movement of mine had hurt her. She made a low, pained sound, and I thought she would instantly raise several limbs and cage me in, but she didn't. I took hold of her again, feeling an odd guilt."
Done. Yes, right, applause—there should be applause here. Thank you, Shifu Xiao, for that heartfelt reading of the story's highlight.
Oh dear, what do we do; it feels like none of the future episodes we've teased can top "Bloodchild."
There's still plenty to dig into; other rabbit holes have their own ways down. But "Bloodchild" really does fit the style of 痴人之爱—we love dark love stories. That's it for today's episode. Many thanks to Shifu Xiao for coming on the show. Yes, thank you, Shifu Xiao, for coming to dinner; please come again.
Bye-bye. All right, bye-bye.
2025-06-19 08:00:01
Scaling Test Time Compute to Multi-Agent Civilizations — Noam Brown, OpenAI
Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and I’m joined by my co-host, swyx, founder of Smol AI.
Hello, hello. And we’re here recording on a holiday Monday with Noam Brown from OpenAI. Welcome.
Thank you. So glad to have you finally join us. A lot of people have heard you. You’ve been rather generous with your time on podcasts, including Lex Fridman’s, and you’ve done a TED Talk recently, just talking about the thinking paradigm. But I think perhaps your most interesting recent achievement is winning the World Diplomacy Championship. In 2022, you built Cicero, which was in the top 10% of human players.
I guess my opening question is, how has your diplomacy playing changed since working on Cicero and now personally playing it?
When you work on these games, you kind of have to understand the game well enough to be able to debug your bot. Because if the bot does something that’s really radical and that humans typically wouldn’t do, you’re not sure if that’s a mistake, a bug in the system, or it’s actually just the bot being brilliant.
When we were working on diplomacy, I kind of did this deep dive, trying to understand the game better. I played in tournaments. I watched a lot of tutorial videos and commentary videos on games. Over that process, I got better. And then also seeing the bot, the way it would behave in these games. Sometimes, it would do things that humans typically wouldn’t do. That taught me about the game as well.
When we released Cicero, we announced it in late 2022. I still found the game really fascinating, and so I kept up with it. I continued to play. That led to me winning the World Championship in 2025, just a couple of months ago.
There’s always a question of Centaur systems where humans and machines work together. Was there an equivalent of what happened in Go where you updated your play style?
If you’re asking if I used Cicero when I played in the tournament, the answer is no. Seeing the way the bot played and taking inspiration from that, I think did help me in the tournament.
Yeah. Do people now ask Turing-test questions every single time when they’re playing Diplomacy?
Yes, to try to tell if the person they’re playing with is a bot or a human.
That’s the one thing you were worried about when you started.
It was really interesting when we were working on Cicero because we didn’t have the best language models. We were really bottlenecked on the quality of the language models. Sometimes, the bot would say bizarre things. For example, 99% of the time, it was fine, but then, every once in a while, it would say something really bizarre.
Somebody would reference something they said earlier in a conversation with the bot, and the bot would respond, “I have no idea what you’re talking about. I never said that.” Then the person would be like, “Look, you could just scroll up in the chat. It’s literally right there.” The bot would retort, “No, you’re lying.”
Oh, context windows.
When it does these kinds of things, people just kind of shrug it off as, “Oh, that’s just, you know, the person’s tired or they’re drunk or whatever,” or “they’re just trolling me.” But I think that’s because people weren’t looking for a bot. They weren’t expecting a bot to be in the games.
We were actually really scared because we were afraid that people would figure out at one point that there was a bot in these games. Then they would always be on the lookout for it. If you’re looking for it, you’re able to spot it; that’s the thing.
Now that it’s announced, and people know to look for it, I think they would have an easier time spotting it. That said, the language models have also gotten a lot better since 2022.
So at this point, the truth is, GPT-4 and O3, these models are passing the Turing test. I don’t think they can really ask that many Turing-test questions that would actually make a difference.
And Cicero was very small, like 2.7B, right?
It was a very small language model. Yeah. It was one of the things that we realized over the course of the project that, oh yeah, you really benefit a lot from just having larger language models.
Right. How do you think about today’s perception of AI and a lot of the safety concerns? The concern that, you know, you’re going to build a bot that is really good at persuading people into helping it win a game. I think maybe today, labs want to say they don’t work on that type of problem. How do you think about that dichotomy, so to speak, between the two? You know, honestly, after we released Cicero, a lot of the AI safety community was really happy with the research and the way it worked because it was a very controllable system.
Like we conditioned Cicero on certain concrete actions, and that gave it a lot of steerability to say, “okay, well, it’s going to pursue a behavior that we can very clearly interpret and very clearly define.” It’s not just, “oh, it’s a language model running loose and doing whatever it feels like.” No, it’s actually pretty steerable. There’s this whole reasoning system that steers the way the language model interacts with the human.
Actually, a lot of researchers reached out to me and said, “we think this is potentially a really good way to achieve safety with these systems.”
I guess the last diplomacy-related question that we might have is: have you updated or tested O-series models on diplomacy? And would you expect a lot more difference?
I have not. I think I said this on Twitter at one point that I think this would be a great benchmark. I would love to see all the leading bots play a game of diplomacy with each other and see who does best. I think a couple of people have taken inspiration from that and are actually building out these benchmarks and evaluating the models.
My understanding is that they don’t do very well right now, but I think it really is a fascinating benchmark. I think it would be, yeah, a really cool thing to try out.
Well, we’re going to go a little bit into O-series now. I think the last time you did a lot of publicity, you were just launching O-one, you did your TED Talk and everything. How have the vibes changed just in general? You said you were very excited to learn from domain experts, like in chemistry, how they review the O-series models. How have you updated since, let’s say, the end of last year?
I think the trajectory was pretty clear pretty early on in the development cycle, and I think that everything that’s unfolded since then has been pretty on track for what I expected. So, I wouldn’t say that my perception of where things are going has honestly changed that much.
I think that we’re going to continue to see— as I said before— that we’re going to see this paradigm continue to progress rapidly. I think that that’s true even today. We saw that with going from O-one preview to O-one to O-three, consistent progress. We’re going to continue to see that going forward, and I think that we’re going to see a broadening of what these models can do as well.
You know, we’re going to start seeing agentic behavior. We’re already starting to see agentic behavior. Honestly, for me, O-three has been incredibly useful in my day-to-day life. I just find it especially useful that I can now browse the web and do meaningful research on my behalf. It’s kind of like a mini deep research tool that you can just get a response from in three minutes. So, I think it’s just going to continue to become more and more useful and more powerful as time goes on, and pretty quickly.
Yeah, and talking about deep research, you tweeted about: “if you need proof that we can do this in unverifiable domains, deep research is kind of like a great example.” Can you maybe talk about if there’s something that people are missing?
I feel like I hear it repeated a lot: it’s easy to do this in coding and math, but not in these other domains. I frequently get this question, including from pretty established AI researchers: we’re seeing these reasoning models excel in math and coding and these easily verifiable domains, but are they ever going to succeed in domains where success is less well defined?
I’m surprised that this is such a common perception because we’ve released deep research and people can try it out. People do use it. It’s very popular. And that is very clearly a domain where you don’t have an easily verifiable metric for success.
It’s very subjective—what is the best research report that you could generate? Yet, these models are doing extremely well in this domain. So, I think that’s like an existence proof that these models can succeed in tasks that don’t have as easily verifiable rewards.
Is it because there’s also not necessarily a wrong answer? Like there’s a spectrum of deep research quality, right? You can have a report that looks good, but the information is kind of so-so, and then you have a great report. Do you think people have a hard time understanding the difference when they get the result?
My impression is that people do understand the difference when they get a result. I think that they’re surprised at how good the deep research results are. Could it be better? A hundred percent, and we’re going to make it better. But I think people can tell the difference between a good report and a bad report, and certainly between a good report and a mediocre report. And that’s enough to feed the loop later to build the product and improve the model performance.
I mean, I think if you’re in a situation where people can’t tell the difference between the outputs, then it doesn’t really matter if you’re making progress. These models are going to get better at domains where there is a measure of success. Now, I think this idea that it has to be like easily verifiable or something like that, I don’t think that’s true. I think that you can have these models do well, even in domains where success is a very difficult to define thing, and could sometimes even be subjective.
People lean a lot on the thinking-fast-and-slow analogy for thinking models, and you’ve used it as well. And I think it’s reasonably well diffused now, the idea that this is kind of the next scaling paradigm. All analogies are imperfect.
What is one way in which thinking fast and slow or system one, system two kind of doesn’t transfer to how we actually scale these things? One thing that I think is underappreciated is that the pre-trained models need a certain level of capability in order to really benefit from this like extra thinking. This is kind of why you’ve seen the reasoning paradigm emerge around the time that it did.
I think it could have happened earlier, but if you try to do the reasoning paradigm on top of GPT-2, I don’t think it would have gotten you almost anything. Is this emergence? Hard to say if it’s emergence necessarily, but I haven’t done the measurements to really define that clearly.
But I think it’s pretty clear; you know, people try chain of thought with GPT, like really small models, and they saw that it just didn’t really do anything. Then you go to bigger models and it starts to give a lift. I think there’s a lot of debate about the extent to which this kind of behavior is emergent, but clearly there is a difference. So it’s not like there are these two independent paradigms.
I think that they are related in the sense that you need a certain level of system one capability in your models in order to have system two, to be able to benefit from system two.
Yeah. I have tried to play amateur neuroscientist before and compare it to the evolution of the brain and how you have to evolve the cortex first before you evolve the other parts of the brain. And perhaps that is what we’re doing here.
Yeah. You could argue that actually this is not that different from like, I guess, the system one, system two paradigm, because, you know, if you ask like a pigeon to think really hard about playing chess, it’s not going to get that far. It doesn’t matter if it thinks for a thousand years; it’s just not going to be able to be better at playing chess.
So maybe you do still also, with animals and humans, need a certain level of intellectual ability, just in terms of system one, in order to benefit from system two as well.
Yeah. Just this side tangent, does this also apply to visual reasoning? So let’s say, now that we have GPT-4o, the natively omni model type of thing, that also makes O3 really good at GeoGuessr. Does that apply to other modalities too?
I think the evidence is yes. It depends on exactly the kinds of questions that you’re asking. Like there are some questions that I think don’t really benefit from system two. I think geoguessr is certainly one where you do benefit. I think image recognition, if I had to guess, it’s one of those things that you probably benefit less from system two thinking. Because you know it or you don’t.
Yeah, exactly. There’s no way.
And the thing I typically point to is just like information retrieval. If somebody asks you, “When was this person born?” and you don’t have access to the web, then you either know it or you don’t. You can sit there and you can think about it for a long time. Maybe you can make an educated guess. You can say like, “Well, this person probably lived around this time, so this is like a rough date,” but you’re not going to be able to get the date unless you actually just know it.
But like spatial reasoning, like tic-tac-toe might be better because you have all the information there.
Yeah. I think it’s true that with tic-tac-toe, we see that GBD 4.5 does reasonably well. You can draw the board, and it can make legal moves, but it will make mistakes sometimes. If you really need that… System Two to enable it to play perfectly. Now it’s possible that if you got to GBD 6 and you just did System One, it would also play perfectly. I guess we’ll know one day, but I think right now you would need a System Two to really do well.
What do you think are the things that you need in System One? So obviously, a general understanding of game rules is essential. Do you also need to understand some sort of metagame? Usually, this is how you value pieces in different games, even though it’s a fundamental aspect. How do you generalize in System One so that then in System Two, you can kind of get to the gameplay?
I think the more that you have in System One—this is the same thing with humans. Humans, when they’re playing for the first time a game like chess, can apply a lot of System Two thinking to it. If you present a really smart person with a completely novel game and tell them, “Okay, you’re going to play this game against an AI or a human that’s mastered this game” and you tell them to sit there and think about it for three weeks on how to play it, my guess is they could actually do pretty well.
However, it certainly helps to build up that System One thinking—like build up intuition about the game—because it will just make you so much faster.
I think the Pokemon example is a good one, where System One holds maybe all this information about games. Yet, once you put it in the game, it still needs a lot of harnesses to work. I’m trying to figure out how much of that harness we can take and have it in System One so that then System Two is as harness-free as possible. But I guess that’s the question about generalizing games and AI.
I view that as a different question. I think the question about harnesses, in my view, is that the ideal harness is no harness. Right. I think harnesses are a crutch that eventually we’re going to be able to move beyond.
So, only two calls. You could ask O3 and it’s interesting because when this Pokemon-playing concept emerged as a benchmark, I was actually pretty opposed to evaluating this with our OpenAI models. My feeling is, “Okay, if we’re going to do this eval, let’s just do it with O3. How far does O3 get without any harness? How far does it get while playing Pokemon?” The answer is, not very far.
And that’s fine. I think it’s acceptable to have an eval where the models perform terribly. I don’t think the answer to that should be, “Well, let’s build a really good harness so that now it can do well in this eval.” I think the answer is, “Okay, well, let’s improve the capabilities of our models so they can excel at everything.” Then they also happen to make progress on this eval.
Would you consider things like checking for a valid move a harness, or is this part of the model? For example, in chess, you can either have the model learn in System One what moves are valid and what it can and cannot do, versus in System Two figuring it out.
I think there’s a lot of design questions involved. For me, I think you should give the model the ability to check if a move is legal. That could be an option in the environment, like, “Here’s a tool call that you can make to see if an action is legal.” If it wants to use that, it can.
Then, there’s the design question of what happens if the model makes an illegal move. I think it’s reasonable to say, “Well, if they make an illegal move, then they lose the game.” I don’t know what happens when a human makes an illegal move in a game of chess. I actually don’t know. I don’t think they’re just not allowed to. If that’s the case, then I think it’s reasonable to have an eval where that’s also the criteria for the AI models.
But I think maybe one way to interpret that in sort of researcher terms is: are you allowed to do search? One of the famous findings from DeepSeek is that MCTS wasn’t that useful to them. But I think there are a lot of engineers trying out search and spending a lot of tokens doing that, and maybe it’s not worth it.
Well, I’m making a distinction here between a tool call to check whether a move is legal or illegal. This is different from actually making that move and then seeing whether it ended up being valid or not. Legal or illegal. Right. So if that tool call is available, I think it’s totally fine to make that tool call and check whether a move is legal or illegal.
I think it’s different to have the model say, “oh, I’m making this move.” Yeah. And then, you know, it gets feedback that like, “oh, you made an illegal move.” And so then it’s like, “oh, just kidding. I’m going to do something else now.” So that’s the distinction I’m drawing.
Some people have tried to classify that second type of playing things out as test time compute. You would not classify that as test time compute.
There’s a lot of reasons why you would not want to rely on that paradigm when you’re going to imagine you have a robot, and your robot takes some action in the world, and it breaks something. You can’t say, “oh, just kidding. I didn’t mean to do that. I’m going to undo that action.” The thing is broken.
So if you want to simulate what would happen if I move the robot in this way and then in your simulation, you saw that this thing broke, and then you decided not to do that action, that’s totally fine. But you can’t just like undo actions that you’ve taken in the world.
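A tiny sketch of that distinction, with all names hypothetical: planning against a copy of the environment is fine, because nothing real changes; "undoing" an executed action is not an option.

```python
# Minimal sketch: simulate() plans on a throwaway copy; step() is irreversible.
import copy

class RobotEnv:
    def __init__(self):
        self.state = {"vase": "intact"}

    def step(self, action):
        """Irreversible: actually changes the world."""
        if action == "swing_arm":
            self.state["vase"] = "broken"
        return self.state

    def simulate(self, action):
        """Reversible: run the same dynamics on a deep copy of the environment."""
        return copy.deepcopy(self).step(action)

env = RobotEnv()
print(env.simulate("swing_arm"))  # {'vase': 'broken'} -- imagined outcome only
print(env.state)                  # {'vase': 'intact'} -- the real vase is fine
```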
There’s a couple more things I wanted to cover in this rough area. I actually had an answer on the thinking fast and slow side, which maybe I’m curious what you think about. A lot of people are trying to put in effectively model router layers, let’s say between the fast response model and the long thinking model. Anthropic is explicitly doing that, and I think there’s a question about always, do you need a smart judge to route or do you need a dumb judge to route because it’s fast?
So when you have a model router, let’s say you’re passing requests between system one side and system two side, does the router need to be as smart as the smart model or dumb to be fast? I think it’s possible for a dumb model to recognize that a problem is really hard and that it won’t be able to solve it and then route it to a more capable model.
But it’s also possible for a dumb model to be fooled or to be overconfident. I don’t know. I think there’s a real trade-off there.
But I will say like, I think there are a lot of things that people are building right now that will eventually be washed away by scale. So I think harnesses are a good example where I think eventually the models are going to be, and I think this actually happened with the reasoning models.
Before the reasoning models emerged, there was all of this work that went into engineering these agentic systems that made a lot of calls to GPT-4o or these non-reasoning models to get reasoning behavior. And then it turns out like, “oh, we just created reasoning models and you don’t need this complex behavior.”
In fact, in many ways, it makes it worse. You just give the reasoning model the same question without any sort of scaffolding, and it just does it. Now, people are still building scaffolding on top of the reasoning models right now, but I think in many ways those scaffolds will also just be replaced by the reasoning models and models in general becoming more capable.
Similarly, I think things like these routers… we’ve said pretty openly that we want to move to a world where there is a single unified model. And in that world, you shouldn’t need a router on top of the model. So I think that the router issue will eventually be solved also.
Like you’re building the router into the model kind of weights itself.
I don’t think there will be a benefit for… like, I shouldn’t say it because I could be wrong about this. You know, it’s certainly maybe there are reasons to route to different model providers or whatever. But I think that routers are going to eventually go away.
And I can understand why it’s worth doing it in the short term, because the fact is it is beneficial right now. And if you’re building a product and you’re getting a lift from it, then it’s worth doing right now.
One of the tricky things I’d imagine that a lot of developers are facing is that you kind of have to plan for where these models are going to be in six months and twelve months. And that’s very hard to do because things are progressing very quickly.
You don’t want to spend six months building something and then just have it be totally washed away by scale. But I think I would encourage developers, when they’re building these kinds of things like scaffolds and routers, to keep in mind that the field is evolving very rapidly. You know, things are going to change in three months, let alone six months. And that might require radically changing these things. around or tossing them out completely. So don’t spend six months building something that might get tossed down in six months.
It’s so hard though. Everyone says this and then no one has concrete suggestions on how.
What about reinforcement fine-tuning? You obviously just released it about a month ago at OpenAI. Is it something people should spend time on right now, or maybe wait until the next jump up?
I think reinforcement fine-tuning is pretty cool, and I think it’s worth looking into because it’s really about specializing the models for the data that you have. That’s worth looking into for developers. A lot of the time, that data isn’t suddenly going to be baked into the base model. So, I think that’s a separate question.
Creating the environment and the reward model is the best thing people can do right now. I think the question that people have is like, should I rush to fine-tune the model using RFT or should I build the harness to then RFT the models as they get better?
I think the difference is that for reinforcement fine tuning, you’re collecting data that’s going to be useful as the models improve. If we come out with future models that are even more capable, you could still fine-tune them on your data. That’s actually a good example where you’re building something that’s going to complement the model scaling and becoming more capable rather than necessarily getting washed away by the scale.
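To illustrate the point about durable work, here is a minimal sketch (this is not OpenAI's RFT API; the file name, fields, and grader are assumptions): the artifacts that survive model upgrades are your task examples and a grader that scores an answer, both of which can be pointed at whatever more capable model comes next.

```python
# Minimal sketch: task data plus a grader, the reusable pieces for reinforcement
# fine-tuning. The schema and exact-match grading rule are illustrative assumptions.
import json

def grade(sample: dict, model_answer: str) -> float:
    """Return a reward in [0, 1]; here, exact match against a reference label."""
    return 1.0 if model_answer.strip() == sample["reference"].strip() else 0.0

dataset = [
    {"prompt": "Classify the ticket: 'my card was charged twice'", "reference": "billing"},
    {"prompt": "Classify the ticket: 'app crashes on login'", "reference": "bug"},
]

with open("rft_tasks.jsonl", "w") as f:
    for sample in dataset:
        f.write(json.dumps(sample) + "\n")  # same file can feed future, stronger models
```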
One last question on Ilya. You mentioned on, I think, the Sarah and Elad podcast, where you had this conversation with Ilya a few years ago about more RL and reasoning and language models. Just any speculation or thoughts on why his attempt, when he tried it, didn’t work or the timing wasn’t right and why the time is right now?
I don’t think I would frame it that way—that this is his attempt didn’t work. In many ways, it did. For me, I saw that in all of these domains that I’d worked on, like poker, Hanabi, and diplomacy, having the models think before acting made a huge difference in performance, like orders of magnitude difference.
Like 10,000 times.
Yeah. A thousand to a hundred thousand times. It’s the equivalent of a model that’s a thousand to a hundred thousand times bigger. In language models, you weren’t really seeing that—the models would just respond instantly. Some people in the field, in the LLM field, were convinced that if we just keep scaling pre-training, we will get to super intelligence. I was kind of skeptical of that perspective.
In late 2021, I was having a meal with Ilya. He asked me what my AGI timelines are, a very standard SF question. I told him, “Look, I think it’s actually quite far away because we’re going to need to figure out this reasoning paradigm in a very general way.” Things like LLMs are very general, but they don’t have a reasoning paradigm that’s very general. Until they do, they’re going to be limited in what they can do.
Sure, we’re going to scale these things up by a few more orders of magnitude. They’re going to become more capable, but we’re not going to see super intelligence from just that. Yes, if we had a quadrillion dollars to train these models, then maybe we would, but you’re going to hit the limits of what’s economically feasible before you get to super intelligence, unless you have a reasoning paradigm. I was convinced incorrectly that the reasoning paradigm would take a long time to figure out because it’s this big unanswered research question.
Ilya agreed with me and said, “Yeah, you know, I think we need this additional paradigm,” but his take was that “maybe it’s not that hard.” I didn’t know it at the time, but he and others at OpenAI had also been thinking about this. They had also been thinking about RL and had been working on it. I think they had some success, but with most research, you have to iterate on things. You have to try out different ideas; you have to try different things.
As the models become more capable, as they become faster, it becomes easier to iterate on experiments. I think that the work they did, even though it didn’t result in a reasoning paradigm, all builds on top of previous work. They built a lot of things that over time led to this reasoning paradigm.
For listeners, Noam can’t talk about this, but the rumor is that that thing was codenamed GPT-Zero, if you want to search for that line of work. There was a time when RL kind of went through a dark age, when everyone went all in on it and then nothing happened. And now it’s sort of the golden age again. So that’s what I’m trying to identify: what is it? And it could just be that we have smarter base models and better data.
I don’t think it’s just that we have smarter base models. I think it’s that. Yeah. So we did end up getting a big success with reasoning. But I think it was in many ways a gradual thing. To some extent, it was gradual.
You know, there were signs of life, and then we iterated and tried out some more things. We got better signs of life. I think it was around November 2023 or October 2023 when I was convinced that we had very conclusive signs of life that this was going to be a big deal. That was, in many ways, a gradual thing.
I think what OpenAI did well is that when we got those signs of life, they recognized it for what it was and invested heavily in scaling it up. I think that’s ultimately what led to reasoning models arriving when they did.
Was there any disagreement internally, especially because OpenAI kind of pioneered pre-training scaling and you kind of said, “maybe that’s not how we get there”? Was it clear to everybody that this was going to work, or was it controversial?
There’s always different opinions about this stuff. I think there were some people that felt that pre-training was all we needed, that we scaled it up to infinity and we were there. I think a lot of the leadership at OpenAI recognized that there was another paradigm that was needed.
That was why they were investing a significant amount of research effort into RL (Reinforcement Learning). To the credit of OpenAI, yes, they figured out the pre-training paradigm and were very focused on scaling that up. In fact, the vast majority of resources were focused on scaling that up.
But they also recognized the value that something else was going to be needed. It was worth researching and putting researcher effort into other directions to figure out what that extra paradigm was going to be. There was a lot of debate about:
The feeling was that we have tons of compute, but we are more limited by data.
I think they are more data efficient. But I think that they are also just like the equivalent of scaling up compute significantly. That was interesting. There was a lot of debate around, “Okay, what exactly are we doing here?”
Then, even when we got the signs of life, I think there was a lot of debate about the significance of it. How much should we invest in scaling up this paradigm? I think especially when you’re in a small company, like OpenAI was not as big as it is today in 2023. Compute was more constrained than it is today.
If you’re investing resources in a direction, that’s coming at the expense of something else. If you look at these signs of life on reasoning and you say, “Okay, this looks promising, we’re going to scale this up by a ton and invest a lot more resources into it,” you have to consider where those resources are coming from.
You have to make that tough call about where to draw the resources from. This is a very controversial and difficult call to make that makes some people unhappy. I think there was debate about:
I remember it was interesting that I talked to somebody who left OpenAI after we had discovered the reasoning paradigm, but before we announced O1. They ended up going to a competing lab. I saw them afterwards, after we announced O1.
They told me that at the time, they really didn’t think this reasoning thing, like these O series, the Strawberry models, were that big of a deal. They felt we were making a bigger deal of it than it really deserved.
Then when we announced O1 and they saw the reaction of their coworkers at this competing lab—how everybody was like, “Oh, crap, this is a big deal”—they pivoted the whole research agenda to focus on this. Then they realized, “Oh, actually, this maybe is a big deal.” A lot of this seems obvious in retrospect, but at the time it’s actually not so obvious, and it can be quite difficult to recognize something for what it is.
I mean, OpenAI has a great history of just making the right bet. I feel like the GPT models are kind of similar, right? Where it started with games and RL, and then it was like, maybe we can just scale these language models instead. And I’m just impressed by the leadership and obviously the research team that keeps coming up with these insights.
Looking back on it today, it might seem obvious that like, “oh, of course, like these models get better with scale.” So you should just scale them up a ton and it’ll get better. But it really is true that the best research is obvious in retrospect. And at the time, it’s not as obvious as it might seem today.
Follow-up questions on data efficiency. This is a pet topic of mine. It seems that our current methods of learning are still so inefficient, right? Compared to the existence proof of humans: we take five samples and we learn something. Machines need 200, maybe, you know, for whatever data point.
I think it’s a good point that if you look at the amount of data these models are trained on and you compare it to the amount of data that a human observes to get the same performance, I guess pre-training, it’s a little hard to make an apples to apples comparison because like, I don’t know, how many tokens does a baby actually absorb when they’re developing?
But I think it’s a fair statement to say these models are less data efficient than humans. And I think that that’s an unsolved research question and probably one of the most important unsolved research questions, maybe more important than algorithmic improvements because we can increase the supply of data out of the existing set of the world and humans.
I guess that’s good. So a couple of thoughts on that. Like one is that the answer might be an algorithmic improvement. Maybe algorithmic improvements do lead to greater data efficiency. And the second thing is that it’s not like humans learn from just reading the internet. I think it’s certainly easiest to learn from just like data that’s on the internet. But I don’t think that’s like the limit of what data you could collect.
The last follow-up before we change topics to coding: any other just anecdotes or insights from Ilya just in general? Cause like you’ve worked with him. So there’s not that many people that we can talk to that have worked with him. I think I’ve just been very, very impressed with his vision that I think like, especially when I joined and I saw, you know, the internal documents at OpenAI of like what he had been thinking about back in like 2021, 2022, even earlier.
I was very impressed that he had a clear vision of like where this was all going and what was needed. Some of his emails from 2016, 17, when they were founding OpenAI were published. And even then he was talking about how he had things like “one big experiment is much more valuable than 100 small ones.” That was like a core insight that differentiated them from Brain, for example.
It just seems very insightful that he just sees things much more clearly than others. And I just wonder what his production function is. Like, how do you make a human like that? And how do you improve your own thinking to better model it?
I mean, I think it is true that, I mean, one of OpenAI’s big successes was betting on the scaling paradigm. It is just kind of odd because, you know, they were not the biggest lab. It was difficult for them to scale. Back then it was much more common to do a lot of small experiments, more academic style. People were trying to figure out these various algorithmic improvements and OpenAI bet pretty early on large scale.
We had David Luan on, who I think was VP of Engineering at the time of GPT-1 and 2. And he talked about how the differences between Brain and OpenAI were basically the cause of Google’s inability to come out with a scaled model. Just structurally, everyone had allocated compute and you had to pull resources together to make bets, and you just couldn’t.
I think that’s true that OpenAI was structured differently. I think that really helped him. OpenAI functions a lot like a startup and other places tended to function more like universities or, you know, research labs as they traditionally existed. The way that OpenAI operates more like a startup with this mission of building AGI and superintelligence helped them organize, collaborate, pull resources together, and make hard choices about how to allocate. resources. And I think a lot of the other labs have now been trying to adopt paradigms more like that, like setups more like that.
Let’s talk about maybe the killer use case, at least in my mind, of these models, which is coding. You released Codex recently, but I would love to talk through the Noam Brown coding stack.
What models do you use, and how do you interact with them? Cursor, Windsurf.
Lately, I’ve been using Windsurf and Codex—actually a lot of Codex. I’ve been having a lot of fun. You just give it a task and it just goes off and does it and comes back five minutes later with, you know, a pull request.
And is it a core research task or like side stuff that you don’t super care about? I wouldn’t say it’s like side stuff. I would say basically anything that I would normally try to code up, I try to do it with Codex first.
Well, for you, it’s free, but yeah, for everybody, it’s free right now. And I think that’s partly because it’s the most effective way for me to do it. Also, it’s good for me to get experience working with this technology and then also, like, seeing the shortcomings of it. It just helps me better understand, like, “Okay, this is the limits of these models and like what we need to push on next.”
Have you felt the AGI? I felt the AGI multiple times, yes.
Like, how should people push Codex in ways that you’ve done? You know, I think you just see it before others because obviously you were closer to it. I think anybody can use Codex and feel the AGI. It’s kind of funny how you feel the AGI and then you get used to it very quickly. So it’s really like…
Dissatisfied with where it’s lacking.
Yeah, I know. You know, it’s magical one day. I was actually looking back at the old Sora videos when they were announced.
Yeah. Because remember when Sora came out, it was just like…
The biggest news ever.
It was just magical. You look at that and you’re like, “It’s really here. Like this is AGI.” But if you look at it now and it’s kind of like, “Oh, you know, the people don’t move very organically.” And it’s like there’s a lack of consistency in some ways. You see all these flaws in it now that you just didn’t really notice when it first came out.
And yeah, you get used to this technology very quickly. But I think what’s cool about it is that because it’s developing so quickly, you get those feel the AGI moments like every few months. So something else is going to come out and just like, it’s magical to you. And then you get used to it very quickly.
What are your Windsurf pro tips now that you’ve immersed yourself in it?
I think one thing I’m surprised by is how few people—maybe your audience is going to be more comfortable with reasoning models and like use reasoning models more—but I’m surprised at how many people don’t even know that O3 exists. Like I’ve been using it day to day. It’s basically replaced Google search for me. I just use it all the time, and also for things like coding. I tend to just use reasoning models.
My suggestion is, if people have not tried the reasoning models yet, try them, because honestly, people that use them love them. Obviously, a lot more people use GPT-4o and just, like, the default on ChatGPT and that kind of stuff. I think it’s worth trying the reasoning models. I think people would be surprised at what they can do.
I use Windsurf daily and they still haven’t actually enabled it as like a default in Windsurf. I always have to dig up, like, type in O3 and then it’s like, “Oh yeah, that exists.” It’s weird. I would say like my struggle with it has been that it takes so long to reason that I actually break out of flow.
I think that is true. Yes. And I think this is one of the advantages of Codex, that you can give it a task that’s kind of self-contained and it can go off and do its thing and come back 10 minutes later. I can see that if you’re using this thing as more of a pair-programmer kind of thing, then yeah, you want to use GPT-4.1 or something like that.
What do you think are the most broken parts of the development cycle with AI? Like in my mind, it’s like pull request review. Like for me, I use Codex all the time and then I got all these pull requests and it’s kind of hard to go through all of them.
What other thing would you like people to build to make this even more scalable? I think it’s really on us to build a lot more stuff. These models are very limited in some ways. I find it frustrating that you ask them to do something and they spend 10 minutes doing it and then you ask them to do something pretty similar and… Then they go spend 10 minutes doing it. I think I described them as geniuses, but it’s their first day on the job, and that’s kind of annoying. Even the smartest person on earth, when it’s their first day on the job, they’re not going to be as useful as you would like them to be. So I think being able to get more experience and act like somebody that’s actually been on the job for six months instead of one day would make them a lot more useful. But that’s really on us to build that capability.
Do you think a lot of it is like GPU constraint for you? If I think about Codex, why is it asking me to set up the environment myself, when if I ask O3 to create an environment setup script for a repo, I’m sure it’ll be able to do it? But today in the product, I have to do it. So I’m wondering, in your mind, could these be a lot more if we just, again, put more test-time compute on them? Or do you think there’s a fundamental model capability limitation today that we still need a lot of human harnesses around?
I think that we’re in an awkward state right now where progress is very fast. There are things that are clearly we could do this in the models. We’re going to get to it. You’re just limited by how many hours there are in the day. Progress can only proceed so quickly. We’re trying to get to everything as fast as we can, and I think that GPT-3 is not where the technology will be in six months.
I like that question overall. There is a software development lifecycle, not just the generation of the code. From issue to PR, basically, is the typical way people frame it. Then there’s the Windsurf side, which is inside your IDE.
What else is there that is sort of rate-limiting the amount of software you could be iterating on? That’s an open question. I don’t have an answer.
Anything else on ASWE in general? Where do you think this goes just in form factors? What will we be looking at this time next year in terms of how things are, and what models will be able to do that they’re not able to today?
I don’t think it’s going to be limited to ASWE. I think it’s going to be able to do a lot of remote work kind of tasks.
The way that I think about it is that anybody doing a remote work kind of job should become familiar with the technology and get a sense of what it can do, what it can’t do, what it’s good at, and what it’s not good at. I believe the breadth of things that it’s going to be able to do will expand over time.
I feel like virtual assistants might be the next thing after ASWE, because they’re the most easily automated: like hiring someone in the Philippines, someone who just looks through your email and all that, because that is entirely manageable. You can intercept all the inputs and all the outputs and train on that. Maybe OpenAI just buys a virtual assistant company.
I think what I’m looking forward to is for things like virtual assistants. If the models are aligned well, they could end up being really preferable for that kind of work. There’s always this principal-agent problem where if you delegate a task to somebody, you have to ask: are they really aligned with doing it as you would want it to be done?
If you have an AI model that’s actually really aligned with you and your preferences, that could end up doing a way better job than a human could. Not that it’s doing a better job than a human could, but it’s doing a better job than a human would.
That word alignment, by the way: I think there’s an interesting overloading, or homomorphism, between safety alignment and instruction-following alignment. I wonder where they diverge.
I think where it diverges is: what do you want to align the models to? That’s a difficult question. You could say you wanted to align it to the user. But what happens if the user wants to build a novel virus that’s going to wipe out half of humanity? That’s safety alignment. There’s a question of alignment. I think they’re related, and I think the big question is what are you aligning towards?
Yeah, there are humanity goals and then there are your personal goals and everything in between. So that’s kind of, I guess, the individual agent.
You announced that you’re leading the multi-agent team at OpenAI. I haven’t really seen many announcements. Maybe I missed them on what you’ve been working on, but what can you share about interesting research directions or anything from there?
Yeah, there hasn’t really been announcements on this. I think we’re working on cool stuff and I think we’ll get to announce some cool stuff at some point. I think the team in many ways is actually a misnomer because we’re working on more than just multi-agent. Multi-agent is one of the things we’re working on.
Some other things we’re working on include:
So that’s one direction that we’re pursuing.
Multi-agent is another direction, and here I think there’s a few different motivations. We’re interested in both the collaborative and the competitive aspect of multi-agent. I think the way that I describe it is people often say in AI circles that humans occupy this very narrow band of intelligence and AIs are just going to quickly catch up and then surpass this band of intelligence.
I actually don’t think that the band of human intelligence is that narrow. I think it’s actually quite broad because if you compare anatomically identical humans from caveman times, they didn’t get that far in terms of what we would consider intelligence today.
Right? Like they’re not putting a man on the moon. They’re not building semiconductors or nuclear reactors or anything like that. And we have those today, even though we as humans are not anatomically different.
So what’s the difference? Well, I think the difference is that you have thousands of years, a lot of humans, billions of humans cooperating and competing with each other, building up civilization over time. The technology that we’re seeing is the product of this civilization.
I think similarly, the AIs that we have today are kind of like the cavemen of AI. I think that if you're able to have them cooperate and compete with billions of AIs over a long period of time and build up a civilization, essentially, the things that they would be able to produce and answer would be far beyond what is possible with the AIs that we have today.
Do you see that being similar to maybe like Jim Fan’s Voyager skill library idea, re-saving these things? Or is it just the models being retrained on this new knowledge? Because the humans then have it, a lot of it in the brain as they grow.
So I think I’m going to be evasive here and say that we’re not going to announce anything until we have something to announce, which I think we will in the not too distant future. I think I’m going to be a bit vague about exactly what we’re doing.
But I will say that the way that we are approaching multi-agent in the details and the way we’re actually going about it is very different from how it’s been done historically and how it’s being done today by other places.
I’ve been in the multi-agent field for a long time. I’ve felt that the multi-agent field has been a bit misguided in some ways and the approaches that the field has taken. So I think we’re trying to take a very principled approach to multi-agent.
Sorry, I got to add, you can’t talk about what you’re doing, but you can say what’s misguided. What’s misguided?
I think that a lot of the approaches that have been taken have been very heuristic and haven’t really been following the bitter lesson approach to scaling and research.
Okay, I think maybe this might be a good spot. Obviously, you've done a lot of amazing work in poker. I think that's part of how the reasoning models got better.
I was talking to one of my friends who used to be a hardcore poker grinder, and I told them I was going to interview you. Their question was, “at the table, you can get a lot of information from a small sample size about how a person plays.”
But today, GTO is so prevalent that sometimes people forget that you can play exploitatively. What do you think is the state as you think about multi-agent and kind of like competition? Is it always going to be trying to find the optimal thing, or is a lot of it trying to think more in the moment, like how to exploit somebody?

I'm guessing your audience is probably not super familiar with poker terminology. So I'll just explain this a bit. A lot of people think that poker is just a luck game. And that's not true. There's actually a lot of strategy in poker. So you can win consistently in poker if you're playing the right strategy.
There are different approaches to poker. One is game theory optimal. This is where you're playing an unbeatable strategy in expectation, so you're just unexploitable. It's kind of like in rock, paper, scissors. You can be unbeatable in rock, paper, scissors if you just randomly choose between rock, paper, and scissors with equal probability. Because no matter what the other player does, they're not going to be able to exploit you; you're not going to lose in expectation.
Now, a lot of people hear that and they think, “well, that also means that you’re not going to win in expectation because you’re just playing totally randomly.” But in poker, if you play the equilibrium strategy, it’s actually really difficult for the opponents to figure out how to tie you. They’re going to end up making mistakes that will lead you to win over the long run. It might not be a massive win, but it is going to be a win. If you play enough hands for a long enough period of time, you’re going to win in expectation.
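To make the unexploitable-in-expectation point concrete, here is a minimal sketch in Python (an illustration, not something from the conversation). It checks the uniform rock, paper, scissors strategy against a few arbitrary opponent mixes; its expected value is always zero, while a biased exploitative strategy does better against one opponent but worse against another.

```python
# Payoff for player 1 in rock-paper-scissors: +1 win, -1 loss, 0 tie.
# Index order: 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [
    [0, -1, 1],   # rock    vs (rock, paper, scissors)
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def expected_value(my_strategy, opp_strategy):
    """Expected payoff for me when both players mix over actions."""
    return sum(
        my_strategy[i] * opp_strategy[j] * PAYOFF[i][j]
        for i in range(3) for j in range(3)
    )

uniform = [1 / 3, 1 / 3, 1 / 3]

# The uniform (GTO) strategy never loses in expectation: EV is always 0.
for opp in ([1, 0, 0], [0.5, 0.3, 0.2], [0.1, 0.1, 0.8]):
    print(opp, round(expected_value(uniform, opp), 6))

# An exploitative strategy can do better against a biased opponent,
# but it is no longer safe against every opponent.
always_paper = [0, 1, 0]
print(expected_value(always_paper, [0.8, 0.1, 0.1]))  # +0.7 vs rock-heavy
print(expected_value(always_paper, [0.1, 0.1, 0.8]))  # -0.7 vs scissors-heavy
```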
Now, there's also exploitative poker. The idea here is that you're trying to spot weaknesses in how the opponent plays. For example, maybe they fold too often whenever you bluff.

So you start adapting from the game theory optimal balanced strategy of bluffing sometimes to playing a very unbalanced strategy. That's like, "Oh, I'm just going to bluff a ton against this person because they always fold whenever I bluff."
Now, the key is that there’s a trade-off here. If you’re taking this exploitative approach, then you’re opening yourself up to exploitation as well. So you have to choose this balance between:
Playing a defensive game theory optimal policy that guarantees you’re not going to lose, but might not make you as much money as you potentially could.
Playing an exploitative strategy that can be much more profitable but also creates weaknesses that the opponents can take advantage of and trick you.
There’s no way to perfectly balance the two. It’s kind of like in rock, paper, scissors. If you notice somebody is playing paper for five times in a row, you might think, “Oh, they have a weakness in their strategy. I should just be throwing scissors and I’m going to take advantage of them.”
So on the sixth time you throw scissors, but actually, that’s the time when they throw rock. You never really know. So you always have this trade-off.
My background is that I worked on AI for poker for several years during grad school and made the first superhuman no-limit poker AIs. The poker AIs that have been extremely successful took this game theory optimal approach, where the AIs would play this unbeatable strategy, and they would play against the world's best and beat them.
Now, that also means they beat the world’s worst; they would just beat anybody. But if they were up against a weak opponent, they might not beat them as severely as a human expert might, because the human expert would know how to adapt from the game theory optimal policy to exploit these weak players.
There’s this kind of unanswered question of: how do you make an exploitative poker AI? A lot of people have pursued this research direction. I dabbled in it a little bit during grad school, and I think fundamentally it just comes down to AIs not being as sample efficient as humans.
We discussed earlier that if a human is playing poker, they’re able to get a really good sense of the strengths and weaknesses of a player within a dozen hands. It’s honestly really impressive. Back when we were working on AI for poker in the mid-2010s, these AIs would have to play like 10,000 hands of poker to get a good profile of who this player is, how they’re playing, and where their weaknesses are.
Now, I think with more recent technology, that has come down. But still, the sample efficiency has been a big challenge.
What's interesting is that after working on poker, I worked on Diplomacy. I think we talked about this earlier. Diplomacy is this seven-player negotiation game. When we started working on it, I took a very game theory approach to the problem. I felt like, "Okay, it's kind of like poker: you compute this game theory optimal policy, and if you just play this, you're not going to lose in expectation, and you're going to win in practice." But that actually doesn't work in diplomacy. Why it doesn't work is, again, a question of how much of a rabbit hole we want to go down. But basically, when you're playing two-player zero-sum games, like poker, game theory optimal works really well. When you're playing a game like diplomacy, where you need to collaborate and compete, and there's room for collaboration, then game theory optimal actually doesn't work that well. You have to understand the players and adapt to them much better.
So this ends up being very similar to the problem in poker of how do you adapt to your opponents? In poker, it’s about adapting to their weaknesses and taking advantage of that. In diplomacy, it’s about adapting to their play styles. It’s kind of like, if you’re at a table and everybody’s speaking French, you don’t want to just keep talking in English; you want to adapt to them and speak in French as well.
That's the realization that I had with diplomacy: we needed to shift away from this game theory optimal paradigm towards modeling the other players, understanding who they are, and then responding accordingly. In many ways, the techniques that we developed in diplomacy are exploitative. Or rather, they're not really exploitative; they're just adapting to the opponents, to the other players at the table.
But I think the same techniques could be used in AI for poker to make exploitative poker AIs. If I didn’t get AGI-pilled by the incredible progress that we were seeing with language models, and like shifting my whole research agenda to focusing on general reasoning, probably what I would have worked on next was making these exploitative poker AIs. It would be a really fun research direction to go down. I think it’s still there for anybody that wants to do it.
I think the key would be taking the techniques that we use in diplomacy and applying them to things like poker.

To me, the core piece is when you play online, you have a HUD, which tells you all these stats about the other player, such as how much they participate preflop, etc. To me, a lot of these models, from my understanding, are not really leveraging the behavior of the other players at the table. They're just kind of looking at the board state and working from there.
That’s correct. The way the poker AIs work today, they’re just kind of sticking to their pre-computed GTO strategy and they’re not adapting to the other players at the table. You can do various hacky things to get them to adapt, but they’re not very principled, and they don’t work super well.
Any grad students listening, if you want to work on that, I think that is a very reasonable research direction that’ll at least get in front of you and get some attention.
The other thing that this conversation brings up for me is that one of the hypotheses for what the next step is after test time compute is world models. Is world modeling important or a worthwhile research direction? Yann LeCun has been talking about this nonstop, but basically, LLMs have internal world models, just not an explicit world model.
I think it’s pretty clear that as these models get bigger, they have a world model and that world model becomes better with scale. So they are implicitly developing a world model. I don’t think it’s something that you need to explicitly model. I could be wrong about that.
When dealing with people or multiple agents, it might be necessary, because you have entities that are not just the environment, and you're resolving hypotheses about which of many types of entities you could be dealing with. There was a long debate in the multi-agent AI community about whether you need to explicitly model other agents, like other people, or if they can be implicitly modeled as part of the environment.
For a long time, I took the perspective that of course you have to explicitly model these other agents because they’re behaving differently from the environment. They take actions, they’re unpredictable, and they have agency. But I think I’ve actually shifted over time to thinking that if these models become smart enough, they develop things like theory of mind. They develop an understanding that there are other agents that can take actions and have motives.
These models just develop that implicitly with scale and more capable behavior broadly. So that's the perspective I take these days. So what I just said was an example of a heuristic that is not bitter-lesson-pilled, and it just goes away.
Yeah. It’s really all come back to the bitter lesson. Got to cite them every podcast.
One of the interesting and most consistent findings, you know, I think you were at ICLR, and one of the hit talks there was about open-endedness. This guy, Tim, who gave that talk, has been doing a bunch of research about multi-agent systems too. One of the most consistent findings is that it's better for AIs to self-play and improve competitively as opposed to humans training and guiding them. You saw that with AlphaZero and R1-Zero, or whatever that was called.
Do you think this will hold for multi-agents like self-play to improve better than humans?
Yeah. So, okay, this is a great question. I think this is worth expanding on.
A lot of people today see self-play as the next step and maybe the last step that we need for superintelligence. If you’re following, you know, you look at something like AlphaGo and AlphaZero, we seem to be following a very similar trend, right?
The first step in AlphaGo was you do large-scale pre-training. In that case, it was on human Go games. With LMs, it’s pre-training on tons of internet data. That gets you a strong model, but it doesn’t get you an extremely strong model. It doesn’t get you a superhuman model.
The next step in the AlphaGo paradigm is you do large-scale test time compute or like large-scale inference compute. In that case, with MCTS, and now we have reasoning models that also do this large-scale inference compute, that boosts the capabilities a ton.
Finally, with AlphaGo and AlphaZero, you have self-play where the model plays against itself, learns from those games, and gets better and better. It just goes from something that’s around human-level performance to way beyond human capability.
It’s like these Go policies now are so strong that it’s just incomprehensible. What they’re doing is incomprehensible to humans. The same thing applies to chess.
We don’t have that right now with language models. It’s really tempting to look at that and say like, “Oh, well, we just need these AI models to interact with each other and learn from each other, and then they’re just going to get to superintelligence.”
The challenge - and I kind of mentioned this a little bit when I was talking about diplomacy - is that Go is this two-player zero-sum game. Two-player zero-sum games have this very nice property; when you do self-play, you are converging to a minimax equilibrium.
In two-player zero-sum games, such as chess, Go, and even two-player poker, what you typically want is what’s called a minimax equilibrium. This is that GTO policy, the policy where you’re guaranteeing that you’re not going to lose to any opponent in expectation.
In chess and Go, that’s pretty clearly what you want. Interestingly, when you look at poker, it’s not as obvious. In a two-player zero-sum version of poker, you could play the GTO minimax policy, and that guarantees that you won’t lose to any opponent on earth.
But, again, you’re not going to beat a weak player. You’re not going to make as much money off of them as you could if you instead played an exploitative policy.
So there’s this question of, “What do you want? Do you want to make as much money as possible or do you want to guarantee that you’re not going to lose to any human alive?”
What all the AI developers in these games have decided is: we're going to choose the minimax policy. Conveniently, that's exactly what self-play converges to. If you have these AIs play against each other and learn from their mistakes, they converge over time to this minimax policy, guaranteed.
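As a concrete illustration of that convergence claim, here is a minimal, hypothetical sketch in Python. It uses fictitious play, where each player repeatedly best-responds to the other's empirical average strategy, as a stand-in for whatever self-play setup the labs actually use; in a two-player zero-sum game like rock, paper, scissors, the average strategies converge toward the minimax (uniform) policy.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
A = np.array([
    [0, -1, 1],
    [1, 0, -1],
    [-1, 1, 0],
], dtype=float)

def fictitious_play(steps=50_000):
    """Each player best-responds to the opponent's empirical average strategy.
    In two-player zero-sum games, the average strategies converge to a
    minimax equilibrium (here, uniform 1/3 over rock, paper, scissors)."""
    row_counts = np.ones(3)  # start from a small uniform prior
    col_counts = np.ones(3)
    for _ in range(steps):
        row_avg = row_counts / row_counts.sum()
        col_avg = col_counts / col_counts.sum()
        row_counts[np.argmax(A @ col_avg)] += 1   # row player maximizes payoff
        col_counts[np.argmin(row_avg @ A)] += 1   # column player minimizes it
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_policy, col_policy = fictitious_play()
print(np.round(row_policy, 3))  # approaches [0.333, 0.333, 0.333]
print(np.round(col_policy, 3))
```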
But once you go outside of two-player zero-sum games, like in the case of diplomacy, that’s actually not a useful policy anymore. You don’t want to just have this very defensive policy, and you’re going to end up with really weird behavior if you start doing the same kind of self-play in things like math.
So, for example, what does it mean to do self-play in math? You could fall into this trap of like, “Well, I just want one model to pose really difficult questions and the other model to solve those questions.” That’s like a two-player zero-sum game.
The problem is that you could just pose really difficult questions that are not interesting. Like, ask it to do 30-digit multiplication. It's a very difficult problem for the AI models.
Is that really making progress in the dimension that we want? Not really. So, self-play outside of these two-player zero-sum games becomes a much more difficult, nuanced question.
So I think, and Tim kind of said something similar in his talk, that there are a lot of challenges in deciding what you're really optimizing for when you start to talk about self-play outside of two-player zero-sum games. My point is that this is where the AlphaGo analogy breaks down. Not that it breaks down entirely, but it's not going to be as easy as self-play was in AlphaGo.
So what is the objective function then for that? What is the new objective function?
Yeah, it’s a good question. And I think that that’s something that a lot of people are thinking about.
Yeah. I’m sure you are. One of the last podcasts that you did, you mentioned that you were very impressed by Sora. You don’t work directly on Sora, but obviously it’s part of OpenAI.
I think the most recent updates in that sort of generative media space is autoregressive image generation. Is that interesting or surprising in any way that you want to comment about?
I don’t work on image generation, so my ability to comment on this is kind of limited. But I will say, I love it. I think it’s super impressive. It’s like one of those things where you work on these reasoning models and you think, “Wow, we’re going to be able to do all sorts of crazy stuff like advanced science and solve agentic tasks and software engineering.”
And then there’s this whole other dimension of progress where you’re like, “Oh, you’re able to make images and videos now.” And it’s so much fun. That’s getting a lot more of the attention to be honest, especially in the general public.
And it's probably driving a lot more of the subscription plans for ChatGPT, which is great. But I think it's just kind of funny that, "Yeah, we're also, I promise, working on superintelligence."
Uh, I think that the delta for me was that I was actually harboring this thesis that diffusion was over because of autoregressive image generation. There were rumors about this at the end of last year. And obviously now it's come out. Then Gemini comes out with text diffusion and it's like diffusion is so back.
So these are two directions, and it's very relevant for inference: autoregressive versus diffusion. Do we have both? Does one win?
The beauty of research is that you have to pursue different directions. And it’s not always going to be clear what is the promising path. I think it’s great that people are looking into different directions and trying different things. I think that there’s a lot of value in that exploration, and I think we all benefit from seeing what works.
Any potential in diffusion reasoning? Let’s say your channel.
Probably can’t answer that.
Okay.
So you did a master’s in robotics too. We’d love to get your thoughts on, you know, OpenAI kind of started with the pen spinning trick and like the robotic arm they wanted to build.
Is it right to work on these humanoids? Do you think that's kind of like the wrong embodiment of AI, outside of the usual, "How long until we get robots," blah, blah, blah?
Is there something that you think is fundamentally not being explored right now that people should really be doing in robotics?
I did a master’s in robotics years ago, and my takeaway from that experience — first of all, I didn’t actually work with robots that much. I was technically in a robotics program.
I played around with some Lego robots my first week of the program. But then honestly, I just quickly shifted to working on AI for poker. I was only nominally in the robotics master's.
But my takeaway from interacting with all these roboticists and seeing their research was that I did not want to work on robots because the research cycle is so much slower and so much more painful when you’re dealing with physical hardware.
Software goes so much more quickly, and I think that’s why we’re seeing so much progress with language models and all these virtual coworker kind of tasks, but haven’t seen as much progress in robotics. That physical hardware just is much more painful to iterate on.
On the question of humanoids, I don’t have very strong opinions here because this isn’t what I’m working on, but I think there’s a lot of value in non-humanoid robotics as well.
I think drones are a perfect example where there's clearly a lot of value. Is that a humanoid? No. But in many ways, that's great. You don't want a humanoid for that kind of technology. I think non-humanoids provide a lot of value.
I was reading Richard Hamming's The Art of Doing Science and Engineering, and he talks about how when you have a new technological shift, people try to take the old workloads and replicate them in the new technology versus actually changing the way you do things.
When I see this video of a humanoid in the house, it's like, "well, the human shape has a lot of limitations that could actually be improved." But I think people prefer what's familiar. It's like, "would you put a robot with 10 arms and five legs in your house, or would it be eerie at night when you get up and see that thing walking around?"
Is that why we use humanoids? To me, there's almost this local maximum of, "we got to make it look like a human," but I think the best shape for a robot in the house might not be a human shape at all. Then again, I would be terrible at product design. So, I am not the person to ask about this.
I think there is a question of whether it is better to make humanoids because they are more familiar to us, or worse because they are similar to us but not quite identical. I don’t know which one I would actually find creepier.
The thing that got me humanoid pilled a little bit was just the argument that most of the world is made for humans anyway. So if you want to replace human labor, you have to make a humanoid. I don’t know if that’s convincing.
Again, I don't have very strong opinions in this field because I don't work in it. I was weakly in favor of humanoids. What really persuaded me to be weakly in favor of non-humanoids was listening to the Physical Intelligence CEO and some of his pitches about why they are pursuing non-humanoid robotics.
Conveniently, their office is actually very close to here, so if you wanted to…
They’re speaking at the conference I’m running.
I'm looking forward to that. So, I'd say he's one person I'd point people to.
The other one I would refer people to is Jim Fan, who recently did a talk on the physical Turing test at the Sequoia conference. It was very good. He's such a great educator and explainer of things. That's very hard, especially in that field.
Cool. We’re done asking you about things that you don’t work on. These are just more rapid fires to explore some of your boundaries and get some quick hits.
How do you or top industry labs keep on top of research? Like, what are your tools and practices?
It’s really hard. Many people have this perception that academic research is irrelevant, and that’s actually not the case. We look at academic research. I think one of the challenges is that a lot of academic research shows promise in their papers but then actually doesn’t work at scale or even doesn’t replicate.
If we find interesting papers, we’re going to try to reproduce that in-house and see if it holds up and also does it scale well. That is a big source of inspiration for us.
Similarly, I keep track of things happening in my space that I think are interesting. If I think it’s really interesting, maybe I’ll share it. For me, it’s just WhatsApp and Signal group chats with researchers, and that’s it.
A lot of people look at things like Twitter, and I think it’s really unfortunate that we’ve reached this point where things need to get a lot of attention on social media to be paid attention to.
“That’s what the grad students are trained for.” They’re taking classes to do this.
I do recommend this to grad students: when I worked with them, I would tell the grad students I was working with that they need to post it on Twitter, and we would go over the Twitter thread about how to present the work and everything.
There’s a real art to it, and it does matter. It’s kind of the sad truth.
I know when you were doing the ACPC, like the AI poker competition, you mentioned that people were not doing search because they were limited to two CPUs at inference.
Do you see similar things today that are keeping interesting research from being done? Research that might not be as popular, that doesn't get you into the top conferences. Are there some environmental limiters?
Absolutely. One example is benchmarks. You look at things like Humanity's Last Exam, where you have these incredibly difficult problems that are still very easily gradable. I think that actually limits the scope of what you can evaluate these models on.
If you stick to that paradigm, it’s very convenient because it’s very easy to score the models. However, a lot of the things that we want to evaluate these models on are more fuzzy tasks that are not multiple-choice questions. Making benchmarks for those kinds of things is much harder and probably also a lot more expensive to evaluate. But I think those are really valuable things to work on.
That fits the same mold. GPT-4.5 is like a high-taste model in a way. There are all these non-measurable things about a model that are really good that maybe people are not recognizing.
I think there are things that are measurable, but they’re just much more difficult to measure. Many benchmarks have stuck to this paradigm of posing really difficult problems that are easy to measure.
So let’s say that the pre-training scaling paradigm took about five years from the discovery of GPT to scaling it up to GPT-4. If we give you test time compute five years as well, what would be the probable cause if test time compute hit a wall by 2030?
It's very similar to pre-training. You can push pre-training a lot further; it just becomes more expensive with each iteration. I think we're going to see something similar with test time compute, where instead of thinking for three minutes, we're going to get the models thinking for three hours, then three days, and then three weeks.
Oh, you run out of human life.
Well, there are two concerns. One, it becomes much more expensive to get the models to think for that long or to scale up test time compute. As you scale up test time compute, you’re spending more on it, meaning there’s a limit to how much you could spend. That’s one potential ceiling.
I should say that we are also becoming more efficient; these models are becoming better at thinking as they are able to do more with the same amount of test time compute. I believe that’s an underappreciated point. It’s not just that we’re getting these models to think for longer.
For instance, if you look at O3, it's thinking for longer than O1 for some questions, but it's not a radical difference, and yet it's way better. Why? Because it's just becoming better at thinking.
Anyway, these models, when you scale up test time compute, you can only scale it up so much. That becomes a soft barrier in the same way that pre-training is becoming more and more expensive to train better and bigger pre-trained models.
The second point is that as you have these models think for longer, you get bottlenecked by wall clock time. If you want to iterate on experiments, it is really easy to iterate on experiments when these models respond instantly.
It’s much harder when they take three hours to respond. What happens when they take three weeks? It takes at least three weeks to do those evaluations and then iterate on that.
A lot of this you can parallelize to some extent, but much of it requires running the experiment completely and then seeing the results to decide on the next set of experiments. I think this is actually the strongest case for long timelines; the models have to do so much in serial time that we can only iterate so quickly.
How would you overcome that wall?
It's a challenge, and I think it depends on the domain. In drug discovery, I believe this could be a real bottleneck. If you want to see if something extends human life, it's going to take a long time to figure out whether this new drug you developed actually extends human life and doesn't have terrible side effects along the way.

Side note, do we not have perfect models of human chemistry and biology by now?
Well, so this is, I think, the thing. And again, I want to be cautious here because I'm not actually a biologist or a chemist. I know very little about these fields. The last time I took a biology class was in 10th grade in high school. I don't think that there's a perfect simulator of human biology right now, and I think that that's something that could potentially help address this problem.

That's like the number one thing that we should all work on.
Well, that’s one of the things that we’re hoping that these reasoning models will help us with.
Yeah. How would you classify mid-training versus post-training today?
All these definitions are so fuzzy. I don't have a great answer there.

It's a question people have, and OpenAI is now explicitly hiring for mid-training, and everyone is like, "What the hell is mid-training?"
I think mid-training is between pre-training and post-training. It’s not post-training. It’s not pre-training. It’s like adding more to the models after pre-training. I don’t know. In interesting ways.
Okay. All right. Well, you know, I was trying to get some clarity.
So the pre-trained model now is basically just the artifact that then spawns other models. It's almost like the core pre-training model is never really exposed anymore; mid-training is the new pre-training, and then there's the post-training. Once you have the models branched out, you never interact with an actual raw pre-trained model. If you're going to interact with the model, it's going to go through mid-training and post-training. So, you're seeing the final product.
Well, you don't let us do it, but you know, we used to…

Well, yeah, I mean, I guess you know there are open source models where you can just interact with the raw pre-trained model.
But for OpenAI models, they go through a mid-training step and then they go through a post-training step, and then they’re released. They’re a lot more useful. Frankly, if you interacted with only the pre-trained model, it would be super difficult to work with, and it would seem kind of dumb.
Yeah, but it'd be useful in weird ways, you know, because there's a mode collapse when you post-train it for chat.
Yeah. And in some ways, you want that mode collapse. You want that collapse in order to be useful.
Yeah, I get it.
We’re interviewing Greg Brockman next. You’ve talked to him a lot. What would you ask him?
What would I ask Greg? I mean, I get to ask Greg all the time, but what should you ask Greg?
Um, like to evoke an interesting response about something that he doesn't get asked enough about, but that he's passionate about, or that you just want his thoughts on.

I think in general, it's worth asking where this goes, you know? Like what does the world actually look like in five years? What does the world look like in ten years? What does that distribution of outcomes look like? And what could the world or individuals do to help steer things towards the good outcomes instead of the negative outcomes?
Okay. Like an alignment question.
I think people get very focused on what’s going to happen in like one or two years. I think it’s also worth spending some time thinking about like, well, what happens in five or ten years? And what does that world look like? I mean, he doesn’t have a crystal ball, but he certainly has thoughts. Yeah. So I think that’s worth exploring.
Okay. What are games that you recommend to people, especially socially?
Oh, what are games that I recommend to people? I’ve been playing a lot of this game called Blood on the Clock Tower lately.
Um, what is it? It’s kind of like mafia or werewolf. It’s become very popular in San Francisco.
Oh, that’s the one played in your house.
Yeah.
Okay. Got it.
It’s kind of funny because I was talking to a couple of people now that have told me that it used to be that poker was the way that the VCs and tech founders would socialize with each other. And actually now it’s shifting more towards Blood on the Clock Tower. That’s the thing that people use to connect in the Bay Area.
I was actually told that a startup held a recruiting event that was a Blood on the Clock Tower game.
Wow.
Yeah. So, I guess it's really catching on, but it's a fun game, and I guess you lose less money playing it than you do playing poker. So it's better for people that are not very good at these things.

I think it's kind of a weird recruiting event, but it's certainly a fun game. What qualities make a winner here that would be interesting to hire for?
That's the thing: you get the ability to lie, to deceive, and to pick up on deception. Is that the best employee? I don't know.
So my slight final pet topic is Magic: The Gathering. We talked about some of these games, chess and Go, and they have perfect information. Then you have poker, which is imperfect information in a pretty limited universe. You only have a 52-card deck. And then you have these other games that have imperfect information with a huge pool of possible options.
Do you have any idea of how much harder that is? How does the difficulty of this problem scale?
I love that you asked that because I have this huge store of knowledge on AI for imperfect information games. This is my area of research for so long, and I know all these things, but I don’t get to talk about it very often.
We’ve made superhuman poker AIs for No Limit Texas Hold’em. One of the interesting things about that is that the amount of hidden information is actually pretty limited because you have two hidden cards when you’re playing Texas Hold’em. The number of possible states that you could be in is 1,326 when you’re playing Heads Up at least.
This number is multiplied by the number of other players at the table, but it's still not a massive number. The way these AI models work is that you enumerate all the different states that you could be in. For example, if you're playing six-handed poker, there are five other players, so you're looking at roughly five times 1,326 possible hands to consider.
Then you assign a probability to each one, and you feed those probabilities into your neural net, and you get actions back for each of those states.
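Here is a minimal sketch, in Python, of what that enumerate-and-weight idea can look like; it's an illustration, not the architecture of any of the actual poker bots. The `policy` function is a hypothetical stand-in for whatever neural net would map a hidden hand plus board to an action distribution.

```python
import itertools
import math

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]  # 52 cards

# Every possible pair of hidden hole cards: C(52, 2) = 1,326 states.
hole_card_states = list(itertools.combinations(DECK, 2))
assert len(hole_card_states) == math.comb(52, 2) == 1326

# A belief: one probability per hidden state, uniform to start.
belief = {state: 1.0 / len(hole_card_states) for state in hole_card_states}

def policy(state, board):
    """Hypothetical stand-in for a learned policy: returns an action
    distribution for this hidden hand on this board."""
    return {"fold": 0.2, "call": 0.5, "raise": 0.3}

# Enumerate every hidden hand, weight its action distribution by how
# likely we believe that hand is, and skip hands that conflict with
# the visible board cards.
board = ["Ah", "Kd", "7c"]
weighted = {"fold": 0.0, "call": 0.0, "raise": 0.0}
for state, prob in belief.items():
    if any(card in board for card in state):
        continue
    for action, p in policy(state, board).items():
        weighted[action] += prob * p

total = sum(weighted.values())
print({action: round(p / total, 3) for action, p in weighted.items()})
```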
The problem is that as you scale the number of hidden possibilities, the number of possible states you could be in, that approach breaks down. There is still this very interesting unanswered question:
What do you do when the number of hidden states becomes extremely large?
If you go to Omaha poker, where you have four hidden cards, there are heuristic methods you could use to reduce the number of states, but actually, it’s still a very difficult question.
Then, if you go to a game like Stratego, where you have 40 pieces, there are close to 40 factorial different states you could be in. In this case, all the existing approaches we used for poker kind of break down, and you need different methods.
There is a lot of active research going on about how to cope with that. For something like Magic: The Gathering, the techniques that we used in poker would not, out of the box, work. It remains an interesting research question:
What do you do?
Now, I should say this becomes a problem when you’re doing the kinds of search techniques that we used in poker. If you’re just doing model-free RL, it’s not a problem. My guess is that if somebody put in the effort, they could probably make a superhuman bot for Magic: The Gathering now.
Yes, there are still some unanswered research questions in that space.
Now, are they the most important unanswered research questions? I’m inclined to say no. The techniques that we used in poker to do this kind of search stuff are pretty limited. If you expand those techniques, maybe you get them to work on things like Stratego and Magic: The Gathering, but they’re still going to be limited.
They're not what gets you superhuman performance with language models on Codeforces. So I think it's more valuable to focus on the very general reasoning techniques. One day, as we improve those, I think we'll have a model that, just out of the box, plays Magic: The Gathering at a superhuman level. I believe that is a more important and impressive research direction.
Cool. Amazing.
Thanks so much for coming on, Noam.
Thanks for your time.
Yeah, thanks.
Thanks for having me.
2025-06-18 08:00:01
How Middle Powers Are Navigating the U.S.–China Rivalry
The China Global South podcast is supported in part by our subscribers and Patreon supporters. If you’d like to join a global community of readers for daily news and exclusive analysis about Chinese engagement in Asia, Africa, and throughout the developing world, go to ChinaGlobalSouth.com/subscribe.
Hello and welcome to another edition of the China Global South podcast, a proud member of the Seneca Podcast Network. I’m Eric Olander, and as always, I’m joined by CGSP’s managing editor, Kobus van Staden, in beautiful Cape Town, South Africa. A very good afternoon to you, Kobus.
Good afternoon.
Kobus, there is a lot going on this week. In fact, more than we can talk about in today’s show, but three very big stories that we’ve been covering at CGSP quite closely.
First and foremost, the war between Israel and Iran. We dedicated our Monday edition of the newsletter to China’s role in this. Very interesting, Kobus, that in this conflict, China came out very quickly and did not pretend to try and be a neutral arbiter, did not pretend to be kind of nonpartisan at all.
Well, they came out very quickly, backed Iran on this, framed the Israelis as the aggressor, and then also positioned the United States as manipulating all of this, which is par for the course in a lot of these types of incidents.
We’re also following another very, very big story. Xi Jinping arrived in Central Asia in Kazakhstan this week for the China plus the Central Asia 5 summit, the C plus C5 summit. He also had a bilateral on Monday with the president of Kazakhstan, and we provided coverage on that.
And finally, Kobus, I want to get your take on a very big event that took place in Changsha, the capital of the central Chinese province of Hunan. There is, I think, the fourth China-Africa Trade Expo underway there.
And again, the symbolism of this trade expo comes at a time when the United States is proposing to add dozens more countries, many from Africa, to its travel ban list. Then the U.S. Congress also formalized the end of PEPFAR and a number of these aid programs. So two very divergent directions that the United States and China are going in. But all three are big stories that have been on our radar this week.
Yeah, the trade expo in Changsha is big. They signed about $11.4 billion worth of deals, around 170 deals. A bunch of deals and letters of intent and so on on the side as well. Also, a lot of high-level ministerial meetings. So Wang Yi was there and he was meeting people left and right.
So it's very interesting to see, as you say, that it's happening on the back of a possible travel ban, and on U.S. plans for a possible leaders' summit in, I think, September. With a travel ban, will the Nigerian president even be allowed into the country? You know, let's see.
Well, I mean, it’s just, there’s such a different tone between what the U.S. and what the Chinese are doing. Again, we’re not going to take a side on which one’s better because the United States says they’re launching a new Africa strategy that’s more transaction-based, that’s focused on deals. Yet the irony, as you’re pointing out, is the new U.S. policy looks a lot like the old Chinese policy. And we’ve just seen that on full display in Changsha this week.
What’s your take on the situation between Iran and Israel? One of the things that I wrote about on Monday is the stakes for China are exceedingly high in this conflict. I think a lot of people may not fully appreciate how over the past 15 years, there has been this steady shift away from Africa towards the Persian Gulf for Chinese oil buying, to the point now where about a third to 45 percent, again, the numbers vary quite a bit, of Chinese imported energy passes through the Strait of Hormuz and comes from Persian Gulf countries.
A lot of that’s from the Saudis, the Qataris, the Emiratis, and, of course, Iran as well. If Iran does follow through with its threat, unlikely as it may be, but if it does follow through on its threat to shut the Strait of Hormuz, that would have an immediate impact on Chinese industry that relies on imported energy.
And so not only in the diplomatic realm where China is coming out so strongly in favor of Iran, but also there are very steep economic consequences for the Chinese. Those consequences make me wonder whether Iran will try and hold back on that as long as they can, because, of course, it would directly hit their own trade with China as well.
All of this, you know, I think it is interesting to see Xi Jinping's visit to Central Asia in the context of the Iran-Israel war, because, of course, Turkmenistan, one of the C5 countries, shares a border with Iran. So this Central Asian coordination, economic coordination, also comes against the background of massive overland logistics corridors that are being built exactly to avoid this Strait of Hormuz dilemma that China faces in the broader landscape.
So there’s this economic integration happening between China and Central Asia, but also, I think a certain amount of security colored hedging. I think that’s happening not only in relation to Israel and Iran, but also in relation to other kinds of security issues on the other side of that same zone, for example, between Afghanistan, Pakistan and India.
So China's outreach to Central Asia has this double economic-security valence at a moment when it's also trying to increase its influence in the area as Russia's influence is shifting and Russia is kind of distracted.
It’s interesting that you bring up how these issues between Central Asia and the Persian Gulf touch on each other and overlap. We saw some discord earlier this week within the Shanghai Cooperation Organization (SCO), and the SCO, if you’re not familiar with it, is one of these groups that the Chinese initiated along with the Russians in order to challenge, counter U.S.-led international organizations and the U.S.-led international order.
So put this in the category with the BRICS, also the Asian Infrastructure Investment Bank, the New Development Bank. This is all part of this new emerging parallel international governance architecture.
Well, these groups all operate on a consensus basis. That is, they never publish a statement or a communique unless all of the members sign on to it. That is the claim. And that's really a foundational difference between these groups and what the Chinese accuse the United States of, which is engaging in unilateralism.
OK, so it was very interesting over the weekend that the SCO issued a statement condemning Israel for the missile attacks on Iran. But the Indians were not part of that discussion. Now, remember, Pakistan is part of the SCO and so is Iran and obviously China.
And so the Indians issued a very rare statement that said, “listen, guys, we were not part of this statement. This was not appropriate at all.” And I thought that was very interesting, Kobus, that these groups like BRICS and the SCO are now starting to show some growing pains.
We saw some discord within the BRICS over some of the Africa decisions that were going to be made. And that, again, lots of African members within the BRICS. Now we’ve seen this flare-up of tensions over the statement on Israel. So just an interesting data point to follow in these emerging groups that are trying to challenge the U.S.-led international system.
I think one factor there is an ongoing long-term relationship between India and Israel specifically. That is a very interesting dynamic that I don't know enough about personally, but that has been ongoing. Of course, then the larger historical tensions between India and Pakistan are obviously at play.
Overall, I think these bodies are – this is an interesting inflection point, I think, for these bodies. You know, overall, a body like BRICS, for example, has functioned – like they’ve gotten a lot of mileage out of member states being able to put these ongoing beefs between them kind of to the side and then using BRICS as a space for other kinds of coordination. And I think it’s been quite successful in that way.
But there’s obviously a limit to how far that can go. So it’s going to be interesting to see whether it and other similar organizations are going to have to find some form of mediation mechanism or something to kind of work out some of these disputes at some stage. And it’s not going to get easier as these groups get bigger.
So Vietnam last week was admitted as a partner country to the BRICS. Wonderful for Vietnam. We now have four Southeast Asian countries in the BRICS: Indonesia, Malaysia, Thailand, and now Vietnam. But, again, as things get bigger, there are more complicated politics.
In some ways, Kobus, it reminds me of how an opposition party is used to sitting in the back bench of parliament kind of throwing darts at the incumbent leadership. And then all of a sudden, that opposition becomes the administration and they gain power. And then governing is very different than serving as an opposition.
So in some ways, these groups are starting to move into the forefront, challenged by major real world issues that have consequences to them, like the Iran-Israel conflict. These are difficult things. But finding that consensus is going to be, I think, more challenging, not less, as groups get larger.
But all of this, what's going on, brings into focus the role of these middle power countries. And whether it's at the SCO, whether it's at the BRICS, or even for those who are not part of these groups.
But there are really difficult challenges being put on presidents, prime ministers, and foreign ministries throughout the global south on how to adapt to not only the great power competition between U.S. and China, but now these unfolding events.
Obviously, we’ve got Israel-Gaza.
We have Israel-Iran now.
We’ve got Russia and Ukraine, and any number of these big events.
So the folks at the South African Institute of International Affairs, your former stomping ground, together with scholars around the world, did a survey last year on foreign policies in South Africa, Brazil, India, and Germany as well.
I think there were a number of different partners. That's why the Germans were there. And SAIIA was one of the partners among many.
They looked at how these middle powers are adapting to these rapid changes in the international environment.
Now, just a quick disclaimer.
Number one, the research for this was done last year. It was done before Donald Trump or just around the time of Donald Trump. Certainly, it did not take into account what we’re seeing right now.
But the themes and the principles that were articulated in the report seem to be extending far beyond just these current moments right now.
So, Kobus, when we look at this combination of countries—South Africa, Brazil, India, obviously three BRICS countries, and then Germany’s attitude—what’s interesting about contrasting these middle powers with Germany is it shows there’s a huge contrast in perceptions in countries in the global north and those in the global south.
Yes. And, you know, in some ways, Germany is also a kind of a classic middle power. But the survey shows that there are very distinct differences between global south middle powers and global north middle powers in relation to these issues.
It is very interesting to see, once one breaks down these different views, because this was a survey of foreign policy professionals.
Some diplomats, but particularly a lot of analysts, academics, and so on.
They were polling them in relation to what they saw, for example, as foreign policy priorities in their region and globally and so on.
So, it provides a really fascinating glimpse of how this kind of elite layer of people actually sees the world.
The report is named “Emerging Middle Powers 2025: Momentum for Middle Powers”.
Again, it looks at South Africa, Brazil, India, and Germany.
We had the pleasure, before the Iran-Israel war—so we just bear that in mind in our discussion—we had the pleasure to talk with two of the contributors to the report.
Manjit Kripalani, who is the executive director of Gateway House, the Indian Council on Global Relations, and Carlos Coelho, who is a professor at Pontifical Catholic University in Rio de Janeiro.
We talked to them about the report and all of the key themes.
Let’s take a listen to our discussion with Manjit and Carlos.
Carlos, Manjit, thank you so much for taking the time to join us on the program today.
“Thank you. It’s a pleasure to be here.”
“Thank you so much. It’s a pleasure.”
Manjit, let’s start with you.
You surveyed foreign policy scholars in four countries: Germany, South Africa, Brazil, and India.
This is a fascinating time to do that kind of research, given the huge changes that we’re seeing in the international environment.
Give us the overview of what you found in this really ambitious study that you undertook.
“When we started this project, which is about, I would say, two and a half, three years ago, we thought this was just a foundation in Germany being forward-looking and really doing some deep thinking about why the world has changed so much and why people don’t have the same kind of empathy to the West as they did before.
And the one wonderful thing was that the questions really for the survey were set by us, by Carlos, by us.
So it ended up being a report for really everybody, the rest of the world, the West.
It wasn’t just the West telling us what to do. It was wonderful.
And we thought that we found initially that there were some restraints about how people were responding, that maybe it wasn’t—I mean, people in Germany were very concerned about the Russia-Ukraine war.
And then you had a burst of candidness from the other side of the world that said, ‘It’s not our problem. We have other problems, and we have a problem to deal with.’
So, let me tell you how this worked.
We found that over the years, even this year with this report, this has kind of accentuated, and the world is actually evening out.
It’s becoming a less imbalanced place than it used to be, according to me, from the report.
Because there is a part of the world that has now been empowered to speak up and speak out.
And there’s a voice that has been articulated.” And surveys like this simply help to further articulate this voice. So the survey is a real service for people to start articulating from different parts of the world how they feel about a certain topic in a world that is transitioning.
Carlos, one of the really interesting big picture takeaways for me from the report is quite different views of the role of the United States. About 60% of Brazilian and South African researchers thought that the U.S.’s net influence in the world is largely negative, whereas 75% of Germans thought it was largely positive.
So I was wondering, do you see this essentially as a kind of a global north-global south split of opinion? I think so. And I think it also shows maybe some of the built-up frustration from several decades of the United States as a reference point, especially in multilateral institutions.
There's a phrase here in Portuguese that says, "if you live long enough, you get to see everything." I don't know if that translates well into English. But right now what we're seeing is that, in the big competition between China and the United States, the staunch defender of multilateralism is China. The staunch defender of international law is also China. I certainly wasn't prepared for that when I started my studies in international relations.
I think there is a growing sense of frustration, but also a growing sense of opportunity from multi-alignment when it comes to countries like Brazil, South Africa, and I’ll leave India to Manjit, but I will include India as well. We have a brave new world coming up. We don’t know what it will be shaped like in 20, 30, or 40 years. But we can see that we’re in a changing international society, and that brings a lot of opportunities.
Carlos, we don’t know what it’s going to be like next week, much less 20 or 30 years from now. So Kobus asked about the United States, and Manjit, I would like to get your take on that from India, but I’d also like to focus on China, given that that’s the focus of our program.
Your findings said that a much larger share of respondents in Brazil and South Africa view China's global influence positively, compared with 33% in India and 22% in Germany. That number in India today is probably lower than 33%, given the events that happened between Pakistan and India over Kashmir, and China's unconditional support of the Pakistanis in that conflict.
But I’d like both of you to reflect on this China question and how it is, again, starkly divergent between Germany, India, and Brazil and South Africa.
Manjit, let’s start with you, and then Carlos, I’d love to get your take on it.
Well, first, the only country that actually shares a border with China is us. So we bear the brunt of China on a daily basis. It's bombarding us with its products, legally through our borders or illegally through other borders. And secondly, they've also surrounded us.
Now, it’s Pakistan. All the countries around India are in debt to China because of the Belt and Road. Bangladesh, which was doing okay, has been destabilized by none other than our friend the United States. So that was a real problem of this last year, and China is happily fishing in troubled waters. We have a real problem with China, and that opinion of China is never going to get any better.
The Chinese understand that.

It was on its way this year to getting better, though. Prior to the conflict, we saw ambassadors back in each other's countries, direct flights starting to resume, student exchanges starting to happen. There was an upswing after the four-year freeze that followed the 2020 Galwan incident. So when you say it's never going to get better, we did see some improvement for about a year.

We did, and part of it was actually Indian companies that had imported Chinese products. They needed Chinese technicians to come and help make those work or repair them, etc. As you know, India is in a huge infrastructure build-out. Unfortunately, China has no role.
The way we work it is that we import machinery. So it comes from Siemens, but since Siemens manufactures in China, that's how it comes through. But we don't really directly import that much from China. At least officially, we don't, because there's a ban against it. So it was getting a little bit better. There were dialogues. There was conversation.
And then at Kazan, at the BRICS conference in Kazan last year in October, I think President Putin and the Brazilians really intervened to get us to talk to each other. And then things got better. But now we know, we know that China does not change. They continue to want to undermine India because the truth is, Eric, that there is only one country in the world that can actually challenge China in terms of size, in terms of economy, and in terms of smarts.
I mean, the West is very smart; it had an industrial revolution. But really, China is very insecure about India, and it will do whatever it can to undermine India. It does so in a way that's a little bit vicious. The Chinese look down on us as brown people, and when you go to China, it's very visible. We don't look at the Chinese one way or the other; they're just another Asian country. But the Chinese need a way to look down on Indians in order to feel superior.
Because the truth is that a lot of China's civilization and its knowledge came from India. We were at a dialogue and conversation about this last year. The Chinese sent their monks and their scholars to India to learn from India. This is only starting to be acknowledged in China now. And that, I think, is going to stop now that we're back to where we started in 2020.
Carlos, a very different perspective in Brazil on China. Maybe you can share your reflections on how respondents in Brazil view the Chinese, which is obviously going to be very different from what Manjit was finding in India.
Sure, well, China and India compete on several fronts, which is not the case for Brazil. Sometimes people forget that China has been the largest trading partner of Brazil since 2009. So, that has been for the last 16 years.
But we’ve also seen, in the last few years, a lot of Chinese investment in Brazil, particularly in different sectors of its infrastructure, especially in energy. There have also been state visits to China by past governments and the current government, and likewise the other way around. Brazil, by contrast, is not much on the radar of the United States.
With the European Union, there’s a historic issue with trade regarding agriculture, because 40% of the European Union’s budget goes towards the common agricultural policy, which is a major problem for us. Even now, with the tariffs that the United States is levying against several countries, the European Union, and when I say European Union it’s mostly the French, is not budging on the Mercosur-EU trade agreement.
So, when you add it all up, it’s only natural that Brazil is turning to China. The United States is having antagonistic measures. Europe is having antagonistic measures. We turn to China.
And then it brings another question, which I’ll stop here. Manjeet, one of the interesting data points on the Indian side has been that 52% of experts in India preferred neutrality in a larger geopolitical context. As I understand it, if I have that correct, that’s an increase over time.
So, I was wondering if you could talk a little bit about the neutrality issue and how that’s framed within India. This is something that Indians really feel very strongly about. As you know, this is the 70th year of the Bandung Conference, and Carlos and I were together in Bandung in April. It was really quite moving to see, so long ago, that what they call the third world, which is the alternate world, really envisioned an era of neutrality and non-alignment.
For India in particular, it came about because of the Russia-Ukraine war, as we’re so dependent on oil for our energy. We’re dependent on the Saudis, the Russians, and we were also dependent on the Americans. But when the war took place, the impact on our oil, fuel, fertilizer—everywhere, just really was a problem.
I think India did some good diplomacy and explained to the U.S. that we could not stop taking Russian oil because the whole country would shut down. Then there would be another major global problem on their hands.
So we worked it very well. We were able to take Russian oil. We refined it and bought it at a discount, which was great for us because it allowed our country to just run normally. We were also able to refine it and send it off to Europe, so it didn’t feel like it was coming directly from Russia, but it was coming from India.
This little duplicity actually worked very well for India to say that we are now a neutral country. For the first time ever, we kind of knew what Switzerland felt like. We need not take any side. You’re neither here nor there, and it’s a good feeling. We’re still that way; we’re still friends with Russia.
I’ll tell you something else, too. In the recent India-Pakistan conflict, the four-day conflict, there were three types of planes in play. One was the Rafale, the French aircraft made by Dassault. One was the Chinese J-10C. And there were the Russian S-400s; the American aircraft were nowhere to be seen. The Americans keep telling us to buy the F-35, but it’s just not a very affordable or high-performing aircraft. The Rafale didn’t do as well. The Chinese did okay. The Pakistanis leased two aircraft, but really the best were the Russian ones.
So right now in India, I’m pretty sure that when we make our next aircraft purchases, it’s not going to be European. It’s definitely not going to be Chinese. It’s going to be the Russian ones, the latest being the Su-57s, which are really good aircraft, perform well, and are battle-tested.
So to say that in terms of our neutrality, the U.S. made an article of faith that we should buy defense equipment from them in the trade agreements that we’re doing with the U.S. We’re not going to be able to take it. We’re going to have to do the Russian ones. But I think that we will be able to manage it because of our very hard-worked neutrality and our neutral stance.
Yeah, the performance of those aircraft is a point of contention among the various players. The Indians have not said how many of their jets were downed; they acknowledge that there was a downing. The Pakistanis say that six were downed. For a lot of people, this was what the Chinese were calling their DeepSeek moment, saying that the J-10Cs performed so well within a whole ecosystem of Chinese military tech in Pakistan.
So an interesting discussion there, especially with differing views in Pakistan and India on that story. I’d like both of you to step away from your home countries a little bit. Because India and Brazil are both very large countries that are able to assert themselves in ways that other middle powers are not.
All of this talk about non-alignment, about the spirit of Bandung, that’s great for India. That’s great for Brazil because you have the heft to do that. Panama does not have the heft to do that. Colombia does not have the heft. Neither does South Africa.
Vietnam is now in the crosshairs of the great power competition. It’s wonderful to say we don’t want to pick sides. But when the United States now literally says, “we want you to cut off your ties with China; we want you to cut off your trade with China or else you will sacrifice your relationship with us. You have to decide.” They are saying that outright. There’s no subtlety to what they’re saying.
Carlos, can you talk a little bit about what the spirit of the Bandung non-aligned conference looks like in this new era where those hard choices are being confronted by policymakers in middle-sized powers, much smaller than your own country?
So the proposal from the U.S., or what we’re hearing from the U.S., is very much not aligned with current circumstances. What we’re hearing might have worked 40 years, 30 years ago. Yet here we are, though. It may not work, but this is the reality that Panama is being confronted with. And the U.S. might end up very frustrated with the answers that it will get, as it is right now.
Vietnam will not stop doing business with China. Between China and the United States, if they had to choose, I don’t think they would choose the United States. We all understand the background: everything related to the tariffs has China as the target. It’s not about anything else. But the strategy doesn’t seem sound, because you’re doing it alone.
Is Europe prepared to turn their backs on Vietnam as well? Are all other countries turning their backs on the smaller countries as well?
Of course, the United States is still a very large player. The United States can still do a lot of harm, as we are seeing from the last few weeks. But I don’t think, at least when it comes to trade and to use trade as a geopolitical tool, the strategy seems a bit off.
China is the leading trade partner for more than 120 countries. So the idea that the U.S. can isolate these smaller countries is questionable. Obviously, because of its geographical position, Panama will suffer more than others. But the idea that the U.S. can simply force that choice, that we’re going back to a kind of Monroe Doctrine, it’s all relative.
Again, I would say, the United States is still a very, very large and very, very powerful country, but it doesn’t have the clout that it once had. And I think that has to be recognized.
Kobus, you’re in South Africa. You are in one of the countries that’s caught between this. What are you hearing there in terms of what Carlos and Manjeet are talking about and what the report kind of reflects on non-alignment? What is Cyril Ramaphosa thinking now that he’s back home again?
Well, I guess there’s some complications there. Like, on the one hand, a lot of it is very similar. What I’m hearing is very similar to what Carlos is saying: you can’t argue with just the weight of China as a major trading partner to all of these countries.
Particularly when it’s not like they’re being offered huge trade incentives by the U.S. Obviously, there are tariffs and so on, and they are trying to avoid the tariffs, but it’s not like the U.S. is offering the kind of China-style green lanes to try and increase trade from Africa. So you may be throwing away your largest trading partner, and there isn’t really an offer on the table to replace that. That makes it difficult.
In South Africa, the issue has now been so politicized by the actual optics of the interaction between Ramaphosa and Trump, and the white refugee issue. Another group of them moved from South Africa to the U.S., I think, this week. In South Africa specifically, I would guess that negative perceptions of the U.S. have probably increased among some of these foreign policy professionals since the time they were polled.
Carlos, staying with you, and then I’d love Manjeet to also chime in. Earlier, in one of your answers, you used the term multi-alignment. We recently spoke with Jorge Heine, who’s a big proponent of active non-alignment. I was wondering if you could talk a little bit about these terms.
Where are we now in the non-alignment conversation? How do we see active non-alignment versus normal non-alignment versus neutrality versus multi-alignment? That’s a wonderful question, to which I probably have a very frustrating answer.
I hear and learn more about multi-alignment by talking to Manjeet to understand the India perspective on this, because in Brazil, we always mention non-alignment and not multi-alignment. Now there is this discussion about active non-alignment. Look, I will take off the academic hat just to say that we are all talking about the same thing, which is how do we approach this complex world without tying ourselves to one party or another.
I really do think, and this has been a continuing conversation on our project with Manjeet, that when Indians talk about multi-alignment, it’s the same conversation that we have in Brazil when we say non-alignment. We could theorize over that, if you will, but in the end, we’re really just discussing tomato, tomahto.
I wouldn’t argue with that. Manjeet, we’d love to get your take. So, we’ve been thinking a lot about this. It’s non-aligned, multi-aligned, uni-aligned, everything. It’s become a mishmash of countries now saying, “We don’t want a unipolar world. We want a multipolar world.” I think that’s pretty clear.
We want a world that is more equal, where everybody, big or small, has the same voice. In some sense, we’re going back to the original principles of the United Nations, where 190 countries, everyone has one vote. Unfortunately, the UN is non-functional. It’s not even dysfunctional; it’s just non-functional.
So the world is now looking. There are so many other groupings coming up. The arrival of BRICS and then BRICS+—as you know, there are 44 countries in line to become members of BRICS+. It’s quite exciting. All these countries have one view: they really do not want to lean one way or the other.
They don’t want to lean towards China, they don’t want to lean towards Russia, and they don’t want to lean towards the United States or the West. So the world is actually starting to go back to what we saw post-1945, where societies, many of them post-colonial societies, are finding their voice and place once again in the world.
So definitely, it’s a multi-aligned world. It’s a multilateral world and a multipolar world. But to be fair, there’s a lot of diversity in the global South. Javier Milei in Argentina, certainly Nicaragua, the Philippines—they are not pursuing this non-aligned strategy. They are lining up firmly behind the Trump administration. China also has several countries just as firmly lined up with it.
So again, that diversity is important to acknowledge; there are a lot of countries that are looking at this non-aligned strategy, but certainly not all of them. That’s right. And to take us back to really the original definitions of what was the first world, what was the second world, which was the socialist bloc, and what was the third world, which is the alternative to the first and the second—not countries that were underdeveloped.
And so we’re going back to that. How do you think this holds up as the world becomes more contentious and potentially more violent, as we’re seeing in Gaza and Ukraine and any number of different conflict zones? We’ve been following the Thai-Cambodian border, where the Cambodians and the Thais have deployed heavy forces. There has now been killing along that border, and tensions have surged. So we’re starting to see flashpoints in places that we didn’t expect as, again, parts of the international order start to show some real strain.
As conflict becomes more, potentially more prominent, how do you think that these systems will hold, given the fact that the international architecture of the past 70, 80 years is deteriorating? Where are the institutions that these conflicts could be resolved?
Exactly. That’s a very good question. We rely too much on the old institutions. So actually, Eric, we’re now in a phase of the world where new institutions have to be built. And that’s what you’re saying. BRICS, for example, is one of them.
Well, BRICS is a kind of a filler, because we have the G20, which expanded beyond the G7. But the G7 is still keeping the rest of the G20 out of the G7 or G8. They’ve thrown out one member, so what was the G8 is now the G7. So I think this is an important transition.
These groupings that we’re seeing: this is a time when you’re going to see many, many groupings, because we all have to find our place. And none of these are actually institutional. They don’t have a bureaucracy. They don’t have headquarters. So we’re kind of finding our feet.
I don’t think it’s a dangerous time. We’re seeing the flashpoints because there are just more arms available because of the wars. The West has just been supplying arms because of their economies: everything was outsourced to China except defense. All these countries had left was the defense industry, particularly the U.S.
And so now there are arms everywhere. You lay down arms in one part of Africa and you get an insurgency in another part, because those arms have moved there. This really has to stop. Something has to be done about the arms.
India was very, very careful about the conflict and really restrained it to four days because we know, we’ve lived through it. We know how awful it can be. Unfortunately, the West seems to have a bloodlust about having conflict. It seems to be something that’s become part of how they think and how they view the world.
They view the world as a place for conflict, but it is not. It is a place where many parts of the world are starting to talk to each other. India has had a tremendous amount of diplomatic outreach, even just now with the war. Pakistan is doing it too.
But it’s really nice to see that diplomacy does work. We’ve sent out groups to several countries, four groups, small countries, big countries, everywhere. It’s important to reaffirm that diplomacy and talking does matter.
Whether or not we see new institutions, countries are going to make their own decisions and find their own solutions without the United Nations, because people have been working without the UN for a long time.
Carlos, one of the interesting issues that was raised by the report for me is that a lot of the respondents very strongly felt that there’s a big need to reform multilateral institutions. As Manjeet was mentioning, the UN is largely non-functional. We’re seeing a very weak World Trade Organization, for example, and so on.
But at the same time, there were very high levels of pessimism about the actual possibility of reforming these institutions. So I was wondering where that leaves us, particularly from a Global South perspective.
Manjeet was mentioning several alternative institutions kind of popping up. Is there some way in which the Global South can gain more voice in these institutions or are the institutions themselves basically not fit for purpose anymore?
Well, it’s not fit for purpose, I wouldn’t say forever, but certainly at this time. I think what the Global South can do, and continues to do, is find ways, and I know this might sound like a feeble answer, to keep working together on alternative strategies.
So when we talk about the non-alignment movement and Bandung, historically, that was mostly a defensive mechanism. I think we’re seeing something different now. We have to recognize that the results we’re getting are very uneven at this point. When we talk about BRICS, and the several new countries that have since joined, there’s much discussion about alternative strategies. I think what we’re seeing, for example, regarding what’s taking place in Israel and Palestine, and what we’re seeing when it comes to Ukraine and Russia, is maturity on the side of the Global South.
And I don’t mean people who expect that the Global South will align with whatever the mainstream Northern ideas are. I think what we’re seeing is maturity on the side of the Global South, as if to say, “Look, you have your interests. Guess what? I have my interests as well.”
So this is a major issue for you. This is not such a major issue for me. And you’re going to have to live with that. So I think what we can continue to do is work together, pressuring and exploring alternative strategies.
When we talk about exchange and currency exchange, using other currency for trade, again, this is extremely complex. When we see the data right now, it has not changed anything. It has only made the slightest dent on the dollar or the euro.
However, the fact alone that these conversations are ongoing is important. We ask ourselves, “how do we do this? How do we proceed?” I think this is very important.
This is something that, to the extent of our material capabilities, the Global South has to continue to exert its powers and its interests. The question over the currency is something that Donald Trump finds very threatening. He has, in fact, vowed to put “100 percent tariff on all the BRICS countries,” which complicates matters considering that China and India are both part of BRICS.
Nonetheless, this is an absolutely fascinating and timely report that is essential reading for anybody interested in the new geopolitics we are in today: the Emerging Middle Powers Report 2025, Momentum for Middle Powers. It was written by an all-star team of analysts, including Manjeet Kripalani, the executive director of Gateway House, the Indian Council on Global Relations, and Carlos Coelho, a researcher at the BRICS Policy Center and a professor at the Pontifical Catholic University in Rio.
Manjeet, Carlos, thank you so much for your time and your insights today. Congratulations on the report! We are really looking forward to staying in touch with you going forward to hear more about your work and your ideas.
Thank you, Eric. Thank you, Kobus. It has been a pleasure. Thank you both. We cordially invite you to visit India and Brazil to see for yourself the alternate world that is emerging. That would be wonderful. Thanks.
Challenge accepted. Kobus, I’m so glad that you brought up the question of multi-alignment, non-alignment, and all of this, you know, following our discussion with Jorge Heine. By the way, for those of you at home who are watching and listening, Jorge Heine is a former Chilean ambassador to South Africa, India, and China. He is really the grandfather now of the new active non-alignment strategies that many developing countries are adopting.
These discussions are very timely now, given what’s going on. I am a little more skeptical than all of these guests because I think it’s going to be much more difficult for the smaller developing countries. Again, think Panama, which really doesn’t have the leverage to push back against the United States when they are confronted with this us or them prospect.
Remember when we were talking about South Africa and a gentleman by the name of Joel Pollak, who was at one point rumored to be the next U.S. ambassador to South Africa? It didn’t turn out that way, but he was very much in favor of a tough stance. His rhetoric today on X remains the same: “If you do business with them, then you’re not going to do business with us.”
We hear that up and down the U.S. policymaking chain of command. So, I just don’t know if it’s going to be an option for a lot of countries to remain non-aligned, particularly smaller, weaker countries that don’t have the choices that Brazil, India, Indonesia, and Nigeria, for example, have—the bigger countries.
What do you think? Extending that thought, how do you think those countries would react, considering that, as we pointed out during our discussion, many of them, even though their trade is objectively small, frequently have China as their largest trading partner?
In the case of a country like Panama, which was a dollar economy, the United States had enormous leverage, and you saw what Panama did—they had to withdraw from the Belt and Road. Again, that was more symbolism than substance. Nonetheless, the optics of this are very important, as Secretary of State Marco Rubio and Donald Trump took it as a big victory.
We are in a moment of political optics, and that is important. And they’re going to go where they think they have the leverage and the pressure points to do these kinds of things.
I mean, again, there’s no consistency in what the Americans are doing, because remember, in Saudi Arabia, Donald Trump said, “We’re not going to lecture you on how you treat your own people.”
And yet in South Africa, they are clearly lecturing people on how they treat their own people. So there’s all these inconsistencies. What happens in one country may not happen in another country.
I just want to get your take on one thing that Manjeet pointed out about three of these new organizations that are emerging in this new era. She talked about the SCO, the Shanghai Cooperation Organization. She talked about BRICS. And she talked about the Asian Infrastructure Investment Bank.
All three of those have one thing in common, Kobus. They were all initiated by the Chinese. And so I’m wondering if this is going to be a new era now for this new parallel international governance architecture that the Chinese have been talking about for several years:
Do you think that this is going to be the moment where the Chinese will fill some of that void left by a dysfunctional United Nations, a WTO that doesn’t work anymore, an IMF and a World Bank that are going to be increasingly neutered by the United States? Do the Chinese, with this new governance architecture, fill that void a little bit?
Yeah, I think they probably do. You know, last week we saw the big flashy launch of a new mediation center in Hong Kong—the International Mediation Center, particularly apparently aimed at…
What is… explain what that is, by the way. I don’t know if everybody knows what a mediation center is. So mediation obviously is a way, you know, is a kind of a legal mechanism to try and find a way of resolving a dispute without having to go to full litigation.
And there’s a big international mediation center in London, I think. Mediation has been a very West-centric space for a long time. They’ve now set up this mediation center in Hong Kong.
The launch was attended by representatives from more than 30 countries and many multilateral institutions. It’s apparently focused specifically on disputes between states and investors. So it sounds like it may be a way to, in a narrow China space, deal with some of the problems around Belt and Road projects.
But more broadly, I think it now sets up Hong Kong as this alternative space to work out some of these large international issues between different governments and different international investors.
So it’s not necessarily Chinese investors. I think that is an example of slowly but surely, these bodies that are located in the West that used to be spaces for the projection of Western power have now largely become stuck, you know, kind of in the way that the World Trade Organization has been.
We still need institutions to do things, so it looks like China is going to be one of these powers setting up alternative ones, maybe not displacing the original ones, but living in parallel with them, I assume.
What I think is so unfortunate is that this report that they wrote, which was done in conjunction with your former employer, SAIIA, the South African Institute of International Affairs, is a very important report that will not be read by people in Beijing and Washington.
I’ve spent most of the day today immersed in U.S. Senate committee hearings, listening to the discourse in the United States, and it’s a parallel universe. It might as well be Mars compared to what these guys are talking about.
Similarly, we have our China researcher, Han Zhen, who has been immersed in some of the discourses in Beijing, and they’ve been showing me all of the different rhetoric coming out of the Chinese and what’s happening within the think tank policy circles there. Again, just as removed from this reality as anything.
There’s no nuance in it. It’s so focused on, “If they punch me, I punch back, and they punch me, and I punch back.” That’s the focus right now. As for all of these concerns about middle powers and non-alignment: go to any of the mainstream Chinese press and you don’t see any of it. Go to anything in Washington, and none of it.
So the fact is that these guys are writing this, but the people who need to be reading it are not. That, to me, is the tragedy of all this. Yeah, it’s a big problem.
I think, as you say, these countries are very much in a tunnel-vision moment, even more than usual. Both China and the U.S. tend to live in tunnel-vision land.
The issue that you raised earlier about these countries being forced to choose is significant. There are very strident voices coming out of both the U.S. and China regarding the issue of countries choosing. However, both sides may not like the result once these countries actually start choosing.
For a lot of these smaller countries, if your larger trading partner is China, and if your trade with the U.S., as in the case of South Africa, is significant and important to the economy but not the biggest part of the picture, that creates a dilemma.
Structurally, there are limitations in certain sectors. For example, South Africa is a big agricultural exporter, but the U.S. has a strong agricultural sector. Thus, there is a limit to how much agricultural product South Africa will be able to sell in the U.S.
By that logic, some countries are going to have to make the call, and they’ll end up trying to replace the U.S. part of their trade with other countries and call it a day, basically. There is a danger of the U.S. losing a lot of its international relationships. To a certain extent, it’s the same with China, although China has that economic weight behind it.
For the U.S. stakeholders, that’s not really a big issue. There isn’t someone up at night in Washington about whether Guatemala is cutting them off. Nonetheless, it still is a concern if you’re a superpower and there’s this kind of attrition of international relationships happening in the background. I wonder what you think of that.
I think Trump is under pressure for not closing these deals, which he said was going to happen. Remember, “90 deals in 90 days” was the mantra that came out. If countries divert from the U.S. plan in some way, that could be a problem for Trump.
One of the interesting things I was thinking about when you were talking was that in an earlier era, the decision between the U.S. and the Soviet Union was an ideological battle.
Today, as you’ve pointed out, the question is really cold, hard economics. The politicians in many of these countries who rose to power are very, very cunning politicians. To rise to power in Kenya, or through the ANC in South Africa, requires real skill.
They make these brutal decisions of who’s your friend and who’s your enemy. I think the current moment may align with the calculated nature of many of these political leaders.
As you pointed out, they’ll weigh their options. If forced to choose, I believe the criteria will be cold, hard economics. Sometimes the order of those considerations may not be the same, but it’s not an ideological thing here. Other than Bukele in El Salvador and Javier Milei in Argentina, where it is clearly ideological, most countries will view this through the lens of realpolitik: “Where is my bread buttered?”
The Colombia decision to join the Belt and Road despite intense U.S. pressure is interesting. The president of Colombia is making a calculation that this is the way to go. It will be intriguing to see country by country how these decisions unfold.
Keep an eye on Vietnam. They are currently caught between the U.S. and China. Decisions in Vietnam will likely be made by very cold, calculated balance sheets.
Absolutely. The outcomes will affect how the entire global system operates. It’s going to be interesting. We’re seeing small, almost invisible signs of large changes happening over the next few years. One thing to monitor is the rumors bubbling in Washington that the United States is interested in severely cutting back and curtailing its engagement with the United Nations—essentially an effective departure. So they would still be a member, but they would barely fund it.
They barely attend already. They’ve put Mike Waltz there, which is seen as a backwater posting. He is the former national security advisor who is now the U.S. ambassador, but you don’t hear anything from him.
There’s also talk of the United States disengaging from the institutions of the World Bank and potentially the IMF as well. Maybe not a full departure, but they’re not interested in these multilateral development forums.
So I think the IMF is probably the one that is most vulnerable right now. That would change the equation quite a bit if these huge institutions that we have counted on for 80 years, at least to provide some kind of ballast in the international order, are rendered basically inoperable by the United States.
One interesting note about that, if the United States reduces its shares of the World Bank, then according to the World Bank charter, the headquarters must move to the largest shareholder’s country. So that would put the World Bank in Japan.
We could see, again, dramatic changes in the international order in the next year or two if things go down the path that we think they are.
So let me just get a final thought from you on the paper and on our discussion today and what you want people to take away from all of this, because it’s very confusing and a little bit also, I think, disconcerting and scary for people to think about a future that doesn’t have a structure that we’ve had for the past 60, 70, 80 years.
As Manjeet was pointing out, this project has run over the last few years. From my understanding, it was pushed by a German think tank, and it comes from a set of conversations that were happening in Europe around the Ukraine invasion, particularly around the shock in Europe and the United States when large parts of the global South ended up not supporting their position on Ukraine.
One of the interesting findings in the report is that the only group of these foreign policy experts—so keep in mind, this is a survey of foreign policy professionals, so analysts, think tank people, academics, and then also some people in governments—the only group of those people who thought that Ukraine and the Middle East were the most notable kind of foreign policy challenges were the Europeans.
That, I think, still kind of confirms the larger shock that came out of the Ukraine invasion in Europe, this idea of, “Oh, all of these developing countries we thought were on the same page as us around this issue are really not on the same page as us.”
So, you know, reading this report, you get a very strong view of how different parts of the world have different issues, and that they have different priorities, and that the central priorities that you would find if you like open the New York Times and look at their main headlines, those are not necessarily replicated in the rest of the world.
This gives you a strong and concrete idea of how this kind of global perspective looks around the world. One of the strongest takeaways from it is that there is a very strong north-south split developing, and that increasingly geopolitics is going to be north-south, not east-west, I think.
You remember when India’s external affairs minister said that Europe likes to socialize its problems and privatize the global south’s problems, and that seems to be very much consistent with what you’re saying.
Well, let’s leave it there. Kobus and I will be back again next week. If you are interested in these discussions, and timely as they are, and just so important, particularly around China’s role in all of this, then you’re going to want to check out the China Global South Project and all the great work that the team in Asia, Africa, and the Middle East is doing.
You know, I was thinking just the other day, Kobus, that we have the largest team now of China-Africa analysts anywhere in the world. That’s both a sad, depressing kind of observation on the decline of China-Africa studies, but also at the same time, really just a compliment to our team and how we’ve grown over the years.
The quality of research and quality of analysis coming out of the CGSP team is just unreal. I say that with enormous pride about everything that Giraud, Obert, Njenga, Kobus, and Lucy are doing—this amazing team of analysts.
So go check us out, Chinaglobalsouth.com. Subscriptions are very affordable, starting at just $19 a month. If you are a student or teacher, email me at [email protected], and I’ll send you links for a half-off discount of a subscription.
I was just in the U.S., Kobus, and my Starbucks run for just a muffin and a venti coffee was almost $14. Okay, so I can tell you definitively that a subscription to CGSP is a better value than your daily run to Starbucks.
Yeah, and Starbucks’ muffins are not that good. We’re a tastier offer. No, and they put them in the microwave, and they’re so squishy.
Yes, you will get a lot more protein from a CGSP subscription than you will from a Starbucks run. And it’s arguably cheaper, too. So definitely check us out.
So Kobus and I will be back again next week with another episode of the China Global South Podcast. Until then, for Kobus van Staden in Cape Town, I’m Eric Olander. Thank you so much for watching and for listening.
The discussion continues online. Follow the China Global South Project on Blue Sky and X at ChinaGS Project, or on YouTube at China Global South and share your thoughts on today’s show.
Or head over to our website at ChinaGlobalSouth.com, where you can subscribe to receive full access to more than 5,000 articles and podcasts. Once again, that’s ChinaGlobalSouth.com.
China Global South
2025-06-17 08:00:01
Data Privacy (The Derby Mill Series ep 13)
you might not care if somebody knows that you bought a vacuum cleaner on Saturday, but you’ll definitely care if they have your account information. That was really creepy, Patricia.
“I literally bought a vacuum.”
“Did you really?”
“No, I’m just kidding.”
Welcome to the Derby Mill Series: intrepid pioneers of the next economy, featuring discussions with entrepreneurs at the forefront of deploying machine intelligence and brainstorming sessions about where the technology may go.
Welcome to the Derby Mill Series: intrepid pioneers of the next economy. I’m Ajay Agrawal, co-founder of Intrepid Growth Partners, and my three collaborators are Rich Sutton of the University of Alberta, who pioneered reinforcement learning; Sendhil Mullainathan of MIT, who uses machine learning to better understand human decision-making; and Neve Gavin, an applied AI scientist working on optimizing open and closed foundation models.
Rich, Sendhil, and Neve are all senior advisors at Intrepid Growth Partners. The domain we’re exploring in this episode is privacy, personally identifiable information, and confidential company information. We’re here with Patricia Thaine, the co-founder and CEO of Private AI, which detects and removes PII (personally identifiable information) in a client’s data and keeps it safe. The company also gives organizations a finer-grained understanding and control over what data they collect. Private AI is headquartered in Toronto.
And with that, let’s start the show.
So, Patricia is an alum of the University of Toronto. She did her PhD here in computer science and is the founder and CEO of Private AI.
So, Patricia, over to you.
“Yeah, sounds good. Great to see everybody. So, yeah, we do many things. Redaction and de-identification of personal information is a core part of it. We can focus on the use case of privacy preservation in large language model ecosystems: redacting prompts, redacting training data, and redacting contextual data for RAG embeddings, if you want to focus on that, because it seems to be the hottest topic at the moment. We could add confidential company information in there as well.”
Okay, so those touch on two of the main concerns that we’re hearing at the moment.
Excellent! And so, Patricia, can you just describe a little bit of how you do redaction or how you protect information from going either, you know, into or out of these language models in terms of effectively predicting what is private information and then redacting it?
“Yeah, well, essentially we use named entity recognition models. We also have the ability to do things like co-reference resolution: if you’re mentioning two organizations in different ways, like Pepsi and PepsiCo, or if it says Ajay and Ajay Agrawal, we can detect that those are the same entity. The way that we go about it is by stripping out the personal information before you send a prompt to a third-party large language model and then reintegrating the information into the response. If it’s training data or data for fine-tuning, we can do that redaction process, but we can also replace the personal information with synthetic personal information.
It actually has the added benefit of helping with bias because several quasi-identifiers like political affiliation, religion, and location can lead to bias in the responses of large language models as well.”
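To make the redact-then-reintegrate flow described above concrete, here is a minimal sketch of the general pattern. It uses an off-the-shelf spaCy NER model and an invented placeholder scheme purely for illustration; it is not Private AI’s implementation.

```python
# Minimal sketch of the redact-then-reintegrate pattern. spaCy's off-the-shelf NER
# model and the "[LABEL_i]" placeholder format are assumptions for illustration,
# not Private AI's models or placeholder scheme.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately

def redact(text: str):
    """Replace detected entities with numbered placeholders.
    Returns the redacted text and a mapping used to restore the originals."""
    doc = nlp(text)
    mapping = {}
    redacted = text
    # Walk entities right to left so character offsets stay valid while editing.
    for i, ent in enumerate(reversed(doc.ents)):
        placeholder = f"[{ent.label_}_{i}]"
        mapping[placeholder] = ent.text
        redacted = redacted[:ent.start_char] + placeholder + redacted[ent.end_char:]
    return redacted, mapping

def reintegrate(response: str, mapping: dict) -> str:
    """Put the original entities back into the model's response."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

prompt = "Ajay Agrawal asked PepsiCo about its Toronto expansion."
safe_prompt, entities = redact(prompt)
# safe_prompt might look like: "[PERSON_2] asked [ORG_1] about its [GPE_0] expansion."
# Send safe_prompt to the third-party LLM, then restore the entities:
# answer = reintegrate(llm_response, entities)
```

In a real deployment the placeholder map would stay on the customer’s side, so the third-party model only ever sees the redacted prompt.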
All right. And so, Sendhil will kick off. He’s thought a little bit about bias. Also, just for all the listeners to get the magnitude of the problem, many companies are struggling to use even the most basic tools because of fear of non-compliance around private information.
In other words, the fear of non-compliance is one of the greatest barriers to innovating with and utilizing these very powerful technologies. So, Sendhil, over to you to kick off.
“Great! I have three questions. Building on what Ajay said, this is such a fantastic use case, Patricia. It’s one of those sort of things that unless you’re in it, you wouldn’t realize how important—how big the market is, how valuable it is. I just love the use case, and it shows a lot of knowledge and wisdom in choosing it.”
So, my questions are sort of going up in levels of complexity. To start with the simplest one: you talked about both PII and confidential company information. Personally identifiable information, I can see how named entity recognition could solve that. I’m trying to see what scalability looks like for confidential company information.
For example, if I’m Coca-Cola and I’ve actually written down my secret formula somewhere, that’s clearly CCI, but that’s not really something named entity recognition covers. I guess if we redacted everything that mentions carbon and oxygen anywhere, then suddenly we’d be redacting too much. So how do you think about the labels for a supervised learning pipeline for any particular company for things like CCI?
Yeah, that’s a great question, and it does vary company by company. There are some things that are across-the-board confidential information: sometimes logos for customers, if you’re sending customer data through, or decks. Also, you know, financial trends, metrics, organization names. So there are some things that you can use named entity recognition for, fairly straightforwardly.
The way that we’re enabling it for other confidential information is by having the user input what counts as confidential information for them in that particular period of time, because it can change month over month, quarter over quarter. The tricky part is when there are spelling mistakes or optical character recognition errors or automatic speech recognition errors: how do you capture those? Those are some of the questions that we’re working through with our customers at the moment.
When you enter it, at what level is it described? I can see how it works for supervised learning: you just give me a bunch of examples with labeled instances, and I’m going to learn off of that. But if I’m entering it, is it a high-level semantic description? Is it in some more formal language structure that you have in mind? How are you thinking about that entry process by which I describe it?
Yeah, for now, it’s basically a block list, but down the line, ideally, as we learn more from a company-by-company perspective, our plan is to come up with something that is a bit more structured. It’s closer to search and block, for example.
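As a rough, hedged sketch of what a “search and block” list can look like, the snippet below keeps a small list of confidential terms and uses fuzzy matching so that typos and OCR or speech-recognition errors are still caught. The terms, the single-word matching, and the similarity threshold are all assumptions made for the example, not Private AI’s approach.

```python
from difflib import SequenceMatcher
import re

# Single-word terms only in this toy version; real lists would hold phrases too.
BLOCK_LIST = ["magenta", "nightingale"]   # e.g. an unreleased product colour, a codename

def is_blocked(token: str, threshold: float = 0.85) -> bool:
    # Fuzzy match so typos / OCR errors like "magneta" are still caught.
    return any(SequenceMatcher(None, token.lower(), term).ratio() >= threshold
               for term in BLOCK_LIST)

def redact_cci(text: str) -> str:
    out = []
    for chunk in re.findall(r"\w+|\W+", text):      # keep whitespace and punctuation
        out.append("[CCI]" if chunk.strip() and is_blocked(chunk) else chunk)
    return "".join(out)

print(redact_cci("The casing will be magneta for Q3."))
# -> The casing will be [CCI] for Q3.
```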
Yes, I’m just trying to get it exactly. It strikes me as an important point. I would have thought that a fantastic thing that would give you a lot of scalability is solving that little ML piece, which is how a company can efficiently describe its confidential information. I think a person could describe it; if I was the head of a company, I could probably describe it in English. I’m not sure that we have the tools to convert that English description into a practical thing; maybe we don’t. I mean, it sounds like you’re saying that a company trying to describe it for itself is already a bit of a hard problem.
No, it’s very much still a hard problem within the company itself. Interesting; one example might be the color of a product for a particular period in time before that product is launched.
Oh, that’s fantastic! Block all colors, right? That’s fantastic!
Yeah, so I guess a way of saying it is that this seems like an amazingly interesting problem and a source of comparative advantage, competitive advantage, and a way for you to build a moat if you can crack that problem.
So, I’ll move to my next layer of complexity. One of my favorite articles in this field is actually a New York Times article. I don’t know if you ever read it; it’s got a beautiful title: “A Face Is Exposed for AOL Searcher No. 4417749.” Have any of you read this? This is a milestone in privacy.
Patricia, this really came to define—I mean, it was almost a bombshell when it happened. AOL released a data set that the computing community working in this area just loved. It was de-identified search logs: they took out people’s names and addresses, and number 4417749 was just one searcher. Even though the community had been using this kind of data for a while, suddenly there was this problem that blew up.
Somebody took that searcher, took the individual searches—they had landscapers in Lilburn, Georgia; homes sold in Shadow Lake—and just a few searches were enough to narrow down that this searcher was Thelma Arnold, a 62-year-old widow. This was a bombshell.
Yeah, because literally they found the person behind the searcher, and now we knew everything Thelma did.
Yeah, and it turns out that people like searching for themselves a lot too. A great point! Here they didn’t have the name Thelma Arnold, but actually, you’re right; she did search for several people with the last name Arnold, so that did help them. Thelma wasn’t as egotistical as some of us are—like, “Oh, what’s going on with me?”
But I think the reason I’m emphasizing this is that to me, this is also what started a lot of concerns about privacy, but it also illustrates some of the complexity of what PII really is.
So, I guess part of what I would push you on is how do you think about the fact that even if we remove Thelma’s name from the AOL search logs, we can re-identify Thelma?
I would have thought that for some applications, it doesn’t matter; this is just going into some big statistical machine. That’s fine. For other applications, this might matter a lot. How do you think about that side of things? It seems so important to what you do.
Okay, so this is quite the deep dive you’re asking me to go into. All right, let me take you back to some work that Professor Sweeney did out of Harvard, where she was basically showing how you can re-identify the Governor of Massachusetts. You could identify him from the zip code, and I think it was his date of birth, if I’m not mistaken. I would have to double-check that, but basically, two pieces of identifying information could pinpoint this person with a large amount of certainty.
This was a pivotal moment in healthcare data de-identification because people forgot to consider that they should calculate the statistical risk of re-identification when it came to making data sets public. The community has been learning more and more over the years what it is that you should be considering re-identifiable and how to calculate the risk associated with it.
Professor Khaled El Emam out of Ottawa does incredible work in this area, and it really depends on a few things. Naturally, all direct identifiers—like full names and account numbers—have to be removed when it comes to de-identification.
When it comes to quasi-identifiers, you basically have to understand either what the risk of re-identification is within the data set that you’re looking at or what the risk of re-identification is as a statistical portion of the population. As you add more quasi-identifiers, the risk of re-identification increases exponentially.
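A toy example helps show why each added quasi-identifier raises the risk. Here, records are grouped by the chosen quasi-identifiers and the worst-case re-identification risk is taken as one over the size of the smallest group. The data and the 1/k risk measure are illustrative conventions only, not the specific methodology referenced in this conversation.

```python
# Toy illustration of why each added quasi-identifier raises re-identification risk:
# group records by the quasi-identifiers and treat 1 / (size of the smallest group)
# as the worst-case risk. The data and the 1/k convention are illustrative only.
import pandas as pd

records = pd.DataFrame({
    "zip":        ["02138", "02138", "02138", "02139", "02139", "02139"],
    "birth_year": [1945,     1945,    1960,    1960,    1972,    1972],
    "sex":        ["M",      "F",     "M",     "F",     "M",     "M"],
})

def worst_case_risk(df, quasi_identifiers):
    """Re-identification risk for the most exposed record: 1 / smallest group size."""
    return 1.0 / df.groupby(quasi_identifiers).size().min()

for qis in (["zip"], ["zip", "birth_year"], ["zip", "birth_year", "sex"]):
    print(qis, "->", round(worst_case_risk(records, qis), 2))
# ['zip']                       -> 0.33
# ['zip', 'birth_year']         -> 1.0   (some records are already unique)
# ['zip', 'birth_year', 'sex']  -> 1.0
```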
When it comes to things like unstructured data, it actually turns out that a lot of the time, you don’t need the personal information in order to use the data for many tasks. There’s a misconception here: you may have heard Cynthia Dwork’s assertion that “anonymized data isn’t”—either it’s not anonymized or it’s not useful—but a lot of misconceptions come from what we knew about structured data, which is mostly made up of directly identifying personal information.
When it comes to unstructured data, if you remove the personal information, you can still get a lot of the insights you need.
In addition to utility being something to keep in mind when you’re de-identifying data, you also need to consider what the risk is that somebody may try to re-identify the data as well. Even in cryptography, you’re constantly battling bad actors. Your key can get exposed; there could be quantum machines that break your crypto systems, and you’re constantly having to improve your systems and build up more and more barriers.
Think of de-identification or even data minimization as one extra barrier that you can add to make it very difficult for bad actors to gain anything from the data.
I feel like I get a lot of the pieces now, and so maybe just to close this part of the conversation off: How do you think about assembling all the pieces together for any particular use case? Or is it a matter of calculating the risks and presenting the risks of different menus back to the company? How do you think about that collection of stuff?
We’ve done work in healthcare, and I think the nice thing about healthcare is there’s comfort in numbers. There are well-established norms now about what it means to remove PHI. So, at least there’s safety in just doing what everyone else is doing. Don’t ask me. But early on, that wasn’t the case. When X-rays were starting to come out, people didn’t know how to remove PHI from X-rays.
So, putting aside the comfort in numbers since you’re creating the market, how do you think about pulling everything you just said together? How does that get put together?
It is incredibly use case specific. I do want to say that your goal isn’t always to de-identify. Sometimes the goal is to redact key information like credit card numbers because you don’t want your identity to be stolen. You might not care if somebody knows that you bought a vacuum cleaner on Saturday, but you’ll definitely care if they have your account information.
“That was really creepy, Patricia. I literally bought a vacuum on Saturday.”
Did you? No, no, I’m just kidding.
Okay, well, I use this example a lot, so it’s bound to happen one day. Consider the law of large numbers. You think about it in terms of what you need to do with the data, what kind of risk is associated with the data, and what kind of permissions you have to use the data.
Here, this is actually a side note, but a very interesting problem that organizations are having right now is that they have no idea what they agreed to use the original data for, so they don’t know whether they can use it for training large language models or not. But anyway,
what you can do with the data, who has access to the data, what additional security measures you have in place, and what the likelihood is that somebody tries to get access to the data maliciously.
So there are various measures to keep in mind. Thank you. To add to that, on identification through your existing digital footprint:
There was a brilliant demo that went out recently where, using open source tools that are available, a couple of kids went around their local train station. They just used their camera to scan the room, find a face, and then go up to those individuals and say,
“Oh hey, you work at Blackstone, right? Yeah, we met at that conference last week. How’s it going? We need to follow up on the pitch that you’re doing.”
Because they had so much contextual information, the person on the other side was a bit taken aback but then fully engaged just because it was all too accurate.
So that’s the whole near-term future that we have to look forward to. But getting back to the topic in hand,
similarly echoing Sendal’s comments around it just being a super pragmatic tool that’s addressing a real current need, which is quite refreshing in the field.
But a similar line of question but on the synthetic data side revolves around that parameterization of keywords into identifiers such that you actually generate a like-for-like that is still contextually relevant and preserves that statistical utility.
Technically, how do you go about solving that problem?
So we don’t handle that too much ourselves, and there are certain things to watch for. If you are using a synthetic data provider, you should understand how they’re calculating risk, including the risk of the original data being exposed in the synthetic data.
For example, if you’re using differential privacy within your pipeline, you’re going to have to calculate epsilon values and delta values, and you’re going to have to do that properly. There are maybe a handful, maybe a couple of handfuls of experts in the world who know how to do this properly when it comes to really complex systems.
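For listeners unfamiliar with epsilon, here is a minimal sketch of where it shows up in the simplest differentially private mechanism, the Laplace mechanism applied to a count query. The values are illustrative; real pipelines, and synthetic data generators in particular, require far more careful accounting of composition, delta, and sensitivity.

```python
# Minimal sketch of where epsilon appears: the Laplace mechanism on a count query.
# A count changes by at most 1 when one person is added or removed, so sensitivity
# is 1 and the noise scale is sensitivity / epsilon. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 412  # e.g. records matching some query in a dataset
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: reported count = {dp_count(true_count, eps):.1f}")
# Smaller epsilon -> more noise -> stronger privacy, less accurate answers.
```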
So I wouldn’t just take the claim that synthetic data is privacy-preserving at face value without understanding what the underlying risk assessment was. Sure. I think the question was also the contrary of that: whether the data they generate ends up so unrelated that it’s not even applicable for training the model, because they’ve abstracted away all of the contextual parameters.
It’s a fine balance between the two. It’s a fine balance, and also with regards to thinking about the use case. You basically have to generate new synthetic data depending on the use case and recalibrate those risk parameters accordingly.
Again, in this space, Professor Khaled El Emam does really interesting work, and it has been used in healthcare and in pharmaceutical research very effectively, with HIPAA compliance in particular in mind.
Rich, over to you. I have just a couple of things. One is I do appreciate this whole framing of the arms race between the de-identifiers and the re-identifiers. In the long run, it’s probably impossible to preserve privacy totally against a determined attacker who really just wants to find out one particular thing, because your large language models and your redactors are not 100% reliable.
If someone really wanted to do it, with all the resources they can figure things out. I don’t think that’s what this is about. I don’t think that’s what creating a more private world is about. It’s about changing the cost structures.
That, I believe, was essentially your answer, and I think I like that.
The second thing I just wanted to bring up is this company, this organization that I’ve heard of: Venice.ai. You probably know about them. Let me explain for everybody.
Venice.ai essentially is an interface to open source language models that preserves some privacy. Normally, you send your query to ChatGPT, and OpenAI knows about your query and they save it forever.
Hey, they presumably use it in useful ways rather than nefarious ways, but they definitely save it, and they know it. Whereas the idea with Venice is that the query is sent to open source AI systems run in such a way that it’s never saved and can’t be saved. So they claim you have a much greater level of privacy if you send unredacted prompts to Venice, because they will never be saved and stored.
Yeah, those are two very interesting points. On the first one, you’re right that if we’re trying to anonymize all data, a very motivated actor would likely be able to re-identify something within it. But oftentimes anonymization isn’t necessarily the goal, because in some cases, like the credit card number example, all you want is to remove it. If you remove it, it’s not there; there’s no way you’re going to figure it out.
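As a small illustration of that “just remove it” case, here is a hedged sketch that finds credit-card-like digit runs and removes the ones that pass a Luhn checksum. The regex, the length bounds, and the placeholder are assumptions for the example, not a description of Private AI’s detectors.

```python
# Sketch of the "just remove it" case for credit card numbers: find 13-19 digit
# candidates (allowing spaces or dashes) and drop those that pass the Luhn check.
# The regex, length bounds, and placeholder are illustrative assumptions.
import re

CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_ok(digits: str) -> bool:
    nums = [int(d) for d in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def strip_cards(text: str) -> str:
    def repl(match):
        digits = re.sub(r"\D", "", match.group())
        return "[CARD_REMOVED]" if luhn_ok(digits) else match.group()
    return CANDIDATE.sub(repl, text)

print(strip_cards("I paid with 4111 1111 1111 1111 for the vacuum on Saturday."))
# -> I paid with [CARD_REMOVED] for the vacuum on Saturday.
```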
I love to see these other examples of privacy-preserving large language models in action. If you are deleting the query right after, that’s an excellent way to preserve privacy. Still, I would recommend de-identifying data between yourself and a third party regardless because there can also always be accidents and always be malicious actors, regardless of whether they want them in their system or not.
That is a prime example of how data minimization can help with privacy.
Our conversation made me realize that a lot of that might be true for Private AI, and I might reframe, at least in my own head, what you’ve done. I don’t know if it’s helpful for you, but I might reframe the core contribution of what you’re doing, which is that there is a lot of interest in privacy and a lot of interest in confidentiality, but it’s almost like a continent that we have not really explored or mapped.
It seems to me that part of what your activity is doing is starting to build a map of that territory. If that makes sense, what are the kinds of CI, what are the kinds of PI, how should I go about thinking about it, and how should regulators go about thinking about it?
It strikes me that, given the huge effects regulations will have on the space, and given the huge extremes, even at the limit, this is less a computation problem. I feel like there are two kinds of computational problems: ones that are well posed and ones that are not.
This one strikes me as very ill-posed to begin with. It seems to me that the really valuable thing you could provide is to help us start posing meaningful CCI and PII problems.
I think as a society we know there’s a need but don’t quite know how to pose it. So I know that’s not a question; it’s more of a comment. It just strikes me as I might have thought of it that way, and therefore I might have thought of myself as in the role of a market maker.
That strikes me as a different startup profile than simply meeting people where they are. In some sense, what’s nice about your business is that you are meeting people where they are, so you’re getting revenue.
However, I would have had the ambition that I’m actually going to make this market. I would have fundraised a lot more. I would have taken some of that money and hired a lobbying person, or a person who knows Washington. Not literally to lobby for its own sake, but to take hold of the marketplace of ideas.
The company Pymetrics, and Frida Polli, who I worked with, did an amazing job on this, I thought, because they didn’t just sell a hiring product to companies. They realized that part of what we need to do is make a market, and we need to actually invest some of our resources on the policy side.
It strikes me that you have a similar broad-stroke ability to move and make the market.
To clarify for the listeners, when you talk about making a market as opposed to not, let me paraphrase and see if this is what you mean. Not making a market means that you’re a software vendor that takes the rules or regulations as given, and then you build a tool to help your customers comply with the rules as given.
As a market maker, I think the distinction you’re making is that you are going to do two things: build the tool, and also help shape the rules and regulations themselves.
Is that what you mean?
I 100% agree. I would add one thing: it’s not just rules and regulations but even customer demand. So Etsy made me want things that I didn’t know I wanted. So I would say it’s rules, regulations, and customer preferences.
Okay, excellent, thank you, Neve. As always, I tee that up and my comment aligns perfectly with it. I think that exact model is the more defensible one long term, because otherwise, as I was listening to you, there is an element of a kind of reductive slope.
Whereby we simplify the problem to its simplest element: “I just care about the credit card.” Then over time we get the AI to automate that, to free up humans for more valuable tasks. But then it can become a recursive loop: the more we hand over the encryption and the security to the AI, the easier it also becomes to reverse-engineer. So I think always adding that element of critical thinking, shall we say, on top of it is a great blend.
Wonderful, Neve, thank you. And Rich, on your side? Well, my last thought is about taking it to the limit. It involves a sort of adversarial component. You’re trying to de-identify things; you could also have an adversary that tries to do the re-identification. You can invest various amounts of resources into that de-identification process. And if someone really wants privacy, then you spend the money and give them greater assurance, because you’ve spent more resources trying to break their privacy and found it, hopefully, to be impossible. Or, if it does turn out to be possible, then you can fix your processes so that doesn’t happen.
Rich, when you take this to the limit, how would you imagine approaching this from an unsupervised perspective, in terms of setting a reward function? What would the reward function look like here?
Yeah, well, you know, it’s surprising I didn’t, since I usually take everything to its limit, as you know. But I wasn’t thinking that far. Now that you’ve asked me: how does this adversary work? Well, his goal, which would have to be his reward signal, would have to be something he self-specifies: his confidence that he has verified some personal information about the person or the company. That would have to motivate his searches.
Yeah, I don’t really—I haven’t really thought through how the adversary would work. I mean, the adversary could be a person; it could just be a regular person. It’s almost like traditional red teaming, right? But then you almost take a GAN approach to that loop.
It’s almost your quality control pre-release. Okay, let me have an automated pen test, whereby I give a nefarious actor a reward function to re-identify what I’ve tried to de-identify, and see how that goes. It’s super interesting.
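To make that loop concrete, a minimal sketch of such a pre-release red-team gate might look like the following. Everything in it is hypothetical and simplified: deidentify() stands in for whatever redaction pipeline is actually in use, and the adversary here is a toy lookup attacker rather than a trained model; its self-reported re-identification confidence plays the role of the reward signal discussed above.

```python
# Minimal sketch of an "automated red-team" gate: de-identify, attack, release only
# if the adversary's re-identification confidence stays below a threshold.
# All names and rules here are illustrative placeholders, not a real pipeline.
import re
from dataclasses import dataclass

@dataclass
class RedTeamResult:
    confidence: float      # adversary's self-reported confidence in [0, 1]
    findings: list[str]    # what it thinks it recovered

def deidentify(text: str) -> str:
    """Placeholder redaction step: mask digit runs that look like card/phone numbers."""
    return re.sub(r"\b\d{7,16}\b", "[REDACTED]", text)

def adversary(redacted: str, auxiliary_names: list[str]) -> RedTeamResult:
    """Toy adversary: its reward is the fraction of known identifiers it can still spot."""
    hits = [name for name in auxiliary_names if name.lower() in redacted.lower()]
    leftover_digits = re.findall(r"\b\d{7,16}\b", redacted)
    findings = hits + leftover_digits
    confidence = min(1.0, len(findings) / max(1, len(auxiliary_names)))
    return RedTeamResult(confidence=confidence, findings=findings)

def quality_gate(text: str, auxiliary_names: list[str], threshold: float = 0.1) -> bool:
    """Pre-release check: de-identify, attack, and pass only if confidence is low."""
    result = adversary(deidentify(text), auxiliary_names)
    return result.confidence < threshold

if __name__ == "__main__":
    sample = "Patient Jane Doe, card 4111111111111111, visited on 2025-03-01."
    print(quality_gate(sample, auxiliary_names=["Jane Doe"]))  # False: the name still leaks
```

In a real pipeline the adversary would be a person or a stronger model and the threshold would be set per data type, but the shape of the loop, de-identify, attack, gate, stays the same.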
Patricia, the last 30 seconds are for you.
Yeah, thank you so much for that; it’s a really great point, especially when you think about the various types of data and the utility you want to associate with them. Senil, you mentioned X-rays. For DICOM images, we can remove text and metadata that might be identifying. But if somebody, for example, is wearing a distinctive piece of jewelry, that can re-identify them, right?
As far as I know from the research I’ve read, there’s nothing out there that can properly modify the image while also preserving its utility. So that utility question is so important when you’re thinking through this problem on a dataset-by-dataset basis, a data-type-by-data-type basis, and even from a linguistic perspective. What you can do with Romance-language alphabets is very different from what you might be able to do with logographic languages.
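As an aside on the DICOM point: the metadata side of that problem is the tractable part, and a rough sketch of it might look like the code below, using pydicom with a small illustrative subset of tags rather than a complete de-identification profile. Anything visible inside the pixel data itself, like the jewelry example, is untouched by such a pass.

```python
# Sketch of tag-level DICOM de-identification. The tag list is illustrative only;
# real profiles (e.g. DICOM PS3.15 Annex E) cover far more, and nothing here
# addresses identifying content burned into or visible in the image pixels.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def strip_metadata(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""  # blank rather than delete, keeps the file structurally valid
    ds.remove_private_tags()                 # vendor-specific private tags often carry identifiers too
    # NOTE: ds.PixelData is left untouched -- burned-in annotations or a distinctive
    # piece of jewelry in the image survive this step, which is the utility problem above.
    ds.save_as(out_path)

# strip_metadata("scan.dcm", "scan_deid.dcm")
```

Which is exactly why the utility question matters: the safe transformations are the ones that leave the image itself alone.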
There are so many different things that our team has to think about when they’re handling different kinds of information. Thank you so much for this great conversation; I really enjoyed it.
Right, thank you Patricia, Senil, Neve, and Rich. Thank you, and that’s our show for today. Thanks to Rich, Senil, and Neve, and a special thanks to Patricia of Private AI.
We’ve also posted links to our hosts’ and guests’ social media feeds and web pages in the show notes.
Follow us on the Intrepid Substack at insights.intrepidgp.com. Rate the podcast and subscribe on your favorite platform for content including YouTube, Spotify, Apple Podcasts, and more.
Thanks everyone for listening!
The views, opinions, and information expressed in this podcast are those of the hosts and guests and do not necessarily reflect the official policy or position of Intrepid Growth Partners. This content is for informational purposes only and should not be considered as financial, investment, or legal advice.
2025-06-17 08:00:01
Hello, everyone,
and welcome to this episode of 忽左忽右 (Hu Zuo Hu You).
I’m Cheng Yanliang. Our guest today is a science writer who loves visiting zoos: Hua Shi.
Many of you probably follow Hua Shi’s Weibo and his WeChat public account, as I do. Before we started recording I told him privately that back when his previous book, 《逛动物园才是正经事》 (Visiting Zoos Is Serious Business), came out, I already wanted to invite him on the show through a friend who edits at his publisher, and today it has finally happened.
Of course, Hua Shi has since published a new book, 《我不能在鸟兽身旁只是悲伤》 (I Cannot Merely Grieve Beside the Birds and Beasts). The title adapts a song lyric, and I find it a bit of a tongue-twister to read aloud. Hua Shi, please say hello to our listeners and introduce your interests and your work.
Hello everyone, I’m Hua Shi. I write popular science, mainly about nature and ecological conservation. My day-to-day life is basically running all over the place, especially all over China, to places where the ecology and the natural environment are particularly good, to look at the animals there and, more importantly, to see how local people and nature interact.
It’s from running around those places that I wrote this new book, which is about how we Chinese protect nature and what is happening on the front lines of conservation in China.
You’ve had quite a varied career. I saw that you worked at Guokr for many years, right? Then you moved into independent writing, and you also do wildlife photography. Tell us a bit about how you became interested in all this and turned it into a profession.
For that I have to start with university. The topic I worked on in college might make people laugh: I studied bird dialects. What are bird dialects? You can divide birds by whether they migrate: migratory birds move with the seasons, while the ones that stay put are called resident birds.
Resident birds might have one population on this hill and another on that hill, with a mountain in between, so they never communicate, and their songs gradually diverge. It’s really quite similar to human dialects; resident birds have dialects too. I studied the dialects of the light-vented bulbul.
To study bird dialects we first go around recording calls. For example, I would record how the bulbuls sing in Wuhan and how they sing in Guangdong, and then analyze whether the songs are the same. At least to human ears, they sound completely different.
Then we play those recordings back to the bulbuls through a loudspeaker. According to our research, the Wuhan and Guangdong bulbuls do show a clear dialect difference; their songs really are distinct.
But they can still recognize each other. People who work on bird song often find that, for groups we know well, we can guess what a call means from how the bird is behaving. In spring and summer, the breeding season, you often see two birds on neighboring trees singing at each other.
Many people find that terribly romantic: two birds courting, surely? Very likely not; more likely it’s two males at the edges of their territories posturing at the neighbor.
You think the singing is romantic, but those two birds may be swearing at each other quite filthily. From the scene and the context we can guess what they’re saying.
Later I was on a work trip to Haikou. One morning I had a bowl of Hainan rice noodles at a very traditional noodle stall. Next to it stood a big banyan full of bulbuls, the very group I used to study, and under the tree sat a group of old Haikou men. I listened for a while and realized I could understand what the bulbuls were saying but not a word of what the old men were saying.
So humans really can understand bird language, in the sense that we can guess what they’re expressing. Bird behavior isn’t as rich as mammal behavior, so some of the patterns you can work out.
After graduating I joined Guokr, right when it was taking off; I was among its earliest editors. I was responsible for two things there. One was new media operations: Guokr’s Weibo and WeChat accounts were things I built up.
The other was reporting on ecological conservation. My first field reporting trip was following Xi Zhinong up Mount Wuliang in Yunnan. (I’ve met Xi Zhinong once.) He was involved in the project to protect the Yunnan snub-nosed monkey, a project that he and his colleagues had in fact initiated.
That was a milestone event for grassroots conservation in China.
I went up Mount Wuliang with him in 2013, and I remember the scene vividly. On my first morning there I squatted at the door of the ranger station with a big bowl of breakfast, the food there was quite good, looking out at mountains wrapped in cloud and mist.
As I ate, I suddenly heard, from far away, a very melodious sound, bird-like but grander. Someone next to me said, “That’s the gibbons singing.”
Gibbons, as in Li Bai’s line about the apes calling without pause from both banks as the light boat slips past ten thousand mountains. The gibbons I heard were probably not the same species Li Bai heard, but the feeling must have been the same: in those cloud-wreathed mountains, several gibbon families suddenly began their liquid, rolling calls. I was so struck I forgot to eat.
We then turned our attention to the relationship between local people and those gibbons. We interviewed the scientists there, the mountain villagers, the rangers, people in the surrounding hamlets, about how they coexist with the gibbons and how they protect them.
After that I kept covering this beat, reporting on conservation, right up to now. In 2018 I left Guokr and briefly joined a big tech company, where I very painfully discovered that I can’t work at a big company. I can’t be a cog in the machine.
I still had a bit of a following on Weibo, so I figured I’d just go independent, and I’ve muddled along ever since. When I started out, I thought I needed something people would remember me for.
So more than half the reason was frankly utilitarian. I spent four months visiting the zoos of every major province and city in China, and then wrote 《逛动物园才是正经事》. But honestly I had long wanted to visit every zoo in the country, because I really wanted to know what state Chinese zoos were in.
I couldn’t do that while holding a job. Once I was truly freelance I could make it happen, so I wrote that book.
You mentioned that you first studied bulbuls, birds that actively need to communicate by calling. It reminds me of a recent news item I saw in short videos: residents of several housing estates near Tianfu Square in Chengdu apparently haven’t slept for nights because of the calls of what was described as a nightjar, that droning “hong-hong-hong” sound.
It sounds exactly like a laser gun.
Right, the grey nightjar, that one is funny. I pictured it, and yes, it would be genuinely miserable if it kept calling until four or five in the morning while you’re trying to sleep. But it also illustrates the relationship between people and nature. I often tell people: nature can feel beautiful, but it is fundamentally a neutral thing. Sometimes it simply inconveniences you, and that, too, is nature.
But isn’t this also a new phenomenon? In the past, if you look at photos from the late Qing and the Republican era, the landscape was desolate; around places like the Ming Xiaoling Mausoleum the hills had been logged almost bare.
So when residents of a city like Chengdu are kept awake all night by birds calling for mates, is that also a sign of ecological recovery?
It is. There’s a pattern: as a city or a country grows wealthier, its urban residents crave nature more and more, and the city itself starts to recover its nature. We call this process urban rewilding. Many big Chinese cities are going through it: the urban environment and the greenery keep improving, and small animals start moving back into the city.
People at the start of that process may go through exactly what I described: the nature you imagine is lovely, but when nature actually arrives on your doorstep you discover it isn’t all lovely. It’s a neutral thing.
For example, here in Shanghai, I’ve noticed that many streets, especially the small alleys, actually have a lot of Siberian weasels. At first that feels rather magical, but once the numbers grow, people start to treat it as an urban problem.
Weasels are actually fine. They mainly eat rodents and insects, they’re small, and they very obviously avoid people, so their impact on us is tiny.
Weasels have been in big Chinese cities for decades without causing problems. The animal that really did cause a stir in Shanghai is a different one: the raccoon dog, the 貉 of the idiom 一丘之貉.
In Shanghai’s Songjiang District, quite a few residential compounds have them. Two friends of mine are fashion designers with a fairly large studio in Songjiang, and two or three raccoon dogs moved into their courtyard.
They seem to get along with them reasonably well, though when they looked it up they found the little things can be quite destructive.
The raccoon dog is a canid, about the size of a corgi, and it does look rather like a dog. Normally it also avoids people, at least most passers-by, but it’s very clever. Once it discovers that an environment offers something especially good for it, or that people are feeding it, it starts to gather in groups.
That happened in 2020. In one compound in Songjiang someone was feeding the raccoon dogs. When we went to look, they had piled up dog food like a small mountain, and it had drawn in all the newly born pups from the surrounding area.
We shot a video at night with dozens of raccoon dogs in it, a whole field of glowing green eyes, all crunching away at the dog food. It was honestly a bit frightening.
That kind of aggregation was so abnormal, and there were so many complaints, that the forestry bureau, Fudan University, and several other institutions acted together. First they banned feeding, and second they trapped some of the surplus raccoon dogs and released them back into the wild.
Mostly, though, once the feeding stopped, the raccoon dogs dispersed on their own. When I revisited that compound later I could hardly find any; they had gone back to avoiding people.
In recent years there has also been a lot of news about “wildlife running rampant.” Another example is the wild boars of Nanjing.
The Nanjing wild boars?
Yes. The first clip I saw was a boar swimming across the river. My old impression of wild boars was that they’re dangerous; as a kid I read newspaper stories about boars attacking people, doing huge damage, clashing with farmers.
But in the short videos of Nanjing’s boars, maybe it’s survivorship bias, they seem to be getting along with the residents fairly peacefully.
China’s wild boar problem isn’t new. Twenty-odd years ago, when we did field practicums and surveys, we constantly heard about boar damage in rural areas everywhere: boars coming down from the hills and wrecking farmland.
So why has it blown up in the last few years? I think a big reason is simply that it happened in Nanjing, in an era when everyone has a phone, can film it, and can upload it, so it suddenly went viral.
And we really do see boars coming down from Nanjing’s hills, which has a lot to do with the city’s natural setting: there are hills inside Nanjing, boars live on them, and they can walk into the city.
In some videos the relationship between boars and people looks harmonious. But that state of affairs isn’t necessarily right, because a lot of that “harmony” has been fed into existence by people.
Wild boars are actually a bit highly strung. Once the separation between them and people breaks down and they routinely come into human environments, they can still hurt people. Their aggression is considerable.
There is an interesting detail, though. Several teams in Nanjing have put tracking collars on the city’s boars. Even the boars that had come into town, been caught, and then been released, once collared, very clearly prefer not to come into the city in their daily lives.
Most likely they simply live on hills surrounded by the city, and one day they take a wrong turn, panic, and end up downtown; they don’t really want to be there. Boar hooves are very inefficient on concrete and slip easily. They move much better on forest soil, where their food is and where they’re adapted to live.
Later we should talk properly about the places you visited for the new book, deserts, oceans, all sorts of environments, and the living conditions of the protected or endangered animals you saw there, and how humans are now intervening. We’ll have you tell some of those stories shortly.
First, let’s talk about that half-year of zoo visits, which you said you deliberately turned into a marketing event. As a kid I wasn’t someone who especially liked zoos; they all felt much of a muchness, though admittedly I hadn’t been to many. So I’d love to hear from someone who has visited so many: after all those visits, and in your research, what do you find interesting about zoos, especially Chinese zoos?
On that trip, and in a lot of my writing since, I’ve stressed that the last ten or fifteen years are an important transition period for Chinese zoos. Touring zoos in 2018 I could still very clearly feel the old-school side of Chinese zoos, the legacy of an earlier era. Decades ago, many of the animals in Chinese zoos, especially native species, were actually captured from the wild.
The records suggest Beijing Zoo at its peak may have held more than 500 species, Shanghai Zoo about the same, enormous numbers, with a large share being native wild animals. In 2018 I could still see, at Beijing Zoo, a bird called the Himalayan snowcock, a high-altitude relative of the pheasants. I’ve been to its wild habitat; they love to fly from one peak to another across very high gorges, calling madly as they go. It’s a magical sight, and the first time I ever saw the species was in Beijing Zoo.
You could say those were animals accumulated in the previous era of Chinese zoos, left over in collections around the country. But after the Wildlife Protection Law of the People’s Republic of China was enacted in 1988, the requirements around capturing wild animals grew stricter and stricter, and the route by which zoos obtained animals from the wild was gradually cut off. In that process the number of animals in zoos declined. Many zoos I saw in 2018 still had this or that species, but visiting again in recent years, some native species have clearly dwindled, and before 2018 the decline was faster still. For example, in 2011 the last Asian golden cat from the South China region died at Hangzhou Zoo; in 2014 I could still see large-spotted civets and small Indian civets at Guangzhou Zoo, and now both species are extremely rare in Chinese zoos.
So one face of this transition is that the way we obtain animals has changed; we are gradually ending the dependence on wild populations. But because we did a poor job of breeding and maintaining captive stocks of some species along the way, there is a very clear trend of certain animals becoming fewer.
The other, and I think more important, trend of this transition is that zoos are modernizing and starting to align with advanced international practice. In recent years Nanjing’s Hongshan Forest Zoo has earned an excellent reputation: they have renovated their exhibits extensively, and some of the new ones reach a genuinely world-class standard, comparable with good zoos in Europe and America. They have some very progressive thinking, for example in their Chinese cat house, an area exhibiting lynx, leopards, and other Chinese cats.
That area has an excellent facility called a transfer corridor, or, as they call it there, the “sky-cat corridor.” What is it for? It lets a leopard be moved from enclosure A to enclosure B to enclosure C, so the animals can rotate through the outdoor yards. Why do that? Because zoos talk a lot about enrichment, and the public mostly misunderstands the word. Many people think enrichment means making an enclosure look more natural, with more climbing structures and more plants. That isn’t enrichment; that’s interior decorating.
The point of enrichment is to introduce variation into the animals’ captive environment and daily life, to give them challenges to work on. In the wild, the environment an animal faces differs from day to day; in the artificial environment of a zoo that variation has to be introduced by people, and that is enrichment.
However well you decorate an enclosure, however good the planting, the climbing frames, the water features, an animal living in it will be bored within a month or two. So we have to find ways to blunt that boredom and give them novelty. One way is the transfer corridor: rotate them through enclosures A, B, and C, in A today, B tomorrow, C the day after. That reduces their habituation to any one space, and that kind of husbandry is how modern zoos have updated their practice. It’s something we can now gradually see in Chinese zoos.
Of course, the process is far from easy. Wuhan Zoo, for example, also built a transfer corridor for its leopard area, but the keepers refuse to use it; they won’t move the cats from A to B to C because it’s too much hassle. So at Wuhan Zoo, in the middle of the carnivore area, among the bears and the leopards, a Himalayan griffon vulture suddenly appears, which looks extremely out of place. The only reason a vulture is kept in the leopard area is that nobody will use the transfer corridor, so the leopards can’t get into one of the exhibits. These unsatisfactory arrangements, born of practical constraints, are the growing pains of reform, if you like.
What I was thinking is: a place like Hongshan has a big reputation and runs successfully, so presumably its funding is relatively ample, and the zoos of big cities, Shanghai, Beijing, Guangzhou, Wuhan, you can imagine are not especially short of money, or at least have some resources for this. But I’ve also noticed, and it often becomes a running joke online, that China has a huge number of second-, third-, fourth-, fifth-tier cities, and many of them have zoos too.
Take my hometown, Anqing, which has Anqing Zoo. One Spring Festival I was scrolling Xiaohongshu and discovered Anqing Zoo had become a meme, because someone photographed a sign, a very green sign, announcing the zoo’s newly arrived animals. And what were they? A giant rat weighing several jin, an alligator snapping turtle, a “Lady White Snake,” a “man-eating crocodile,” and a green sea turtle. On one hand I found it hilarious; on the other I wondered whether small cities like that, given their public finances and visitor numbers, can really sustain the kind of modern zoo we have in mind.
At least in China, how good a zoo is can hardly be separated from the development level of its city. Money matters, of course; without money nothing gets done well. But people matter even more. Zoo-keeping is a highly specialized profession. A large zoo may keep several hundred species, each with its own diet; they may all be monkeys, but some monkeys can eat fruit while others must eat leaves. Husbandry is genuinely expert work, and right now the Chinese zoo industry is severely short of high-calibre keepers; that workforce is being built very slowly.
That said, there are new signs. A while ago there was a big story in Shanghai about a Cornell graduate returning from overseas to work as a zookeeper, and it caused an uproar: why would someone with that degree go shovel dung at a zoo? I think that reaction reflects a mistaken public view of what keepers do.
Right, and it reveals a common perception: people assume that feeding a hippo at a zoo is the same sort of job as feeding pigs at a pig farm, low-skill work either way. In fact both jobs demand a great deal of skill.
There’s another question I keep turning over without an answer: does China really need this many zoos? That’s exactly what I wanted to ask. I’ve imagined a scenario where China closes some of the zoos in second- and third-tier cities and concentrates on building really good zoos in a few core cities, say two or three per province, and then relies on our increasingly dense road and rail network to bring people to those good zoos. Could that work?
But then I think back to how I grew up. Before Wuhan Zoo had fully consolidated, there was still a small zoo in Hankou with very cheap tickets, and I could casually drop in and watch animals. If a small city really has no zoo at all, what are kids from less well-off families supposed to do if they love nature and want to see animals?
True. In terms of convenience, a local zoo lets city residents see animals easily. On the other hand, some people, not specialists like you, argue from their own angle that zoos are like the museums of the imperial era, institutions that gathered curiosities from all over the world to display to the public, and that this form is inherently inhumane, or bad for the animals themselves.
We also see other models: safari parks, and at the extreme the African national parks, Maasai Mara, Amboseli, where animals simply live out on the open savanna and you drive in by jeep to watch them. From the standpoint of conservation professionals, do these three models, city zoo, safari park, national park, really have a natural ranking?
From my point of view, let’s set national parks aside; a national park is a completely different kind of place from a zoo, different in management and in its whole attitude toward animals. If we just compare the two common Chinese forms, the city zoo and the safari park, then apart from some operational differences I don’t think there’s a fundamental difference between them.
All modern zoos rest on the same logic, one our whole field believes in: only by knowing do we care, only by caring do we act, and only by acting does life have hope. The zoo is the crucial part that delivers the knowing. If you want to learn about animals, where can you go? You can watch documentaries, you can read things online; resources are abundant now.
But none of that replaces seeing an animal in the flesh, watching it feed in front of you, squabble in front of you, do all manner of things in front of you. The impression that leaves is far deeper. The problem is, if a zoo is bad, if the husbandry is bad and the animal you see is half-dead, can you still be educated by it?
I’m very doubtful. So from my perspective, whether it’s a safari park or a city zoo, it must first guarantee basic animal welfare and give animals enough choices to express their nature. That’s what I keep preaching: in a zoo, go look for natural behavior. Only on that foundation can you talk about the higher-level things, nature education, research, conservation. Without basic welfare you can do none of it.
Some of our good zoos, like Hongshan, can now talk about serious research and are even beginning to take part in field conservation. But for a large number of weaker zoos, even the most basic question, whether the animals are kept decently, is still up for debate. A recent viral example is at Qianling Mountain Zoo in Guizhou: a lion that seems to have developed psychological problems. Lions groom their own coats; a keeper can’t wash them. This one let itself get so filthy that its mane, matted with feces and dirty water, has turned into dreadlocks.
Many visitors and netizens raised concerns, and I saw the zoo recently announce that it would try to provide cleaner, more natural water so the lion can groom itself. Since you mention Qianling Mountain Zoo, I wanted to ask about it; I’m ambivalent too. It is currently free, one of the very few zoos in China where you can see lions, tigers, even giant pandas without paying a cent.
But is that really good? Once you’re free, all your operating money comes from government funding, and Guizhou is not a wealthy province, nor is Guiyang; the annual allocation is small. Under those conditions you can hardly blame the zoo for not keeping its animals well; it has neither money nor people. So I think not charging actually creates a vicious circle: the less you charge, the less funding you have, and the less positive incentive there is to keep the animals well.
So zoos are about much more than the relationship between people and nature; they’re also about how human institutions organize themselves, a question of organizational form. Another change in recent years is that some cities have begun moving their city zoos out to the suburbs and turning them into safari parks. In many cities the driving force is the same: the government offloading a fiscal burden. A city zoo is, by its nature, a public-welfare institution within the park system: the government funds it and keeps tickets cheap, otherwise it wouldn’t be a welfare institution.
Once a zoo moves from the city to the suburbs and becomes a safari park, that usually comes with converting the public institution into an enterprise, at minimum a state-owned one and sometimes a private one. Then the government can stop funding it: go earn your own money. Ticket prices may jump from 20 yuan to over 100, and you also pay substantial transport costs to get to the suburbs, so a zoo visit becomes far more expensive. Does it boost the economy? Perhaps, but either way the fiscal burden has been offloaded. That’s what’s happening in some cities. Is it good or bad, in your view? I think it’s bad.
Why? Defenders of these conversions always say: once the zoo moves out of town, the grounds are bigger, the air is better, the animals live better. Honestly, in many of the safari parks I’ve visited I don’t see it. Simply making a zoo bigger so animals have more space does improve welfare, but it is a very poor-value way of doing so.
In many safari parks I’ve seen huge grounds, yet when you reach the areas where the animals are actually kept, the lions, tigers, and leopards still live in small cages. Why? Because although the land was allocated, construction is the expensive part; building proper enclosures costs far more.
So they don’t have the budget to redo the buildings. Meanwhile several city zoos that chose not to relocate, and instead renovated in place, put most of their money not into land but into improving the buildings, and the results are actually better. Nanjing Hongshan, for example, or Wuhan Zoo, which has its problems, as I’ve said, but whose architectural design is genuinely good.
So perhaps it really has to be judged case by case. In 2023 I organized a trip for our listeners to Kenya; we covered six national parks from south to north. It was eye-opening, because previously I had only seen domestic safari parks, where you may also sit in a vehicle, but it’s nothing like that vast savanna.
On the savanna you understand the animals’ lives better; they aren’t locked in some simple friend-or-foe standoff. A predator finishes its kill and the herbivore herds carry on with their lives nearby; there’s a set of unwritten rules. At one point we passed a flock of guineafowl, a colorful local Kenyan bird, and noticed they had formed a circle, rumps outward, heads inward. We were told it was a defensive posture: some dangerous animal must be nearby.
We looked around and found a python in a ditch. Of course Africa has Africa’s problems: infant mortality is falling and the population is growing fast. Near Nakuru, locals told me that a town which a dozen years ago was tiny is now Kenya’s fifth-largest city, and the reserve has been cut off from the land around it; animals that used to migrate are increasingly unable to.
Still, I was thinking that Africa may be one of the few places left on Earth where ordinary people can watch animals living natural lives. And Asia is so much larger than Africa, so I wondered: are there places in Asia where ordinary people can see wild animals like that?
There is a real issue here. The part of Africa where animals are most concentrated and easiest to watch is one particular environment, the savanna. Savanna combines features of forest and grassland, and especially along the forest edge between the two, plant productivity is very high, the habitat is diverse, and all kinds of animals live there.
China does have a place billed as “China’s Kenya”: the Altun Mountains. In summer, the Altun Mountains uninhabited zone can see herds gathering on a Maasai Mara scale. Huge numbers of Tibetan antelope breed there in summer, wild yaks come down from the high slopes to graze the lower meadows, and many other animals appear; it’s spectacular.
So why has no comparable tourism developed there? Leaving policy differences aside, the biggest reason is that Altun’s animals only mass in summer. And second, in summer the area where they gather turns into a bog: the frozen ground thaws, the meltwater seeps out, and your vehicle may get stuck ten kilometers in. Building drivable roads there for the summer would be enormously expensive.
So in East Asia we do have these constraints; there is no single place where herds gather and watching is as easy as in Africa. But there are some places in China where wildlife watching is quite good, for example the Tangjiahe reserve in northern Sichuan, one of the easiest places in China to see ungulates: takin are easy to see, tufted deer are easy to see, and so on.
Yunnan also has protected areas well suited to nature watching, for example the Hornbill Valley on the China-Myanmar border, which I wrote about in the book. There you can easily see three species of hornbill, watch how they breed and live, and see hundreds of other bird species. Birds that, if they turned up in some other city, would draw a crowd of photographers are everyday sights there. Every time I visit, for the first couple of days I still want to photograph them, and then I hear the call, look up, it’s that bird again, and I don’t bother.
That’s the effect. But those places feel completely different from Africa because the habitat is different: it isn’t savanna. East Asia has vast forest environments, and watching animals in forest feels different. True, the savanna gives you the shock of scale and of sheer size, plus the diversity you mentioned. On our trip we stayed a night at The Ark, a famous Kenyan lodge with a huge waterhole in front of it. Spend an afternoon there and you see different groups come to drink at different times.
Early on it might be smaller animals, then the giraffes, and finally an elephant family shows up; they may even scuffle, a young elephant bullying the warthogs, and after a while the predators appear at the edges. You realize a whole wild, self-organized order is unfolding in front of you over a single afternoon. But I was just imagining: back in a forest, is all of that better hidden? You might see nothing, yet everything is still happening.
It happens in the forest too, just tucked away among the branches. It’s interesting: we Chinese grew up on Animal World, Discovery, and the BBC, which filmed so much of Africa’s elephants and migrations that we all dream of one day seeing the wildebeest cross the river.
One interesting development: after the pandemic, tourism in Kenya and Tanzania exploded, with huge numbers of visitors pouring in, and hotel prices in both countries have soared. It was much cheaper to go before the pandemic. Afterwards everyone had been cooped up, not just us, Europeans and Americans too, and they all rushed to Africa to see these things.
But personally, if you only see the Kenyan savanna, it isn’t even the most interesting option. In Africa I especially recommend South Africa, because its environmental diversity is higher. It has savanna too: Kruger National Park has maybe sixty or seventy percent of the Maasai Mara’s strength, with big herds, though not as many, and no wildebeest crossing, but still plenty of animals.
Beyond Kruger, South Africa has coastal environments. Most distinctive is the world’s smallest floral kingdom, the Cape Floristic Region. In the arid country around the Cape of Good Hope a unique flora evolved. What’s special about that environment is that it is desert, but being coastal it gets a few months of heavy rain each year and stays dry the rest of the time, so the plants there learned a trick: store water frantically during the rains to last through the months that follow.
So what do the plants look like? Expanses of succulents across the desert, a scene you won’t see anywhere else. That’s why I love South Africa; its diversity suits a naturalist like me better. And I heard many stories there, for instance about a white South African, a rather maverick conservationist, who runs a private rhino reserve protecting perhaps thousands of rhinos while at the same time campaigning to legalize trade in rhino horn, using horn sales to fund the reserve. His logic is internally consistent, but it is hugely controversial.
This touches a long-running debate in conservation over very different approaches, and it exists in China too. Should we treat animals as sacrosanct, as free beings living in parallel with us, or manage them as a resource? If they are free beings alongside us, then we should respect how they live, let them live freely, and keep our lives and theirs on parallel tracks.
If instead you manage them as a resource, then, being a resource, you allow trade, allow culling, allow use. Whether both approaches can work, or which works better in a given region, is open for discussion. South Africa, for instance, has many private reserves; some run wildlife-watching businesses, but many others are in the meat business. Go to a South African supermarket and you’ll see game meat products, including from species that were once endangered and, with numbers recovered, can now be sold and eaten.
I’ve bought some; it was awful, mainly because of how they cook it. But it illustrates a state of affairs: under well-controlled, well-managed conditions, those game farms, however bloody and inhumane they may seem, raise animals to be killed, and some hunting farms even raise lions for people to shoot.
And yet under that commercial model the numbers have grown, and endangered species needing protection have recovered. Which raises a genuine dilemma of values: if killing one animal ensures that more animals can live, may we make that choice? Do we even have the right to? Replace the animal with a human and the controversy becomes far greater.
Honestly, I’m more cold-blooded about this kind of choice. If it genuinely works, then the method deserves to be discussed and tried. Take the Hornbill Valley I keep praising: its bird-watching sites fall into two types. One type sits on land the government controls within the reserve; at those observation points you can see several hornbill species living and feeding entirely on their own, with no relationship to people.
I’m sure most people can accept that; humans have merely put up some hides near the hornbill nests for photography. The other type is called a bird pond, on private land or village collective forest land. People dig a channel so that in the dry season birds have water to drink and bathe in, and they put out food, insects and fruit, to draw birds in, while photographers shoot from nearby. In wildlife photography this counts as a very lowbrow technique: baiting.
Baiting in the Hornbill Valley has another form: photographing wild owls. How do we look for owls at night? You go to a spot where owls have been found before and play owl calls through a speaker. The owl that responds most readily there is the brown hawk-owl, an owl that looks rather like a hawk, with no ear tufts, so its head looks less round. When it hears the playback it becomes extremely hostile and comes in to argue with the loudspeaker.
The last hawk-owl I saw was maybe ten meters from our speaker, scolding furiously, and we had a great time photographing it. Does this disturb the owl? Absolutely: you make it waste energy pointlessly and may even expose it to predators. But the activity earns local people money, and once locals are earning money they have a very strong incentive to protect the birds. In the Hornbill Valley they told me a story: two bird trappers from out of town, very foolishly, went into one of the bird ponds, paid for tickets to watch birds, then sent the pond owner off at noon on the pretext of buying lunch boxes and started setting nets. When the owner came back he was so angry he laughed: imagine someone stupid enough to come trap birds in a pond that has an owner. He posted in the village group chat, the whole village came to gawk at the idiots, the men were pinned down, taken to the police station, and hauled away.
You can see that once a whole village’s livelihood and prosperity are tightly linked to the birds, they develop a powerful urge to protect them. It’s a kind of symbiosis. Does the bird business harm the birds? For certain individuals, absolutely. But it has kept that forest standing and kept those birds alive. Local kids today would never shoot birds with slingshots; if they tried, the adults would thrash them. I think it’s a wonderful example.
Since we’re on domestic stories, and you just mentioned the Hornbill Valley, your book is full of stories from different regions and particular species, rivers, lakes and seas, deserts, tropical rainforest, all in 《我不能在鸟兽身旁只是悲伤》. Of the places you went, tell us a few stories you think are especially worth sharing.
Alright. Having run around so much of China’s wild country, I divide China’s wilderness into two kinds: wilderness without people, and wilderness full of people. Wilderness without people would be the Altun Mountains uninhabited zone we mentioned. The administrative unit there is Qimantag township, an area six times the size of Beijing with only four permanent residents; it’s an empty place. To protect an area like that you have to exclude human impact: keep out poachers, keep out miners, keep out anyone who would damage it, and send in as many scientists as possible to find the problems, work out the patterns, and guide the next steps of protection.
In 2023, the organization I’m part of, 荒野新疆 (Wild Xinjiang), the group I write about in the book, organized an expedition into the Altun Mountains to set camera traps and collect scat for a study of the snow leopard population. We went in at the end of October, and before we entered, colleagues at the protection station told us, “The station where you’re staying tomorrow has been taken over by bears.” I thought they were pulling my leg. The next day, on the way in, we met a group of fifteen or twenty people. What had they been doing? They had gone into the station to drive the two bears out.
I asked them, “How did you drive the bears out? Did you have guns?” They said, “We can’t use guns; we banged iron basins and chased them out while they were asleep.” The man leading them had a ragged, zigzag wound across his face. I asked how he got it and he said, “A bear slapped me in the face.” Come on, I said, a bear slap wouldn’t leave a zigzag wound like that. Then he admitted, “After we drove the bear out it chased me onto the roof, and there’s barbed wire up there; I cut myself on it.” Which is more plausible.
After they left, we went on and did end up staying at that station, which was indeed a wreck: the bunk beds had been flipped by the bears and the kitchen ransacked. Did they leave a smell? There was scat, there were paw prints, but it’s so cold there that the smell wasn’t strong. We even found a sheep inside, surrounded by bear droppings and tracks, yet alive; apparently the instant noodles in the kitchen were more attractive to the bears. Which is rather odd.
When we arrived we asked: the bears have been driven off, right? They’re not in here anymore? The old caretaker said, “They’ve been driven out of the station, yes, but now they’ve moved into our warehouse next door.” The warehouse held two tonnes of maize, which the bears had occupied; they slept on the maize and ate it. The maize was for winter supplementary feeding: if the snow falls too heavily and wild animals risk dying in large numbers, the staff scatter maize to tide them over.
We thought: there are a dozen of us, maybe we can drive the bears out too. The moment we tried, a bear roared and all dozen of us ran; total failure. Someone poked a steel pipe in and the pipe was shoved straight back out, and we fled. We set camera traps at the door of the warehouse and found the two bears were probably not in good health; their coats were scruffy and patchy. They had likely lost fights with other bears, been pushed out of their home range, and come to try their luck around people.
Their luck was good, and their routine was strict: out the door at eight every evening, which with the roughly two-hour offset from interior time is like six o’clock for us, and back into the warehouse to sleep at eight the next morning, like clockwork. They had adopted the warehouse as their den. A month after we left the Altun Mountains, the people at the station sent word that the two bears had offended again: they had pushed over the township government’s iron fence, charged in, and occupied the township offices.
That was a real escalation. Afterwards, perhaps knowing they had done something bad, the two bears slipped away and weren’t seen for a long time. Were they Tibetan brown bears or ordinary brown bears? The Tibetan brown bear is a subspecies of the brown bear. They’re interesting: the classic image is a white collar across the shoulders, as if wearing a khata, but in the wild their coloration is extremely variable. In the Altun Mountains we once saw a panda-patterned Tibetan brown bear, colored and patched exactly like a giant panda: black eyes and ears, black muzzle, black limbs, and the rest that creamy collar color.
I’ve heard tall tales that Tibetan brown bears balance dung on their heads to disguise themselves with a human “hat.” Folklore, or real? I think that’s imagination running wild. The Tibetan brown bear is the brown bear subspecies responsible for the most human injuries in China over recent decades, and that has to do with its environment and the changes it has faced, namely the settlement of herders. In recent decades China has promoted settlement projects for herders on the Qinghai-Tibet Plateau, not forcing them into towns but building them houses so they can settle locally.
Settlement makes it possible to build schools and clinics, with great benefits for health and education, and it doesn’t really change their way of life; they still herd seasonally. On the plateau the settlement sites are generally the winter pastures: families build their houses together, forming a natural village where they live in winter. Come spring and summer they still move their sheep and cattle out to the seasonal pastures.
At that point the winter house stands empty. Nobody lives in it, but it serves as a storehouse packed with gear and all kinds of food. And as luck would have it, bears are clever: they worked out that people’s winter quarters hold food, and learned to break into houses and eat. Most of the time a bear in a house meets no one, since nobody is home in winter quarters out of season. But once bears associate houses with food, they hang around houses more, which raises the chance of meeting people, and the rate of injuries goes up.
But think about why they come to the houses: not to eat people, but to eat the food inside. It’s the chance encounters that lead to more injuries. So I think those urban legends are nonsense; Tibetan brown bears aren’t that cunning, even if short videos make bears out to be terribly sly. Granted, in documentaries bears do come across as fierce, and they really are clever.
I’ve read about things like the Sankebetsu brown bear incident in Japan, and in recent decades cases of hiking students attracting bears in the mountains. Afterwards there are chilling accounts of people smelling a stench the whole way and realizing later the bear had been following them. Many people also believe bears hold grudges, that if you offend a bear it will come back for revenge. Is that true? Here’s the thing: all mammals have something like personality to some degree, and the upshot is individual variation.
Are there petty, grudge-holding bears? Bears that like attacking people? Quite possibly. But is it universal? I doubt it. The Altun Mountains may have the highest bear density in China; we drive a hundred kilometers a day there, and our record was eight bears in one day. In the true wild, every bear we met ran the moment it saw the vehicle or a person. But when we got out of the vehicles and walked into the hills, the bears became much more aggressive.
There’s a story in the book about two of our people going up a slope to install camera traps and accidentally startling a bear. The first person walked past a sleeping bear by the path and woke it without noticing; the second person came along and found the bear on its feet roaring at him. Luckily we ran fast. Bears are dangerous, but I don’t think they generally regard humans as food, because to a bear, humans are also a very dangerous animal; perhaps among bears the word is that humans are brutal and exceptionally cunning.
Right, on the whole humans barely feature on these animals’ menus. It’s the same with many lions in Africa: sitting in those Toyota safari vehicles you find they mostly ignore you, and sometimes even use the vehicles as cover for stalking other prey. I recall studies from Africa and India suggesting that lions and tigers that take to attacking people are often in poor health, with badly worn teeth, or too weakened to catch normal prey.
Yes. There’s the old story from India, back in, I think, the colonial era, of a notorious Bengal tiger said to have killed over a hundred people, an extremely cunning animal, which turned out to be very old and in decline. And the famous man-eating lions in Africa, the brothers, when they were finally killed, were found to have severely worn teeth and other physical problems; that’s when they had turned to hunting people. Seen this way, humans are in one sense a bottom-of-the-food-chain animal: collectively formidable and certain to retaliate against wildlife that harms people, but individually rather feeble.
That’s exactly the feeling, whether in Africa or Asia: meet these large animals in the field and you realize that even the herbivores that end up eaten in documentaries could overpower a person. Besides the Altun Mountains, let’s talk about the other kind: wilderness full of people. Outside the handful of big uninhabited zones, the vast majority of Chinese reserves have residents, some even inside the core zones. And in China it’s politically very difficult to do conservation by excluding people.
So in many areas what we try to do is secure people’s livelihoods while protecting nature. The book has a story from the Northeast, at the Xianghai national wetland reserve. They faced a huge problem: the county’s pillar industry is cattle, but the county suffers severe soil desertification, and once the land degrades there’s no forage. No forage, so what do you do? Go where the forage is, and the reserve’s wetlands have plenty, so the reserve faced an enormous cattle problem.
So the conservation people there ended up doing something closer to poverty alleviation. If herders graze in the reserve because they have no forage, can we help solve the forage problem? They introduced a high-quality forage crop, silage sorghum, and taught local people to grow it. Once people could grow their own forage, they no longer needed the wild grass in the reserve and could graze there far less. That kind of intervention looks nothing like traditional nature conservation.
And to build better relations with the surrounding communities, the reserve staff even run holiday tutoring classes, recruiting university students from the city to come to the villages and tutor local kids. That sounds even less like the traditional conservation narrative, yet it’s what is happening in many parts of China.
You mentioned the Northeast. In most people’s minds Liaoning is a province full of human activity, but Jilin and Heilongjiang are seen as sparsely populated. So across the Northeast, including the Greater and Lesser Khingan ranges, couldn’t the wildlife populations reach the kind of richness you see at Altun or in Yunnan?
The Northeast is actually very rich. I’ve been to the forest farms around Dongning: the apex predator there is the Amur tiger, then the Amur leopard, plus lots of deer, wild boar, all sorts of smaller carnivores, certainly bears, and a great many birds. Its diversity is not low at all.
The problem is that it’s forest, and winter temperatures are brutally low, so you can’t watch big herds there the way you can in Africa. Right, it also can’t easily be developed for tourism; locals have wanted to, but there are many obstacles. One serious issue: it’s a tick-borne encephalitis zone, a dangerous disease spread by ticks that can be fatal.
Everyone working in those forests gets a vaccine shot before each summer season. Imagine developing tourism there and telling visitors, “Before you enter the forest, please get vaccinated.” That step alone would scare off a lot of people, much like needing a yellow fever shot before traveling to the tropics. Exactly.
Besides the Northeast, there’s a chapter in the book set in Medog, in China’s southwest, searching for the Chinese pangolin. That’s the story of an explorer I wrote about, Li Cheng. He was originally a software engineer and an amateur who simply loved tropical forest, so as a hobbyist he did a great many field trips in Medog. Gradually he built a reputation and began doing surveys together with conservation organizations. He’s a fascinating character, you know.
When I went to Medog I asked around among the local guides about his reputation. Many local guides don’t want to climb with him, because his stamina is better than theirs. Really? Yes, and he takes dangerous routes that many guides won’t touch, so they’d rather not go out with him. But in our circle we say some people have an extraordinary feel for the field: he walks into a habitat and has an intuition for what should be there.
This place should have tigers, that place should have leopards, this is where the survey should go. There are a handful of people in Chinese conservation with that kind of intuition, and he’s one of them. More than a decade ago he asserted that China’s tallest tree was in Medog, and later he actually found it. Was that intuition, or just that he had covered so much ground and seen so many big trees there? Back then he had no way to measure them properly; he used rough estimates, got very large numbers, and began declaring the tallest tree in China had to be there. And he found it, the tallest tree on the mainland at the time; the tallest overall was then still in Taiwan. The current record-holder was also found by him, though not in Medog itself but right next to it.
I had no idea how tall China’s tallest tree is. Over eighty meters. So, across the different regions you’ve been to, what are the clearest differences in how people and nature get along, or in what a species needs to survive and breed? Compare, say, the Altun Mountains, a pure uninhabited zone wedged between great ranges, with somewhere tropical and humid, with malaria and other diseases; how does each keep its ecology intact?
I think those are really two parallel questions. First, organisms in different environments find strategies suited to those environments, so in any environment, once you remove certain pressures, the local wildlife can find its own way to live. The second question is whether, in some areas, we really should be excluding people. In many grassland regions of the Northwest, the grassland and its wildlife have coexisted with herders for thousands of years and formed an ecosystem together. If you abruptly and crudely remove the livestock, the vegetation goes out of balance: grasses that even wild herbivores won’t eat grow explosively, the sward becomes excessively dense, and the grassland shifts toward a state animals can’t use.
Or in other areas you do manage to move people out, but afterwards the bad actors find it easier to return than the good ones, and the local herders or mountain villagers who used to keep those bad actors in check are gone, so poaching may actually get worse. These things really do happen on the front line. One argument I make in this book is that sometimes conservation should not, and cannot, exclude human beings; humans should be seen as part of certain ecosystems. Right, and perhaps that way both the people and the local ecology end up better off.
That’s why I keep returning to the Hornbill Valley story; it’s a state of affairs I particularly like. Yes, traditionally we’re willing to count, say, the indigenous people of a rainforest as part of that rainforest, but modern people tend to write themselves out, as if we didn’t belong to such environments. There’s a vivid scene in your book: someone walks into a patch of primary forest, startles a squirrel, the squirrel leaps to another tree and is instantly snatched by an owl. That was Li Cheng, right? Yes, snatched by that owl.
That moment made Li Cheng realize that merely by entering the environment he had already introduced a disturbance; the fates of the animals in that forest were already being affected.
And still I come back to the Hornbill Valley. There, even as a visitor spending money, you are actually taking part in local conservation, because your spending makes local people love nature more and gives them more motivation to protect it. That kind of virtuous circle may be exactly what we need to find in China right now, and its force may well be stronger than morality or law.
We’ve discussed many environments, all terrestrial animals. Your book also covers animals of rivers, lakes, and seas. In the Chinese conservation context, is there an obvious difference in the level of threat facing these two groups?
Whether the crisis facing aquatic species, marine or freshwater, is more severe is hard to quantify, but the attention they receive is far, far lower than for terrestrial species. Ask our listeners to name endangered animals: terrestrial ones come easily, the Amur tiger, the giant panda. But how many extinct Chinese fish can you name? Very hard.
If you can name the baiji you’re already doing well. We seem to find it hard to empathize with fish, and because of that, people pay little attention to aquatic ecosystems, so many crises unfold right in front of us without our knowing. In the book I write about an organization on the Jinsha River in Yunnan doing what they call Shangri-La native fish conservation and breeding; what’s it called again? The name is long, as hard to remember as my book title. They have been working on native fish conservation in Yunnan for years.
That organization funds a professor at Dali University who works on captive breeding of native fish. In recent years he succeeded with a species called the houbei luli (后背鲈鲤), a fish of the Lancang drainage that grows as long as your arm, a top predator. He cracked its captive breeding, and they have begun stock-enhancement releases. But when I interviewed him in 2022 he offered a worrying observation. Over the years he has rescued many of these fish, handed over by fishermen who catch them accidentally, yet among all the adults he has rescued, not one was younger than eight years old. What does that tell you? Probably that something happened about eight years earlier: the species’ spawning grounds were destroyed, and in those eight years there hasn’t been a single successful breeding event.
So no fish under eight can be found in the river. The hope is that captive breeding and release keep the species’ reproduction going. But if its spawning grounds are destroyed and no new ones have been found, is endless stock enhancement sustainable? Can it solve the underlying problem and restore natural reproduction? That may come down to luck: the fish themselves have to find a new spawning ground.
I later put the question to the head of that organization, Mr. Qu, who was originally a businessman: the houbei luli’s spawning grounds are gone, you keep releasing fish, can they find a new spawning ground, and how do we solve this? He thought for a long while and answered: “I can’t solve that problem. What I can do now is keep them alive until the person who can solve it appears. I’m buying them time.”
A lot of things are in that state now. In recent years we’ve had the Yangtze conservation program and the ten-year fishing ban, which have worked very well; many fish populations are recovering, as is the finless porpoise. But some fish are simply gone, and gone is gone. My hometown, Anqing, is a city on the Yangtze, close to Hubei, and we even had a cement plant named after the baiji, so many Anqing people feel attached to the animal even though most never saw one; the name just kept coming up. Is it essentially extinct? It is. It’s gone. The last baiji died in Wuhan.
Since the ten-year ban, though, I’ve seen news that the finless porpoise, which for years was rarely spotted, is being seen again. The finless porpoise was never actually all that rare in recent years; in some cities you have a good chance of seeing one, in Yichang in Hubei, for example, or right in urban Nanjing. The ten-year ban really has helped them.
But some people hope for a miracle from the ban, that at some point the baiji will reappear. Is that biologically possible? No. An extinct animal does not come back; this one’s extinction is nailed down. Its extinction in the wild was declared, as I recall, fifteen or twenty years ago. For decades there have been fishing boats and research vessels all over the Yangtze and no trace of it, and this is a large animal. The chance of rediscovering it is essentially nil.
So it’s a deeply regrettable thing. A few years ago some groups were still organizing searches for the baiji; frankly I think that’s a waste of money. So, within China’s river systems, does the Yangtze count as relatively well managed and protected, or does it still face a great many problems?
The Yangtze’s problems began back in the Gezhouba era. Big dams had an enormous impact on the river’s ecology, and Gezhouba is fifty or sixty years old; that’s when it started. Gezhouba’s effect on the spawning grounds of certain fish was devastating; as I recall, the baiji’s extinction owed a great deal to Gezhouba. By the time of the Three Gorges the marginal impact was smaller, because Gezhouba had already done the damage.
So in terms of sequence, many of the problems trace back to Gezhouba. That was an era of very limited understanding; we would do many things better today, but every era does what its era does. Right, the priorities back then were simply not on conservation. Development on the scale of the Yangtze is hard for us to second-guess. What really pains us are the small and medium hydropower stations, in Yunnan and other provinces, built on the small upper tributaries of many rivers.
Those small stations can be utterly devastating for their stretches of river. Was the Yunnan snub-nosed monkey campaign of Xi Zhinong, which we mentioned earlier, also connected to a dam? No, but another case was: a few years ago in Yunnan, a tract of green peafowl habitat was nearly destroyed by the construction of a mid-sized hydropower station. That case went through a very long lawsuit and the dam was halted, though what happens next we don’t know.
There’s something in your book closely tied to urban life today: the rise of the birdwatching community.
I first learned about birdwatchers from American TV shows, where people go birding in Central Park. Then I noticed friends of mine doing it too, in Beijing, Shanghai, everywhere.
There’s a very good American film about birding called The Big Year, made with several famous comedy actors. It’s worth watching for its gloriously obsessive birders, who compete over how many species they can see in the United States within a single year, and for how much fun that pursuit is.
And there’s a worldwide pattern: as a country grows wealthier, its people’s appetite for nature grows and they want to embrace it. Birdwatching is just about the easiest form of wildlife watching there is. In a city, seeing a mammal is hard; even though there are raccoon dogs and weasels around us, finding one isn’t easy. But finding a bird in a city is easy, and once you’ve found it you can watch what it looks like and listen to how it calls. Birding is relatively cheap and easy to get into.
And I’ve discovered there’s a whole industry around it: dedicated apps serving birders, dedicated birding gear.
A few days ago I saw an old friend, the man behind the podcast 迟早更新, who is a birder. He used to be building his own headphones; a few years ago he wanted to make headphones for podcast listening, and now he says he wants to make headphones for birders, because he’s found a special need, perhaps for a quieter experience while birding. Of all wildlife-related activities, birding seems to have commercialized most successfully.
There’s an interesting wrinkle there. Birdwatching in Britain and America began sprouting one or two hundred years ago, so those birders’ standard method is binoculars. Chinese birders are different: birding here took off in the digital camera era, so almost every Chinese birder carries a long telephoto camera. And really, birdwatching and bird photography are two entirely different pursuits.
In birdwatching, once you’ve seen a bird and what it looks like, you can move on. In bird photography, after you find it you may wait ages for the light and everything else to be right, and only once you have a good frame can you leave. Completely different activities. But in China almost all birders are bird photographers; within the community, people who only use binoculars are now called traditional birders, classical birders. Is that also tied to social media? If you don’t photograph it, how do you post it or share it with friends?
I sometimes feel, watching friends who bird and get a crossword-puzzle kind of pleasure from learning to identify species, that the sheer complexity of the avian family tree is part of the appeal. Right, if it were tigers there would be only a handful of species and you’d be done. Birds have a nice property: not too many, not too few, about 1,500 species in China.
So many Chinese birders love running up their lists, and there are apps just for logging how many species you’ve seen. A friend of mine, Xiao Guan, has seen over 1,400, top five in China, which is seriously impressive. In China, if you’ve seen 1,000 species you already count as formidable.
Compare insects: China has nearly 60,000 known insect species, so listing them is just too hard. Mammals? The great majority of Chinese mammal species, counted by number, are rodents and bats, which are very hard to identify, and the remaining species aren’t numerous enough to make listing a sport. Seeing every deer species in China, for example, is a simple, not especially difficult project, and I’ve nearly finished it. So birds are naturally suited to being pursued as a sport.
Especially since rodents and bats are, frankly, not much to look at, while birds are also ornamental, lovely to hear and to see. From several angles they are perfectly suited to being adopted by an urban population as a lifestyle, as a sport, even a commercially viable one.
Right, a happy coincidence of factors. I remember that about a year ago Guokr wrote about New Yorkers commemorating an owl called Flaco, celebrated as a freedom-loving animal. That was an eagle-owl. If I remember the story correctly, someone was keeping an eagle-owl illegally, it was confiscated and sent to the zoo in the park, then through a series of accidents it slipped out and returned to the wild, even though it isn’t a native species, and later it died after eating rats that had been poisoned. The whole story is a string of odd coincidences.
So in your wildlife circles, have similar stories emerged, stories centered on an individual animal that everyone in the community knows? You mean stories about individual animals, something like Hachikō? Though Hachikō is the story of an animal inside human society; do such stories arise among wild animals? The New York eagle-owl seems like exactly that kind of story.
Whether such a story takes off doesn’t really hinge on the animal itself; it hinges on the people who project meaning onto it, the yearning for freedom in that case, and on someone following the animal long enough to tell its story.
Another example: the Singapore otter gangs that have been such a hit in recent years, two clans at war. That kind of warfare is utterly ordinary in nature; it’s simply how otters live. It became famous in Singapore because people there were watching. So whether such stories appear in China, and how many, is really a test of the people, not the animals.
I’ve written a few such stories. For instance, Wild Xinjiang, together with the forestry and grassland bureau, has set out many camera traps around Ürümqi. In the Glacier No. 1 area, the glacier closest to Ürümqi, we followed one snow leopard over a long period. We called it Bingbing. Why Bingbing? At first we identified it as a female, and thought: Fan Bingbing and Li Bingbing are both beauties, and this leopard is so handsome, let’s call it Bingbing. Later it turned out to be male; we’d got it wrong.
Bingbing was the dominant male of the Glacier No. 1 area for five or six years. Through the camera traps we followed its story from when it first appeared and established itself, to its eventual disappearance, five or six years of it; you can find the write-up on my WeChat public account, the saga of the snow leopard Bingbing. It once drove off another formidable male nearby, and its offspring came to occupy all the surrounding valleys. Eventually it was defeated by a powerful new male, lost its throne, and disappeared.
That story could only be told because Wild Xinjiang had been tracking snow leopards there for years and collecting the data. Right, so as the community of enthusiasts keeps growing, China will probably produce legends like that of its own.
Some of those legends will come from urban animals, which enthusiasts can reach more easily. The rest will have to come from researchers. And there’s an unfortunate situation right now: front-line ecological fieldwork in China is getting harder and harder to fund.
Work like setting camera traps in the field and doing traditional ecological surveys is looked down on in today’s academic biology. Why? Too traditional, too old-fashioned. What counts as cutting-edge now? If you aren’t doing genetics, or building some model, using remote sensing to build a predictive model, it doesn’t fly.
But many front-line conservation areas in China badly need exactly that accumulation of basic field data. Without baseline data you cannot scientifically assess a site’s condition, and the accuracy that first-hand data provides is something no model analysis or AI identification can match. China is huge, the Northwest especially is huge and sparsely populated, and there still aren’t enough people doing this work or enough data.
What about the enthusiast community, not the professionals, people more like that fellow in Shenzhen you mentioned earlier? How big is that group in China now, and is it growing alongside organizations like yours?
Take the organization where I serve as a board member, Wild Xinjiang. We have always described ourselves as a non-professional organization. We have exactly one full-time employee, our accountant; everyone else is a volunteer, giving spare time to run the organization, do field surveys, and take part in all kinds of public activities.
Why operate that way? Partly because we came together out of shared interest, and partly because in Xinjiang’s particular social environment a non-professional organization is seen as less threatening and can act more flexibly. Which is entirely understandable.
To wrap up, for listeners who are interested, whether in watching wildlife or just learning more about it, could you recommend some ways in? First, ways to learn on your own with just a phone, websites or communities you’d suggest; and second, destinations within China suitable for ordinary people.
Sure. First, there are free tools online that do AI identification and tell you what animal you’re looking at. The most recommendable is the 懂 (“Dong”) series: the team that first built the 懂鸟 bird-ID mini program has also made one for mammals and one for reptiles and amphibians.
Those three mini programs help you identify birds, mammals, and reptiles and amphibians; the bird one is the most accurate because it has the most users. If you have a camera, or a phone that can shoot far enough, and you don’t know what bird you’ve photographed, run it through the mini program.
If you like insects, there’s a mini program called 小虫 (“little bug”) that is also fairly usable. Plant identification actually got started earliest; there are several options, 形色 (Xingse), 识花君, and a whole series of AI plant-ID mini programs worth trying.
As for where to watch wild animals in China, I’d recommend a few places. First, Tangjiahe in northern Sichuan, a reserve where animal watching is extremely efficient and you can see plenty of ungulates. Second, the Hornbill Valley in Yunnan, in Yingjiang County, Dehong Prefecture. Yingjiang is the front line of Chinese birdwatching: China has about 1,500 bird species and more than 700 can be seen there, a large share of China’s birds.
The Hornbill Valley sits right on the China-Myanmar border, and it’s the only place in China where you can see three hornbill species, plus a great many other forest birds. The bird guides there are highly professional; hire one and you can rack up several hundred species in a few days. It’s tremendous fun. Those are the two places I’d recommend most.
And the Altun Mountains you mentioned earlier, ordinary tourists can’t go? Not for now. But we hear a Kunlun Mountains national park is being established, and once that happens, a national park should in principle welcome visitors. How it will be run hasn’t been settled yet; perhaps in ten years we’ll be able to travel there.
I hope so. I’d really like to see these parts of China myself, especially the vast west.
Well, it has been a great pleasure talking with Hua Shi about his book and his travels. I feel we could never exhaust this topic; if there’s a more specific angle to pursue, we’d love to invite you back on 忽左忽右 to keep talking about these stories of nature and wildlife conservation.
Sounds good. That’s it for this episode. Thank you all for listening, see you next time, bye-bye.