Love and Philosophy

Scales & Science Fiction with Michael Levin

Andrea Hiott Episode 6

Send a love message

Scales and Science Fiction with Michael Levin. The first part of a multi-part way-making research conversation with Tufts biologist Michael Levin. We discuss his own path and how science fiction helped him see beyond dichotomies. We also discuss the scales of cognition and what it might mean to reorient our understandings of life and mind.

Links to the papers mentioned:
TAME and Light Cones: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full
Navigation: https://pubmed.ncbi.nlm.nih.gov/35741540/

Links to more of Michael Levin's work:
Bluesky: @drmichaellevin
Twitter: https://twitter.com/drmichaellevin
Website: https://drmichaellevin.org

SEMF: https://semf.org.es/school2023/

SciFi Story Made of Meat by Terry Bisson: https://en.wikipedia.org/wiki/They%27re_Made_Out_of_Meat

More Papers:
Biological Robots: https://arxiv.org/abs/2207.00880
Limb Regeneration: https://science.org/doi/10.1126/sciadv.abj2164

Subscribe to YouTube videos: https://www.youtube.com/@waymaking23
Website for Love and Philosophy Beyond Dichotomy: https://loveandphilosophy.com/

Hosted by Andrea Hiott. More at https://www.andreahiott.com

Support the show

Please rate and review with love.
YouTube, Facebook, Instagram, Twitter, Substack.


Andrea Hiott: [00:00:00] Hey everyone, this is Andrea, back with another research conversation. I'm in Heidelberg for the month to work on my waymaking thesis at the university here and do some cognitive science studies. Um, I usually live in the Netherlands, but I'm in Heidelberg. It's rainy here, uh, autumn 2023. I'm alone today except for my little dog.

She's snuggled on the bed. And I'm trying to write some papers and connect some dots as I build this, uh, general navigational framework for cognition. Um, trying to sort of clarify the similar movement that I see across various fields and disciplines that's sort of reorienting us ecologically, uh, so we can understand cognition and consciousness, which are not the same thing, by the way.

But... More on that in a solitary research post, haha. Uh, in any case, I've been thinking about all this as I work on some papers, and so I revisited a conversation I had with Michael Levin, and, um, I want to post it today. It's been a very important [00:01:00] conversation. This is just part one, I'll post the other parts later, but this dialogue that opened between him and me is, uh, very important for me because I only just discovered his work, at least the work relative to navigation, a couple of months ago in the summer, when we were both part of the Society for Multidisciplinary and Fundamental Research, SEMF, S E M F. We were both speaking there, and one of the papers he was talking about then was, what is it, Competency in Navigating Arbitrary Spaces as an Invariant for Analyzing Cognition in Diverse Embodiments.

And, uh, that paper is saying something very similar to what I've been sort of saying in a more generalized way, and what I've been using hippocampal research to, uh, delineate philosophically, which is, in short, that the best way to understand cognition is as waymaking or as [00:02:00] navigation. Although I don't use the word navigation, because it has a really specific, goal-oriented meaning, um, which is actually how Michael uses it, and it's an important part of his work, so we discuss this a lot.

I think it's more in part two than in this section, but all this just to say there are a lot of really interesting themes that overlap, but not fully. They're not parallel between his work and my work. Of course, I should say he's a very successful scientist, working as a developmental biologist with a lab, and what I'm trying to do is come up with a philosophical framework that would sort of unite his work with other scientific work that I see happening in other fields.

I'm more or less trying to come up with sort of the background framework of how we can really think of cognition. Um, so his work, finding his work has been incredibly important for me for that reason. I'm still trying to understand it all, that's why I'm having all these conversations with him. And this is the first one where I, where we meet each other, so I want to go ahead and post it.

Just leave it sort of as it is. I [00:03:00] imagine you already know Michael Levin, but if you don't, he's a biologist at Tufts University, and he's making new paths towards understanding, um, complex pattern formation in biological systems. I had seen a talk he gave a couple years ago when I was studying neuroscience, well, not a couple years ago, more like in 2017, um, about planarians, which just totally rocks your world if you've never come across this biology that he's doing and you see him talk about it.

He wasn't talking about cognition as navigation then, but, um, that paper came out just very recently, I think. I'll put it all in the show notes. In any case, um, his work is amazing, not just for what I'm talking with him about, but for many things. So really check him out and look at what he's doing. It's pretty mind-blowing.

He's also developed something more recently called TAME, the Technological Approach to Mind Everywhere. I won't even try to summarize it, but I'll put it in the show notes. But again, it has a lot of overlaps and a lot of stuff that I want to really think about, like, uh, how we use these terms [00:04:00] mechanical and life. I see them as different, but I think Michael would sometimes just say, like, machines, he would say, you know, these different life forms are machines.

And in a way we are saying the same thing, but we're using different language with really different trajectories, coming from different places. And part of all these conversations is trying to figure all this out and get some clarity on it, because a lot of times in our disciplines we talk about the same things, but we use different language trajectories, if you know what I mean. Like, we're using different references and sometimes different words to mean different things, and we don't realize it, and this obviously can get really messy. So that's part of what I'm trying to figure out here too, because it is a form of waymaking and it relates very much to this overall framework that I'm building for cognition.

So, all that just to say, uh, Mike Levin is doing some amazing work. I was lucky enough to get to talk to him and to be in a conversation with him, and I'm just going to keep posting those here in different parts. [00:05:00] Um, so that's what this is. I'm not even going to edit it much or anything. I'm trying to see how it overlaps with a lot of the phenomenological and network models that I've been working with.

Everything from, like, Varela and Maturana to Hans Jonas, and Nadel and O'Keefe and the hippocampal stuff, active inference. There's a through line through all of this. I'm trying to sort of bring that out, and Levin's work is really important as a part of that. Not only an example, but something that's, I think, shifting this paradigm of how we see cognition in very real, scientific ways.

There's something he says in here, for example, he's talking about, uh, intelligence. And he says that, I think he's telling me a story, and he says when you say the word intelligence, people think human intelligence. And he says, well, that just has to stop. So he's just sort of bypassing a lot of the dichotomies that people accept, and people love his work.

But of course, they also have a lot of trouble with his work at times, exactly for this reason, that he's using words in ways they're not used [00:06:00] to, uh, especially when it comes to technology and machines and how they relate to life and intelligence, for example. So some people don't want to think plants and animals have intelligence.

Not to mention something like AI. Another interesting part of this conversation, I found, was him talking about how reading science fiction as a kid opened him up to thinking beyond dichotomies in a way, that he was already able to see something like a machine and something like an insect or an animal as somehow related.

Not the same thing, but somehow related and, um, continuous, because, you know, the technologies we build don't come out of nowhere. I mean, they are connected to us. There is a continuum. It's just, we have to figure out what that continuum is and how we can best, um, understand it and direct it. In any case, I liked hearing that it was novels and science fiction that opened his mind, and that he never even thought there was a difference, as you'll hear him say, between the TV and the caterpillar or whatever. He just was trying to understand it all [00:07:00] as one process. So anyway, thanks for being here, and I hope you're doing well out there wherever you are. And write to me anytime if you have some ideas about how all this fits together. Okay. See you.

Andrea Hiott: Great. Awesome. Got it. Thank you very much.

Michael Levin: Yeah. No, it's a pleasure to meet you.

Andrea Hiott: Yeah. It's a pleasure to meet you too. And, uh, I I'm really excited about your work and thank you for your work. Likewise. Yeah. Yeah. So I wonder if it's okay with you, if we could talk a little bit about how you got into this line of work and maybe how you ended up thinking about, um, these kinds of bigger themes like cognition and so forth.

Michael Levin: Sure. Um, well, let's see. Uh, so from the time I was very, very young, I was interested in, um, uh, [00:08:00] minds and how they interact in the physical world. Yeah, so, uh, you know, technology, but then bugs and insects, you know, you sort of see them and you ask yourself, so what's common, what's different? Um, we know how the technology got here, sort of, but how did the, you know, how did the complexity of the living forms get here?

And specifically, the fact that they clearly have preferences. So some of them have, um, overt behavioral goals and repertoire and various repertoires, but they clearly have preferences about what happens. Um, and then that raises the obvious question of, well, what would it take to, uh, to, to create technology that, that had that, right?

And, um, is, uh, the kind of third-person observable behavior all there is, or is there something else, right? So it's a kind of first-person problem.

Andrea Hiott: Um, were you actually looking at, like, insects and animals and stuff when you were a kid? Were you wondering, like, what do they, yeah, what's going on there?

Michael Levin: Yeah, I mean, it started out, uh, [00:09:00] I, I had asthma and kind of, um, didn't have access to medications at the time, and so what my dad used to do is take the back off of this, uh, TV set with these vacuum tubes and everything, and I would sit there and just sort of get distracted and stare at it and be like, wow, somebody knew how to put all those things in exactly the right arrangement. It didn't just happen, like, somebody knew how to do that.

And then, you know, then I would go outside and play with, you know, caterpillars and beetles and so on. And you say, okay, so they also have a bunch of stuff in there. And, uh, so now what's the same and what's different, right, between these?

Andrea Hiott: You actually remember kind of thinking that, making that connection?

Michael Levin: Oh, totally, totally.

Andrea Hiott: We'll talk about it later, maybe, but the way you use robotics and biological robotics and biorobotics, it's different, I think, um, and it's also kind of meshing those things together. So it's interesting you already saw it almost like a continuum as a kid.

Michael Levin: Yeah, it seemed, I mean, it never occurred to me that there would be any kind of a discrete break between those two things.

It [00:10:00] just seemed like there are various, various kinds of behavioral repertoire, some very simple, some more complex. And then, you know, the, the plasticity of it, I remember thinking, like, so, you know, so you see, there's an egg and the egg becomes a caterpillar, and then yet it changes again and becomes a butterfly.

And so you ask yourself, what, I mean, what's that like? What's that like, to be a caterpillar changing into a butterfly in real time, not on the evolutionary time scale or whatever, but literally one creature radically changing and living in a different world, a three-dimensional world instead of the two-dimensional world of the caterpillar, and so on. Um, you know, and thinking about that, and thinking about myself, and watching, you know, my older friends, and they would, you know, get older, go through puberty, whatever, and thinking, wow, we change too. You know, right. And not only do you get bigger, but your, your valence changes. So things that you really didn't think were at all important when you're 11, suddenly new things are important when you're, you know, 15 and so on.

Right. And whatever. So, so [00:11:00] just thinking about that, thinking about persistence of the self and how your mind changes and whatever, and so those kinds of things. And so later, when I went to undergrad, I went for computer science. I wanted to do, um, artificial intelligence, and, um, pretty soon it became clear that, uh, we really didn't know how to do that, right?

Uh, we just didn't have any idea, and then really going back and asking, okay, so let's take the one, the n-of-one example that we do have from the biological world and figure out what's going on there. And I was really, uh, sort of enamored with embryogenesis, and so this idea that it's a self-assembling machine.

But more than that, it's like this: we all take that journey from, from, uh, you know, cognition or whatever, that's just physics, like, well, you were an unfertilized oocyte at one point, like you were a quiescent [00:12:00] little blob of chemicals. You, like all of us, were just, I hate that phrase, just physics, right? And then somehow, slowly but surely, uh, here we are. And there was no magic lightning bolt during development that says, boom, at this point you go from being physics and chemistry to being a being. There's no sharp line there.

And so I was really interested in that. And so I studied developmental biology as well, as an undergrad, and then I went and got a PhD in, uh, developmental genetics. And so that's where these things come together. Like, I'm just really interested in diverse embodiments of mind. I want to understand what it means to be an agent in a physical universe.

And a lot of the kind of standard barriers that exist between fields, between journals, between ways that we're allowed to think about things, some of the terminology, like, I think a lot of that is just terrible, and it holds back progress because it, um, it obscures important invariances that we can use to merge fields and [00:13:00] to do better than we have.

So that's kind of my, my story. 

Andrea Hiott: Yeah, I totally agree. And when I said coming upon your work was exactly what I was looking for, it's in so many ways because you don't really, um, take the dichotomies seriously. You're just already kind of, uh, speaking, in a way, past them, which I find very unusual. So, what was the continuity that you saw?

Like, if you're staring at the back of the TV set and you're looking at all of that, and you're looking at the caterpillar, the butterfly, your friends, and then, yeah, you obviously got very interested in artificial intelligence, but I guess some people would say that's really different, right? That the television set isn't necessarily evolving and changing.

Whereas, um, what you described, like with the caterpillar or your friends, there's some kind of clear change going on. So can you tell me, how did you see that? Did you see that more, um, as part of the same kind of system?

Michael Levin: So yeah, I mean, it was very clear that the TV needed a lot of help [00:14:00] to be put together, right?

It needed other beings to put it together. The caterpillar needs a little bit of help, because there's the butterfly that laid the egg, and it was obvious that in the egg there was some stuff that had to be also put in, right? It wasn't just random, not just anything turns into a caterpillar. So clearly the butterfly had to do a lot of work too. Now, of course, there are differences. The engineers have a pretty good idea of what they're doing with the TV set; the butterfly, probably not. However, in both cases, there are two components to this. There's the, how much help did you get from outside to be created? There's that component.

And then there's the component of, once you were created, how much do you do on your own? Because that's the other thing, right? And so first with the TV, you know, TVs are kind of boring, but with computers later on, you see that there's all sorts of interesting stuff, you know, after you turn on the juice, right?

The job of the hardware engineer is done. You turn on the juice, now some interesting stuff happens. And I remember playing with an early, uh, G account, you know, it was just a calculator, but [00:15:00] you could already see that, yeah, they created the hardware, but there's also some really cool software that happens.

And part of that is not explained specifically by what the engineer did. It's explained by the laws of mathematics and the laws of physics and the biology. And so of course there are different degrees of, um, of agency here, but it didn't seem to me like there had to be any strict barrier between them. It didn't, you know, it didn't seem... and maybe, you know, part of this is just thinking about, part of it was a lot of sci-fi that I read as a kid. Uh, when you look at these creatures, living creatures, I don't see, my intuition isn't that this is the only way to do mind. This is one way, but why would you think this is the only way?

And, you know, people nowadays often say, you know, well, it doesn't have a brain, and, you know, it's not living, whatever. And I'm just thinking, it can't be that throughout the whole enormous universe, this thing with this cortex that we're looking at, that's the only way to [00:16:00] do proper agency.

I mean, it just can't be that. I just don't see that. So to me, it seemed like we have an n-of-one example of how it can be done, you know, and then thinking about evolution and thinking, why should evolution have some kind of monopoly on creating real agents? Like, that's one way it happens, but surely engineers can eventually do better than this sort of random walk that evolution does. And why would it be the only way that real beings come about? So I don't, you know, this whole business of, oh, it's a machine, you know, and what can machines do? I mean, I think it's a terrible question.

Uh, it completely obscures the whole...

Andrea Hiott: Yeah. But it's almost an assumption, not even a question anymore. And a lot of it is just the way that we talk, the way the literature is written, that there are these kinds of scaffoldings that people don't think about very much, which you seem to have, and now I understand, you're already sort of thinking past it. But trying to get a way for people to understand that, does it have something to do with, like, what I call the agent base, or I think you use agential a lot too, um, like, from your [00:17:00] position, of course the TV is going to look less animated or whatever, in that particular position, but we could also scale up or down, right?

Um, cellular or, you know, even like in the TV itself, there's going to be sort of more action at different scales, right? Like your observer base. So yeah, I wonder, I know that plays a role in your work that I'm still trying to understand. So I don't know, what does that make you think of when I say that?

Michael Levin: Yeah. Um, so very much observer, um, thinking about what, uh, what an observer sees at any particular scale. And the idea that what we do see is hugely limited by our own scale and what we are trained, either by evolution or by experience, to perceive. And so this is something else that I think you really get if you read a lot of sci-fi: they sort of blow up all those expectations, where something is enormous in scale, or very tiny, or happens on a very rapid timescale, or a very slow timescale.

You sort of realize that our [00:18:00] perceptual and cognitive systems are so, um, narrowly constrained to think about, and to be good at recognizing, agency in a particular guise: medium-sized objects moving at medium speeds through three-dimensional space. This is what we're good at. But something that happens on a huge scale, like whole evolutionary lineages, or, you know, at one point I was talking to some physicists to see if we can design a gravitational synapse, you know, like literally a solar-system-scale thing where, when something comes flying through, that's a signal, and then the next time it comes, you know, there's habituation or whatever, like, you can imagine.

So these kinds of giant things where the substrate doesn't matter. And having to be open to really unconventional... and I should be really clear, this isn't just me having all these ideas. There are plenty of people in the diverse intelligence community that, you know, see things this way, but it's still obviously not mainstream in any way.

But so so this idea that [00:19:00] the substrate isn't the magic. Uh, what is the magic? That's a good question. And that's what, that's what we're trying to understand is, is the scaling of mind and then goals and all that, um, yeah. And just this idea that, uh, I mean, there's, there's also a fundamental thing here, which is, uh, I, I have a very, um, kind of engineering approach to things in the following way.

I think that the way to know, uh, if these ideas are worthwhile or not is to see what they enable you to do, what kind of research programs they open up. And on the one hand that makes it, uh, nice and easy, because now we don't have to have philosophical arguments about whether this can or can't be an agent or whatever.

We can lay down specific, um, research programs for how to determine whether that's a useful worldview or not. The downside of it is that it catches me a lot of flak from the organicists, because they see this as basically capitulating [00:20:00] to, um, kind of the mechanist worldview, and, you know, as a lot of people write to me, reducing the magic and sort of majesty of life.

And I don't see it that way at all. My point isn't that we can or should try to reduce high-agency kinds of systems to simple machines, the kind of thing, you know, when people say machines, they really mean machines from up until, you know, the 1940s or so. Um, right. So I don't think it's that at all.

I think it's, um, understanding the full spectrum and having a prayer of actually making progress, which means tight integration with engineering, with the real world. I don't think a lot of these things are philosophical questions; I think they're practical questions. And also, um, you know, it's this idea that it's been the assumption for a really long time in science that it's really bad to make the mistake of attributing too much agency, right?

That, you know, it's a slippery slope: you start [00:21:00] seeing, uh, you know, ghosts in all the rocks and things, and you really should aim downwards, and chemistry is kind of the level at which... And so to me, getting it wrong in either direction is terrible. Like, to me the slope is actually much slipperier in the other direction, because once you start seeing everything as a machine, as a simple machine, it's really easy to have all kinds of lapses that first impair your technology and then have, um, ethical implications. And so either direction, I think, is bad.

And I think our job as scientists is to get it right for any given system. You don't just want the lowest level of agency you can get away with; you want the optimal level of agency that you need to attribute. And I think we often just assume that you've got to go down as far as possible, and I don't, yeah, I think that screws us up as often as it helps.

Andrea Hiott: Yeah, gosh, there's so much in there. Uh, you mentioned science fiction, for example, and, um, just thinking [00:22:00] about the scales and stuff. I think one thing about creativity, or about, uh, you know, reading when we're kids, is that we kind of learn how to see the world from different positions in a way. Um, and in a way, that's kind of helping us maybe to get a grasp on this, um, perspective that I think you're talking about in your work, and that you do demonstrate practically, but that you say the philosophy doesn't matter, but...

Michael Levin: Well, I didn't say it doesn't matter. No, sorry, maybe I didn't express myself well. Um, I don't mean to say that it doesn't matter; I actually think it matters a lot. Because people will say, I've had this, um, after a talk people will say, well, the data are really cool and the new capabilities are really cool, but I wish you'd stop talking about all this philosophical stuff.

Don't do that, just do the science. And I think that's, right, it's like the old joke where the business guy tells the book publisher, just only publish the bestsellers, why do you publish all this other junk? It's the same thing. [00:23:00] And I say, yeah, if we didn't have the philosophical outlook, we wouldn't have done these experiments that nobody else had done before.

Right, you need that. So I'm not saying the philosophy isn't important, I think it's critical. I'm just saying that a lot of the questions that are often treated as philosophical questions, meaning that you can have an opinion on it without doing anything. So people sitting in their armchair, you know, I'll show you some kind of a thing and they'll say, oh, there's no way that thing has cognition. And I say, well, what experiments did you do to arrive at this opinion? And, you know, they just sort of came up with that. That's, you know, a background commitment that you have before you've done any work on it. That's cheap. Yeah. Yeah.

Andrea Hiott: This gets really messy, I think, in a way that I like, because actually it's kind of proving the point, in a way, that everyone will come at you from their position and agent base. Right? So whatever references, like whatever way they've learned how to define a machine or to define, um, philosophy or whatever, they're all going to put that on you. And then that's constraining, but at the [00:24:00] same time, um, part of what we're trying to do is find a practical way of, like, expanding that, isn't it?

I mean, isn't it connected, the way the practical work that you're doing in the lab could actually change, um, maybe a linear way of thinking, for example? That's an assumption, you know, that there's a beginning and there's an end, and that something either has cognition or it doesn't.

Um, that's so entrenched, in such a scaffold, that you almost have to call philosophical attention to it. Um, but you also need, and this is why, again, I'm so excited about your work, you need practical, actual examples of how to understand that, right? And that's kind of what I see in your work, but I don't know.

Michael Levin: Yeah, no, you're right. Um, the definitions, that's really important, because it's kind of a strategic judgment call all the time. Because, for example, somebody said to me once, we were talking [00:25:00] about learning in some weird, unconventional system.

And, uh, and she said, look, you're going to have so much pushback to this. Why don't you come up with a different word? Don't use learning, come up with something else, and then it'll be fine and nobody will be upset. You just define it however you want, and then that's good.

Right. And okay, you know, sometimes maybe that's the thing to do for a particular purpose, but overall, like, I think back, you know, here's Isaac Newton, and somebody says to him, um, you know what, we'll have gravity for the thing that holds the moon to the earth, and then we'll have shmavity for the thing that makes the apple fall. Right? That'll be great, and then nobody has to be freaked out. And like, yeah, I mean, great, but you've missed out on the whole point. You've missed out on the unification. And so most of the time I'll say no, I don't want to use these words the way everybody else uses them, because a lot of it is wrong, and we as scientists, uh, and as philosophers, it's on us to clear this up. Because people will say all the time,

yeah, but, you know, don't use intelligence like [00:26:00] that, because when you say intelligence, people think human intelligence, right? Well, they need to stop. And it's not going to stop until the scientists, you know, put forth a better system, and then eventually it sort of trickles down. So it's a judgment call, you know, when you sort of bow to the conventional usage and come up with other terms.

But much of it, I think we got to be the tip of the spear in changing because that'll change perception and eventually the, the usage will change. 

Andrea Hiott: And that's kind of the dance too, that discussion and that, maybe, shock, right? Like, uh, I think there's even a way in which, if you could open the space to understand, okay, we don't all use these words in the same way.

So maybe I'm not using it in the same way as you. Let's like talk about that. Um, how, you know, why is this shocking for you or not? And yeah, but you know, of course then you can get mired in, in the philosophy and not, and not in the practical, in the practical aspects too. So before I came, before we met today, I was thinking like, what are, what are the big [00:27:00] issues?

You know, like, I don't know if you think about it too much, you probably do, but you know, you're doing a lot of work and it's a lot of energy, and, like, what is it? What's kind of driving you? What's the motivation behind it? Um, yeah,

Michael Levin: Yeah, I mean, it's an interesting, uh, thing because I, I sort of go back and forth.

So on Mondays and Wednesdays and Fridays, I think, uh, the real goal is to come up with practical interventions that improve people's lives. Now, I'm not even a clinician, okay, I don't do clinical work, but I get emails every day from people in the most unbelievable circumstances. I mean, birth defects, cancer, missing limbs.

I mean, just the craziest things, things I've never even heard of, that they want you to help them with, yeah, and they need help. And medicine, uh, is still largely stuck where, uh, computer science was in the forties and fifties. It's all about the hardware, right? When I give a talk, I show this picture of, like, you know, she's [00:28:00] rewiring the thing.

Yeah, this is molecular medicine now. And I think until we fix that, and until we really learn to take advantage of the intelligence of the various layers of our, um, body architecture, all of these things are not going to be resolved. And so every other day, I think that that's gotta be it.

But nothing is as important as permanently moving medicine forward to fix some of this stuff, so that I can finally... I mean, just on a personal level, I'm driven by the fact that, yeah, I get these emails, and right now all I can say to them is, we're working on it, like, hang on, you know? How satisfying is that?

Right? And so on alternate days I'm like, yeah, we've got to drive everything toward medicine. And then on sort of Tuesdays, Thursdays, and Saturdays, I think to myself, well, actually, none of that is going to be fundamentally changed until these really basic conceptual things are sorted out.

So no one, you know, no one can really move in that [00:29:00] direction. It's not enough for me and my lab to be working on some of these therapeutics and whatnot; everybody has to be empowered by it. And that means getting some of the very basic stuff sorted out. And so people write to me on that too. I get emails saying, stop doing all this philosophical crap and just, like, work on cancer and be done. And then there are the other people that say, you know, I'm seeing your papers about this kind of method and that kind of method, enough of that, those are details, you know, work on the deep philosophical issues.

Why are you spending so much time on this drug and that drug? So, uh, there's a wide range of opinions. I kind of go back and forth. I try to do some of both, I think, um, yeah. You know, in my group we have some of both going on. So yeah, I want to have a better understanding of embodied minds. I think that will drive medicine. I think it will drive a better set of ethics. [00:30:00] Eventually, I think it will drive, um, you know, human flourishing and so on. That's the long term.

Andrea Hiott: Yeah, and I think by being able to stay in that place, that crossroads, I, you know, it's not even just a matter of contradictions or different, uh, desires, you know. Uh, it can also be really hard to try to hold that space and just keep doing, uh, all of it, you know. I think this gets to this idea too of how we usually try to go into one discipline or one group or one something, because it gets very difficult, right, to try to be open to something that isn't, um, binary and dichotomous or just, like, very clearly defined, even though most of us who've explored anything deeply know that nothing is clearly defined. And actually, it's like, as you've expressed, it's very hard to find beginnings and ends to [00:31:00] anything, um, depending on where you look at it from.

So is some of the work just being able to kind of, um, keep yourself in a state of clarity so that, yeah, you can keep sort of trying to express this? Because sometimes it just takes time for people to be able to connect the dots.

Michael Levin: Yeah, much of it, I mean, much of it is just me, uh, trying to figure it out for myself, and, you know, like, on my YouTube channel there's a bunch of conversations with me and my coworkers and my various collaborators and so on.

And that's just us, uh, in real time trying to figure out better ways to think about this stuff. You know, this is very much, um, uncharted territory. It's very much an unknown, uh, yeah, you know, lots of unknowns here. And so a lot of it is for myself, and for us all, to kind of try to figure out what are good ways to think about this. And, you know, this idea of the spectrum, and the change, and all that, is really key.

[00:32:00] Oftentimes people just don't take developmental biology seriously at all. Um, you know, they'll say, well, what exactly is a human? Right, because, you know, at one point that human was a little oocyte, and so where do they say... well, let's say when the brain develops, but that takes months, so what was actually happening during that? You know, they sort of have this idea that these things just sort of pop into existence and then there you go. But actually, no, you've got to do the hard work of figuring out, if we're going to have a rigorous theory of whatever it is that we're talking about, you have to have a story to tell about when it shows up and why. And if you think it's binary, you have a really hard story to tell about how that phase transition happens, and so on.

Um, it's kind of a funny story. I was in court one time as an expert witness on something. That's intriguing. Yeah, some drug, drug effects [00:33:00] on birth defects, you know. And, uh, the opposing counsel was trying to, um, make it seem like, he wanted to discredit whatever I was going to say as irrelevant to human medicine.

Right. So he says, uh, Professor Levin, you work on frogs, right? And I said, well, among other things, yeah. And he says, uh, don't frogs come from an egg? And I said, and it just kind of popped out, I said, well, you came from an egg. And the judge was banging the thing, everybody's yelling, because they thought I was being rude to them, you know, and they go, oh my God.

And the judge, you know, and I said, no, no, no. I said, Your Honor, you don't understand, like, you too, and me, all of us came from an egg, like, think about it. And then it was just dead silence, and everybody was trying to visualize their seventh-grade sex ed, and like, wow, we did come from an egg. No one thinks about this.

You just sort of... That's a movie moment right there. I mean, it was incredible. I was like, wow, you know, out of all of these brilliant people, nobody remembers that, yeah, frogs and us [00:34:00] and everybody else, just about, we all come from one cell. And they have this notion of, you know, you're 18, you're an adult, like, bang, something happened, the clock ticked over, and now... I mean, we know that's a nonsense story, right?

It's, it's a continuum. It's always a continuum. 

Andrea Hiott: Yeah, but, uh, just to even have that realization is a pretty big thing for a lot of people in their life, I think, and also it's a hard thing. I mean, I think it's hard to realize you're not as solid as you think, or that everything is sort of continuous and there are no clear boundaries. These are hard things for us in the way that we currently think of ourselves, right? Because we kind of come to awareness of ourselves in this particular way, so we assume all these things. We don't think about all the many, many, many creatures that we are. Yeah. Do you think, I mean, I guess, do you see that in reactions to your work, too, that people just have an almost visceral reaction of it just being a difficult [00:35:00] thing to get a hold of, due to their sense of self, somehow?

Um, 

Michael Levin: Yeah, it's interesting. And I also, um, I collaborate with a bunch of folks who study, uh, in the Buddhist tradition and so on, and they, of course, have a different take on all of this. But yeah, I see two reactions, really. Some people are really sort of liberated by it and feel really good about it, because finally things make sense.

All these really hard pseudo-problems that were bothering them before, now it's like, oh, I see, if you don't try to make it binary, and if you have an observer-based, um, kind of perspective, then these things go away. There aren't these inconsistencies that are, you know, never going to be resolved.

Andrea Hiott: Do you think of, like, a practical example of that? I mean, um,

Michael Levin: Well, for example, um, you know, just this business of, and we can talk at length about this issue, but just this issue of the changing, uh, [00:36:00] of the changing self, and the fact that, if you think about it, you don't have access to your past. At any given moment, what you have access to are the engrams that were left for you by your past self.

And it's on you to interpret them and to construct memories and to figure out what you are and who you are. And you have to do this all the time. This isn't just, you know, what planaria do when you cut their head off; all of us are doing this all the time. And so, right.

Andrea Hiott: And just in different ways, according to who you're with and what situation you're in.

Michael Levin: Yeah. Yeah. And just realizing that that's okay. There is no need to try and formulate some, uh, sharp thing that, you know, I am this. Yeah, but you were five years old once, what happened to that being? And, you know, all these crazy questions that you can ask, but you don't need any of that.

You can just sort of realize that this is a progressive, um, kind of process. And it doesn't mean that you don't exist, and it doesn't mean that, you know, your mind is an illusion, these things people say, I mean, any of that. But it does mean that it's a, um, [00:37:00] construction, you know, kind of a construction process.

And then people start to think about stuff like, um, you know, because of all the AI stuff, uh, there are some people that are trying to develop proof-of-humanity certificates, right, so that you know it's actually a person. And so you think about that and you say, okay, what do I want? I'm designing a proof-of-humanity certificate.

What do I want to make sure of? Do I want to make sure that you have standard Homo sapiens DNA? I don't know about you, I could care less about your DNA. Do I want a certificate that says I've got all my original organs? I don't care about anybody's organs. If somebody had a bunch of stuff replaced with, uh, you know, new sort of cyborg stuff, I mean, great. Her liver's regenerating. Exactly, right, what do I care? And again, you get all this from sci-fi. People who have never read sci-fi find all this crazy, but in science fiction they did all this a hundred, you know, 150 years ago, it was all done. And so you think, no, I don't care about any of that.

And so then you ask yourself, so what does it mean to be a human, [00:38:00] actually? And people ask this too when I talk about, and of course I'm by far not the first person to talk about this, this idea of freedom of embodiment: that at some point, you know, you want to have tentacles and see infrared, knock yourself out, it's all going to be possible; you want to, you know, have a primary sense of the stock market and the solar weather instead of vision, like, sure, why not?

So all of this is going to be possible. And people say, oh my God, then we're going to lose our humanity. And you just ask, you know, was your humanity tied to your wacky set of organs that you ended up with, that are susceptible to all sorts of weird diseases and parasites?

Like, was that really it? Or your genes, which were the product of this sort of meandering search through genetic space? And then, so that leads you to ask, like, what is really critical here? And, you know, where I ended up was, and we have a couple of papers on this, this idea of compassion and cognitive light cones: this idea that, well, what you really want from a human, [00:39:00] to know that, if we're going to be stuck on Mars for the next, you know, 20 years, what I really want is for us to have an impedance match in our cognitive light cones.

I want us to be able to care about the same scope of things, you know, if, if you're a Roomba...

Andrea Hiott: That's great. Sorry, I just have to say that again. Yeah,

Michael Levin: It's, um...

Andrea Hiott: What is illuminated, sort of?

Michael Levin: It's, it's specifically, so I gave it a very specific definition: it's the scale of goals that you're able to pursue.

So if you're a bacterium, both spatial and temporal, right, the scale of your goals is very tiny. And if you're a dog, they're bigger, but they're still, you know, you're never going to care about what happens three weeks from now, you know, two towns over; it's just impossible. Whereas if you're a human, you might have planetary-scale goals about things that go on long after you die, and so on.

So there's cognitive... so we want to kind of impedance match on the kinds of things we care about, and you want at least, hopefully more, but at least a human level of compassion, to [00:40:00] literally being able to care about a certain number of, a certain size of, the welfare of other individuals, right?

And maybe there are beings out there who do way better at that than us. Um, but at least minimally that. Maybe we'll learn how to be those beings. That's exactly right. And we talk about, in this paper, uh, the team and I, um, talk about this idea of committing to enlarging your cognitive light cone and then enlarging your sphere of concern. Mm-hmm. And this is something that, of course, you know, in the Buddhist tradition they think about a lot.

Andrea Hiott: Oh, I love this so much. Gosh. Wow. I need to find that paper. I haven't read that one yet.

Michael Levin: No, I'll point you to it. Um, so, so some people, you know, sort of get this.

And it's clarifying for them. It makes them feel better. They don't need to worry about getting an implant, or changing, you know, or getting their DNA repaired or anything like that. They sort of feel better. Other people, oddly, I mean, I find it surprising, but other people are profoundly destabilized by all this stuff.

And it isn't just me, it's sort of like, there's this... and with the same group, we're writing some stuff about this. Um, there's a lot of, so, you know, if you're a thoughtful person and you read some physics and some neuroscience and stuff, you know, you find out that, uh, well, let's see, evolutionary theory tells us that there really is no, uh, morality.

It's all dog-eat-dog and survival of the genes. And the neuroscience, you know, you read, like, Sapolsky's book or something, and they tell you, well, there's no free will, so you're just sort of humming along on whatever was, you know, ordained, uh, whenever. Right. So you get all this stuff, and then here comes, you know, Levin, who tells you that you're a bunch of cells in a trench coat, and, you know, you're this thing that's sort of constantly, um, morphing and whatnot.

Again, I'm not a clinician or anything remotely close, but I get these emails that say, um, well, I know you're not a therapist, but... and then they go on with this stuff, and I'm like, wow, this is terrible, I really wish, you know, my goodness. Yeah. And then from all sides, you know, yeah. [00:42:00] Um, it's amazing, because people feel, I think, even though I don't talk about that stuff publicly, people feel like this impacts their life, that the answers to these questions are not just something, you know, crazy nerdy philosophers think about. This actually matters, right? People feel that, um, and some people get really destabilized by it. And I try to, yeah, I try to walk some folks, you know, sort of off of that ledge. And, uh, it's funny, even in the movie, you know, um, have you seen this movie Ex Machina? Have you seen that? Yeah, I have. So, you know, the scene where he's at the mirror and he's cutting his hand because he's worried that he's a robot too, right?

Have you seen that? Yeah, I have. So, you know, you know, the scene where he's at the mirror and he's cutting his hand because he's worried that he's a robot too, right? And so, and, and so, and this, so this is a, that's exactly right because, because many people think. If, if I look inside and I suddenly find a bunch of cogs and gears and whatever, then I'm not as amazing as I thought I was and I [00:43:00] don't have this and I can't do that or how about, how about the alternative interpretation, which is.

You didn't learn a thing about yourself. What you learned is something cool about cogs and gears. You just found out that, well, geez, I guess cogs and gears can give rise to my amazing self. Awesome. I'm now going to do whatever it was I was going to do 10 minutes ago. And so, and so, and, and people immediately, some people, I guess, immediately jumped to this, to the, to the first interpretation.

But this somehow takes something away from them. I think this is one of those things that Descartes actually got right. I mean, he catches a lot of flak from everybody nowadays, but I think he got this right, which is that your primary perception, your primary experience, is of being an agent that can go do things and that has certain properties.

Nothing can touch that. If you find out that you're made of this or that, delightful. Now you've learned something about whatever material you're made from. Go do whatever awesome thing you were going to do before you found this out. It's fine. It doesn't mean anything. So I try to kind of, you know, go through this.

So some people, so anyway, that's a long answer [00:44:00] to your question, but some people find it disruptive.

Andrea Hiott: No, this is great. It makes me think of so many things. Like, um, even in Buddhism sometimes, you know, they say that, uh, that too can be a very, um, jolting experience, if you have an experience of kind of non-self, or, you know, that a lot of this is just regularities. And, um, I mean, to get to this idea of navigation or waymaking or whatever, I mean, if we try to think of it as, like, trajectories, we've all come on kind of different trajectories with different regularities, and sometimes we cross paths with people who have very different ones, right?

I mean, as you're talking, I'm thinking of the television set and the caterpillar, and how, from very early on, this was a comfort zone for you. You were able to kind of deal with all those different, what for some people are very different things, like a machine, like in this TV kind of way, and a biological body, right?

I mean, um, but when you haven't thought about all that stuff and then you're sort of presented with it, I think it can just [00:45:00] be very jolting, but in a way that, if you can walk through it or sit with it for a minute, it can become enlightening too, in the way that you described with Ex Machina, right?

If you can kind of, some of it is this practice of learning how to shift, um, scales, or shift perspective, to look at life from the point of view of the caterpillar or the television, or whatever, right? Like, that's a kind of practice in itself, isn't it, that maybe we learn?

Michael Levin: Yeah, it is.

And you know, not to beat this, uh, you know, dead horse, but, uh, the sci-fi thing is really useful for that, because it helps you take the, you know... there was a story, um, called They're Made Out of Meat. You know this old science fiction story? Yeah, yeah, right, remember this? And yeah.

And these aliens find out that, you know, that humans are actually made of meat, and they're disgusted. They're like, this cannot be right, meat? Like, what the hell? Gross. And yeah, and here we are, you know, everybody's saying, oh, it's, you know, a silicon thing, it can't be... All of this stuff [00:46:00] is a matter of, um, perspective, until we really develop, and I think this is what the field of diverse intelligence is about, principled approaches to recognizing agency, to relating to other beings. And by the way, all of this goes back to our discussion about terminology. All of this is about, uh, I think, practical, uh, optimal relationships to systems, right?

So when I talk about the spectrum, and I say, well, you know, yes, human, but really, what human? Like a modern human, uh, you know, a child, a baby, a fetus, like, well, what are we talking about? And then people say, well, you can do that with anything, that's the paradox of the heap, right?

They say that that's not a good way of reasoning because you could do that with anything and you can say, well, I'm going to pile sand, you know, a pebble at a time. And eventually you've got this heap. So when do you have the pile? Right. And so, and so my answer to that is, oh, no, [00:47:00] it's extremely practical.

If you call me on the phone and you say, hey, um, I need you to come help me move a heap, here's what I need to know: am I bringing a shovel, a spoon, a bulldozer? Like, what am I bringing? And that's all, right? We don't need to have a philosophical discussion. We need to know what bag of tools is going to be the way to relate to this system.

I think that is exactly what we need to do in science, is to erase some of these barriers. Um, when I was a postdoc, I was having this idea of using, uh, expressing ion channels from neurons in other types of cells to manipulate the bioelectrics in embryos. I wrote some letters to neuroscience labs asking for ion channel plasmids, you know, and this one guy called my boss, my, uh, my postdoc mentor.

And he said, your postdoc has gone off his rocker, you need to do something about this, he's clearly crazy. Which was, [00:48:00] you know, great. But the fact is that actually none of the tools of neuroscience distinguish neurons from non-neurons.

You can use it all. All of the stuff works: the optogenetics works, the drugs that target the serotonergic machinery and ion channels, the multi-electrode arrays, and the conceptual stuff, active inference, you know, perceptual control, all of these things work outside the nervous system.

Great, they work very well. If the tools can't tell the difference, who are we to put up these barriers and say, that's the neuroscience department and that's the developmental biology department, and don't try to, you know, they're just completely different? I think it's totally artificial.

Andrea Hiott: Oh, it makes me, there's so much, it makes me a little too excited to try to say all this at once. But I think again it comes back to that agent base and scale: where are you measuring it from, right? If you're an ant, a sandpile is going to be different than if you're a human. And also, kind of like learning, if we take what we're talking about really seriously in a practical way, you [00:49:00] realize, and that's the scary thing, that you're not what you thought you were, this, like, only-human thing, right? You're experiencing that scale. But we develop all kinds of tools and measurements to measure at many different scales, whether it's, like, looking at the cellular scale or looking at the planets or whatever, and science fiction too.

We tell stories as ways of measuring from different perspectives and scales. This is obviously kind of what we're trying to do, and maybe there's a way to say, okay, now I'm at this human self-scale of my trajectory and narrative, but I can also look at the cellular scale or the planetary scale, or maybe put myself in another person's shoes, something like that.

Yeah, so there's all of that I would love to talk about, but I also want to get back to the science, and you mentioned the neurons, right? I studied neuroscience and this is really interesting, even to tie it to Descartes, because you brought up Descartes and the mental-physical divide, and how [00:50:00] we've come to think of the brain, that the cells in the brain are special because they're mental, because from our agent base that's how we see it, right? And it seems different than the cells elsewhere in the body. But I hear you saying actually they're all doing something similar, right?

It's just: where are we measuring it from, and what are we measuring?

Michael Levin: Yeah, well, two interesting things there. One is this issue of measurement. My feeling is, and this is just a conjecture, but my feeling is that the reason physicists see machines and not minds is because they're using low-agency tools.

So when you have voltmeters and rulers and things like this, you're using low-agency tools, and there needs to be some resonance between what you're using and what you're going to find, right? There's this old saying: show me the net you're using and I'll show you what fish you can catch. Yeah, the tools.

If you use low-agency tools, you find low-agency stuff. [00:51:00] What's a high-agency detector? Well, minds are. We're okay, you know, we have a lot of blind spots, but we can recognize at least certain kinds of agency. And so when you use a high-agency tool to look for things, well, guess what you find.

Right. So I think with measurement, you have to think about what your measurement tool is going to let you find. And then the thing about the cells and the neurons: I've been in discussions where we'll have a conference about this kind of stuff, and somebody will give a talk and they start out and they say, well, neurons.

And I sort of say, hold on, what's a neuron? And they say, yeah, come on, you know, neurons. But no, really, what's a neuron? And of course you don't do this in neuroanatomy 101, because there you need to just agree and move on with your life. But in meetings like this: well, what's a neuron?

And they say, okay, fine, and they'll write down four or five criteria for what a neuron is. And I'll say, yeah, but every cell in the body does that. [00:52:00] And they say, oh, well, that can't be, and then, no, actually, yeah. In fact, there's been some nice work on neuroscience in bacteria.

And they say, oh, well, then it's got to be the higher-level thing: they work together, they make networks to solve problems and to represent information. Like, yeah, pretty much every cell in your body is doing that, especially during development and regeneration.

So, mm-hmm.

Andrea Hiott: And it can all look a lot like cognition if you take it from the position of whatever agent is using it or is most involved in it.

Michael Levin: That's right. That's exactly right. And so I've been pushing this idea, which is, you know, not super popular by any means, but this idea that morphogenesis

is literally behavior in anatomical space, exactly like conventional behavior is in three-dimensional space, and that it's the outcome of a collective intelligence, in one case of neurons and in the other case of whatever other [00:53:00] body cells you might have; it's literally the same process. It shares molecular underpinnings.

We have a cool paper on that coming out soon, but also, algorithmically, it does a lot of the exact same stuff.

Andrea Hiott: Mm-hmm. It gets back to those patterns and regularities and how they scale and how they're never discontinuous. So if you can find the right tools to measure them, you can probably see their continuity from scale to scale.

But this gets at this idea of trying to understand cognition in a different way, which is what I'm really interested in, because I feel like that's one way we can start to get past the linear and look at a multidimensional, many-scaled kind of perspective of the world.

If we could understand that cognition is something that's happening at all levels and in all species, though that doesn't mean it's the same cognition or something. And I know [00:54:00] you use the term navigation, which I used to use, but in neuroscience, as you know, it almost always means that there's a goal.

Yeah. As opposed to waymaking or something. So I changed it to waymaking. But I wonder, when did you start thinking about cognition as navigation, and how does it relate to all the stuff we've been talking about?

Michael Levin: I like your point about waymaking as kind of a broader thing, because I agree, it doesn't always have to have a goal.

But, well, two things sort of coalesced for me; they came together right around 2018. One was that I love this idea of a latent morphospace, this idea that the shapes we see are particular regions in morphospace.

And this goes back to [00:55:00] even D'Arcy Thompson in his classic On Growth and Form. My favorite part, I mean the whole thing is fascinating, but my favorite part is there are four or five pages where he makes a grid, just a two-dimensional grid, and he draws some sort of an animal on it.

And then he deforms the grid with these mathematical deformations, and what you get is another kind of animal. This was the '20s, I think, or up until the '40s, something like that, when he was doing this stuff. And you look at this and you say, what the hell is that grid?

What is he actually deforming, physically, biologically? But what you realize is that any anatomical change is a move in this morphospace, and therefore the normal journey from egg to whatever is a set of deformations in this morphospace.
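(As a toy version of that coordinate-deformation idea, purely my own illustration and not anything from On Growth and Form or from Levin's lab: the "fish" outline and the particular shear and taper parameters below are made up. The point is only that the same outline, read off in a smoothly deformed grid, traces out a differently proportioned form.)

```python
import numpy as np

# A crude "fish" outline: points (x, y) on a flattened unit ellipse.
t = np.linspace(0, 2 * np.pi, 200)
body = np.column_stack([np.cos(t), 0.5 * np.sin(t)])

def deform(points, shear=0.8, taper=0.6):
    """A D'Arcy-Thompson-style grid deformation (toy version).

    Shears the grid horizontally and tapers it vertically, so the same
    outline, re-plotted in the new coordinates, looks like a different form.
    """
    x, y = points[:, 0], points[:, 1]
    x2 = x + shear * y                # shear the x-axis
    y2 = y * (1 + taper * x)          # taper: height varies along the body
    return np.column_stack([x2, y2])

new_form = deform(body)
print("original aspect ratio:",
      round(float(np.ptp(body[:, 1]) / np.ptp(body[:, 0])), 2))
print("deformed aspect ratio:",
      round(float(np.ptp(new_form[:, 1]) / np.ptp(new_form[:, 0])), 2))
```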

In fact, more recently, I've started to believe that development is really just an instance of regeneration. So, [00:56:00] anyway, I was already thinking about this idea that these are really all movements in morphospace. And then, in 2018, I was at a meeting, a diverse intelligence conference in Scotland, and Pranab Das sort of laid out this challenge. He said, I want people to think about

a framework in which you can put diverse intelligences, in which you can directly compare and contrast very diverse intelligences. People were thinking octopus and slime mold and things like that. And I heard that and I said, well, sure, conventional beings, but also swarms and aliens and AIs and weird synthetic shit that we make in the lab.

Let's really be inclusive and ask, what is the underlying invariant in [00:57:00] all of this? What is an agent? And then I came up with, well, my take on that, and of course other people have said things about this before, but I went with this cognitive light cone idea, and I said: whatever you're made of, however you got here, meaning evolution or engineering, the only way you're going to be an agent is if you have the ability to pursue some kind of goal.

And your set of goals will have some sort of envelope, the biggest goal you could possibly conceive of and follow. That doesn't mean that every action has to have a goal. You could be exploring, you could be just hanging out, that's fine. But as an agent, you have the ability at some point to pursue

some states that you prefer over other states; you have to have preferences, valence, and so on. And so then, well, first I drew it and I said, okay, so this is spatiotemporal, in three-dimensional space you have these goal states. And I thought, wait a minute, but this could also be a [00:58:00] transcriptional space.

It could be a physiological state space. And of course it could be anatomical morphospace. You have these goals of: I'm going to be a limb, and I need to get from here, where I'm a limb bud, to over here. And of course you have all of these. And so then I was like, ah, well, what all these things have in common is

waymaking. It's what you said. It's the ability to adaptively navigate these spaces to get to regions that are more pleasant for you, basically.
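(To make that "navigation of arbitrary spaces" picture concrete, here is a minimal Python sketch. It is only an illustration of the general idea, not Levin's model or code; the state space, the goal point, and the function name navigate are all hypothetical. An agent prefers states closer to a goal region and greedily works its way there; the space could just as well stand in for 3D position, physiological variables, gene expression, or anatomical morphospace.)

```python
import numpy as np

def navigate(state, goal, step=0.1, tol=1e-2, max_iters=1000):
    """Greedy waymaking in an abstract N-dimensional state space.

    The agent repeatedly probes small random moves and keeps whichever one
    brings it closer to the goal region: a toy stand-in for "preferring
    some states over other states".
    """
    state = np.asarray(state, dtype=float)
    goal = np.asarray(goal, dtype=float)
    rng = np.random.default_rng(0)
    path = [state.copy()]
    for _ in range(max_iters):
        if np.linalg.norm(goal - state) < tol:
            break  # reached the preferred region
        # Propose a few candidate moves; keep the best one if it improves things.
        proposals = state + step * rng.standard_normal((8, state.size))
        best = min(proposals, key=lambda p: np.linalg.norm(goal - p))
        if np.linalg.norm(goal - best) < np.linalg.norm(goal - state):
            state = best
        path.append(state.copy())
    return np.array(path)

# Example: a "limb bud" moving through a 5-dimensional anatomical state space.
trajectory = navigate(state=np.zeros(5), goal=np.ones(5))
print(len(trajectory), "steps; final distance:",
      round(float(np.linalg.norm(np.ones(5) - trajectory[-1])), 3))
```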

Andrea Hiott: Gosh, I love it. I have to dig in here a little bit. I'm not actually clear how yet, but I'm just going to talk.

But, so, to know you're an agent, you would have to have a goal, but maybe you could be an agent to someone else just by making way, right? Just making any kind of a way. So I'm trying to get at that distinction. We were talking about measurement before, and the word bothers me sometimes, maybe because of all the references and the trajectory and the history that comes [00:59:00] with measurement, and how it can sound a certain way to certain people. Maybe assessment is better or something. But I wonder how you see this relationship between the observer and

what it is to be an agent, and whether that relates to navigation. I mean, I also used navigation in my master's thesis because I think it expresses what I want to say about what cognition is. But yeah, we also just wander around. Any kind of life, even a tree or something, is making its way.

It doesn't necessarily mean moving from A to B or something like this, but...

Michael Levin: Well, I mean, in physiological space it is moving all the time, right? In three-dimensional space it's not moving, but in physiological space and also in transcriptional space, it is moving.

Andrea Hiott: Right. Depends how you look at it, exactly.

It depends on the spatiotemporal scale you're looking from. And

Michael Levin: which space, right? We're in love [01:00:00] with three-dimensional space because all our sense organs point outwards. Because of our eyes and everything, we love 3D space. But I think if we humans

had evolved with a primary sense of our own internal physiology, almost like a tongue but facing inward, so you know exactly what your blood chemistry is at all points, I think we would feel like we're living in a high-dimensional space, and that our liver and our kidneys are intelligent agents solving problems in that space, because we would be able to

feel them doing this. We see squirrels and stuff running around in 3D space, and we're sort of good at recognizing that. But we don't have those sense organs, so I think we don't have that. Yeah.

Andrea Hiott: And maybe we're trying to get there in a way with our tools and stuff. And also just how much can we handle, right?

How much can the system handle? Maybe it can't handle being so aware of all that. Maybe later we can. Yeah. But

Michael Levin: Yeah, maybe, I don't know. I mean, with all the work on augmentation... and I apologize, I have about four minutes before my next thing, but we can schedule [01:01:00] another chat.

Okay. Okay, mm-hmm. You know, with sensory augmentation and these weird prosthetics and everything, I'm not sure we know how much we can handle. I think we could probably handle a lot. And I fully agree with you that I don't think you need to know that you're an agent in order to be an agent.

I mean, that's a particularly advanced sort of metacognitive thing. So here's a super simple version of this, right? Bacteria. You've got a bacterium, and it can sense the sugar concentration, and that helps it go up the sugar concentration gradient, right? It loves sugar.

But now what if the sugar is poisoned somehow, right? That thing is going to fail. So what some bacteria have is a metacognitive loop where, instead of tracking the sugar concentration, they're tracking the output of their own metabolism. So the question there is: I don't know the details of what's going on, but how am I doing?

Right. And if the answer is, well, [01:02:00] yeah, the sugar sensor is firing gangbusters, but you're not doing very well, then the answer might be, well, then I need to get the hell out of here, something's not right. So this ability to... yeah, yeah, it's assessment, right?

Measurement doesn't mean somebody with a ruler who has a deep understanding of what the hell they're measuring. It's constant. And the ability of beings to measure internal states in addition to external states leads, I think, directly to this kind of metacognitive feedback loop, where eventually you can tell very complex stories about the agents around you and, oh, by the way, I'm an agent too.

And so on.
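(A rough sketch of that metacognitive loop, again just an illustration rather than anyone's actual model; the thresholds and function names below are invented. One policy chases the raw sugar signal; the metacognitive one watches the output of its own metabolism and leaves when that internal measure drops, even while the sugar sensor is firing gangbusters.)

```python
def naive_policy(sugar_signal, metabolic_output):
    # First-order sensing: chase the external sugar gradient, no matter what.
    return "move up the gradient" if sugar_signal > 0.5 else "tumble and explore"

def metacognitive_policy(sugar_signal, metabolic_output):
    # Second-order sensing: "how am I doing?" -- track the agent's own
    # metabolic output instead of only the raw external signal.
    if metabolic_output < 0.2:
        return "leave: something is wrong here"  # e.g., the sugar is poisoned
    return naive_policy(sugar_signal, metabolic_output)

# Poisoned-sugar scenario: strong external signal, failing metabolism.
print(naive_policy(sugar_signal=0.9, metabolic_output=0.05))          # keeps climbing
print(metacognitive_policy(sugar_signal=0.9, metabolic_output=0.05))  # gets out
```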

Andrea Hiott: So we can talk about that, where you actually have to expand into different spaces. Yeah. And, I really like that, we'll have to do it next time, but I want to think about this idea of regeneration too. Like, can you regenerate if you've never had that trajectory? We'll have to talk about it next time.

But if you looked at the planarian or something, could it regenerate without sharing the regularities of whatever is being regenerated? I mean, there's also some weird constraint there, isn't there?

Michael Levin: Kind of. Kind of. I've got a couple of cool stories about that to tell you. So we can talk about xenobot regeneration, which is relevant because they have never existed before.

Right. Yeah. That's so cool. Yeah. Yeah. And, 

Andrea Hiott: Even that they're kind of... yeah, we'll have to talk about it.

Michael Levin: Thank you. Thank you so much. It's great to meet you. Let's talk again. And I really want to hear more about your work and what you're up to.

Andrea Hiott: Thanks so much. I appreciate your time. Yeah.

Okay. I'll see you soon. Love with everything. See ya. Thanks. Yep. See ya. Bye.
