Love and Philosophy

Empowerment in Robotics: Solo Brainstorm & AI Bonus Conversation with Dari Trendafilov

Beyond Dichotomy | Andrea Hiott Episode 49

Send a love message

 Decoding the Empowerment Measurement in AI and Robotics with Dari Trendafilov. Dari has a PhD in Computing Science from the University of Glasgow, UK. His research interests are situated at the intersection of Artificial Intelligence, Robotics and Human–Computer Interaction. He specialized in probabilistic information-theoretic modelling of complex systems and analysis of computational and interactive cognitive systems in the context of social and human–robot interaction. Towards his aim of establishing the fundamental information processing principles driving decision-making in living organisms, he has developed information-theoretic models and tools for the study of human sensorimotor dynamics, robotic and simulated systems, based on behavioural and physiological sensing and analysis.

In this episode, Andrea and Dari explore the concept of empowerment in the context of artificial intelligence and robotics. The discussion covers Claude Shannon's information theory, intrinsic and extrinsic motivations, and the application of these theories in human-computer interaction and swarm robotics. Dari shares insights from his research on swarm intelligence and the use of evolutionary algorithms for collective decision-making. The episode also touches on the broader implications of modeling intelligence and the dynamic interaction between agents and their environments.

00:00 Welcome to Love and Philosophy
00:11 Understanding Empowerment and Information Theory
01:41 Empowerment in Artificial Intelligence
04:43 Robotics and Human Interaction
06:56 Exploring the Concept of Empowerment
19:29 Swarm Robotics and Collective Intelligence
33:59 Intrinsic vs Extrinsic Motivation
40:36 Modeling Nature Through Robotics
42:38 The Journey to Empowerment Research
43:28 Challenges in Human-Computer Interaction
44:04 Interdisciplinary Approaches to Usability
44:50 Usability Engineering and Market Demands
45:30 Formal Models and Theories in HCI
47:20 Understanding Empowerment in HCI
51:01 The Role of Affordances
52:33 Introduction to Empowerment
53:07 Empowerment in Practice
53:33 Empowerment as a Measure
01:00:56 Applications and Implications of Empowerment
01:08:11 Swarm Robotics and Collective Intelligence
01:14:16 Modeling Intelligence and Future Directions

Support the show

Please rate and review with love.
YouTube, Facebook, Instagram, Substack.

Solo Brainstorm then discussing Empowerment in Robotics with Dari Trendafilov

Andrea Hiott: [00:00:00] Hello everyone, welcome to Love and Philosophy. This is the third post in one week. I had these conversations many months ago, but this is how they all finally come together. I've been trying to understand what empowerment is, and I guess I mean that in an everyday sense, but I also mean it in the sense of algorithms, artificial intelligence, Shannon information theory, measurement. If you have no idea what I'm talking about, don't worry. I didn't really understand this either, not too long ago, and I still don't, which is why I'm doing this podcast, to try to understand it. Claude Shannon and information theory are at the heart of pretty much every device you're near right now, whatever you're listening to this on, your computers, your phones. Everything has to do with this digital computing, which deals with a way of transferring information. It's pretty much that process of my [00:01:00] voice coming into your world right now. Claude Shannon made that possible; he reconsidered the way that we might change one form of energy or communication into another. I would have a look at the diagram: if you just Google Claude Shannon information theory, you'll see this wonderful little diagram of a receiver.

And the process of encoding and decoding to get to the receiver. 
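For anyone who doesn't want to search for it, the figure being described is usually drawn roughly like this. This is just a sketch of the standard schematic from Shannon's 1948 paper, not an image from the episode:

```
information source -> transmitter (encoder) -> channel -> receiver (decoder) -> destination
                                                  ^
                                                  |
                                             noise source
```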

This is a conversation of me trying to learn about empowerment with someone who works in robotics, Dari Trendafilov, and he is trying to help me understand it, and we talk about how it relates to a lot of the other themes I've talked about on this show, like affordances, because Gibson is a big inspiration there. Empowerment is looking at this quantity that's information based, so as to characterize how efficient the perception [00:02:00] action loop is between the organism and the environment. But in this case, it's not an organism, it's a robot. So the robot and its environment. So the real sticky hard question is: what's the difference between an organism, or a living body, and a robot?

But it is important that we realize we're all using words in different ways. Information, empowerment, entropy, inference, belief: all these are really important words, and they mean really different things depending on which world you're using them in.

I'm bringing it up here because it's part of what this conversation is. It's trying to understand what I feel like gets really, really blurry when we're working in technology. For example, Dari, who talks with me here, is working in robotics, and I'm very interested in robotics, and the empowerment measurement is something that he's working with as he works with swarming robotics. So that's fascinating. But as you'll hear in the conversation, we quite often [00:03:00] switch between talking about humans, and these words as they're used for humans, and the robot, which are really different things, actually. I mean, empowerment is a measurement in a very literal sense, which I'm trying to understand in the way it's used in, um, developing artificial intelligence.

Empowerment in the field of artificial intelligence is formalizing and quantifying the potential an agent perceives that it has to influence its environment. So it's a measurement of channel capacity: of how much the robot, for example, perceives it can influence its environment. It's the robot's intrinsic parameterization relative to its encounter. It's a utility function that depends on the information gathered from the encounter.

So, how can it gather information from its encounter so as to increase its control within that [00:04:00] interaction? I think that's kind of what empowerment is measuring. So it helps develop patterns so as to increase control. So it's rewarding what we would call intrinsic behavior, which means the robot will inherently be rewarded for exploration and curiosity, because the more of its environment that it encounters, the more it's going to increase its ability to control those kinds of regularities. This is really similar to the way psychologists talk about intrinsic motivation and empowerment too, because that also has to do with the body, our ongoing living activity, and that we are doing things to satisfy our internal desire for control, or wanting to have control. But intrinsic motivation could also just be something like hunger, and wanting to satisfy your hunger. So perhaps the more you can explore the world, the more likely you are [00:05:00] to find something that will be edible.
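To pin down what "measurement of channel capacity" means here: in the empowerment literature the quantity is usually written as the channel capacity from the agent's actions to its subsequent sensor states. This is the standard textbook formulation in my own notation, not a formula quoted from the episode:

$$\mathfrak{E}(s) \;=\; \max_{p(a)} I(A;\, S' \mid s) \;=\; \max_{p(a)} \sum_{a,\,s'} p(a)\, p(s' \mid s, a)\, \log_2 \frac{p(s' \mid s, a)}{\sum_{\tilde a} p(\tilde a)\, p(s' \mid s, \tilde a)}$$

It is measured in bits: the more reliably the agent's choice of action makes a perceivable difference to what it senses next, the higher its empowerment.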

Either way, empowerment is measuring the potential of this agent, this robot or this organism, to align with the environment, such that the environment itself becomes a way for it to actualize itself. It's almost imprinting itself in the environment, and the environment in itself, through this perception action loop of interaction.

And I think that perception action loop is the information. And the technologies are the translators of that perception action loop. I think information is really a translation process. It's not a thing. We tend to think of information as a thing that's being moved through the world. But I actually have come to think of it as a translation. The process of translation, which is the movement from, for example, my voice right now into your ear.

So there's not one thing that's moving from me to you right now, but there's a process of translation that's [00:06:00] happening from me to you right now, and that is information, from my point of view. That's not the point of view of information theory as it stands right now, and that's what I want to understand better and discuss.

A lot of Bayesian approaches are doing an inverse. They're working in ways that you can infer the so-called internal structure, or maybe the way that we, as living beings, are coming to align with or increase our control of the encounter, so to speak. You can kind of infer this from the data that the external world generates. So it's going kind of backwards: you're starting to infer the structure of the external world from the data of that world. So it can go both ways, I guess, in empowerment. You can understand how life, the living body, is controlling the world, and by controlling the world I mean [00:07:00] increasing its ability to structure it. And you can also understand how the world is controlling the body, or the mind, which is to say, structuring it.

So I'm really interested in the way patterns and regularities of the world might empower those of the body, and vice versa: the body is also empowering particular regularities that are experienced in the world. And this might be something that empowerment can help us to better understand at the individual level, but also at the levels of robots swarming, or birds swarming, or humans acting in crowds.

In any case, empowerment, as it's used by people like Dari here, is an information-theoretic quantity that's usually motivated by some desire to make the sensory motor loop more efficient between the agent and the [00:08:00] environment. We sometimes confuse this and say it's the robot that's trying to become more efficient, but really it's whatever that robot's affordances point to, which is the human who has created it, or who wants it to do something.

It's really for those; that's really where the empowerment is. It's not in the robot. But the empowerment measures how well that's happening via this technology, this agent-environment system, and these stochastic transitions. How much control, then, does that technology, that agent, that robot have over the environment that it's being tested within? Two words that come up a lot are controllability, which is the influence on the environment, and observability, which is the measurement by the sensors.

And that controllability and observability have to do with this ongoing encounter, what we call the environment. You can't take those things away from each other, so that's the interaction: this ongoing interaction, sensory motor loop, between the body and the environment. And we substitute the robot [00:09:00] into that, and we discuss it as if it's the robot's sensory motor loop.

But actually that robot is an extension of the scientist or the engineer or whomever is focused on it, or whoever has created it. It's actually an extension of their sensory motor loop. I think that's what gets confused. So we're actually studying the empowerment of the scientist or the researcher, not the robot, but the measurements that are being taken are also called the empowerment measurements, because that robot is itself part of our sensory motor loop as the researcher. So we've done something very similar to what happens when we talk into a telephone and our voice is translated. We've just sort of talked into this robot, so to speak, with our controllability of the environment. We've sensed that into the robot, or somehow tried to translate that into this robotic model, [00:10:00] represented it there, so that we can then look at that sensory motor loop from a different point of view and translate it, hopefully, into the environment. Who's the receiver of that? What's the destination? In this case, it's the same researcher. Or, at least, the field of study that the researcher is studying.

So it might be, for example, if it's within a Nokia phone, as Dari says, then the Nokia company and the researcher have extended their affordances into this robot so as to see how that robot might increase the controllability of the sensory motor loop in that environment, such that it will afford them better, because it will mean they can give their customers a better utility function.

It's also important that some of the studies have found this only works when you have a very particular, recognizable semantics. So that also speaks to this idea that the robot is already an [00:11:00] extension of the researcher or the research project, so the affordances aren't the robot's affordances. They're the affordances of the researcher, or of this field of study.

It only makes sense as long as you have set that parameter, that landscape of potentials. So that's why it's so confusing, and also so fitting, to use these words like intrinsic motivation and empowerment. Because in fact, in a way that we aren't always very clear on, it actually is about the empowerment of the research project or the researcher or the field of study. And it is intrinsic motivation.

It's just that we've represented it in the robot, and we're now measuring it with very particular kinds of maths and algorithms that we call empowerment and intrinsic motivation. It's a bit of the doubling that Thomas and I talk about, which doesn't mean that we've doubled life. We're not a double body or a double person, but we've done this heuristic of doubling so as to better understand ourselves, or understand this process, [00:12:00] this looping between ourselves and our encounter.

But it's a very limited one, because we've had to set this very limited scope of semantics so as to be able to study it. So it's a very limited and controlled representation of one part of our sensory motor loop with the environment. But in doing that we can understand it better, in the same way that in writing something down we might understand it better, or in drawing a diagram. It's just that we're now able to do that in a more complicated way, for example through the creation of a robot. So we say it's about the robot's potential ability to influence the environment through this capacity to imprint information into that environment and then perceive it through its sensors. And all of that is an external representation, like a little model of our own sensory motor loop. But it's also part of that sensory motor loop, so we've literally sort of extended it and created this little [00:13:00] model. But the voice, so to speak, like when the voice goes into the telephone: in this case we're still putting the voice into the robot. And by voice I really just mean whatever it is that we're trying to understand; we are the sender of that into the robot. But the affordances always point back to us as the researcher, or us as the company, or whoever, wherever you want to put the scale. And that's what often gets confused, I think, and what I was trying to better understand with Tomas in the conversation about the body.

And it also gets to this very essential question, which I think is the real question at the heart of this, which is, again: what's the difference between the living body, or the body, and the material technology or machine, the phone or the artificial intelligence or the computer or whatever you want to call it? And I think that the difference is in the affordances. I talked about this with Karl Friston too, and he seemed to agree. We talked about the thermostat, for example, [00:14:00] and that the affordances of the thermostat are not its affordances; they're the affordances of whomever might be using that thermostat in order to control their own sensory motor loop.

So, I've been thinking about Boolean logic and George Boole, who had this idea that we might be able to put things into a kind of binary logic that would help us understand thought, that would actually give us a kind of math of thought. And I think that's very interesting, because what we often think is that we're trying to solve what thought is, but in fact what we're doing is representing thought externally, so as to create ways to better understand the process that is thought. So it's not that we've created something that thinks; instead we've represented our own thinking with something, and the affordances always point back to us. So take Boolean logic and Shannon information theory, and also Shannon's early paper, the master's thesis he did, where he basically showed [00:15:00] how Boolean logic could be built into circuits, the basis of the binary digital computing we are all familiar with. We often think of that as if we're getting a computer to think, but what we've really done, and it's a remarkable thing to have done, um, is we've found a way to represent this ongoing process by which we come into different states of life, what we call thinking, and we found a way to represent that in the most simple form.

So as to understand what is redundant and what is not. And in so doing, we're increasing our efficiency for further translation and communication. But we haven't actually created technology that thinks. We've created technology that represents our thinking. That is a really, really different thing.

And I actually think that once we understand that the affordances of that representation or that model, that robot, actually point to the researcher and the specific semantics of the research interest, which are the researcher's affordances, then it actually starts to make sense. The robot is not the one that has the affordances. It's an extension of some sort of research semantic program. Therefore, the affordances point to those bodies, that researcher or that group of researchers. I go into that more in my research diary, trying to understand it. For now, I'll just leave it, and for those of you who want to try to learn with me, this is just a conversation I'm sharing with you, with someone who was generous and tried to open up the world of robotics and the measurement of empowerment with me.

I'm trying to find a happy balance here between people who've never heard of such things, people like me who are trying to learn about it, and then others of you out there who are already experts. If you want to comment on the YouTube video of this, or the YouTube audio of this, and share [00:17:00] some points in a friendly way that might help us all, you're welcome to do that.

I would love it.

So, here we go.

Dari: Hi.

Andrea Hiott: Hi, Dari.

 Thanks for being here. It's great to talk to you today. 

Dari: Thank you, Andrea. Thank you for inviting me. It's a pleasure for me to be here.

Andrea Hiott: Ah, good. So we met because of a conference in France about complex systems, and after the conference you actually brought up this wonderful idea of empowerment, which was new to me at the time, and which we're going to talk about in terms of reinforcement learning and machine learning and so on. But to start, first of all, maybe we could just hear a little bit about you. You work in swarm robotics and intelligence; would you like to tell us a little bit about your scope, in general?

Dari: Yes. Um, I currently work indeed in the field of swarm robotics and artificial intelligence, and more precisely in applying the empowerment [00:18:00] measure to different contexts and scenarios in the field of swarm robotics. For example, one scenario in this domain is collective perception, another scenario is a site selection task, and there are further scenarios. So we try to apply the empowerment measure to provide generic predictions for swarm performance and to enable the design of different decision making controllers for the robotic swarms, so they operate in a decentralized and autonomous manner.

So this is currently the goal of my research.

Andrea Hiott: That's great. I just wanted to set it up so people know we're coming at this from that background, in a way, because empowerment can mean so much to so many different people, right?

Dari: Yes. Yes.

Andrea Hiott: Yeah, I know. Especially listening to this.

Dari: The term is actually quite overloaded.

As I noticed yesterday, I gave another talk here, at the Sorbonne, and actually this question popped [00:19:00] up exactly after the talk: wait a minute, empowerment means so many different things. And we are just using it as a very narrow technical term. It's like a mathematical function, and it's probably misleading, because in different contexts it could mean different things. Of course it's used in sociology, in the humanities as well, in politics as well, and people tend to have slightly different meanings. But here, as I will talk about it, it will have a purely well defined, um, mathematical content.

Andrea Hiott: yeah. It's kind of a sexy word. It kind of draws you in and it can kind of mean a lot of things and actually the way it's used.

You sent me a bunch of papers to sort of help me get an overview, and there's no way we'll get through them all today. But as I was looking through these and trying to get my head around all of this, coming from a more philosophical background: so, in your work, are you working with [00:20:00] multi agent reinforcement learning, mostly, or?

Dari: Well, yes and no. Um, so currently, yes, we're involved in swarm robotic studies. So this is kind of a multi agent environment. So, yes.

Andrea Hiott: But that's one form of reinforcement learning. And then there's also reinforcement learning that's just with a single agent.

Dari: Yes, yes.

Andrea Hiott: That's a more kind of normal way that it's used.

Dari: Well, yeah, that's the standard way, how it began and how it was invented. Um, but it was then applied to multiple agents, and the more agents you get into the scene, the more complex it gets.

So in a sense, in swarm robotics, and I'll get back again to that field that I'm working on currently, the difficulty is to enable multiple agents to perform dynamic tasks and operations simultaneously without any guidance from a [00:21:00] central control system, and to self organize to perform the task and achieve the goal.

So it is a completely decentralized multi agent system, and in this case it is a collective intelligence task. That term is also very widely used, collective intelligence, because all agents are intelligent and they perform, um, and behave independently from everybody else.

But still, they communicate, and they are driven by a goal at the end of the day. So, yes, like humans: we also have collective intelligence, right, when we do things together. And, um, we try to emulate this in robotic agents, too.

Andrea Hiott: I think so, but some people would, you know... that [00:22:00] whole idea of collective intelligence, and what intelligence even is, you know, you can go down many rabbit holes. But just for the sake of this: an agent, in your case, do you mean an actual kind of robot, an independent robot?

Dari: So it's one independent robot. So if we, if we take the case of bees, for example, it's one bee. So it's we, we also try to emulate nature. So we learn from bees and ants, um, and, and they are also swarms of independent agents. Yes.

Andrea Hiott: And when you say the environment, that's just everything that's not included in what the agent is? Does that include all the other agents, for each individual agent?

Dari: It could be, yeah, that's a very good question. So the environment can be anything outside the swarm, or, depending how you look at your model of the world, it could also be, if you look from the viewpoint of one single agent, that everything else, including all the other agents, is considered as environment. But it could be that only everything else except the swarm is the environment. So you see the difference.

Andrea Hiott: So it depends where you put the scale.

Dari: Depending where you put the view, yeah, the viewpoint.

Andrea Hiott: Okay, and so that'll matter for if learning is happening or not, right? Depending on where you set the scale and what's learning. Is the swarm learning or is the agent, individual agent learning, right? 

Dari: Exactly. Oh, yes, exactly. Very good point. So that is exactly what, what is the difference because if you set the the model to to be a single agent, then you you learn the policy or the behavior of a single agent.

But if you set it for a swarm, then you optimize the behavior of the whole swarm at once. Yeah, so that's exactly 

Andrea Hiott: And so then, by learning, right? So we've got this agent, we've got some kind of environment. It's going to depend where we set the scale, but we can definitely set the scale where the agent is, where the environment is, in terms of actual math and everything. But then the learning itself, how would you describe that?

Dari: So the [00:24:00] learning process is, of course, driven by some task. So there is a goal for the swarm, and therefore for the individual robots, to perform something collectively. One simple example is collective perception, where a swarm of agents explores a field, a site; you can imagine it can be an agricultural field or it can be a minefield, and the agents try to count the number of the bugs in the agricultural field, or the number of mines in the minefield, to discover them.

So they are exploring the two dimensional environment, the field on the ground, simultaneously, and deciding the quality of that field: is it very damaged, or is it good quality, for example. So this is just a very, very abstract [00:25:00] example. But the point here, and that's what's important in swarm robotics, is that a single agent is not able to cover the whole field, or it would take a very long time, and it would not last long enough to detect the quality of the field on its own.

So that's why a group of agents can do this faster, cooperatively. But they have to communicate, and they have to share their knowledge, and therefore they need a sort of decision making mechanism for how they exchange opinions by communicating with each other locally, and then arrive at a consensus, at a conclusion at the end of the day, and solve the problem and decide that the site is badly damaged, or the site is good enough.

Um, so whether there is something to work on or to perform on this site.

Andrea Hiott: Okay, I'm going to stop you there just for a second, because this is where it gets kind of confusing for me. So if I was thinking of something like rats, right, which is used a [00:26:00] lot in neuroscience. Yes. And of course, you know, in the literature the rat needs to explore the space, and all of this, relative to how it's going to solve the task and so on, and there's a goal, and all of this, and it sounds a little bit like that, right?

Like they have to explore the space and, and there's a goal, but it's not quite like that, right? Because. What, you've programmed them in a certain way, can you help me understand kind of what you've given these agents such that they then are going to kind of do something new with it? 

Dari: So in this particular example that I mentioned, collective perception, the agent or the robot has a simple sensor for detecting the ground, the quality of the soil or the ground, the floor.

And that's what they can detect. They can move freely. They can randomly explore the environment and then they just read the quality of the ground underneath. 

Andrea Hiott: So it might be like a little camera. It's local.

Dari: Yeah, [00:27:00] it's a little camera. Yeah, exactly. It's a sensor.

Andrea Hiott: So it's trained on images of what it's going to see with that camera?

Dari: Yes, it collects the data from this camera, from the images, and by collecting the data it forms an opinion of the quality of the site. And it changes that opinion because, as it moves, it finds different patterns of different quality around it as it walks in the field. But it also changes its opinion by sharing information, through communication with its close neighbors. So if the close neighbors are telling each other that the quality of this field is very low, then they kind of persuade each other. They convince each other that the quality is low, and that's driving the consensus convergence of the swarm. But nevertheless, the exploration is driven by this sensor, which is deriving the images of the field. But [00:28:00] because this is local, they cannot see the whole field at once. They can only explore a little piece of it at a time, and they have to move around to explore other pieces of it.

So this would take forever for a single agent. So by communicating and exploring at the same time, they share information, and in a way they collectively integrate or aggregate that information.

Andrea Hiott: They're almost like, we were talking about scales before, so they almost become sensors themselves, for the larger collective.

Dari: Yes, yes, exactly. Intelligent sensors, yes.

Andrea Hiott: But you've, you've put, you've input things in before, they've been trained on many, many different images that they could encounter? 

Dari: So in this case, because this is quite a simple example, there are actually no real images for training, because the quality of the site is, um, binary. It's black or white. But what is learned here, and what [00:29:00] is trained, is the decision making mechanism.

So how they decide when to change their opinions, and how they aggregate that information. In our case, this is trained using evolutionary algorithms, and what evolves is actually a neural network. So this is a neural network, um, which is a black box, as you also know, which drives the decision making control mechanism for the individual robot.

And that, put together on a swarm scale, drives the self organizing behavior that will collectively solve the problem. So here the learning is how to share and how to exchange information and change each other's opinions. This is what is automatically generated. And since we train that with evolutionary techniques, as I mentioned, with machine learning, we don't know the actual mechanism, because, as I mentioned, it's a black box.
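To make the idea of an evolved, black-box decision controller a little more concrete, here is a deliberately tiny Python sketch. It is not Dari's code or task setup; the network shape, the (1+lambda) loop, and the fitness function are all invented for illustration of the general technique he describes, evolving a neural opinion-update rule for a binary collective perception task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the collective-perception task described above: the site is mostly
# "white" (good) or mostly "black" (damaged), robots sample local cells, swap opinions
# with a random neighbour, and a tiny neural network (the evolved "black box") decides
# how each robot updates its opinion.

N_ROBOTS, N_STEPS, TRUE_QUALITY = 20, 200, 0.7   # 70% of cells read as "white"

def controller(params, own_sample, neighbour_opinion, own_opinion):
    """3-input, 1-output network: returns the robot's new opinion in [0, 1]."""
    w1, b1 = params[:12].reshape(3, 4), params[12:16]
    w2, b2 = params[16:20], params[20]
    h = np.tanh(np.array([own_sample, neighbour_opinion, own_opinion]) @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))          # sigmoid output

def fitness(params):
    """One noisy simulated trial; reward swarms whose consensus matches the true quality."""
    opinions = np.full(N_ROBOTS, 0.5)
    for _ in range(N_STEPS):
        samples = (rng.random(N_ROBOTS) < TRUE_QUALITY).astype(float)  # local ground readings
        partners = rng.integers(0, N_ROBOTS, N_ROBOTS)                 # one random neighbour each
        opinions = np.array([controller(params, samples[i], opinions[partners[i]], opinions[i])
                             for i in range(N_ROBOTS)])
    # negative distance between the swarm's mean opinion and the true quality of the site
    return -abs(opinions.mean() - TRUE_QUALITY)

# A minimal (1+lambda) evolutionary loop: mutate the current best weights, keep any improvement.
n_params = 21
best = rng.normal(0.0, 0.5, n_params)
best_fit = fitness(best)
for generation in range(50):
    children = best + rng.normal(0.0, 0.1, (8, n_params))
    fits = [fitness(c) for c in children]
    if max(fits) > best_fit:
        best, best_fit = children[int(np.argmax(fits))], max(fits)
print("best fitness:", best_fit)
```

The evolved weight vector plays the role of the black box: it is installed identically on every robot, and the swarm-level consensus behaviour emerges from many robots running it locally.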

Andrea Hiott: Okay, so...

Dari: Does that answer your question?

Andrea Hiott: Yeah, yeah, it's great. And when you just...

Dari: So it's not standard machine learning as you thought it was, I think, but it is the same thing, essentially.

Andrea Hiott: Well, what gets confusing is sometimes there is no robot, right?

And so, yes, then it's just kind of an algorithm that is the agent.

Dari: Yes. Yeah. Then let me elaborate on this, because probably that would be useful.

So here we are talking about the behavior of the individual agents, the individual robots. There are real robots that we are using for this. Using evolutionary algorithms, we develop the neural network that we install into the agent. So the agent has an algorithm, which decides what the agent does when it reads black or white.

And when it reads from the communication with the neighbors, when it again reads black or white, how does it integrate that information? So this is the neural network, or the algorithm, that is evolved. And that is the black box in this case. So that is the decision making mechanism only.

Andrea Hiott: So I'll say something and then you correct me, right? Okay. In this very general example, and I'm generalizing: the swarm is a lot of different agents trying to come to a conclusion about the space, and they're going to do that through communicating. Is the problem of overfitting that each individual agent will just sort of fit their own algorithm to the algorithm they're communicating with? Or, like, what's this kind of... I'm trying to understand why something like intrinsic versus extrinsic [00:32:00] motivation, you know, to get towards empowerment.

Dari: Yes. Okay. Well, when we talk about intrinsic motivation, we mean some drivers that are inherent to the agent itself. For example, um, we have, typically, homeostasis; it's one biological metaphor for that, because this is not driven from the environment, from outside, but is internal, necessary for the survival of the organism.

And similarly, in robotic systems, we try to apply this same principle of intrinsic motivation that is independent from the external world. Any driver that is coming from outside, that is extrinsic. So when you have something, yeah, that is driving you, so for example, you can think about hunger. It's an intrinsic [00:33:00] motivation, so this is entirely, um, within a single organism. So, that process...

Andrea Hiott: You see, but now we've flipped to the biological.

Dari: Yeah, yes, I'm sorry.

Andrea Hiott: No, no, it's not your fault, but that's, uh, the papers do it, everyone does it, and I understand these things in real life, you know, or even in philosophy or neuroscience or biology, the way they're used there. But I don't know if I understand this correctly, so maybe you can go into the history a bit. I seem to think that machine learning, and all of this in terms of robotics, is sort of developed on an extrinsic basis, or with concern about the external, like there's a goal and you have to reach it, and so it doesn't matter that much about something like the equivalent of hunger, right?

But there were some problems with that, in terms of maybe the swarm intelligence not working as well, or I don't know what. But intrinsic motivation, which is basically modeling something like hunger, or the robot itself having its own kind of algorithms that apply only to it and that only it can access, kind of, [00:34:00] through the sensors, this is something different. But I don't really understand it yet.

Dari: Yes. Yes. Um, yes. I will say this right now, because, yeah, I understand the point; we were talking about a few different things at the same time now. But to make it clear: when we talk about intrinsic motivation, this is one of the, um, motivations and the driving force for empowerment.

So, because we have read the papers about empowerment, you have seen intrinsic motivation there many times, hopefully, and empowerment is such a driver. Yeah, let me just explain. Um, because when you apply empowerment to guide an agent or robot or whatever organism, you are trying to maximize the influence of that agent over the environment.

And that is something inherent, or internal, and that is why empowerment is an [00:35:00] intrinsic motivation drive: because it forces the agent to maximize its influence over the environment by applying actions and perceiving the effect of those actions. So that is what empowerment actually measures, and by maximizing this level of control over the environment, the agent places itself in a better position to survive. So that is not driven by the environment or by external forces. This is driven purely by internal forces for the agent.

Andrea Hiott: So it's how much one robot can influence its own environment, or how much agency that robot has on that environment, would it be?

Dari: Yes, yes. So that is the drive for the agent itself, and as empowerment is defined, it is purely measuring the level of control [00:36:00] an agent has over its environment. And if the agent tries to maximize its level of control, tries to maximize its empowerment, then it is driven purely by intrinsic motivation.

So that is why intrinsic motivation is mentioned so much in those papers. So it's not related to machine learning. We have to separate this now.

Andrea Hiott: Okay, that's very good. And how is that different from before there was such a thing, or if you don't use intrinsic motivation? It has to do with the goal, I guess, or I don't know what.

Dari: Yeah, yeah, well, yes. So the goal is something external to the agent. And if the agent needs to, um, set targets and achieve goals, then there is usually a reward function, and that is used in reinforcement learning to drive the process.

Um, but there are different perspectives on this in machine learning too. The [00:37:00] important thing is that when you try to look at the natural drives in nature and you try to mimic them, as I already mentioned, bees and ants and other organisms, you know, they're really very well studied nowadays, especially the simple organisms, which are easier to study, and principles from their intrinsic motivation and from their biophysical processes are being implemented in machines and in robots.

So that is why intrinsic motivation is gaining so much um, traction also in research and in, in, in robotics, but so there are different different concepts and there are different applications for intrinsic and extrinsic motivation. So yeah, we, we should not mix them. 

Yeah, I just wanted to say that intrinsic motivation has been, um, evolving over time as a way to mimic nature. Because people, [00:38:00] humans, animals, microorganisms, they all have intrinsic motivation for survival, and that is a basic principle, um, that is driving behavior, from foraging and feeding

to mating, to, to everything. And that is the principle to reproduce and survive. And, and those are purely intrinsic motivation processes. So, so that is something behind many robotics achievements nowadays as well. But yeah, I'll stop here now. 

Andrea Hiott: And it's sort of independent of the task too, right?

So, there's something very interesting, I mean. Can we kind of think about the big picture of why robotics is so important and when you're making these swarm robotics and you're studying intelligence through robotics, do you think of yourself as sort of modeling what's going on in nature so as to try to understand nature?

Or is there any kind of orientation like that for you? 

Dari: [00:39:00] Both. Yeah, yeah. A very good question, by the way. Thank you. I should have raised that topic as well.

As we try to understand nature and we try to replicate it in robots and in machines, by creating these theoretical or artificial models which we implement in machines, we can also reverse the trend and understand better what nature is doing, because we can do simulations in machines using these models.

We can run simulations, and by seeing the performance of these models, the performance of the natural models in artificial simulations, we can compare to the observed behavior of the natural systems, and we can pipe that feedback into the, um, biological studies and then help observe natural systems better and better.

So this is a two way process, actually. So that's why...

Andrea Hiott: It's great you say that. [00:40:00] Yeah, no, that's perfect. And that's kind of the point too, because it can get very confusing about the difference between the process itself and modeling the process, first of all. But also, um, I'm trying to understand the bigger picture, for you as a person, but also what your field is doing and what's motivating the work, why empowerment, where you came across it. And in unpacking that, maybe we can start to unpack the history of empowerment too.

Dari: Okay. Yes, yes. Very good question. Then I will say some words about how I came across empowerment, and why. Um, so, let's get that clear. As I mentioned, I started in the field of human computer interaction, and when I first...

Andrea Hiott: Did you get a degree in human computer interaction?

Dari: Yes, yes, actually I did my PhD research in human computer interaction. And that's when I was looking for some potential, um, [00:41:00] measures that could help human computer interaction research.

Andrea Hiott: Why were you interested in that? Did you just happen to kind of come across it, or was it something you'd been interested in for a long time? Was there a problem you wanted to figure out?

Dari: Well, I was working at Nokia research, um, on mobile, multimodal human computer interaction.

So, human computer interaction for mobile devices, smartphones, et cetera. There we used the usual methods and tools of human computer interaction and usability testing.

And we realized the limits and the difficulties that every researcher in human computer interaction faces, because there are various theories and schools of thought in that field.

Andrea Hiott: I guess that's what I want to hear. Because, you know, for those who aren't familiar with that whole area, like me, what's a problem that you hit your head against early on, that you might want to solve or that you're still trying [00:42:00] to solve?

Dari: Yeah. Yeah. I will come to that now, because there are different methods, and there are different disciplines actually studying this field.

Like psychology, sociology, physiology, engineering, computer science, linguistics. So you have people from all different backgrounds, and so it's quite a complex research community. And they're all concerned with studying and improving the factors that influence the effectiveness or efficiency of human computer interaction.

What is used in that field is usability testing, usability specifications and metrics, and they are used to ensure consistency, um, and compatibility when we create these user interfaces; that is typically the goal of UI design. However, usability engineering provides tools and measures

which guarantee that user goals and needs are met. But as the market becomes increasingly more [00:43:00] discriminating, this term of usability, or ease of use, very often becomes too vague, and it needs more rigorous evidence to back claims like easy to use or user friendly interface. Because we had been working on improving user interfaces for mobile devices, and mobile devices are, um, small.

So it was difficult to come across the exact paradigm at that time. And from that, we thought that we needed some more formal models and more sound theories and principles to enable this whole process of research, that would improve the current usability studies. And as usually happens in HCI research, when I was looking for some potential methods and tools, I came across empowerment. During my PhD research, I looked into [00:44:00] this formalism of empowerment, or the principle of empowerment, for characterizing human performance, since it was quite novel at that time: a quite novel information theoretic measure that taps directly into the perception action loop of dynamic interactive systems, and that's what I needed for mobile human computer interaction research.

Andrea Hiott: Let me just be kind of a little bit ignorant or something here. So you're basically creating technologies, or working on how to create better technologies, is that kind of fair? And then you want it to be usable, which basically means that the consumers are going to easily be able to interact with it. So that's the human computer interaction: when you pick up a phone, the way that you interact with it is everything about how successful that technology is going to be, for the company, but also for the person who's using it. But what I don't really understand is how that connects to the algorithms, or to this intrinsic motivation. Maybe you can help me with a bridge [00:45:00] there: why would you start looking at empowerment if you're trying to think about the usability of a machine device for a human?

Dari: Yes. Well, okay, then I'll explain that now, because, um, yeah, I think we need some motivation for this. The view now of a user interface for human computer interaction rests on the idea that user behavior is about the user controlling their own perceptions.

Um, so we, we have interaction, we have actions and feedback as we press buttons on the, on the touch screen and then we get feedback from that. And so these are actions and perceptions. So this closed loop interaction. But when you have a human in the loop, then it, there becomes a perception action loop.

Well well, now, now it becomes an integrated system. Because you have a machine and a human, and that's why it's called human computer interaction, and it depends both on the computer and both on the human because, [00:46:00] um, there is a closed loop. The human has sensors like eyes and ears, and it has actuators like fingers and voice.

It can speak as well, and it's interacting through the microphone or through the speaker and through the touch screen. And it can also get vibrotactile feedback as a feedback, it can, it can get audio alerts or audio icons, um, and, and it gets text on the screen, it gets images on the screen. 

Andrea Hiott: Just to take it generally, for me this sounds like the machine, which in this case we're talking about a phone, but all of this is kind of cartoon examples, that we need the machine in the interaction. The machine needs to be able to work with similar patterns, in a kind of continuous way with the human, for the communication to be seamless. So somehow the patterns of interaction need to be continuous. Does that make sense?

Dari: Yes. Yes. Perfect sense. And now, the focus of my work in this field was [00:47:00] actually to develop a measure that could qualify or quantify the effectiveness or the user friendliness of a user interface from the human perspective.

Andrea Hiott: Okay, now I get it, now I get it. Yeah, yeah, yeah.

Dari: So it's not about designing robots or designing artificial humans.

It was about measuring or capturing the essence of a human computer interface, how usable it is. Because the previous usability measures, as I mentioned, are very different and vague; sometimes they cannot be replicated. Usability, I think, mostly conveys some satisfaction, user satisfaction, at the end of the day.

Andrea Hiott: But actually, in this kind of work that you're doing, and in terms of how it's going to scale up into other things like the swarm robotics, there's actually a very real connection there, between why we would want to care about the way that the phone, or whatever the machine is, its algorithms, right?

Or its motivation, in the sense that there's this dynamic [00:48:00] interconnection. But you were going to tell me how you got into empowerment from here, right?

Dari: Yeah. Yeah. And yeah, so it starts from this view that the user's behavior is about them controlling their perceptions. And the perception is tangled up with specific possibilities of action, which are called affordances by Gibson. Such affordances are the possibilities for useful action which are offered by the environment to the embodied agent.

So 

Andrea Hiott: You bring up Gibson. Were you reading him? Were you aware of that back then?

Dari: Yeah. Yeah.

Andrea Hiott: Okay. That was all part of your...

Dari: Yeah. He is, yeah, um, one of the seminal references.

Andrea Hiott: Mm hmm. Yeah. I just interviewed someone who was one of his students, actually, Harry Heft, who writes a lot about affordances, and it's a really different way that they're using it there. But that doesn't matter. What I'm interested in is just that you were already really influenced by that early on.

Dari: Uh, well, as I mentioned, [00:49:00] this field is very much mixed; among the researchers they have psychologists and sociologists.

And they come with references from Gibson, Wittgenstein, and others.

Andrea Hiott: Because that term affordance is perfect for what I was just trying to describe, because that is that interaction, right? It's not in the environment or in the person. Yes. It's the interaction. It's the human computer interface, or the, whatever, the dynamism of that.

Dari: Exactly. Okay. So you get it now, because actually that was the primary motivation for my work in that field at that time: an ability to quantify affordances in a principled way, to benefit the design of HCI systems.

Andrea Hiott: So fascinating. So that was your motivation, in a way.

Dari: Yes. Yes. Okay.

Andrea Hiott: That's fascinating. And then, so you were kind of dissatisfied with what you had, the tools that you had.

What was there, yeah, 

Dari: right. 

Andrea Hiott: And how did you find Empowerment? Was it already, when did it start? 

Dari: The first publication on empowerment was in [00:50:00] 2005.

So it was about that time. I started my work on human computer interaction a bit later, but empowerment was already published, and that's what I came across.

Andrea Hiott: Do you remember when you came across it?

Dari: 2008, I think.

Andrea Hiott: In what context? What were you trying...?

Dari: I was, um, in a collaboration project with researchers at the University of Glasgow.

And it was brought to my attention by Professor Roderick Murray-Smith, who later became my PhD supervisor. So that was, um, how it came together.

Andrea Hiott: Okay. And maybe now we could say what empowerment even is, in a way.

Dari: Oh, well, um, yeah, well, if we want to give some brief description of the original principle of empowerment.

So, as I mentioned, it was introduced as a universal, task independent utility, [00:51:00] which is defined by the channel capacity between the agent's actions and the subsequent sensory observations. The empowerment is the channel capacity. So, to put it in direct terminology, it is the Shannon channel capacity between the actions of the agent and the subsequent perceptions. And that channel capacity is the maximum mutual information between the actions and the perceptions.

And in that sense it measures, as mutual information measures, the common amount of information between actions and perceptions, the relevance between actions and perceptions. So if your actions and perceptions are irrelevant to each other, then it doesn't matter what you do and what you observe; there is no correlation at all.

So if we use the term correlation instead of empowerment: if they're highly correlated, then you are in control of your perception action loop, because you know what to expect when you apply an [00:52:00] action. You know what perception you will observe at the end of the perception action loop.

So, um, so that is essentially what this mutual information or the channel capacity is measuring, because it's maximizing the mutual information between actions and perceptions. And when you maximize this mutual information, you maximize your level of control through this perception, action loop. So it's a measure.

Of perceivable control, because you perceive an observation from the environment through which you are applying your actions. So it's a closed loop through the environment. That's why it is very important to emphasize that the perception action loop is between the agent and the environment.

The agent applies an action through the environment, receives feedback from the environment, and applies another action. And that [00:53:00] unrolls over time. So you can do this over many time steps, and then you can ask for the perception after n time steps, and there we talk about n-step empowerment, for example.

Andrea Hiott: I don't know what that is.

Dari: If you skip the intermediate steps, so you just apply actions but don't observe anything, and you observe the feedback from the environment only after n time steps, then you have quite a long horizon in this perception action loop. And of course, the longer the horizon, the more options you might have, but you never know.
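Because empowerment is just Shannon channel capacity from actions (or n-step action sequences) to resulting sensor states, it can be computed for a small discrete model with the standard Blahut-Arimoto iteration. This is a generic sketch under that assumption, with a made-up toy transition matrix, not code from the papers discussed here:

```python
import numpy as np

def empowerment_bits(p_s_given_a, iters=200):
    """Channel capacity max_{p(a)} I(A; S') in bits, via the Blahut-Arimoto iteration.

    p_s_given_a[a, s] = probability of ending up in sensor state s after action (sequence) a.
    For n-step empowerment, each row would correspond to one n-step action sequence.
    """
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)                 # start from a uniform action distribution
    for _ in range(iters):
        p_s = p_a @ p_s_given_a                   # marginal distribution over sensor states
        # multiplicative update: p(a) <- p(a) * exp( KL[ p(s|a) || p(s) ] ), then renormalise
        kl = np.sum(p_s_given_a * np.log(np.where(p_s_given_a > 0, p_s_given_a / p_s, 1.0)),
                    axis=1)
        p_a = p_a * np.exp(kl)
        p_a /= p_a.sum()
    p_s = p_a @ p_s_given_a
    return np.sum(p_a[:, None] * p_s_given_a *
                  np.log2(np.where(p_s_given_a > 0, p_s_given_a / p_s, 1.0)))

# Toy channel with 4 actions and 4 sensor outcomes (invented for illustration):
noiseless = np.eye(4)                             # each action maps perfectly onto a distinct outcome
noisy = 0.7 * np.eye(4) + 0.1 * (np.ones((4, 4)) - np.eye(4))
print(empowerment_bits(noiseless), empowerment_bits(noisy))
```

For the noiseless toy channel the capacity comes out at 2 bits (four perfectly distinguishable outcomes); for the noisy one it is lower, roughly 0.64 bits, matching the intuition that noise in the perception action loop reduces perceivable control.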

Andrea Hiott: Does that speak to this idea of potential? Because in the papers, or in a lot of the writings, there are these distinctions made between the actual and the potential, I think, like dispositions. And so you can measure kind of almost like patterns or potentials. This is showing you kind of patterns and [00:54:00] potentials that can be applied in different situations. You don't have to have the environment exactly the same way, or do you?

Dari: No, no. The environment depends on the, um, on the task or the scenario, and it's part of the stochastic model. So for the model, you need a specification of both the agent and the environment, and that is all up to you. That is why it's flexible. It can be applied to anything, because it is actually, um, something like a scheme.

Um, it might be. With some tweaking and expertise, it might be applied to almost anything on an uncertain level because it's it's quite abstract. [00:55:00] And, and you also mentioned, um, yes. So it's just just to. State that it's measured in a single unit it's measured in bits because that is, that is what what entropy is actually, that's the background But what is measured?

The motivation is 

Andrea Hiott: measured? 

Dari: Well, the, the channel capacity, the channel capacity actually is measured in bits because it's a communication problem. Shannon communication theory is designed, is defined for communication problems. 

Andrea Hiott: We were talking in these bigger senses about, you know, what is intrinsic motivation in this kind of wider sense, and then this is where I get kind of lost, because if we start talking about Shannon information, we are talking about bits, right?

Dari: Yeah, yeah, exactly, yes. That is the measure.

Andrea Hiott: It's minimizing surprise, more or less, then?

Dari: Yeah. Yeah. You can say that.

Andrea Hiott: Would that be almost like what the intrinsic motivation model is helping the agent do, minimize surprise?

Dari: If we talk about entropy, yes, you can say that, yes.

Entropy is measured in bits as well. But it's, but this is a model. 

Andrea Hiott: So [00:56:00] it's literally doing it through bits of information, which is different probably than 

Dari: Exactly. 

Andrea Hiott: How it's happening in the real world, but we've used the model to understand the real world.

Dari: Well, now, this measure is quite a different beast, because it gives you a value in bits, which doesn't tell you much on its own, because it's incomparable to other problems, to other models. It depends only on the model.

So the point of this measure is to give you a benchmark. So, you know the range of this measure for this particular problem when you design it: you know what's the maximum level of empowerment and what's the minimum, and then, within this range, you can benchmark what you get for different solutions.

And that's how you can tune your system. For example, your user interface or, or your swarm robotics, um, decision decision making controllers. But it will be all different. So if I say now for this problem the range of empowerment is between zero and one [00:57:00] bit. But that doesn't say too much because for other problem, it can be between zero and one million bits.

It all depends on the model. And that's why it's not absolute, it's all relative to the model. You cannot compare the values of empowerment in absolute terms, you cannot compare them between different models. They're just, um, illustrative for one particular model and for one particular scenario that you want to study.
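To make the "single number in bits, relative to the model" point concrete, here is a minimal sketch, not Dari's code: it computes the channel capacity of a small, made-up action-to-sensor model with the standard Blahut-Arimoto iteration. The 4x4 toy channel, the noise level, and the function name are illustrative assumptions; the only point is that the value in bits, and its meaningful range, follow entirely from the model you specify.

```python
import numpy as np

def channel_capacity(P, tol=1e-12, max_iter=500):
    """Blahut-Arimoto: capacity in bits of a discrete channel P[a, s] = p(s | a).

    For empowerment, 'a' indexes the agent's possible actions (or action
    sequences) and 's' the sensor states they can lead to. Returns the
    capacity and the capacity-achieving distribution over actions.
    """
    n_a, _ = P.shape
    r = np.full(n_a, 1.0 / n_a)                    # distribution over actions
    for _ in range(max_iter):
        q = r[:, None] * P                          # unnormalised posterior p(a | s)
        q /= np.maximum(q.sum(axis=0, keepdims=True), 1e-300)
        log_r = (P * np.log(np.maximum(q, 1e-300))).sum(axis=1)
        r_new = np.exp(log_r - log_r.max())
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    p_s = r @ P                                     # marginal over sensor states
    cap = 0.0
    for a in range(n_a):
        m = P[a] > 0
        cap += r[a] * np.sum(P[a, m] * np.log2(P[a, m] / p_s[m]))
    return cap, r

# Toy perception-action model: 4 actions, 4 sensor outcomes, a little noise.
noise = 0.1
P = np.full((4, 4), noise / 3)
np.fill_diagonal(P, 1.0 - noise)
cap, _ = channel_capacity(P)
print(f"empowerment of this toy model ≈ {cap:.3f} bits")   # 2 bits if noise were zero
```

Change the transition matrix and both the number and its range change with it, which is the sense in which the values are only comparable within one model.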

Andrea Hiott: But when is this helpful? I mean, we talked about the phone, or we talked about the robots, or, I don't know how to kind of make it into an example that can then help us understand the parameters in a real-life situation. I mean, this will be messy, but something like with the phone, or with

If you're trying to understand how you're interacting with your iPhone or whatever

why would that click or put a light on in your head [00:58:00] that, Oh, this might be a way to go at this problem of understanding this affordance relation, 

what is clicking for you that's helping you understand something about that interaction differently? If you're trying to understand the human-computer interaction with any device,

why is that way of looking at it giving you something that, without it, you don't have? Do you know what I mean?

Dari: Yeah, yeah, yeah. I understand. Okay. Um, let's say, in the example with the phone: if you are trying to make a case, or a scenario, a use case of empowerment in the case of a mobile phone with which you interact every day.

You know that there are different factors that constrain, um, the usability of the device. There are usually different form factors, different sizes, [00:59:00] smaller, bigger. And you know the keyboard is quite packed. It can have different characters, from Chinese, Japanese, Latin, whatever.

Um, different numbers, and Arabic as well. So, operating with such a keyboard, now we can also consider different people with different finger sizes, people who can hit three buttons at the same time when they touch the keyboard, and that's not really helpful to type a message, right?

So this all, um, boils down to usability. And you can add to all of this, we can just term it as a disturbance or uncertainty.

So for example you press a button, you try to type a message, and there is uncertainty about which letter will come up, because sometimes you hit the wrong letter [01:00:00] because they're too small, or you walk in the street and there are a lot of disturbances again. So this is noise. Now let's put this all together.

Uncertainty, noise, disturbances of any kind, from your sensorimotor system, so your finger, your muscle, which is operated by your brain, but your brain is at the same time guiding your walk on the street. So you are multitasking. These are the sources that, um, constrain your ability to type properly while you're moving, and your hands are moving as well.

Your body is moving, so it's shaking, and whatever, people are bumping into you on the street. So there are all kinds of disturbances, noise. And on top of that we also have system disturbances that bring delays, and there are also sensor inaccuracies and, um, system errors in the software itself.

Um, so, you know, there are pop-up windows that come [01:01:00] and disturb your typing every now and then. And then you think that you have hit the button, but actually you just closed the pop-up window because it was on top of it, right? So this all contributes to the uncertainty in the interaction process.

So if you try to model that, this all affects your measure of control over the interface. Now, if we think of the environment as the phone, that's the interface you're trying to interact with. And if you have very bad performance, let's say very bad typing accuracy, it is due to low empowerment.

Does it make sense? 
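A rough sketch of the folding-together Dari describes, with assumed numbers only: two of the disturbance sources he lists, finger slips and pop-up windows stealing a press, are combined into one channel from intended key to registered outcome, and the mutual information over that channel gives a single value in bits for "control over the interface." The slip and pop-up probabilities, the extra "lost press" outcome, and the helper names are all hypothetical.

```python
import numpy as np

def mutual_info_bits(P, p_x):
    """I(X; Y) in bits for a channel P[x, y] = p(y | x) and input distribution p_x."""
    joint = p_x[:, None] * P
    p_y = joint.sum(axis=0)
    ref = p_x[:, None] * p_y[None, :]
    m = joint > 0
    return float(np.sum(joint[m] * np.log2(joint[m] / ref[m])))

def keystroke_channel(n_keys, p_slip, p_popup):
    """Channel from intended key to registered outcome, folding two assumed
    noise sources: a slip onto some other key (p_slip) and a pop-up window
    swallowing the press entirely (p_popup, modelled as one extra outcome)."""
    P = np.zeros((n_keys, n_keys + 1))
    for k in range(n_keys):
        P[k, :n_keys] = (1 - p_popup) * p_slip / (n_keys - 1)   # slipped onto another key
        P[k, k] = (1 - p_popup) * (1 - p_slip)                  # registered correctly
        P[k, n_keys] = p_popup                                  # press lost to the pop-up
    return P

P = keystroke_channel(n_keys=30, p_slip=0.12, p_popup=0.03)
uniform = np.full(30, 1 / 30)   # by the input symmetry of this toy channel, uniform intent is optimal
print(f"control over the keyboard ≈ {mutual_info_bits(P, uniform):.2f} bits per press")
```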

Andrea Hiott: It makes sense, but what's interesting is this can happen at the algorithmic level, can't it?

So like, the phone could in a way, instead of needing me to get everything exactly right, infer from the overall kind of pattern that I've expressed, [01:02:00] it can use this algorithm to find that solution quicker, is what I'm trying to get at.

Dari: Yeah, and they are, exactly, they are doing that, they also use machine learning for correcting user errors, that's exactly correct. But empowerment is not a replacement for these algorithms. It is just a measure that encapsulates all these different kinds of disturbances that I just now mentioned and listed, all these different types of noise and uncertainty in the interaction process, which otherwise would have to be measured separately: sensor precision noise with some units, communication delays measured in milliseconds or something, or, I don't know, the size of the keys on the keyboard, because some are bigger, some are smaller.

And these are all contributing to your user satisfaction, or to the usability of the user interface, essentially to your [01:03:00] typing performance. And if you consider them all separately, then it's not too handy, it's not too useful. Empowerment, if you bring all of these types of uncertainty into the same model, you measure it in the end in bits. It's just a single value. And as I said, if you know the range, um, 0 to 5, or 0 to 100, let's say in percentage, then you can modulate, you can control the level of user satisfaction by changing, for example, the size of the keys on the keyboard.

Increase or decrease. But essentially, that was exactly the motivation: to understand better the quality of the user interface and therefore use that information to design a better user interface at the end of the day.
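A sketch of that tuning step, under a purely hypothetical link between key size and slip probability (the millimetre values and error rates below are invented for illustration, not measured): the same kind of per-keystroke capacity is computed for each candidate key size, so the designer sees one number per design instead of separate delay, accuracy, and size figures.

```python
import numpy as np

def typing_empowerment(n_keys, p_error):
    """Capacity in bits per keystroke of a symmetric 'fat finger' channel:
    with probability p_error the press lands uniformly on one of the other keys.
    For this symmetric channel a uniform intent distribution is capacity-achieving."""
    if p_error <= 0:
        return float(np.log2(n_keys))
    p_correct = 1.0 - p_error
    h_noise = -(p_correct * np.log2(p_correct)
                + p_error * np.log2(p_error / (n_keys - 1)))
    return float(np.log2(n_keys) - h_noise)

# Hypothetical mapping from key size to slip probability while walking.
for key_mm, p_err in [(4, 0.30), (6, 0.15), (9, 0.05)]:
    print(f"{key_mm} mm keys: ~{typing_empowerment(30, p_err):.2f} bits per keystroke")
```

Bigger keys mean less noise and more bits per press, but also fewer keys or a larger phone, which is exactly the kind of trade-off a single measure makes easy to scan.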

Andrea Hiott: That's wonderful. And that makes things a lot clearer for me. And as you were [01:04:00] talking, I was also thinking back to the robots that we started with, because you were saying at the beginning how each one can't explore the whole space by itself. It needs everyone else, so to speak. But it also has to sort of communicate with them. Does this help you there too, to understand how each robot might have its own intrinsic motivation? Sorry, I'm kind of making it a little too psychological, but um, if each robot is talking to the others and sharing information, is there also a way in which studying it this way, even from your point of view, the way you're studying the robots, can help you understand what collective intelligence is really happening there? Or what does that bring up for you, when I try to bring it back to these different agents who are trying to communicate, and your position, trying to understand how to make them communicate better?

Dari: Yeah, yeah, very good point, because there is again a two-way interaction between these studies and the natural studies. So we try to [01:05:00] implement decision-making mechanisms, neural networks, into robots to drive the swarm towards some goal. But at the same time, these observations and these swarm models can be used to compare with data that is observed in nature, in bird flocks, or in fish schools, and swarms of ants and bees.

Um, because that's where we learn from, that's what we get as a background and we just try to mimic it. But when we implement these simple robotic swarms, we have the tools and also the simulation. So we also do a lot of work with simulated robots, because that's much faster and easier, and there you can do much more.

And with those models of simulated [01:06:00] robotic swarms you can try to understand better what happens in nature, because those are models that you can control. There are system parameters, you can change control variables, and then the behavior of the whole swarm changes.

And then you can link it to, um, to bird flocks or other natural swarms, and then infer what is actually the behavior or the driving force behind that.

Andrea Hiott: That makes a lot of sense to me then, because I think I was trying to put it into the robot or into the swarm, right?

And actually empowerment isn't about putting it into it, in the sense that I'm talking about. It's about, from your perspective, understanding what you're trying to understand by building that whole system. Because understanding that then sort of increases our agency and our wonder of the world, and from that position, from you trying to understand it, it [01:07:00] makes a lot more sense to me, because I can see how that would give you something: trying to understand better how certain changes in the pattern of the kind of intrinsic motivation of one robot could lead to a whole kind of cascade of changes, in a way that, to go up a scale, would itself seem like one intrinsic motivation. Does that make sense? Like the swarm's intrinsic motivation.

Dari: Yeah. Yeah. Exactly. It's just an analytical measure which gives some kind of quality estimation of what's going on, if that makes sense, of the process, and that helps you actually see the big picture, because it brings together many things on a higher level.

And it can explain, and it can, um, actually make predictions. So the important thing is that because it's analytical, you can compute it [01:08:00] theoretically with the system that you have designed, you don't have to run any experiments. You don't have to buy expensive robots and invest in that. Just by computing the level of empowerment for specific system parameters, for specific levels of some parameters, you can decide actually what you need in your future robot when you're building a robot, for example, what capacities, what sensor range, what communication range, what camera you want to build into your robot. Because usually swarm robotics, I didn't mention, is driven by very low-cost small robots, which are notorious for breaking very easily because they're very low cost, but that's okay, because there is redundancy in the system, because the swarm is big enough.

But of course, that is why you try to minimize all the costs of the [01:09:00] sensors, all the hardware, because you know that it's dispensable anyway. And you still want the swarm to solve the problem, because you have to invest something into the hardware, but you want to trade off the cost versus the performance.

And if you can do that at design time, and if you can get a prediction from the empowerment measure, a prediction for the system parameters that will solve your problem at this cost, then you don't have to play with real hardware to decide that, oh no, this is too expensive to solve the problem, I should do something simpler.

So you understand what I'm trying to say? You can. Yeah.

Andrea Hiott: Yeah. That makes a lot of sense relative to your work and also how you got into this and everything. 

Dari: Yeah. That's like the big picture of it. Yeah. 
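A sketch of that design-time workflow, with every number invented: a few candidate sensor options, an assumed mapping from each option to the noise it would leave in the perception-action channel, and a required empowerment level for the task. The real mapping would come from the actual swarm model; the point is only that the selection can happen on paper, before buying hardware.

```python
import numpy as np

def empowerment_bits(n_states, noise):
    """Capacity in bits of a symmetric n-state perception-action channel where
    the intended outcome is confused uniformly with the others with
    probability `noise` (same toy channel family as the sketches above)."""
    if noise <= 0:
        return float(np.log2(n_states))
    p_c = 1.0 - noise
    h = -(p_c * np.log2(p_c) + noise * np.log2(noise / (n_states - 1)))
    return float(np.log2(n_states) - h)

# Hypothetical options: (label, unit cost, effective noise this sensor would leave).
options = [("short-range IR", 12.0, 0.35),
           ("mid-range IR", 25.0, 0.15),
           ("long-range lidar", 90.0, 0.04)]

required_bits = 1.5          # assumed performance threshold for the task
for name, cost, noise in options:
    e = empowerment_bits(8, noise)
    verdict = "meets" if e >= required_bits else "misses"
    print(f"{name:16s} cost {cost:5.1f}  empowerment {e:.2f} bits  -> {verdict} target")

viable = [(cost, name) for name, cost, noise in options
          if empowerment_bits(8, noise) >= required_bits]
print("cheapest viable option:", min(viable)[1] if viable else "none")
```

Under these made-up numbers the mid-range sensor is the cheapest one that clears the threshold, which is the cost-versus-performance trade-off decided without touching real robots.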

Andrea Hiott: Do you think of yourself as modeling intelligence? 

Dari: When you're making music, it's not, not really.[01:10:00] 

Oh, well, um, modeling intelligence, I mean...

Andrea Hiott: Modeling intelligence. When you see intelligence in the world, do you think the birds swarming are intelligent, for example? Or does that word not apply? Or it only applies...

Dari: Well, I think it's kind of a collective intelligence. It depends on what we mean by intelligence, of course, because, um, it's a collective behavior, certainly.

And because it is very complex, we might say that it needs a certain level of collective intelligence to perform all these shapes of bird flocks without collisions. You know, they're very proficient in that, and that is, yeah, that's incredible. But I just want to point out that empowerment is not, uh, directly supposed to build intelligence.

So if you ask me if I'm creating intelligence, not with empowerment. Of course, for the robotic swarms we [01:11:00] build systems that behave intelligently, but they use neural networks, so not empowerment.

Andrea Hiott: I don't think you're creating intelligence, but I meant modeling it, trying to understand it.

Dari: Modeling, well, yeah, it's part of, yes, yeah, well, it's part of the process, yeah. 

Andrea Hiott: And by intelligence, I mean, to go to Gibson or somebody, to try to get past that dichotomy. He was trying to get past that subject-object, the way we think of mind, body, and so on. For me, any creature, any being, especially birds and swarms of birds, that's intelligence.

But when people hear that word, they often think of it as human intelligence, which means it's attached to some kind of symbols and it's thinking about itself and all of this, which I definitely don't mean at all. I really mean almost like a foundational problem-solving movement, you know, so to speak.

And it seems you would almost have to go to that level when you're modeling something like a swarm, you know, that you almost have to go to that level of, [01:12:00] yeah, whatever that is.

Dari: Yeah. Yeah. Well, that's exactly right. So there are different levels in this process, and empowerment is operating on one particular level, as I already mentioned, on the perception-action loop level.

But of course, yeah, we are aware that there are levels above and below that, which are very important, and empowerment is just like a building block in that process.

Andrea Hiott: What's really exciting for you, when something happens in your work? What's like a... is it when you figure out a math equation? Is it when the robots do something different? Is it collaborating with everyone around you? Like, what's, what feels...?

Dari: Yeah. Well, I think in my work the most exciting parts have been bringing together different fields of science and research, and applying models from one field to another, like cross-pollinating different fields, [01:13:00] and in particular applying empowerment to different paradigms, which are seemingly disparate. But in the end, as I said, empowerment is quite abstract, and it can be leveraged in quite many different sciences, and I don't mean just natural sciences, but humanities as well.

Andrea Hiott: So empowerment is measuring channel capacity, or somehow the utility is a channel capacity, but I wonder how we relate that to love. What's empowerment relative to love? Does love have a channel capacity? Does our expression of love in the world have some sort of parameters of capacity? When we're in that capacity, do we feel love? [01:14:00]
