
S1 Ep3: Firefighting Nanobots & Swarm Technologies

November 15, 2022

Are nanobots real? Are swarm robots inherently evil? Well, it turns out they aren't just the stuff of science-fiction supervillains. Learn about their very real potential, from firefighting to regrowing limbs and even battling cancer, as Lars Klint chats to machine learning specialist Ivica Slavkov in this episode of TECHnically Possible.

The discussion covers:

  • Swarm robotics and its link with nature

  • Moral technicality - Swarms save the world, but at what cost?

  • What is swarm robotics?

  • Pros: self-organized systems are robust

  • Cons: imprecise emergent behavior

  • Drones vs Swarms

  • Morphogenesis or ‘how bodies build themselves’

  • Turing Patterns - Why zebras have stripes

  • Emergent behavior - more than the sum of its parts

  • Firefighting robots!

  • Cancer-killing bacteria swarms

  • The endless redundancy of swarms

  • Regenerating limbs

  • Where’s the kill switch?

 

Ivica Slavkov is a computer scientist with a PhD in Machine Learning. He worked at the interface between biology and robotics and his work on morphogenesis in robotic swarms has been published in Science Robotics. Originally from Macedonia, he is currently working in Brussels, Belgium as a Machine Learning/Data Science expert at the European Commission.

 

Episode Resources & References

 

See more of Ivica’s work

Other resources mentioned in this episode


If you’d like to get your education on, try these further resources about nanobots, swarm technologies and machine learning:


If you enjoy this episode, please consider leaving a review on Apple Podcasts or wherever you listen.

Please send any questions or comments to [email protected].


When ants build a nest in nature, how do they all know what to do? And how does one ant know where to put the next stick and what dirt to remove and where to build? How does a school of fish know where to swim and which shape to keep for protection? Nature's wonderful. And this is even more mind blowing when you realize that there isn't a single centralized leader brain or idea to guide them.

So how does that work? In this episode, we dive into swarm technology, in particular how we can use technology to mimic this behavior for swarms of robots of various sizes to help us solve medical problems and, well, much more. Anyone up for cyber cells? My name is Lars Klint, and this is Technically Possible, a show that investigates future technologies' impact on us humans and our connections in the world, whether that is good, bad, or a swarm of autonomous drones.

If you're new to the podcast, let me give you a quick rundown. In each episode we discuss an emerging technology and invite an industry expert to help us break down where we are currently at, and more importantly, where this tech could possibly or impossibly take us, all the while keeping it grounded in what exactly that means for us humans, and maybe even some fun along the way.

To help me explore our swarmy future, let me introduce my guest for this episode. Ivica Slavkov is a computer scientist with a PhD in machine learning. He worked at the interface between biology and robotics. That sounds very cool. And his work on morphogenesis, which sounds even cooler, in robotic swarms has been published in Science Robotics. Originally from Macedonia, he's currently working in Brussels, Belgium as a machine learning and data science expert at the European Commission. So welcome, Ivica.

Nice to have you with us.

Hi Lars, thanks for inviting me, and thanks for the in depth introduction. No problem. I hope I have something to say after this. No.

Yeah. Oh, well, as long as we can talk about cyber cells, I'm happy.

Yes. So, yeah, I hope that together with you, I'll be able to kind of introduce our listeners to, you know, how fascinating the world of swarm robotics actually is and how much more present it'll be in the future. And also to divert a bit from the sci-fi myth, you know, of titanium swarm robots that kill people and everything.

No. Yeah. So it's much more benevolent than that. Yeah.

That's right. I was gonna say, I'm sure we'll talk about that too. Yeah, no, it's really interesting. I don't know much about swarm technology or, you know, any sort of adjacent technologies to that. It's quite fascinating, I must admit. I think the closest I've seen is a YouTuber.

His name is Michael something, I can't remember. He's a Filipino American. He's very funny, and he made a swarm of killer drones because he's a computer scientist, but that's about it. It wasn't very real, but it was very entertaining. All right, so now before we get into the brains of the episode, I just wanna challenge your morality. Now, don't worry, it's just a bit of fun. There are no wrong answers and the points don't matter. So this is where you get swarmed up, get your brain waves on track, and make sure there is no central consciousness that takes over in a segment we like to call moral technicality.

All right, so here's my moral technicality for you, Ivica. Large formations of drones of all sizes have huge potential to solve a whole variety of issues for humankind. Now, you have invented a method and implementation of swarms that has the potential to solve human problems, from curing cancer with tiny nano drones to preventing all traffic accidents to solving climate change.

Yep, it's pretty amazing. Now, however, in order for the technology to work, you have to implant a small microprocessor into all humans, which will track their movement, vital stats, sleep patterns, and much more. So is that worth it? And what do you do?

Well, now... my first answer is yes, definitely, because... yeah.

Well, because we're doing it already in some way. Oh, I think, well, we are tracking our movement, whether consciously or unconsciously; we didn't really agree to all these things that track us currently. And you know, you have your smart watch tracking your biological stats and everything. So the only difference would be in this, you know, whether it's implanted or not.

No, the only thing that I have a problem with is if it's something that's enforced. And of course these things are important. And if it's something that can, of course, be misused, you know, I immediately see how this can be misused.

No. So

hence the moral technicality.

Yeah. Yeah. But yeah. Well it would have to be in all people, cuz otherwise they wouldn't work.

Yeah. So, well, huh, good question then. Then my answer would be the opposite; then my answer would be no, because I'm sure there will be people that would not accept it. So then I guess we have to deal with it differently.

I guess we do, but yeah, no, that's... I mean, we do this for each guest on each episode. We have a moral technicality question. And it's always meant to not be easy and not be yes or no, right? It's always an "it depends" kind of answer. So that's cool. All right. So, enough of testing our moralities.

We should get into what the current status of swarm technology is and where we're at right now. All right, Ivica, what is swarm robotics in a nutshell?

Yeah. Well, to start from the top. No. Swarm robotics is basically, well, if we just look at the two words, you know, robotics first, it means that it involves robots.

And then we can also think about what robots are in general. No. So a robot for me is something that's constructed, that can kind of sense the environment and interact with it and make certain decisions, no matter how limited this thinking is. And a swarm is when you have, well, a large number of these.

So swarm robotics is kind of the field that studies large numbers of robots that interact between themselves and with the environment and kind of solve certain problems. I would go back maybe to where it's inspired from. So it's inspired from nature; we see these swarm behaviors everywhere in nature, from, you know, ants to termites that build the mounds.

For example, if you take the termites, none of them knows what it's building or that it's even building something. They have just information about their local surroundings, and this is very important. And yet they manage to build those wonderful structures with very limited intelligence and, very importantly, without any central control.

So the key element here of central control is also very interesting, because it's not just, okay, we decide not to have central control just because. It's fascinating because self-organized systems without central control like this are very robust. They're very robust to noise. They're very robust to failure.

There is no single point of failure. If one termite dies, nothing happens. Yeah. Basically. Yeah. Yeah. Exactly. And there is no single operator that can make a mistake so that the whole colony basically dies because a single one made a mistake. And they're very adaptable. So they adapt the buildings, they adapt their behavior according to the environment, and they're quick to respond in this sense.

The downside is that if you don't have central control, you are much less precise. And, especially in terms of swarm robotics, it's much more difficult to predict what kind of emergent behavior the swarm will have.

Ah, we'll get to emergent behavior in just a second. That is an interesting one.

So, sort of trying to bring it in, I guess the robotics side of things, into the current world. So you mentioned ants and termites, and I kind of get it, cuz you go, well, who's actually controlling them? How do they know what to do? And as you said, no 'one' knows what to do. Is this sort of... I kind of compare it to when you see these drone formations that spell out things in the sky, or like fireworks kind of thing.

They're not fireworks, but they kinda look like it. Is that sort of the same idea, But I'm sort of thinking they already have a centralized brain. They must have.

So the difference is that they're already probably, I'm assuming, pre-programmed what to do. Yes. And how to organize. So that's quite different. Yes. What we on the engineering side do is, well, first we need to engineer the robots, to mechanically build them.

And the other engineering part is the software, or kind of the rules by which they will behave. So these swarming systems function in a way that you just give to each of these robots a simple set of instructions, and they have just this local information around them. You give them a simple set of instructions for how to react to the local information, whether they're sensing something in the environment or whether they're communicating between themselves.

Usually there is communication between the drones; it's very important. And then based on this, they just make a decision what to do, you know, go left or right, but they don't know exactly where in the sky they will position themselves. So basically all the decisions are made by just following the simple rules.

But what we see at the end, what happens is that we have this drone formation that does a certain shape or has a certain color or exhibits a certain behavior.
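As a concrete illustration of "simple local rules, complex outcome", here is a minimal sketch. This is my own toy example, not the system discussed in the episode, and every numeric value in it is an assumption: each simulated drone sees only neighbors within a fixed sensing range and follows two local rules, cohesion and separation. No drone knows the target formation, yet the swarm contracts into a tight cluster on its own.

```python
import random

# Toy decentralized swarm: each "drone" knows only the positions of
# neighbors within SENSE_RANGE and follows two local rules -- drift toward
# the average neighbor position (cohesion) and back off from any neighbor
# that is too close (separation). All values below are illustrative.

SENSE_RANGE = 5.0   # how far a drone can "see" (assumed)
TOO_CLOSE = 0.5     # separation threshold (assumed)
STEP = 0.1          # fraction of the way a drone moves each tick

def tick(positions):
    """One synchronous update: every drone applies the local rules."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        neighbors = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                     if j != i and (nx - x) ** 2 + (ny - y) ** 2 <= SENSE_RANGE ** 2]
        if not neighbors:
            new_positions.append((x, y))
            continue
        # Cohesion: drift toward the local center of mass of the neighbors.
        cx = sum(n[0] for n in neighbors) / len(neighbors)
        cy = sum(n[1] for n in neighbors) / len(neighbors)
        dx, dy = (cx - x) * STEP, (cy - y) * STEP
        # Separation: push away from any neighbor that is too close.
        for nx, ny in neighbors:
            if (nx - x) ** 2 + (ny - y) ** 2 < TOO_CLOSE ** 2:
                dx -= (nx - x) * STEP
                dy -= (ny - y) * STEP
        new_positions.append((x + dx, y + dy))
    return new_positions

def spread(positions):
    """Rough dispersion measure: max squared distance from the centroid."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return max((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in positions)

random.seed(1)
swarm = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(20)]
before = spread(swarm)
for _ in range(200):
    swarm = tick(swarm)
after = spread(swarm)
print(after < before)  # the swarm contracts without any central controller
```

The point of the sketch is the one Ivica makes: the "formation" is nowhere in the code, only the two local rules are.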

Something you were saying that stuck out to me was, like, simple rules create complex outcomes. I think that's really powerful. That sort of says it all to me, that no one robot knows the end result.

As you said, it might not be a hundred percent precise, but they only have these simple instructions as in go left, go right. And then the end result is much more complex. Like it's fascinating. I find it very fascinating that it's possible.

Yes. I mean, relating it to what I worked on, there is this very obvious kind of discrepancy between the complex behavior and the simplicity of the robots.

So what I was working on was it's called like morphogenesis in robot swarms.

That's a great word.

I know you love that term.

Such a good word.

It's not a sci-fi term I must say. It's a very biological term. You have to make science sound interesting, right? No,

That's true. Yep.

So basically it's a term used for how bodies build themselves.

So the most fascinating thing that we have is how from a single cell, or from a group of cells, you get the whole body. How do the cells know where to go, what to do, which type of cells to turn into? You know, what kind of structure to build, how to connect? Well, the simple answer, the first answer is they don't. So they don't know that they're doing that, but they're just following these rules that they have inside themselves, like the DNA or whatever.

And then they're sensing the environment and they're reacting according to that. So in the lab where I used to work in Barcelona, under Professor James Sharpe, they used to study biological development, limb development specifically. And one of the principles of development is so-called Turing Patterns.

So Turing Patterns are something which was discovered by Alan Turing. So besides him being a big name in AI, he also had an impact in biology. Basically, Turing Patterns, without going too much into detail, just to keep it brief, are something that you see also in everyday life, like the stripes of a zebra, which appear due to the interaction between two so-called morphogens, or like chemicals in the body that interact with each other, to put it simply, or the spots on a giraffe.

So we basically took this very biological principle and we said, okay, if we apply it to a non-living system, like robots, would it be able to do something with it, or will it just completely not work and nothing happen? No. Because we completely detach it from biology. So, the robots that we had were these tiny coin-sized robots, basically very small and very primitive.

So they had like three legs basically, and the only movement they could do was left or right. They don't even have wheels. No. And they have very limited processing power in terms of the microprocessor and stuff, and very limited things they can do. And the only other thing they were doing is they were passing messages to each other within a certain range.

So they were just broadcasting a message. So those were the only two things that they were capable of doing. And starting from this lump of "cells", and by implementing these Turing Patterns and other kinds of simple rules for interaction between the robots, we managed to get the robots, so if you put them in a pile, they start kind of communicating between the cells and deciding first, or discovering, where they are.

Are they in the middle of the swarm? Are they on the edge of the swarm? So in sense of like cells, depending where they are, they assume a certain identity. In the same way the robots assumed a certain identity. And then they started creating this Turing Patterns and migrating according to the Turing Patterns, and then kind of creating a growth.

So from this lump of cells, you suddenly start getting structures growing out of them, basically because they're reorganizing into those structures. Another fascinating thing was when we damaged the structure, so we kind of cut off, let's say, a tentacle that grew: the Turing Pattern readjusted, and then a new migration happened, and the shape kind of readjusted, and we again were seeing growth, but much smaller, because of course we removed part of the robots.

They don't reproduce as cells would do.

So what you're saying is you had cells, I'm doing quotation marks here, "cells", which were robots, and you programmed them with a Turing Pattern. So that's a little piece of software, I'm guessing, which is sort of similar to DNA, like it has some instructions in it.

No. So it's more like what we used the message passing system for. So what we programmed was, you basically just need two concentrations of "molecules", using quotation marks here, which interact. There's a rule for how these two molecules interact with each other and how much, and you can adjust certain coefficients. And basically this was the only thing they pass between themselves. They say, I have this concentration of this molecule and that concentration of the other molecule. And then there's an equation that calculates how they interact. And depending on this, from a completely uniform distribution of these concentrations, suddenly patterns emerge.

In a self-organized way.
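To make the two-concentration message-passing idea concrete, here is a small sketch. It is not the published model: it uses textbook Schnakenberg reaction kinetics as a stand-in for whatever equations the actual robots ran, and every parameter value is an assumption. Each simulated robot on a ring holds two virtual concentrations, swaps them only with its immediate neighbors, and applies one local update rule; a Turing-style pattern emerges from a nearly uniform start because the inhibitor-like species "diffuses" much faster than the activator-like one.

```python
import random

# Sketch of robots as "cells": each robot k on a ring stores two virtual
# concentrations u[k] and v[k], broadcasts them to its two neighbors, and
# updates them with a local reaction rule plus neighbor diffusion.
# Schnakenberg kinetics with illustrative parameters (not the paper's model).

N = 60                # robots arranged on a ring
DU, DV = 0.02, 1.0    # v spreads ~50x faster than u (assumed values)
A, B = 0.1, 0.9       # feed rates; uniform steady state is u = 1, v = 0.9
DT = 0.05             # time step

random.seed(0)
u = [1.0 + random.uniform(-0.01, 0.01) for _ in range(N)]  # near-uniform start
v = [0.9 for _ in range(N)]

def lap(field, k):
    """Discrete diffusion term, using only the two neighbors' broadcasts."""
    return field[(k - 1) % N] + field[(k + 1) % N] - 2.0 * field[k]

for _ in range(2000):
    nu, nv = [], []
    for k in range(N):
        uk, vk = u[k], v[k]
        # Local reaction + diffusion; clamp at zero for numerical safety.
        nu.append(max(uk + DT * (DU * lap(u, k) + A - uk + uk * uk * vk), 0.0))
        nv.append(max(vk + DT * (DV * lap(v, k) + B - uk * uk * vk), 0.0))
    u, v = nu, nv

amplitude = max(u) - min(u)
print(f"pattern amplitude: {amplitude:.2f}")  # well above the 0.02 starting noise
```

The only things each robot ever "says" are its two concentration values, exactly as in the episode; the peaks and troughs that appear are the stripes-of-a-zebra mechanism in miniature.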

Yeah, that's right. And it's... oh wow, I'm just thinking, like, what are the possibilities? Well, we'll get to that when we talk about the future, not just yet. Yes. So, okay, so morphogenesis, and you were effectively able to recreate some sort of swarm. It's a decentralized pattern really, isn't it?

Cuz, as you said, you cut off some of 'em and they didn't even know that they'd been cut off. The rest of 'em just went, oh, I'm now at the end of the thing. Or, I don't have as much of, you know, a concentration of, let's say, the molecule that whatever the program said, and it would just adjust.

So what can we do with swarm technology today? Are there actual implementations?

Well, today it's still, I think in terms of maturity, it's still limited to a lot of these sci-fi-ish projects, no, where we're trying to connect certain things. But you also have, on a larger scale, things like, I think Amazon is using something in the warehouses, let's say, for delivering and getting goods through the warehouses.

So there's already some kind of movement towards using it, you know, out there in the wild. But there are two major problems. No, first the problem is, once you bring something out of the lab and into nature, it gets to interact with nature, right? Yeah. So it's very unpredictable what the outcome of that will be and how that will interact with the full emergent behavior of the swarm.

So first, it's very difficult to predict it. And second, it's also very important to start thinking early on about the safety of these systems. Because this needs to be also,

Yes, of course

in terms of also legislation, how do you prove that? Because you can have single robots kind of being a threat to humans, you know, depending on the size, depending on what they do.

And you can have the whole emergent behavior behaving in unpredictable ways, which are not good for the individuals.

What is emergent behavior, now that you mention it? Because we did say we were gonna mention that.

Emergent behavior is like, I really like this saying, you know, when the whole is more than the sum of its parts.

So basically, when the system that you engineered starts behaving in ways that you can't directly predict from the simple instructions that you programmed, that's already emergent behavior. Because this behavior is not directly built in, but it's implicitly there and it arises.

And the only reason it arises is because you have a swarm and because you have interaction. Because without a swarm, if you have one robot or two robots, you wouldn't see this emergent behavior. So this is also related somehow to scale, but also, most importantly, to the interactions between these robots.

And then you get behaviors that you can't predict.

Yeah, it's the possibilities of what could happen is just, it's harder to sort of get it all in my head.

Yeah. Pretty much everything is open, huh? Like, from, I dunno, drones delivering things, to... imagine, instead of now for wildfires, let's say it's common, no, at least here in Europe, instead of having one plane that will go fill up with water and then put out the fire, you just release a swarm of robots which pick up water. They go, and then depending on how the fire changes, they decide which part to put out, and they kind of interact with the fire in some sense. They sense it, and between themselves one is saying, okay, I'm running out of water, I'm going back, and another one immediately fills in its place.

And again, this is without central control, so you just release them and then they go and do it.

Yeah. And that sort of brings us to, you know, the real question we actually want to answer on the show, and that is: how does this affect humans in the future? What is technically possible with swarm technology? So at this point, it might be kind of confusing how a swarm of little robots can make a difference in the world.

Like you just gave a very good example of, you know, firefighting. I hadn't actually heard that before, but yeah, let's dive into the future and talk about how this might actually change things, both good and bad. You know, there's always the question, we kind of mentioned this before we started recording: hey, can I get a killer army of drones? You know, your mind just goes to that sci-fi world, right?

And I know that's not, of course, where we... you know, for you as a scientist, that's definitely not the aim of this technology. But what is possible? Where are we heading with this in the future?

Well, I think the future will get very interesting and I think we are heading towards a hybrid future in the sense that a lot of these technologies around swarm robotics are also maturing.

And this opens up lots of possibilities for how these technologies will interact with each other and how they will give us these self-organized systems which do something for us. So, just to mention a few things here, you know, one is the amazing progress we're making in AI, and another one is the amazing progress we're making in biology, especially in the field of synthetic biology.

So as I mentioned before, when we were getting ready for the show, no, one of the terms that I wrote there, and I like it, is this "cyber cells". No. So the idea is, until now, I mean, most of the nanomaterials that we build for drug delivery, for cancer or whatever, you know, they're constructed by us. But nature has actually spent millions of years of evolution perfecting these systems.

How to move, how to interact with the environment, how to sense the environment. And we can kind of piggyback onto that, but change it in the way we want. So just recently an article came out about using swarms of engineered bacteria for drug delivery. So you basically use a synthetic gene circuit.

You put it in them, and basically you get them to go through this cycle where, let's say, you put them where the tumor is, you put in these bacteria, and they start synthesizing the drug to kill off the tumor. But not just that; actually, what they do is a timed release. So they all come, and, and bacteria, I dunno if you know, can also sense each other.

So they can sense how much

Yeah. Right.

concentration there is of the bacteria there in the space, and once they detect that there's enough of them, they trigger a mechanism which kind of destroys most of them but releases the drug onto the tumor. And then a few of them survive. And those that survive don't activate the circuit that they have for this drug release, let's say.

And they start multiplying like normal bacteria do until they reach a certain population. And then again they release the drugs, and again, you know, some of them survive. So this is very interesting, you know; this is not science fiction, this has, you know, already been tested in the lab.
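The cycle Ivica describes can be sketched in a few lines. This is a toy model, not the actual paper's system, and the quorum threshold, survivor fraction, and growth rate below are all made-up illustrative numbers: the population grows, and each time it crosses a density threshold, most cells lyse and "release the drug" while a small fraction survives and restarts the cycle.

```python
# Toy quorum-sensing "synchronized lysis" cycle: grow, sense the local
# population density, mass self-destruct above a quorum threshold (releasing
# the drug payload), and let a few survivors reseed the next round.
# All constants are illustrative assumptions, not values from the paper.

QUORUM = 1000      # population at which the lysis circuit trips (assumed)
SURVIVORS = 0.10   # fraction that escape lysis and reseed (assumed)
GROWTH = 1.5       # per-step multiplication factor (assumed)

def simulate(steps, population=10.0):
    """Return (final_population, drug_pulses) over `steps` growth steps."""
    pulses = []
    for t in range(steps):
        population *= GROWTH              # normal bacterial growth
        if population >= QUORUM:          # quorum sensed: trigger mass lysis
            pulses.append((t, population * (1 - SURVIVORS)))  # drug released
            population *= SURVIVORS       # only the survivors remain
    return population, pulses

final, pulses = simulate(40)
print(len(pulses), final < QUORUM)  # repeated pulses; population resets each time
```

The output is the point of the design: not one continuous dose, but a train of timed drug pulses, each triggered purely by the cells sensing their own density.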

Yeah, that almost goes back to what you were explaining before with your research. Like, it's almost sort of a similar idea: it knows how many there are, whether to go left or right, or in this case whether to release the drug or not. It's, again, very basic instructions, but through the complex behavior you eradicate a tumor, potentially. It's quite amazing.

Yeah. Yeah, exactly. And I think especially AI will start playing a big role in these rules. As I said, it's very difficult to design these rules, so it's a bit trial and error, these local rules that you design for how the agents, or the robots, should behave. So there is no clear way to translate them to the emergent behavior.

So I suspect that a lot of AI will come into play here. Maybe not for completely designing the rules, but maybe for kind of learning, on the fly, the parameters of those rules. Let's go back to the swarm of firefighting robots. Maybe they detect that the fire is smaller and they kind of need to adjust their flying parameters.

So you would have an AI in the background which will learn this, and it'll tell them, okay, if you want to fly in a tighter formation, you need these parameters, all of you. Then all the parameters are kind of transmitted, or learned on the fly, and then they adjust. You would have AI without the AI assuming direct control over them.

Just having an AI in the background that's learning these things, improving it.

It's like a knowledge repository where the drones can go, Hey, I need to update information. What do you got? kind of thing.

Yeah. Yeah. Just, like, to tell them, okay, now you need to change like this a bit in order to get into a... well, it'll not tell them to get into a tighter formation, but it'll tell them, just adjust these parameters, and that will result in the whole swarm coming into a tighter formation.

Yeah. Yeah.

To put out the fire.

And also, like, one of the things that I find the most fascinating about this is this whole idea of there being no single point of failure. Because a lot of the systems that we build today... and we see this, I mean, so I live in the world of cloud computing, and you know, we say, oh, the cloud, the cloud, the cloud, and we, you know, we put a lot of stuff and a lot of faith in it because it is very robust.

But occasionally a data center goes down, or, like, a CDN provider, so a content delivery network, goes down, and suddenly we have this massive crash across a whole bunch of services. And this is the opposite. Like, if something crashes, like you were talking about, say the firefighting drones, if a third of these drones suddenly die because they get too close to the fire, the rest of 'em are still gonna do the exact same thing; they're still good.

Right. That part of it I find extremely fascinating because that is how to build robust systems. Am I getting that right?

Absolutely. Absolutely. I mean, cloud computing is based on redundancy, basically. That's how you make sure. And you can think of the swarm as endless redundancy. So as long as the swarm is there, there is redundancy.

Drones can die off and the whole thing would survive and would carry on with whatever mission it has. So for me, the most fascinating thing that I learned from this whole experience of working in swarm robotics is that we need to start to think about technology fundamentally differently from how we do now, about how we engineer things, how we design things.

Let's say, instead of building a building by designing it first, and then you first put the carriers, you put this, you put that, you just release robots that will do it for you. You give them the design, and then they can reverse engineer the rules, and they kind of build it for you. Yeah. You know? So it's a completely different way of thinking.
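The redundancy argument above can be put in rough numbers. All figures here are made up for illustration: compare a single firefighting plane with a 10% chance of failure against a swarm of 30 far less reliable drones (30% individual failure each) where the mission still succeeds as long as, say, any 15 drones survive.

```python
from math import comb

# Single point of failure vs. swarm redundancy, with illustrative numbers.
# The swarm succeeds if at least `needed` of `n` drones survive, treating
# individual drone failures as independent (a simplifying assumption).

def swarm_success(n, p_fail, needed):
    """P(at least `needed` of `n` drones survive), via the binomial tail."""
    p_ok = 1.0 - p_fail
    return sum(comb(n, k) * p_ok**k * p_fail**(n - k)
               for k in range(needed, n + 1))

plane_success = 1.0 - 0.10            # one aircraft, 10% failure rate (assumed)
swarm = swarm_success(30, 0.30, 15)   # 30 flaky drones, any 15 suffice (assumed)

print(f"plane: {plane_success:.2f}  swarm: {swarm:.4f}")
```

Even though each drone is three times less reliable than the plane, the swarm as a whole comes out far more dependable, which is the "endless redundancy" point in miniature.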

I just, I'm thinking the building, we can get some interesting designs outta that.

Yeah.

But yeah, that's certainly a very interesting way of looking at it. Cause you... sorry, my brain is just really processing this, cuz it's completely reverse engineering everything I know. So I'm a software developer by trade, right?

And you go about it as, okay, what's my end goal? And you break it down into bits, or, I'm gonna build this part now, and then this part and this part. Well, you're saying essentially: no, give us the whole thing, what the end function or end design is, and let them figure it out. How would I go about this in a programming way?

That's what I'm like, How do I break it down? You're saying Turing Patterns, is that what would be prevalent?

No. I mean, Turing Patterns were just kind of a proof of concept that we used. Sure. Because it's one of the very dominant principles in biology and developmental biology. So we just took a very well known principle from there.

So you would have, I mean, depending on the application, and that's where the complexity comes from. Depending on the application, depending on the environment you need to interact with, all of these things need to be engineered differently.

Yeah, yeah, for sure. It's not like you build one swarm code bot and then that can do everything. Different swarms will have to be made in different ways is what we're saying, right?

Yes, because it's also dependent on the hardware that the swarm is built from. It depends on many things. And I mean, ideally, an ordinary person would maybe go to the shop and say, okay, I need 15 drones, I dunno.

I want them to weed out my garden, whatever. And they give you a set of drones. You just release them in the garden. They know that they have to kill the weeds, they kill it, they come back and that's it. Yeah. You would have something like that, you know, ideally. Yeah. Yeah. That you could be able to just go to the shop.

Get them off the shelf, tell them what you need, they just put it in, you just release it. And...

Interesting. So we would have Amazon for swarms.

Yeah, exactly.

So it's not just that there's gardening tools; there's gardening swarms.

Yeah, exactly. I mean, we'd have, no, gardening bots, and they go and they zoom in directly on the plant and they decide whether they kill it or not.

No, you just release this and it runs around and it's constantly running around. It goes to charge when it wants. And then it comes back and then still kind of, you know, patrols the garden.

Yeah. Oh, that's brilliant. Cuz I'm looking right now to get robot mowers to mow my lawn, and I can just imagine a swarm of them.

That would be amazing. But, yes, I gotta come back to probably the most fundamental thing that, I'm guessing, especially medicine and doctors have been trying to do in research forever, and this is this idea of self-healing. Like, is it possible to start regrowing limbs, or fixing hearing loss, or restoring sight, or any of those sorts of things that, you know, would have a profound impact on humans with this technology?

Is this where we're headed? You were talking about cancer curing, which is pretty cool.

I mean, my answer to that would be: probably. We would also be thinking in terms of regenerative medicine. Well, I'm thinking it'll never be, like, a clear-cut solution, you know, where we just have robots and we put in the robots and that's everything.

Usually these things are much more complex, and it would need to also interact with the cells, and then maybe synthetic biology. So everything becomes a bit, you know... it won't be a robot, or a set of robots, probably, that you attach to a damaged finger and then it regrows. It'll be a whole set of different tools that you need.

To use. Okay. Because bodies are quite complex and that's very

Yes. Yes. Very complex.

Yeah. And things might get out of control. Yeah.

Yeah. Well that's I'm gonna get to that in just a second, but it is one of those things we desperately want to be able to regenerate something like sight. Yep. If we lost sight in an accident or a disease or whatever it might be.

And you're saying we have, say, with the bacteria, that it knows where the cancer cell is because of, you know, the programming of it. Well, could we have some that know... say something like bacteria maybe, or nanobots, or whatever we can come up with, that knows what the DNA is and where they are, and, oh, I'm at a finger.

I need to do X, Y, and Z as a one part of the swarm, and the swarm itself will have this greater, you know, goal of recreating that digit or that finger. I know it sounds very scientific signs sci-fi, if I could speak, sounds very sci-fi to me.

It is quite sci-fi but it's also not just sci-fi, I would say.

Everything is slowly, you know, moving into the realm of the quite possible. It's no longer just a complete abstraction in the sci-fi novels.

Yeah, it is fascinating. Now, you did touch on what I was gonna ask you about next, and that is what happens if this sort of gets outta control? Because you have said several times now, you know, while we were speaking that this is somewhat unpredictable.

Like, we don't know exactly where this is gonna go, cause they're on their own and it is swarm behavior. So what happens then? Can it sort of get outta control or go a bit wrong?

Well, first, hopefully you have a kill switch installed. Yeah, I think that's one of the... No, but joking aside, with all of these robots, although they don't have central control, there is a way for you to interact with them.

No, you're not taking away that possibility. So in some sense, indeed, like with the swarm of robots I was working with: if I wanted at a certain moment to just shut it down, all I needed to do was press a button and a signal gets transmitted. All of them are down. That's it. Right, right.
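The kill switch Ivica describes can be sketched in a few lines. This is a hypothetical toy model, not code from his research: each agent acts on its own during normal operation, but every agent also listens for one broadcast shutdown signal, so no central controller is needed until you press the button.

```python
# Toy sketch of a decentralized swarm with a broadcast kill switch.
# All names (SwarmAgent, broadcast, "SHUTDOWN") are illustrative assumptions.

class SwarmAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.running = True

    def step(self):
        # Each agent's autonomous behavior (patrol, clean, charge)
        # would run here, with no central coordination.
        return self.running

    def on_broadcast(self, message):
        # The kill switch: every agent honors the shutdown broadcast,
        # even though nothing directs its normal behavior centrally.
        if message == "SHUTDOWN":
            self.running = False

def broadcast(swarm, message):
    """Deliver one message to every agent, modeling a radio broadcast."""
    for agent in swarm:
        agent.on_broadcast(message)

swarm = [SwarmAgent(i) for i in range(10)]
broadcast(swarm, "SHUTDOWN")
print(all(not a.running for a in swarm))  # True: every agent has halted
```

The point of the sketch is that decentralization and a kill switch are compatible: the shutdown path is the one piece of the system that is deliberately global.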

But we need to start thinking beyond this, and about how to preemptively prevent bad behavior when we engineer it. I think we're currently just at the beginning of it, of all these principles that should be followed. There was recently a checklist that I read about: okay, these are exactly the things we have to be careful of when we design these kinds of swarms.

So potentially it could be very damaging, especially if you release something which has the power to, I don't know, self-reproduce, let's say bacteria or whatever, and which you don't have a kill switch for. Or any kind of nanoparticles which are self-reproducing. Then you can imagine releasing them in the environment, they interact in unpredicted ways, and it's very difficult for you to get rid of them.

For bigger robots, I don't worry, because we can always just, you know, essentially shoot them down. They will not reproduce. But for the smaller scales we're talking about, that might be somewhat worrying, I think.

Yes. I have seen movies about that.

Yeah, I know exactly. I mean, the other potential thing is, you know, a lot of these are basically computers.

You know, a lot of these robots are in some sense computers, and as such can be hacked, and they can be purposefully misused or weaponized. So basically it would be problematic if you think of a civilian population, a civilian city, where you have different swarm robots cleaning your garden or whatever.

No? And then their behavior can be adjusted by a malicious actor so that they start behaving in very bad ways. And then it would be extremely difficult to get control over this, because not all the robots will be under the same central control. And if they can communicate between themselves, then they can pass on viruses and whatever.

So they can indefinitely reproduce this malicious behavior wherever they have access to other swarm agents they can communicate with.
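The spreading Ivica describes behaves like an infection on a communication graph: a compromised agent can reach everything connected to it through peer-to-peer links. A minimal sketch, assuming a hypothetical swarm where each agent only talks to its listed neighbors:

```python
# Hypothetical model (not from the episode): malicious behavior spreading
# through a swarm via peer-to-peer communication links.

def spread_infection(neighbors, compromised):
    """Return the set of agents reachable from the initially compromised
    ones through communication links (a breadth-first traversal)."""
    infected = set(compromised)
    frontier = list(compromised)
    while frontier:
        agent = frontier.pop()
        for peer in neighbors.get(agent, []):
            if peer not in infected:
                infected.add(peer)
                frontier.append(peer)
    return infected

# A small swarm: agent 0 talks to 1 and 2; agent 2 talks to 3; 4 is isolated.
links = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}
print(spread_infection(links, {0}))  # {0, 1, 2, 3} — agent 4 stays clean
```

The takeaway matches his point: without central control, containment depends on the communication topology, since the infection stops only where the links stop.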

Yeah. Jeepers. So my garden swarm can get infected by my pool swarm. That's not good. Yeah.

And then they don't wanna clean anything. Geez.

Yes. And then everything will be dirty.

No, I can see security is definitely a problem for this, now that you mention it.

And it's probably one of those fields where, like, I have a good friend, he's on an episode of the show as well, Troy Hunt, and he's always said that he's never gonna run out of a job. He's a security researcher, right? Cause there's always gonna be something that is not secured, or needs to be secured, or isn't secured well enough. And I can definitely see that in this case.

Like, how would you secure it? And what about firmware updates? You suddenly gotta update like a million swarm devices. Yeah, there's definitely some challenges. Okay, so should we, just for fun, try and circle back to the moral technicality? Do you wanna change your answer based on what we just talked about?

I think it's even less likely I'd do it now, in my opinion. I mean, talking about security.

I mean, yeah, I guess yes. I guess I can say I've changed my mind for the specific scenario. But for the technology as a whole, I think, like with any technology, we have issues around how it's gonna be used and whether it will be weaponized.

So probably it'll be weaponized, because that's already kind of what's happening in some sense. Yep. But, you know, that shouldn't stop us from developing anything, no?

I wouldn't have thought so. And it's interesting cuz we often ask at the end of the episodes, is this a technology basically that has legs? Are we gonna keep doing it?

Are we gonna not really bother? And to me, this sounds like it has too many benefits not to keep researching and developing it. But I dunno, I don't see us stopping, do you?

Well, the simple answer is no, but I think it won't stop, because, also, I feel like now, especially in technology, things are interacting too much.

You have inspiration from one field flowing into the other. So everything is becoming interconnected, and let's say you decide to shut down swarm robotics, the same kind of principles and way of thinking will just be transferred to another area where they can be used, no? Because swarm robotics is basically based on these very general principles from nature.

So it's unavoidable, in some sense, that we will try to engineer them and to adjust them to the way we want them to behave.

Yep, I think I agree. Some of the things that you've mentioned, they're just so good if we can make them happen. And security things aside, rogue robots aside, that's the same problem we have everywhere, whether it's the internet or cloud computing or cars or whatever.

We're always gonna have some sort of security issues we need to deal with. I definitely think this has great promise. Like, wow. On every scale. Every scale.

Yeah. Absolutely. Absolutely.

So, anything you wanna add at the end, Ivica, that we haven't covered?

Yeah, maybe just to add one interesting thing that I observed when I worked with them: basically, you have your robots, and after a while you start treating them as pets.

No? They all have different behaviors. They all have, you know, different kinds of characters and flaws. And then I started marking them with spots, to know which one is more problematic, to maybe take it out sometimes. Yeah, right. So it's very interesting how we also relate to technology.

No? Although it's a completely non-human-looking entity, let's say this robot, you still tend to build a relationship with it and, you know, treat it like a pet.

Yeah. You go, ah, Michelle again. She's always on over there. Geez

Yeah, exactly. I had a robot that was always kind of running off. Probably something was wrong with his sensor, so he was just constantly running off and I had to mark him. But still, I used him.

No, it's just like, I don't want to exclude things.

That is brilliant. No, thank you so much, that's awesome. Thanks for your time, Ivica. That's been very educational. I'm gonna have to go and read more about this.

Yeah, well, thank you for inviting me. It was very good to talk to you, and I hope we informed the listeners and made it interesting.

Yeah.

Oh, absolutely.

Yeah, for sure. So, that is all for this time. If you like the episode, consider subscribing to the show. We are available wherever you find good podcasts. Also give us a review, which will help others find the show as well. Tune in again next time for a conversation about what is technically possible.