
Will we all live in the Matrix eventually? An Introduction to Multi-Agent Systems

Blog Post created by nikodem on 22-Jun-2017

This title seems a little weird at first, especially because if the Matrix movie were showing us life as it is, the title should be, “When will we wake up from the Matrix?”. Fair point, but I would claim that the Architect has not finished building it yet. Now, I have to be careful, because the movie is full of symbolism and it is very easy to read something into it that is completely absurd, or, even worse, the omnipresence of symbols allows any kind of interpretation without the chance to falsify it.

 

What made me think of the movie was my study of multi-agent systems (MAS). Coincidentally, Agent Smith, or rather a whole army of Agent Smiths, is a wonderful example of a multi-agent system, but we will come to that.

 

Figure 1: Neo battling an army of Agent Smiths [1]

 

Let’s first clarify what a MAS is. The term comes from computer science and describes a system in which agents interact with each other. An agent is defined as a complete software or robotic entity designed to perform a task or follow an objective. It has the capability to sense its environment, evaluate it and react, or the other way around: act proactively, see how the environment reacts and evaluate that. This is a form of artificial intelligence (AI), more precisely of distributed artificial intelligence (DAI), because a MAS requires multiple actors that act in the same environment and interact with each other.
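
To make that sense-evaluate-act cycle a bit more tangible, here is a tiny Python sketch of a single agent. Everything in it (the dictionary standing in for the environment, the threshold rule, the function names) is my own illustration, not code from any particular agent framework.

# A minimal sketch of a single agent's sense-evaluate-act cycle.
# The dict-based environment and the threshold rule are illustrative
# assumptions, not taken from a specific MAS framework.

def sense(environment):
    """Observe the part of the world the agent can see."""
    return environment["trash_here"]

def evaluate(observation, threshold=0):
    """Decide what to do, judged against the agent's objective (clean up)."""
    return "compress_trash" if observation > threshold else "move_on"

def act(environment, decision):
    """Change the environment according to the decision."""
    if decision == "compress_trash":
        environment["trash_here"] -= 1

env = {"trash_here": 3}
while env["trash_here"] > 0:
    action = evaluate(sense(env))
    act(env, action)
    print(action, "-> trash left:", env["trash_here"])

A single agent like this is not yet a MAS; that only starts once many such agents share an environment and talk to each other, which is where the rest of this post goes.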

 

There is an important difference between a MAS and a system with many subsystems (the slaves) all performing different functions under a central control (the master). In the latter, all logic and control sit in one location. There is communication between the subsystems and their master, but, you already guessed it, the master makes the decisions and commands its slaves. In a MAS, this is different. The logic is distributed; each agent makes its own decisions. As we are talking about a system with many components, there is communication between the agents involved. The actions of an agent depend on its own state, the state of its environment and the collective actions of all agents participating in the shared environment. The collection of often quite simple individual actions can result in enormous complexity when the system is viewed as a whole. This phenomenon is known as “emergence” [2]. Two essential characteristics can be attributed to MAS: they are inherently distributed and inherently complex. If they were not complex (or not able to solve complex problems), why even bother setting up a MAS in the first place instead of using central control [3]?

 

OK, enough with all this abstract blabbering, here is an example. Take Wall-e, that cute little robot from the Disney Pixar movie of the same name, which has been left alone on a planet (it could be Earth) to clean it up by compressing all the trash into cubes. For a robot, it enjoys an enormous amount of autonomy. There is nobody controlling it, there is nobody supervising it, not even an authority the robot needs to report to. It was simply designed and programmed with the objective to clean the planet and let loose. A similar example is Eve, the other robot from the same movie, whose objective is to scan the messed-up planet for biological life.

 

      Figure 2: Wall-e and Eve doing what they are designed to do [4][5]

 

The two examples above can be classified as agents, but we are not in the domain of MAS yet. The “only” thing we see are robots which operate independently, without interacting with anybody or anything except the environment they are in.

 

I would like to play around with Wall-e and Eve a little bit more and see how these two robots could become part of a MAS before finally turning to the Matrix and Agent Smith (YES, it is going to get dark!).

 

Imagine that, instead of leaving just one Wall-e, we would leave a whole troop of Wall-e robots on a completely messed-up world to clean it up. Each Wall-e would have the objective to clean the planet, just like before. A dumb way of doing it would be to simply send each robot out to start compressing all the trash into cubes. Sometimes robots would get close to each other, and we can assume they are smart enough to avoid collisions, but that would be all their interaction. This cannot even be called agent interaction, because from a robot’s point of view there is no difference between another Wall-e and any other random object it has to avoid a collision with. It is simply responding to the environment. Eventually the planet will get clean, but when is the question.

 

How can we turn this bunch of waste compressors on wheels into a complex cleaning task force using the MAS approach? If every Wall-e knew which places have already been cleaned by other Wall-e robots, how much waste is left in different locations and how many other Wall-e robots are currently working in those locations, it could make an informed decision about where to continue its job most effectively. It is even possible to think of task distribution: one set of robots compresses the waste and another transports it away. The decision about which task each Wall-e performs is not made centrally; each robot evaluates individually what brings the most utility to the whole cleaning process. Remember that emergence, i.e. the formation of complexity, is the result of relatively simple (inter-)actions by multiple actors in a system. For the Wall-e robots this could mean that the information each Wall-e needs to send is its location, the amount of waste at that location and its role (compression or transport). The information it should receive is a map with the amount of waste, the Wall-e robots and their corresponding roles. One problem appears at this point: how is all the individual information turned into such an information map? For that, a central agent is necessary. This is the agent all the other agents (the Wall-e robots) communicate with. The central agent collects all the information, processes it and sends the aggregated map back. Notice the distinct difference from a system with central control. The decision to allocate resources, i.e. sending Wall-e robots to a specific location, is taken by each Wall-e independently and leaves the central agent with a fairly easy task. The example I just outlined describes a MAS where agents work in cooperation. The alternative is a MAS where agents compete. To explain that scenario, we introduce Eve to our messed-up planet.
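
To give a feel for how little machinery this needs, here is a rough Python sketch of the cooperative setup: each Wall-e reports its location, local waste and role to the central agent, receives the aggregated map back and then decides on its own where to go. The message format, the named locations and the simple “waste per robot” utility rule are my own assumptions for illustration.

# A rough sketch of the cooperative Wall-e scenario described above.
# The message format, the named locations and the utility rule are
# simplifying assumptions made for illustration only.

from collections import defaultdict

class CentralAgent:
    """Only aggregates reports into a shared map; it makes no decisions."""
    def __init__(self):
        self.waste_map = {}                 # location -> reported waste
        self.robots_at = defaultdict(int)   # location -> number of robots

    def receive_report(self, location, waste, role):
        self.waste_map[location] = waste
        self.robots_at[location] += 1

    def broadcast_map(self):
        return dict(self.waste_map), dict(self.robots_at)

class WallE:
    def __init__(self, name, location, role="compression"):
        self.name, self.location, self.role = name, location, role

    def report(self, central, local_waste):
        central.receive_report(self.location, local_waste, self.role)

    def decide_next_location(self, waste_map, robots_at):
        # Each robot decides for itself: go where the waste per robot is highest.
        def utility(loc):
            return waste_map[loc] / (1 + robots_at.get(loc, 0))
        return max(waste_map, key=utility)

# Usage: robots report their local observations, then decide independently.
central = CentralAgent()
waste = {"north": 8, "south": 2, "east": 5}
robots = [WallE("w1", "north"), WallE("w2", "south"), WallE("w3", "north")]
for r in robots:
    r.report(central, waste[r.location])
waste_map, robots_at = central.broadcast_map()
for r in robots:
    print(r.name, "heads to", r.decide_next_location(waste_map, robots_at))

Note that the central agent here does nothing but bookkeeping; all decisions stay with the individual robots, which is exactly the difference from the master-slave setup described earlier.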

 

What would it be like if a troop of Wall-e robots and a troop of Eve robots came together on the same planet with different objectives? First, let’s see how Eve fits into this scenario. In the movie, Eve’s role is to scan the world and report to the humans in space when it finds biological life. We can copy-paste the MAS we described for Wall-e and use the same approach for Eve. A troop of Eves now cooperates by sharing local information and receiving global information that is formed from the set of individual local observations. This turns the search for biological life into a structured activity, and the probability of finding something in time is greatly increased. We can even go one step further and let the Wall-e and Eve robots communicate with each other so that everyone profits from the collective intelligence. This demonstrates how scalable a well-designed MAS can be.

However, Wall-e and Eve are not going to cooperate in every respect. Both robots will have to recharge eventually. Let’s assume there is a charging station, which is basically a battery charged through solar cells. The charging capacity is limited and not all robots can be charged simultaneously. This situation creates competition, and the robots now start to compete against each other for the chance to charge. Please bear in mind that robots are not selfish like we are and will not do everything possible, like starting to cry that their children have nothing to eat, to get access to the charging port. They will compete based on their objective. That is why it would not make much sense to compare the Wall-e robots with each other: all their objectives are the same, so they will have the same ranking, and the charging allocation might end up random, or simply a matter of placing all Wall-e robots that need charging in a queue.

The situation changes, however, when we have robots with different objectives; in our example these are (1) cleaning the planet and (2) searching for life. It would make sense to say that at the beginning we want to make sure the planet gets fairly tidy, so we prioritize keeping the Wall-e robots active, but as the planet gets cleaner we want more and more Eves to be actively searching for life. Right at the start, when we set the system in place, Wall-e and Eve are going to compete for the charging spots, but since the system prioritizes Wall-e’s objective over Eve’s, Wall-e will win the charging spots and we will see only Wall-e robots moving around. Over time more space becomes free, and we designed the system such that Eve’s charging priority increases with the cleaned space, so that once the Wall-e robots have freed up 50% of the space, half of the active robots are of each type. As the planet approaches the point where it is nearly completely free of trash, mainly Eves will be active and just a small number of Wall-e robots. A MAS can thus be designed to allocate (scarce) resources meaningfully in a simple way while retaining system complexity. In the case of Wall-e and Eve, we can say that the weighting factor is a function of the space freed from trash: as the free space grows, Wall-e’s weighting factor for charging goes down and Eve’s goes up.
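
If you prefer code over prose, here is a small sketch of that weighting idea. The linear weighting and the number of free charging slots are my own assumptions; the point is only that the priority of each robot type is a simple function of the cleaned fraction, and the charging slots go to whoever currently ranks highest.

# A sketch of the charging-priority rule described above: Wall-e's weight
# falls and Eve's rises as the cleaned fraction of the planet grows.
# The linear weighting and the slot count are illustrative assumptions.

def charging_weight(robot_type, cleaned_fraction):
    """Priority weight as a function of how much of the planet is clean."""
    if robot_type == "walle":
        return 1.0 - cleaned_fraction   # cleaning matters most early on
    if robot_type == "eve":
        return cleaned_fraction         # searching matters more later
    raise ValueError(robot_type)

def allocate_charging(waiting_robots, cleaned_fraction, free_slots=2):
    """Give the free slots to the robots whose objective is currently prioritized."""
    ranked = sorted(
        waiting_robots,
        key=lambda robot_type: charging_weight(robot_type, cleaned_fraction),
        reverse=True,
    )
    return ranked[:free_slots]

queue = ["walle", "eve", "walle", "eve"]
print(allocate_charging(queue, cleaned_fraction=0.1))   # early on: ['walle', 'walle']
print(allocate_charging(queue, cleaned_fraction=0.8))   # later:    ['eve', 'eve']

At 50% cleaned space the two weights tie, so the charging slots, and with them the share of active robots, end up split roughly evenly between the two types, just as described above.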

 

So where is the link between MAS and the Matrix? If you haven’t seen the movie (or trilogy): the Matrix is a virtual realm in which some people’s minds are trapped, and the world around them is simulated. Agent Smith, the antagonist of the movie, is a software virus that clones itself to fight the protagonist. In many ways Agent Smith exhibits the properties of an agent in a MAS. What a strange coincidence that he (it?) is also called “Agent” Smith. So, when we create MAS around ourselves, are we not in fact creating a Matrix we will all end up living in eventually? You could say that this is a typical representation of the horror scenario many envision when it comes to machine dominance over humans through advances in AI. I am not a pessimist and have my doubts about these kinds of horror visions.

 

Nor am I a pure optimist, and I would like to highlight where I think caution is warranted. It is quite possible that MAS will find their way into our everyday lives. I am currently studying them in the field of electricity grids, especially by looking at how smart appliances can become part of such systems. These appliances serve a certain objective for the user, but they could also participate in services for grid objectives (grid support, virtual power plants and so on). When we look at the messed-up planet full of Wall-e robots and Eves, we see complex behavior as a result of all the individual interactions. Even though the individual actions are fairly simple, the result is enormous. I find MAS fascinating, but what good does it do if the people subjected to it (meaning non-experts on the subject) lose a degree of autonomy and, instead of benefiting from the complex tasks it can provide, start to adapt to the structural changes that come with MAS? Wouldn’t that be a small victory for the machines?

 

By Nikodem Bienia

 

 

References:

[1] Agent Smith army picture: https://imgur.com/kcpGv

[2] Koen Kok; The PowerMatcher: Smart Coordination for the Smart Electricity Grid; 2013

[3] Gerhard Weiss (ed.); Multiagent Systems – A Modern Approach to Distributed Artificial Intelligence; 1999

[4] Wall-e picture: http://2.bp.blogspot.com/-kdPbdeaeeAw/Vkz44PaNhpI/AAAAAAAAC-4/0CrypwYSiG8/s1600/Screen-Shot-2012-09-10-at-13.29.41.png

[5] Eve picture: https://i0.wp.com/media2.slashfilm.com/slashfilm/wp/wp-content/images/evescan.jpg
