Artificial Intelligence: lessons on trust from the prisoner’s dilemma

A trustworthy environment is essential to empower intelligent beings to make smart decisions


Vaqar Khamisani February 27, 2020

The coming world will see robots and humans interacting with one another to achieve their respective goals. In this atmosphere of enhanced coupling between intelligent entities, it is important for them to establish dependable relationships. Although the current trend in artificial intelligence is to develop ever smarter systems, research shows that cleverness alone is grossly insufficient to produce the desired outcomes. To achieve optimal results in a multi-agent environment, machine learning systems must not only be intelligent but also collaborate with reliable partners.

To develop a framework of trust amongst machines, we can take inspiration from human societies, which have perfected collaboration over many years of evolution. An insightful way to examine the rules of engagement between people is through a popular approach called game theory. Pioneered in the 1940s by the renowned mathematician John von Neumann, it has since grown into one of the leading ways by which economists understand rational behaviour at both the micro and macro level.

Within the field of game theory, the prisoner's dilemma best describes a simulated setting that underscores the vital role of trust in decision making. In this game, the players act out the roles of two suspects who are held in separate cells after being caught by the police following a robbery. Both are considered credible suspects, but the police do not have enough evidence to charge them with the crime. Because of this predicament, each is individually offered the following proposal. If one of them defects (D) to the police, the defecting suspect will get a reduced sentence of one year in jail whereas the other will get seven years. If both defect (D), they will each get five years in jail. Finally, if neither defects and they stay loyal (L) to each other, they will each get three years in jail.
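For concreteness, the jail terms described above can be captured in a small payoff table. The Python sketch below is purely illustrative (the dictionary name and move labels are assumptions made here); it records the years in jail for each pair of choices.

```python
# Payoff table for the prisoner's dilemma described above.
# Each entry maps (player 1's move, player 2's move) to the years
# in jail served by (player 1, player 2). 'L' = stay loyal, 'D' = defect.
PAYOFFS = {
    ("L", "L"): (3, 3),  # both stay loyal: three years each
    ("L", "D"): (7, 1),  # player 1 loyal, player 2 defects
    ("D", "L"): (1, 7),  # player 1 defects, player 2 loyal
    ("D", "D"): (5, 5),  # both defect: five years each
}
```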

We can employ a game-theoretic approach called minimax to understand how the two captives might respond through a logical decision-making process. In the scenario presented above, each individual has two options: to defect or to stay loyal to their partner. If player 1 chooses to stay loyal, the outcome could be as bad as seven years' imprisonment, since player 2 could defect and get away with just a one-year sentence. On the other hand, if player 1 chooses to defect, the worst-case outcome is five years. Incidentally, player 2 will independently go through the same reasoning process. Hence, of the two options, the rational decision that minimises the worst-case possibility is for both players to defect and each receive five years' imprisonment.
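As a rough sketch of that worst-case reasoning, the snippet below (reusing the illustrative payoff table from above, with hypothetical names) computes, for each of a player's own moves, the harshest sentence the opponent could inflict, and then picks the move whose worst case is least bad.

```python
# Illustrative payoff table from the sketch above: years in jail for (me, opponent).
PAYOFFS = {("L", "L"): (3, 3), ("L", "D"): (7, 1),
           ("D", "L"): (1, 7), ("D", "D"): (5, 5)}

def minimax_choice(payoffs):
    """Pick the move whose worst possible sentence is the smallest."""
    worst_case = {
        my_move: max(payoffs[(my_move, their_move)][0] for their_move in ("L", "D"))
        for my_move in ("L", "D")
    }
    return min(worst_case, key=worst_case.get), worst_case

choice, worst = minimax_choice(PAYOFFS)
print(choice, worst)  # ('D', {'L': 7, 'D': 5}): defecting caps the risk at five years
```

Since both players run the same calculation independently, both land on defection, which is why the game settles at five years each.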

Although the minimax approach explains the rational course, it is straightforward to see that it converges to a suboptimal outcome. The best result for the pair would have been to stay loyal and get three years each. However, therein lies the paradox: if either of them individually chooses to stay loyal, how can they be sure that the other player will not cheat on them? If one of them stays loyal and the other defects, the loyal player will be incarcerated for seven years while the defector will walk away after just one year. In other words, without mutual confidence, even an intelligent person has no option but to settle for an outcome inferior to the best possible result.

Let us now suppose that before the prisoners made up their minds, the police offered a 'phone a friend' provision to help them decide. If they availed themselves of this offer, we would have two brains at work on each end. Interestingly, this beefing up of mental capacity on each side would do little to change the outcome, as the rational choice would stay the same. Hence, merely enhancing intellectual capability without establishing two-way trust is insufficient to achieve the global optimum.

There are many instances of how the prisoner's dilemma plays out in real-life scenarios. An oft-quoted example is that of two nations locked in an arms race. At any given time, each country has a choice to spend on its military (M) or, alternatively, to fund its public welfare (W) projects. The ideal outcome for both countries is to spend on people's welfare; however, each is afraid to do so, lest the other nation not do likewise. Therefore, a lack of mutual trust causes both parties to settle on the undesirable choice of increased spending on defence projects. The situation is analogous to both players defecting in the prisoner's dilemma game described previously. In this instance, the deadlock is usually mitigated by confidence-building measures, treaties, and neutral inspections that help both countries divert their resources to welfare and achieve better outcomes.

Although at a macro level laws and treaties are formulated to minimise doubts between competing parties, this is less likely to happen at a personal level. Imagine attempting to collaborate with a colleague in your office on various tasks and projects. In this case, every day presents itself as a new instance of an iterated version of the prisoner's dilemma, in which both individuals must repeatedly decide whether to collaborate or compete. Therefore, in the absence of formal regulations, it is the history and pattern of repeated mutual engagements that supports the formation of trustworthy relationships.

To investigate the effectiveness of various interaction strategies for the iterated version of the prisoner's dilemma, a renowned political scientist named Robert Axelrod conducted a novel experiment. He reached out to experts to source various playing tactics, which were then made to compete programmatically. For instance, one approach sent to him, called Jesus, represented a player that always stayed loyal. Another, called Lucifer, referred to an individual that would always defect. At the end of the competition, the strategy that outperformed all the rest was called Tit for Tat. It starts by being loyal and then mirrors whatever the opponent did in the previous move. A few years later, a relatively more forgiving version called generous Tit for Tat was introduced. Despite facing tough competition from newer and more specialised approaches, generous Tit for Tat remains one of the best general-purpose strategies for the prisoner's dilemma.
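The round robin below is a minimal sketch in the spirit of such a tournament, not Axelrod's actual code: the jail terms from the story above serve as payoffs (fewer total years is better), and the 10% forgiveness rate used for generous Tit for Tat is an illustrative assumption. With only these four entrants, the final ranking need not match Axelrod's result, which emerged from a much larger field of strategies.

```python
import random

# Payoffs reuse the jail terms from the story: years served by (A, B).
# Lower accumulated totals are better.
PAYOFFS = {("L", "L"): (3, 3), ("L", "D"): (7, 1),
           ("D", "L"): (1, 7), ("D", "D"): (5, 5)}

def jesus(my_history, their_history):
    return "L"                                           # always stays loyal

def lucifer(my_history, their_history):
    return "D"                                           # always defects

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "L"   # mirror the previous move

def generous_tit_for_tat(my_history, their_history):
    if their_history and their_history[-1] == "D":
        return "L" if random.random() < 0.1 else "D"     # forgive 10% of defections
    return "L"

def play_match(strategy_a, strategy_b, rounds=200):
    """Total jail years each strategy accumulates against the other."""
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFFS[(move_a, move_b)]
        years_a += payoff_a
        years_b += payoff_b
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

strategies = {"Jesus": jesus, "Lucifer": lucifer,
              "Tit for Tat": tit_for_tat, "Generous Tit for Tat": generous_tit_for_tat}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        if name_a < name_b:                              # play each pairing once
            years_a, years_b = play_match(strat_a, strat_b)
            totals[name_a] += years_a
            totals[name_b] += years_b

# Fewest accumulated years wins this toy round robin.
print(sorted(totals.items(), key=lambda item: item[1]))
```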

The emergence of Tit for Tat as a dominant strategy is heartening, as it was able to defeat the selfish and retaliatory approaches that were also thrown into the mix. Tit for Tat has several aspects that are particularly appealing. For example, when it plays against Jesus, it always stays loyal, resulting in an optimal outcome for both. On the other hand, it quickly becomes combative in a match against Lucifer. Pitted against itself, it is likely to converge to a mutually loyal state and achieve ideal results. Understandably, therefore, researchers have compared Tit for Tat with reciprocal altruism, an evolved biological phenomenon in which members of a species collaborate in anticipation of the favour being returned later.

The world of technology is relentlessly marching towards higher echelons of machine intelligence using the most advanced algorithms. During this journey, it is important to remember the key lesson of the prisoner's dilemma: intelligence alone is inadequate to deliver ideal outcomes. As investments in artificial intelligence lead to a society reliant on a multitude of super-smart agents, it is imperative that these agents are also designed to establish trust and collaborate to deliver optimal results for us.
