
Deviating from Traditional Game Theory: Complex Variations in Behavior and Setting

Jun Park

A literature review submitted for Advanced Topics in Cognitive Science taught by Professor

Mary Rigdon assisted by Robert Beddor

School of Arts and Sciences

Rutgers University – New Brunswick

May 4, 2015


1: INTRODUCTION

1.1 Objective

Game theory has been a popular area of research due to its wide application in fields that

concern social interactions between interdependent agents, and consequently has been repeatedly

studied by countless economists, computer scientists, cognitive scientists, and military

strategists. The purpose of this paper is to gather and explore recent research on advanced

models of game theory, with emphasis on complex and incomplete information scenarios in

games, methods for predicting strategies, reinforcement learning, and the cost of computation and its effect on behavior.

1.2 Basic Background of Game Theory

Game theory is the analysis of interdependent relationships between competitive agents

and the evaluation of strategies for dealing with these situations. Games between agents depend

heavily on how each agent acts and reacts, and thus a decision-making algorithm that considers the potential decisions the opponent can make is essential to solving a game theory problem. Also, an important aspect of game theory to note is that these agents are

considered to be rational agents, which means that they have clear preferences and always aim to

choose the decision that will most benefit themselves. Rational agents can be anything or anyone

that can make decisions based upon the scenario and information given to them, and can be

automata (machines) or humans.

A classic example of a game theory problem is that of the prisoner’s dilemma, which was

first developed by Merrill Flood and Melvin Dresher in 1950. In the scenario, two suspected

criminals, referred to as A and B, were caught for a minor offense such as vandalism and face a small amount of jail time. However, the investigator suspects that the criminals committed other


crimes previously and wants to arrest them for a major offense such as robbery (assuming they

did commit both crimes). The investigator tells each criminal of their possible options: if they both confess, they both get three years; if one confesses but the other denies, the one who confesses gets one year, but the other gets seven years; if they both stay silent, they both receive two years. Here, the optimal outcome for both criminals is two years each, which occurs if both stay silent. However,

knowing each of the possible choices (and assuming these criminals are rational agents), each

criminal will act according to his own self-interest. Then, given B’s possible options (confess or

deny), A will confess every time since it works in his favor regardless of what B chooses to do: if B were to confess, A can deny (receive seven years) or confess (receive three years); if B were to deny, A can deny (receive two years) or confess (receive one year). Thus, regardless of what the other criminal chooses to do, the action that each criminal will always follow through on is to confess, and neither will deviate from this action in any subsequent case since it does not benefit

them to do so. This outcome is called the Nash equilibrium, where players have nothing to gain

from deviating from their current strategy after considering the opponents’ actions.
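
To make the equilibrium concrete, the following minimal sketch (an illustration added for this review, not drawn from any of the cited sources) checks each outcome of the prisoner's dilemma described above for profitable unilateral deviations; only mutual confession passes the test.

    # Payoffs are years in jail (lower is better), indexed as (A's action, B's action).
    years = {
        ("confess", "confess"): (3, 3),
        ("confess", "deny"):    (1, 7),
        ("deny",    "confess"): (7, 1),
        ("deny",    "deny"):    (2, 2),
    }
    actions = ["confess", "deny"]

    def is_nash_equilibrium(a_action, b_action):
        """Neither criminal can reduce his own jail time by deviating unilaterally."""
        a_years, b_years = years[(a_action, b_action)]
        a_can_improve = any(years[(alt, b_action)][0] < a_years for alt in actions)
        b_can_improve = any(years[(a_action, alt)][1] < b_years for alt in actions)
        return not (a_can_improve or b_can_improve)

    for a in actions:
        for b in actions:
            print(a, b, is_nash_equilibrium(a, b))
    # Only ("confess", "confess") prints True, matching the argument above.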

1.3 Types of Game Theory and Common Terminology

Different types of game theory can alter the amount of information shared, the extent to

which one agent can influence another, and the possible rewards or consequences of each agent's actions. Game theory is typically split into two branches, cooperative and non-cooperative. In cooperative games the agents can communicate with one another, while in non-cooperative games they cannot. The prisoner's dilemma, as explained above, is a non-cooperative game, since the criminals cannot communicate with each other. Other examples of games with a game-theoretic framework include Rock-Paper-Scissors, Ultimatum Games, Dictator Games, Peace War Games, the Dollar Auction Game, and Trust Games, to name a few.


Another important game type to note is a negotiation game. Negotiation is a game in

which each player tries to come up with a mutually beneficial solution while still acting as a rational agent (and thus attempting to exploit the other agents). Several negotiation terms that are used frequently in the sources reviewed in this paper are briefly defined below.

An agent's preference, in economic terms, is how the agent values each alternative or outcome based on the amount of utility the outcome provides, or how much the agent feels rewarded with what they want (e.g., happiness, satisfaction). Utility is how useful

the outcome of the negotiation will be to the agent. In economics, it is equivalent to how much

people would pay for a company’s product.

A test domain consists of two parts, as defined by Chen and Weiss (2015, p. 2295): competitiveness and domain size. Competitiveness describes the distance between the issues that each agent has; the greater the competitiveness, the harder it is to reach an agreement. Domain size refers to the number of possible agreements, and as such, the efficiency

of negotiation tactics is crucial if the agent wants to resolve multiple issues.

The discounting factor is the factor by which the value of the potential profit in a negotiation decreases as time passes. The discounting factor starts at 1 and goes toward 0, and can only decrease the maximum utility that can be gained from the negotiation. A typical explanation of the discounting factor is that "a dollar today is worth more than a dollar tomorrow." The discounting factor becomes irrelevant for issues that do not concern the future (the factor stays at 1 for the whole negotiation).
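
As a rough illustration (the formula below is one common formulation in automated-negotiation settings, not a definition taken from the cited sources), the discounted utility of an agreement with utility u, reached at normalized time t under discounting factor delta, can be sketched as:

    def discounted_utility(utility: float, delta: float, t: float) -> float:
        """delta in (0, 1]; t in [0, 1]. delta = 1 means discounting is irrelevant."""
        return utility * (delta ** t)

    print(discounted_utility(0.8, 0.5, 0.0))  # 0.8 -- "a dollar today"
    print(discounted_utility(0.8, 0.5, 1.0))  # 0.4 -- "is worth more than a dollar tomorrow"
    print(discounted_utility(0.8, 1.0, 1.0))  # 0.8 -- factor stays at 1: no discounting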

Concession (or giving in to reach an agreement) in complex negotiations is strictly dependent on

the type of agreement and the amount of time that has elapsed since the negotiation began. As


explained by Williams, Robu, Gerding, and Jennings (2011, p.432), “there exists a trade-off

between conceding quickly, thereby reaching an agreement early with a low utility, versus a

more patient approach, that concedes slowly, therefore reaching an agreement later, but

potentially with a higher utility.” As such, discounting factors play a significant role in

influencing the agents' behavior, depending on whether they aim for a higher or lower utility outcome.

1.4 Evaluation of Efficiency of Proposed Strategies, Mechanisms, and Algorithms

Since the introduction of Generic Environment for Negotiation with Intelligent multi-

purpose Usage Simulation (GENIUS) by Hindriks et al. (2009), almost all new negotiation

studies have evaluated their agents' effectiveness through this program. With GENIUS, experimenters can configure negotiation domains, preference profiles, agents (both human and automated), and scenarios, and the program provides analytic feedback such as Nash equilibria, visualizations of data, and optimal solutions. The purpose of GENIUS is to facilitate research of game theory

applications in complex negotiations and practical environments.


2: RISING RESEARCH TOPICS

2.1 Model Prediction with Incomplete Information

Previous research of game theory applications consists mostly of simple models or

scenarios with predetermined information and preferences of opponent agents in restricted

environments. Since applying game theory to dynamic and interacting games can be very

difficult, researchers often over-simplified the games' conditions and environment to

gather data on the effectiveness of game theory. However, these environments are seldom

encountered in real-world situations, and “real-world decision-makers mistrust these over-

simplifications and thus lack confidence in any presented results” (Collins and Thomas, 2012).

Thus, recent studies have moved on to focusing on experiments with highly complex scenarios to

mimic behavior and to gather experimental data of applied game theory in realistic

environments. As defined by Chen and Weiss (2015, p. 2287), a complex multi-agent setting is characterized by three assumptions. First, agents have no prior knowledge or information

about their opponents such as their preferences or strategies, and thus cannot use external sources

to configure or adjust their strategies. Second, the negotiation is restricted in time, and the agents do not know when it will end. Thus the agents must consider their remaining chances to offer exchanges and also realize that the profit or reward from reaching an agreement decreases over time by a discounting factor. Third, each agent has its own private reservation value, below which an offer will not be accepted, and each agent can secure at least that amount by ending the negotiation before the time-discounting effect diminishes the profit to the reservation value. These assumptions create

complicated interaction models between the agents, but also serve to create more accurate

datasets for realistic situations.
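
As a rough sketch of what these assumptions imply for an agent's internal state (an illustration added for this review, not code from Chen and Weiss), note that the agent stores no opponent model and no deadline, only its own private parameters:

    from dataclasses import dataclass

    @dataclass
    class ComplexNegotiationAgentState:
        reservation_value: float  # private threshold: offers below this are rejected
        discount_factor: float    # rate at which the achievable profit decays over time
        # No field for the opponent's preferences or strategy (assumption 1),
        # and no field for the deadline, which the agent cannot observe (assumption 2).

        def acceptable(self, offer_utility: float) -> bool:
            return offer_utility >= self.reservation_value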


The study of complex and multi-agent negotiations has been a growing area of research in game theory since the inception of negotiation analysis (e.g., Raiffa, 1982). As such, a

difficult problem that frequently arises when facing complex scenarios is how to compensate for

the lack of prior knowledge of the opponent agent in choosing one’s strategy. Without knowing

the opponents’ strategies, preferences, or reservation values, an agent cannot rely on a dominant

strategy to gain advantages and instead must present an adaptive behavior throughout the game.

Consequently, recent research has focused on finding methods to model the

opponent’s behavior within the negotiation itself, along with its preferences and reservation

values. For instance, Lin, Kraus, Wilkenfeld, and Barry (2008) proposed an automated

negotiating agent that can achieve significantly better results in agreements and in its utility

capabilities than a human agent playing that role. Lin et al. (2008) based the agent’s model on

Bayesian learning algorithms to refine a reasoning model based on a belief and decision updating

mechanism. For the purposes of this paper, the algorithm will be explained from a grossly simplified perspective. The main algorithm for the agent is split into two major components – "the decision

making valuation component” and the “Bayesian updating rule component” (Lin et al., 2008, p.

827). The former inputs the agent’s utility function, i.e., a function to calculate utility or

potential, as well as a formed conjecture of the type of opponent that the agent is facing. Both

data are used to generate or accept an offer, based on how the inputs compare to a set of

probabilities describing whether the rival will accept them, or how the potential reward and loss relate to the reservation value. The latter component updates the agent's beliefs about the opponent, as well as about itself, based on Bayes' theorem of conditional probabilities. The two combine to create a working design mechanism that shows consistently successful results. Lin et al. (2008) tested their mechanism in three experiments in which they matched their automated agent


against humans, a different automated agent following an equilibrium strategy, and itself. The

results demonstrate superior performance of the agent against humans in every scenario with respect to utility values, and thus the researchers claimed to have succeeded in developing an agent that can negotiate efficiently with humans under incomplete information.
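
As a rough illustration of the Bayesian updating rule component (a simplified sketch for this review, not Lin et al.'s implementation; the profile names and acceptance probabilities are assumed for demonstration), the agent's belief over a premade set of opponent profiles can be updated from observed accept/reject decisions:

    def accept_probability(profile: str, offer_utility_to_opponent: float) -> float:
        """Assumed likelihood that each opponent type accepts an offer of a given utility."""
        thresholds = {"tough": 0.8, "moderate": 0.6, "soft": 0.4}
        return 0.9 if offer_utility_to_opponent >= thresholds[profile] else 0.2

    def update_belief(belief, offer_utility_to_opponent, accepted: bool):
        """Posterior over opponent profiles via Bayes' theorem, given one observation."""
        posterior = {}
        for profile, prior in belief.items():
            p_accept = accept_probability(profile, offer_utility_to_opponent)
            likelihood = p_accept if accepted else 1.0 - p_accept
            posterior[profile] = prior * likelihood
        total = sum(posterior.values())
        return {p: v / total for p, v in posterior.items()}

    belief = {"tough": 1 / 3, "moderate": 1 / 3, "soft": 1 / 3}  # uniform prior
    # The opponent rejected an offer worth 0.7 to them: "tough" becomes more likely.
    belief = update_belief(belief, 0.7, accepted=False)
    print(belief)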

A similar line of research has been taken by others to develop agent mechanisms that can

learn the opponent’s strategy in complex negotiations. Williams, Robu, Gerding, and Jennings

(2011) aimed to develop a profitable concession strategy using Gaussian processes to find

optimal expected utility in a complex (as defined above) and realistic setting. The algorithm for

the strategy is separated into three stages – 1) predicting the opponent’s concession, 2) setting the

concession rate for optimal expected utility, and 3) selecting an offer to generate or choose

(Williams et al., 2011, p. 434). The estimation process for the opponent’s concession or expected

utility is handled by a Gaussian process with covariance and mean functions, which provide

predictions and a measure of confidence in those predictions within real-time constraints. The process is evaluated for each particular time window of some duration to reduce both the effect of noise and the amount of input data (Williams et al., 2011, p. 434). After the prediction is

generated, the strategy utilizes Gaussian processes to set the concession rate by simulating the

probability that an offer of a given utility will be accepted by the opponent. When a target utility

is generated after the concession rate is set, the strategy creates a range from the target utility, for

instance, [µ – 0.025, µ + 0.025], and chooses a random value within the range as an offer. If no offer is available within the range, the range is widened until a suitable offer is found

(Williams et al., 2011, p. 435). The results, after being compared to the performance of the

negotiating agents from the 2010 Automated Negotiating Agents Competition, indicate that their

new strategy is superior in terms of accuracy and efficiency. For concrete numbers, an analysis


of self-play scores shows that this new strategy achieved 91.9 ± 0.7 % of the maximum

score possible, with the next distinct score achieved by IAMhaggler2010 (previous leading

strategy developed by Williams et al. in 2010) with 71.1 ± 3.4 % of the maximum score

(Williams et al., 2011, p. 437).
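
The offer-selection step described above can be sketched roughly as follows (an illustration added for this review, not the authors' code; the window width and widening step follow the example range quoted above, and the sample offers are assumed):

    import random

    def select_offer(offers_with_utility, mu, half_width=0.025, step=0.025):
        """offers_with_utility: list of (offer, utility_to_us) pairs; mu: target utility."""
        while half_width <= 1.0:
            candidates = [o for o, u in offers_with_utility
                          if mu - half_width <= u <= mu + half_width]
            if candidates:
                return random.choice(candidates)
            half_width += step  # no offer falls in the range: widen it and retry
        return max(offers_with_utility, key=lambda ou: ou[1])[0]  # fallback: best available

    offers = [("offer_a", 0.72), ("offer_b", 0.55), ("offer_c", 0.70)]  # assumed example data
    print(select_offer(offers, mu=0.71))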

However, in these previous research attempts to solve complex negotiation scenarios,

there are cases where unacceptable assumptions were made, which may have overestimated the

agents’ true effectiveness in real-life negotiations. In their most recent work, Chen and Weiss

(2015) pointed out two issues that were not fully addressed in previous studies. One issue was

that the way that the agents learn their opponents’ strategies is flawed, and the other issue was

that there is no effective existing decision-making algorithm for complex negotiations to concede

to the opponent's offers. Previous researchers undermined or overlooked the first issue because their methods of modeling were computationally cost-ineffective (which will be discussed in another section) or because they made impractical assumptions about the opponent before the negotiation began.

For example, in the research performed by Lin et al. (2008), the proposed Bayesian learning

method started with out-of-scope assumptions, as Chen and Weiss (2015, p. 2288) pointed out

that the experiment “assumed that the set of possible opponent profiles is known as a priori.”

This convenient assumption allowed comparative analysis to be made within the negotiation to

determine which profile the opponent resembles, but such an assumption goes against the conditions of a complex scenario, in which no prior knowledge about the opponent is brought

into the negotiation. Furthermore, the method fails to compensate for profiles that are not

contained within the premade set, and thus only produces results based on the model that is the

most similar to one of the set profiles. However, for realistic applications, the model proposed by

Lin et al. (2008) can be deemed appropriate, since a human agent could have prior experience


with other agents and may have learned their possible profiles. Hao et al. (2014, p. 46) also pointed

out that Saha et al. (2005) made their strategy under the assumption that agents negotiate over a

single item, rather than multiple items as in practical negotiation settings, which diminished the

complexity of the setting.

In addition, in the experiment done by Williams et al. (2011), Chen and Weiss (2015)

noticed that while optimizing its own expected utility based on the opponents’ strategy provides

strong results, the strategy suffers from the problem of “irrational concession.” Most likely, the

time window factor creates opportunities for error, as Williams et al. (2011, p. 434) stated: "small

change in utility for the opponent can be observed as a large change by our agent.” Thus, the

agent can misinterpret the opponent agent’s intentions (especially against sophisticated agents),

which can lead to the formation of high risk behavior that can potentially backfire on the agent

by conceding too quickly.

Therefore, Chen and Weiss (2015) claimed that the current methods of learning opponent

models are too inflexible or inefficient, and proposed a new adaptive model called OMAC* (opponent modeling and adaptive concession) that carefully accounts for all of the aforementioned errors. The OMAC* model contains an improved version of the Gaussian process algorithm with

the addition of discrete wavelet transformation (DWT) analysis for its opponent-modeling

component. Discrete wavelet transformation analysis allows noise filtering and function approximation, among other uses (Ruch and Fleet, 2009), through repeated representation of a signal and its frequency. In relation to OMAC*, DWT allows the agent to extract "previous main

trends of an ongoing negotiation” with smoother curves and more accurate mean approximations

as time elapses (Chen and Weiss, 2015, p. 2290). Combined with Gaussian processes, DWT

analysis has shown one of the highest capabilities to predict an opponent's behavior in real time.
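
As a rough illustration of how a DWT can expose the main trend of a noisy concession series (a one-level Haar transform written for this review, not the OMAC* implementation), keeping only the low-frequency approximation coefficients and reconstructing removes the high-frequency noise:

    import numpy as np

    def haar_smooth(signal: np.ndarray) -> np.ndarray:
        """One-level Haar DWT: keep the approximation, drop the detail coefficients."""
        pairs = signal[: len(signal) // 2 * 2].reshape(-1, 2)
        approx = pairs.mean(axis=1)       # low-frequency "main trend" of each pair
        return np.repeat(approx, 2)       # reconstruct with the details set to zero

    t = np.linspace(0, 1, 64)
    concessions = 0.9 - 0.3 * t + 0.05 * np.random.randn(64)  # noisy downward trend
    trend = haar_smooth(concessions)      # smoother curve tracking the same trend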


As for the OMAC*’s decision-making component, the researchers implemented an adaptive

concession-making mechanism which relies on the learnt opponent model. The component

handles how the agent will behave in response to counter-offers and also decides when the agent

should withdraw from the negotiation. The results (Chen and Weiss, 2015, p. 2298), through

experimental and empirical game theoretical analysis, suggest that the OMAC* outperforms all

previous agents, including IAMhaggler2011 (the new agent by Williams et al.) by a large margin

– OMAC* (rank 1, normalized score = 0.667 ± 0.01) versus IAMhaggler2011 (rank 9,

normalized score = 0.511 ± 0.005) (Chen and Weiss, 2015, p. 2298). The strength of OMAC*

also lies in its stable robustness, as the agent was able to perform well in non-tournament

(non-fixed) settings where agents can switch strategies in an ongoing negotiation.

As seen from these results, research on agents' performance in complex settings is

advancing and evolving at a promising rate. Future research on the topic can focus on even more

efficient learning techniques, more adaptive behaviors (perhaps on par with human flexibility),

and more fluent predictive abilities using Gaussian processes, DWT, and utility or mass

functions.

2.2 Reinforcement Learning Applications in Game Theory

Reinforcement learning, or learning through a system of reward and punishment, is an

effective strategy in training agents to perform repeated tasks to maximize their predictive

capabilities. As research on complex scenarios arose, reinforcement-learning based models

became increasingly popular due to their dynamic and adaptive ability to observe and calculate trends and to offer predictions that maximize the agents' rewards.

As an example, Collins and Thomas (2012) utilized reinforcement learning to search for

Nash equilibria in games using a dynamic airline pricing game case study. The idea was that


through episodes, or repeated play of the game, the agents within the game can estimate and

store the values of possible strategies and actions until enough data has been acquired to form a

policy that models the opponent’s behavior. The case study observed three types of

reinforcement learning techniques: SARSA, Q-learning, and Monte Carlo. As described by

Collins and Thomas (2012, p. 1167), the differences between the three methods are in how they

interpret and use data, and how they evaluate policies relevant to the scenario. SARSA and Q-

learning are called bootstrapping methods since they use the current estimated data to update

other estimates in a self-sustaining manner. Monte Carlo and SARSA are implemented as “on-

policy” methods where the current policy is evaluated, compared to Q-learning which is

implemented as an “off-policy” method that does not consider the current policy when

evaluating the algorithm. All three methods use a Q-value function that estimates the quality of an action a in a given state s, and choose the optimal option (i.e., the one with the maximum reward) for each state. This process is repeated constantly, with the old Q-value assumed and then updated as new information arrives (Wiering, 2005).
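
The tabular update rules behind these methods can be sketched as follows (a generic textbook formulation, not Collins and Thomas's implementation; the learning-rate and discount values are assumed):

    from collections import defaultdict

    ALPHA, GAMMA = 0.1, 0.95   # learning rate and discount factor (assumed values)
    Q = defaultdict(float)     # Q[(state, action)] -> estimated value

    def q_learning_update(state, action, reward, next_state, available_actions):
        """Off-policy: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[(next_state, a)] for a in available_actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    def sarsa_update(state, action, reward, next_state, next_action):
        """On-policy (SARSA): uses the action actually chosen in the next state."""
        Q[(state, action)] += ALPHA * (reward + GAMMA * Q[(next_state, next_action)]
                                       - Q[(state, action)])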

An important finding from the implementation of these learning models is that after each

batch was run for 10 million episodes, four pattern-like stages or phases appeared that were consistent across all three models. First there was a random policy stage, with expected random behavior. Then came a near-optimal policy stage, in which the model seemed to approach optimal values. However, the algorithm then deviated from the near-optimal policy and entered a myopic policy phase, where the agents began to play myopically due to either changes in learning rate or chaos within the action selection mechanisms (Collins and Thomas, 2012, p. 1171). After a very large number of episode runs (~70,000 episodes) the algorithm moved towards the expected solution policy, and 9 million more episode runs confirmed the stability of this phase.


These four stages, due to their consistency, present a new problem of finding stable trends within

a reasonable number of test cases, and thus further research should be done to determine whether the problem exists in other experimental settings. In addition, the study showcased how

reinforcement learning can be applied to solve simple systems with complex opponents, which

implies that further research can be done to find non-traditional solutions for complex games.

Hao et al. (2014) proposed another successful complex automated negotiation technique

with non-exploitation time points and a reinforcement-learning based approach to predict the

optimal offers for the opponent. This strategy, named ABiNeS, incorporates a non-exploitation

point λ that determines the degree to which the agent exploits its opponent. The strategy also

includes a reinforcement-learning algorithm to calculate the optimal proposal setting from the

current negotiation history to create an efficient and robust agent. In detail, ABiNeS can be viewed as four basic decision components working together (Hao et al., 2014, p. 47). The first is the acceptance-threshold (AT) component, which estimates the agent's own minimum acceptance

threshold at a specific time. The threshold function considers maximum utility over the

negotiation, the discounting factor, and the time passed t ∈ [0, 1]. To maximize the potential

profit, the agent should try to exploit its opponent as much as possible. However, this greedy

algorithm can potentially prevent any agreement from being made, resulting in the negotiation running to t = 1, at which point the discounting factor forces the agents to end up with zero utility. Thus,

Hao et al. (2014, p. 48) introduced a non-exploitation point λ that prevents the agent from

exploiting the opponent after that time point is reached to ensure maximum profit. The next-bid

(NB) component determines what ABiNeS will propose to the opponent using the acceptance threshold and an estimate of the opponent's reservation value. To predict the best possible negotiation offer for the opponent agent, the algorithm uses a model-free reinforcement learning method


under the single assumption that the opponent is rational, which holds in these cases (Hao et al.,

2014, p. 49). The acceptance-condition (AC) component determines if the proposal made by the

opponent is acceptable, which essentially checks if the proposed value is higher than the current

acceptance threshold at time t, and if so, accepts the offer. Finally, the termination-condition

(TC) component determines when to terminate the negotiation to receive the reservation value,

which runs the same algorithm as the acceptance-condition except with the reservation value as

comparison (Hao et al., 2014, p. 50). The experimental evaluation and analysis of ABiNeS

demonstrate that the agent obtained the highest average payoff compared to other agents from

ANAC 2012 and also that the agent is robust for both bilateral and tournament negotiations (Hao

et al., 2014, p. 56). Since the ABiNeS strategy lacks efficiency in modeling and estimation, the

researchers suggest that incorporating such algorithms from other high-performing agents could be very beneficial to its performance and stability.
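
The interaction between the acceptance threshold and the non-exploitation point λ can be sketched roughly as follows (an illustration written for this review; the concession form and parameter values are assumptions, not Hao et al.'s actual formulas):

    def acceptance_threshold(t, u_max, delta, lam, best_opponent_offer):
        """AT sketch: hold near the discounted maximum utility until the
        non-exploitation point lam, then concede toward the best offer seen so far."""
        exploit_level = u_max * (delta ** t)
        if t <= lam:
            return exploit_level
        frac = (t - lam) / (1.0 - lam)    # fraction of the remaining time elapsed
        return exploit_level - frac * (exploit_level - best_opponent_offer)

    # AC component: accept an incoming offer whose utility meets the threshold at time t.
    # TC component: terminate (taking the reservation value) under the same comparison.
    params = dict(u_max=1.0, delta=0.9, lam=0.85, best_opponent_offer=0.55)  # assumed values
    print(acceptance_threshold(0.5, **params))   # early: the demand stays near the maximum
    print(acceptance_threshold(0.95, **params))  # past lam: the agent concedes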

2.3 Computational Cost and its Effect on Behavior

Traditional game theory does not consider computational cost as a factor of variance for

the utility of each agent's strategies. In an attempt to resolve this issue, past experimental

research split into two major approaches. Abraham Neyman (1985) initiated one line of research

by treating the players in the game as finite state automata, and claiming that a Nash equilibrium

can be reached with a sufficient number of games. Neyman (1985, p. 228) proved that, even though

a game is non-cooperative (e.g., prisoner’s dilemma), a payoff that resembles an outcome from a

cooperative game can exist depending on the method used and the number of games played (in his work, he used Tit for Tat in the prisoner's dilemma, in which each agent copies the opponent's

previous move).


Ariel Rubinstein (1986) initiated the other, more prominent line of research which

focuses on the effect of costly computation on an agent’s utility and performance. Rubinstein

experimented with the idea that players decide which finite automata to use rather than which

strategy to use, and hypothesized that the player's performance depends on the actions of the chosen automaton and its complexity level. To continue this line of research, Halpern and Pass

(2015) reexamined Rubinstein’s work with a Turing machine, which has unlimited memory

capacity, in place of a finite automaton. Their research showed that a Nash equilibrium does not

necessarily have to exist, and a psychologically appealing explanation can be provided for its

absence.

The significance of recognizing the effect of costly computation can be explained through

an example. Consider a scenario, as described by Halpern and Pass (2015, p. 246-247),

consisting of a mathematical problem with multiple basic operations. For each problem an agent

is able to solve, the agent earns a small amount of money, say $2. However, outputting an

incorrect answer results in a penalty of $1000. The agent also has the ability to forfeit and be

rewarded with $1. In traditional game theory – which does not consider the cost of computation

to succeed – the given solution would be to keep solving the problem to earn the maximum

reward. Such behavior is determined to be the Nash equilibrium of this type of problem, regardless of the size and complexity of the equation. But in real-world situations, the cost of computation

must be considered since it potentially can affect the agent’s utility or strategy. In this example,

for a trivial mathematical problem, solving the problem by hand is reasonable and a reward can

be earned. However, for more computationally unpleasant problems, most agents or players will

give up, since they realize that the dollar reward from giving up outweighs the cost of solving the

problem through efficient means (e.g., with the help of powerful but expensive calculators). As


shown, computational cost can become a significant factor in deciding the agent's strategy. If previous researchers, such as Lin et al. (2008) and Collins and Thomas (2012), had considered the cost of computation as part of their analysis, their methods would be judged inefficient due to the heavy processing power required by Bayesian updating algorithms relative to real-time negotiations, as pointed out by Chen and Weiss (2015, p. 2288).
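
The example can be made concrete with a small expected-value calculation (an illustration added for this review, not Halpern and Pass's formalism; the success probabilities and computation costs are assumed):

    def expected_value_of_attempt(p_correct, reward=2.0, penalty=1000.0, compute_cost=0.0):
        """Expected payoff of attempting a problem, net of the cost of computing the answer."""
        return p_correct * reward - (1 - p_correct) * penalty - compute_cost

    forfeit = 1.0  # guaranteed reward for giving up
    # Trivial problem: near-certain success, negligible effort -> attempting beats forfeiting.
    print(expected_value_of_attempt(p_correct=0.9999, compute_cost=0.01))  # ~1.89
    # Hard problem: small error risk plus real computation cost -> forfeiting is better.
    print(expected_value_of_attempt(p_correct=0.99, compute_cost=0.5))     # ~-8.52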

To take these issues into account, Halpern and Pass (2015, p. 249) studied Bayesian

machine games, which are Bayesian games (incomplete information, single-turn, information

updating game) with the exception that computations are not free of cost. This cost includes the

cost of run time (the time it takes for an algorithm to complete), the cost of rethinking and

updating an agent’s strategy given information at a certain state, and the general complexity of

the machine. Adding the weight of cost produces interesting outcomes, one of which is that Nash

equilibria become more intuitively reasonable. Since the cost of changing strategies discourages agents from switching, the agents fall into a Nash equilibrium much more quickly

(Halpern and Pass, 2015, p. 251).

Halpern and Pass (2015, p. 253) also claim that for games that charge for randomization (instead of it being free), the charge may prevent an equilibrium from being reached. In games with mixed strategies such as Rock-Paper-Scissors, traditional game theory holds that uniform randomization over the available choices (rock, paper, or scissors) is the equilibrium strategy. An equal randomized cost to play each strategy (such as deciding to use a

certain strategy depending on the combination of how two coins land) results in the existence of

a “deterministic best reply that can be computed quickly” (Halpern and Pass, 2015, p. 252). With

these conditions, a uniform distribution of the three strategies (performed a finite number of times)

can be approximated. Since the outcome relies on a systematic way of playing a certain strategy


depending on the cost to play (i.e., flipping a coin to choose rather than specifically choosing

rock, paper, or scissors), the traditional Nash equilibrium of randomizing over the three strategies

cannot be applied.
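
A rough numerical illustration of this point (written for this review, not Halpern and Pass's model; the opponent mixture and the per-play cost of randomizing are assumed): once mixing carries a cost, a deterministic reply to a predictable opponent can outperform the textbook uniform mixed strategy.

    import itertools

    ACTIONS = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def payoff(mine, theirs):
        if mine == theirs:
            return 0.0
        return 1.0 if BEATS[mine] == theirs else -1.0

    def expected_payoff(my_dist, their_dist):
        return sum(p * q * payoff(m, t)
                   for (m, p), (t, q) in itertools.product(my_dist.items(), their_dist.items()))

    opponent = {"rock": 0.5, "paper": 0.25, "scissors": 0.25}  # assumed predictable mixture
    uniform = {a: 1 / 3 for a in ACTIONS}
    randomization_cost = 0.1                                   # assumed cost of mixing

    best_pure = max(ACTIONS, key=lambda a: expected_payoff({a: 1.0}, opponent))
    print(expected_payoff(uniform, opponent) - randomization_cost)  # -0.1: uniform mix minus its cost
    print(expected_payoff({best_pure: 1.0}, opponent))              #  0.25: deterministic reply (paper)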


3: CONCLUSION

This paper aims to synthesize recent research in regards to progress in game theory and

its applied uses in complex, incomplete scenarios as defined previously. Three major issues that

frequently appear in the sources were identified and reviewed: the construction of strategies and models and the impractical assumptions that often serve as their foundation; reinforcement-learning based approaches that significantly improve trend and offer predictions in incomplete-information games; and a computational analysis of how the cost of utilizing and updating strategies can impact the agent's utility. All of these issues concern the problems that researchers face when trying to incorporate a game-theoretic framework in realistic settings.

Although previous researchers made several “incorrect” assumptions in relation to the

strict set of rules defining a complex scenario, the proposed novel strategies can be used as

practical applications in specific negotiation situations. The developing methods of constructing

efficient and robust algorithms, such as Gaussian processes, Bayesian updating, and discrete

wavelet transformations, can serve as significant stepping stones on the way to future technological development in artificial intelligence. Previous research has shown that reinforcement learning has great potential in its Bayesian applications, and several of these experiments show that the combination of these methods seems to produce high-yielding results.

Though the study of computational cost is not fully recognized by all researchers due to its

untraditional claims about the absence of Nash equilibria under costly randomization, the analysis provides new insight and depth that future researchers should consider when creating strategies that are efficient for real-time applications.

Through further development of automated agent negotiation and similar areas, these

adaptive agents can then guide humans to better understand their own behaviors and decision-


making processes during negotiations, or even in any applicable game. As evaluated in this literature review, the evidence points in a positive and strong direction for the advancement of game theory and its applications.


Bibliography

Chen, S., & Weiss, G. (2015). An approach to complex agent-based negotiations via effectively

modeling unknown opponents. Expert Systems with Applications, 2287-2304.

http://www.sciencedirect.com/science/article/pii/S0957417414006800

Collins, A., & Thomas, L. (2012). Comparing reinforcement learning approaches for solving

game theoretic models: A dynamic airline pricing game example. Journal of the

Operational Research Society, 1165-1173.

http://www.palgravejournals.com/jors/journal/v63/n8/full/jors201194a.html

Hao, J., Song, S., Leung, H., & Ming, Z. (2014). An efficient and robust negotiating strategy in

bilateral negotiations over multiple items. Engineering Applications of Artificial

Intelligence, 45-57.

http://www.sciencedirect.com/science/article/pii/S0952197614001067

Halpern, J., & Pass, R. (2015). Algorithmic rationality: Game theory with costly computation.

Journal of Economic Theory, 246-268.

http://www.sciencedirect.com/science/article/pii/S0022053114000611

Hindriks, K., Jonker, C., Kraus, S., Lin, R., & Tykhonov, D. (2009). GENIUS – Negotiation

Environment for Heterogeneous Agents. Proc. of 8th Int. Conf. on Autonomous Agents

and Multiagent Systems, 1397-1398.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.7751&rep=rep1&type=pdf

Lin, R., Kraus, S., Wilkenfeld, J., & Barry, J. (2008). Negotiating with bounded rational agents

in environments with incomplete information using an automated agent. Artificial

Intelligence, 823-851.

http://www.sciencedirect.com/science/article/pii/S0004370207001518


Neyman, A. (1985). Bounded complexity justifies cooperation in finitely repeated prisoner’s

dilemma. Economics Letters 19, 227-229.

http://www.sciencedirect.com/science/article/pii/0165176585900266#

Raiffa, H. (1982). The art and science of negotiation. Cambridge, Mass.: Belknap Press of

Harvard University Press.

Rubinstein, A. (1986). Finite Automata play the repeated prisoner’s dilemma. Journal of

Economic Theory, 83-96. http://ac.els-cdn.com/0022053186900219/1-s2.0-

0022053186900219-main.pdf?_tid=9dcd94de-f23e-11e4-9118-

00000aab0f02&acdnat=1430731327_0d34e38015f429f4438c8f979a631f97

Ruch, D., & Fleet, P. (2011). Wavelet Theory: An Elementary Approach with Applications. Chichester: Wiley.

Saha, S., Biswas, A., & Sen, S. (2005). Modeling opponent decision in repeated one-shot negotiations.

AAMAS’05, 397-403. http://delivery.acm.org/10.1145/1090000/1082534/p397-

saha.pdf?ip=128.6.36.193&id=1082534&acc=ACTIVE%20SERVICE&key=777711629

8C9657D%2E56792D02161BECDE%2E4D4702B0C3E38B35%2E4D4702B0C3E38B3

5&CFID=669812475&CFTOKEN=45695084&__acm__=1430743135_59b6f21fd48591

8402fadf4475c79278

Williams, C., Robu, V., Gerding, E., & Jennings, N. (2011). Using Gaussian Processes to

Optimise Concession in Complex Negotiations against Unknown Opponents.

Proceedings of the Twenty-Second International Joint Conference on Artificial

Intelligence, 432-438.

http://ijcai.org/papers11/Papers/IJCAI11-080.pdf


Wiering, M. A. (2005). QV(λ)-learning: A New On-policy Reinforcement Learning Algorithm.

Proceedings of the 7th European Workshop on Reinforcement Learning, 17-18.

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact

=8&ved=0CB8QFjAA&url=http%3A%2F%2Fwww.researchgate.net%2Fprofile%2FMa

rco_Wiering%2Fpublication%2F228339324_QV_%2528%2529-learning_A_new_on-

policy_reinforcement_learning_algorithm%2Flinks%2F02e7e51948462777e9000000.pdf

&ei=lWdHVcLDEYjYggTR84HADQ&usg=AFQjCNE25WamNDJuJthd65X528RibBdz

_w&sig2=xoyoGxPccP0c2ndwPB3DPQ&bvm=bv.92291466,d.eXY