Spatially Inhomogeneous Evolutionary Games
Massimo Fornasier
Fakultät für Mathematik, Technische Universität München
[email protected] — http://www-m15.ma.tum.de/
Applied Mathematics Seminar, FAU, May 12, 2020
Joint work with Luigi Ambrosio, Nathanael Bosch, Marco Morandotti, and Giuseppe Savaré


  • Game equilibria: why are they so important?

    - Physical systems naturally tend to minimize the potential energy ⇒ steady states;

    - Game theorists have focused on the characterization of game equilibria;

    - Nash equilibria (1951): building on von Neumann's notion of mixed strategy (necessary to ensure existence of saddle points in zero-sum games with two players);

    - Morgenstern and von Neumann pointed out in their classical treatise on game theory (1947) the desirability of a "dynamical" approach to complement their "static" game solution concept;

    - In physical systems, evolutions towards minima of the potential energy are explained by Newton's law ⇒ it is not at all clear whether and how equilibria can emerge in dynamical games.


  • What would be a good notion of evolutionary game?

    - One of the most advocated mechanisms of dynamical choice of strategies is based on a selection principle, inspired by Darwinian evolution concepts;

    - The main idea is to re-interpret the probability of picking a certain strategy as the distribution of a population of players adopting those strategies;

    - The emergence of steady mixed strategies would be the result of an evolutionary selection: at discrete times players meet randomly, interact according to their strategies, and obtain a payoff;

    - This payoff determines how the frequencies of the strategies will evolve.


  • Replicator dynamics

    - Players can adopt pure strategies u_1, ..., u_N ∈ U (for us U is a compact set in a suitable metric space), where U is the set of strategies;

    - Denote by σ_i the frequency with which players pick the strategy u_i;

    - The payoff of playing strategy u_i against u_j will be denoted by J(u_i, u_j), where J : U × U → R;

    - The relative success of the strategy u_i with respect to the strategies played by the population is measured by

        ∆^N(u_i) = ∑_{j=1}^N J(u_i, u_j) σ_j − ∑_{ℓ=1}^N ∑_{j=1}^N σ_ℓ J(u_ℓ, u_j) σ_j,   i = 1, ..., N;

    - The relative rate of change of usage of the strategy u_i is then described by

        σ̇_i / σ_i = ∆^N(u_i),   i = 1, ..., N.

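The ODE above is straightforward to simulate with an explicit Euler step. A minimal sketch (the symmetric payoff matrix below is randomly generated for illustration; it is not from the talk):

```python
import numpy as np

# Replicator dynamics for N pure strategies with a symmetric payoff matrix
# A = (J(u_i, u_j))_ij; the matrix here is a made-up random example.
rng = np.random.default_rng(0)
N = 5
J = rng.normal(size=(N, N))
J = (J + J.T) / 2  # symmetrize

def delta(sigma):
    """Relative success Delta^N(u_i) = (J sigma)_i - sigma^T J sigma."""
    fitness = J @ sigma
    return fitness - sigma @ fitness

def step(sigma, dt=1e-2):
    """Explicit Euler step of sigma_dot_i = sigma_i * Delta^N(u_i)."""
    sigma = sigma + dt * sigma * delta(sigma)
    return sigma / sigma.sum()  # renormalize against round-off drift

sigma = np.full(N, 1.0 / N)    # start from the uniform mixed strategy
for _ in range(20000):
    sigma = step(sigma)

# Near a rest point the weighted relative success of every strategy vanishes.
print(np.max(np.abs(sigma * delta(sigma))))
```

For symmetric J the flow increases the mean payoff σᵀJσ, which is why the trajectory settles on a rest point.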

  • Folk theorem of evolutionary game theory

    ω-limits (the set of accumulation points of the dynamics) and steady states are closely related to the Nash equilibria of the game described by the payoff matrix A = (J(u_i, u_j))_{ij}:

    Theorem

    (a) if σ̄ is a Nash equilibrium, then it is a rest point;

    (b) if σ̄ is a strict Nash equilibrium, then it is asymptotically stable;

    (c) if the rest point σ̄ is the limit of an interior orbit of the simplex (an orbit σ(t) ∈ int(S_N)), then σ̄ is a Nash equilibrium; and

    (d) if the rest point σ̄ is stable, then it is a Nash equilibrium.

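The statements above can be observed numerically. A small sketch for a 2×2 coordination game (the payoffs are a made-up illustrative choice): the mixed Nash equilibrium (1/3, 2/3) is a rest point, while a tiny interior perturbation is driven to one of the strict Nash equilibria e₁, e₂:

```python
import numpy as np

# Illustrative coordination game: two strict Nash equilibria (the vertices)
# and one unstable mixed Nash equilibrium at sigma = (1/3, 2/3).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def flow(sigma, T=20.0, dt=1e-3):
    """Integrate the replicator ODE up to time T with explicit Euler."""
    for _ in range(int(T / dt)):
        f = A @ sigma
        sigma = sigma + dt * sigma * (f - sigma @ f)
        sigma = sigma / sigma.sum()
    return sigma

mixed = np.array([1/3, 2/3])
print(flow(mixed))                               # rest point: stays put
print(flow(mixed + np.array([1e-3, -1e-3])))     # pushed toward e1
print(flow(mixed + np.array([-1e-3, 1e-3])))     # pushed toward e2
```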

  • Mean-field approximation for N → ∞

    Consider σ_t^N := ∑_{j=1}^N σ_{j,t} δ_{u_j} ∈ P(U), and its evolution according to

        σ̇_t^N = ( ∫_U J(·, u′) dσ_t^N(u′) − ∫_{U×U} J(w, u′) dσ_t^N(w) dσ_t^N(u′) ) σ_t^N.

    For any σ ∈ P(U), we may denote

        ∆_σ(u) := ∫_U J(u, u′) dσ(u′) − ∫_{U×U} J(w, u′) dσ(w) dσ(u′),

    so that ∆_{σ^N}(u_i) = ∆^N(u_i). By assuming that the initial conditions σ_0^N ⇀ σ̄ for a given σ̄ ∈ P(U), one can show that σ_t^N ⇀ σ_t as N → ∞ for any t, where σ is the solution to

        d/dt ∫_U ϕ(u) dσ_t(u) = ∫_U ϕ(u) ∆_{σ_t}(u) dσ_t(u),   σ(0) = σ̄,

    for any ϕ ∈ C(U).

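A crude numerical sketch of the mean-field equation: discretize U = [0, 1] with M atoms and evolve the weights by dσ/dt = ∆_σ(u) σ. The payoff J(u, u′) = −(u − 0.5)² is a made-up choice (independent of u′) that rewards strategies near 0.5, so the flow should concentrate σ there; none of these modelling choices come from the talk.

```python
import numpy as np

M = 101
u = np.linspace(0.0, 1.0, M)
# Jmat[i, j] = J(u_i, u_j) = -(u_i - 0.5)^2, broadcast to an M x M matrix
Jmat = -(u[:, None] - 0.5) ** 2 + 0.0 * u[None, :]

sigma = np.full(M, 1.0 / M)      # uniform initial mixed strategy
dt = 1e-2
for _ in range(20000):
    avg_payoff = Jmat @ sigma                    # u |-> int J(u, u') dsigma(u')
    delta = avg_payoff - sigma @ avg_payoff      # Delta_sigma(u)
    sigma = sigma + dt * delta * sigma           # replicator step
    sigma = sigma / sigma.sum()

print(u[np.argmax(sigma)])       # mode of sigma_t: the best-rewarded strategy
print(sigma @ u)                 # mean strategy
```

As t grows, the relative weight of u against 0.5 behaves like exp(t·(J(u) − J(0.5))), so the measure concentrates around the payoff maximizer.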

  • Hellinger–Kakutani distances

    A reaction distance via dynamic interpolation:

        H²(μ_0, μ_1) = inf { (1/4) ∫_0^1 ∫ |w_t|² dμ_t dt : μ ∈ C([0,1], M_+(U)), ∂_t μ_t = w_t μ_t, μ_{t=i} = μ_i }

                     = ∫ ( √p_0 − √p_1 )² dμ_*,   p_i := dμ_i/dμ_*.

    In order to establish a similar distance for probability measures, one introduces the so-called spherical Hellinger–Kakutani distance

        HS²(σ_0, σ_1) = inf { (1/4) ∫_0^1 ∫ |w_t|² dσ_t dt : σ ∈ C([0,1], P(U)), ∂_t σ_t = ( w_t − ∫ w_t dσ_t ) σ_t, σ_{t=i} = σ_i },

    with the closed forms

        HS(σ_0, σ_1) = arccos( 1 − H²(σ_0, σ_1)/2 ) = arcsin √( 1 − ( ∫ √p_0 √p_1 dσ_* )² ).

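The static formulas are easy to evaluate on a common finite support. A sketch (σ₀, σ₁ are made-up discrete probability vectors, and σ* is taken as their average, which is a valid common dominating measure); the two closed forms agree because arccos(x) = arcsin(√(1 − x²)) for x ∈ [0, 1]:

```python
import numpy as np

sigma0 = np.array([0.7, 0.2, 0.1])
sigma1 = np.array([0.1, 0.3, 0.6])
sigma_star = 0.5 * (sigma0 + sigma1)
p0, p1 = sigma0 / sigma_star, sigma1 / sigma_star   # densities d sigma_i / d sigma*

H2 = np.sum((np.sqrt(p0) - np.sqrt(p1)) ** 2 * sigma_star)   # H^2(sigma0, sigma1)
HS = np.arccos(1.0 - H2 / 2.0)                               # spherical distance

# Bhattacharyya coefficient int sqrt(p0 p1) d sigma*; note 1 - H^2/2 equals it.
bhatt = np.sum(np.sqrt(p0 * p1) * sigma_star)
print(HS, np.arcsin(np.sqrt(1.0 - bhatt ** 2)))
```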

  • Replicator dynamics as Hellinger flow

    For a functional 𝒥 : P(U) → R we consider the minimizing movement scheme: given τ > 0 and σ_τ^0 = σ_0, recursively define σ_τ^n as the minimizing solution of

        min_{σ ∈ P(U)}  (1/(2τ)) HS²(σ, σ_τ^{n−1}) + 𝒥(σ).

    Hence, the limit σ_t of the interpolants σ_τ^{⌊t/τ⌋} as τ → 0 solves the equation

        ∂_t σ_t = − ( δ𝒥/δσ (u, σ_t) − ∫ δ𝒥/δσ (w, σ_t) dσ_t(w) ) σ_t.

    If 𝒥(σ) = −(1/2) ∫_U ∫_U J(u, u′) dσ(u) dσ(u′) (and symmetric J), then one obtains again the replicator dynamics.

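The last claim can be checked on a finite strategy set: for 𝒥(σ) = −(1/2) ∬ J dσ dσ with symmetric J, the first variation is δ𝒥/δσ(u) = −∫ J(u, u′) dσ(u′), so the gradient-flow field above coincides with the replicator field ∆_σ(u) σ. A sketch with a made-up symmetric payoff:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                 # symmetric payoff
sigma = rng.random(N)
sigma = sigma / sigma.sum()       # a random interior mixed strategy

dJ = -(J @ sigma)                            # first variation of the functional at sigma
flow_field = -(dJ - sigma @ dJ) * sigma      # RHS of the minimizing-movement limit PDE
repl_field = ((J @ sigma) - sigma @ (J @ sigma)) * sigma  # replicator RHS

print(np.max(np.abs(flow_field - repl_field)))
```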

  • Spatially inhomogeneous evolutionary games

    Idea of a spatially inhomogeneous replicator dynamics:

    - A population of players is distributed over a position space R^d and they are each endowed with probability distributions of strategies (mixed strategies) σ ∈ P(U), from which they draw at random pure strategies to evolve their positions.

    - Each player's probability distribution of strategies σ ∈ P(U) evolves in time according to a replicator dynamics, which takes into account all other players and their respective mixed strategies.


  • A combined dynamics: informal derivation

    Fix a Lipschitz payoff function J : (R^d × U)² → R, a time interval [0, T], and a time step h = T/n. At time t = ih we consider an empirical distribution of N players μ_t^N = (1/N) ∑_{j=1}^N δ_{x_j(t)} ∈ P(R^d), so that each player x ∈ supp(μ_t^N) is endowed with a corresponding distribution of strategies σ_{x,t} ∈ P(U), and η_t = σ_{x,t} μ_t^N is the disintegration of η_t ∈ P(R^d × U). We define formally the deterministic/stochastic evolution of the individual player and its mixed strategies by

        σ_{x,t+h} = (1 + h ∆_{η_t}(x, u)) σ_{x,t},
        x(t+h) = x(t) + h e(x(t), u),   for u ∼ σ_{x,t+h},

    where

        ∆_{η_t}(x, u) = ∫_{R^d×U} J(x, u, x′, u′) dη_t(x′, u′) − ∫_U ∫_{R^d×U} J(x, w, x′, u′) dη_t(x′, u′) dσ_{x,t}(w).

    Then one formally sets σ_{x(t+h),t+h} := σ_{x(t),t+h}.

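A toy run of the discrete scheme, with d = 1, e(x, u) = u, three pure strategies, and a made-up payoff J(x, u, x′, u′) = −|x + u − x′| that rewards moving toward the other players; all of these modelling choices are illustrative assumptions, not the talk's. For reproducibility, the position update uses the mean velocity ∫ e(x, u) dσ_{x,t}(u) (the drift that appears in the limit PDE) instead of sampling u ∼ σ_{x,t+h}:

```python
import numpy as np

rng = np.random.default_rng(2)
Npl, h, steps = 50, 0.05, 200
U = np.array([-1.0, 0.0, 1.0])                 # pure strategies (velocities)

x = rng.normal(size=Npl)                       # initial positions
sigma = np.full((Npl, len(U)), 1.0 / len(U))   # row j = sigma_{x_j, t}
std0 = np.std(x)

for _ in range(steps):
    # pay[j, k] = int J(x_j, u_k, x', u') d eta_t(x', u')  (this J ignores u')
    pay = np.stack([np.mean(-np.abs(x[:, None] + u - x[None, :]), axis=1)
                    for u in U], axis=1)
    delta = pay - np.sum(pay * sigma, axis=1, keepdims=True)   # Delta_{eta_t}
    sigma = np.clip(sigma * (1.0 + h * delta), 1e-12, None)    # replicator step
    sigma = sigma / sigma.sum(axis=1, keepdims=True)
    x = x + h * (sigma @ U)                                    # transport step

print(std0, np.std(x))   # the population contracts toward its median
```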

  • A combined dynamics: informal derivation

    Given now a bounded test function Φ : R^d × U → R,

        ∫ Φ(x, u) d(η_{t+h} − η_t)(x, u)

          = (1/N) ∑_{j=1}^N ( ∫ Φ(x_j(t+h), u) dσ_{x_j(t+h),t+h}(u) − ∫ Φ(x_j(t), u) dσ_{x_j(t),t}(u) )

          = (1/N) ∑_{j=1}^N ( ∫ ( Φ(x_j(t+h), u) − Φ(x_j(t), u) ) dσ_{x_j(t+h),t+h}(u) + ∫ Φ(x_j(t), u) d( σ_{x_j(t+h),t+h} − σ_{x_j(t),t} )(u) )

          ≈ (h/N) ∑_{j=1}^N ( ∫ ∇_x Φ(x_j(t), u) · ( ∫ e(x_j(t), v) dσ_{x_j(t),t}(v) ) dσ_{x_j(t+h),t+h}(u) + ∫ Φ(x_j(t), u) ∆_{η_t}(x_j(t), u) dσ_{x_j(t),t}(u) )

          = h ∫ ( ∇_x Φ(x, u) · ( ∫ e(x, v) dσ_{x,t}(v) ) + Φ(x, u) ∆_{η_t}(x, u) ) dη_t(x, u)


  • An evolution of measures

    For h → 0 and N → ∞ we formally derive the following PDE for η:

        ∂_t η_t + ∇_x · [ ( ∫ e(x, v) dσ_{x,t}(v) ) η_t ] − ∆_{η_t}(x, u) η_t(x, u) = 0.

    By integrating against test functions Φ(x, u) = Φ(x) one would deduce the equation for the marginal μ_t:

        ∂_t μ_t + ∇_x · [ ( ∫ e(x, v) dσ_{x,t}(v) ) μ_t ] − ( ∫ ∆_{η_t}(x, v) dσ_{x,t}(v) ) μ_t = 0,

    where the first bracket is a transport term and the last term vanishes identically, since ∫_U ∆_{η_t}(x, v) dσ_{x,t}(v) ≡ 0.

    How can we get an evolution which is consistent from the modelling viewpoint and combines the replicator dynamics with the players' moves? We need to make a further abstraction step ...

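The vanishing of the reaction term in the marginal equation is purely structural: ∆ is a payoff minus its own σ_x-average, so its σ_x-integral is zero. A quick numerical check on random discrete data (positions, strategies, and the payoff tensor are all made up):

```python
import numpy as np

rng = np.random.default_rng(3)
Npos, Nstr = 4, 5
J = rng.normal(size=(Npos, Nstr, Npos, Nstr))   # J(x_i, u_k, x_i', u_k')
sigma = rng.random((Npos, Nstr))
sigma = sigma / sigma.sum(axis=1, keepdims=True)  # sigma_{x_i} in P(U)
mu = np.full(Npos, 1.0 / Npos)                    # uniform spatial marginal

# Z[i, k] = int J(x_i, u_k, x', u') d eta(x', u'),  with eta = sigma_x mu
Z = np.einsum('ikjl,jl,j->ik', J, sigma, mu)
Delta = Z - np.sum(Z * sigma, axis=1, keepdims=True)    # Delta_{eta}(x_i, u_k)
print(np.max(np.abs(np.sum(Delta * sigma, axis=1))))    # ~ 0 for every x_i
```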

  • Inconsistency with finite particle approximation

    In order to understand the inconsistency, let us consider the simplest situation where η is concentrated at two distinct positions x_1, x_2, e.g.

        η = (1/2) δ_{x_1} ⊗ σ_1 + (1/2) δ_{x_2} ⊗ σ_2,   σ_i ∈ P(U).

    We thus find

        ∆_η(x_i, u) = Z(x_i, u) − ∫_U Z(x_i, v) dσ_i(v),   where   Z(x_i, u) = ∫_{R^d×U} J(x_i, u, x′, u′) dη(x′, u′).

    Clearly Z depends continuously on η w.r.t. the weak topology.


  • Inconsistency with finite particle approximation

    However, when (x_1, x_2) → (x, x) we get a discontinuous behaviour of ∆. In fact, if x_1 = x_2 = x the previous equations read as

        η = δ_x ⊗ σ,   σ = (1/2)(σ_1 + σ_2),   ∆_η(x, u) = Z(x, u) − ∫_U Z(x, v) dσ(v),

    but it is also

        lim_{(x_1,x_2)→(x,x)} ∆_η(x_i, u) = Z(x, u) − ∫_U Z(x, v) dσ_i(v) ≠ ∆_η(x, u).

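    This discontinuity is easy to check numerically. A minimal sketch, assuming a two-strategy set $U = \{0,1\}$ and the hypothetical Lipschitz payoff $J(x,u,x',u') = u\,u'\,e^{-|x-x'|}$ (both are illustrative choices, not taken from the talk):

    ```python
    import numpy as np

    U = [0, 1]  # two pure strategies (illustrative choice)

    def J(x, u, xp, up):
        # hypothetical Lipschitz payoff
        return u * up * np.exp(-abs(x - xp))

    def Z(x, u, atoms):
        # Z(x, u) = integral of J(x, u, x', u') d eta(x', u'),
        # for eta given as a list of atoms (weight, position, strategy vector)
        return sum(w * sum(J(x, u, xp, up) * s[up] for up in U)
                   for (w, xp, s) in atoms)

    def Delta(x, sigma, u, atoms):
        # Delta_eta(x, u) = Z(x, u) - int_U Z(x, v) d sigma(v)
        return Z(x, u, atoms) - sum(Z(x, v, atoms) * sigma[v] for v in U)

    s1 = np.array([0.9, 0.1])
    s2 = np.array([0.1, 0.9])

    # two atoms at distance eps: Delta at x1 is evaluated with sigma_1
    eps = 1e-8
    d_two = Delta(0.0, s1, 1, [(0.5, 0.0, s1), (0.5, eps, s2)])

    # merged configuration: eta = delta_x (x) sigma with sigma = (s1 + s2)/2
    sigma = 0.5 * (s1 + s2)
    d_merged = Delta(0.0, sigma, 1, [(1.0, 0.0, sigma)])

    print(d_two, d_merged)  # the gap between the two values persists as eps -> 0
    ```

    With these data the limiting value of $\Delta_\eta(x_1,u)$ stays bounded away from the value at the merged configuration, exactly as in the formula above.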

  • The correct approach

    I The space of pairs of positions and mixed strategies is $C := \mathbb{R}^d \times \mathscr{P}(U)$; its elements, denoted $y = (x,\sigma)$, describe the state of a player.

    I The system will be described by the evolution of a measure $\Sigma \in \mathscr{P}(C) = \mathscr{P}(\mathbb{R}^d \times \mathscr{P}(U))$ on our state space, which represents a distribution of players with strategies.

    I We define the new interaction potential

    $$\Delta_{\Sigma,(x,\sigma)}(u) := \int_C \int_U J(x,u,x',u')\, d\sigma'(u')\, d\Sigma(x',\sigma') - \int_U \int_C \int_U J(x,w,x',u')\, d\sigma'(u')\, d\Sigma(x',\sigma')\, d\sigma(w).$$

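    For an atomic $\Sigma$ the new interaction is a finite sum, and, unlike $\Delta_\eta$, it varies continuously when two positions merge, because each player carries its own label $\sigma$. A minimal numerical sketch, again with the illustrative choices $U = \{0,1\}$ and $J(x,u,x',u') = u\,u'\,e^{-|x-x'|}$ (assumptions, not from the talk):

    ```python
    import numpy as np

    U = [0, 1]

    def J(x, u, xp, up):
        # hypothetical Lipschitz payoff
        return u * up * np.exp(-abs(x - xp))

    def Delta_Sigma(x, sigma, u, ensemble):
        # Delta_{Sigma,(x,sigma)}(u) for Sigma = sum of atoms (weight, x', sigma')
        def avg(uu):
            # integral of J(x, uu, x', u') d sigma'(u') d Sigma(x', sigma')
            return sum(w * sum(J(x, uu, xp, up) * sp[up] for up in U)
                       for (w, xp, sp) in ensemble)
        return avg(u) - sum(sigma[v] * avg(v) for v in U)

    s1 = np.array([0.9, 0.1])
    s2 = np.array([0.1, 0.9])

    # the same two-atom configuration before and after the positions merge
    d_eps = Delta_Sigma(0.0, s1, 1, [(0.5, 0.0, s1), (0.5, 1e-8, s2)])
    d_0   = Delta_Sigma(0.0, s1, 1, [(0.5, 0.0, s1), (0.5, 0.0, s2)])
    print(abs(d_eps - d_0))  # continuous: the gap vanishes with eps
    ```

    The key design point: the player at $(x,\sigma)$ averages its own $\sigma$, not a disintegration of the global measure, so merging positions no longer mixes the strategies.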

  • The correct approach

    I We fix a time interval $[0,T]$, a time step $h = T/n$ and an initial datum $\bar\Sigma \in \mathscr{P}(\mathbb{R}^d \times \mathscr{P}(U))$.

    I For $C = \mathbb{R}^d \times \mathscr{P}(U)$ we build a discrete solution $M_h \in \mathscr{P}(C([0,T]; C))$ concentrated on paths $(x(t),\sigma(t)) : [0,T] \to C$, which are piecewise affine (on the $n$ intervals);

    I If at time $t = ih$ the player is at position $\bar x$ with mixed strategy $\bar\sigma$, it first updates the strategy, replacing $\bar\sigma$ by

    $$\bar\sigma' := \big(1 + h\,\Delta_{\Sigma_{t,h},(\bar x,\bar\sigma)}\big)\, \bar\sigma.$$

    I The conditional probability relative to $M_h|_{[0,t+h]}$ of $(x(t+h),\sigma(t+h))$, given the information that at time $t$ one has $(x,\sigma)(t) = (\bar x,\bar\sigma)$, is $\big((\bar x + h\, e(\bar x,\cdot))_\# \bar\sigma'\big) \times \delta_{\bar\sigma'}$.

    I By iterating this process $n$ times one can build $M_h$ on $[0,T]$.

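    A deterministic sketch of one step of this scheme for an atomic $\bar\Sigma$: the replicator update $\bar\sigma' = (1 + h\Delta)\bar\sigma$ followed by a move with the mean velocity $\int_U e(\bar x,u)\,d\bar\sigma'(u)$ (using the mean instead of sampling the push-forward is a simplification, and $U = \{0,1\}$, $J$, $e$ are illustrative choices, not from the talk):

    ```python
    import numpy as np

    U = [0, 1]

    def J(x, u, xp, up):                      # hypothetical payoff
        return u * up * np.exp(-abs(x - xp))

    def e(x, u):                              # hypothetical movement field
        return 1.0 if u == 1 else -1.0

    def Delta_Sigma(x, sigma, u, ensemble):
        def avg(uu):
            return sum(w * sum(J(x, uu, xp, up) * sp[up] for up in U)
                       for (w, xp, sp) in ensemble)
        return avg(u) - sum(sigma[v] * avg(v) for v in U)

    def step(ensemble, h):
        """One step: replicator update sigma' = (1 + h*Delta) sigma,
        then transport of x with the mean velocity under sigma'."""
        out = []
        for (w, x, sigma) in ensemble:
            d = np.array([Delta_Sigma(x, sigma, u, ensemble) for u in U])
            sigma_new = (1.0 + h * d) * sigma
            x_new = x + h * sum(e(x, u) * sigma_new[u] for u in U)
            out.append((w, x_new, sigma_new))
        return out

    ens = [(0.5, 0.0, np.array([0.9, 0.1])),
           (0.5, 1.0, np.array([0.1, 0.9]))]
    for _ in range(20):
        ens = step(ens, h=0.05)
    # each sigma stays a probability vector: mass is conserved exactly,
    # since Delta integrates to zero against sigma
    ```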

  • A combined dynamics: informal derivation II

    For $t = ih$, $0 \le i \le n-1$, we denote $\Sigma_{t,h} := (\mathrm{ev}_t)_\# M_h$, where $\mathrm{ev}_t : C([0,T]; C) \to C$ is defined by $\mathrm{ev}_t(x,\sigma) := (x(t),\sigma(t))$. Given now a bounded test function $\Phi : \mathbb{R}^d \times \mathscr{P}(U) \to \mathbb{R}$, one has

    $$\begin{aligned}
    \int \Phi(x,\sigma)\, d(\Sigma_{t+h,h} - \Sigma_{t,h})(x,\sigma)
    &= \int \big[\Phi(x(t+h),\sigma(t+h)) - \Phi(x(t),\sigma(t))\big]\, dM_h(x(\cdot),\sigma(\cdot)) \\
    &= \int \big[\Phi\big(x(t+h),(1 + h\,\Delta_{\Sigma_{t,h},x(t),\sigma(t)})\,\sigma(t)\big) - \Phi(x(t),\sigma(t))\big]\, dM_h(x(\cdot),\sigma(\cdot)) \\
    &\approx h \int D\Phi(x(t),\sigma(t)) \cdot \Big(\int e(x(t),u)\, d\sigma(t,u),\ \Delta_{\Sigma_{t,h},x(t),\sigma(t)}\,\sigma(t)\Big)\, dM_h(x(\cdot),\sigma(\cdot)) \\
    &= h \int D\Phi(x,\sigma) \cdot b_{\Sigma_{t,h}}(x,\sigma)\, d\Sigma_{t,h}(x,\sigma),
    \end{aligned}$$

    for the vector field

    $$b_\Sigma(x,\sigma) := \Big(\int e(x,u)\, d\sigma(u),\ \Delta_{\Sigma,(x,\sigma)}\,\sigma\Big).$$


  • A new master equation

    As $h \to 0$ this construction produces a continuous map $\Sigma : [0,T] \to \mathscr{P}(C)$, $\Sigma_t := (\mathrm{ev}_t)_\# M$, satisfying the equation

    $$\partial_t \Sigma_t + \mathrm{div}_{x,\sigma}\big(b_{\Sigma_t}\, \Sigma_t\big) = 0.$$

    In particular, by the disintegration $\Sigma_t = \pi_{x,t} \otimes \mu_t$ with $\mu_t \in \mathscr{P}(\mathbb{R}^d)$ and $\pi_{x,t} \in \mathscr{P}(\mathscr{P}(U))$, the marginal $\mu_t$ fulfils

    $$\partial_t \mu_t + \underbrace{\nabla_x \cdot \big(v(t,x)\, \mu_t\big)}_{\text{transport}} = 0,$$

    where $v(t,x) = \int_{\mathscr{P}(U)} \int_U e(x,u)\, d\sigma(u)\, d\pi_{x,t}(\sigma)$ is the "mean of the mean" vector field transporting $\mu_t$.

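    The transport equation for $\mu_t$ follows by testing the master equation with cylindrical functions $\Phi(x,\sigma) = \varphi(x)$, for which $D\Phi$ has no $\sigma$-component; a sketch of the computation:

    ```latex
    \frac{d}{dt}\int_{\mathbb{R}^d}\varphi\,d\mu_t
      = \frac{d}{dt}\int_C \varphi(x)\,d\Sigma_t(x,\sigma)
      = \int_C \nabla\varphi(x)\cdot\Big(\int_U e(x,u)\,d\sigma(u)\Big)\,d\Sigma_t(x,\sigma)
    ```

    and, disintegrating $\Sigma_t = \pi_{x,t} \otimes \mu_t$,

    ```latex
    \int_C \nabla\varphi(x)\cdot\Big(\int_U e(x,u)\,d\sigma(u)\Big)\,d\Sigma_t(x,\sigma)
      = \int_{\mathbb{R}^d}\nabla\varphi(x)\cdot
        \underbrace{\Big(\int_{\mathscr{P}(U)}\int_U e(x,u)\,d\sigma(u)\,d\pi_{x,t}(\sigma)\Big)}_{=\,v(t,x)}
        \,d\mu_t(x),
    ```

    which is the distributional form of $\partial_t \mu_t + \nabla_x \cdot (v\,\mu_t) = 0$.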

  • ODEs in Banach spaces

    I The characteristics are evolutions in the convex set $C = \mathbb{R}^d \times \mathscr{P}(U)$.

    I In order to take advantage of the theory of ODEs, Bochner integration, and Fréchet calculus in Banach spaces, we embed $C$ in the Banach space $Y := \mathbb{R}^d \times F(U)$, where $F(U) := \overline{\mathrm{span}(\mathscr{P}(U))}^{\|\cdot\|_{BL}}$ and the closure is taken in the dual space $(\mathrm{Lip}(U))'$;

    I Notice that $F(U) \subset (\mathrm{Lip}(U))'$, and that $(Y, \|\cdot\|_Y)$, with $\|y\|_Y = \|(x,\sigma)\|_Y := |x| + \|\sigma\|_{BL}$, is a separable Banach space. The space $F(U)$ defined above, known in the literature as the Arens-Eells space, is isometric to the predual of $\mathrm{Lip}(U)$.


  • Evolution of the characteristics

    To every continuous curve $t \in [0,T] \mapsto \Sigma_t \in \mathscr{P}_1(C)$ we can then associate to $b_\Sigma$ the transition maps $Y_\Sigma(t,s,y)$ induced by the ODE in $Y$

    $$\dot y_t = b_{\Sigma_t}(y_t), \qquad y_s = y.$$

    A solution $y = (x,\sigma)$ satisfies

    $$\dot x_t = \int_U e(x_t,u)\, d\sigma_t(u),$$

    $$\dot\sigma_t = \bigg(\int_C \Big(\int_U J(x_t,\cdot,x',u')\, d\sigma'(u') - \int_{U \times U} J(x_t,w,x',u')\, d\sigma'(u')\, d\sigma_t(w)\Big)\, d\Sigma_t(x',\sigma')\bigg)\, \sigma_t.$$

  • Classical well-posedness of ODEs in Banach spaces

    Theorem (Brézis, 1973)
    Let $(Y, \|\cdot\|_Y)$ be a Banach space, $C$ a closed convex subset of $Y$ and let $A(t,\cdot) : C \to Y$, $t \in [0,T]$, be a family of operators satisfying the following properties:

    1. there exists a constant $L \ge 0$ such that $\|A(t,y_1) - A(t,y_2)\|_Y \le L\, \|y_1 - y_2\|_Y$ for every $y_1, y_2 \in C$ and $t \in [0,T]$;

    2. for every $y \in C$ the map $t \mapsto A(t,y)$ is continuous in $[0,T]$;

    3. for every $R > 0$ there exists $\theta > 0$ such that

    $$y \in C,\ \|y\|_Y \le R \quad \Rightarrow \quad y + \theta A(t,y) \in C.$$

    Then for every $\bar y \in C$ there exists a unique curve $y : [0,T] \to C$ of class $C^1$ satisfying $y_t \in C$ for all $t \in [0,T]$ and

    $$\frac{d}{dt}\, y_t = A(t,y_t) \ \text{in } [0,T], \qquad y_0 = \bar y.$$

    Moreover, if $y^1, y^2$ are the solutions starting from the initial data $\bar y^1, \bar y^2 \in C$ respectively, we have $\|y^1_t - y^2_t\|_Y \le e^{Lt}\, \|\bar y^1 - \bar y^2\|_Y$ for every $t \in [0,T]$.

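    Condition 3, the tangency/invariance condition, is precisely what keeps the replicator component of the dynamics inside $\mathscr{P}(U)$: since $\Delta$ integrates to zero against $\sigma$ and is bounded, $(1 + \theta\Delta)\sigma$ is again a probability measure for $\theta$ small. A minimal check on the simplex (the random payoffs are purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = rng.random(5)
    sigma /= sigma.sum()            # a point of P(U), U = {0, ..., 4}

    Z = rng.random(5)               # hypothetical payoffs Z(x, u) at a fixed x
    Delta = Z - Z @ sigma           # Delta(u) = Z(u) - int_U Z dsigma

    theta = 0.5 / np.abs(Delta).max()       # small enough step
    sigma_new = (1.0 + theta * Delta) * sigma

    # mass is conserved exactly, since Delta integrates to zero against sigma,
    # and nonnegativity holds because 1 + theta*Delta > 0
    print(sigma_new.sum(), sigma_new.min())
    ```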

  • Eulerian and Lagrangian solutions of the master equation

    For a flow map $Y_\Sigma$ and for every $\phi \in C^1_b([0,T] \times Y)$, the transported measures $\hat\Sigma_t := Y_\Sigma(t,0,\cdot)_\# \bar\Sigma$ solve

    $$\begin{aligned}
    \frac{d}{dt} \int_C \phi(t,y)\, d\hat\Sigma_t(y)
    &= \int_C \frac{d}{dt}\, \phi\big(t, Y_\Sigma(t,0,z)\big)\, d\bar\Sigma(z) \\
    &= \int_C \Big(\partial_t \phi(t, Y_\Sigma(t,0,z)) + D\phi(t, Y_\Sigma(t,0,z))\big(b_{\Sigma_t}(Y_\Sigma(t,0,z))\big)\Big)\, d\bar\Sigma(z) \\
    &= \int_C \Big(\partial_t \phi(t,y) + D\phi(t,y)\big(b_{\Sigma_t}(y)\big)\Big)\, d\hat\Sigma_t(y).
    \end{aligned}$$

    We look for an evolving distribution $\Sigma$ which is transported by the very vector field $b_\Sigma$ it generates, so that $\hat\Sigma = \Sigma$.

    Definition (Eulerian and Lagrangian solutions)
    Let $\Sigma \in C^0([0,T]; (\mathscr{P}_1(C), W_1))$ and $\bar\Sigma \in \mathscr{P}_1(C)$. We say that $\Sigma$ is an Eulerian solution of the initial value problem for the master equation starting from $\bar\Sigma$ if $\Sigma_0 = \bar\Sigma$ and the equation holds with $\hat\Sigma = \Sigma$. We say that $\Sigma$ is a Lagrangian solution starting from $\bar\Sigma$ if

    $$\Sigma_t = Y_\Sigma(t,0,\cdot)_\# \bar\Sigma \quad \text{for every } 0 \le t \le T,$$

    where $Y_\Sigma(t,s,y)$ are the transition maps of the characteristics.


  • Eulerian and Lagrangian solutions of the master equation

    Lagrangian solutions are Eulerian; this settles the existence problem also for Eulerian solutions. The uniqueness of Eulerian solutions is technically harder: differently from Euclidean domains, $Y$ is a separable Banach space, and there is a potential lack of density of tensor products of smooth functions in $C^0([0,T] \times Y)$.


  • Lagrangian well-posedness

    Theorem (Existence and uniqueness of Lagrangian solutions)
    Suppose that $J : (\mathbb{R}^d \times U)^2 \to \mathbb{R}$ and $e : \mathbb{R}^d \times U \to \mathbb{R}^d$ are Lipschitz maps. Then, for every $\bar\Sigma \in \mathscr{P}_1(C)$, there exists a unique Lagrangian solution $\Sigma$.
    Moreover, there exists $L \ge 0$ such that for every pair $\bar\Sigma^i$, $i = 1, 2$, of initial data in $\mathscr{P}_1(C)$, the corresponding solutions $\Sigma^i_t$ satisfy

    $$W_1(\Sigma^1_t, \Sigma^2_t) \le e^{Lt}\, W_1(\bar\Sigma^1, \bar\Sigma^2) \quad \text{for every } t \in [0,T].$$

    Proof. It follows from the stability of the flow maps w.r.t. $\Sigma$ in the Wasserstein distance and a fixed-point argument based on a contraction principle: for $\mathcal{T}[\Lambda]_t := Y_\Lambda(t,0,\cdot)_\# \bar\Sigma$, we first show that

    $$W_1\big(\mathcal{T}[\Lambda^1]_t, \mathcal{T}[\Lambda^2]_t\big) \le L \int_0^t e^{L(t-\tau)}\, W_1(\Lambda^1_\tau, \Lambda^2_\tau)\, d\tau.$$

    Then, by choosing $L' > 2L$ so that $\ell := L/(L'-L) < 1$ and defining $d(\Lambda,\Lambda') := \max_{t \in [0,T]} e^{-L't}\, W_1(\Lambda_t, \Lambda'_t)$, we get from the stability estimate $d(\mathcal{T}[\Lambda], \mathcal{T}[\Lambda']) \le \ell\, d(\Lambda,\Lambda')$.

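    The contraction step in the proof is elementary but worth spelling out: inserting the weight $e^{-L't}$ into the stability bound gives

    ```latex
    e^{-L't}\,W_1\big(\mathcal{T}[\Lambda^1]_t,\mathcal{T}[\Lambda^2]_t\big)
      \le L\,e^{-L't}\int_0^t e^{L(t-\tau)}\,e^{L'\tau}\,
          \big(e^{-L'\tau}\,W_1(\Lambda^1_\tau,\Lambda^2_\tau)\big)\,d\tau
      \le L\,d(\Lambda^1,\Lambda^2)\,e^{(L-L')t}\int_0^t e^{(L'-L)\tau}\,d\tau
      = \frac{L}{L'-L}\,\big(1-e^{(L-L')t}\big)\,d(\Lambda^1,\Lambda^2)
      \le \ell\, d(\Lambda^1,\Lambda^2),
    ```

    so $\mathcal{T}$ is a contraction in the weighted metric and Banach's fixed-point theorem applies.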

  • Uniqueness of Eulerian solutions by duality

    Theorem
    Suppose that $J : (\mathbb{R}^d \times U)^2 \to \mathbb{R}$ and $e : \mathbb{R}^d \times U \to \mathbb{R}^d$ are Lipschitz maps, with $J(\cdot,u,x',u')$ of class $C^1$ for all $(u,x',u') \in U \times \mathbb{R}^d \times U$, and $e(\cdot,u)$ of class $C^1$ for all $u \in U$. Then, for all $\bar\Sigma \in \mathscr{M}(C)$ with $\int_C |x|\, d|\bar\Sigma| < +\infty$, the master equation admits a unique solution in the class of weakly continuous maps $t \in [0,T] \mapsto \Sigma_t \in \mathscr{M}(C)$ with $\sup_t \int_C (1+|x|)\, d|\Sigma_t| < +\infty$ and $\Sigma_0 = \bar\Sigma$.


• A technical Lemma

    Lemma ($C^1$ duality). For any separable Banach space $Y$,
    $$ d_{C^1}(\mu,\nu) := \sup_{\substack{\phi \in C^1_b(Y) \\ \mathrm{Lip}(\phi) \le 1}} \left( \int_Y \phi\,d\mu - \int_Y \phi\,d\nu \right) $$
    defines a distance on $\mathcal{M}(Y)$, which coincides with $W_1$ when restricted to $\mathcal{P}_1(Y)$.

    Proof. Symmetry and the triangle inequality are easy. Non-degeneracy, $d_{C^1}(\mu,\nu) = 0 \Rightarrow \mu = \nu$, follows from the equivalence between the Borel $\sigma$-algebra and the $\sigma$-algebra generated by cylindrical functions (which are admissible test functions in $C^1_b(Y) \cap \mathrm{Lip}(Y)$). That $d_{C^1} = W_1$ on $\mathcal{P}_1(Y)$ follows because the pointwise closure, with uniform bounds and uniform Lipschitz constants, of the bounded functions $\phi \in \mathrm{Lip}_b$ such that $\int_Y \phi\,d\mu - \int_Y \phi\,d\nu \le d_{C^1}(\mu,\nu)$ coincides with that of the functions in $C^1_b \cap \mathrm{Lip}_b$ with the same property.
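The coincidence $d_{C^1} = W_1$ on $\mathcal{P}_1(Y)$ can be probed numerically. The following sketch (not from the slides; the distributions and the test function are illustrative assumptions) uses two 1D facts: for equal-size samples, $W_1$ of the empirical measures equals the mean gap between sorted samples, and any smooth 1-Lipschitz $\phi \in C^1_b$ gives a lower bound through the dual formula.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 2000)   # samples of mu (assumed N(0,1))
nu = rng.normal(1.0, 1.0, 2000)   # samples of nu (assumed N(1,1))

# Primal: in 1D, W1 of two equal-size empirical measures is the
# mean coordinate gap of the sorted samples (sorted coupling is optimal)
w1 = np.mean(np.abs(np.sort(mu) - np.sort(nu)))

# Dual: tanh is smooth, bounded and 1-Lipschitz, hence an admissible
# test function in C^1_b with Lip <= 1; it yields a lower bound on d_{C^1}
dual = np.mean(np.tanh(mu)) - np.mean(np.tanh(nu))

print(w1, abs(dual))  # |dual| never exceeds w1
```

The gap between `abs(dual)` and `w1` reflects only that `tanh` is not the optimal Kantorovich potential for this pair of measures.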


• Uniqueness of Eulerian solutions by duality: proof continued ...

    Proof. The difference $\Sigma_t := \Sigma^1_t - \Sigma^2_t$ solves
    $$ \frac{d}{dt}\Sigma_t + \mathrm{div}(b_{\Sigma^1_t}\Sigma_t) = -\mathrm{div}(b_t\,\Sigma^2_t), \qquad b_t := b_{\Sigma^1_t} - b_{\Sigma^2_t}. $$
    We consider
    $$ \begin{cases} \partial_t g_t + Dg_t(b_{\Sigma^1_t}) = r_t & \text{in } [0,h] \times C, \\ g_h = 0 & \text{in } C, \end{cases} $$
    with explicit solution $g_t(y) = -\int_t^h r_s(Y_{\Sigma^1}(s,t,y))\,ds$. Since $g_t\Sigma_t$ vanishes at $t=0$ and $t=h$,
    $$ 0 = \int_0^h \frac{d}{dt}\int_C g_t\,d\Sigma_t\,dt = \int_0^h\!\!\int_C \big(\partial_t g_t + Dg_t(b_{\Sigma^1_t})\big)\,d\Sigma_t\,dt + \int_0^h\!\!\int_C Dg_t(b_t)\,d\Sigma^2_t\,dt, $$
    and
    $$ \int_0^h d_{C^1}(\Sigma^1_t,\Sigma^2_t)\,dt = \sup_{\substack{r \in C^1_b \\ \mathrm{Lip}(r) \le 1}} \int_0^h\!\!\int_C r_t\,d\Sigma_t\,dt \le \int_0^h\!\!\int_C |Dg_t(b_t)|\,d|\Sigma^2_t|\,dt \le [\dots] \le Ch \int_0^h d_{C^1}(\Sigma^1_t,\Sigma^2_t)\,dt, $$
    so that the curves $\Sigma^1$ and $\Sigma^2$ coincide in $[0,h]$.
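The final step is an absorption argument. Assuming, as the omitted estimates $[\dots]$ indicate, that the constant $C$ does not depend on $h$, one can sketch it as:

```latex
\int_0^h d_{C^1}(\Sigma^1_t,\Sigma^2_t)\,dt
  \;\le\; Ch \int_0^h d_{C^1}(\Sigma^1_t,\Sigma^2_t)\,dt
\quad\Longrightarrow\quad
(1 - Ch)\int_0^h d_{C^1}(\Sigma^1_t,\Sigma^2_t)\,dt \;\le\; 0 ,
```

so for $h < 1/C$ the (nonnegative) integral must vanish, giving $\Sigma^1_t = \Sigma^2_t$ on $[0,h]$; iterating on $[h,2h], [2h,3h], \dots$ extends uniqueness to all of $[0,T]$.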


• Uniqueness of Eulerian solutions by superposition principle

    Theorem. Let $\bar\Sigma \in \mathcal{P}_1(C)$, let $f : C \times C \to Y$ be $L$-Lipschitz, and let $b_\Sigma$ be defined as
    $$ b_\Sigma(t,y) = b_{\Sigma_t}(y) = \int_C f(y,y')\,d\Sigma_t(y'). $$
    Then there is a unique $\Sigma \in C([0,T];\mathcal{P}_1(C))$ with $\Sigma_0 = \bar\Sigma$, satisfying
    $$ \int_C \phi(t,y)\,d\Sigma_t(y) - \int_C \phi(0,y)\,d\Sigma_0(y) = \int_0^t\!\!\int_C \big(\partial_s\phi(s,y) + D\phi(s,y)(b_{\Sigma_s}(y))\big)\,d\Sigma_s(y)\,ds $$
    for all $\phi \in C^1_b([0,T] \times Y)$.
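A minimal particle sketch of this nonlocal continuity equation (not from the slides; the interaction kernel $f(y,y') = \sin(y'-y)$, the particle count, and the step size are illustrative assumptions): the empirical measure of $N$ characteristics, each driven by $b_\Sigma$ evaluated against the empirical measure itself, transports the initial datum.

```python
import numpy as np

# Assumed L-Lipschitz interaction kernel (Kuramoto-type, for illustration)
def f(y, yp):
    return np.sin(yp - y)

rng = np.random.default_rng(3)
y = rng.uniform(-1.0, 1.0, 100)  # particles sampled from the initial datum
dt = 0.01
for _ in range(1000):
    # b_Sigma(t, y_i) = (1/N) * sum_j f(y_i, y_j): the field induced by
    # the empirical measure Sigma_t of the particles themselves
    b = np.array([f(yi, y).mean() for yi in y])
    y = y + dt * b               # explicit Euler step along characteristics

print(y.std())  # the attractive sin-coupling drives the particles together
```

With this kernel the particles contract toward consensus, so the empirical measure converges to a Dirac mass, consistent with the unique solution of the transport identity above.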

• Superposition principle in separable Banach spaces

    Theorem. Let $(Y, \|\cdot\|_Y)$ be a separable Banach space, let $b : (0,T) \times Y \to Y$ be a Borel vector field, and let $\mu_t \in \mathcal{P}(Y)$, $t \in [0,T]$, be a continuous curve with $\int_0^T\!\int_Y \|b_t\|_Y\,d\mu_t\,dt < +\infty$. If
    $$ \frac{d}{dt}\mu_t + \mathrm{div}(b_t\mu_t) = 0 $$
    in duality with cylindrical functions $\phi \in C^1_b(Y)$, then there exists $\eta \in \mathcal{P}(C([0,T];Y))$ concentrated on absolutely continuous solutions to the ODE $\dot y = b_t(y)$ and with $\mathrm{ev}_t(\eta) = \mu_t$ for all $t \in [0,T]$.

    Proof. Hierarchy of superposition principles:
    $$ Y = \mathbb{R}^N \text{ with } \int_0^T\!\!\int_Y \|b_t\\|_Y^p\,d\mu_t\,dt < +\infty,\ p > 1 \;\Rightarrow\; Y = \mathbb{R}^N \text{ with } \int_0^T\!\!\int_Y \|b_t\|_Y\,d\mu_t\,dt < +\infty \;\Rightarrow\; Y = \mathbb{R}^\infty \;\Rightarrow\; Y \text{ separable Banach}, $$
    the last step via the identification map $E : Y \to \ell^\infty \subset \mathbb{R}^\infty$ defined by $E(y) := (\langle y, z'_1\rangle, \langle y, z'_2\rangle, \dots)$.
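As a sanity check of the statement in the simplest setting (a toy example, not from the slides): for $Y = \mathbb{R}$ and the linear field $b_t(y) = -y$, the representing measure $\eta$ is the push-forward of $\mu_0$ along the explicit flow $Y(t,y) = y e^{-t}$, and $\mathrm{ev}_t(\eta) = \mu_t$ can be verified on moments, since $\mu_0 = N(0,1)$ gives $\mu_t = N(0, e^{-2t})$.

```python
import numpy as np

rng = np.random.default_rng(1)
y0 = rng.normal(0.0, 1.0, 50_000)   # samples of the initial law mu_0 = N(0,1)

def flow(t, y):
    # Characteristics of b_t(y) = -y:  Y(t, y) = y * exp(-t)
    return y * np.exp(-t)

t = 0.7
yt = flow(t, y0)                    # ev_t(eta): push samples along the ODE flow

# mu_t should be N(0, exp(-2t)); compare the empirical second moment
emp_var = yt.var()
exact_var = np.exp(-2 * t)
print(emp_var, exact_var)
```

The Monte Carlo variance matches the closed-form value up to sampling error, illustrating that the curve $t \mapsto \mu_t$ is indeed represented by a measure on solution trajectories.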


• Uniqueness of Eulerian solutions by superposition principle: proof continues ...

    Proof. Consider Eulerian solutions $\Sigma^1$ and $\Sigma^2$ starting from the same initial datum $\bar\Sigma$, denote by $b^1(t,y)$ and $b^2(t,y)$ the respective velocity fields, and by $Y^1(t,y)$, $Y^2(t,y)$ the respective flow maps. By superposition, $\Sigma^i_t = \mathrm{ev}_t(\eta^i)$ for suitable $\eta^i \in \mathcal{P}(C([0,T];Y))$ concentrated on absolutely continuous solutions in $[0,T]$ to the Cauchy problem
    $$ \dot y = b_{\Sigma^i}(t,y) \quad \text{in } (0,T). $$
    By uniqueness of characteristics, $\eta^i_y = \delta_{Y^i(\cdot,y)}$. It follows that $\Sigma^i_t = Y^i(t,\cdot)_\#\bar\Sigma$ for all $t \in [0,T]$ is the unique Lagrangian solution.


• What about equilibria? (joint work with Nathanael Bosch)

    Let $g : \mathbb{R} \to \mathbb{R}_+$ be a perhaps mildly smooth, but certainly nonconvex, function with one or more global minimizers. Can we design an evolutionary game of $N$ players (for $N$ large) that converges to one of the global minimizers? Consider as set of strategies the interval $U = [-a,a]$, for $a > 0$ large enough, the payoff functional
    $$ J_\epsilon(x,u,x') = \exp\left(-\frac{\big(u - \tanh(3(g(x)-g(x')))_+\,(x'-x)\big)^2}{|x-x'|^2 + \epsilon}\right), $$
    the weight functions $\omega^\alpha_g(x) := \exp(-\alpha g(x))$ and
    $$ \omega^\alpha_\Sigma(y) = \omega^\alpha_\Sigma(x,\sigma) = \frac{\omega^\alpha_g(x)}{\int_C \omega^\alpha_g(x')\,d\Sigma(x',\sigma')}, $$
    and the weighted interaction potential
    $$ \Delta_{\Sigma,(x,\sigma)}(u) := \int_C \omega^\alpha_\Sigma(x',\sigma')\left(\int_U J_\epsilon(x,u,x')\,d\sigma'(u') - \int_U\!\!\int_U J_\epsilon(x,w,x')\,d\sigma'(u')\,d\sigma(w)\right)d\Sigma(x',\sigma'). $$
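The role of the Gibbs-type weights $\omega^\alpha_g = \exp(-\alpha g)$ can be illustrated with a drastically simplified particle caricature, in the spirit of consensus-based optimization: strategies are collapsed into a deterministic drift toward the $\exp(-\alpha g)$-weighted mean. This is an illustrative sketch, not the dynamics above; the objective $g$, all parameters, and the noise term are assumptions.

```python
import numpy as np

# Assumed nonconvex objective: global minimum at x = 0, many local minima
def g(x):
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(2)
x = rng.uniform(-3.0, 3.0, 200)  # N players' spatial positions
alpha, dt = 50.0, 0.1

for _ in range(400):
    gx = g(x)
    w = np.exp(-alpha * (gx - gx.min()))        # stabilised weights omega_g^alpha
    m = np.sum(w * x) / np.sum(w)               # weighted consensus point
    # drift toward m plus a noise term that vanishes at consensus
    x += dt * (m - x) + 0.3 * np.abs(x - m) * np.sqrt(dt) * rng.normal(size=x.size)

print(x.mean())  # the ensemble concentrates near the global minimizer
```

For large $\alpha$ the weighted mean $m$ is dominated by the currently best player (a Laplace-principle effect), which is what lets the ensemble escape the local minima of $g$.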


• Global function minimization

    [Embedded mp4 video demonstrations of the global minimization dynamics omitted.]