The replicator dynamics of generalized Nash games

Generalized Nash Games are a powerful modelling tool, first introduced in the 1950s. They have seen important developments in the past two decades. Separately, Evolutionary Games were introduced in the 1960s and seek to describe how natural selection can drive phenotypic changes in interacting populations. In this paper, we show how the dynamics of these two independently formulated models can be linked under a common framework and how this framework can be used to expand Evolutionary Games. At the center of this unified model is the Replicator Equation and the relationship we establish between it and the lesser-known Projected Dynamical System.


Introduction
Nash Games as we know them were first introduced in 1950 by Nash [17] and have a wide array of applications in the applied sciences, most notably economics and engineering. The Generalized Nash Game, the subject of this paper, was described only four years later by Arrow and Debreu [1], but took much longer to develop and has not yet gained the currency of its precursor.
A Generalized Nash Game (GN) seeks to describe a situation where each player's choice of strategy somehow affects the choices available to their opponents. However, since all players move simultaneously, this leads to games that cannot actually be played, at least not in the traditional sense. To illustrate, a classic example of a Nash Game is rock-paper-scissors. A GN version of this is rock-paper-scissors where ties are prohibited, i.e. if one player picks rock, the other cannot also pick rock. But this game is impossible to play against another individual, because a player cannot possibly know in advance what their opponent is going to pick, and thus cannot knowingly adhere to the mentioned restriction. For this reason, Ichiishi calls them pseudo-games [13]. Despite this artificiality, there are settings where outside forces ensure the satisfaction of the constraints and, moreover, the model has explanatory value even in circumstances where this is not the case [9].
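For readers who prefer a computational illustration, the classical (unconstrained) rock-paper-scissors game mentioned above can be checked numerically. The following sketch is ours, not part of the formal development; it uses the textbook payoff matrix and verifies that uniform play is a Nash Equilibrium:

```python
import numpy as np

# Standard rock-paper-scissors payoff matrix (row player):
# A[i, j] is the payoff of pure strategy i against pure strategy j.
A = np.array([[ 0, -1,  1],   # rock
              [ 1,  0, -1],   # paper
              [-1,  1,  0]])  # scissors

def payoff(p, q, A=A):
    """Expected payoff pi(p, q) of mixed strategy p against mixed strategy q."""
    return p @ A @ q

# The uniform mixed strategy is the classic Nash equilibrium of RPS:
x_star = np.array([1/3, 1/3, 1/3])

# No pure strategy does better against x_star than x_star does against itself:
best_deviation = max(payoff(e, x_star) for e in np.eye(3))
print(best_deviation <= payoff(x_star, x_star) + 1e-12)  # True
```

The no-tie constraint of the GN version could not be checked this way: it couples the players' feasible sets, which is exactly the feature that standard Nash machinery does not capture.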
In general, it is difficult to find the equilibrium points of a GN. However, the problem becomes easier if each player is subject to identical constraints (as far as the variables under that player's control are concerned). This is known as a GNSC (Generalized Nash Game with Shared Constraints). There are many computational methods for finding some equilibria of a GNSC [9], and recent work gives a method for extracting all such points [15,6].
Another type of game we consider is the Evolutionary Game, first described by Maynard Smith [19]. Evolutionary games seek to model the evolution of phenotypes as a function of natural selection, such as in the well-known Hawk-Dove game [12]. The dynamics of these games can often be described by the Replicator Equation of Taylor and Jonker [20].
In this paper we build a bridge between the most fundamental type of Evolutionary Game and the GNSC, by establishing a connection between their Nash Equilibria via the Replicator Equation. This bridge allows us to extend the existing model to accommodate new types of problems, as GNSC's are richer and more diverse than population games. We build this bridge by extending the Replicator Equation so that it may be applied to GNSCs and we derive this extension using an analogy between the Replicator Equation and what is known as the Projected Dynamical System.
The Projected Dynamical System (PDS) is a type of discontinuous dynamical system that was first introduced in the 1970s by Henry [11], studied further in the 1990s by Dupuis and Nagurney [8], and extended in the early 2000s by Cojocaru and Jonker [5] to Hilbert spaces of any dimension. PDS are intimately linked to GNSCs in that the steady state set of a PDS is a subset of the Nash Equilibria of its associated GNSC. The PDS is useful to us in this paper because it gives a known, distinct game dynamic that is already applicable to Evolutionary Games and GNSCs alike.
The relationship between the Projection Dynamic and the Replicator Dynamic was first studied by Lahkar and Sandholm [18,14]. In their papers, the authors elucidate similarities between the revision protocols implied by the two dynamics, and establish some properties of the solution trajectories of these systems for population games. In this paper we aim to expand this analogy beyond population games, by extending the Replicator Equation and showing that a key theorem still holds relating the rest points of our extended Replicator Equation to those of the Projected Dynamical System. In doing so, we allow for these two dynamics to be considered for GNSCs, which are varied and more general than population games.
To show that our extension of the Replicator Equation is useful, we prove a part of the Folk Theorem of Evolutionary Game Theory, namely that every stable rest point of the Replicator Equation is a Nash Equilibrium of the corresponding Population Game [7,12]. As such, our Theorem 4.2 generalizes this aspect of the Folk Theorem so that it may be applied to any GNSC defined on a polytope. Then, after associating these two concepts, we show how we can extend Evolutionary Games under our new framework in Section 5. But to accomplish this, we first need a way to frame population dynamics problems as Nash Games, which we illustrate in Section 3 for a standard Nash Game, and in Section 5 for a GNSC.

Brief mathematical background

Convex analysis. We recall some basic definitions used in convex analysis, for ease of reading. Most of these definitions and results are drawn from, or based on, those found in [3]. Given a finite set of vectors β = {x_1, . . . , x_m}, where each x_i ∈ R^n, we say that a vector y is an affine combination of β if we can find λ_1, . . . , λ_m ∈ R such that

y = λ_1 x_1 + . . . + λ_m x_m,  λ_1 + . . . + λ_m = 1.

If each λ_i ≥ 0, then we say that y is a convex combination of β.
A set K ⊆ R^n is said to be convex if K contains every convex combination of vectors in K. Given a set K ⊆ R^n, we can construct a convex set by taking all convex combinations of vectors in K, or construct an affine set by taking all affine combinations. We call these the convex hull and the affine hull of K respectively, and we formally define them as

conv(K) = {λ_1 x_1 + . . . + λ_m x_m : m ∈ N, each x_i ∈ K, each λ_i ≥ 0, λ_1 + . . . + λ_m = 1},
aff(K) = {λ_1 x_1 + . . . + λ_m x_m : m ∈ N, each x_i ∈ K, λ_1 + . . . + λ_m = 1}.

The affine hull is important for defining the relative interior of a set. In optimization we are often working with low dimensional sets embedded in higher dimensional spaces, so we need a more general notion of the interior of a set: the relative interior fulfills this role. Given some set K, the relative interior of K is defined as

ri(K) = {x ∈ K : B_ε(x) ∩ aff(K) ⊆ K for some ε > 0},

where B_ε(x) is the open ball of radius ε, centered at x.
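As a concrete illustration of these definitions (a sketch of ours, not part of the formal development), membership of a point in the convex hull of an affinely independent set can be tested via barycentric coordinates, which are exactly the combination weights λ_i:

```python
import numpy as np

# A triangle in R^2: beta = {x1, x2, x3}.  For affinely independent
# points the weights of an affine combination are unique (barycentric
# coordinates) and are found by one linear solve.
x1, x2, x3 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])

def barycentric(y):
    """Weights (l1, l2, l3) with y = l1*x1 + l2*x2 + l3*x3 and sum 1."""
    M = np.column_stack([x1 - x3, x2 - x3])
    l12 = np.linalg.solve(M, y - x3)
    return np.array([l12[0], l12[1], 1.0 - l12.sum()])

def in_convex_hull(y, tol=1e-12):
    # y is a convex combination iff every barycentric weight is >= 0.
    return bool((barycentric(y) >= -tol).all())

print(in_convex_hull(np.array([0.25, 0.25])))  # True: convex combination
print(in_convex_hull(np.array([0.8, 0.8])))    # False: affine combination only
```

Note that every point of the plane is an affine combination of the three vertices (the weights always sum to one); only the sign constraint distinguishes the convex hull.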
We also often consider the normal cone of a convex set. Given some convex set K ⊆ R^n, we define the normal cone of K at a point x ∈ K as

N_K(x) = {v ∈ R^n : v · (y − x) ≤ 0 for every y ∈ K}.

In addition to convex sets, we also consider convex functions. Given some convex set K ⊆ R^n, a function θ : K → R is said to be convex if for every t ∈ [0, 1] and x_1, x_2 ∈ K, we have

θ(t x_1 + (1 − t) x_2) ≤ t θ(x_1) + (1 − t) θ(x_2).

Polytopes. Now we give a brief review of polytopes. A bounded, convex polytope is defined as the convex hull of a non-empty finite set of real points. For simplicity, we will just call this a polytope for the remainder of our paper. Let P be a polytope. The set s = {x_1, . . . , x_n} such that P = conv(s) is not unique; however, there does exist a unique minimal spanning set. We call this set ext(P) and its elements are called the vertices of P [14]. A convex subset F of P is called a face when, for every distinct x, y ∈ P, if the line segment connecting x and y intersects F at some point other than x or y, then F contains the entire line segment. A face is itself a polytope, the convex hull of some subset of the vertices of P; therefore we can denote a face F_r where r ⊆ ext(P). Note that P = F_ext(P) is a face of itself. If the line segment connecting two vertices V_i, V_j is a face of P, we call them adjacent vertices and denote this relationship V_i V_j.
Theorem 2.1. Suppose P is a polytope and F_r is a face of P with r = {V_1, . . . , V_n}. Let x* ∈ F_r. Then for every V_i ∈ r, we have that

span(F_r − x*) = span{V_j − V_i : V_j ∈ r, V_i V_j}.

Proof. This follows from Corollary 11.7 in [10].

For any face F_r, we define dim(F_r) as the dimension of the vector space associated with the affine hull of F_r. Equivalently, for every x* ∈ F_r, we have that dim(F_r) = dim(span(F_r − x*)).
Theorem 2.2. Suppose F r is a face of some polytope K and x ∈ F r . Then F r is the lowest dimensional face containing x iff x ∈ ri(F r ).
If F_r′ is a face of F_r and dim(F_r′) < dim(F_r), we say that F_r′ is a proper face of F_r. If dim(F_r′) = dim(F_r) − 1, we call F_r′ a facet of F_r.

Theorem 2.3. Suppose F_r is a face of some polytope K and dim(F_r) ≤ dim(K) − 2. Then F_r is the intersection of at least two facets of K. If dim(F_r) = dim(K) − 1, then F_r is the intersection of exactly one facet of K with K.
We say that two distinct faces F_r and F_r′ are adjacent if F_r ∩ F_r′ is nonempty and dim(F_r) = dim(F_r′).
Lemma 2.4. If F_r is a facet of K, x ∈ F_r and x ∉ ri(F_r), then x ∈ F_r′ for some facet F_r′ adjacent to F_r.
Proof. Since x ∉ ri(F_r), by Theorem 2.2 we can find a face F_r* with dim(F_r*) < dim(F_r) such that x ∈ F_r*. Then dim(F_r*) ≤ dim(K) − 2, hence by Theorem 2.3, F_r* is the intersection of at least two facets of K. Any such facet is adjacent to F_r and contains x.
We can equivalently describe a bounded convex polytope K in terms of a set of affine functions H = {g_1, . . . , g_n}, where K = {x : g_1(x) ≥ 0, . . . , g_n(x) ≥ 0}. We call this the halfspace representation and, just like with the vertex representation, the choice of H is not unique. The fact that we can express bounded polytopes in these two different ways was first proved by Krein and Milman [16]. Any face of K is just K intersected with some combination of hyperplanes g_i(x) = 0, where g_i ∈ H. Given such a representation of K, we can define the faces based on which functions g_i are nonzero somewhere on that face. Specifically, for h ⊆ H we can denote the faces of K as

F_h = {x ∈ K : g(x) = 0 for every g ∈ H \ h}.

We will refer to the set h as the halfspace identifier of the face F_h with respect to H. We now have both a top-down and a bottom-up representation for a face.

Lemma 2.5. Let F_h be a face of K and let x* ∈ ri(F_h). Then g(x*) > 0 for every g ∈ h.

Proof. Let g_i ∈ h. By the definition of F_h there is some y ∈ F_h with g_i(y) > 0. Consider the segment γ(λ) = (1 − λ)y + λx*, λ ∈ [0, 1]. Since x* ∈ ri(F_h) we can extend this segment beyond x* slightly so that γ(λ) ∈ F_h for all λ ∈ [0, 1 + ε] for some ε > 0. Since g_i(x) is affine, this gives us

0 ≤ g_i(γ(1 + ε)) = (1 + ε) g_i(x*) − ε g_i(y),

hence g_i(x*) ≥ ε g_i(y)/(1 + ε) > 0.
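As an illustration of the halfspace representation (a sketch of ours, using the simplex ∆_3 with the assumed constraints g_i(x) = x_i), the halfspace identifier of the minimal face containing a point can be read off from which constraints are nonzero there:

```python
# For the simplex Delta_3 = {x : x_i >= 0, sum x_i = 1}, take the
# halfspace representation H = {g_0, g_1, g_2} with g_i(x) = x_i.
# For a point x in the relative interior of its minimal face, the
# halfspace identifier h collects the constraints that are nonzero
# there (an illustrative convention following the text above).
def halfspace_identifier(x, tol=1e-12):
    return {i for i, xi in enumerate(x) if xi > tol}

print(halfspace_identifier([1/3, 1/3, 1/3]))  # {0, 1, 2}: all of Delta_3
print(halfspace_identifier([0.5, 0.5, 0.0]))  # {0, 1}: an edge
print(halfspace_identifier([1.0, 0.0, 0.0]))  # {0}: a vertex
```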

Variational Inequalities and Projected Dynamical Systems.
Given a subset K of R^n and a mapping F : R^n → R^n, the Variational Inequality Problem (VI) is to find a solution x* ∈ K to the inequality (see [5])

⟨F(x*), y − x*⟩ ≥ 0 for every y ∈ K. (2.1)

A Projected Differential Equation is defined as follows. Given some closed convex set K ⊆ R^n, we define the projection operator P_K at a point x as the unique element P_K(x) ∈ K such that

‖x − P_K(x)‖ = inf_{y ∈ K} ‖x − y‖.

We then define the vector projection operator of a vector v ∈ R^n at a point x ∈ K as

Π_K(x, v) = lim_{δ → 0+} (P_K(x + δv) − x)/δ.

Note that this is equivalent to taking the Gateaux derivative of the projection operator onto K, in the direction of v. Now, for some vector field −F : R^n → R^n, the class of differential equations known as Projected Differential Equations takes the form

ẋ = Π_K(x, −F(x)). (2.2)

Last but not least, it is known from [5] that if K is closed and convex, then x* is a rest point of (2.2), i.e. Π_K(x*, −F(x*)) = 0, if and only if x* solves the VI (2.1). In general the projected system introduced here has a discontinuous right-hand side, though existence and qualitative studies have been known since the first work by Henry in the 1970s [11] and Aubin and Cellina in the 1980s [2], followed up by many others (see [8,5], see [4] and the extensive references therein). There is a similar notion of a projected equation (see [2,21]) defined by

ẋ = P_K(x − αF(x)) − x. (2.3)

The right-hand side of this equation is continuous and, with good values of α, solutions of this projected equation amount to "smoothed out" (differentiable) approximations of the trajectories of (2.2). In general it is up to practitioners to choose one of the two versions of the projected system. If continuity in the equation's vector field is desired, then equation (2.3) can be considered. We take here the point of view of the PDS (2.2).
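To make the projection operators concrete, the following sketch (ours; the sort-based simplex projection is a standard algorithm, and the Euler scheme is only a crude discretization of the projected dynamics) illustrates how fixed points of a projected iteration solve the corresponding VI:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection P_K(v) onto the simplex K = Delta_n,
    via the standard sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.max(np.where(u - css / np.arange(1, len(v) + 1) > 0)[0])
    theta = css[rho] / (rho + 1)
    return np.maximum(v - theta, 0.0)

def pds_euler(F, x0, tau=0.01, steps=20000):
    """Explicit Euler scheme x_{k+1} = P_K(x_k - tau * F(x_k)).
    Fixed points of this iteration are exactly the VI solutions."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = project_simplex(x - tau * F(x))
    return x

# Illustrative choice (an assumption of ours): F(x) = x - c is strongly
# monotone, and the unique VI solution is the projection of c onto K.
c = np.array([0.7, 0.5, -0.2])
x_inf = pds_euler(lambda x: x - c, np.array([1/3, 1/3, 1/3]))
print(np.allclose(x_inf, project_simplex(c), atol=1e-6))  # True
```

The iteration is a contraction here, so it converges geometrically; for a general F only the fixed-point characterization, not convergence, is guaranteed.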

Games and the Replicator Equation
A Generalized Nash Game (GN) is characterized by a finite set of players {P_1, . . . , P_n}, where player P_i controls the variable x_i and has an objective function θ_i(x_1, . . . , x_n). The goal of each player is to minimize their objective function subject to some constraint set K_i(x_−i), where x_−i denotes the variables of all players other than P_i. The key feature here is that each K_i depends on variables beyond Player i's control. A Nash Equilibrium is any strategy (x_1, . . . , x_n) where no player can lower their objective function by unilaterally altering their strategy, i.e. for every i ∈ {1, . . . , n} and every y_i ∈ K_i(x_−i),

θ_i(x_i, x_−i) ≤ θ_i(y_i, x_−i).

Here is the basic form of a Generalized Nash Game: each player P_i solves

min_{x_i} θ_i(x_i, x_−i) subject to x_i ∈ K_i(x_−i).

For the remainder of this paper we will assume that each K_i(x_−i) is closed and convex and each ∇θ_i is Lipschitz continuous. If there additionally exists a convex set K such that for each i we have

K_i(x_−i) = {x_i : (x_i, x_−i) ∈ K},

then we call the game a GNSC, or Generalized Nash Game with Shared Constraints, so named because, in the case of sets defined by inequalities, it is easy to see that this restriction amounts to saying that each player's strategy set K_i(x_−i) can be defined by the exact same set of inequality constraints. Now let us turn to a more specific kind of problem, Evolutionary Games. An evolutionary game is a game where there is a population of agents whose strategies evolve according to some rule, which may model various adaptation processes. In the simple two-player symmetric case, Evolutionary Games are matrix games where each member of a population has a choice among n pure strategies {e_1, . . . , e_n}. In the associated matrix A, the entry A_ij = π(e_i, e_j) is the payoff of playing pure strategy e_i in a population that exclusively plays strategy e_j [7]. All strategies must belong to the simplex

∆_n = {(p_1, . . . , p_n) : p_i ≥ 0 for each i, p_1 + . . . + p_n = 1}.

A Nash Equilibrium is any strategy x ∈ ∆_n such that [12]

π(p, x) ≤ π(x, x) for every p ∈ ∆_n.
Although these games are usually described in the literature as we did above, it is easy to see that they are essentially solving the following Nash Game over K = ∆_n: player one controls x, and player two controls a shadow variable y that tests strategy x against all other strategies, and whose objective function ensures x = y at all solutions. This system is known from [9] to share its Nash Equilibria with the solutions of an associated Variational Inequality (3.1). Note that at any solution to (3.1) we must have that y* − x* = 0, for otherwise we could always find some y ∈ R^n making the inner product negative. Therefore (3.1) shares its solutions with those of the Variational Inequality

⟨−Ax*, p − x*⟩ ≥ 0 for every p ∈ ∆_n, (3.2)

which is known from [8] to share its solutions with the rest points of the Projected Dynamical System

ẋ* = Π_∆n(x*, Ax*).

For simplicity of notation, let us denote x := x* going forward, hence we write

ẋ = Π_∆n(x, Ax). (3.3)

It is also known [7] that the Replicator Equation associated to our game is

ẋ_i = x_i((Ax)_i − x · Ax), i = 1, . . . , n. (3.4)

We therefore have two different dynamics we can use to study evolutionary games, (3.3) and (3.4). In [18,14], the authors thoroughly study the relationship between these two dynamics and the revision protocol implied by each. We would like to extend these dynamics to the much more general problem of GNSCs. The projection dynamic, of course, is already used extensively in the study of GNSCs. However, the Replicator Equation has not, to our knowledge, ever been applied to GNSCs; equation (3.4) only gives us the dynamics of very elementary evolutionary games. Versions of (3.4) have been adapted for more sophisticated kinds of population games (see [7] for an overview), but so far there is no analogue for anything as broad as GNSCs. In the next section we build this analogue, by devising a method to derive a Replicator Equation from a given Projected Dynamical System.
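The two dynamics just discussed can be simulated directly. The following sketch (ours) integrates the Replicator Equation for rock-paper-scissors with a simple Euler step and checks two basic facts: the uniform strategy is a rest point, and the simplex is invariant:

```python
import numpy as np

def replicator_rhs(x, A):
    """Replicator vector field: xdot_i = x_i * ((A x)_i - x . A x)."""
    Ax = A @ x
    return x * (Ax - x @ Ax)

def simulate(x0, A, dt=1e-3, steps=5000):
    """Explicit Euler integration (a sketch; production code would use
    an adaptive ODE solver)."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x + dt * replicator_rhs(x, A)
    return x

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS

x_star = np.array([1/3, 1/3, 1/3])  # interior Nash equilibrium
print(np.allclose(replicator_rhs(x_star, A), 0.0))  # rest point: True

x_t = simulate(np.array([0.5, 0.3, 0.2]), A)
print(abs(x_t.sum() - 1.0) < 1e-6)  # simplex invariance: True
```

Invariance of the simplex holds because the components of the replicator field always sum to zero, so no projection step is needed, in contrast to the PDS.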

Extending the Replicator Equation.
It is known, by the Folk Theorem of Evolutionary Game Theory [7], that any stable rest point of (3.4) in K = ∆_n must be a Nash Equilibrium. This implies that such a point is also a rest point of (3.3), which raises the question: what is the relationship between the system in (3.4) and the system in (3.3)? Notice that we can rewrite (3.4) in the following way ([·] denotes the Iverson bracket):

ẋ = Σ_{i<j} [e_i e_j] x_i x_j (Ax · (e_i − e_j))(e_i − e_j),

where e_i and e_j are just the coordinates of two adjacent vertices on the simplex ∆_n (on ∆_n every pair of vertices is adjacent, so [e_i e_j] = 1 for every i < j), (Ax · (e_i − e_j))(e_i − e_j) is the un-normalized projection of Ax onto the line connecting these two vertices, and x_i and x_j are just the constraints which aren't identically zero on that line. This system is similar to (3.3); however, we exclusively project onto the edges of the polytope K instead of the tangent cone of the entire set. This system is continuous, but the cost of this continuity is that we generate new rest points which are not necessarily equilibria of the original game. However, we can find some, but not all, of these equilibria via stability tests [7].
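The rewriting just described can be verified numerically. The following sketch (ours, for n = 3) checks that the sum of un-normalized edge projections coincides with the standard replicator field at a random point of the simplex:

```python
import numpy as np

# Check that the edge-projection form
#   xdot = sum_{i<j} x_i x_j (Ax . (e_i - e_j)) (e_i - e_j)
# agrees with the standard replicator field x_i((Ax)_i - x . Ax).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
E = np.eye(3)

def edge_form(x, A):
    Ax = A @ x
    xdot = np.zeros_like(x)
    for i in range(3):
        for j in range(i + 1, 3):
            d = E[i] - E[j]  # direction of the edge e_i e_j
            xdot += x[i] * x[j] * (Ax @ d) * d
    return xdot

def replicator(x, A):
    Ax = A @ x
    return x * (Ax - x @ Ax)

x = rng.dirichlet(np.ones(3))  # random point in Delta_3
print(np.allclose(edge_form(x, A), replicator(x, A)))  # True
```

The agreement is exact (up to floating point): expanding the k-th component of the edge sum recovers x_k((Ax)_k − x · Ax) term by term.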
In the spirit of this process, let us now try to apply this technique to a more general type of problem. Suppose K is a bounded convex polytope in R^n (see Section 2 for background on polytopes). Then K has some halfspace representation where each g_i is affine and K = {x : g_1(x) ≥ 0, . . . , g_k(x) ≥ 0}. Let −F : R^n → R^n be Lipschitz continuous. Consider the Projected Differential Equation

ẋ = Π_K(x, −F(x)). (4.2)

Number the vertices of K as {V_1, . . . , V_m}. For each pair of adjacent vertices V_i V_j, we can find a halfspace identifier h_ij ⊆ H for the edge connecting these two vertices. Let g_ij = ∏_{g ∈ h_ij} g. Then, mirroring the procedure we used with the Replicator Equation, we can consider the following classical dynamical system:

ẋ = Σ_{i<j, V_i V_j} g_ij(x)(−F(x) · (V_i − V_j))(V_i − V_j). (4.3)

While the Folk Theorem provides a short and simple proof that stable rest points of (3.4) are Nash Equilibria (see [12]), there is no obvious way to extend this proof to GNSCs. Therefore the next subsection is dedicated to proving that a result analogous to a part of the Folk Theorem holds for our system in (4.3) (Theorem 4.2). Equipped with this theorem we can then relate the rest points of our system in (4.3) to the Nash Equilibria of our original GNSC, which we do in Section 5.

Connecting the extended Replicator Equation to GNSCs

For absolute clarity, in the theorems that follow, when we say stable rest point we mean the usual definition: a rest point x* is stable if and only if for every ε > 0 there exists a δ > 0 such that, for every solution x(t), if ‖x(t_0) − x*‖ < δ, then ‖x(t) − x*‖ < ε for every t ≥ t_0. Also, when we say that a system is face invariant, we mean that if any solution x(t) lies on some face at time t_0, then it will remain on that face for all t ≥ t_0 (usually called forward invariance).

Theorem 4.1. The system (4.3) is face invariant on K.

Theorem 4.2. Every stable rest point of (4.3) is a rest point of (4.2).

Before we can prove these theorems, we need four more lemmas about polytopes.
Lemma 4.3. Let F_r be a face of some polytope K and x* ∈ F_r. Then span(F_r − x*) ∩ (K − x*) = F_r − x*.

Proof. The inclusion (⊇) is obvious. For (⊆), suppose to the contrary that there is some x ∈ span(F_r − x*) ∩ (K − x*) with x ∉ F_r − x*. Now consider any y ∈ ri(F_r − x*). By convexity we have that λx + (1 − λ)y ∈ span(F_r − x*) ∩ (K − x*) for every λ ∈ [0, 1]. Since y ∈ ri(F_r − x*), for some λ ∈ (0, 1) we must have that λx + (1 − λ)y ∈ F_r − x*. Therefore, by the definition of a face, F_r − x* contains x, a contradiction.

Lemma 4.4. Let F_r* and F_r′ be adjacent facets of a polytope F_r and let x* ∈ F_r* ∩ F_r′. If v ∈ N_{F_r*}(x*) and v ∈ N_{F_r′}(x*), then v ∈ N_{F_r}(x*).
Lemma 4.5. Suppose F_r′ is a facet of F_r. Then for every x* ∈ F_r′ there is a unit vector n_r′(x*) ∈ span(F_r − x*), called the inner-normal of F_r′ at x* on F_r, such that

n_r′(x*) · u = 0, for every u ∈ span(F_r′ − x*), (4.4)

v = u + k n_r′(x*), for some u ∈ span(F_r′ − x*), k ∈ R, for every v ∈ span(F_r − x*), (4.5)

with k ≥ 0 whenever v ∈ F_r − x*. If x* ∈ ri(F_r′), then n_r′(x*) is a feasible direction:

x* + δ n_r′(x*) ∈ F_r for all sufficiently small δ > 0. (4.6)

Proof. Take any orthonormal basis of span(F_r′ − x*) and extend it to an orthonormal basis of span(F_r − x*) by the addition of a single vector, call it w.
Then, from the last paragraph, we know we can write v_1 = u_1 + k_1 w and v_2 = u_2 + k_2 w, with u_1, u_2 ∈ span(F_r′ − x*) and k_1, k_2 ∈ R. If k_1 = 0 or k_2 = 0, this would contradict Lemma 4.3. Assume that k_1 < 0 < k_2. Then the segment connecting v_1 and v_2 intersects F_r′ − x* at an interior point, but its endpoints are not in F_r′ − x*, contradicting the fact that F_r′ is a face. Therefore k_1 and k_2 must have the same sign, and so by choosing n_r′(x*) = w or n_r′(x*) = −w as appropriate, we get that (4.4) and (4.5) hold.
Lemma 4.6. Suppose F_r is a polytope and F_r′ is a proper face of F_r. Then for every V_i ∈ r′ there exists V_j ∈ r \ r′ such that V_i V_j.

Proof. Fix V_i ∈ r′ and x* ∈ F_r′. If there doesn't exist V_j ∈ r \ r′ with the desired property, then

span{V_j − V_i : V_j ∈ r, V_i V_j} = span{V_j − V_i : V_j ∈ r′, V_i V_j},

and therefore, by Theorem 2.1, span(F_r − x*) = span(F_r′ − x*), a contradiction since dim(F_r′) < dim(F_r).
Equipped with the above lemmas, we can now prove Theorems 4.1 and 4.2.
Proof of Theorem 4.1.
Suppose x ∈ F_h for some face F_h of K, and consider any pair of adjacent vertices V_i, V_j whose connecting edge is not contained in F_h. Then h_ij ⊄ h, so there is some g ∈ h_ij with g(x) = 0. Therefore g_ij(x) = 0, and the corresponding term of (4.3) vanishes. The remaining terms of (4.3) are all parallel to edges contained in F_h, so ẋ ∈ span(F_h − x). Taking this together with Theorem 2.2 ensures that we remain on F_h, and therefore on every face to which x belongs.
Proof of Theorem 4.2. If K is a singleton, then the result is trivial, therefore assume dim(K) ≥ 1. Suppose x* is not a rest point of (4.2), but is a stable rest point of (4.3). Then −F(x*) ∉ N_K(x*). Let F_r be the lowest dimensional face containing x* such that −F(x*) ∉ N_{F_r}(x*). We have that

ẋ* = Σ g_ij(x*)(−F(x*) · (V_i − V_j))(V_i − V_j),

where the sum is over all i and j such that i < j, V_i V_j and V_i, V_j ∈ r. Since x* is a rest point of (4.3), we must have that ẋ* = 0. If x* ∈ ri(F_r), then taking the inner product of this sum with −F(x*), and noting that every g_ij(x*) > 0 on ri(F_r), gives −F(x*) · (V_i − V_j) = 0 for every such i and j. However, since for any fixed V_i we have that F_r − x* ⊆ span{V_i − V_j : V_j ∈ r, V_i V_j} (Theorem 2.1), this contradicts the fact that −F(x*) ∉ N_{F_r}(x*). Therefore x* ∉ ri(F_r), hence there must exist some facet F_r* of F_r such that x* ∈ F_r* (Theorem 2.2). Now suppose x* ∉ ri(F_r*). Then we can find another facet F_r′ adjacent to F_r* such that x* ∈ F_r′ (Lemma 2.4). We must have that −F(x*) ∈ N_{F_r*}(x*) and −F(x*) ∈ N_{F_r′}(x*) (otherwise we have found a lower dimensional face without this property). However, by Lemma 4.4 this implies that −F(x*) ∈ N_{F_r}(x*), a contradiction. Therefore x* ∈ ri(F_r*), and x* belongs to only one facet of F_r. Let n_r*(x*) be the unit inner-normal of F_r* on F_r at x* (obtained from Lemma 4.5).
Since −F(x*) ∈ N_{F_r*}(x*) but −F(x*) ∉ N_{F_r}(x*), the quantity −F(x*) · n_r*(x*) is ≥ 0; in fact it must be strictly positive, for otherwise −F(x*) would lie in N_{F_r}(x*). Now fix V_p ∈ r*, V_q ∈ r \ r* such that V_p V_q (such a pair exists by Lemma 4.6). By Lemma 4.3 we have that n_r*(x*) · (V_p − V_q) ≠ 0. This implies that (−F(x*) · n_r*(x*))(n_r*(x*) · (V_p − V_q))^2 > 0. Then, by continuity of F, we can find an open ball B_ε0(x*) in F_r such that, for some γ > 0,

(−F(x) · n_r*(x*))(n_r*(x*) · (V_p − V_q))^2 > γ for every x ∈ B_ε0(x*).

We can further constrain ε_0 so that B_ε0(x*) ∩ F_r* ⊆ ri(F_r*) and B_ε0(x*) contains no facets of F_r aside from F_r* (we can do this second part because x* belongs to only one facet). Therefore within this neighbourhood we have

d((x − x*) · n_r*(x*))/dt = Σ g_ij(x)(−F(x) · (V_i − V_j))(n_r*(x*) · (V_i − V_j)) > 0 on ri(F_r),

where the sums are over all i and j such that i < j, V_i V_j and V_i, V_j ∈ r. Consider any ε, δ such that 0 < δ ≤ ε < ε_0. We know that n_r*(x*) is a feasible direction by Lemma 4.5, therefore let y(0) = x* + k_0 n_r*(x*) be the initial point of a solution to the IVP, where k_0 > 0 is sufficiently small that ‖y(0) − x*‖ < δ and the set U = {x ∈ B_ε(x*) ∩ F_r : (x − x*) · n_r*(x*) ≥ k_0} contains no facets of F_r. Therefore U ⊆ ri(F_r) (Theorem 2.2). Thus we can find λ > 0 such that g_pq(x) > λ for all x ∈ U (continuity and Lemma 2.5). Since d((x − x*) · n_r*(x*))/dt > 0 on ri(F_r) and (4.3) is face invariant (Theorem 4.1), we know that y(t) is also U invariant. Therefore (y(t) − x*) · n_r*(x*) increases at a rate bounded away from zero while y(t) ∈ U, so ‖y(t) − x*‖ ≥ ε for some t > 0 even though ‖y(0) − x*‖ < δ, contradicting Lyapunov stability.
Note that the converse to Theorem 4.2 is not true (see [12], exercise 7.2.2).
With this theorem we show that stable rest points of our extended Replicator Equation are rest points of the projection dynamic. Since it is known that rest points of the PDS are Nash Equilibria, this, taken together with Theorem 4.2, allows us to ultimately say that stable rest points are Nash Equilibria, and hence a key part of the Folk Theorem applies to our extension of the Replicator Equation. While Lahkar and Sandholm [18,14] could rely on an already established link between their games and the Replicator Equation, we needed to prove anew that this link persists for our extension. Our hope is that this result paves the way for the type of analyses conducted by Lahkar and Sandholm [18] to be extended into this more general setting.

Examples and Extensions
In this section we will consider examples that illustrate how (4.3) achieves three basic purposes. First, it recovers the standard Replicator Equation, showing that what we have is in fact a generalization of that concept. Second, it allows us to incorporate the shared constraints of Generalized Nash Games into elementary Evolutionary Games. Finally, it enables us to express a given GNSC as a classical dynamical system, regardless of whether that GNSC corresponds to any particular Evolutionary Game.
We should recall that our method for finding the extended Replicator Equation associated to a game, and vice versa, consists of several steps. For clarity, and since they are used to solve the examples that follow, we enumerate these steps. First, to find the extended Replicator Equation associated to a Generalized Nash Game we must (call this Method 1):
(1) Find the Variational Inequality associated to the Generalized Nash Game;
(2) Find the Projected Dynamical System associated to this Variational Inequality;
(3) Derive the extended Replicator Equation (4.3) from this Projected Dynamical System.
To take an Evolutionary Game and find the extended Replicator Equation we must (call this Method 2):
(1) Express the Evolutionary Game as a Generalized Nash Game (as at the start of Section 4);
(2) Find the Variational Inequality associated to the Generalized Nash Game;
(3) Find the Projected Dynamical System associated to this Variational Inequality;
(4) Derive the extended Replicator Equation (4.3) from this Projected Dynamical System.
We can of course skip the Variational Inequalities and move straight to a Projected Dynamical System; however, we include this step for clarity of analysis and to better mirror our exposition in the previous sections. We will now apply these steps to three examples.
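The steps of Method 2 can be sketched computationally for rock-paper-scissors (our illustration; we take F(x) = −Ax, as in the single-population setup of Section 3):

```python
import numpy as np

# Step 1: express RPS as a (Generalized) Nash Game on K = Delta_3.
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
F = lambda x: -A @ x  # assumed vector field for the single-population game

# Step 2: the Nash Equilibrium condition becomes the VI
#   <F(x*), y - x*> >= 0 for every y in Delta_3.
# Since y -> <F(x*), y> is affine, checking the vertices suffices.
def solves_vi(x, tol=1e-10):
    return all(F(x) @ (e - x) >= -tol for e in np.eye(3))

x_star = np.array([1/3, 1/3, 1/3])
print(solves_vi(x_star))  # True: uniform play solves the VI

# Steps 3-4: x* is then a rest point of the associated PDS, and of the
# extended Replicator Equation; here we check the replicator field:
print(np.allclose(x_star * (A @ x_star - x_star @ A @ x_star), 0.0))  # True
```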
Example 5.1. The single-species evolutionary game of Section 3 can be extended to an arbitrary number of species [7]. In the simplest case, we have two species, call them Species A and Species B, whose members can choose between n and m possible pure strategies {e_1, . . . , e_n} and {f_1, . . . , f_m} respectively. In this case we have two associated matrices A ∈ R^{n×(m+n)} and B ∈ R^{m×(m+n)}, where e_i^T A(p, q) represents π_1(e_i, (p, q)), the payoff of playing pure strategy e_i in a population where Species A adopts mixed strategy p and Species B adopts mixed strategy q; π_2(f_j, (p, q)) has the analogous meaning for Species B. The strategies of Species A and Species B must belong to the simplices ∆_n and ∆_m respectively. A Nash Equilibrium is any pair of strategies x ∈ ∆_n and y ∈ ∆_m such that [6]

π_1(p, (x, y)) ≤ π_1(x, (x, y)) for every p ∈ ∆_n,
π_2(q, (x, y)) ≤ π_2(y, (x, y)) for every q ∈ ∆_m.
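A standard special case of this two-species setup is the bimatrix game in which each species' payoff depends only on the other species' strategy. The following sketch (ours, with this assumed simplification: π_1(e_i, (x, y)) = (Ay)_i and π_2(f_j, (x, y)) = (Bx)_j) evaluates the usual two-population replicator field and checks that uniform play in matching pennies is a rest point:

```python
import numpy as np

def bimatrix_rhs(x, y, A, B):
    """Two-population replicator field:
    xdot_i = x_i((Ay)_i - x.Ay), ydot_j = y_j((Bx)_j - y.Bx)."""
    Ay, Bx = A @ y, B @ x
    return x * (Ay - x @ Ay), y * (Bx - y @ Bx)

# Matching pennies: B = -A^T, unique equilibrium at uniform play.
A = np.array([[1., -1.], [-1., 1.]])
B = -A.T

x = y = np.array([0.5, 0.5])
xdot, ydot = bimatrix_rhs(x, y, A, B)
print(np.allclose(xdot, 0.0) and np.allclose(ydot, 0.0))  # True
```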

Conclusions and Future Work
In this paper we generalize the Replicator Equation so that it may be applied to any GNSC defined on a polytope. Theorem 4.2 relates the stable rest points of this extended Replicator Equation to the rest points of a Projected Differential Equation. This connection allows us to expand certain Evolutionary Games by introducing shared inter-species constraints via the GNSC. Currently there are many different variations of the standard two-player population game, for example games where the payoff is not a matrix, or multiplayer games (see [7] for an overview), each of which has its own version of the Folk Theorem of Evolutionary Game Theory. If further work is done to adapt parts 2 and 3 of the Folk Theorem to our model, then we would have a complete general version of the Folk Theorem of which these could all be considered special cases.
In the literature, the Replicator Equation has already been unified with other models such as the Price equation and the Generalized Lotka-Volterra equation [14]. With this result we make yet another such connection; however, it should be noted that Generalized Nash Games are extremely broad and form a superset of all classical Nash Games. We also point out that our connection is reciprocal: we do not just place the Replicator Equation under the umbrella of the Projected Dynamical System; the correspondence also gives us a new way of looking at GNSCs.
This new perspective is an alternative to the Projected Dynamical System, in that the Replicator Equation frames these problems as classical dynamical systems (with right-hand sides of class C^1). Further work could investigate whether our results hold on an arbitrary convex set, not just a bounded convex polytope. We are optimistic that such a generalization is achievable; it should be noted that the Projected Dynamical System itself was originally shown to work only on polyhedral constraints in [8], before being extended to arbitrary convex sets 11 years later in [5].
Another possible direction would be adapting our method to the much broader class of Generalized Nash Games without shared constraints. This would perhaps be the most ambitious way to continue since there are still large theoretical obstacles that need to be overcome before we can solve these types of games in general. More specifically, we would need to find a way to determine the Replicator Dynamics of a quasivariational inequality, which is a much less well understood mathematical object.