Multi-Agent Dynamical Systems (3/3)
We slightly switch gears in this third blog post of the multi-agent series. Letting go of game-theoretic concepts, we instead discuss some training paradigms for multi-agent reinforcement learning. We will cover the main limitations of centralised and independent learning, and land at centralised training with decentralised execution (CTDE), arguably the most established framework for training autonomous agents in decentralised multiplayer games.
To simplify matters and avoid game-theoretic subtleties, we will restrict our attention to DecPOMDPs, that is, partially observable stochastic games with a unique, shared reward function: $$ \mathcal{M} = (n, \mathcal{S}, \mathcal{A}, \Omega, p, q, r)\;. $$
As in the POMDP blog post, and in order to reduce clutter, we will assume that the emission probability does not depend on actions but only on the current state: $ q(\bm{\omega} \vert s, \bm{a}) = q(\bm{\omega} \vert s) \; . $ Finally, we will assume a discounted, infinite-horizon objective: a joint policy $\bm\pi$ is evaluated according to $$ v_\lambda^{\bm\pi} = \mathbb{E}^{\bm\pi} \left[\sum_{t\geq 1} \lambda^{t-1} r(s_t, \bm{a}_t)\right]\; . $$
Because we place ourselves in a reinforcement learning set-up, we now consider that the transition kernel $p$, the emission kernel $q$ and the reward function $r$ are all unknown.
Independent Training
Independent training (IT) is a rather naive (yet sometimes surprisingly effective) approach to solving $\mathcal{M}$. It simply consists of training each agent independently of the others, using some pre-defined single-agent training routine. From a given agent’s perspective, this ultimately means fusing the other agents into the environment itself: each of the $n$ agents is facing a different POMDP. We denote by $\mathcal{M}^i = (\mathcal{S}, \mathcal{A}_i, \Omega_i, p_i, q_i, r_i)$ the POMDP perceived by agent $i$, where: $$ \begin{aligned} p_i(s'\vert s, a^i) &= \sum_{\bm{a}^{-i}\in\mathcal{A}^{-i}} p(s'\vert s, \bm{a})\bm\pi_{-i}(\bm{a}^{-i}\vert s) \; , \\ q_i(\omega^i \vert s) &= \sum_{\bm{\omega}^{-i}\in\Omega^{-i}} q(\bm{\omega}\vert s)\; , \\ r_i(s, a^i) &= \sum_{\bm{a}^{-i}\in\mathcal{A}^{-i}} r(s, \bm{a})\bm\pi_{-i}(\bm{a}^{-i}\vert s) \; , \end{aligned} $$ with $ \bm{a}=(a^i, \bm{a}^{-i})$ and $ \bm{\omega}=(\omega^i, \bm{\omega}^{-i})$.
One can notice that $\mathcal{M}^i$ depends on $\bm\pi_{-i}$, the joint policy of all agents but $i$. This policy is not static: as the other agents learn and update their respective strategies, the dynamics of $\mathcal{M}^i$ naturally evolve. Therefore, when trained independently, each agent has to solve a non-stationary POMDP. From an RL perspective, this is far from good news.
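To make this concrete, here is a minimal numpy sketch (a hypothetical two-agent, two-state, two-action example with made-up numbers, and a state-conditioned policy for the other agent) of how agent 1's perceived transition kernel $p_1$ shifts whenever the other agent's policy changes:

```python
import numpy as np

# A toy joint transition kernel p[s, a1, a2, s2] = P(s2 | s, (a1, a2)),
# filled with arbitrary (random) probabilities for illustration only.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(2), size=(2, 2, 2))  # shape (s, a1, a2, s2)

def perceived_kernel(p, pi_other):
    """Kernel p_1(s' | s, a^1) seen by agent 1 when the other agent
    plays the state-conditioned policy pi_other[s, a2]."""
    # Marginalise the other agent's action under its current policy.
    return np.einsum("sabt,sb->sat", p, pi_other)

pi_other_early = np.array([[0.9, 0.1], [0.9, 0.1]])  # other agent, early in training
pi_other_late = np.array([[0.1, 0.9], [0.2, 0.8]])   # other agent, after some learning

# The POMDP faced by agent 1 silently changes as the other agent learns:
print(perceived_kernel(p, pi_other_early)[0, 0])  # P(. | s=0, a^1=0), early
print(perceived_kernel(p, pi_other_late)[0, 0])   # P(. | s=0, a^1=0), later
```

Every update of $\bm\pi_{-i}$ rewrites the transition (and reward) model that agent $i$ is implicitly trying to solve; this is exactly the non-stationarity issue mentioned above.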
Despite its simplicity, IT has been shown to work relatively well empirically, even in moderately hard scenarios. Of course, this approach’s main weakness is its blindness to the non-stationarity each agent must face. For this reason, training can be quite unstable, if not downright chaotic. From a practical perspective, the IT framework is one of the simplest to implement: simply pick your favorite RL algorithm and train each agent as you would in a typical single-agent RL problem.
Example: Independent DQN
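As a rough sketch of what independent DQN could look like (assuming a PyTorch set-up; network sizes, hyper-parameters and buffer handling are illustrative choices, not a canonical implementation), each agent owns its own Q-network and replay buffer and never sees anything beyond its private observation $\omega^i$ and the shared reward:

```python
import random
from collections import deque

import torch
import torch.nn as nn

class IndependentDQNAgent:
    """One DQN learner per agent: the other agents are simply part of its environment."""

    def __init__(self, obs_dim, n_actions, gamma=0.99, lr=1e-3, buffer_size=50_000):
        self.q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
        self.target_q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.n_actions = gamma, n_actions

    def act(self, obs, eps=0.1):
        # Epsilon-greedy on the agent's *private* observation only.
        if random.random() < eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.as_tensor(obs, dtype=torch.float32)).argmax())

    def store(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def update(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        obs, act, rew, nxt, done = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32),
            zip(*random.sample(self.buffer, batch_size)),
        )
        # Standard DQN regression towards the bootstrapped target.
        q_sa = self.q(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rew + self.gamma * (1.0 - done) * self.target_q(nxt).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # To be called periodically (e.g. every few hundred updates).
        self.target_q.load_state_dict(self.q.state_dict())
```

Training then amounts to instantiating one such learner per agent, e.g. `agents = [IndependentDQNAgent(obs_dim, n_actions) for _ in range(n)]`, and letting each `agents[i]` act, store and update on its own stream of transitions.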
Centralised Training
Centralised Training (CT) lies at the other extreme of the spectrum. It consists of letting go of the multi-agent nature of $\mathcal{M}$ and adopting a centralised, single-agent solver. Formally, CT solves $\mathcal{M}$ as a single POMDP by directly learning the joint policy $\bm\pi : \Omega \mapsto \Delta(\mathcal{A})$. This removes the non-stationarity issues of IT and allows for seamless coordination between agents.
Of course, CT is often downright infeasible outside of a few toy examples. The combinatorial nature of the action space $\mathcal{A} = \mathcal{A}_1 \times\ldots\times \mathcal{A}_n$ tears down any hope of efficient learning whenever more than a handful of agents are involved: with $n$ agents, each having $k$ actions, the joint action space contains $k^n$ elements. Also, let’s not forget that in most realistic multi-agent settings, centralisation simply is not an option at execution time. For instance, networking constraints can prevent agents from sending their observations to a common server once we let go of our training simulator and deploy the policies in the “real world”.
Centralised Training with Decentralised Execution
Centralised Training with Decentralised Execution (CTDE) attempts to get the best of both worlds. The underpinning idea is quite simple: we maintain decentralised policies, but allow them to centralise information during training only. This gives rise to more stable learning dynamics by mitigating non-stationarity. Let’s emphasise that when acting, each policy still relies only on its own private observation: it is only at training time that it can broadcast information to a central instance. Below, we give some concrete examples of algorithmic approaches relying on the CTDE framework.
Actor-Critic CTDE
The actor-critic architecture adapts easily to the CTDE framework. In short, the main idea is to leave the “actor” part decentralised while the “critic” part is centralised. Concretely, this typically means maintaining decentralised policies: $ \pi_i : \Omega_i \longmapsto \Delta(\mathcal{A}_i) \;, $ while capturing, through the critic, the value of the joint policy based on the joint observations and actions: $$ \begin{aligned} q^{\bm\pi} : \Omega \times \mathcal{A} &\longrightarrow \mathbb{R} \;, \\ \bm\omega, \bm{a} &\longmapsto q(\bm\omega, \bm a) \approx q_\lambda^{\bm{\pi}}(s, \bm a) \; , \end{aligned} $$ where $q_\lambda^{\bm{\pi}}$ is the joint state-action value function. This joint critic is used only at training time, to guide gradient-based policy optimisation (see the example below).
Of course, there exist countless variants of this approach (e.g., one could decide to learn a state-dependent value function $v^{\bm\pi}$ to use as a joint baseline for advantage estimation). Virtually any single-agent actor-critic algorithm can be mapped to a multi-agent variant.
Example: Vanilla MA-PG
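Below is a minimal PyTorch sketch of such a vanilla multi-agent policy gradient: decentralised softmax actors, one centralised critic over the joint observation-action pair, and a plain TD(0) target. Class and argument names (including how the bootstrap value `next_q_value` is provided) are illustrative assumptions rather than a faithful reproduction of any particular algorithm.

```python
import torch
import torch.nn as nn

class CentralisedCriticPG:
    """Decentralised softmax actors + one centralised critic over (joint obs, joint action)."""

    def __init__(self, n_agents, obs_dim, n_actions, gamma=0.99, lr=1e-3):
        self.actors = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
            for _ in range(n_agents)
        )
        # The critic sees every observation and every (one-hot) action: training-time only.
        self.critic = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + n_actions), 128), nn.ReLU(), nn.Linear(128, 1)
        )
        params = list(self.actors.parameters()) + list(self.critic.parameters())
        self.opt = torch.optim.Adam(params, lr=lr)
        self.n_actions, self.gamma = n_actions, gamma

    def act(self, obs_list):
        # Decentralised execution: agent i samples from pi_i(. | omega^i) only.
        dists = [torch.distributions.Categorical(logits=actor(o))
                 for actor, o in zip(self.actors, obs_list)]
        actions = [d.sample() for d in dists]
        log_probs = torch.stack([d.log_prob(a) for d, a in zip(dists, actions)])
        return actions, log_probs

    def update(self, obs_list, actions, log_probs, reward, next_q_value):
        # Centralised training: the critic evaluates the *joint* observation-action pair.
        # (log_probs must come from the `act` call of this very step, so the graph is intact.)
        one_hots = [nn.functional.one_hot(a, self.n_actions).float() for a in actions]
        q_joint = self.critic(torch.cat(obs_list + one_hots)).squeeze()

        critic_loss = (q_joint - (reward + self.gamma * next_q_value)) ** 2
        # Each decentralised actor is pushed in the direction suggested by the joint critic.
        actor_loss = -(log_probs.sum() * q_joint.detach())

        self.opt.zero_grad()
        (critic_loss + actor_loss).backward()
        self.opt.step()
```

At execution time only `act` is needed, and it touches nothing but each agent's own observation; `update`, which gathers all observations and actions, only ever runs during (centralised) training.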
Value-Based CTDE
Value-based methods also adapt to the CTDE framework. The leading idea is to train a joint value function: $$ q(\bm\omega, \bm a) \approx q_\lambda^\star(s, \bm a) $$ and then decompose it into agent-centric value functions $q_1(\cdot, \cdot), \ldots, q_n(\cdot, \cdot)$, where each $q_i : \,\Omega_i \times \mathcal{A}_i \longmapsto \mathbb{R}$ acts as the marginal value function for agent $i$. This is typically done by directly modelling each agent-centric value function $q_i = q_{\theta_i}$ and assuming that some combination of said values yields the joint value: $$ q(\bm\omega, \bm a) \approx f\Big(q_1(\omega^1, a^1), \ldots, q_n(\omega^n, a^n)\Big) \; . $$ The joint value function is learned during training, while each marginal $q_i$ is used for decentralised decision-making at execution time.
The aggregation function $f$ is either fixed or learned, but it should not be arbitrary. Ideally, we need some consistency between two sets of maximisers: those of the joint value function, and those of the marginal, decentralised value functions. This allows the decentralised agents to stay consistent with the joint plan made offline (during training) by acting greedily w.r.t. their marginal value functions. In practice, one usually restricts $f$ to be monotonically increasing in each of its arguments: this is enough to enforce that the joint greedy action matches the collection of marginal greedy actions. Formally, $$ \argmax_{\bm a \in \mathcal{A} } q(\bm \omega, \bm a) = \left(\argmax_{a^1\in\mathcal{A}_1} q_1(\omega^1, a^1), \ldots, \argmax_{a^n\in\mathcal{A}_n} q_n(\omega^n, a^n)\right)^\top \;. $$ Under this functional constraint, the joint value function can be trained like any single-agent one, e.g., in a DQN-like fashion.
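As a concrete (and deliberately simple) instance, here is a sketch of the additive choice $f(q_1, \ldots, q_n) = \sum_i q_i$, in the spirit of value-decomposition networks, assuming PyTorch; the joint output would be trained with an ordinary DQN-style TD loss, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class AdditiveMixer(nn.Module):
    """Joint value as a monotone (here: additive) mix of per-agent utilities."""

    def __init__(self, n_agents, obs_dim, n_actions):
        super().__init__()
        # One utility network q_i(omega^i, .) per agent; this is all agent i keeps at execution time.
        self.agent_qs = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
            for _ in range(n_agents)
        )

    def forward(self, obs, actions):
        # obs: list of per-agent observations; actions: list of chosen (integer) actions.
        utilities = torch.stack([q(o)[a] for q, o, a in zip(self.agent_qs, obs, actions)])
        # The sum is increasing in every argument, so maximising the joint value
        # amounts to letting every agent maximise its own utility.
        return utilities.sum()

    def greedy_actions(self, obs):
        # Decentralised execution: each agent argmaxes its own utility only.
        return [int(q(o).argmax()) for q, o in zip(self.agent_qs, obs)]
```

Since the mix is increasing in every argument, the greedy joint action is exactly the vector of per-agent greedy actions, so each agent can act at execution time with only its own utility network. Richer, learned monotone mixers (e.g. with non-negative, state-dependent mixing weights) follow the same principle.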