

Examples of permutation-invariant reinforcement learning agents
In this work, we investigate the properties of RL agents that treat their observations as an arbitrarily ordered, variable-length list of sensory inputs. Here, we partition the visual input from CarRacing (left) and Atari Pong (right) into a 2D grid of small patches, and shuffle their ordering. Each sensory neuron in the system receives a stream of visual input at a particular permuted patch location, and through coordination, the system must complete the task at hand, even if the visual ordering is randomly permuted again several times during an episode.

The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning

Abstract

In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired the development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct, but identical neural networks, each with no fixed relationship with one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation invariant systems also display useful robustness and generalization properties that are broadly applicable.


Introduction

Sensory substitution refers to the brain's ability to use one sensory modality (e.g., touch) to supply environmental information normally gathered by another sense (e.g., vision). Numerous studies have demonstrated that humans can adapt to changes in sensory inputs, even when they are fed into the wrong channels. But difficult adaptations--such as learning to “see” by interpreting visual information emitted from a grid of electrodes placed on one's tongue, or learning to ride a “backwards” bicycle--require months of training to attain mastery. Can we do better, and create artificial systems that can rapidly adapt to sensory substitutions, without the need to be retrained?

Interactive Demo

Permutation Invariant Cart-Pole Swing Up Demo
A permutation invariant network performing CartPoleSwingUpHarder. Shuffle the order of the 5 observations at any time, and see how the agent adapts to the new ordering of the observations.

Modern deep learning systems are generally unable to adapt to a sudden reordering of sensory inputs, unless the model is retrained, or if the user manually corrects the ordering of the inputs for the model. However, techniques from continual meta-learning, such as adaptive weights, Hebbian-learning, and model-based approaches can help the model adapt to such changes, and remain a promising active area of research.

In this work, we investigate agents that are explicitly designed to deal with sudden random reordering of their sensory inputs while performing a task. Motivated by recent developments in self-organizing neural networks related to cellular automata, in our experiments, we feed each sensory input (which could be an individual state from a continuous control environment, or a patch of pixels from a visual environment) into an individual neural network module that integrates information from only this particular sensory input channel over time. While receiving information locally, each of these individual sensory neural network modules also continually broadcasts an output message. Inspired by the Set Transformer architecture, we use an attention mechanism to combine these messages into a global latent code, which is then converted into the agent's action space. The attention mechanism can be viewed as a form of adaptive weights of a neural network, and in this context, allows for an arbitrary number of sensory inputs that can be processed in any random order.

In our experiments, we find that each individual sensory neural network module, despite receiving only localized information, can still collectively produce a globally coherent policy, and that such a system can be trained to perform tasks in several popular reinforcement learning (RL) environments. Furthermore, our system can utilize a varying number of sensory input channels in any randomly permuted order, even when the order is shuffled again several times during an episode.

Our Pong agent continues to work even when it is given only a small subset (30%) of the screen, in a shuffled order. The screen is reshuffled multiple times during the game. For comparison, the actual game is shown on the left.

Permutation invariant systems have several advantages over traditional fixed-input systems. We find that encouraging a system to learn a coherent representation of a permutation invariant observation space leads to policies that are more robust and generalize better to unseen situations. We show that, without additional training, our system continues to function even when we inject additional input channels containing noise or redundant information. In visual environments, we show that our system can be trained to perform a task even if it is given only a small fraction of randomly chosen patches from the screen, and at test time, if given more patches, the system can take advantage of the additional information to perform better. We also demonstrate that our system can generalize to visual environments with novel background images, despite training on a single fixed background. Lastly, to make training more practical, we propose a behavioral cloning scheme to convert policies trained with existing methods into a permutation invariant policy with desirable properties.

We find that a side-effect of permutation invariant RL agents is that without any additional training or fine-tuning, they also tend to work even when the original training background is replaced with various images.

Method

Background

Our goal is to devise an agent that is permutation invariant (PI): its actions are unaffected by permutations of its sensory inputs. While it is possible to acquire a quasi-PI agent by training with randomly shuffled observations and hoping the agent's policy network has enough capacity to memorize all the patterns, we aim for a design that achieves true PI even if the agent is trained with fixed-order observations. Mathematically, we are looking for a non-trivial function f(x): \mathcal{R}^n \mapsto \mathcal{R}^m such that f(x[s]) = f(x) for any x \in \mathcal{R}^n, where s is any permutation of the indices \{1, \cdots, n\}. A different but closely related concept is permutation equivariance (PE), which can be described by a function h(x): \mathcal{R}^n \mapsto \mathcal{R}^n such that h(x[s]) = h(x)[s]. Unlike PI, the dimensions of the input and the output must be equal in PE.
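As a toy illustration of the two definitions (a minimal sketch, not from the paper): the mean of a vector is PI, while an element-wise map is PE.

```python
import numpy as np

# Toy examples of the two properties: the mean is permutation invariant (PI),
# while an element-wise map is permutation equivariant (PE).
x = np.array([3.0, 1.0, 2.0])
s = np.array([2, 0, 1])                   # a permutation of the indices

f = lambda x: np.array([x.mean()])        # f: R^n -> R^m, PI
h = lambda x: 2.0 * x                     # h: R^n -> R^n, PE

assert np.allclose(f(x[s]), f(x))         # f(x[s]) == f(x)
assert np.allclose(h(x[s]), h(x)[s])      # h(x[s]) == h(x)[s]
```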

Self-attention can be PE. In its simplest form, self-attention is described as y = \sigma(QK^{\top})V, where Q, K \in \mathcal{R}^{n \times d_q}, V \in \mathcal{R}^{n \times d_v} are the Query, Key and Value matrices and \sigma(\cdot) is a non-linear function. In most scenarios, Q, K, V are functions of the input x \in \mathcal{R}^n (e.g. linear transformations), so permuting x is equivalent to permuting the rows in Q, K, V, and based on this definition it is straightforward to verify the PE property. Set Transformer cleverly replaced Q with a set of learnable seed vectors, so it is no longer a function of the input x, thus enabling the output to become PI.

Here, we provide a simple, non-rigorous example demonstrating the permutation invariant property of the self-attention mechanism, to give some intuition to readers who may not be familiar with self-attention. For a detailed treatment, please refer to the Set Transformer paper.

As mentioned earlier, in its simplest form, self-attention is described as:

    y = \sigma(QK^{\top})V

where Q \in \mathcal{R}^{N_q \times d_q}, K \in \mathcal{R}^{N \times d_q}, V \in \mathcal{R}^{N \times d_v} are the Query, Key and Value matrices and \sigma(\cdot) is a non-linear function. In this work, Q is a fixed matrix, and K, V are functions of the input X \in \mathcal{R}^{N \times d_{in}}, where N is the number of observation components (equivalent to the number of sensory neurons) and d_{in} is the dimension of each component. In most settings, K = X W_k, V = X W_v are linear transformations, thus permuting X is equivalent to permuting the rows in K, V.

We would like to show that the output y is the same regardless of the ordering of the rows of K, V. For simplicity, suppose N=3, N_q=2, d_q=d_v=1, so that Q \in \mathcal{R}^{2 \times 1}, K \in \mathcal{R}^{3 \times 1}, V \in \mathcal{R}^{3 \times 1}:
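Written out explicitly (a sketch assuming \sigma is applied element-wise, as noted below; Q_i, K_b, V_b denote the scalar entries):

    y_1 = \sigma(Q_1 K_1) V_1 + \sigma(Q_1 K_2) V_2 + \sigma(Q_1 K_3) V_3, \quad y_2 = \sigma(Q_2 K_1) V_1 + \sigma(Q_2 K_2) V_2 + \sigma(Q_2 K_3) V_3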

The output y \in \mathcal{R}^{2 \times 1} remains the same when the rows of K, V are permuted from [1, 2, 3] to [3, 1, 2]:
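With the permuted rows K' = [K_3, K_1, K_2]^{\top} and V' = [V_3, V_1, V_2]^{\top}, the same element-wise sketch gives:

    y_1 = \sigma(Q_1 K_3) V_3 + \sigma(Q_1 K_1) V_1 + \sigma(Q_1 K_2) V_2, \quad y_2 = \sigma(Q_2 K_3) V_3 + \sigma(Q_2 K_1) V_1 + \sigma(Q_2 K_2) V_2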

The same terms appear in both expansions, showing that the results are indeed identical. In general, we have y_{ij} = \sum_{b=1}^{N} \sigma [ \sum_{a=1}^{d_q} Q_{ia} K_{ba} ] V_{bj}. Permuting the input is equivalent to permuting the indices b (i.e. the rows of K and V), which only affects the order of the outer summation and does not affect y_{ij}, because summation is a permutation invariant operation. Notice that in the above example and in this argument we have assumed that \sigma(\cdot) is an element-wise operation--a valid assumption since most activation functions satisfy this condition. (Applying softmax to each row only introduces a scalar multiplier per row, so the argument still holds.)
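The argument can also be checked numerically in a few lines (a minimal sketch; the matrices are random placeholders):

```python
import numpy as np

# Numerical check of the argument above: with a fixed Q, permuting the rows
# of K and V together leaves y = softmax(Q K^T) V unchanged.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 1))

def attend(K, V):
    scores = Q @ K.T                                                    # (2, 3)
    w = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)      # row-wise softmax
    return w @ V                                                        # (2, 1)

perm = rng.permutation(3)
assert np.allclose(attend(K, V), attend(K[perm], V[perm]))
```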

As we'll discuss next, this formulation lets us convert an observation signal from the RL environment into a permutation invariant representation y. We'll use this representation in place of the actual observation as the input that goes into the downstream policy network of an RL agent.

Sensory Neurons with Attention

To create permutation invariant (PI) agents, we propose to add an extra layer in front of the agent's policy network \pi, which accepts the current observation o_t and the previous action a_{t-1} as its inputs. We call this new layer AttentionNeuron, and the following figure gives an overview of our method:

Overview of Method
AttentionNeuron is a standalone layer, in which each sensory neuron only has access to a part of the unordered observations o_t. Together with the agent's previous action a_{t-1}, each neuron generates messages independently using the shared functions f_k(o_t[i], a_{t-1}) and f_v(o_t[i]). The attention mechanism summarizes the messages into a global latent code m_t.

The operations inside AttentionNeuron can be described by the following two equations:
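A sketch of the two operations, written out from the description that follows (the 1/\sqrt{d_q} scaling follows the standard scaled dot-product convention):

    K(o_t, a_{t-1}) = [f_k(o_t[1], a_{t-1}); \; \cdots; \; f_k(o_t[N], a_{t-1})], \quad V(o_t) = [f_v(o_t[1]); \; \cdots; \; f_v(o_t[N])] \quad \text{(Equation 1)}

    m_t = \sigma\left( \frac{(Q W_q) \, \left(K(o_t, a_{t-1}) W_k\right)^{\top}}{\sqrt{d_q}} \right) V(o_t) \, W_v \quad \text{(Equation 2)}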

Equation 1 shows how each of the N sensory neurons independently generates its messages f_k and f_v, which are functions shared across all sensory neurons. Equation 2 shows how the attention mechanism aggregates these messages. Note that although we could have absorbed the projection matrices W_q, W_k, W_v into Q, K, V, we keep them in the equations to show the formulation explicitly. Equation 2 is almost identical to the simple definition of self-attention mentioned earlier. Following the Set Transformer approach, we make our Q matrix a bank of fixed embeddings, rather than depend on the observation o_t.

Note that permuting the observations only affects the row order of K and V, and that applying the same permutation to the rows of both K and V still results in the same m_t, which is therefore PI. As long as we keep the number of rows in Q constant, a change in the input size affects only the number of rows in K and V and does not affect the output m_t. In other words, our agent can accept inputs of arbitrary length and output a fixed-sized m_t. Later, we apply this flexibility of input dimensions to RL agents.
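As a concrete illustration, here is a minimal PyTorch sketch of such a layer; the linear f_k and f_v, the random fixed Q, and all sizes are illustrative placeholders rather than the settings used in this work (those are given in the experiment sections below):

```python
import torch
import torch.nn as nn

class AttentionNeuronSketch(nn.Module):
    """Minimal sketch of a permutation invariant input layer.

    Every input channel is processed by the *same* f_k / f_v, and a fixed
    (non-learned) query bank Q makes the attention output PI.
    """
    def __init__(self, d_in, d_act, d_q=8, d_fk=8, d_fv=8, M=16):
        super().__init__()
        self.f_k = nn.Linear(d_in + d_act, d_fk)   # shared across sensory neurons
        self.f_v = nn.Linear(d_in, d_fv)           # shared across sensory neurons
        self.W_k = nn.Linear(d_fk, d_q, bias=False)
        self.W_v = nn.Linear(d_fv, d_fv, bias=False)
        # Fixed query bank: M rows -> output always has M rows, regardless of N.
        self.register_buffer("Q", torch.randn(M, d_q))

    def forward(self, obs, prev_action):
        # obs: (N, d_in) unordered sensory inputs; prev_action: (d_act,)
        N = obs.shape[0]
        a = prev_action.expand(N, -1)
        K = self.W_k(self.f_k(torch.cat([obs, a], dim=-1)))          # (N, d_q)
        V = self.W_v(self.f_v(obs))                                   # (N, d_fv)
        attn = torch.softmax(self.Q @ K.T / K.shape[-1] ** 0.5, dim=-1)  # (M, N)
        return attn @ V                                               # (M, d_fv), PI in obs order

# Permuting the rows of obs leaves the output unchanged:
layer = AttentionNeuronSketch(d_in=1, d_act=1)
obs, act = torch.randn(5, 1), torch.randn(1)
perm = torch.randperm(5)
assert torch.allclose(layer(obs, act), layer(obs[perm], act), atol=1e-5)
```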

For clarity, the following table summarizes the notations as well as the corresponding setups we used for the experiments:

Notation list
In this table, we also provide the dimensions used in our model for different RL environments, to give the reader a sense of the relative magnitudes involved in each part of the system.

Design Choices

It is worthwhile to discuss the design choices made here. Since the ordering of the input is arbitrary, each sensory neuron is required to interpret and identify its received signal. To achieve this, we want f_k(o_t[i], a_{t-1}) to have temporal memory. In practice, we find that both RNNs and feed-forward neural networks (FNN) with stacked observations work well, with FNNs being more practical for environments with high dimensional observations.

In addition to temporal memory, including previous actions is important for input identification. Although temporal memory allows the neurons to infer the input signals based on the characteristics of their temporal streams, this may not be sufficient. For example, when controlling a legged robot, most of the sensor readings are joint angles and velocities from the legs, which are not only bounded within nearly identical numerical ranges but also change in similar patterns. The inclusion of previous actions gives each sensory neuron a chance to infer the causal relationship between its input channel and the applied actions, which helps with input identification.

Finally, in Equation 2 we could have combined Q W_q \in \mathcal{R}^{M \times d_q} into a single learnable parameter matrix, but we separate them for two reasons. First, by factoring it into two matrices, we can reduce the number of learnable parameters. Second, we find that instead of making Q learnable, using the positional encoding proposed in the Transformer encourages the attention mechanism to generate distinct codes. Here we use the row indices in Q as the positions for encoding.
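For reference, a sketch of the standard sinusoidal positional encoding (we assume the usual Vaswani et al. formulation), used here as the fixed matrix Q with row indices as positions:

```python
import numpy as np

def positional_encoding(M, d):
    """Standard sinusoidal positional encoding; rows serve as a fixed query bank Q."""
    pos = np.arange(M)[:, None]                          # (M, 1) row indices as positions
    i = np.arange(d)[None, :]                            # (1, d) channel indices
    angle = pos / np.power(10000, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))   # (M, d) fixed matrix Q
```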


Experiments

We experiment on several different RL environments to study various properties of permutation invariant RL agents. Due to the nature of the underlying tasks, we will describe the different architectures of the policy networks used and discuss various training methods. However, the AttentionNeuron layers in all agents are similar, so we first describe the common setups. Hyper-parameters and other details for all experiments are summarized in the Appendix.

For non-vision continuous control tasks, the agent receives an observation vector o_t \in \mathcal{R}^{|O|} at time t. We assign N = |O| sensory neurons for these tasks, each of which sees one element from the vector, hence o_t[i] \in \mathcal{R}^1, i = 1, \cdots, |O|. We use an LSTM as our f_k(o_t[i], a_{t-1}) to generate Keys, the input size of which is 1 + |A| (2 for Cart-Pole and 9 for PyBullet Ant). A simple pass-through function f(x) = x serves as our f_v(o_t[i]), and \sigma(\cdot) is tanh. For simplicity, we find W_v = I works well for these tasks, so the learnable components are the LSTM, W_q and W_k.

For vision based tasks, we gray-scale and stack k=4 consecutive RGB frames from the environment, and thus our agent observes o_t \in \mathcal{R}^{H \times W \times k}. o_t is split into non-overlapping patches of size P=6 using a sliding window, so each sensory neuron observes o_t[i] \in \mathcal{R}^{6 \times 6 \times k}. Here, f_v(o_t[i]) flattens the data and returns it, hence V(o_t) returns a tensor of shape N \times d_{f_v} = N \times (6 \times 6 \times 4) = N \times 144. Due to the high dimensionality of vision tasks, we do not use RNNs for f_k, but instead use a simpler method to process each sensory input. f_k(o_t[i], a_{t-1}) takes the difference between consecutive frames in o_t[i], flattens the result, appends a_{t-1}, and returns the concatenated vector. K(o_t, a_{t-1}) thus gives a tensor of shape N \times d_{f_k} = N \times [(6 \times 6 \times 3) + |A|] = N \times (108 + |A|) (111 for CarRacing and 114 for Atari Pong). We use softmax as the non-linear activation function \sigma(\cdot), and we apply layer normalization to both the input patches and the output latent code.
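A minimal sketch of this patch pre-processing (the 96x96 CarRacing resolution, tensor layout, and flattening order are illustrative assumptions):

```python
import torch

def to_patches(frames, prev_action, P=6):
    """Sketch of the vision-task inputs described above.

    frames: (H, W, k) stack of k grayscale frames; prev_action: (|A|,) vector.
    Returns K-features (N, P*P*(k-1) + |A|) and V-features (N, P*P*k).
    """
    H, W, k = frames.shape
    x = frames.permute(2, 0, 1)                                      # (k, H, W)
    # Non-overlapping P x P patches: N = (H // P) * (W // P)
    patches = x.unfold(1, P, P).unfold(2, P, P)                      # (k, H//P, W//P, P, P)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, k, P, P)    # (N, k, P, P)

    f_v = patches.reshape(patches.shape[0], -1)                      # flatten: (N, k*P*P)

    diffs = patches[:, 1:] - patches[:, :-1]                         # frame differences: (N, k-1, P, P)
    f_k = torch.cat([diffs.reshape(diffs.shape[0], -1),
                     prev_action.expand(patches.shape[0], -1)], dim=-1)
    return f_k, f_v

# e.g. a CarRacing-like setting: 96x96 frames, k=4, 3 continuous actions
f_k, f_v = to_patches(torch.randn(96, 96, 4), torch.randn(3))
print(f_k.shape, f_v.shape)   # (256, 111), (256, 144)
```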


Cart-pole swing up

We examine Cart-pole swing up first to illustrate our method, and also use it to provide a clear analysis of the attention mechanism. We use CartPoleSwingUpHarder, a more difficult version of the task in which the initial positions and velocities are highly randomized, leading to a higher variance of task scenarios. In this environment, the agent observes [x, \dot{x}, \cos(\theta), \sin(\theta), \dot{\theta}], outputs a scalar action, and is rewarded at each step for getting x close to 0 and \cos(\theta) close to 1.

Interactive Demo

Permutation Invariant Agent in CartPoleSwingUpHarder
In this demo, the user can shuffle the order of the 5 inputs at any time, and observe how the agent adapts to the new ordering of the inputs.

We use a two-layer neural network as our agent. The first layer is an AttentionNeuron layer with N=5 sensory neurons that outputs m_t \in \mathcal{R}^{16}. A linear layer takes m_t as input and outputs a scalar action. For comparison, we also trained an agent with a two-layer FNN policy with 16 hidden units. We use direct policy search to train the agents with CMA-ES, an evolution strategies (ES) method.
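A minimal sketch of the direct policy search loop, assuming the pycma package; the rollout function, parameter count, and initial step size are placeholders:

```python
import cma
import numpy as np

NUM_PARAMS = 407   # total number of learnable parameters (placeholder value)

def rollout(params, num_episodes=16):
    """Placeholder fitness: in practice, load `params` into the agent and
    return the mean episodic reward over `num_episodes` rollouts."""
    return -float(np.sum(params ** 2))   # dummy stand-in so the sketch runs

# Direct policy search with CMA-ES (we assume the pycma package).
es = cma.CMAEvolutionStrategy(x0=np.zeros(NUM_PARAMS), sigma0=0.1, inopts={"popsize": 256})
while not es.stop():
    solutions = es.ask()
    fitness = [-rollout(np.asarray(p)) for p in solutions]   # CMA-ES minimizes
    es.tell(solutions, fitness)
    es.disp()
```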

We report experimental results in the following table:

Cart-pole Tests
For each experiment, we report the average score and the standard deviation from 1000 test episodes. Our agent is trained only in the environment with 5 sensory inputs.

Our agent can perform the task and balance the cart-pole from an initially random state. Its average score is slightly lower than the baseline (see column 1) because each sensory neuron requires some time steps in each episode to interpret the sensory input signal it receives. However, as a trade-off for this small performance sacrifice, our agent retains its performance even when the input sensor array is randomly shuffled, which is not the case for an FNN policy (column 2). Moreover, although our agent is only trained in an environment with five inputs, it can accept an arbitrary number of inputs in any order without re-training. (Because our agent was not trained with normalization layers, we scaled the output from the AttentionNeuron layer by 0.5 to account for the extra inputs in the last two experiments.) We test our agent by duplicating the 5 inputs to give the agent 10 observations (column 3). When we instead replace the 5 extra signals with white noise with \sigma=0.1 (column 4), we do not see a significant drop in performance.

The AttentionNeuron layer must possess two properties to attain these results: its output is permutation invariant to its input, and its output carries task-relevant information. The following figure is a visual confirmation of the permutation invariant property: we plot the output messages from the layer and their changes over time in two tests. Using the same environment seed, we keep the observation as-is in the first test but shuffle the order in the second. As the figure shows, the output messages are identical in the two roll-outs.

Permutation invariant outputs
The output (16-dimensional global latent code) from the AttentionNeuron layer does not change when we input the sensor array as-is (top) or when we randomly shuffle the array (bottom). Yellow represents higher values, and blue lower values.

We also perform a simple linear regression analysis on the outputs (based on the shuffled inputs) to recover the 5 inputs in their original order. The following table shows the R^2 values from this analysis (R^2 measures the goodness-of-fit of a model; an R^2 of 1 implies that the regression perfectly fits the data), suggesting that some important indicators (e.g. \dot{x} and \dot{\theta}) are well represented in the output:

Linear regression analysis on the output
For each of the N=5 sensory inputs we have one linear regression model with m_t \in \mathcal{R}^{16} as the explanatory variables.
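This analysis amounts to fitting one ordinary least-squares model per input channel; a sketch with scikit-learn, where the collected arrays are random placeholders standing in for data gathered during a roll-out:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholders: in practice, `m` holds the 16-dim AttentionNeuron outputs collected
# over a roll-out with shuffled inputs, and `obs` the true 5 observations per step.
T = 1000
m = np.random.randn(T, 16)
obs = np.random.randn(T, 5)

names = ["x", "x_dot", "cos_theta", "sin_theta", "theta_dot"]
for i, name in enumerate(names):
    reg = LinearRegression().fit(m, obs[:, i])
    print(name, reg.score(m, obs[:, i]))   # R^2 of recovering each input from m_t
```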

Finally, to accompany the quantitative results in this section, we extended the earlier interactive demo to showcase the flexibility of PI agents. Here, our agent, with no additional training, receives 15 input signals in shuffled order, ten of which are pure noise, and the other five are the actual observations from the environment.

Interactive Demo

Dealing with unspecified number of extra noisy channels
Without additional training, our agent receives 15 input signals in shuffled order, 10 of which are pure Gaussian noise (σ=0.1), and the other 5 are the actual observations from the environment. Like the earlier demo, the user can shuffle the order of the 15 inputs, and observe how the agent adapts to the new ordering of the inputs.

The existing policy is still able to perform the task, demonstrating the system's capacity to work with a large number of inputs and attend only to channels it deems useful. Such flexibility may find useful applications for processing a large unspecified number of signals, most of which are noise, from ill-defined systems.


PyBullet Ant

While direct policy search methods such as evolution strategies (ES) can train permutation invariant RL agents, oftentimes we already have access to pre-trained agents or recorded human data performing the task at hand. Behavior cloning (BC) allows us to convert such an existing policy into a permutation invariant version that carries the desirable properties associated with it. We report experimental results here:

PyBullet Ant Experimental Results

We train a standard two-layer FNN policy to perform AntBulletEnv-v0, a 3D locomotion task in PyBullet, and use it as a teacher for BC. For comparison, we also train a two-layer agent with AttentionNeuron as its first layer. Both networks are trained with ES. Similar to CartPole, we expect to see a small performance drop due to the time steps required for the agent to interpret an arbitrarily ordered observation space. We then collect data from the FNN teacher policy to train permutation invariant agents using BC. More details of the BC setup can be found in the Appendix.

The performance of the BC agent is lower than that of the one trained from scratch with ES, despite having an identical architecture. This suggests that the inductive bias that comes with permutation invariance may not match the original teacher network, so the small model used here may not be expressive enough to clone an arbitrary teacher policy, resulting in a larger variance in performance. A benefit of gradient-based BC, compared to RL, is that we can easily train larger networks to fit the behavioral data. We show that increasing the size of the subsequent layers for BC does enhance the performance.

While not explicitly trained to do so, we note that the policy still works even when we reshuffle the ordering of the observations several times during an episode:

PyBullet Ant with a permutation invariant policy.
The ordering of the 28 observations is reshuffled every 100 frames.

As we will demonstrate next, BC is a useful technique for training permutation invariant agents in environments with high dimensional visual observations that may require larger networks.


Atari Pong

Here, we are interested in solving screen-shuffled versions of vision-based RL environments, where each observation frame is divided up into a grid of patches, and, like a puzzle, the agent must process the patches in a shuffled order to determine a course of action to take. A shuffled version of Atari Pong, shown in the following figure, can be especially hard for humans to play when the inductive biases from human priors that expect a certain type of spatial structure are missing from the observations:

Pong and Puzzle Pong

But rather than throwing away the spatial structure entirely from our solution, we find that convolutional neural network (CNN) policies work better than fully connected multi-layer perceptron (MLP) policies when trained with behavior cloning for Atari Pong. In this experiment, we reshape the output m_t of the AttentionNeuron layer from \mathcal{R}^{400 \times 32} to \mathcal{R}^{20 \times 20 \times 32}, a 2D grid of latent codes, and pass this 2D grid into a CNN policy. This way, the role of the AttentionNeuron layer is to take a list of unordered observation patches, and learn to construct a 2D grid representation of the inputs to be used by a downstream policy that expects some form of spatial structure in the codes. Our permutation invariant policy trained with BC can consistently reach a perfect score of 21, even with shuffled screens. The details of the CNN policy and BC training can be found in the Appendix.

Unlike typical CNN policies, our agent can accept a subset of the screen, since the agent's input is a variable-length list of patches. It is thus interesting to deliberately discard a randomly chosen percentage of the patches and see how the agent reacts. The net effect of this experiment for humans is similar to being asked to play a partially occluded and shuffled version of Atari Pong. During training via BC, we randomly remove a percentage of observation patches. In tests, we fix the randomly selected positions of patches to discard during an entire episode. The following figure demonstrates that the agent still performs effectively even when we remove 70% of the patches:

70% Occluded, Shuffled-screen Atari Pong (right). Observations reshuffled every 500 frames.

We present the results in a heat map in the following figure, where the y-axis shows the patches removed during training and the x-axis gives the patch occlusion ratio in tests:

Mean test scores in Atari Pong, and an example of a randomly-shuffled, occluded observation
In the heat map, each value is the average score from 100 test episodes.

The heat map shows clear patterns for interpretation. Looking horizontally along each row, the performance drops as the agent sees less of the screen, which increases the difficulty. Interestingly, an agent trained at a high occlusion rate of 80% rarely wins against the Atari opponent, but once it is presented with the full set of patches during tests, it is able to achieve a fair result by making use of the additional information.

To gain insights into the learned policy, we projected the AttentionNeuron layer's output in a test roll-out to 2D space using t-SNE. In the figure below, we highlight several groups and show their corresponding inputs. The AttentionNeuron layer clearly learned to cluster inputs that share similar features:

2D embedding of the AttentionNeuron layer's output in a test episode
We highlight several representative groups in the plot, and show the sampled inputs from them. For each group, we show 3 corresponding inputs (rows) and unstack each to show the time dimension (columns).

For example, the 3 sampled inputs in the blue group show the situation when the agent's paddle moved toward the bottom of the screen and stayed there. Similarly, the orange group shows the cases when the ball was not in sight, which happened right before or after a game started or ended. We believe these discriminative outputs enabled the downstream policy to accomplish the agent's task.
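The 2D projection above can be produced with a few lines of scikit-learn; a sketch, where the collected message array is a random placeholder:

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder: in practice, `messages` holds the flattened AttentionNeuron outputs
# collected at each time step of a test episode.
messages = np.random.randn(500, 400 * 32)
embedding = TSNE(n_components=2, perplexity=30).fit_transform(messages)   # (T, 2) points to plot
```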


Car Racing

CarRacing base task (left), modified shuffled-screen task (right)
Our agent is trained only in the base CarRacing environment. The right screen is what our agent observes and the left is for human visualization. A human would find driving with the shuffled observation very difficult because we are not constantly exposed to such tasks, just like the “backwards” bicycle example mentioned earlier.

We find that encouraging an agent to learn a coherent representation of a deliberately shuffled visual scene leads to agents with useful generalization properties. Such agents are still able to perform their task even if the visual background of the environment changes, despite being trained only on a single static background. Out-of-domain generalization is an active research area, and here, we combine our method with AttentionAgent, a method that uses selective hard attention via a patch voting mechanism. AttentionAgent generalizes well to several unseen visual environments where task-irrelevant elements are modified, but fails to generalize to drastic background changes in a zero-shot setting. We find that combining the permutation invariant AttentionNeuron layer with AttentionAgent's policy network results in good generalization performance when we change the background:

KOF background
Mt. Fuji background
DS background
Ukiyo-e background

As mentioned, we combine the AttentionNeuron layer with the policy network used in AttentionAgent. As the hard-attention-based policy is non-differentiable, we train the entire system using ES. We reshape the AttentionNeuron layer's outputs to adapt them to the policy network. Specifically, we reshape the output message to m_t \in \mathcal{R}^{32 \times 32 \times 16} such that it can be viewed as a 32-by-32 grid of 16 channels. The end result is a policy with two layers of attention: the first layer outputs a latent code book to represent a shuffled scene, and the second layer performs hard attention to select the top K=10 codes from this 2D global latent code book. A detailed description of the selective hard attention policy and other training details can be found in the Appendix.

We first train the agent in the CarRacing environment, and report the average score from 100 test roll-outs in the following table. As the first column shows, our agent's performance in the training environment is slightly lower but comparable to the baseline method, as expected. But because our agent accepts randomly shuffled inputs, it is still able to navigate even when the patches are shuffled.

CarRacing Test Results

Without additional training or fine-tuning, we test whether the agent can also navigate in four modified environments where the green grass background is replaced with various images. As the CarRacing Test Results table shows (from column 2 onward), our agent generalizes well to most of the test environments with only mild performance drops, while the baseline method fails to generalize. We suspect this is because the AttentionNeuron layer has transformed the original RGB space into a useful hidden representation (represented by m_t) that has eliminated task-irrelevant information after observing and reasoning about the sequences of (o_t, a_{t-1}) during training, enabling the downstream hard attention policy to work with an optimized abstract representation tailored for the policy, instead of raw RGB patches.

We also compare our method to NetRand, a simple but effective technique developed to perform similar generalization tasks. The second row of the CarRacing Test Results table shows the results of training NetRand on the base CarRacing task. The CarRacing task proved to be too difficult for NetRand, but despite a low score of 480 in the training environment, the agent generalizes well to the “Mt. Fuji” and “Ukiyo-e” modifications. In order to obtain a meaningful comparison, we combine NetRand with AttentionAgent so that it can get close to a mean score of 900 on the base task. To do that, we use NetRand as an input layer to the AttentionAgent policy network, and train the combination end-to-end using ES, which is consistent with our proposed method for this task. The combination attains a respectable mean score of 885, and as the third row of the above table shows, this approach also generalizes to a few of the unseen modifications of the CarRacing environment.

Our score on the base CarRacing task is lower than NetRand's, but this is expected since our agent requires some amount of time steps to identify each of the inputs (which could be shuffled), while the NetRand and AttentionAgent agents will simply fail on the shuffled versions of CarRacing. Despite this, our method still compares favorably in generalization performance.

To gain some insight into how the agent achieves its generalization ability, we visualize the attentions from the AttentionNeuron layer in the following figure:

Attention visualization
We highlight the patches that receive the most attention.
Top: Attention plot in training environment.
Bottom: Attention plot in a test environment with unseen background.

In CarRacing, the agent has learned to focus its attention (indicated by the highlighted patches) on the road boundaries, which are intuitive to human beings and are critical to the task. Notice that the attended positions are consistent before and after the shuffling. This type of attention analysis can also be used to analyze failure cases. More details about this visualization can be found in the Appendix.


Related Work

Our work builds on ideas from several different areas:

Self-organization is a process where some form of global order emerges from local interactions between parts of an initially disordered system. It is also a property observed in cellular automata (CA), which are mathematical systems consisting of a grid of cells that perform computation by having each cell communicate with its immediate neighbors and perform a local computation to update its internal state. Such local interactions are useful in modeling complex systems and have been applied to model non-linear dynamics in various fields. Cellular Neural Networks were first introduced in the 1980s to use neural networks in place of the algorithmic cells in CA systems. They were applied to perform image processing operations with parallel computation. Eventually, the concept of self-organizing neural networks found its way into deep learning in the form of Graph Neural Networks (GNN).

Using modern deep learning tools, recent work demonstrates that neural CA, or self-organized neural networks performing only local computation, can generate (and re-generate) coherent images and voxel scenes, and even perform image classification. Self-organizing neural network agents have been proposed in the RL domain, with recent work demonstrating that shared local policies at the actuator level, through communicating with their immediate neighbors, can learn a globally coherent policy for continuous control locomotion tasks. While existing CA-based approaches present a modular, self-organized solution, they are not inherently permutation invariant. In our work, we build on neural CA, and enable each cell to communicate beyond its immediate neighbors via an attention mechanism that enables permutation invariance.

Meta-learning recurrent neural networks (RNN) have been proposed to approach the problem of learning the learning rules for a neural network using the reward or error signal, enabling meta-learners to learn to solve problems presented outside of their original training domains. The goals are to enable agents to continually learn from their environments in a single lifetime episode, and to obtain much better data efficiency than conventional learning methods such as stochastic gradient descent (SGD). Meta-learned policies that can adapt the weights of a neural network to their inputs during inference time have been proposed in the form of fast weights, associative weights, hypernetworks, and Hebbian-learning approaches. Recent works combine ideas of self-organization with meta-learning RNNs, and have demonstrated that modular meta-learning RNN systems can not only learn to perform SGD-like learning rules, but can also discover more general learning rules that transfer to classification tasks on unseen datasets.

In contrast, the system presented here does not use an error or reward signal to meta-learn or fine-tune its policy. Rather, by using the shared modular building blocks from the meta-learning literature, we focus on learning or converting an existing policy to one that is permutation invariant, and we examine the characteristics such policies exhibit in a zero-shot setting, without additional training.

Attention can be viewed as an adaptive weight mechanism that alters the weight connections of a neural network layer based on what the inputs are. Linear dot-product attention was first proposed for meta-learning, and versions of linear attention with a softmax nonlinearity appeared later, now made popular by the Transformer. The adaptive nature of attention provides the Transformer with a high degree of expressiveness, enabling it to learn inductive biases from large datasets, and it has been incorporated into state-of-the-art methods in natural language processing, image recognition and generation, and the audio and video domains.

Attention mechanisms have found many uses in RL. Our work here specifically uses attention to enable communication between arbitrary numbers of modules in an RL agent. While previous work utilized attention as a communication mechanism between independent neural network modules of a GNN, our work focuses on studying the permutation invariant properties of attention-based communication applied to RL agents. Related work used permutation invariant critics to enhance performance in multi-agent RL. Building on previous work on permutation invariance, Set Transformer investigated the use of attention explicitly for permutation invariant problems that deal with set-structured data, which provided the theoretical foundation for our work.


Discussion and Future Work

In this work, we investigate the properties of RL agents that can treat their observations as an arbitrarily ordered, variable-length list of sensory inputs. By processing each input stream independently, and consolidating the processed information using attention, our agents can still perform their tasks even if the ordering of the observations is randomly permuted several times during an episode, without being explicitly trained for frequent re-shuffling. We report results of performance versus shuffling frequency in the following table for each environment:

Reshuffle observations during a roll-out
In each test episode, we reshuffle the observations every t steps. For CartPole, we test for 1000 episodes because of its larger task variance. For the other tasks, we report the mean and standard deviation from 100 tests. All environments except for Atari Pong have a hard limit of 1000 time steps per episode. While Atari Pong has no fixed maximum episode length, we observed that an episode usually lasts for around 2500 steps.

Applications  By presenting the agent with shuffled, and even incomplete observations, we encourage it to interpret the meaning of each local sensory input and how they relate to the global context. This could be useful in many real world applications. For example, such policies could avoid errors due to cross-wiring or complex, dynamic input-output mappings when being deployed in real robots. A similar setup to the CartPole experiment with extra noisy channels could enable a system that receives thousands of noisy input channels to identify the small subset of channels with relevant information.

Limitations  For visual environments, the patch size selection affects both performance and computational complexity. We find that patches of 6x6 pixels work well for our tasks, as did 4x4 pixels to some extent, but single-pixel observations fail to work. Small patch sizes also result in a large attention matrix, which may be too costly to compute unless approximations are used.

Another limitation is that the permutation invariant property applies only to the inputs, and not to the outputs. While the ordering of the observations can be shuffled, the ordering of the actions cannot. For permutation invariant outputs to work, each action will require feedback from the environment, including reward information, in order to learn the relationship between itself and the environment.

Societal Impact  Like most algorithms proposed in computer science and machine learning, our method can be applied in ways that will have potentially positive or negative impacts on society. While our small-scale, self-contained experiments study only the properties of RL agents that are permutation invariant to their observations, and we believe our results do not directly cause harm to society, the robustness and flexibility of the method may be of use for data-collection systems that receive data from a large, variable number of sensors. For instance, one could apply permutation invariant sensory systems to process data from millions of sensors for anomaly detection, which may lead to either positive or negative impacts if used in applications such as large-scale sensor analysis for weather forecasting, or deployed in large-scale surveillance systems that could undermine our basic freedoms.

Our work also provides a way to view the Transformer through the lens of self-organizing neural networks. Transformers are known to have potentially negative societal impacts highlighted in studies about possible data-leakage and privacy vulnerabilities, malicious misuse and issues concerning bias and fairness, and the energy requirements for training these models.

Future Work  An interesting future direction is to also make the action layer have the same properties, and model each motor neuron as a module connected using attention. With such methods, it may be possible to train an agent with an arbitrary number of legs, or to control robots with different morphologies using a single policy that is also provided with a reward signal as feedback. Moreover, in this work our method accepts previous actions as a feedback signal; however, the feedback signal is not restricted to the actions. We look forward to seeing future work that includes signals such as environmental rewards to train permutation invariant meta-learning agents that can adapt not only to changes in the observed environment, but also to changes to the agent itself.

If you would like to discuss any issues or give feedback, please visit the GitHub repository of this page for more information.

Acknowledgements

The authors would like to thank Rishabh Agarwal, Jie Tan, Yingtao Tian, Douglas Eck, Aleksandra Faust and our NeurIPS2021 reviewers for valuable discussion and feedback.

The experiments in this work were performed on virtual machines provided by Google Cloud Platform.

This article was prepared using the Distill template. Interactive demos were built with p5.js.

Any errors here are our own and do not reflect opinions of our proofreaders and colleagues. If you see mistakes or want to suggest changes, feel free to contribute feedback by participating in the discussion forum for this article.

Vision Icon by artist Laymik on Noun Project.  Neuron icon by artist Laymik.

Citation

For attribution in academic contexts, please cite this work as

Yujin Tang and David Ha, "The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning", 2021.

BibTeX citation

@inproceedings{attentionneuron2021,
  title={The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning},
  author={Yujin Tang and David Ha},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021},
  url={https://openreview.net/forum?id=wtLW-Amuds},
  note="\url{https://attentionneuron.github.io}",
}

Open Source Code

Please see our repo for details about the code release.

Reuse

Diagrams and text are licensed under Creative Commons Attribution CC-BY 4.0 with the source available on GitHub, unless noted otherwise. The figures that have been reused from other sources don’t fall under this license and can be recognized by the citations in their caption.

Appendix

Hyper-parameters

The Notation List Table in the Method Section in the main text contains the hyper-parameters used for each experiment. We did not employ exhaustive hyper-parameter tuning, but have simply selected parameters that can appropriately size our models to work with training methods such as evolution strategies, where the number of parameters cannot be too large. As mentioned in the discussion section about the limitations, we tested a small range of patch sizes (1 pixel, 4 pixels, 6 pixels), and we find that a patch size of 6x6 works well across tasks.

Description of compute infrastructure used to conduct experiments

For all ES results, we train on Google Kubernetes Engines (GKE) with 256 CPUs (N1 series) for each job. The approximate time, including both training and periodic tests, for the jobs are: 3 days (CartPole), 5 days (PyBullet Ant ES) and 10 days (CarRacing). For BC results, we train with Google Computing Engines (GCE) on an instance that has one V100 GPU. The approximate time, including both training and periodic tests, for the jobs are: 5 days (PyBullet Ant BC), 1 day (Atari Pong).

Training budget

The costs of ES training are summarized in the following table. A maximum of 20K generations is specified for training, but training is stopped early if the performance converges. Each generation has 256 \times 16 = 4096 episode rollouts, where 256 is the population size and 16 is the number of rollout repetitions. The Pong permutation-invariant (PI) agents were trained using behavior cloning (BC) on a pre-trained PPO policy (which is not PI-capable), with 10M training steps.

                          CartPole    AntBullet    Pong    CarRacing
Number of Generations     14,000      12,000       -       4,000

Note that we used the hyper-parameters (e.g., population size, rollout repetitions) that proved to work on a wide range of tasks from past experience, and did not tune them for each experiment. In other words, these settings were not chosen with sample-efficiency in mind, but rather for learning a working PI-capable policy using distributed computation within a reasonable wall-clock time budget.

We consider two possible approaches when taking sample-efficiency into consideration. In the experiments, we have demonstrated that it is possible to simply use state-of-the-art RL algorithms to learn a non-PI policy, and then use BC to produce a PI version of the policy. The first approach is thus to rely on conventional RL algorithms to increase sample efficiency, which is an active, ongoing research topic. On the other hand, we think an interesting future direction is to formulate environments where BC will fail in a PI setting, and where interactions with the environment (in a PI setting) are required to learn a PI policy. For instance, we have demonstrated in PyBullet Ant that the BC method requires the cloned agent to have a much larger number of parameters compared to one trained with RL. This is where an investigation into sample-efficiency improvements for RL algorithms explicitly in the PI setting may be beneficial.

Detailed setups for the experiments

PyBullet Ant

In the PyBullet Ant experiment, we demonstrated that a pre-trained policy can be converted into a permutation invariant one with behavior cloning (BC). We give a detailed task description and the experimental setup here. In AntBulletEnv-v0, the agent controls an ant robot that has 8 joints (|A| = 8), and observes a vector that contains base and joint states as well as foot-ground contact information at each time step (|O| = 28). The mission is to make the ant move along a pre-defined straight line as fast as possible. The teacher policy is a 2-layer FNN policy with 32 hidden units trained with ES. We collected data from 1000 test roll-outs, each of which lasted for 500 steps. During training, we add zero-mean Gaussian noise (\sigma = 0.03) to the previous actions. For the student policy, we set up two networks. The first is a 2-layer network that has the AttentionNeuron with output size m_t \in \mathcal{R}^{32} as its first layer, followed by a fully-connected (FC) layer. The second, larger policy is similar in architecture, but we add one more FC layer and expand all hidden sizes to 128 to increase its expressiveness. We train the students with a batch size of 64, an Adam optimizer with lr = 0.001, and we clip the gradient at a maximum norm of 0.5.
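A minimal sketch of this behavior cloning loop; the student network, the data, and the MSE loss on the teacher's actions are illustrative placeholders rather than the exact setup:

```python
import torch
import torch.nn as nn

# Placeholders standing in for the real PI student policy and the collected roll-out data.
student = nn.Linear(28 + 8, 8)
loader = [(torch.randn(64, 28), torch.randn(64, 8), torch.randn(64, 8))]   # (obs, prev_act, teacher_act)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for obs, prev_act, teacher_act in loader:
    prev_act = prev_act + 0.03 * torch.randn_like(prev_act)    # Gaussian noise on previous actions
    pred = student(torch.cat([obs, prev_act], dim=-1))
    loss = nn.functional.mse_loss(pred, teacher_act)           # assumed MSE loss on teacher actions
    opt.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(student.parameters(), max_norm=0.5)
    opt.step()
```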

Atari Pong

In the Atari game Pong, we append a deep CNN to the AttentionNeuron layer in our agent (the student policy). To be concrete, we reshape the AttentionNeuron's output message m_t \in \mathcal{R}^{400 \times 32} to m_t \in \mathcal{R}^{20 \times 20 \times 32} and pass it to the trailing CNN: [Conv(in=32, out=64, kernel=4, stride=2), Conv(in=64, out=64, kernel=3, stride=1), FC(in=3136, out=512), FC(in=512, out=6)]. We use ReLU as the activation function in the CNN. We collect the stacked observations and the corresponding logits output from a pre-trained PPO agent (the teacher policy) over 1000 roll-outs, and we minimize the MSE loss between the student policy's output and the teacher policy's logits. The learning rate and gradient norm clip are the same as in the previous experiment, but we use a batch size of 256.
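A PyTorch sketch of this CNN (the placement of ReLU after each convolutional and fully-connected layer is an assumption):

```python
import torch
import torch.nn as nn

# Sketch of the trailing CNN described above; m_t is reshaped to (32, 20, 20)
# (channels-first for PyTorch) before being fed in.
cnn_policy = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),   # (32,20,20) -> (64,9,9)
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),   # -> (64,7,7)
    nn.Flatten(),                                            # -> 3136
    nn.Linear(3136, 512), nn.ReLU(),
    nn.Linear(512, 6),                                       # logits for the 6 Atari actions
)
logits = cnn_policy(torch.randn(1, 32, 20, 20))
print(logits.shape)   # torch.Size([1, 6])
```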

For the occluded Pong experiment, we randomly remove a certain percentage of the patches across a training batch of stacked observation patches. In tests, we sample a patch mask to determine the positions to occlude at the beginning of the episode, and apply this mask throughout the episode.

CarRacing

In AttentionAgent, the authors observed that the agent generalizes well if it is forced to make decisions based on only a fraction of the available observations. Concretely, they proposed to segment the input image into patches and let the patches vote for each other via a modified self-attention mechanism. The agent then takes into consideration only the top K=10 patches that receive the most votes, and an LSTM controller makes decisions based on the coordinates of these patches. Because the voting process involves sorting and pruning, which are not differentiable, the agent is trained with ES. In their experiments, the authors demonstrated that the agent could navigate well not only in the training environment, but also zero-shot transfer to several modified environments.

We only need to reshape the AttentionNeuron layer's outputs to adapt them to AttentionAgent's policy network. Specifically, we reshape the output message m_t \in \mathcal{R}^{1024 \times 16} to m_t \in \mathcal{R}^{32 \times 32 \times 16} such that it can be viewed as a 32-by-32 “image” of 16 channels. Then, if we make AttentionAgent's patch segmentation size 1, the original patch voting becomes voting among the m_t's and the output fits perfectly into the policy network. Except for this patch size, we kept all hyper-parameters in AttentionAgent unchanged, and we also used the same CMA-ES training hyper-parameters.

Although the simple settings above allow our augmented agent to learn to drive and generalize to unseen background changes, we found the car jittered left and right throughout the courses. We suspect this is because of the frame differencing operation in our f_k(o_t, a_{t-1}). Specifically, even when the car is on a straight lane, constantly steering left and right allows f_k(o_t, a_{t-1}) to capture more meaningful signals related to the changes of the road. To avoid such jittering behavior, we make m_t a rolling average of itself: m_t = (1 - \alpha) m_t + \alpha m_{t-1}, 0 \le \alpha \le 1. In our implementation, \alpha = g([h_{t-1}, a_{t-1}]), where h_{t-1} is the hidden state from AttentionAgent's LSTM controller and a_{t-1} is the previous action. g(\cdot) is a 2-layer FNN with 16 hidden units and a sigmoid output layer.
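A minimal sketch of this smoothing gate; the hidden activation (tanh) and the sizes of the LSTM hidden state and action vector are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the rolling-average gate described above (shapes illustrative).
h_dim, a_dim = 16, 3
g = nn.Sequential(nn.Linear(h_dim + a_dim, 16), nn.Tanh(), nn.Linear(16, 1), nn.Sigmoid())

h_prev, a_prev = torch.zeros(h_dim), torch.zeros(a_dim)
m_t, m_prev = torch.randn(32, 32, 16), torch.randn(32, 32, 16)

alpha = g(torch.cat([h_prev, a_prev], dim=-1))   # scalar gate in (0, 1)
m_t = (1 - alpha) * m_t + alpha * m_prev         # rolling average of the latent code
```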

We analyzed the attention matrix in the AttentionNeuron layer and visualized the attended positions. To be concrete, in CarRacing, the Query matrix has 1024 rows. Because we have 16 \times 16 = 256 patches, the Key matrix has 256 rows, and we therefore have an attention matrix of size 1024 \times 256. To plot the attended patches, we select from each row in the attention matrix the patch that has the largest value after softmax, which gives us a vector of length 1024. This vector represents the patches that each of the 1024 output channels has considered to be the most important. 1024 is larger than the total patch count, but there are duplications (i.e. multiple output channels mostly focus on the same patches). The number of unique patches turns out to be 10 \sim 20 at each time step. We emphasize these patches on the observation images to create an animation.
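A sketch of this selection step; the attention matrix here is a random placeholder standing in for the row-softmaxed attention at one time step:

```python
import numpy as np

# Placeholder for the (1024, 256) row-softmaxed attention matrix at one time step.
attn = np.random.rand(1024, 256)
most_attended = attn.argmax(axis=1)        # for each output channel, its most attended patch
unique_patches = np.unique(most_attended)  # the trained agent attends to only ~10-20 distinct patches per step
```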