 The mathematical foundations of policy gradient algorithms.

where $\Psi_t$ may take any of the following forms:

where (6) yields the lowest possible variance:

• Intuitive interpretation: a step in the policy gradient direction should increase the probability of better-than-average actions and decrease the probability of worse-than-average actions. The advantage function measures whether an action is better or worse than the policy's default behavior (its expectation).

## The goal of RL

### Objective

• Infinite horizon
• Finite horizon

### Evaluating the objective

## Direct differentiation

A convenient identity:

Overall,

• Comparison to Maximum likelihood:

## Drawbacks: variance

• Reducing variance
• Future does not affect the past
• Causality: the policy at time $t'$ cannot affect the reward at time $t$ when $t < t'$

where $\hat{Q}_{i,t} = \sum_{t'=t}^{T} r(s_{i,t'}, a_{i,t'})$ denotes the reward to go.
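As a minimal sketch, the reward-to-go can be computed with a single backward pass over the episode's rewards (the `gamma` argument is an illustrative addition; the undiscounted case matches the definition above):

```python
import numpy as np

def rewards_to_go(rewards, gamma=1.0):
    """Compute Q_hat_t = sum_{t'=t}^{T} gamma^(t'-t) * r_{t'} for every t."""
    q = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running  # accumulate from the end of the episode
        q[t] = running
    return q
```

For example, `rewards_to_go([1.0, 2.0, 3.0])` gives `[6.0, 5.0, 3.0]`: each entry keeps only the rewards from that step onward, which is what removes the "future affects the past" terms from the gradient estimator.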

### Baselines

where $b=\frac{1}{N} \sum_{i=1}^N r(\tau_i)$

Proof:

• Subtracting a baseline is unbiased in expectation.
• The average reward is not the best baseline, but it's pretty good.

• Optimal baseline:

Hence,

We get:

This is just the expected reward, weighted by gradient magnitudes.
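The variance-reduction claim can be checked numerically. The sketch below uses an illustrative setup (not from the original derivation): a one-step softmax policy over three actions with fixed rewards, where the baseline is the expected reward (here simply `r.mean()`, since the policy is uniform). Subtracting the baseline leaves the mean of the gradient estimate unchanged while shrinking its variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_gradients(n=20000):
    """Per-sample score-function gradient estimates for a one-step softmax policy
    over 3 actions (uniform pi, fixed rewards r = [1, 2, 10]), computed with no
    baseline and with the average-reward baseline b = mean(r)."""
    r = np.array([1.0, 2.0, 10.0])
    pi = np.ones(3) / 3.0
    a = rng.choice(3, size=n, p=pi)
    score = np.eye(3)[a] - pi                    # grad_theta log pi(a) for softmax logits
    g_plain = score * r[a][:, None]              # grad log pi(a) * r(a)
    g_base = score * (r[a] - r.mean())[:, None]  # grad log pi(a) * (r(a) - b)
    return g_plain, g_base
```

Comparing `g_plain.var(axis=0).sum()` with `g_base.var(axis=0).sum()` shows a large variance drop, while the empirical means of the two estimators agree.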

## Deriving the simplest policy gradient (Spinning Up)

Consider the case of a stochastic, parameterized policy, $\pi_{\theta}$. We maximize the expected return

Optimize the policy by gradient ascent:

Step by step:

1. Probability of a Trajectory. The probability of a trajectory $\tau = (s_0, a_0, \cdots, s_{T+1})$ given that actions come from $\pi_\theta$ is:

2. The log-derivative trick.

3. Log-probability of a trajectory:

The environment has no dependence on $\theta$, so gradients of $\rho_0(s_0)$, $P(s_{t+1} \vert s_t, a_t)$ and $R(\tau)$ are zero.
The gradient of the log-probability of a trajectory is thus a sum of the $\nabla_\theta \log \pi_\theta (a_t \vert s_t)$ terms. Overall:

We can estimate the expectation with a sample mean. If we collect a set of trajectories $\mathcal{D} = \{ \tau_i \}_{i=1,\cdots,N}$ where each trajectory is obtained by letting the agent act in the environment using the policy $\pi_\theta$, the policy gradient can be estimated as:

where $|\mathcal{D}|$ is the number of trajectories in $\mathcal{D}$.

# REINFORCE

REINFORCE (Monte-Carlo policy gradient) is a Monte-Carlo algorithm that uses the complete return from time $t$, which includes all future rewards up until the end of the episode.

• Intuition: the update moves the parameter vector in the direction that increases the probability of the taken action, proportionally to the return and inversely proportionally to the action probability.

REINFORCE algorithm:

1. Initialize policy parameter $\theta$ at random;
2. For each episode:
1. Generate an episode $S_0, A_0, R_1, \cdots, S_{T-1}, A_{T-1}, R_T$, following $\pi(\cdot \vert \cdot, \theta)$
2. Loop for each step of the episode $t=0,1,\cdots, T-1$:
1. $G \leftarrow \sum_{k=t+1}^T \gamma^{k-t-1} R_k$
2. $\theta \leftarrow \theta + \alpha \gamma^t G \nabla \ln \pi (A_t \vert S_t, \theta)$
• Drawbacks: REINFORCE has a high variance and thus produces slow learning.
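The algorithm above can be sketched on a toy problem. The two-armed bandit below is an illustrative assumption: episodes are a single step, so the return $G$ is just the immediate reward and the discount plays no role.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_bandit(n_episodes=2000, alpha=0.1):
    """REINFORCE on a 2-armed bandit: arm 0 always pays 1, arm 1 pays 0."""
    theta = np.zeros(2)                  # policy parameters (logits)
    for _ in range(n_episodes):
        pi = softmax(theta)
        a = rng.choice(2, p=pi)          # A_0 ~ pi(. | theta)
        G = 1.0 if a == 0 else 0.0       # return of the one-step episode
        grad_log = -pi
        grad_log[a] += 1.0               # grad log pi(a) = onehot(a) - pi for softmax
        theta += alpha * G * grad_log    # theta <- theta + alpha * gamma^t * G * grad, t = 0
    return softmax(theta)
```

After training, the policy puts almost all probability on the paying arm. The high-variance issue shows up as slow, noisy progress early on, when the return signal is only occasionally nonzero.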

## REINFORCE with Baseline

A variant of REINFORCE subtracts a baseline value from the return $G_t$ to reduce the variance of the policy gradient while keeping the bias unchanged.

# Actor-Critic

Actor-Critic consists of two components:

• Actor: updates the policy parameters $\theta$ for $\pi_\theta(a \vert s)$ in the direction suggested by the critic.
• Critic: updates the value function parameter $w$, which could be the action-value $Q_w(a \vert s)$ or the state-value $V_w(s)$.

Actor-Critic algorithms:

1. Initialize policy parameter $\pmb{\theta} \in \mathbb{R}^{d'}$ and state-value weights $\pmb{w} \in \mathbb{R}^d$
2. For each episode:
1. Initialize the first state of the episode $S$; set $I \leftarrow 1$
2. While $S$ != TERMINAL (for each time step):
1. $A \sim \pi(\cdot \vert S,\pmb{\theta})$
2. Take action $A$, observe $S'$, $R$
3. $\delta \leftarrow R + \gamma \hat{v}(S',\pmb{w}) - \hat{v}(S,\pmb{w})$ (if $S'$ is terminal, then $\hat{v}(S', \pmb{w})=0$)
4. $\pmb{w} \leftarrow \pmb{w} + \alpha^{\pmb{w}} \delta \nabla \hat{v}(S,\pmb{w})$
5. $\pmb{\theta} \leftarrow \pmb{\theta} + \alpha^{\pmb{\theta}} I \delta \nabla \ln \pi(A \vert S,\pmb{\theta})$
6. $I \leftarrow \gamma I$
7. $S \leftarrow S'$
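The pseudocode above can be sketched on a toy two-state chain (an illustrative environment of my choosing, with a tabular critic and a state-conditioned softmax actor):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def one_step_actor_critic(episodes=4000, a_w=0.1, a_t=0.1, gamma=1.0):
    """One-step actor-critic on a 2-state chain: s0 -> s1 with reward 0; from s1,
    action 0 ends the episode with reward 1 and action 1 ends it with reward 0."""
    theta = np.zeros((2, 2))   # policy logits per state
    w = np.zeros(2)            # tabular state values v(S, w)
    for _ in range(episodes):
        I, s = 1.0, 0
        while s is not None:
            pi = softmax(theta[s])
            a = rng.choice(2, p=pi)
            if s == 0:
                r, s_next = 0.0, 1
            else:
                r, s_next = (1.0 if a == 0 else 0.0), None
            v_next = 0.0 if s_next is None else w[s_next]   # v(terminal) = 0
            delta = r + gamma * v_next - w[s]               # TD error
            w[s] += a_w * delta                             # critic update
            grad_log = -pi
            grad_log[a] += 1.0
            theta[s] += a_t * I * delta * grad_log          # actor update
            I *= gamma
            s = s_next
    return softmax(theta[1]), w
```

The critic bootstraps $v(s_0)$ from $v(s_1)$, and the actor at $s_1$ learns to pick the rewarding action; unlike REINFORCE, each step updates immediately from the TD error rather than waiting for the full return.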

Mnih et al. (2016) proposed asynchronous gradient descent for the optimization of deep neural networks, showing that parallel actor-learners have a stabilizing effect on training and greatly reduce the training time on a single multi-core CPU instead of a GPU.

• Instead of experience replay, they asynchronously execute multiple agents in parallel on multiple instances of the environment. This parallelism also decorrelates the agents’ data into a more stationary process, since at any given time-step the parallel agents will be experiencing a variety of different states.
• Apply different exploration policies in each actor-learner to maximize the diversity. By running different exploration policies in different threads, the overall updates of parameters are likely to be less correlated in time than a single agent applying online updates.
• The gradient accumulation in parallel can be seen as a parallelized minibatch of stochastic gradient updates, where the parameters are updated thread-by-thread in the direction of each thread independently.

A3C pseudocode for each actor-learner thread:

1. Assume global shared parameter vectors $\theta$ and $\theta_v$ and a global shared counter $T=0$; thread-specific parameter vectors $\theta'$ and $\theta'_v$
2. Initialize the thread step count $t \leftarrow 1$
3. While $T \leq T_{\max}$:
1. Reset gradients: $d\theta \leftarrow 0$ and $d\theta_v \leftarrow 0$
2. Synchronize thread-specific parameters $\theta'=\theta$ and $\theta'_v = \theta_v$
3. $t_\text{start} = t$
4. Sample state $s_t$
5. While ($s_t$ != TERMINAL and $t - t_\text{start} \leq t_\text{max}$):
1. Perform the action $a_t \sim \pi(a_t \vert s_t; \theta')$
2. Receive reward $r_t$ and new state $s_{t+1}$;
3. $t \leftarrow t+1$
4. $T \leftarrow T+1$
6. The return estimate: $R = 0$ if $s_t$ is terminal, otherwise $R = V(s_t; \theta'_v)$
7. For $i \in \{ t-1, \cdots, t_\text{start} \}$ do
1. $R \leftarrow r_i + \gamma R$; here $R$ is a Monte Carlo estimate of $G_i$
2. Accumulate gradients w.r.t. $\theta'$: $d\theta \leftarrow d\theta + \nabla_{\theta'} \log \pi(a_i \vert s_i;\theta') (R-V(s_i;\theta_v'))$
3. Accumulate gradients w.r.t. $\theta'_v$: $d\theta_v \leftarrow d\theta_v + \frac{\partial (R-V(s_i;\theta_v'))^2}{\partial \theta_v'}$
8. Perform asynchronous update of $\theta$ using $d\theta$ and of $\theta_v$ using $d\theta_v$

Removing the first "A" (Asynchronous) from A3C, we get Advantage Actor-Critic (A2C). A3C updates the global parameters independently, so thread-specific agents update with different versions of the policy, and the aggregated updates may not be optimal.
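Since A2C waits for all workers' gradients, averages them, and applies them in one synchronous step, the update itself is simple. A minimal sketch, assuming each worker's gradient has already been computed (function name and setup are illustrative):

```python
import numpy as np

def a2c_sync_update(theta, worker_grads, lr=0.01):
    """One synchronous A2C step: average the gradients from all workers and apply
    a single update, so every worker restarts from the same policy parameters."""
    g = np.mean(worker_grads, axis=0)
    return theta + lr * g
```

The design point is the barrier: no worker's gradient is applied against a stale copy of $\theta$, which is exactly the staleness problem the asynchronous variant suffers from.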

A2C waits for each actor to finish its segment of experience before performing an update, averaging over all of the actors. In the next iteration, the parallel actors start from the same policy. A2C is more cost-effective than A3C when using single-GPU machines, and is faster than a CPU-only A3C implementation when using larger policies.

# Trust Region Policy Optimization (TRPO)

Trust Region Policy Optimization (TRPO) enforces a KL divergence constraint at every point in the state space.

TRPO maximizes a certain surrogate objective function that guarantees policy improvement with non-trivial step sizes, giving monotonic improvements with little hyperparameter tuning at each update.

• Let $\eta(\tilde{\pi})$ denote the expected return of another policy $\tilde{\pi}$, expressed in terms of the advantage over the policy $\pi$: $\eta(\tilde{\pi}) = \eta(\pi) + \mathbb{E}_{\tau \sim \tilde{\pi}} \left[ \sum_{t=0}^\infty \gamma^t A_\pi (s_t, a_t) \right]$
• $\rho_\pi$ denotes the discounted visitation frequency: $\rho_\pi (s) = P(s_0=s) + \gamma P(s_1 = s) + \gamma^2 P(s_2=s)+ \cdots$

The aforementioned update does not give any guidance on the step size.

• Conservative policy iteration provides explicit lower bounds on the improvements of $\eta$.

Let $\pi_\text{old}$ denote the current policy and $\pi' = \arg \max_{\pi'} L_{\pi_\text{old}} (\pi')$; the new policy is the mixture $\pi_\text{new}(a \vert s) = (1-\alpha) \pi_\text{old}(a \vert s) + \alpha \pi'(a \vert s)$,

where the lower bound:

Replace $\alpha$ with a distance measure between $\pi$ and $\tilde{\pi}$, the total variation divergence: $D_\text{TV}(p \| q)=\frac{1}{2} \sum_i |p_i - q_i|$ for discrete distributions $p,q$.
Define $D_\text{TV}^{\max}(\pi, \tilde{\pi}) = \max_{s} D_\text{TV}(\pi(\cdot \vert s) \| \tilde{\pi}(\cdot \vert s))$ and $\epsilon = \max_{s,a} |A_\pi (s,a)|$.

Since $D_\text{TV} (p \| q)^2 \leq D_\text{KL}(p \| q)$, letting $D_\text{KL}^{\max} (\pi, \tilde{\pi}) = \max_s D_\text{KL}(\pi(\cdot \vert s) \| \tilde{\pi}(\cdot \vert s))$, we get:

Let $M_i(\pi) = L_{\pi_i} (\pi) - C D_{KL}^{\max}(\pi_i, \pi)$, then:

Therefore,

This guarantees that the true objective $\eta$ is non-decreasing.

Afterwards, we improve the true objective $\eta$. Let $\theta_\text{old}$ represent $\pi_{\theta_\text{old}}$, and $\theta$ represent $\pi_\theta$.

Thus, we use a constraint on the KL divergence between the new policy and the old policy, i.e., a trust region constraint:

As a heuristic approximation, we use the average KL divergence in place of the maximum KL divergence:

• Expand $L_{\theta_\text{old}}$:
• Replace the sum over actions by an importance sampling estimator:
• Replace $\sum_s \rho_{\theta_\text{old}}$ with expectation $\mathbb{E}_{s \sim \rho_{\theta_\text{old}}}[\cdots]$; replace the advantage values $A_{\theta_\text{old}}$ by the $Q$-values $Q_{\theta_\text{old}}$. Finally, we get

# Proximal Policy Optimization (PPO)

• Problems: TRPO is relatively complicated, and is not compatible with architectures that include noise (e.g. dropout) or parameter sharing (between the policy and value function, or with auxiliary tasks).

• PPO with clipped surrogate objective performs better than that with KL penalty.

## PPO-clip

Clipped surrogate objective

• Let $r_t(\theta) = \frac{\pi_\theta (a_t \vert s_t)}{\pi_{\theta_\text{old}}(a_t \vert s_t)}$, so $r_t(\theta_\text{old})=1$. TRPO maximizes a surrogate objective from conservative policy iteration (CPI). Without the constraint, maximization of $L^{\text{CPI}}$ would lead to excessively large policy updates.
• PPO penalizes changes to the policy that move $r_t(\theta)$ away from 1: $L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t \left[ \min \left( r_t(\theta) \hat{A}_t,\; \text{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_t \right) \right]$

where $\epsilon$ is a hyperparameter, say, $\epsilon=0.2$. The intuition is to take the minimum of the clipped and unclipped objective, thus the final objective is a lower bound (i.e. a pessimistic bound) on the unclipped objective.
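The clipped objective translates directly to code; a minimal sketch, vectorized over a batch of probability ratios and advantage estimates (function name is mine):

```python
import numpy as np

def ppo_clip_objective(ratio, adv, eps=0.2):
    """L^CLIP = mean_t[ min( r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t ) ]."""
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    # taking the elementwise minimum makes the objective a pessimistic bound
    return np.minimum(unclipped, clipped).mean()
```

With a positive advantage and ratio 2.0, the contribution is capped at $1.2 \hat{A}_t$; with a negative advantage and ratio 0.5, the minimum picks the clipped (more negative) term, so the policy gains nothing from moving the ratio past the clip boundary in either direction.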

## PPO-penalty

• Variant: add a penalty on KL divergence
• With mini-batch SGD, optimize the KL-penalized objective:
• Compute $d=\hat{E}_t [\text{KL} [ \pi_{\theta_\text{old}} (\cdot \vert s_t), \pi_\theta (\cdot \vert s_t) ] ]$
• If $d < d_\text{target}/1.5$: $\beta \leftarrow \beta/2$
• If $d > d_\text{target} \times 1.5$: $\beta \leftarrow \beta \times 2$
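The adaptive-coefficient rule above is a three-way branch; a minimal sketch (function name is mine):

```python
def adapt_kl_penalty(beta, d, d_target):
    """PPO-penalty rule: halve beta when the measured KL d is well below target,
    double it when well above, otherwise leave it unchanged."""
    if d < d_target / 1.5:
        return beta / 2.0
    if d > d_target * 1.5:
        return beta * 2.0
    return beta
```

The 1.5 and 2 constants are the heuristic values from the PPO paper; the algorithm is not very sensitive to them.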

## PPO algorithms

Finally, the objective function is augmented with an error term on the value estimation and an entropy term to encourage sufficient exploration:

$L_t^{\text{CLIP}+\text{VF}+\text{S}}(\theta) = \hat{\mathbb{E}}_t \left[ L_t^{\text{CLIP}}(\theta) - c_1 L_t^{\text{VF}}(\theta) + c_2 S[\pi_\theta](s_t) \right]$

where $c_1$, $c_2$ are constant coefficients, $L_t^{\text{VF}}$ is a squared-error value loss, and $S$ denotes the entropy bonus.

• Settings:
• RNN
• mini-batch SGD

PPO algorithms with Actor-Critic style:

1. for iteration=$1,2,\cdots$:
1. for actor=$1,2,\cdots, N$:
1. run policy $\pi_{\theta_\text{old}}$ in environment for $T$ timesteps;
2. Compute advantage estimates $\hat{A}_1, \cdots, \hat{A}_T$;
2. Optimize surrogate $L$ w.r.t $\theta$, with $K$ epochs and mini-batch size $M \leq NT$
3. $\theta_\text{old} \leftarrow \theta$

## Distributed PPO

Let $W$ denote the number of workers; $D$ sets a threshold for the number of workers whose gradients must be available to update the parameters; $M$ and $B$ are the numbers of sub-iterations with policy and baseline updates, respectively, given a batch of data points.

The distributed PPO-penalty algorithm: experiments indicate that averaging gradients and applying them synchronously leads to better results in practice than applying them asynchronously.

Challenges:

1. Requires a large number of samples;
2. Difficulty of obtaining stable and steady improvement despite the non-stationarity of the incoming data.
3. The credit assignment problem in RL (a.k.a. the distal reward problem in the behavioral literature): rewards arrive with a long time delay.

Solution:

1. Use value functions to reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of advantage function that is analogous to TD($\lambda$);
2. Use TRPO for both policy and value function with NNs.

Now consider the $k$-step advantage estimators, sums of $k$ of the $\delta$ terms, denoted by $\hat{A}_t^{(k)}$: $\hat{A}_t^{(k)} = \sum_{l=0}^{k-1} \gamma^l \delta_{t+l} = -V(s_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{k-1} r_{t+k-1} + \gamma^k V(s_{t+k})$

As $k \rightarrow \infty$, the bias generally becomes smaller: $\hat{A}_t^{(\infty)} = \sum_{l=0}^{\infty} \gamma^l \delta_{t+l} = -V(s_t) + \sum_{l=0}^{\infty} \gamma^l r_{t+l}$,

which is simply the empirical returns minus the value function baseline.

### GAE

GAE is defined as the exponentially-weighted average of these $k$-step estimators: $\hat{A}_t^{\text{GAE}(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma \lambda)^l \delta_{t+l}$

Now consider two special cases:

• when $\lambda \rightarrow 0$:
• when $\lambda \rightarrow 1$:

This shows that GAE($\gamma,1$) has high variance due to the sum of terms, while GAE($\gamma,0$) induces bias but has lower variance. GAE with $\lambda \in (0,1)$ strikes a tradeoff between bias and variance.
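In code, GAE is a single backward pass over the TD errors, using the recursion $\hat{A}_t = \delta_t + \gamma\lambda \hat{A}_{t+1}$; a minimal sketch (function name is mine):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one trajectory.

    `values` has length len(rewards) + 1; the last entry is V of the final state
    (use 0.0 if the trajectory ended in a terminal state).
    """
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error delta_t
        running = delta + gamma * lam * running                 # A_t = delta_t + (gamma*lam) A_{t+1}
        adv[t] = running
    return adv
```

Setting `lam=0` returns the one-step TD errors, and `lam=1` with `gamma=1` returns the empirical returns-to-go minus the value baseline, matching the two special cases above.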

## Interpretation as Reward Shaping

• Reward shaping refers to the following reward transformation of the MDP: $\tilde{r}(s,a,s') = r(s,a,s') + \gamma \Phi(s') - \Phi(s)$,

where $\Phi: \mathcal{S} \rightarrow \mathbb{R}$ is an arbitrary scalar-valued function on the state space.

• Let $\tilde{Q}^{\pi, \gamma}$, $\tilde{V}^{\pi, \gamma}$, $\tilde{A}^{\pi, \gamma}$ be the value and advantage functions of the transformed MDP:

Let $\Phi = V$, then

## Value function estimation

Constrain the average KL divergence between the previous value function and new value function to be smaller than $\epsilon$:

# Actor-Critic with Experience Replay (ACER)

Actor-Critic with Experience Replay (ACER) employs truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.

## Importance weight truncation with bias correction

where $\bar{\rho}_t = \min(c,\rho_t)$, with importance weight $\rho_t = \frac{\pi(a_t \vert x_t)}{\mu (a_t \vert x_t)}$
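The truncation itself is a one-liner; a minimal sketch, assuming the per-step action probabilities under the current policy $\pi$ and the behavior policy $\mu$ are given (function name is mine):

```python
import numpy as np

def truncated_is_weights(pi_probs, mu_probs, c=10.0):
    """Importance weights rho_t = pi(a_t|x_t) / mu(a_t|x_t) and their truncation
    rho_bar_t = min(c, rho_t), which bounds the variance of the off-policy correction."""
    rho = np.asarray(pi_probs, dtype=float) / np.asarray(mu_probs, dtype=float)
    return rho, np.minimum(c, rho)
```

Truncating at $c$ caps the contribution of rare behavior-policy actions; ACER then adds a separate bias-correction term so the overall estimator stays unbiased.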

## Efficient TRPO

ACER maintains an average policy network $\phi_{\theta_a}$ that represents a running average of past policies and forces the updated policy to not deviate far from the average.

Update the parameter $\theta_a$ of the average policy network "softly" after each update:

The policy gradient w.r.t. $\phi$:

# Soft Q-learning

Soft Q-learning (SQL) expresses the optimal policy via a Boltzmann distribution (a.k.a. the Gibbs distribution).

• Soft Q-learning formulates a stochastic policy as a (conditional) energy-based model (EBM), with the energy function corresponding to the "soft" Q-function obtained when optimizing the maximum entropy objective.
• "The entropy regularized actor-critic algorithms can be viewed as approximate Q-learning methods, with the actor serving the role of an approximate sampler from an intractable posterior".

Contributions:

1. Improved exploration in multi-modal reward landscapes, where conventional deterministic or unimodal methods are at high risk of falling into suboptimal local optima.
2. Stochastic energy-based policies can provide a much better initialization for learning new skills than either random policies or policies pretrained with maximum expected reward objectives.

## Maximum Entropy RL

• The conventional RL objective learns a policy $\pi(\pmb{a}_t \vert \pmb{s}_t)$:
• Maximum entropy RL augments the objective with an entropy term, maximizing the policy's entropy at each visited state, where $\alpha$ determines the relative importance of entropy and reward.

## Energy-based models (EBM)

As shown in the figure below, conventional RL specifies a unimodal policy distribution, centered at the maximal Q-value and extending to the neighboring actions to provide noise for exploration (the red distribution). Exploration is thereby biased towards the upper mode, and the lower mode is ignored completely. To ensure the agent explores all promising states while prioritizing the more promising modes, Soft Q-learning defines the policy w.r.t. exponentiated Q-values (the green distribution):

where $\mathcal{Q}$ is an energy function, which could be represented by a neural network.

## Soft value functions

The soft Bellman equation:

where

The soft Q-function is:

The soft value function:

Then the optimal policy is:
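Numerically, the soft value function is a temperature-scaled log-sum-exp over actions, and the optimal policy is the corresponding Boltzmann (softmax) distribution. A minimal sketch for a discrete action space (function names are mine):

```python
import numpy as np

def soft_value(q, alpha=1.0):
    """V_soft(s) = alpha * log sum_a exp(Q_soft(s, a) / alpha), computed stably
    by subtracting the max before exponentiating."""
    z = np.asarray(q, dtype=float) / alpha
    m = z.max()
    return alpha * (m + np.log(np.exp(z - m).sum()))

def boltzmann_policy(q, alpha=1.0):
    """pi(a|s) proportional to exp((Q_soft(s, a) - V_soft(s)) / alpha): a softmax
    over Q-values at temperature alpha."""
    q = np.asarray(q, dtype=float)
    return np.exp((q - soft_value(q, alpha)) / alpha)
```

As $\alpha \rightarrow 0$ the soft value approaches $\max_a Q(s,a)$ and the policy becomes greedy; as $\alpha$ grows, the policy approaches uniform, which is the exploration/exploitation dial described above.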

## Proofs

• Define the optimal policy with the EBM form, where $Z$ is the sum of the numerator over actions (the partition function).
• Minimize the KL divergence:

## Soft Q-learning

• Soft Bellman-backup

This cannot be performed exactly in continuous or large state and action spaces, and sampling from the energy-based model is intractable in general.

### Sampling

#### Importance sampling

where $q_{a'}$ can be an arbitrary distribution over the action space.
It is also equivalent to minimizing:

where $\hat{Q}^{\bar{\theta}}_\text{soft} (s_t, a_t) = r_t + \gamma \mathbb{E}_{s_{t+1} \sim p_s} [V_\text{soft}^{\bar{\theta}}(s_{t+1})]$ is the target $Q$-value.

#### Stein Variational Gradient Descent (SVGD)

• How can we approximately sample from the soft Q-function?
1. MCMC-based sampling
2. Learn a stochastic sampling network trained to output approximate samples from the target distribution

Soft Q-learning applies the sampling network based on Stein variational gradient descent (SVGD) and amortized SVGD.

1. Learn a state-conditioned stochastic NN $a_t = f^\phi(\xi; s_t)$ that maps noise samples $\xi$ drawn from an arbitrary distribution into unbiased action samples from the target EBM of $Q_\text{soft}^\theta$.
2. The induced distribution of the actions $\pi^\phi (a_t \vert s_t)$ approximates the energy-based distribution w.r.t KL divergence:
3. SVGD provides the greediest directions as a functional. Update the policy network:

### Algorithms

$\mathcal{D} \leftarrow$ empty replay memory
Assign target parameters: $\bar{\theta} \leftarrow \theta$, $\bar{\phi} \leftarrow \phi$

1. for each epoch:
1. for each t do:
1. Collect experience
Sample an action for $s_t$ using $f^\phi$: $a_t = f^\phi(\xi;s_t)$ where $\xi \sim \mathcal{N}(0, I)$
Sample next state from the environment: $s_{t+1} \sim p_s(s_{t+1} \vert s_t, a_t)$
Save the new experience in the replay memory: $\mathcal{D} \leftarrow \mathcal{D} \cup \{ (s_t,a_t,r(s_t,a_t), s_{t+1}) \}$
2. Sample a mini-batch from the replay memory $\{(s_t^{(i)}, a_t^{(i)}, r_t^{(i)},s_{t+1}^{(i)})\}_{i=0}^N \sim \mathcal{D}$
3. Update the soft Q-function parameters:
Sample $\{ a^{(i,j)} \}_{j=0}^M \sim q_{a'}$ for each $s_{t+1}^{(i)}$
Compute the empirical soft values $\hat{V}_\text{soft}^{\bar{\theta}} (s_{t+1}^{(i)})$; compute the empirical gradient $\hat{\nabla}_\theta J_Q$; update $\theta$ according to $\hat{\nabla}_\theta J_Q$ using Adam.
4. Update policy
Sample $\{ \xi^{(i,j)} \}_{j=0}^{M} \sim \mathcal{N} (\pmb{0},\pmb{I})$ for each $s_t^{(i)}$
Compute actions $a_t^{(i,j)}= f^\phi(\xi^{(i,j)}, s_t^{(i)})$
Compute $\Delta f^\phi$ using its empirical estimate; compute the empirical estimate $\hat{\nabla}_\phi J_\pi$; update $\phi$ according to $\hat{\nabla}_\phi J_\pi$ using Adam
2. If epoch % update_interval == 0:
$\bar{\theta} \leftarrow \theta, \quad \bar{\phi} \leftarrow \phi$

### Benefits

• Better exploration. SQL provides an implicit exploration strategy by assigning each action a non-zero probability, shaped by the current belief about its value, effectively combining exploration and exploitation in a natural way.
• Fine-tuning maximum entropy policies: general-to-specific transfer. Pre-train policies for general-purpose tasks, then use them as templates or initializations for more specific tasks.
• Compositionality. Compose new skills from existing policies, even without any fine-tuning, by intersecting different skills.
• Robustness. The maximum entropy formulation encourages trying all possible solutions, so the agents learn to explore a large portion of the state space. They thus learn to act in various situations and are more robust against perturbations in the environment.

# Soft Actor-Critic (SAC)

Soft Actor-Critic (SAC) is an off-policy actor-critic algorithm based on the maximum entropy framework. Like SQL, SAC augments the conventional RL objectives with a maximum entropy objective:

where $\alpha$ determines the relative importance of the entropy term against the reward, and thus controls the stochasticity of the optimal policy.

The maximum entropy term:

1. Encourages the policy to explore more widely, while giving up on clearly unpromising avenues;
2. Captures multiple modes of near-optimal behavior;
3. Improves learning speed and exploration.

## Soft policy iteration

### Policy evaluation

The soft-Q value of the policy $\pi$:

where the soft state value function:

### Policy Improvement

For each state, we update the policy according to:

where the partition function $Z^{\pi_\text{old}}(s_t)$ normalizes the distribution.

## SAC

Consider a parameterized state value function $V_\Psi (s_t)$, soft Q-function $Q_\theta (s_t,a_t)$ and a tractable policy $\pi_\phi (a_t \vert s_t)$.

The soft value function is trained to minimize the squared residual error:

The soft Q-function can be trained to minimize the soft Bellman residual:

with

The policy can be optimized by the expected KL-divergence:

Minimize $J_\pi$ with the reparameterization trick.

• Reparameterize the policy with a neural network transformation, where $\epsilon$ is an input noise vector sampled from some fixed distribution.
• Rewrite the previous objective as:

• Employ two Q-functions to mitigate positive bias in the policy improvement step, using the minimum of the Q-functions for the value gradient and policy gradient.

SAC is brittle w.r.t. the temperature parameter. Choosing the optimal temperature $\alpha$ is non-trivial, and the temperature needs to be tuned for each task.

SAC finds a stochastic policy with maximal expected return that satisfies a minimum expected entropy constraint.

where $\mathcal{H}$ is a desired minimum expected entropy.

• Rewrite the objective as an iterated maximization
• Finally we get:

## Algorithms

Update $\alpha$ with the following objective:

# Twin Delayed Deep Deterministic PG (TD3)

1. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., & Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. ICML.
2. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.
3. Schulman, J., Levine, S., Abbeel, P., Jordan, M. I., & Moritz, P. (2015). Trust Region Policy Optimization. ICML.
4. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal Policy Optimization Algorithms. arXiv:1707.06347.
5. Heess, N., Dhruva, T., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, S. M., Riedmiller, M. A., & Silver, D. (2017). Emergence of Locomotion Behaviours in Rich Environments. arXiv:1707.02286.
6. Schulman, J., Moritz, P., Levine, S., Jordan, M. I., & Abbeel, P. (2016). High-Dimensional Continuous Control Using Generalized Advantage Estimation. arXiv:1506.02438.
7. Wu, Y., Mansimov, E., Grosse, R. B., Liao, S., & Ba, J. (2017). Scalable Trust-Region Method for Deep Reinforcement Learning Using Kronecker-Factored Approximation. NeurIPS.
8. Wang, Z., Bapst, V., Heess, N., Mnih, V., Munos, R., Kavukcuoglu, K., & de Freitas, N. (2016). Sample Efficient Actor-Critic with Experience Replay. arXiv:1611.01224.
9. Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv:1801.01290.
10. Haarnoja, T., Tang, H., Abbeel, P., & Levine, S. (2017). Reinforcement Learning with Deep Energy-Based Policies. ICML.
11. Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., ... & Levine, S. (2018). Soft Actor-Critic Algorithms and Applications. arXiv:1812.05905.