Sam Stites

Policy vs Value functions

Usually, policy and value functions work best together, but most of the time finding a policy is preferred over finding a value function. Of course this is dependent on your use case – it might not be possible to find a policy as easily as a value function, or finding a policy might over-simplify the situation. Part of what makes policies nice is that they can be represented in a more compact manner. With a policy you get something as concrete as: “Knights can move in an L-shape, here is a distribution to make your choice” – paired with more features, you get complex strategies which allow agents to win at a human level of skill.
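
To make that concrete, here is a minimal sketch (all names are hypothetical, using numpy) of what “a distribution over legal knight moves” looks like as a policy: map the state to probabilities over the legal actions and sample from them.

    import numpy as np

    rng = np.random.default_rng(0)

    def knight_moves(square):
        """Legal knight moves from a (file, rank) square on an 8x8 board."""
        f, r = square
        deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
        return [(f + df, r + dr) for df, dr in deltas
                if 0 <= f + df < 8 and 0 <= r + dr < 8]

    def policy(square, preferences):
        """Softmax over per-move preferences -> a probability distribution."""
        moves = knight_moves(square)
        logits = np.array([preferences.get(m, 0.0) for m in moves])
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return moves, probs

    moves, probs = policy((1, 0), preferences={})   # knight on b1, uniform preferences
    choice = moves[rng.choice(len(moves), p=probs)]
    print(moves, probs, choice)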

Conversely, a value function says, “your position is (A,4) with value X, let’s compute the maximum possible value we can squeeze out of all options.” Furthermore, the policy used with a value function is classically a maximum – something which can be very costly in high-dimensional state- or state-action-spaces. The maximizing actually turns out to be very interesting (as points of friction always tend to be): with a value-based learner, computing the distribution to argmax over is your policy, but if we move to a policy-based learner we just compute that distribution directly.
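
A rough sketch of that contrast, with a hypothetical tabular Q and a hypothetical softmax policy (nothing here is a real library API): the value-based agent scans every action and takes the argmax, while the policy-based agent evaluates its distribution once and samples.

    import numpy as np

    n_states, n_actions = 10, 4
    rng = np.random.default_rng(0)

    Q = rng.normal(size=(n_states, n_actions))        # learned action values
    theta = rng.normal(size=(n_states, n_actions))    # learned policy parameters

    def act_value_based(s):
        # the policy is implicit: scan every action's value and take the argmax,
        # which gets expensive when the action space is large or continuous
        return int(np.argmax(Q[s]))

    def act_policy_based(s):
        # the policy is explicit: compute the distribution once and sample from it
        logits = theta[s]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(rng.choice(n_actions, p=probs))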

As an added benefit, you also wind up with better convergence properties. Value functions solve for estimated value and advantage, but it’s possible that the estimates might chatter, or oscillate, for non-linear approximators, since value can sometimes be at odds with advantage. So just throwing a neural network at a problem which doesn’t live in the gym might fall apart for no reason at all – which also motivates dueling networks (ie: maintaining separate streams for value and for advantage).
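
For reference, a tiny sketch of the dueling recombination step (shapes and names are assumptions for illustration, not anyone’s official implementation): the value and advantage streams are merged as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), which pins down the otherwise-arbitrary constant between the two streams.

    import numpy as np

    def dueling_q(value, advantage):
        """value: shape (batch, 1); advantage: shape (batch, n_actions)."""
        # subtracting the mean advantage keeps V and A identifiable
        return value + advantage - advantage.mean(axis=1, keepdims=True)

    v = np.array([[1.0], [2.0]])
    a = np.array([[0.5, -0.5, 0.0],
                  [1.0,  0.0, 2.0]])
    print(dueling_q(v, a))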


Policy gradients are pretty good at finding stochastic policies, and that’s what they were most famous for up until DDPG. Deterministic policies have become pretty popular of late as well because of DeepMind’s research showing that compatible value functions can speed up learning (ie: if the critic’s function approximator is compatible with the policy’s parameterisation, then the critic can directly influence the actor). Keep in mind, however, that deterministic policies aren’t always advisable. Imagine writing a deterministic policy for rock-paper-scissors: “since she threw rock last time, I’ll throw paper this time.” As soon as your opponent figures that out, you’re done!
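
A toy illustration of that failure mode (everything here is hypothetical, just to make the point): a deterministic “counter her last throw” rule loses every round once the opponent has modelled it, while the uniformly random policy can’t be exploited.

    import numpy as np

    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    rng = np.random.default_rng(0)

    def deterministic_policy(opponent_last):
        # "since she threw rock last time, I'll throw paper this time"
        return BEATS[opponent_last]

    def stochastic_policy(_opponent_last):
        # uniform random play: the equilibrium strategy, immune to modelling
        return rng.choice(list(BEATS))

    opp_last = "rock"
    losses = 0
    for _ in range(10):
        agent = deterministic_policy(opp_last)
        opp = BEATS[BEATS[opp_last]]       # opponent counters the predicted move
        losses += (BEATS[agent] == opp)    # opponent's move beats the agent's move
        opp_last = opp
    print(losses)  # 10 -- the deterministic agent loses every single round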

In the wild, policy gradients are sometimes very noisy, and the expectation of a noisy gradient can be very hard to estimate. Even if your model is accurate, if noise naturally exists in your problem then the variance of your gradient estimates will also be high. So instead it’s better to start off with a deterministic policy, and then adjust it to be more stochastic. The natural policy gradient finds the deterministic policy directly (though the deterministic variant only works for continuous actions). How it works is that, when we get the gradient of the Q-function, the critic can hand the entire gradient to the policy and just say, “if you adjusted along this gradient, you’d win.” This is allowed since the critic doesn’t just have the critique, but also the “true answer,” because the Q-function is compatible. In short, we are just updating actor parameters in the direction of critic parameters. See Natural Actor Critics for more.
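
A rough sketch of that compatibility idea, shown for the stochastic-policy version of the setup (a linear-softmax actor and a critic that is linear in the actor’s score features; every name here is an assumption for illustration, not a definitive implementation): the critic’s weights w are themselves the natural-gradient direction, so the actor update is just a step along w.

    import numpy as np

    rng = np.random.default_rng(0)
    n_actions, n_features = 3, 5

    theta = rng.normal(size=(n_actions, n_features))   # actor: linear-softmax policy
    w = rng.normal(size=n_actions * n_features)        # compatible critic weights

    def score(s, a):
        """grad_theta log pi(a|s) for the linear-softmax policy, flattened."""
        logits = theta @ s
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        grad = -np.outer(probs, s)
        grad[a] += s
        return grad.ravel()

    def q_hat(s, a):
        """Compatible critic: linear in the policy's own score features."""
        return score(s, a) @ w

    # Critic step: w would be fit (e.g. by TD learning) so q_hat tracks the true Q.
    # Actor step: with a compatible critic, the natural-gradient direction is w,
    # so the actor parameters move directly in the direction of the critic parameters.
    alpha = 0.1
    theta = theta + alpha * w.reshape(theta.shape)     # Delta(theta) = alpha * w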


A few other, possibly incoherent, notes: