From Information Theory to Variational Inference

Outline

Variational Inference $(\text{VI})$ is a family of methods for approximating hard-to-compute probability densities. The main idea behind VI is that a target distribution $p$ of some dataset can be estimated by introducing an approximate distribution $q$ and then iteratively minimizing the Kullback-Leibler divergence $\text{KL}(q||p)$ between $q$ and $p$. Many reinforcement learning algorithms, e.g., variational inference for policy search, aim to optimize the policy by minimizing the KL-divergence between a policy distribution and an improper reward-weighted distribution. This post discusses the following topics, which are basic but important concepts for understanding VI:

  • Information Theory
    • Information
    • Entropy
    • Kullback-Leibler divergence
  • Statistics
    • Jensen’s inequality
    • Evidence lower bound $(\text{ELBO})$
  • Graphical models
    • Bayesian Networks

and then, talk about

  • Variational Inference

1. Information Theory

Information

One of the core concepts in information theory is “information”. The amount of information contained in an event $x$ is defined formally $(\text{or mathematically})$ as

$$I(x) = \log_{b}{\frac{1}{p(x)}} = -\log_{b}{p(x)},$$

where $p(x)$ is the occurrence probability of event $x$ and $b$ is the base of the logarithm; with $b=2$ the information is measured in bits. Informally, the more likely an event is $(\text{high probability})$, the less information its occurrence conveys $(\text{less information})$.

  • For example, the probability of a die showing a particular number, e.g., 3, is $1/6$. Thus, the information of a die roll is $I(x) = \log_{2}{\frac{1}{1/6}} = \log_{2}{6}=2.58$ bits. On the other hand, the probability of a coin landing heads or tails is $1/2$, and hence, the information of a coin toss is $I(x) = \log_{2}{\frac{1}{1/2}}=1$ bit.
| Event | Probability | Information |
| --- | --- | --- |
| Coin toss | $1/2$ | $I(X)=\log_{2}{\frac{1}{1/2}}=1$ bit |
| Die roll | $1/6$ | $I(X)=\log_{2}{\frac{1}{1/6}} = 2.58$ bits |
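
The two values in the table can be reproduced with a couple of lines of Python $(\text{a small illustration, not part of the original post})$:

```python
import numpy as np

# Information content in bits: I(x) = -log2(p(x))
def information_bits(p):
    return -np.log2(p)

print(f"Coin toss (p = 1/2): {information_bits(1/2):.2f} bits")  # 1.00 bit
print(f"Die roll  (p = 1/6): {information_bits(1/6):.2f} bits")  # 2.58 bits
```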

Entropy

“Entropy” measures the average “information” of the source data. Shannon defined the entropy $(H)$ of a discrete random variable $X$ with possible values $\{x_1, x_2, x_3, \cdots, x_n\}$ and probability mass function $P(X)$ explicitly as

$$H(X) = -\sum_{i=1}^{n} P(x_i) \log_{b}{P(x_i)},$$
where $b$ is the base of the logarithm, e.g., $b=2$, $b=10$, or $b=e$.

  • Example: a discrete variable $X \in \{10, 20, 30, 40\}$ with equal probability $p(x_{i})=\frac{1}{4}$
  • Example: a continuous variable $X \in \mathbb{R}$ with the probability density function of the exponential distribution, $p(x)=\lambda e^{-\lambda x}$ $(\text{both entropies are computed in the sketch below})$
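
A small Python sketch $(\text{illustrative only})$ that works out both examples: the discrete entropy follows directly from the definition, and the differential entropy of the exponential distribution has the closed form $1 - \ln{\lambda}$ nats, which is checked numerically below for an arbitrarily chosen rate $\lambda$.

```python
import numpy as np

# Discrete case: X in {10, 20, 30, 40} with equal probability 1/4 each.
p = np.full(4, 0.25)
H = -np.sum(p * np.log2(p))
print(f"H(X) = {H:.2f} bits")  # 2.00 bits

# Continuous case: differential entropy of Exponential(lambda) is 1 - ln(lambda) nats.
lam = 2.0  # an arbitrary rate chosen for illustration
h_analytic = 1.0 - np.log(lam)

# Numerical check: Riemann sum of -p(x) * log p(x) on a fine grid (tail beyond x = 20 is negligible).
x = np.linspace(1e-6, 20.0, 200_000)
pdf = lam * np.exp(-lam * x)
h_numeric = -np.sum(pdf * np.log(pdf)) * (x[1] - x[0])
print(f"h(X) analytic = {h_analytic:.4f} nats, numeric = {h_numeric:.4f} nats")
```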

Kullback-Leibler $(\text{KL})$ divergence

The Kullback-Leibler divergence $(\text{also called relative entropy})$ was first introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions. In statistics, the KL-divergence is commonly used to measure how one probability distribution differs from a second, reference probability distribution.

For discrete probability distributions $P$ and $Q$ defined on the same probability space, the KL-divergence between $P$ and $Q$ is defined as

$$D_{KL}(P || Q) = \sum_{x} P(x) \log{\frac{P(x)}{Q(x)}}.$$

The KL-divergence can also be interpreted as the expected logarithmic difference between the two distributions, where the expectation is taken with respect to the probability $P$.

For continuous probability distributions $P$ and $Q$, the KL-divergence is defined as an integral:

$$D_{KL}(P || Q) = \int_{-\infty}^{\infty} p(x) \log{\frac{p(x)}{q(x)}} \, dx,$$

where $p$ and $q$ denote the densities of $P$ and $Q$.

  • Example: calculate the KL-divergence in the discrete domain between $p$ and $q$, and between $p$ and $t$.
  • Example: calculate the KL-divergence between two normal distributions, $p \sim \mathcal{N}(\mu_1, \sigma_1^2)$ and $q \sim \mathcal{N}(\mu_2, \sigma_2^2)$ $(\text{see the numerical sketch below})$.
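
Both exercises can be sketched in a few lines of Python. The discrete $p$ and $q$ below are made-up examples $(\text{the distributions from the first exercise are not specified here})$, and the Gaussian case uses the well-known closed form $D_{KL}(p || q) = \log{\frac{\sigma_2}{\sigma_1}} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$.

```python
import numpy as np

# Discrete KL divergence: D_KL(P || Q) = sum_x P(x) log(P(x) / Q(x))
# p and q are made-up example distributions over three outcomes.
p = np.array([0.36, 0.48, 0.16])
q = np.array([1/3, 1/3, 1/3])
kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))
print(f"D_KL(p || q) = {kl_pq:.4f} nats")
print(f"D_KL(q || p) = {kl_qp:.4f} nats")  # differs from the above: KL is asymmetric

# Closed-form KL between two univariate Gaussians N(mu1, s1^2) and N(mu2, s2^2).
def kl_gaussians(mu1, s1, mu2, s2):
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(f"D_KL(N(0,1) || N(1,2^2)) = {kl_gaussians(0.0, 1.0, 1.0, 2.0):.4f} nats")
```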

Some properties of the KL-divergence are

  • It is non-negative: $D_{KL}(P || Q) \geq 0$, with equality if and only if $P = Q$ almost everywhere,
  • It is asymmetric: $D_{KL}(P || Q) \neq D_{KL}(Q||P)$ in general,
  • It is invariant under parameter transformations $\text{(this property turns out to be very useful in machine learning and reinforcement learning, e.g., the natural gradient)}$.

2. Statistics

Jensen's inequality

Jensen’s inequality generalizes the statement that a secant line of a convex function lies above the graph of the function $(\text{Wikipedia})$. Let $f(x)$ be a real continuous function and let $X$ be a random variable. If $f$ is convex, Jensen’s inequality states that

$$f(E[X]) \leq E[f(X)],$$

and the inequality is reversed if $f$ is concave. In the domain of probability theory, if $p_1, p_2, \cdots, p_n$ are positive numbers that sum to 1 and $f(x)$ is a convex function, then

$$f\left(\sum_{i=1}^{n} p_i x_i\right) \leq \sum_{i=1}^{n} p_i f(x_i).$$

On the other hand, if $f(x)$ is a concave function, then

$$f\left(\sum_{i=1}^{n} p_i x_i\right) \geq \sum_{i=1}^{n} p_i f(x_i).$$
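
A quick numerical sanity check $(\text{an illustration, not part of the original derivation})$: draw samples of a positive random variable and compare the two sides of the inequality for a concave function, $\log{x}$, and a convex one, $x^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=1_000_000)  # arbitrary positive samples

# log is concave, so Jensen's inequality gives E[log X] <= log(E[X]).
print(f"E[log X] = {np.mean(np.log(x)):.4f}")
print(f"log E[X] = {np.log(np.mean(x)):.4f}")

# x^2 is convex, so the inequality flips: E[X^2] >= (E[X])^2.
print(f"E[X^2]   = {np.mean(x**2):.2f}")
print(f"(E[X])^2 = {np.mean(x)**2:.2f}")
```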

Evidence Lower Bound $(\text{ELBO})$

Now, let’s start from the log probability of the observed random variable $X$. Since $f(x)=\log{(x)}$ is a concave function, applying Jensen’s inequality gives

$$
\begin{aligned}
\log{p(X)} &= \log{\int_{Z} p(X, Z)\, dZ} = \log{\int_{Z} q(Z)\,\frac{p(X, Z)}{q(Z)}\, dZ} \\
&= \log{E_z\left[\frac{p(X, Z)}{q(Z)}\right]} \geq E_z\left[\log{\frac{p(X, Z)}{q(Z)}}\right] \\
&= E_z[\log{p(X, Z)}] - E_z[\log{q(Z)}] = E_z[\log{p(X, Z)}] + H(Z).
\end{aligned}
$$

We denote $L=E_z [\log p(X, Z)] + H(Z)$ as the Evidence Lower Bound $(\text{ELBO})$, where $H(Z)=- E_z [\log(q(Z))]$ is the Shannon entropy of $q$. The $q(Z)$ in the equation is a distribution used to approximate the true posterior distribution $p(Z|X)$ in VI. The gap between the evidence and the ELBO is exactly the KL-divergence, $\log p(X) = L + D_{KL}(q(Z) || p(Z|X))$, so maximizing the ELBO with respect to $q$ is equivalent to minimizing $D_{KL}(q(Z) || p(Z|X))$ and tightens the bound on the log probability. In other words, if we want to maximize the marginal probability $p(X)$, we can instead maximize its lower bound $L$.
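
To make the bound concrete, here is a minimal numerical sketch on an assumed conjugate toy model $(\text{not from the original post})$: $Z \sim \mathcal{N}(0, 1)$ and $X \mid Z \sim \mathcal{N}(Z, 1)$, for which the evidence $\log p(x)$ is available in closed form. A Monte Carlo estimate of the ELBO stays below $\log p(x)$ for a crude choice of $q$ and meets it when $q$ is the exact posterior $\mathcal{N}(x/2, 1/2)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)  # shared noise for all ELBO estimates

# Assumed toy model: Z ~ N(0, 1), X | Z ~ N(Z, 1), one observation x.
x = 1.5
log_evidence = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))  # log p(x); the marginal is N(0, 2)

def elbo(m, s):
    """Monte Carlo estimate of L = E_q[log p(x, Z)] + H(q) for q(Z) = N(m, s^2)."""
    z = m + s * eps                                                 # samples from q
    log_joint = norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, 1.0)   # log p(Z) + log p(x | Z)
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * s**2)               # H(q) for a Gaussian
    return log_joint.mean() + entropy

print(f"log p(x)                = {log_evidence:.4f}")
print(f"ELBO with q = N(0, 1)   = {elbo(0.0, 1.0):.4f}")             # strictly below log p(x)
print(f"ELBO with q = posterior = {elbo(x / 2, np.sqrt(0.5)):.4f}")  # matches log p(x) up to MC error
```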

3. Graphical Models

Probability theory plays a crucial role in modern machine learning. Graphical models provide a simple and elegant way to represent the structure of a probabilistic model and reveal insights into its properties, especially conditional independence properties. Generally, a graph consists of nodes and links, where each node represents a random variable and the links express probabilistic relationships between these variables. The graph then captures the way in which the joint distribution over all the random variables can be decomposed into a product of factors, each depending only on a subset of the variables.

Bayesian Networks

A Bayesian network is a directed graphical model that is typically used to describe probability distributions in Bayesian inference. As an example, consider a graphical model that represents the joint probability distribution over three variables $A$, $B$ and $C$. We can then write the joint distribution as a product of conditional distributions, one factor per node, each conditioned on that node's parents in the graph.
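
As a concrete illustration $(\text{a standard textbook example rather than any specific graph})$, the chain rule of probability lets us write any joint distribution over three variables as

$$p(A, B, C) = p(C \mid A, B)\, p(B \mid A)\, p(A),$$

which corresponds to a fully connected directed graph. A particular Bayesian network then drops from each factor the variables that are not parents of that node; for a chain $A \rightarrow B \rightarrow C$, for instance, the joint simplifies to $p(A)\, p(B \mid A)\, p(C \mid B)$.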

4. Variational Inference

Finally, we are now ready to introduce variational inference.

Problem Setup

Assume that $X$ are observations $(\text{data})$ and $Z$ are hidden variables; the hidden variables might also include the model “parameters”. The relationship between these two variables can be represented using a graphical model.

The goal of variational inference is to infer the hidden variables from the observations; that is, we want the posterior distribution

$$p(Z|X) = \frac{p(X, Z)}{p(X)},$$

where the joint probability $p(X, Z)$ is generally easy to compute, while the marginal probability $(\text{the evidence})$ $p(X)=\int_Z p(X,Z)\, dZ$ is intractable in most cases.
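
As a closing sketch, here is a brute-force version of variational inference on the same assumed toy model used in the ELBO example above, $Z \sim \mathcal{N}(0, 1)$ and $X \mid Z \sim \mathcal{N}(Z, 1)$: restrict $q(Z)$ to the Gaussian family $\mathcal{N}(m, s^2)$ and pick the member that maximizes the ELBO, i.e., that minimizes $\text{KL}(q||p(Z|X))$. Because the toy model is conjugate, the result can be checked against the exact posterior; a practical VI method would replace the grid search with coordinate ascent or stochastic gradient ascent on the same objective.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
eps = rng.standard_normal(50_000)  # shared noise so ELBO estimates are comparable across (m, s)

# Assumed toy model: Z ~ N(0, 1), X | Z ~ N(Z, 1), observation x.
# The exact posterior is p(Z | x) = N(x / 2, 1 / 2); we use it only to check the answer.
x = 1.5

def elbo(m, s):
    """Monte Carlo ELBO for the variational family q(Z) = N(m, s^2)."""
    z = m + s * eps
    log_joint = norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, 1.0)
    return log_joint.mean() + 0.5 * np.log(2.0 * np.pi * np.e * s**2)

# "Variational inference" by grid search: maximize the ELBO over the variational
# parameters (m, s), which is equivalent to minimizing KL(q || p(Z | x)).
grid_m = np.linspace(-2.0, 2.0, 81)
grid_s = np.linspace(0.2, 2.0, 46)
best_L, best_m, best_s = max((elbo(m, s), m, s) for m in grid_m for s in grid_s)

print(f"VI estimate:     q(Z)   = N({best_m:.2f}, {best_s:.2f}^2), ELBO = {best_L:.4f}")
print(f"Exact posterior: p(Z|x) = N({x / 2:.2f}, {np.sqrt(0.5):.2f}^2)")
```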
