0e Grad Est

$$ \nonumber \newcommand{aster}{*} \newcommand{exist}{\exists} \newcommand{B}{\mathbb B} \newcommand{C}{\mathbb C} \newcommand{I}{\mathbb I} \newcommand{N}{\mathbb N} \newcommand{Q}{\mathbb Q} \newcommand{R}{\mathbb R} \newcommand{Z}{\mathbb Z} \newcommand{eR}{\overline {\mathbb R}} \newcommand{cD}{ {\mathbb D}} \newcommand{dD}{ {\part \mathbb D}} \newcommand{dH}{ {\part \mathbb H}} \newcommand{eC}{\overline {\mathbb C}} \newcommand{A}{\mathcal A} \newcommand{D}{\mathcal D} \newcommand{E}{\mathcal E} \newcommand{F}{\mathcal F} \newcommand{G}{\mathcal G} \newcommand{H}{\mathcal H} \newcommand{J}{\mathcal J} \newcommand{L}{\mathcal L} \newcommand{U}{\mathcal U} \newcommand{M}{\mathcal M} \newcommand{O}{\mathcal O} \newcommand{P}{\mathcal P} \newcommand{S}{\mathcal S} \newcommand{T}{\mathcal T} \newcommand{V}{\mathcal V} \newcommand{W}{\mathcal W} \newcommand{X}{\mathcal X} \newcommand{Y}{\mathcal Y} \newcommand{bE}{\symbf E} \newcommand{bF}{\symbf F} \newcommand{bD}{\symbf D} \newcommand{bI}{\symbf I} \newcommand{bX}{\symbf X} \newcommand{bY}{\symbf Y} \newcommand{nz}{\mathcal Z} \newcommand{bT}{\mathbb T} \newcommand{bB}{\mathbb B} \newcommand{bS}{\mathbb S} \newcommand{bA}{\mathbb A} \newcommand{bL}{\mathbb L} \newcommand{bP}{\symbf P} \newcommand{bM}{\symbf M} \newcommand{bH}{\mathbb H} \newcommand{dd}{\mathrm d} \newcommand{Mu}{\mathup M} \newcommand{Tau}{\mathup T} \newcommand{ae}{\operatorname{a.e.}} \newcommand{aut}{\operatorname{aut}} \newcommand{adj}{\operatorname{adj}} \newcommand{char}{\operatorname{char}} \newcommand{cov}{\operatorname{Cov}} \newcommand{cl}{\operatorname{cl}} \newcommand{cont}{\operatorname{cont}} \newcommand{e}{\mathbb E} \newcommand{pp}{\operatorname{primitive}} \newcommand{dist}{\operatorname{dist}} \newcommand{diam}{\operatorname{diam}} \newcommand{fp}{\operatorname{Fp}} \newcommand{from}{\leftarrow} \newcommand{Gal}{\operatorname{Gal}} \newcommand{GCD}{\operatorname{GCD}} \newcommand{LCM}{\operatorname{LCM}} \newcommand{fg}{\mathrm{fg}} \newcommand{gf}{\mathrm{gf}} \newcommand{im}{\operatorname{Im}} \newcommand{image}{\operatorname{image}} \newcommand{inj}{\hookrightarrow} \newcommand{irr}{\operatorname{irr}} \newcommand{lcm}{\operatorname{lcm}} \newcommand{ltrieq}{\mathrel{\unlhd}} \newcommand{ltri}{\mathrel{\lhd}} \newcommand{loc}{ {\operatorname{loc}}} \newcommand{null}{\operatorname{null}} \newcommand{part}{\partial} \newcommand{pf}{\operatorname{Pf}} \newcommand{pv}{\operatorname{Pv}} \newcommand{rank}{\operatorname{rank}} \newcommand{range}{\operatorname{range}} \newcommand{re}{\operatorname{Re}} \newcommand{span}{\operatorname{span}} \newcommand{su}{\operatorname{supp}} \newcommand{sgn}{\operatorname{sgn}} \newcommand{syn}{\operatorname{syn}} \newcommand{var}{\operatorname{Var}} \newcommand{res}{\operatorname{Res}} \newcommand{data}{\operatorname{data}} \newcommand{erfc}{\operatorname{erfc}} \newcommand{erfcx}{\operatorname{erfcx}} \newcommand{tr}{\operatorname{tr}} \newcommand{col}{\operatorname{Col}} \newcommand{row}{\operatorname{Row}} \newcommand{sol}{\operatorname{Sol}} \newcommand{lub}{\operatorname{lub}} \newcommand{glb}{\operatorname{glb}} \newcommand{ltrieq}{\mathrel{\unlhd}} \newcommand{ltri}{\mathrel{\lhd}} \newcommand{lr}{\leftrightarrow} \newcommand{phat}{^\widehat{\,\,\,}} \newcommand{what}{\widehat} \newcommand{wbar}{\overline} \newcommand{wtilde}{\widetilde} \newcommand{iid}{\operatorname{i.i.d.}} \newcommand{Exp}{\operatorname{Exp}} \newcommand{abs}[1]{\left| {#1}\right|} \newcommand{d}[2]{D_{\text{KL}}\left (#1\middle\| #2\right)} 
\newcommand{n}[1]{\|#1\|} \newcommand{norm}[1]{\left\|{#1}\right\|} \newcommand{pd}[2]{\left \langle {#1},{#2} \right \rangle} \newcommand{argmax}[1]{\underset{#1}{\operatorname{argmax}}} \newcommand{argmin}[1]{\underset{#1}{\operatorname{argmin}}} \newcommand{p}[1]{\left({#1}\right)} \newcommand{c}[1]{\left \{ {#1}\right\}} \newcommand{s}[1]{\left [{#1}\right]} \newcommand{a}[1]{\left \langle{#1}\right\rangle} \newcommand{cc}[2]{\left(\begin{array}{c} #1 \\ #2 \end{array}\right)} \newcommand{f}{\mathfrak F} \newcommand{fi}{\mathfrak F^{-1}} \newcommand{Fi}{\mathcal F^{-1}} \newcommand{l}{\mathfrak L} \newcommand{li}{\mathfrak L^{-1}} \newcommand{Li}{\mathcal L^{-1}} \newcommand{const}{\text{const.}} \newcommand{Int}{\operatorname{Int}} \newcommand{Ext}{\operatorname{Ext}} \newcommand{Bd}{\operatorname{Bd}} \newcommand{Cl}{\operatorname{Cl}} \newcommand{Iso}{\operatorname{Iso}} \newcommand{Lim}{\operatorname{Lim}} \newcommand{src}{\operatorname{src}} \newcommand{tgt}{\operatorname{tgt}} \newcommand{input}{\operatorname{input}} \newcommand{output}{\operatorname{output}} \newcommand{weight}{\operatorname{weight}} \newcommand{paths}{\operatorname{paths}} \newcommand{init}{\bullet} \newcommand{fin}{\circledcirc} \newcommand{advance}{\operatorname{advance}} \newcommand{di}[2]{\frac{\part}{\part {#1}^{#2}}} $$

Gradient Estimation #

STA 4273 Winter 2021: Minimizing Expectations, Chris Maddison, University of Toronto

Expectation minimization #

Suppose $(O, \O, \mu)$ is a measure space, and let $\mathcal Q$ be a set of probability density functions on $O$ with respect to $\mu$.

Suppose $f: O \times \mathcal Q \to \R$ is a function.

Let $X \sim q$ for $q \in \mathcal Q$. We are interested in optimization problems of the form $\inf _ {q \in \mathcal Q} E\s{f(X, q)}$, where the expectation is taken with $X \sim q$ for each candidate $q$.

This type of optimization is a recurring pattern in machine learning; one standard instance is sketched below.
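
For example (a standard instance, filled in here for concreteness rather than taken from the original notes): in variational inference over a latent-variable model $p(x, z)$ with observed data $x$, taking $f(z, q) = \log q(z) - \log p(x, z)$ turns the objective into the negative evidence lower bound, $$ E\s{\log q(Z) - \log p(x, Z)} = \d{q}{p(\cdot \mid x)} - \log p(x), \qquad Z \sim q, $$ so minimizing over $q \in \mathcal Q$ pushes $q$ toward the posterior $p(\cdot \mid x)$.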

Gradient estimation #

Continuing the discussion above, suppose $f: O \times \R^D \to \R$ is a function.

Suppose $\mathcal Q = \c{q _ \theta\mid \theta \in \R^D}$ is a parametric family, and consider $J(\theta) := E\s{f(X, \theta)}$, where $X \sim q _ \theta$.

A gradient estimator is a random variable $G(\theta) \in \L(\Omega\to \R^D, \F)$ intended to approximate $\nabla _ \theta J(\theta)$.

  • If $E[G(\theta)] = \nabla _ \theta J(\theta)$, the estimator is called unbiased.
  • $E\norm{G(\theta) - \nabla _ \theta J(\theta)}^2 _ 2$ is called the variance of the estimator; strictly speaking it is the mean squared error, which equals the total variance precisely when the estimator is unbiased (see the decomposition below).
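
To make the last point explicit (a standard bias-variance decomposition, not spelled out in the original): $$ E\norm{G(\theta) - \nabla _ \theta J(\theta)} _ 2^2 = \norm{E\s{G(\theta)} - \nabla _ \theta J(\theta)} _ 2^2 + E\norm{G(\theta) - E\s{G(\theta)}} _ 2^2, $$ i.e. mean squared error equals squared bias plus total variance, so the two notions coincide exactly for unbiased estimators.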

Pathwise gradient estimator #

Consider $J(\theta) = E\s{f(X, \theta)}$ following previous assumptions.

  • Suppose there exists some random variable $R \in \L(\Omega \to E, \F/\E)$ taking values in a measurable space $(E, \E)$.
  • Suppose there exists a function $g: E \times \R^D \to O$ such that $g(R, \theta) \sim q _ \theta$ for every $\theta$.

Now suppose the exchange of derivative and integral is allowed; then $$ \nabla _ \theta E\s{f(X, \theta)} = \nabla _ \theta E\s{f(g(R, \theta), \theta)} = E\s{\nabla _ \theta f(g(R, \theta), \theta)} $$ Taking $G(\theta) = \nabla _ \theta f(g(R, \theta), \theta)$ gives the pathwise gradient estimator; averaging i.i.d. copies of it gives the corresponding Monte Carlo estimator.
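
A minimal runnable sketch (a toy example of my own, not from the notes): take $q _ \theta = \mathcal N(\theta, 1)$, $f(x, \theta) = x^2$, and $g(r, \theta) = \theta + r$, so $J(\theta) = \theta^2 + 1$ and $\nabla _ \theta J(\theta) = 2\theta$.

```python
import jax
import jax.numpy as jnp

# Toy setup (an assumption for illustration): q_theta = N(theta, 1),
# f(x, theta) = x**2, reparameterized as x = g(r, theta) = theta + r
# with r ~ N(0, 1). Then J(theta) = theta**2 + 1 and grad J = 2 * theta.

def f(x, theta):
    return x ** 2  # f happens not to depend on theta here

def g(r, theta):
    return theta + r  # g(R, theta) ~ q_theta when R ~ N(0, 1)

def pathwise_grad(key, theta, n=10_000):
    r = jax.random.normal(key, (n,))
    # Monte Carlo average of G(theta) = grad_theta f(g(R, theta), theta).
    return jax.grad(lambda th: jnp.mean(f(g(r, th), th)))(theta)

print(pathwise_grad(jax.random.PRNGKey(0), 1.5))  # close to 2 * 1.5 = 3.0
```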

Gaussian reparameterization #

For $X \sim q _ \theta = \mathcal N(\mu _ \theta, A _ \theta A^ * _ \theta)$, take $R \sim \mathcal N(0, I)$ and notice that $A _ \theta R + \mu _ \theta \sim q _ \theta$, so $g(r, \theta) = A _ \theta r + \mu _ \theta$ is a valid reparameterization.
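
In code (a sketch under the obvious assumptions; `mu` and `A` stand for $\mu _ \theta$ and $A _ \theta$):

```python
import jax
import jax.numpy as jnp

def sample_mvn(key, mu, A):
    # R ~ N(0, I); A @ R + mu is distributed as N(mu, A @ A.T),
    # i.e. a draw from q_theta that is differentiable in mu and A.
    r = jax.random.normal(key, (A.shape[1],))
    return A @ r + mu

mu = jnp.array([0.0, 1.0])
A = jnp.array([[1.0, 0.0], [0.5, 2.0]])
x = sample_mvn(jax.random.PRNGKey(1), mu, A)
```

Because the sample is a differentiable function of $\mu _ \theta$ and $A _ \theta$, `jax.grad` can differentiate any downstream loss with respect to them.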

Score function gradient estimator #

Consider $J(\theta) = E\s{f(X, \theta)}$ following previous assumptions. Assuming we may differentiate under the integral sign, and using the identity $\nabla _ \theta q _ \theta(x) = q _ \theta(x) \nabla _ \theta \log q _ \theta(x)$: $$ \begin{aligned} \nabla _ \theta J(\theta) & = \nabla _ \theta E \s{f(X, \theta)} = \nabla _ \theta \int f(x, \theta) q _ \theta(x) \dd x\\ & = \int \nabla _ \theta f(x, \theta) q _ \theta(x) \dd x + \int f(x, \theta) \nabla _ \theta \log q _ \theta(x) \, q _ \theta(x) \dd x\\ & = E\s{\nabla _ \theta f(X, \theta)} + E\s{f(X, \theta) \nabla _ \theta \log q _ \theta(X)} \end{aligned} $$ The resulting estimator is $G(\theta) = \nabla _ \theta f(X, \theta) + f(X, \theta) \nabla _ \theta \log q _ \theta(X)$ with $X \sim q _ \theta$.

  • $\nabla _ \theta \log q _ \theta(x)$ is also called the score function.
  • Notice that the second term takes the gradient of $\log q _ \theta$ with the sample held fixed, so no derivative passes through the sampling of $X$; in particular, $f$ need not be differentiable in $x$. A runnable sketch follows below.
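
A minimal runnable sketch on the same toy problem as before (my own example, not from the notes): $q _ \theta = \mathcal N(\theta, 1)$ and $f(x) = x^2$, so the $E\s{\nabla _ \theta f}$ term vanishes and the true gradient is $2\theta$.

```python
import jax
import jax.numpy as jnp

def log_q(x, theta):
    # Log-density of N(theta, 1) up to a constant not depending on theta.
    return -0.5 * (x - theta) ** 2

def score_grad(key, theta, n=100_000):
    x = theta + jax.random.normal(key, (n,))  # draws from q_theta
    # Score: grad_theta log q_theta(x), evaluated per sample with x fixed.
    score = jax.vmap(jax.grad(log_q, argnums=1), in_axes=(0, None))(x, theta)
    # The E[grad_theta f] term vanishes since f(x) = x**2 is theta-free.
    return jnp.mean(x ** 2 * score)

print(score_grad(jax.random.PRNGKey(2), 1.5))  # close to 3.0, but noisier
```

On this example the score-function estimate is visibly noisier than the pathwise one at the same sample size, which is the usual trade-off between the two estimators.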