In deep learning, the gap between the model distribution and the true distribution is usually estimated by drawing samples from the model and computing a loss against real samples, and that loss can be defined very simply, for example as a squared 2-norm. But for two fully specified distributions with known parameters, the difference has to be worked out analytically.
Below is a derivation of the KL divergence between two multivariate Gaussian distributions with known means and covariance matrices. The Wasserstein distance may well be a better way to measure the difference between two distributions, since it behaves better when fitting one distribution to another, but it is harder to handle, so I will write it up another time.
First, define two $n$-dimensional Gaussian distributions:
$\begin{aligned} &p(x) = \frac{1}{(2\pi)^{0.5n}|\Sigma|^{0.5}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)\\ &q(x) = \frac{1}{(2\pi)^{0.5n}|L|^{0.5}}\exp\left(-\frac{1}{2}(x-m)^T L^{-1}(x-m)\right)\\ \end{aligned}$
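To make the later steps easy to check numerically, here is a minimal NumPy/SciPy sketch that instantiates a concrete $p$ and $q$ for a small $n$. The variable names (`mu`, `Sigma`, `m`, `L`) and the random parameter choices are purely illustrative and are reused by the snippets further down; they are not part of the derivation itself.

```python
# Illustrative sketch: concrete instances of p and q for a small n.
import numpy as np
from scipy.stats import multivariate_normal

n = 3
rng = np.random.default_rng(0)

mu = rng.normal(size=n)                # mean of p
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)        # SPD covariance of p

m = rng.normal(size=n)                 # mean of q
B = rng.normal(size=(n, n))
L = B @ B.T + n * np.eye(n)            # SPD covariance of q

p = multivariate_normal(mean=mu, cov=Sigma)
q = multivariate_normal(mean=m, cov=L)

x = rng.normal(size=n)
print(p.pdf(x), q.pdf(x))              # densities p(x), q(x) at one point
```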
The quantity to compute is:
$\begin{aligned} \text{KL}(p||q) = \text{E}_p\left(\log\frac{p(x)}{q(x)}\right) \end{aligned}$
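This expectation can also be estimated by brute force, which gives a reference value that the closed-form result at the end should match. A rough Monte Carlo estimate, continuing the sketch above (and assuming a reasonably recent SciPy for the frozen-distribution `rvs`/`logpdf` calls):

```python
# Monte Carlo estimate of KL(p||q) = E_p[log p(x) - log q(x)], sampling from p.
samples = p.rvs(size=200_000, random_state=rng)
kl_mc = np.mean(p.logpdf(samples) - q.logpdf(samples))
print("Monte Carlo KL(p||q) ≈", kl_mc)
```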
For clarity, the derivation is done step by step. First:
$\begin{aligned} \frac{p(x)}{q(x)} &= \frac {\frac{1}{(2\pi)^{0.5n}|\Sigma|^{0.5}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)} {\frac{1}{(2\pi)^{0.5n}|L|^{0.5}}\exp\left(-\frac{1}{2}(x-m)^T L^{-1}(x-m)\right)}\\ &=\left(\frac{|L|}{|\Sigma|}\right)^{0.5}\exp\left(\frac{1}{2}(x-m)^T L^{-1}(x-m) -\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \end{aligned}$
Taking the logarithm:
$\begin{aligned} \log\frac{p(x)}{q(x)} &= \frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu) \end{aligned}$
Then taking the expectation under $p$:
$\begin{aligned} \text{E}_p\log\frac{p(x)}{q(x)} &=\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\left[\frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]\\ &=\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\text{Tr}\left[\frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]\\ \end{aligned}$
The second step holds because the bracketed expression is a scalar, so it can be rewritten as a trace. Next, by the cyclic invariance of the trace (i.e. $\text{Tr}(AB)=\text{Tr}(BA)$):
$\begin{align} &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\text{Tr} \left[ \frac{1}{2}L^{-1}(x-m)(x-m)^T - \frac{1}{2}\Sigma^{-1}(x-\mu)(x-\mu)^T \right]\\ = &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}\text{E}_p\text{Tr} \left(L^{-1}(x-m)(x-m)^T\right) - \frac{1}{2}\text{E}_p\text{Tr} \left(\Sigma^{-1}(x-\mu)(x-\mu)^T\right) \\ = &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}\text{E}_p\text{Tr} \left(L^{-1}(x-m)(x-m)^T\right) - \frac{n}{2} \end{align}$
The last term follows because the expectation and the trace can be interchanged, and the expectation of $(x-\mu)(x-\mu)^T$ under $p$ is exactly the covariance matrix $\Sigma$, so $\Sigma^{-1}\Sigma$ is the $n\times n$ identity matrix, whose trace is $n$.
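This step is easy to verify numerically: under $p$, the expected Mahalanobis distance $(x-\mu)^T\Sigma^{-1}(x-\mu)$ should be close to $n$. A small check, reusing the samples drawn above:

```python
# E_p[(x-mu)^T Sigma^{-1} (x-mu)] = Tr(Sigma^{-1} Sigma) = Tr(I) = n
diff = samples - mu
maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
print(maha.mean(), "≈", n)
```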
Next, take out the middle term and work on it separately:
$\begin{align} &\frac{1}{2}\text{E}_p\text{Tr} \left(L^{-1}(x-m)(x-m)^T\right)\\ =&\frac{1}{2}\text{Tr}\left(L^{-1}\text{E}_p \left(xx^T-xm^T-mx^T+mm^T \right) \right) \\ =&\frac{1}{2}\text{Tr}\left(L^{-1} \left(\Sigma +\mu\mu^T-2\mu m^T+mm^T \right) \right) \end{align}$
In the last step the cross terms $\text{E}_p(xm^T)=\mu m^T$ and $\text{E}_p(mx^T)=m\mu^T$ were combined into $-2\mu m^T$, which is legitimate inside the trace because $L^{-1}$ is symmetric, so $\text{Tr}(L^{-1}\mu m^T)=\text{Tr}(L^{-1}m\mu^T)$. The remaining identity $\text{E}_p(xx^T) = \Sigma + \mu\mu^T$ is derived as follows:
$\begin{aligned} \Sigma &= \text{E}_p\left[(x-\mu)(x-\mu)^T\right]\\ &= \text{E}_p\left(xx^T-x\mu^T-\mu x^T+\mu\mu^T\right)\\ &= \text{E}_p\left(xx^T\right)-2\text{E}_p\left(x\mu^T\right)+\mu\mu^T \\ &= \text{E}_p\left(xx^T\right)-\mu\mu^T \\ \end{aligned}$
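A quick numerical check of this identity, again with the samples from the sketch above (the match is only up to Monte Carlo error):

```python
# E_p[x x^T] should be close to Sigma + mu mu^T
second_moment = samples.T @ samples / len(samples)
print(np.max(np.abs(second_moment - (Sigma + np.outer(mu, mu)))))  # small
```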
Next, continue from equation $(6)$:
$\begin{aligned} &\frac{1}{2}\text{Tr}\left(L^{-1} \left(\Sigma +\mu\mu^T-2\mu m^T+mm^T \right) \right) \\ = &\frac{1}{2}\text{Tr}\left(L^{-1}\Sigma +L^{-1} (\mu-m)(\mu-m)^T \right) \\ = &\frac{1}{2}\text{Tr}\left(L^{-1}\Sigma\right)+ \frac{1}{2}(\mu-m)^T L^{-1}(\mu-m) \\ \end{aligned}$
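Here the cross terms are folded into $(\mu-m)(\mu-m)^T$, which again relies on the symmetry of $L^{-1}$. Continuing the same sketch, the two sides of this step can be checked directly:

```python
# Tr(L^{-1}(Sigma + mu mu^T - 2 mu m^T + m m^T))
# should equal Tr(L^{-1} Sigma) + (mu-m)^T L^{-1} (mu-m)
Linv = np.linalg.inv(L)
lhs = np.trace(Linv @ (Sigma + np.outer(mu, mu) - 2 * np.outer(mu, m) + np.outer(m, m)))
rhs = np.trace(Linv @ Sigma) + (mu - m) @ Linv @ (mu - m)
print(lhs, "≈", rhs)
```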
Finally, substituting back into equation $(3)$ gives the final result:
$\begin{aligned} \text{E}_p\log\frac{p(x)}{q(x)} =&\frac{1}{2}\left\{ \log\frac{|L|}{|\Sigma|}+ \text{Tr}\left(L^{-1}\Sigma\right)+ (\mu-m)^T L^{-1}(\mu-m) - n \right\} \end{aligned}$
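As a final sanity check, here is a small implementation of this closed form; the helper name `gaussian_kl` is mine, and `slogdet`/`solve` are used instead of explicit determinants and inverses purely for numerical stability. The result should agree with the Monte Carlo estimate from earlier up to sampling error.

```python
# Closed-form KL(p||q) for Gaussians p = N(mu, Sigma), q = N(m, L).
def gaussian_kl(mu, Sigma, m, L):
    n = mu.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_L = np.linalg.slogdet(L)
    diff = mu - m
    return 0.5 * (logdet_L - logdet_Sigma
                  + np.trace(np.linalg.solve(L, Sigma))   # Tr(L^{-1} Sigma)
                  + diff @ np.linalg.solve(L, diff)       # (mu-m)^T L^{-1} (mu-m)
                  - n)

print("closed form:", gaussian_kl(mu, Sigma, m, L))
print("Monte Carlo:", kl_mc)   # should agree up to Monte Carlo error
```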