AML HW2

Posted by Blackteaxx on 2024-05-14

1. VC Dimension

2. Graph-Based Semi-Supervised Learning

[image: problem statement]

The regularization framework above consists of two parts:

  1. Propagation (smoothness) term: \(\sum_{i,j} W_{ij} || \frac{1}{\sqrt{d_i}}F_i - \frac{1}{\sqrt{d_j}}F_j||^2\). This term encourages similar nodes to receive the same label: \(W_{ij}\) is the similarity between nodes \(i\) and \(j\), and \(F\) is the label matrix; minimizing the squared differences pushes strongly connected nodes toward the same label.
  2. Fitting term: \(\sum_{i=1}^l ||F_i - Y_i||^2\). This term keeps the predictions on the \(l\) labeled nodes close to their given labels during propagation, preserving accuracy on the labeled data.

By balancing these two terms, the framework infers labels for the unlabeled nodes: the propagation term spreads label information between similar nodes in the graph, while the fitting term keeps the model's predictions accurate on the nodes whose labels are known.
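As a concrete illustration (not part of the original assignment), this kind of objective is commonly minimized by the iterative update \(F \leftarrow \alpha S F + (1-\alpha) Y\) with \(S = D^{-1/2} W D^{-1/2}\); the graph, labels, and \(\alpha\) below are all illustrative:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, n_iter=500):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized similarity.

    W : (n, n) symmetric similarity matrix with positive degrees
    Y : (n, c) one-hot labels for labeled nodes, zero rows for unlabeled
    """
    d = W.sum(axis=1)                      # node degrees d_i
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# Two weakly linked clusters; one labeled node per cluster.
W = np.array([[0.0, 1.0, 0.05, 0.0],
              [1.0, 0.0, 0.0, 0.05],
              [0.05, 0.0, 0.0, 1.0],
              [0.0, 0.05, 1.0, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
F = label_propagation(W, Y)
pred = F.argmax(axis=1)  # unlabeled nodes inherit their cluster's label
```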

3. Gaussian Mixture Models

[image: problem statement]

Problem 3.1

The Gaussian mixture model:

\[p(x) = \sum_{k=1}^{K} p(z = k) \mathcal{N}(x | \mu_k, \Sigma_k) \]

Since \(\mathcal{N}(x | \mu_k, \Sigma_k)\) is uniquely determined by \(k\), it can be written as \(p(x|z = k)\), giving

\[p(x) = \sum_{k=1}^{K} p(z = k) p(x|z = k) \]

As a probabilistic graphical model, this can be represented as:
[image: graphical model with latent \(z\) pointing to observed \(x\)]
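The mixture density above can be evaluated directly; a minimal NumPy sketch (the component parameters in the check are illustrative):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Density of N(x | mu, Sigma) at a single point x in R^d."""
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(Sigma)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def gmm_density(x, pis, mus, Sigmas):
    """p(x) = sum_k p(z = k) N(x | mu_k, Sigma_k)."""
    return sum(pi * gaussian_pdf(x, mu, S)
               for pi, mu, S in zip(pis, mus, Sigmas))

# Single standard-normal component: p(0) = 1 / sqrt(2 pi).
p0 = gmm_density(np.array([0.0]), [1.0],
                 [np.array([0.0])], [np.array([[1.0]])])
```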

Problem 3.2

The steps of the EM algorithm:

  1. E-step: compute \(\mathbb{E}_{p(z|x;\theta^t)}[\log p(x,z;\theta)]\)
  2. M-step: solve \(\theta^{t+1} = \arg \max_{\theta} \mathbb{E}_{p(z|x;\theta^t)}[\log p(x,z;\theta)]\)

Assume the sample set is \(\{x_1, x_2, \cdots, x_N\}\), with the pairs \((x_i, z_i)\) mutually independent.

In the GMM, we define \(p(x,z;\theta)\):

\[p(x,z;\theta) = \prod_{i=1}^N p(x_i,z_i;\theta) = \prod_{i=1}^N p(z_i) p(x_i|z_i) = \prod_{i=1}^N \pi_{z_i} \mathcal{N}(x_i|\mu_{z_i}, \Sigma) \]

and define \(p(z|x;\theta)\):

\[p(z|x; \theta) = \prod_{i=1}^N p(z_i|x_i;\theta) \]

\[p(z_i|x_i;\theta) = \frac{p(x_i|z_i)p(z_i)}{\sum_{k=1}^K p(x_i|z_i=k)p(z_i=k)} \]
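The posterior above, often called the responsibility \(\gamma_{ik} = p(z_i = k | x_i; \theta^t)\), can be computed in vectorized form; a sketch assuming a shared covariance \(\Sigma\), as used in this derivation:

```python
import numpy as np

def responsibilities(X, pis, mus, Sigma):
    """gamma[i, k] = pi_k N(x_i | mu_k, Sigma) / sum_j pi_j N(x_i | mu_j, Sigma)."""
    N, d = X.shape
    K = len(pis)
    inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    gamma = np.zeros((N, K))
    for k in range(K):
        diff = X - mus[k]
        # Row-wise quadratic form (x_i - mu_k)^T Sigma^{-1} (x_i - mu_k)
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
        log_pdf = -0.5 * (quad + logdet + d * np.log(2 * np.pi))
        gamma[:, k] = pis[k] * np.exp(log_pdf)
    gamma /= gamma.sum(axis=1, keepdims=True)  # normalize over components
    return gamma

# Two well-separated 1-D components; points at the means are assigned confidently.
X = np.array([[0.0], [10.0]])
gamma = responsibilities(X, [0.5, 0.5],
                         [np.array([0.0]), np.array([10.0])],
                         np.array([[1.0]]))
```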

E-step:

\[\mathbb{E}_{p(z|x;\theta^t)}[\log p(x,z;\theta)] = \sum_{z_1 = 1}^K \cdots \sum_{z_N = 1}^K \left( \prod_{j=1}^N p(z_j|x_j;\theta^t) \right) \log \prod_{i=1}^N \pi_{z_i} \mathcal{N}(x_i|\mu_{z_i}, \Sigma) = \\ \sum_{z_1 = 1}^K \cdots \sum_{z_N = 1}^K \left( \prod_{j=1}^N p(z_j|x_j;\theta^t) \right) \sum_{i=1}^N \left( \log \pi_{z_i} + \log \mathcal{N}(x_i|\mu_{z_i}, \Sigma) \right) = \\ \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t) \left( \log \pi_{k} + \log \mathcal{N}(x_i|\mu_{k}, \Sigma) \right) \]

where the last step holds because, for each \(i\), summing \(\prod_{j \neq i} p(z_j|x_j;\theta^t)\) over all \(z_j\) with \(j \neq i\) gives 1.

M-step:

\[(\pi^{t+1}, \mu^{t+1}, \Sigma^{t+1}) = \\ \arg \max_{\pi, \mu, \Sigma} \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t)(\log \pi_{k} + \log \mathcal{N}(x_i|\mu_{k}, \Sigma)) \iff \\ \arg \max_{\pi, \mu, \Sigma} \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t)\log \pi_{k} + \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t) \log \mathcal{N}(x_i|\mu_{k}, \Sigma) \]


For \(\pi^{t+1}\):

\[\pi^{t+1} = \arg \max_{\pi} \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t)\log \pi_{k} \]

Since \(\sum_{k=1}^K \pi_k = 1\), we solve with a Lagrange multiplier:

\[\mathcal{L}(\pi, \lambda) = \sum_{k=1}^K \sum_{i=1}^N p(z=k|x_i;\theta^t) \log \pi_k + \lambda(1 - \sum_{k=1}^K \pi_k) \]

\(\pi_k\)求導,令導數為 0,即

\[\frac{\partial \mathcal{L}(\pi, \lambda)}{\partial \pi_k} = \sum_{i=1}^N p(z=k|x_i;\theta^t) \frac{1}{\pi_k} - \lambda = 0 \Rightarrow \pi_k^{t+1} = \frac{\sum_{i=1}^N p(z=k|x_i;\theta^t)}{\lambda} = \\ \frac{\sum_{i=1}^N p(z=k|x_i;\theta^t)}{\sum_{k=1}^K \sum_{i=1}^N p(z=k|x_i;\theta^t)} = \frac{\sum_{i=1}^N p(z=k|x_i;\theta^t)}{N} \]

where \(\lambda = \sum_{k=1}^K \sum_{i=1}^N p(z=k|x_i;\theta^t) = N\), obtained by summing the stationarity condition over \(k\).
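The resulting update is just the mean responsibility per component; a one-line sketch (with `gamma` the \(N \times K\) responsibility matrix):

```python
import numpy as np

def update_pi(gamma):
    """pi_k^{t+1} = (1/N) sum_i gamma[i, k]; gamma is the (N, K) responsibility matrix."""
    return gamma.mean(axis=0)

pi_new = update_pi(np.array([[0.2, 0.8],
                             [0.6, 0.4]]))
```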


For \(\mu^{t+1}\):

\[\mu^{t+1} = \arg \max_{\mu} \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t) \log \mathcal{N}(x_i|\mu_{k}, \Sigma) \]

Since the components of \(\mu\) are decoupled, each \(\mu_k\) can be solved separately:

\[\mu_k^{t+1} = \arg \max_{\mu_k} \sum_{i=1}^N p(z=k|x_i;\theta^t) \log \mathcal{N}(x_i|\mu_{k}, \Sigma) = \\ \arg \max_{\mu_k} \sum_{i=1}^N p(z=k|x_i;\theta^t) \left( -\frac{d}{2} \log 2\pi - \frac{1}{2} \log |\Sigma| - \frac{1}{2}(x_i - \mu_k)^T \Sigma^{-1} (x_i - \mu_k) \right) \]

\(\mu_k\)求導,令導數為 0,即

\[\frac{\partial}{\partial \mu_k} \sum_{i=1}^N p(z=k|x_i;\theta^t) \left( -\frac{d}{2} \log 2\pi - \frac{1}{2} \log |\Sigma| - \frac{1}{2}(x_i - \mu_k)^T \Sigma^{-1} (x_i - \mu_k) \right) = 0 \Rightarrow \\ \sum_{i=1}^N p(z=k|x_i;\theta^t) \Sigma^{-1} (x_i - \mu_k) = 0 \Rightarrow \\ \mu_k^{t+1} = \frac{\sum_{i=1}^N p(z=k|x_i;\theta^t) x_i}{\sum_{i=1}^N p(z=k|x_i;\theta^t)} \]
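This weighted-mean update can be written compactly (again with `gamma` the \(N \times K\) responsibility matrix):

```python
import numpy as np

def update_mu(X, gamma):
    """mu_k^{t+1} = sum_i gamma_ik x_i / sum_i gamma_ik, for all k at once.

    X : (N, d) data, gamma : (N, K) responsibilities; returns (K, d) means.
    """
    Nk = gamma.sum(axis=0)                 # effective count per component
    return (gamma.T @ X) / Nk[:, None]     # responsibility-weighted means

# Hard assignments recover the per-cluster sample means.
mus = update_mu(np.array([[0.0], [2.0]]),
                np.array([[1.0, 0.0], [0.0, 1.0]]))
```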


For \(\Sigma^{t+1}\):

\[\Sigma^{t+1} = \arg \max_{\Sigma} \sum_{i=1}^N \sum_{k = 1}^K p(z=k|x_i;\theta^t) \log \mathcal{N}(x_i|\mu_{k}, \Sigma) \]

\(\Sigma\)求導,令導數為 0,即

\[\frac{\partial}{\partial \Sigma} \sum_{i=1}^N \sum_{k=1}^K p(z=k|x_i;\theta^t) \left( -\frac{d}{2} \log 2\pi - \frac{1}{2} \log |\Sigma| - \frac{1}{2}(x_i - \mu_k)^T \Sigma^{-1} (x_i - \mu_k) \right) = 0 \Rightarrow \\ \sum_{i=1}^N \sum_{k=1}^K p(z=k|x_i;\theta^t) \left( -\frac{1}{2} \Sigma^{-1} + \frac{1}{2} \Sigma^{-1} (x_i - \mu_k) (x_i - \mu_k)^T \Sigma^{-1} \right) = 0 \Rightarrow \\ \Sigma^{t+1} = \frac{\sum_{i=1}^N \sum_{k=1}^K p(z=k|x_i;\theta^t) (x_i - \mu_k) (x_i - \mu_k)^T}{\sum_{i=1}^N \sum_{k=1}^K p(z=k|x_i;\theta^t)} = \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K p(z=k|x_i;\theta^t) (x_i - \mu_k) (x_i - \mu_k)^T \]
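A sketch of this shared-covariance update, accumulating the responsibility-weighted scatter over all components:

```python
import numpy as np

def update_sigma(X, gamma, mus):
    """Shared covariance: Sigma^{t+1} = (1/N) sum_i sum_k gamma_ik (x_i - mu_k)(x_i - mu_k)^T.

    X : (N, d) data, gamma : (N, K) responsibilities, mus : (K, d) current means.
    """
    N, d = X.shape
    Sigma = np.zeros((d, d))
    for k in range(gamma.shape[1]):
        diff = X - mus[k]                          # (N, d) deviations from mu_k
        Sigma += (gamma[:, k, None] * diff).T @ diff
    return Sigma / N

# Two 1-D points hard-assigned to two components sharing mean 1.0:
# each contributes a squared deviation of 1, so Sigma = [[1.0]].
Sigma_new = update_sigma(np.array([[0.0], [2.0]]),
                         np.array([[1.0, 0.0], [0.0, 1.0]]),
                         np.array([[1.0], [1.0]]))
```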