CRF as RNN Code Walkthrough

Posted by thesby on 2016-06-06

Paper: http://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf
The code accompanying the CRF as RNN paper can be found at https://github.com/torrvision/crfasrnn.
An online demo is available at http://www.robots.ox.ac.uk/~szheng/crfasrnndemo.

This post records my reading of the MultiStageMeanfieldLayer in CRF as RNN. The files involved are the header and implementation of multi_stage_meanfield, and the header and implementation of meanfield.

The code is based on an old version of caffe, in which most layer declarations live in vision_layers.hpp.
The relevant entries there are class MultiStageMeanfieldLayer and class MeanfieldIteration, and they are fairly simple: MultiStageMeanfieldLayer is the actual layer, while MeanfieldIteration is a helper class, so we can go straight to the implementation.
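
For orientation, the declarations in vision_layers.hpp look roughly like the skeleton below. This is a simplified sketch rather than the verbatim header: only a subset of the members is listed (roughly the ones that show up in the walkthrough below), and the exact types and access specifiers may differ in the real file.

// Simplified sketch of the two declarations in vision_layers.hpp
// (member list trimmed; exact types may differ in the real header).
template <typename Dtype>
class MultiStageMeanfieldLayer : public Layer<Dtype> {
 public:
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);

  void compute_spatial_kernel(float* const output_kernel);
  void compute_bilateral_kernel(const Blob<Dtype>* const rgb_blob, const int n,
      float* const output_kernel);

  int num_iterations_, num_, num_pixels_, width_;                  // sizes
  float theta_alpha_, theta_beta_, theta_gamma_;                   // kernel widths (alpha/beta: bilateral, gamma: spatial)
  shared_ptr<ModifiedPermutohedral> spatial_lattice_;              // one spatial lattice, shared
  vector<shared_ptr<ModifiedPermutohedral> > bilateral_lattices_;  // one bilateral lattice per image
  Blob<Dtype> bilateral_norms_;                                    // bilateral normalization factors
  shared_ptr<SplitLayer<Dtype> > split_layer_;                     // copies the unaries to each iteration
  vector<Blob<Dtype>*> split_layer_bottom_vec_, split_layer_top_vec_;
  vector<shared_ptr<MeanfieldIteration<Dtype> > > meanfield_iterations_;
};

// Helper class implementing a single mean-field update.
template <typename Dtype>
class MeanfieldIteration {
 public:
  // PrePass hands in the layer parameters plus the freshly built bilateral
  // lattices and their normalization factors before each forward pass.
  void PrePass(const vector<shared_ptr<Blob<Dtype> > >& parameters_to_copy_from,
      const vector<shared_ptr<ModifiedPermutohedral> >* bilateral_lattices,
      const Blob<Dtype>* bilateral_norms);
  void Forward_cpu();
  void Backward_cpu();
  vector<shared_ptr<Blob<Dtype> > >& blobs();  // this iteration's copy of the parameters
};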

The setup entry point of the layer is LayerSetUp. The first part is plain member-variable initialization, followed by reading spatial.par and bilateral.par. Then the spatial kernel is computed by a direct call to compute_spatial_kernel():

template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::compute_spatial_kernel(float* const output_kernel) {

  for (int p = 0; p < num_pixels_; ++p) {
    output_kernel[2*p] = static_cast<float>(p % width_) / theta_gamma_;
    output_kernel[2*p + 1] = static_cast<float>(p / width_) / theta_gamma_;
  }
}

The function is simple: it fills a buffer holding two floats per pixel with (column / theta_gamma_, row / theta_gamma_), the feature vector of the spatial kernel.
LayerSetUp then initializes spatial_lattice_ with this kernel, allocates the memory needed later for the unary terms and, because mean-field inference has to be run several times, sets up one MeanfieldIteration per iteration. At that point the layer is ready.
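
To make that concrete, the rest of LayerSetUp looks roughly like the sketch below. This is a paraphrase, not the verbatim source: the per-iteration setup call at the end is abbreviated to a comment because its exact name and argument list are not shown in this post.

// Rough sketch of the remaining LayerSetUp steps described above.
std::vector<float> spatial_kernel(2 * num_pixels_);
compute_spatial_kernel(&spatial_kernel[0]);

// A single spatial lattice is shared by all images and all iterations,
// since the (x, y) features do not depend on the image content.
spatial_lattice_.reset(new ModifiedPermutohedral());
spatial_lattice_->init(&spatial_kernel[0], 2, num_pixels_);  // 2 features per pixel

// One MeanfieldIteration per mean-field iteration. Each is wired so that the
// softmax input of iteration i is the output of iteration i-1 (bottom[1] for
// the first one) and the output of the last iteration goes to top[0].
meanfield_iterations_.resize(num_iterations_);
for (int i = 0; i < num_iterations_; ++i) {
  meanfield_iterations_[i].reset(new MeanfieldIteration<Dtype>());
  // Per-iteration setup call goes here (omitted): it receives the split copy of
  // the unaries, the previous iteration's output, the blob to write into, the
  // spatial lattice and its normalization factors.
}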

Next comes Forward_cpu:

/**
 * Performs filter-based mean field inference given the image and unaries.
 *
 * bottom[0] - Unary terms
 * bottom[1] - Softmax input/Output from the previous iteration (a copy of the unary terms if this is the first stage).
 * bottom[2] - RGB images
 *
 * top[0] - Output of the mean field inference (not normalized).
 */
template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {

  split_layer_bottom_vec_[0] = bottom[0];
  split_layer_->Forward(split_layer_bottom_vec_, split_layer_top_vec_);

  // Initialize the bilateral lattices.
  bilateral_lattices_.resize(num_);
  for (int n = 0; n < num_; ++n) {

    compute_bilateral_kernel(bottom[2], n, bilateral_kernel_buffer_.get());
    bilateral_lattices_[n].reset(new ModifiedPermutohedral());
    bilateral_lattices_[n]->init(bilateral_kernel_buffer_.get(), 5, num_pixels_);

    // Calculate bilateral filter normalization factors.
    Dtype* norm_output_data = bilateral_norms_.mutable_cpu_data() + bilateral_norms_.offset(n);
    bilateral_lattices_[n]->compute(norm_output_data, norm_feed_.get(), 1);
    for (int i = 0; i < num_pixels_; ++i) {
      norm_output_data[i] = 1.f / (norm_output_data[i] + 1e-20f);
    }
  }

  for (int i = 0; i < num_iterations_; ++i) {

    meanfield_iterations_[i]->PrePass(this->blobs_, &bilateral_lattices_, &bilateral_norms_);

    meanfield_iterations_[i]->Forward_cpu();
  }
}

Forward_cpu first pushes bottom[0] through the internal split layer, then, for every image in the batch, builds a bilateral permutohedral lattice from the RGB input (compute_bilateral_kernel followed by init) and computes the corresponding bilateral normalization factors. Finally, it runs each of the MeanfieldIteration objects set up earlier exactly once: PrePass hands over the current layer parameters and the freshly built lattices, and Forward_cpu performs one mean-field update.
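
For context, compute_bilateral_kernel (not shown in this post) is the bilateral counterpart of compute_spatial_kernel: for image n it builds a five-dimensional feature per pixel, which is why init is called with 5 above. A sketch of what it does, assuming theta_alpha_ and theta_beta_ are the bilateral kernel widths from the paper and the RGB blob stores the three color planes one after another:

template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::compute_bilateral_kernel(
    const Blob<Dtype>* const rgb_blob, const int n, float* const output_kernel) {

  // Pointer to the first pixel of image n in the RGB blob.
  const Dtype* const rgb_start = rgb_blob->cpu_data() + rgb_blob->offset(n);

  for (int p = 0; p < num_pixels_; ++p) {
    // Position features, scaled by theta_alpha_ (cf. theta_gamma_ in the spatial kernel).
    output_kernel[5 * p]     = static_cast<float>(p % width_) / theta_alpha_;
    output_kernel[5 * p + 1] = static_cast<float>(p / width_) / theta_alpha_;
    // Color features, one value from each of the R, G, B planes, scaled by theta_beta_.
    output_kernel[5 * p + 2] = static_cast<float>(rgb_start[p]) / theta_beta_;
    output_kernel[5 * p + 3] = static_cast<float>(rgb_start[num_pixels_ + p]) / theta_beta_;
    output_kernel[5 * p + 4] = static_cast<float>(rgb_start[2 * num_pixels_ + p]) / theta_beta_;
  }
}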

Next is Backward_cpu():

/**
 * Backprop through filter-based mean field inference.
 */
template<typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::Backward_cpu(
    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {

  for (int i = (num_iterations_ - 1); i >= 0; --i) {
    meanfield_iterations_[i]->Backward_cpu();
  }

  vector<bool> split_layer_propagate_down(1, true);
  split_layer_->Backward(split_layer_top_vec_, split_layer_propagate_down, split_layer_bottom_vec_);

  // Accumulate diffs from mean field iterations.
  for (int blob_id = 0; blob_id < this->blobs_.size(); ++blob_id) {

    Blob<Dtype>* cur_blob = this->blobs_[blob_id].get();

    if (this->param_propagate_down_[blob_id]) {

      caffe_set(cur_blob->count(), Dtype(0), cur_blob->mutable_cpu_diff());

      for (int i = 0; i < num_iterations_; ++i) {
        const Dtype* diffs_to_add = meanfield_iterations_[i]->blobs()[blob_id]->cpu_diff();
        caffe_axpy(cur_blob->count(), Dtype(1.), diffs_to_add, cur_blob->mutable_cpu_diff());
      }
    }
  }
}

It starts by calling Backward_cpu on every MeanfieldIteration, in reverse order, and then backprops through the split layer. After that come the two for loops: the outer one runs over all of the layer's parameter blobs, and the inner one adds up the diffs that each iteration produced for that blob and writes the sum into the corresponding blob's diff.
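
The summation is needed because all iterations share the same layer parameters (per the paper: the spatial and bilateral kernel weights and the compatibility transform), while each MeanfieldIteration works on its own copy handed over by PrePass, so every copy ends up with its own diff after Backward_cpu. The total gradient of the loss with respect to a shared parameter is therefore the sum of the per-iteration gradients, which is exactly what the caffe_set / caffe_axpy pair computes. A stripped-down, self-contained illustration of that accumulation pattern (made-up sizes and values, plain std::vector in place of blobs):

#include <cstddef>
#include <vector>

int main() {
  const int count = 4;  // stands in for cur_blob->count()

  // Diffs produced by each iteration's own copy of one parameter blob,
  // i.e. meanfield_iterations_[i]->blobs()[blob_id]->cpu_diff().
  std::vector<std::vector<float> > iteration_diffs(3, std::vector<float>(count, 0.5f));

  // Shared parameter diff, standing in for cur_blob->mutable_cpu_diff().
  std::vector<float> shared_diff(count, 0.f);    // caffe_set(..., 0, ...)

  for (std::size_t i = 0; i < iteration_diffs.size(); ++i)
    for (int k = 0; k < count; ++k)
      shared_diff[k] += iteration_diffs[i][k];   // caffe_axpy with alpha = 1

  return 0;                                      // each shared_diff[k] is now 1.5f
}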

PS:

------------------------------ Update 2016.06.23 ------------------------------
I have merged this version of caffe into the latest official caffe, because the version it is based on is really old. The download link is here.
