【TensorFlow】 TensorFlow-Slim Image Classification Model Library

Posted by tankII on 2021-09-09


TF-slim is a new lightweight high-level API of TensorFlow (tensorflow.contrib.slim) for defining, training and evaluating complex models. This directory contains code for training and evaluating several widely used Convolutional Neural Network (CNN) image classification models using TF-slim. It contains scripts that will allow you to train models from scratch or fine-tune them from pre-trained network weights. It also contains code for downloading standard image datasets, converting them to TensorFlow's native TFRecord format and reading them in using TF-Slim's data reading and queueing utilities. You can easily train any model on any of these datasets, as we demonstrate below. We've also included a jupyter notebook, which provides working examples of how to use TF-Slim for image classification. For developing or modifying your own models, see also the main TF-Slim page.

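To give a feel for what "defining a model" with TF-Slim looks like, below is a minimal sketch of a toy CNN classifier assembled from slim layers (TF 1.x, tf.contrib.slim). The network itself is illustrative only; it is not one of the published architectures in this directory.

import tensorflow as tf
slim = tf.contrib.slim

def toy_cnn(images, num_classes=5, is_training=True):
    """A tiny CNN classifier built from slim layers (illustrative only)."""
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_regularizer=slim.l2_regularizer(0.0004)):
        net = slim.conv2d(images, 32, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.conv2d(net, 64, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.flatten(net)
        net = slim.dropout(net, 0.5, is_training=is_training, scope='dropout')
        logits = slim.fully_connected(net, num_classes, activation_fn=None,
                                      scope='logits')
    return logits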


To use TF-Slim for image classification, you also need the TF-Slim image models library, which is not part of the core TF library. To get it, check out the tensorflow/models repository as follows:
git clone 


5. Fine-tuning a model from an existing checkpoint

Rather than training from scratch, we'll often want to start from a pre-trained model and fine-tune it. To indicate a checkpoint from which to fine-tune, we'll call training with the --checkpoint_path flag and assign it an absolute path to a checkpoint file.

When fine-tuning a model, we need to be careful about restoring checkpoint weights. In particular, when we fine-tune a model on a new task with a different number of output labels, we won't be able to restore the final logits (classifier) layer. For this, we'll use the --checkpoint_exclude_scopes flag. This flag hinders certain variables from being loaded. When fine-tuning on a classification task using a different number of classes than the trained model, the new model will have a final 'logits' layer whose dimensions differ from the pre-trained model. For example, if fine-tuning an ImageNet-trained model on Flowers, the pre-trained logits layer will have dimensions [2048 x 1001] but our new logits layer will have dimensions [2048 x 5]. Consequently, this flag indicates to TF-Slim to avoid loading these weights from the checkpoint.

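Internally, excluding scopes boils down to filtering the variable list before restoring. A minimal sketch of that idea (TF 1.x, tf.contrib.slim); it assumes the Inception V3 graph has already been built, and the checkpoint path mirrors the example below:

# Variables whose names fall under an excluded scope are skipped when
# restoring, so the new logits layers keep their fresh initialization.
import tensorflow as tf
slim = tf.contrib.slim

checkpoint_exclude_scopes = ['InceptionV3/Logits', 'InceptionV3/AuxLogits']

# Collect every model variable except those under the excluded scopes.
variables_to_restore = slim.get_variables_to_restore(
    exclude=checkpoint_exclude_scopes)

# Build an init function that restores only the selected variables
# from the pre-trained checkpoint.
init_fn = slim.assign_from_checkpoint_fn(
    '/tmp/my_checkpoints/inception_v3.ckpt', variables_to_restore)

# Later, inside a session, init_fn(sess) loads the pre-trained weights.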

Keep in mind that warm-starting from a checkpoint affects the model's weights only during the initialization of the model. Once a model has started training, a new checkpoint will be created in ${TRAIN_DIR}. If the fine-tuning training is stopped and restarted, this new checkpoint will be the one from which weights are restored and not ${checkpoint_path}. Consequently, the flags --checkpoint_path and --checkpoint_exclude_scopes are only used during the 0-th global step (model initialization). Typically for fine-tuning one only wants to train a subset of layers, so the flag --trainable_scopes allows you to specify which subsets of layers should be trained; the rest will remain frozen.

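Restricting training to --trainable_scopes likewise comes down to handing the optimizer only the variables under those scopes. A minimal sketch (TF 1.x), assuming a model graph and a total_loss tensor already exist (both are hypothetical names here):

import tensorflow as tf

trainable_scopes = ['InceptionV3/Logits', 'InceptionV3/AuxLogits']

# Gather only the trainable variables that live under the listed scopes;
# everything else keeps its restored (frozen) weights.
variables_to_train = []
for scope in trainable_scopes:
    variables_to_train.extend(
        tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(total_loss, var_list=variables_to_train)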

Below we give an example of fine-tuning Inception V3 on the Flowers dataset; inception_v3 was trained on ImageNet with 1000 class labels, but the Flowers dataset only has 5 classes. Since the dataset is quite small we will only train the new layers.

$ DATASET_DIR=/tmp/flowers
$ TRAIN_DIR=/tmp/flowers-models/inception_v3
$ CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
$ python train_image_classifier.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=flowers \
    --dataset_split_name=train \
    --model_name=inception_v3 \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
    --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits


7. Exporting the Inference Graph

Saves out a GraphDef (.pb file) containing the architecture of the model.

To use it with a model name defined by slim, run:



$ python export_inference_graph.py \
  --alsologtostderr \
  --model_name=inception_v3 \
  --output_file=/tmp/inception_v3_inf_graph.pb

$ python export_inference_graph.py \
  --alsologtostderr \
  --model_name=mobilenet_v1 \
  --image_size=224 \
  --output_file=/tmp/mobilenet_v1_224.pb
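Conceptually, exporting an inference graph amounts to building the chosen network on a placeholder input and writing out the bare GraphDef (structure only, no weights). A rough sketch of that idea (TF 1.x; the nets_factory import assumes the slim models library is on PYTHONPATH, and the call below is illustrative rather than the script's actual code):

import tensorflow as tf
from nets import nets_factory  # from the tensorflow/models slim library

with tf.Graph().as_default() as graph:
    network_fn = nets_factory.get_network_fn(
        'inception_v3', num_classes=1001, is_training=False)
    image_size = network_fn.default_image_size
    images = tf.placeholder(tf.float32,
                            shape=[None, image_size, image_size, 3],
                            name='input')
    network_fn(images)  # attaches the model ops to the graph

    # Serialize only the graph structure; the weights stay in the checkpoint.
    with tf.gfile.GFile('/tmp/inception_v3_inf_graph.pb', 'wb') as f:
        f.write(graph.as_graph_def().SerializeToString())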

If you then want to use the resulting model with your own or pretrained checkpoints as part of a mobile model, you can run freeze_graph to get a graph def with the variables inlined as constants using:


bazel build tensorflow/python/tools:freeze_graph

bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/inception_v3_inf_graph.pb \
  --input_checkpoint=/tmp/checkpoints/inception_v3.ckpt \
  --input_binary=true \
  --output_graph=/tmp/frozen_inception_v3.pb \
  --output_node_names=InceptionV3/Predictions/Reshape_1

freeze_graph takes the .pb graph file produced by tf.train.write_graph() and the checkpoint files produced by tf.train.Saver(), and freezes them together into a single new .pb file:

# Note: graph.pb here is the file saved with tf.train.write_graph, so at this
# point it contains only the graph structure, not the weights.
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/path/to/graph.pb \
  --input_checkpoint=/path/to/model.ckpt \
  --output_node_names=output/predict \
  --output_graph=/path/to/frozen.pb



How tensorflow/tensorflow/python/tools/freeze_graph.py works internally:

output_graph_def = graph_util.convert_variables_to_constants(  # the key call
    sess,
    input_graph_def,
    output_node_names.replace(" ", "").split(","),
    variable_names_whitelist=variable_names_whitelist,
    variable_names_blacklist=variable_names_blacklist)
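Once the graph has been frozen, it can be consumed without any checkpoint. A minimal sketch (TF 1.x) of loading the frozen .pb and running the output node; the tensor names ('input', 'InceptionV3/Predictions/Reshape_1') follow the examples above and should be verified for your own model:

import numpy as np
import tensorflow as tf

# Read the frozen GraphDef from disk.
with tf.gfile.GFile('/tmp/frozen_inception_v3.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and run the prediction node.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    input_tensor = graph.get_tensor_by_name('input:0')
    output_tensor = graph.get_tensor_by_name('InceptionV3/Predictions/Reshape_1:0')
    probs = sess.run(output_tensor,
                     feed_dict={input_tensor: np.zeros((1, 299, 299, 3), np.float32)})
    print(probs.shape)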

The output node names will vary depending on the model, but you can inspect and estimate them using the summarize_graph tool:


bazel build tensorflow/tools/graph_transforms:summarize_graph

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
  --in_graph=/tmp/inception_v3_inf_graph.pb
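If bazel is not available, a quick alternative (a sketch, not the summarize_graph tool itself) is to read the GraphDef directly and print the last few node names, which is usually where the output nodes live:

import tensorflow as tf

with tf.gfile.GFile('/tmp/inception_v3_inf_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Candidate output nodes usually appear near the end of the GraphDef.
for node in graph_def.node[-10:]:
    print(node.name, node.op)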


