TensorFlow Knowledge Points

Posted by weixin_34413065 on 2018-01-10

1. Using Specified GPUs and GPU Memory

If the machine has multiple GPUs, the TensorFlow runtime uses every GPU visible to it by default, and it also grabs as much GPU memory as it can. In many cases, however, a program does not actually need that many resources, and TF's exclusive allocation strategy simply wastes them. So how do we limit the number of GPUs and the amount of GPU memory that TF may use?

1.1 Setting Which GPU Devices TF Can Use

GPU devices are numbered from zero in TF. If the machine has four GPUs, they are numbered 0, 1, 2, 3, and their device names are "/gpu:0", "/gpu:1", "/gpu:2", "/gpu:3". By default every device is visible to the TF runtime, and TF will occupy every GPU it can see. The environment variable CUDA_VISIBLE_DEVICES controls which GPUs are visible to TF; its syntax is as follows:

Environment Variable Syntax       Results
CUDA_VISIBLE_DEVICES=1            Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1          Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"        Same as above; quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3        Devices 0, 2, 3 will be visible; device 1 is masked

To keep device usage flexible, CUDA_VISIBLE_DEVICES should be set as a temporary, per-invocation environment variable; by default it is unset. In a terminal it is used as follows:

## On Windows
# set CUDA_DEVICE_ORDER=PCI_BUS_ID     # index devices in PCI_BUS_ID order
set CUDA_VISIBLE_DEVICES=1             # make GPU 1 (/gpu:1) visible to subsequent CUDA programs
python mywork.py
## On Linux
CUDA_VISIBLE_DEVICES=1 python mywork.py

CUDA_VISIBLE_DEVICES can also be set inside the Python code itself, which keeps behaviour consistent across platforms and saves you from setting the environment variable by hand.

import os
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"     # see TensorFlow issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

A value set this way overrides any value set by hand in the terminal.
Note 1: Restricting the GPUs visible to TF via CUDA_VISIBLE_DEVICES is not a TF-specific mechanism; CUDA_VISIBLE_DEVICES is how CUDA itself limits which GPU devices a CUDA application can see. See the documentation of the CUDA_VISIBLE_DEVICES environment variable.
Note 2: The runtime device indices produced by CUDA_VISIBLE_DEVICES may not match the ordering reported by the NVML tool nvidia-smi. See the documentation of the CUDA_DEVICE_ORDER environment variable.
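As a concrete sketch of Note 2 (the device index below is just a placeholder): setting CUDA_DEVICE_ORDER to PCI_BUS_ID makes CUDA enumerate GPUs in the same order as nvidia-smi, so the index passed in CUDA_VISIBLE_DEVICES refers to the card you see in nvidia-smi's output. Both variables should be set before TensorFlow initializes CUDA, i.e. before the first Session touches a GPU.

import os

# Enumerate GPUs in PCI bus order so the indices match nvidia-smi's output
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# Expose only GPU 1 (in nvidia-smi numbering) to this process
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Import TensorFlow only after the variables are set, so the CUDA runtime
# starts with the restricted, re-ordered device list
import tensorflow as tf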

1.2 Setting How Much GPU Memory TF Can Use

By default TF occupies as much of the visible GPU memory as possible. You can limit this by passing GPU options when the TF Session is created. The two typical settings are a fixed fraction and on-demand growth.

1.2.1 Using a Fixed Fraction of GPU Memory

With this setting, TF can use at most the specified fraction of the visible GPU memory.

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))  

Tip: visible GPU memory = the total memory of all visible GPUs.
With the setting above, TF can use at most 70% of the visible GPU memory.

1.2.2 Using GPU Memory on Demand

With the allow_growth GPU option set, TF allocates visible GPU memory on demand instead of grabbing it all up front.

gpu_options = tf.GPUOptions(allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) 
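The device and memory limits can also be combined in a single ConfigProto. The sketch below (assuming a machine with at least two GPUs) restricts TF to GPU 1 through the GPUOptions field visible_device_list and lets it grow memory on demand:

import tensorflow as tf

# Restrict TF to a single GPU and allocate its memory incrementally
gpu_options = tf.GPUOptions(visible_device_list="1",  # only GPU 1 is exposed to this process
                            allow_growth=True)        # grab memory as needed, not up front
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))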

2. Variables

A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class.
Features:

  1. A variable is a tensor whose value can be changed by running ops on it (see the sketch after this list).
  2. Modifications on a variable are visible across multiple tf.Sessions.
  3. A variable exists outside the context of a single session.run call.
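A minimal sketch of points 1 and 3 (the variable name "counter" is just illustrative): an op changes the variable's value, and that value persists across separate session.run calls.

import tensorflow as tf

counter = tf.get_variable("counter", shape=(), initializer=tf.zeros_initializer())
increment = tf.assign_add(counter, 1.0)     # an op that modifies the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(increment)                     # run the op once
    print(sess.run(counter))                # prints 1.0 -- the new value persists between run calls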

The best way to create a variable is to call the tf.get_variable function.
Usage: name = tf.get_variable(str_name, shape=[1], dtype=tf.float32, initializer=tf.glorot_uniform_initializer, collections, trainable)
Parameters:

--name: the Python name used to refer to the variable in the surrounding code
--str_name: the variable's graph name, used to name this variable's value when checkpointing and exporting models
--shape: an iterable (list, tuple, etc.) giving the length of each dimension, like a NumPy array's shape; shape=() creates a scalar
--dtype: the variable's data type, default tf.float32
--initializer: how the variable is initialized when tf.global_variables_initializer() or session.run(name.initializer) is run; default tf.glorot_uniform_initializer
--collections: the collections the variable belongs to
--trainable: whether the variable is trainable; if True, it is added to the tf.GraphKeys.TRAINABLE_VARIABLES collection
Note: to keep variable usage consistent, name and str_name are usually set to the same string.
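A minimal sketch (with an illustrative name and shape) spelling out all of the parameters listed above:

import tensorflow as tf

weights = tf.get_variable(
    "weights",                                    # str_name: used when checkpointing/exporting
    shape=[784, 10],                              # one length per dimension
    dtype=tf.float32,                             # data type of the variable
    initializer=tf.glorot_uniform_initializer(),  # how the initial value is produced
    collections=[tf.GraphKeys.GLOBAL_VARIABLES],  # collections the variable is added to
    trainable=True)                               # also adds it to TRAINABLE_VARIABLES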

#### Variable usage examples
## create a variable in the tf.GraphKeys.LOCAL_VARIABLES collection, whose variables are not trainable
my_local = tf.get_variable("my_local", shape=(),
                           collections=[tf.GraphKeys.LOCAL_VARIABLES])

## create a variable which is not trainable
my_non_trainable = tf.get_variable("my_non_trainable", 
                                   shape=(), 
                                   trainable=False)

## add an existing variable named my_local to a collection named my_collection_name
tf.add_to_collection("my_collection_name", my_local)

## retrieve a list of all the variables (or other objects) in the collection named my_collection_name
tf.get_collection("my_collection_name")

## creates a variable named v and places it on the second GPU device
with tf.device("/device:GPU:1"):
    v = tf.get_variable("v", [1])

## initialize all variables
session.run(tf.global_variables_initializer())

## initialize variable my_variable only
session.run(my_variable.initializer)

## prints the names of all variables which have not yet been initialized
print(session.run(tf.report_uninitialized_variables()))

## initialize w with v's value
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
w = tf.get_variable("w", initializer=v.initialized_value() + 1)

## set reuse=True to reuse variables
with tf.variable_scope("model"):
  output1 = my_image_filter(input1)
with tf.variable_scope("model", reuse=True):
  output2 = my_image_filter(input2)

## call scope.reuse_variables() to trigger a variable reuse
with tf.variable_scope("model") as scope:
  output1 = my_image_filter(input1)
  scope.reuse_variables()
  output2 = my_image_filter(input2)

## initialize a variable scope based on another one
## set reuse=true to share variables
with tf.variable_scope("model") as scope:
  output1 = my_image_filter(input1)
with tf.variable_scope(scope, reuse=True):
  output2 = my_image_filter(input2)

Notes:

  1. Any string is a valid collection name, and there is no need to explicitly create a collection.
  2. Note that by default tf.global_variables_initializer does not specify the order in which variables are initialized. Therefore, if the initial value of a variable depends on another variable's value, it's likely that you'll get an error. Any time you use the value of a variable in a context in which not all variables are initialized, it is best to use variable.initialized_value() instead of variable.
  3. Variable scopes allow you to control variable reuse when calling functions which implicitly create and use variables. They also allow you to name your variables in a hierarchical and understandable way.
  4. Since depending on exact string names of scopes can feel dangerous, it's also possible to initialize a variable scope based on another one, as in the last example above.

See TensorFlow Docs -- Variables.

3. TensorBoard: multiple scalar summaries in one plot

To make comparisons easy, plot two variables in a single TensorBoard chart, for example:


[Figure: comparison plot -- the two scalar curves overlaid in one chart]

import tensorflow as tf
from numpy import random

writer_1 = tf.summary.FileWriter("./logs/plot_1")
writer_2 = tf.summary.FileWriter("./logs/plot_2")

log_var = tf.Variable(0.0)
tf.summary.scalar("loss", log_var)

write_op = tf.summary.merge_all()

session = tf.InteractiveSession()
session.run(tf.global_variables_initializer())

for i in range(100):
    # for writer 1
    summary = session.run(write_op, {log_var: random.rand()})
    writer_1.add_summary(summary, i)
    writer_1.flush()

    # for writer 2
    summary = session.run(write_op, {log_var: random.rand()})
    writer_2.add_summary(summary, i)
    writer_2.flush()
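
With both writers logging under ./logs, launching TensorBoard with tensorboard --logdir ./logs shows plot_1 and plot_2 as two runs overlaid on the same "loss" chart, which is how the two curves end up in a single plot.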