TensorFlow learning notes 1 (code adapted from the official site)

Posted by Emma1997 on 2017-05-29

1. First, import TensorFlow

import tensorflow as tf

2. Tensors

3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]

A TensorFlow program is organized as a computational graph: a graph whose nodes are operations and whose edges carry tensors.

3. Creating constant nodes: a constant is given its value when it is created and cannot be changed afterwards.
1) For example, create two floating-point nodes

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)

Running this statement prints

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

To see the actual values 3.0 and 4.0, you have to evaluate the nodes by running the computational graph inside a session:

sess = tf.Session()
print(sess.run([node1, node2]))

The output is

[3.0, 4.0]

2) Adding two constants

node3 = tf.add(node1, node2)
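
A quick check of the result, run in the same session as above; adding 3.0 and 4.0 gives 7.0:

print("node3:", node3)
print("sess.run(node3):", sess.run(node3))  # 7.0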

4. Placeholders: nodes whose values are supplied later

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

Placeholders also support the arithmetic operators +, -, * and /:

print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))

To see the result of such an operation, first feed values for a and b as above, then call sess.run().
The result is

7.5
[ 3.  7.]
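
The same operator shortcuts work for subtraction, multiplication and division as well; a minimal sketch with multiplication (not part of the original code):

product_node = a * b  # shortcut for tf.multiply(a, b)
print(sess.run(product_node, {a: 3, b: 4.5}))  # prints 13.5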

You can also compose operations:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))
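
With {a: 3, b: 4.5} this prints 22.5, i.e. (3 + 4.5) * 3.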

5. Variables
1) Defining a variable

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

2) Initializing variables (never forget this step, otherwise the program will fail when the variables are used)

init = tf.global_variables_initializer()
sess.run(init)

3) Variables can be combined with other kinds of nodes in expressions

x = tf.placeholder(tf.float32)
linear_model = W * x + b
print(sess.run(linear_model, {x:[1,2,3,4]}))
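
Since W = 0.3 and b = -0.3, the printed values are approximately [ 0.  0.3  0.6  0.9] (up to float32 rounding).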

4) Assigning new values to variables

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
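
The loss op evaluated below is assumed to have been defined earlier in the session; a minimal definition, matching the full example in section 7, would be:

y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of squared errors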

Now, running

print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))  # loss is the combination of ops defined above

again reports the loss computed with the new values of W and b.
6. The tf.train API
TensorFlow is a platform for building machine learning systems: given some inputs, a model predicts the corresponding outputs, and its parameters are tuned so that the predictions become more accurate. There are many kinds of prediction models, linear regression among them. TensorFlow provides optimizers that adjust a model's parameters automatically so that the gap between the predicted outputs and the real outputs becomes as small as possible. The sum of the squared differences between the predicted and the real outputs is one basic loss function; in other words, training means adjusting the parameters to minimize the loss. The code below tunes the parameters by gradient descent.

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})

print(sess.run([W, b]))
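
Because the training data lie exactly on the line y = -x + 1, the learned parameters converge to values very close to W = -1 and b = 1.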

7. The complete linear-regression training program

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

' a test module '

__author__ = 'Google and Emma Guo'

import tensorflow as tf
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss function
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer() # initialize the variables
sess = tf.Session()
sess.run(init) # reset values to the (incorrect) initial defaults
for i in range(1000):
  sess.run(train, {x:x_train, y:y_train}) # actually run train to optimize the parameters

# evaluate training accuracy
curr_W, curr_b, curr_loss  = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
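
Running the script should report parameters close to the exact solution W = -1, b = 1, with a loss within floating-point noise of zero.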

8. tf.contrib.learn is a higher-level TensorFlow library that simplifies the mechanics of machine learning (ML)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

' a test module '

__author__ = 'Google and Emma Guo'

import tensorflow as tf
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Declare list of features. We only have one real-valued feature. There are many
# other types of columns that are more complicated and useful.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# logistic regression, linear classification, logistic classification, and
# many neural network classifiers and regressors. The following code
# provides an estimator that does linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# TensorFlow provides many helper methods to read and set up data sets.
# Here we use `numpy_input_fn`. We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
                                              num_epochs=1000)

# We can invoke 1000 training steps by invoking the `fit` method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)

# Here we evaluate how well our model did. In a real example, we would want
# to use a separate validation and testing data set to avoid overfitting.
print(estimator.evaluate(input_fn=input_fn))
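
estimator.evaluate returns a dict of metrics; for this data the reported loss should be close to zero, together with the final global_step (1000).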

9. You can also define the model yourself with the low-level primitives from the beginning and plug it into tf.contrib.learn as a custom model; it is more verbose, so the code is attached without further commentary.

import numpy as np
import tensorflow as tf
# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
  # Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W*features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # ModelFnOps connects subgraphs we built to the
  # appropriate functionality.
  return tf.contrib.learn.ModelFnOps(
      mode=mode, predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.contrib.learn.Estimator(model_fn=model)
# define our data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)

# train
estimator.fit(input_fn=input_fn, steps=1000)
# evaluate our model
print(estimator.evaluate(input_fn=input_fn, steps=10))
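
As in section 8, evaluate() returns a dict of metrics; since this custom model fits the same data, the reported loss should again be near zero.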
