Linear Regression
Linear regression is a very common form of regression. It can be used for prediction or classification, and mainly addresses linear problems. For background, see "Related Reading".
Main idea
The key to doing linear regression in TensorFlow is arranging the samples and their features as matrices.
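To make the matrix layout concrete, here is a plain NumPy sketch (not TensorFlow, and the numbers are made up for illustration): a batch of m samples with n features each forms an m-by-n matrix X, so the whole batch is predicted with a single matrix multiplication.

```python
import numpy as np

# Hypothetical batch: m = 3 samples, n = 2 features each
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 5.0]])   # shape (3, 2)
w = np.array([[2.0],
              [1.0]])        # shape (2, 1): one weight per feature
b = 3.0

y = X @ w + b                # shape (3, 1): one prediction per sample
print(y.ravel())             # [ 7.  8. 14.]
```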
Single-feature linear regression
The single-feature regression model is: y = wx + b
Building the model
X = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(X, w) + b
Y = tf.placeholder(tf.float32, [None, 1])
Building the cost function
cost = tf.reduce_mean(tf.square(Y-y))
Minimize the cost function with gradient descent; the learning rate (step size) is 0.01:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
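To see what the optimizer step does under the hood, here is a rough NumPy sketch of gradient descent on the MSE cost (toy data of my own, not TensorFlow's actual implementation): each iteration computes the gradient of the cost with respect to w and b and moves both against the gradient by the learning rate.

```python
import numpy as np

# Toy data: y ≈ 2x + 1 (hypothetical, just for illustration)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
Y = np.array([[3.0], [5.0], [7.0], [9.0]])

w, b = np.zeros((1, 1)), 0.0
lr = 0.01                                  # learning rate (step size)
for _ in range(10000):
    y_pred = X @ w + b
    err = y_pred - Y                       # shape (4, 1)
    grad_w = 2 * X.T @ err / len(X)        # d(cost)/dw
    grad_b = 2 * err.mean()                # d(cost)/db
    w -= lr * grad_w                       # gradient descent step
    b -= lr * grad_b

print(w[0, 0], b)                          # ≈ 2.0 and ≈ 1.0
```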
Complete code, iterating 10000 times:
import tensorflow as tf
X = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(X, w) + b
Y = tf.placeholder(tf.float32, [None, 1])
# Cost function: sum(sqr(Y-y))/n, i.e. mean squared error
cost = tf.reduce_mean(tf.square(Y-y))
# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
x_train = [[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]]
y_train = [[10],[11.5],[12],[13],[14.5],[15.5],[16.8],[17.3],[18],[18.7]]
for i in range(10000):
    sess.run(train_step, feed_dict={X: x_train, Y: y_train})
print("w:%f" % sess.run(w))
print("b:%f" % sess.run(b))
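As a sanity check on the trained values, the same data can be solved in closed form with NumPy's least squares (this is ordinary least squares, separate from the TensorFlow program above); the gradient-descent result should converge toward these values.

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([10, 11.5, 12, 13, 14.5, 15.5, 16.8, 17.3, 18, 18.7])

# Augment x with a column of ones so lstsq also fits the intercept b
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print("w:%f" % w)   # ≈ 0.986
print("b:%f" % b)   # ≈ 9.307
```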
Multi-feature linear regression
The multi-feature regression model is: y = w1*x1 + w2*x2 + ... + wn*xn + b, which can be written in matrix form as y = Xw + b.
Here y is an m-by-1 matrix, X is an m-by-n matrix, and w is an n-by-1 matrix. The model is expressed in TensorFlow as follows.
Building the model
X = tf.placeholder(tf.float32, [None, n])
w = tf.Variable(tf.zeros([n, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(X, w) + b
Y = tf.placeholder(tf.float32, [None, 1])
Building the cost function
cost = tf.reduce_mean(tf.square(Y-y))
Minimize the cost function with gradient descent; the learning rate (step size) is 0.01:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
Complete code, iterating 10000 times:
import tensorflow as tf
X = tf.placeholder(tf.float32, [None, 2])
w = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(X, w) + b
Y = tf.placeholder(tf.float32, [None, 1])
# Cost function: sum(sqr(Y-y))/n, i.e. mean squared error
cost = tf.reduce_mean(tf.square(Y-y))
# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
x_train = [[1, 2], [2, 1], [2, 3], [3, 5], [1, 3], [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]]
y_train = [[7], [8], [10], [14], [8], [13], [20], [16], [28], [26]]
for i in range(10000):
    sess.run(train_step, feed_dict={X: x_train, Y: y_train})
print("w0:%f" % sess.run(w[0]))
print("w1:%f" % sess.run(w[1]))
print("b:%f" % sess.run(b))
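The multi-feature fit can be cross-checked the same way with plain NumPy least squares (again separate from the TensorFlow code). For this particular training data the fit is exact: every sample satisfies y = 2*x1 + 1*x2 + 3, so the solver recovers w = [2, 1] and b = 3.

```python
import numpy as np

x = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3],
              [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]], dtype=float)
y = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26], dtype=float)

# Append a ones column so the intercept b is fitted jointly with w
A = np.column_stack([x, np.ones(len(x))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
w0, w1, b = theta
print("w0:%f w1:%f b:%f" % (w0, w1, b))   # 2.000000 1.000000 3.000000
```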
Summary
For linear regression, TensorFlow makes it very convenient to train on multi-feature samples using matrices.
Related Reading