2-7 Regression Model in Practice
# Regression model (regression)
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
Loading the housing dataset
# Load the California housing dataset
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
print(housing.DESCR)
print(housing.data.shape)
print(housing.target.shape)
.. _california_housing_dataset:

California Housing dataset
--------------------------

**Data Set Characteristics:**

    :Number of Instances: 20640

    :Number of Attributes: 8 numeric, predictive attributes and the target

    :Attribute Information:
        - MedInc        median income in block
        - HouseAge      median house age in block
        - AveRooms      average number of rooms
        - AveBedrms     average number of bedrooms
        - Population    block population
        - AveOccup      average house occupancy
        - Latitude      house block latitude
        - Longitude     house block longitude

    :Missing Attribute Values: None

This dataset was obtained from the StatLib repository.
http://lib.stat.cmu.edu/datasets/

The target variable is the median house value for California districts.

This dataset was derived from the 1990 U.S. census, using one row per census
block group. A block group is the smallest geographical unit for which the U.S.
Census Bureau publishes sample data (a block group typically has a population
of 600 to 3,000 people).

It can be downloaded/loaded using the
:func:`sklearn.datasets.fetch_california_housing` function.

.. topic:: References

    - Pace, R. Kelley and Ronald Barry, Sparse Spatial Autoregressions,
      Statistics and Probability Letters, 33 (1997) 291-297
(20640, 8)
(20640,)
import pprint  # pprint makes the printed arrays easier to read
pprint.pprint(housing.data[0:5])
pprint.pprint(housing.target[0:5])
array([[ 8.32520000e+00, 4.10000000e+01, 6.98412698e+00,
1.02380952e+00, 3.22000000e+02, 2.55555556e+00,
3.78800000e+01, -1.22230000e+02],
[ 8.30140000e+00, 2.10000000e+01, 6.23813708e+00,
9.71880492e-01, 2.40100000e+03, 2.10984183e+00,
3.78600000e+01, -1.22220000e+02],
[ 7.25740000e+00, 5.20000000e+01, 8.28813559e+00,
1.07344633e+00, 4.96000000e+02, 2.80225989e+00,
3.78500000e+01, -1.22240000e+02],
[ 5.64310000e+00, 5.20000000e+01, 5.81735160e+00,
1.07305936e+00, 5.58000000e+02, 2.54794521e+00,
3.78500000e+01, -1.22250000e+02],
[ 3.84620000e+00, 5.20000000e+01, 6.28185328e+00,
1.08108108e+00, 5.65000000e+02, 2.18146718e+00,
3.78500000e+01, -1.22250000e+02]])
array([4.526, 3.585, 3.521, 3.413, 3.422])
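For a friendlier view than the raw arrays above, the features can be wrapped in a pandas DataFrame (a minimal sketch, not part of the original code; housing.feature_names lists the 8 column names in order):

df = pd.DataFrame(housing.data, columns=housing.feature_names)
print(df.head())
print(df.describe())  # per-feature summary statistics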
from sklearn.model_selection import train_test_split
# First split off a test set, then carve a validation set out of the
# remaining training data (train_test_split holds out 25% by default).
x_train_all, x_test, y_train_all, y_test = train_test_split(
    housing.data, housing.target, random_state=7
)
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train_all, y_train_all, random_state=11
)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
print(x_valid.shape, y_valid.shape)
(11610, 8) (11610,)
(5160, 8) (5160,)
(3870, 8) (3870,)
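These shapes follow from the default 25% hold-out: 20640 × 0.25 = 5160 rows go to the test set; of the remaining 15480, another 25% (3870 rows) become the validation set, leaving 11610 rows for training.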
# Standardize the features (zero mean, unit variance)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit on the training set only; reuse its statistics for the
# validation and test sets to avoid data leakage.
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
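As an aside (not part of the original code), the transform is simply (x - mean) / std with statistics learned from the training set; the fitted scaler exposes them as mean_ and scale_:

manual = (x_train - scaler.mean_) / scaler.scale_
print(np.allclose(manual, x_train_scaled))  # True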
Building the model
# Build the model
model = keras.models.Sequential([
    keras.layers.Dense(30, activation='relu',
                       input_shape=x_train.shape[1:]),  # (8,): the 8 input features
    keras.layers.Dense(1),  # single output: the predicted median house value
])
model.summary()
model.compile(loss="mean_squared_error", optimizer='sgd')
# Stop training once val_loss has failed to improve by at least
# min_delta for `patience` consecutive epochs.
callbacks = [keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3)]
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 30) 270
_________________________________________________________________
dense_1 (Dense) (None, 1) 31
=================================================================
Total params: 301
Trainable params: 301
Non-trainable params: 0
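The parameter counts follow directly from the layer shapes: the hidden layer has 8 × 30 weights plus 30 biases (270 parameters), and the output layer has 30 × 1 weights plus 1 bias (31), for 301 in total.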
history = model.fit(x_train_scaled, y_train,
                    validation_data=(x_valid_scaled, y_valid),
                    epochs=100,
                    callbacks=callbacks)
Part of the training output:
Epoch 36/100
363/363 [==============================] - 1s 2ms/step - loss: 0.3469 - val_loss: 0.3647
Epoch 37/100
363/363 [==============================] - 1s 2ms/step - loss: 0.3491 - val_loss: 0.3722
Epoch 38/100
363/363 [==============================] - 1s 2ms/step - loss: 0.3459 - val_loss: 0.3630
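The 363 steps per epoch come from Keras's default batch size of 32: 11610 training rows split into batches of 32 gives 362.8, rounded up to 363. Training also ends well before epoch 100, because the EarlyStopping callback fires once val_loss stops improving by at least 1e-3 for 5 epochs in a row. A quick check of the step count:

import math
print(math.ceil(11610 / 32))  # 363 batches per epoch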
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)
model.evaluate(x_test_scaled, y_test)
0.3662084639072418
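The test MSE of roughly 0.366 is in line with the final validation loss. As a quick follow-up sketch (not in the original), predictions can be compared against the first few targets, which are median house values in units of $100,000:

y_pred = model.predict(x_test_scaled[:3])
print(y_pred.ravel())  # predicted median house values
print(y_test[:3])      # ground truth, in units of $100,000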