One way to generalize from a set of examples is to build a model of those examples and then use that model to make predictions; this is called model-based learning.
For example, suppose you want to know whether money makes people happy. Below is a simple case based on a linear model.
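The linear model used below has only two parameters, the intercept θ0 and the slope θ1; training it means finding the values of these two numbers that best fit the data:

$$\text{life\_satisfaction} = \theta_0 + \theta_1 \times \text{GDP\_per\_capita}$$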
# Python ≥3.5
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20
import sklearn
assert sklearn.__version__ >= "0.20"
Load the data
# Set the path where the data files live
import os
datapath = os.path.join("datasets", "lifesat", "")
print(datapath)
datasets/lifesat/
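If the two CSV files are not on disk yet, they can be downloaded first. A minimal sketch, assuming copies of the files are available in the handson-ml2 GitHub repository (the URL is an assumption; point it at wherever you keep the data):

import urllib.request
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"  # assumed data source
os.makedirs(datapath, exist_ok=True)  # create datasets/lifesat/ if it does not exist
for filename in ("oecd_bli_2015.csv", "gdp_per_capita.csv"):
    urllib.request.urlretrieve(DOWNLOAD_ROOT + "datasets/lifesat/" + filename, datapath + filename)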
The Better Life Index data downloaded from the OECD website looks like this:
import numpy as np
import pandas as pd
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')  # thousands=',' treats ',' as the thousands separator
oecd_bli.head()
 | LOCATION | Country | INDICATOR | Indicator | MEASURE | Measure | INEQUALITY | Inequality | Unit Code | Unit | PowerCode Code | PowerCode | Reference Period Code | Reference Period | Value | Flag Codes | Flags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | AUS | Australia | HO_BASE | Dwellings without basic facilities | L | Value | TOT | Total | PC | Percentage | 0 | units | NaN | NaN | 1.1 | E | Estimated value |
1 | AUT | Austria | HO_BASE | Dwellings without basic facilities | L | Value | TOT | Total | PC | Percentage | 0 | units | NaN | NaN | 1.0 | NaN | NaN |
2 | BEL | Belgium | HO_BASE | Dwellings without basic facilities | L | Value | TOT | Total | PC | Percentage | 0 | units | NaN | NaN | 2.0 | NaN | NaN |
3 | CAN | Canada | HO_BASE | Dwellings without basic facilities | L | Value | TOT | Total | PC | Percentage | 0 | units | NaN | NaN | 0.2 | NaN | NaN |
4 | CZE | Czech Republic | HO_BASE | Dwellings without basic facilities | L | Value | TOT | Total | PC | Percentage | 0 | units | NaN | NaN | 0.9 | NaN | NaN |
The GDP per capita data downloaded from the IMF looks like this:
# The IMF file is tab-delimited, latin1-encoded, and uses "n/a" for missing values
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv", thousands=',',
                             delimiter='\t', encoding='latin1', na_values="n/a")
gdp_per_capita.head()
 | Country | Subject Descriptor | Units | Scale | Country/Series-specific Notes | 2015 | Estimates Start After |
---|---|---|---|---|---|---|---|
0 | Afghanistan | Gross domestic product per capita, current prices | U.S. dollars | Units | See notes for: Gross domestic product, curren... | 599.994 | 2013.0 |
1 | Albania | Gross domestic product per capita, current prices | U.S. dollars | Units | See notes for: Gross domestic product, curren... | 3995.383 | 2010.0 |
2 | Algeria | Gross domestic product per capita, current prices | U.S. dollars | Units | See notes for: Gross domestic product, curren... | 4318.135 | 2014.0 |
3 | Angola | Gross domestic product per capita, current prices | U.S. dollars | Units | See notes for: Gross domestic product, curren... | 4100.315 | 2014.0 |
4 | Antigua and Barbuda | Gross domestic product per capita, current prices | U.S. dollars | Units | See notes for: Gross domestic product, curren... | 14414.302 | 2011.0 |
Prepare the data
This function just merges the OECD's life satisfaction data and the IMF's GDP per capita data. It's a bit too long and boring and it's not specific to Machine Learning, which is why I left it out of the book.
def prepare_country_stats(oecd_bli, gdp_per_capita):
    # Keep only the "total" rows (no inequality breakdown) and pivot so each
    # indicator becomes a column, indexed by country.
    oecd_bli = oecd_bli[oecd_bli["INEQUALITY"] == "TOT"]
    oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
    # Rename the 2015 GDP column and index the IMF table by country as well.
    gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
    gdp_per_capita.set_index("Country", inplace=True)
    # Join the two tables on the country index and sort by GDP per capita.
    full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                                  left_index=True, right_index=True)
    full_country_stats.sort_values(by="GDP per capita", inplace=True)
    # Set aside a few countries (the removed indices) so they can be used later
    # to see how well the model generalizes to data it was not trained on.
    remove_indices = [0, 1, 6, 8, 33, 34, 35]
    keep_indices = list(set(range(36)) - set(remove_indices))
    return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
country_stats.head()
Country | GDP per capita | Life satisfaction |
---|---|---|
Russia | 9054.914 | 6.0 |
Turkey | 9437.372 | 5.6 |
Hungary | 12239.894 | 4.9 |
Poland | 12495.334 | 5.8 |
Slovak Republic | 15991.736 | 6.1 |
Visualize the data
import matplotlib.pyplot as plt
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
Linear regression
import sklearn.linear_model
model = sklearn.linear_model.LinearRegression()
Train the model
X = np.c_[country_stats["GDP per capita"]]     # np.c_ reshapes the column into a 2-D array of shape (n_samples, 1)
y = np.c_[country_stats["Life satisfaction"]]  # target values, also as a 2-D column
model.fit(X, y)
LinearRegression()
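After fitting, the two learned parameters θ0 (intercept_) and θ1 (coef_) can be read back from the estimator; the exact values depend on the training data:

t0, t1 = model.intercept_[0], model.coef_[0][0]  # theta_0 (bias) and theta_1 (slope)
print(t0, t1)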
Make a prediction with the model
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
[[5.96242338]]
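For comparison, the same prediction can also be made with an instance-based method rather than a model-based one. A sketch using k-nearest neighbors regression (k=3 is just an illustrative choice):

import sklearn.neighbors
knn = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
knn.fit(X, y)
print(knn.predict(X_new))  # averages the life satisfaction of the 3 countries closest to Cyprus in GDP per capita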
Summary
read_csv parameters

thousands=','
: Thousands separator; turns a string such as "1,000" into the integer 1000.

delimiter='\t'
: Alternative name for sep; a CSV file may be delimited by "," or "\t" (open it in a text editor such as Sublime to check).

encoding='latin1'
: The file can only be decoded correctly with the right encoding; opening it in vim and running :set fileencoding shows the encoding.

na_values="n/a"
: Treats "n/a" as a missing value; see https://blog.csdn.net/weixin_44520259/article/details/106053987 and the short sketch below.
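A tiny self-contained sketch of how thousands, delimiter, and na_values behave, on made-up inline data (purely for illustration):

from io import StringIO
import pandas as pd
csv_text = "Country\tGDP\nA\t1,000\nB\tn/a\n"  # tab-delimited; one thousands separator, one missing value
df = pd.read_csv(StringIO(csv_text), delimiter='\t', thousands=',', na_values="n/a")
print(df)         # "1,000" is parsed as 1000 and "n/a" becomes NaN
print(df.dtypes)  # GDP ends up as a numeric (float) column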
The focus here is on the principles of machine learning. For unfamiliar parts of numpy, pandas, and the like, just look them up as you run into them; there is no need to study them systematically. Keep the focus on what matters!