Python Web Scraping Study Notes (3)

Posted by Nu1L on 2021-01-30

Cookies:

Scraping https://www.yaozh.com/ as the example.

Test 1 (without cookies):

Code:

import urllib.request

# 1. Set the URL
url = "https://www.yaozh.com/"

# 2. Set the request headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}

# 3. Build the request object
request = urllib.request.Request(url, headers=headers)

# 4. Send the request
response = urllib.request.urlopen(request)

# 5. Read the data
data = response.read()

# Save to a file to verify the data
with open('01cookies.html', 'wb') as f:
    f.write(data)

Result:

The saved page renders in guest mode, i.e. not logged in.

Test 2 (with cookies: manual login):

Find the Cookie value in the browser's Network panel.

Code (log in first, then scrape):

"""
    Fetch the member-center page directly:
    manually copy the Cookie value captured in the Network panel
    and paste it into the request headers of the Request object.
"""
import urllib.request


# 1. Set the URL
url = "https://www.yaozh.com/"

# 2. Set the request headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36",
    "Cookie": "acw_tc=707c9fc316119925309487503e709498d3fe1f6beb4457b1cb1399958ad4d3; PHPSESSID=bvc8utedu2sljbdb818m4va8q3; _ga=GA1.2.472741825.1611992531; _gid=GA1.2.2079712096.1611992531; yaozh_logintime=1611992697; yaozh_user=1038868%09s1mpL3; yaozh_userId=1038868; yaozh_jobstatus=kptta67UcJieW6zKnFSe2JyYnoaSZ5htnZqdg26qb21rg66flM6bh5%2BscZdyVNaWz9Gwl4Ny2G%2BenofNlKqpl6XKppZVnKmflWlxg2lolJabd519626986447e0E3cd918611D19BBEbmpaamm6HcNiemZtVq56lloN0pG2SaZ%2BGam2SaWucl5ianZiWbIdw4g%3D%3Da9295385d0680617486debd4ce304305; _gat=1; Hm_lpvt_65968db3ac154c3089d7f9a4cbb98c94=1611992698; yaozh_uidhas=1; yaozh_mylogin=1611992704; acw_tc=707c9fc316119925309487503e709498d3fe1f6beb4457b1cb1399958ad4d3; Hm_lvt_65968db3ac154c3089d7f9a4cbb98c94=1611992531%2C1611992638",
}

# 3. Build the request object
request = urllib.request.Request(url, headers=headers)

# 4. Send the request
response = urllib.request.urlopen(request)

# 5. Read the data
data = response.read()

# Save to a file to verify the data
with open('01cookies2.html', 'wb') as f:
    f.write(data)

Result:

The page now renders in the logged-in state as user s1mpL3.

Test 3 (with cookies: logging in via code):

Preparation:

1. Check Preserve Log so the requests from the previous login are kept.

2. From the captured login traffic, observe that logging in sends a POST request.

3. After logging in, log out, return to the login page, inspect the elements, and find each field of the login form.

Code:

"""
    Fetch the member-center page:
    1. log in via code; once login succeeds the cookie is valid
    2. automatically carry the cookie when requesting the member center

    cookiejar: saves cookies automatically
"""
import urllib.request
from http import cookiejar
from urllib import parse
# Before logging in, open the login page https://www.yaozh.com/login and find the login parameters

# The backend decides by request method: a GET returns the login page, a POST returns the login result

#   1. Log in via code
# 1.1 Login URL
login_url = "https://www.yaozh.com/login"
# 1.2 Login parameters
login_form_data = {
    "username": "s1mpL3",
    "pwd": "***************",      # redacted for privacy
    "formhash": "87F6F28A4*",      # redacted for privacy
    "backurl": "https%3A%2F%2Fwww.yaozh.com%2F",
}
# The parameters must be URL-encoded; POST data must be bytes
login_str = urllib.parse.urlencode(login_form_data).encode('utf-8')
# 1.3 Send the POST login request
cookie_jar = cookiejar.CookieJar()
# A handler with cookie support
cook_handler = urllib.request.HTTPCookieProcessor(cookie_jar)
# Build an opener from the handler
opener = urllib.request.build_opener(cook_handler)
# Send the POST request with the parameters
# Set the request headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}
login_request = urllib.request.Request(login_url, headers=headers, data=login_str)
# If login succeeds, the CookieJar saves the cookie automatically
opener.open(login_request)

#   2. Request the member center with the saved cookie
center_url = "https://www.yaozh.com/member/"
center_request = urllib.request.Request(center_url, headers=headers)
response = opener.open(center_request)
# bytes --> str
data = response.read().decode()
with open('02cookies.html', 'w', encoding="utf-8") as f:
    f.write(data)

Result:

The page is returned as user s1mpL3.

Notes:

1. Using the cookiejar library:

from http import cookiejar
cookiejar.CookieJar()

2. HTTPCookieProcessor(): a handler with cookie support.

3. To log in via code, only the username and password need changing.
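
The pieces above combine into a short sketch; no request is sent here, but `opener.open(...)` would then save and resend cookies automatically:

```python
import urllib.request
from http import cookiejar

# In-memory cookie store
jar = cookiejar.CookieJar()
# Handler that saves Set-Cookie responses into the jar
# and attaches stored cookies to later requests
handler = urllib.request.HTTPCookieProcessor(jar)
# Opener that routes every request through the cookie handler
opener = urllib.request.build_opener(handler)

print(len(jar))  # 0 -- empty until a response sets a cookie
```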

4. Python 3 error:

UnicodeEncodeError: 'gbk' codec can't encode character '\xa0' in position 19523: illegal multibyte sequence

Fix: add encoding="utf-8" to open():

with open('02cookies.html', 'w', encoding="utf-8") as f:
    f.write(data)

Reference solutions:

https://www.cnblogs.com/cwp-bg/p/7835434.html

https://www.cnblogs.com/shaosks/p/9287474.html

https://blog.csdn.net/github_35160620/article/details/53353672

URLError:

urllib.request raises two error classes, URLError and HTTPError,
where HTTPError is a subclass of URLError.
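
Because of this subclass relationship, the more specific HTTPError must be caught before URLError; a quick check:

```python
import urllib.error

# HTTPError (the server responded with an error status) specializes
# URLError (anything that went wrong while reaching the server)
assert issubclass(urllib.error.HTTPError, urllib.error.URLError)
print("HTTPError is a URLError")
```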

Test:

Code 1:

import urllib.request
url = 'http://www.xiaojian.cn'  # a made-up domain
response = urllib.request.urlopen(url)

Result 1:

Partial traceback:

raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 11001] getaddrinfo failed>

Code 2:

import urllib.request
url = 'https://blog.csdn.net/dQCFKyQDXYm3F8rB0/article/details/1111'
response = urllib.request.urlopen(url)

Result 2:

Partial traceback:

raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

Code 3:

import urllib.request
import urllib.error


url = 'https://blog.csdn.net/dQCFKyQDXYm3F8rB0/article/details/1111'
try:
    response = urllib.request.urlopen(url)
except urllib.error.HTTPError as error:  # catch the more specific subclass first
    print(error.code)
except urllib.error.URLError as error:
    print(error)

Result 3: prints 404, caught by the HTTPError branch.

Code 4:

import urllib.request
import urllib.error


url = 'https://blog.cs1'
try:
    response = urllib.request.urlopen(url)
except urllib.error.HTTPError as error:
    print(error.code)
except urllib.error.URLError as error:
    print(error)

Result 4: prints the URLError message, since the bogus hostname cannot be resolved.

Requests:

Preparation:

Install the third-party module:

pip install requests

Test 1 (basics: GET):

Code 1 (without request headers):

import requests

url = "http://www.baidu.com"
response = requests.get(url)

# The content attribute returns bytes
data = response.content
print(data)

data1 = response.content.decode('utf-8')
print(type(data1))

# The text attribute returns str; if the response declares no encoding,
# requests guesses one and may guess wrong, so prefer content
data2 = response.text
print(type(data2))

Result 1:

Code 2 (with request headers):

import requests


class RequestSpider(object):
    def __init__(self):
        url = "https://www.baidu.com/"
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
        }
        self.response = requests.get(url, headers=headers)

    def run(self):
        data = self.response.content

        # 1. Get the request headers
        request_headers1 = self.response.request.headers
        print(request_headers1)

        # 2. Get the response headers
        request_headers2 = self.response.headers
        print(request_headers2)

        # 3. Get the response status code
        code = self.response.status_code
        print(code)

        # 4. Get the request cookies
        request_cookie = self.response.request._cookies
        print(request_cookie)
        # Note: when visiting Baidu in a browser there may be many cookies;
        # those are added by the browser, not sent by the server

        # 5. Get the response cookies
        response_cookie = self.response.cookies
        print(response_cookie)


RequestSpider().run()

Result:

E:\python\python.exe H:/code/Python爬蟲/Day04/03-requests_use2.py
{
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} {'Bdpagetype': '1', 'Bdqid': '0xe0b22322001a2c4a', 'Cache-Control': 'private', 'Connection': 'keep-alive', 'Content-Encoding': 'gzip', 'Content-Type': 'text/html;charset=utf-8', 'Date': 'Sat, 30 Jan 2021 09:27:06 GMT', 'Expires': 'Sat, 30 Jan 2021 09:26:56 GMT', 'P3p': 'CP=" OTI DSP COR IVA OUR IND COM ", CP=" OTI DSP COR IVA OUR IND COM "', 'Server': 'BWS/1.1', 'Set-Cookie': 'BAIDUID=E577CD647F2B1CA6A7C0F4112781CAF9:FG=1; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com, BIDUPSID=E577CD647F2B1CA6A7C0F4112781CAF9; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com, PSTM=1611998826; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com, BAIDUID=E577CD647F2B1CA65749857950B007E4:FG=1; max-age=31536000; expires=Sun, 30-Jan-22 09:27:06 GMT; domain=.baidu.com; path=/; version=1; comment=bd, BDSVRTM=0; path=/, BD_HOME=1; path=/, H_PS_PSSID=33423_33516_33402_33273_33590_26350_33568; path=/; domain=.baidu.com, BAIDUID_BFESS=E577CD647F2B1CA6A7C0F4112781CAF9:FG=1; Path=/; Domain=baidu.com; Expires=Thu, 31 Dec 2037 23:55:55 GMT; Max-Age=2147483647; Secure; SameSite=None', 'Strict-Transport-Security': 'max-age=172800', 'Traceid': '1611998826055672090616191042239287929930', 'X-Ua-Compatible': 'IE=Edge,chrome=1', 'Transfer-Encoding': 'chunked'} 200 <RequestsCookieJar[]> <RequestsCookieJar[<Cookie BAIDUID=E577CD647F2B1CA65749857950B007E4:FG=1 for .baidu.com/>, <Cookie BAIDUID_BFESS=E577CD647F2B1CA6A7C0F4112781CAF9:FG=1 for .baidu.com/>, <Cookie BIDUPSID=E577CD647F2B1CA6A7C0F4112781CAF9 for .baidu.com/>, <Cookie H_PS_PSSID=33423_33516_33402_33273_33590_26350_33568 for .baidu.com/>, <Cookie PSTM=1611998826 for .baidu.com/>, <Cookie BDSVRTM=0 for www.baidu.com/>, <Cookie BD_HOME=1 
for www.baidu.com/>]> Process finished with exit code 0

Test 2 (automatic URL encoding):

Code 1:

# https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1&tn=baidu&wd=%E7%88%AC%E8%99%AB&oq=%2526lt%253BcH0%2520-%2520Nu1L&rsv_pq=d38dc072002f5aef&rsv_t=62dcS%2BcocFsilJnL%2FcjmqGeUvo6S6XMFTiyfxi22AnqTbscZBf6K%2F13WW%2Bo&rqlang=cn&rsv_enter=1&rsv_dl=tb&rsv_sug3=4&rsv_sug1=3&rsv_sug7=100&rsv_sug2=0&rsv_btype=t&inputT=875&rsv_sug4=875
# https://www.baidu.com/s?wd=%E7%88%AC%E8%99%AB

import requests

# The Chinese parameter in the URL is percent-encoded automatically
url = "http://www.baidu.com/s?wd=爬蟲"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}
response = requests.get(url, headers=headers)
data = response.content.decode()
with open('baidu.html', 'w', encoding="utf-8") as f:
    f.write(data)

Result:

The request succeeds and the file is written: the Chinese characters in the URL were percent-encoded automatically.

Code 2:

# https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1&tn=baidu&wd=%E7%88%AC%E8%99%AB&oq=%2526lt%253BcH0%2520-%2520Nu1L&rsv_pq=d38dc072002f5aef&rsv_t=62dcS%2BcocFsilJnL%2FcjmqGeUvo6S6XMFTiyfxi22AnqTbscZBf6K%2F13WW%2Bo&rqlang=cn&rsv_enter=1&rsv_dl=tb&rsv_sug3=4&rsv_sug1=3&rsv_sug7=100&rsv_sug2=0&rsv_btype=t&inputT=875&rsv_sug4=875
# https://www.baidu.com/s?wd=%E7%88%AC%E8%99%AB

import requests

# The params dict is percent-encoded automatically
url = "http://www.baidu.com/s"
params = {
    'wd': '爬蟲',
}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}
response = requests.get(url, headers=headers, params=params)
data = response.content.decode()
with open('baidu1.html', 'w', encoding="utf-8") as f:
    f.write(data)

Result:

The request succeeds and the file is written: the dict parameters were percent-encoded automatically.
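
The encoding requests performs can be reproduced with the standard library. The sketch below only builds the query string; note it uses the simplified form 爬虫, which is what the captured %E7%88%AC%E8%99%AB decodes to:

```python
from urllib.parse import urlencode

# The same percent-encoding requests applies to the params dict
query = urlencode({"wd": "爬虫"})
print(query)  # wd=%E7%88%AC%E8%99%AB
```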

Note:

To send a POST request with parameters:

requests.post(url, data={...})   # form parameters
requests.post(url, json={...})   # JSON body
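
A minimal sketch of the difference between the two styles; nothing is sent here, the request is only prepared, and httpbin.org is just a placeholder URL:

```python
import requests

# data= sends a form-encoded body
form = requests.Request("POST", "http://httpbin.org/post", data={"k": "v"}).prepare()
print(form.headers["Content-Type"])  # application/x-www-form-urlencoded
print(form.body)                     # k=v

# json= serializes the dict to JSON and sets the Content-Type accordingly
js = requests.Request("POST", "http://httpbin.org/post", json={"k": "v"}).prepare()
print(js.headers["Content-Type"])    # application/json
```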

Test 3 (JSON):

Code:

import requests
import json

# This endpoint returns standard JSON, not HTML
url = "https://api.github.com/user"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}
response = requests.get(url, headers=headers)
# str
data = response.content.decode()
print(data)

# str --> dict
data_dict = json.loads(data)
print(data_dict["message"])

# json() converts the JSON string to a Python dict/list automatically
data1 = response.json()
print(data1)
print(type(data1))
print(data1["message"])

Result:

E:\python\python.exe H:/code/Python爬蟲/Day04/03-requests_use3.py
{
  "message": "Requires authentication",
  "documentation_url": "https://docs.github.com/rest/reference/users#get-the-authenticated-user"
}

Requires authentication
{'message': 'Requires authentication', 'documentation_url': 'https://docs.github.com/rest/reference/users#get-the-authenticated-user'}
<class 'dict'>
Requires authentication

Process finished with exit code 0
