Web Scraping: the requests Library

Posted by HammerZe on 2022-03-20

The requests library


Although Python's standard-library urllib module already covers most everyday needs, its API is awkward to use. Requests bills itself as "HTTP for Humans", which signals a much cleaner and more convenient interface;

Requests is written in Python and built on top of urllib, but it is far more convenient than urllib, saves a great deal of work, and fully covers everyday HTTP needs;

Installation
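requests is not part of the standard library, so install it first (assuming pip is available):

pip install requests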

Excerpt from the source code

"""
Requests HTTP Library
~~~~~~~~~~~~~~~~~~~~~

Requests is an HTTP library, written in Python, for human beings.
Basic GET usage:

   >>> import requests
   >>> r = requests.get('https://www.python.org')
   >>> r.status_code
   200
   >>> b'Python is a programming language' in r.content
   True

... or POST:

   >>> payload = dict(key1='value1', key2='value2')
   >>> r = requests.post('https://httpbin.org/post', data=payload)
   >>> print(r.text)
   {
     ...
     "form": {
       "key1": "value1",
       "key2": "value2"
     },
     ...
   }

The other HTTP methods are supported - see `requests.api`. Full documentation
is at <https://requests.readthedocs.io>.

:copyright: (c) 2017 by Kenneth Reitz.
:license: Apache 2.0, see LICENSE for more details.
"""

From this docstring we can see that the core usage is GET and POST requests, along with checking the status code and reading the response text. The other HTTP methods live in requests.api. The common methods are used as follows:

Sending a GET request

To pass data in the URL query string, use the params argument; requests URL-encodes it for you automatically, so no manual encoding is needed:

import requests

url = 'http://httpbin.org/get'
payload = {'key': 'value', 'key2': 'value'}
r = requests.get(url, params=payload)
print(r.text)
print(r.url)  # http://httpbin.org/get?key=value&key2=value

Passing a list as the value:

payload = {'key1': 'value1', 'key2': ['value2', 'value3']}

r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)
# http://httpbin.org/get?key1=value1&key2=value2&key2=value3

Note: keys whose value is None will not be added to the URL query string.
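A quick check against httpbin shows the None key being dropped:

import requests

payload = {'key1': 'value1', 'key2': None}
r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)  # http://httpbin.org/get?key1=value1  (key2 is dropped)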

import requests

# add a headers parameter
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}
# target URL: https://www.baidu.com/s?wd=%E4%B8%AD%E5%9B%BD
url = 'https://www.baidu.com/s'
kw = {'wd': '中國'}

# params accepts a dict or a string of query parameters;
# a dict is converted to URL encoding automatically, no urlencode() needed
response = requests.get(url, headers=headers, params=kw)
print(response)  # <Response [200]>

'''More attributes and methods'''
# print(response.text)  # returns the body decoded to str (unicode)
print(response.content)      # returns the raw byte stream
print(response.status_code)  # 200
print(response.url)          # https://www.baidu.com/s?wd=%E4%B8%AD%E5%9B%BD  -- the full URL
print(response.encoding)     # utf-8
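The Response object also offers a json() method that parses a JSON body straight into Python objects; a quick sketch against httpbin:

import requests

resp = requests.get('http://httpbin.org/get', params={'q': 'test'})
data = resp.json()   # parse the JSON response body into a dict
print(data['args'])  # {'q': 'test'}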

The difference between response.text and response.content:

  1. response.content: the raw data fetched from the network, with no decoding applied, so it is of type bytes; strings stored on disk or sent over the network are always bytes.
  2. response.text: a str; it is what requests produces by decoding response.content. Decoding requires a character encoding, and requests guesses one on its own, so the guess is sometimes wrong and the decoded text comes out garbled. In that case decode manually, e.g. response.content.decode('utf8').
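A minimal sketch of the two attributes and of manual decoding, assuming a UTF-8 page:

import requests

resp = requests.get('http://httpbin.org/encoding/utf8')
print(type(resp.content))  # <class 'bytes'> -- raw bytes off the wire
print(type(resp.text))     # <class 'str'>  -- decoded using requests' guess

# if resp.text is garbled, decode the raw bytes yourself:
text = resp.content.decode('utf-8')
# or tell requests which encoding to use before reading .text:
resp.encoding = 'utf-8'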

Sending a POST request

The simplest form:

r = requests.post('http://httpbin.org/post', data={'key': 'value'})

A fuller example, submitting login form data:

import requests

url = 'https://i.meishi.cc/login.php?redirect=https%3A%2F%2Fwww.meishij.net%2F'
headers={
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}
data = {
    'redirect': 'https://www.meishij.net/',
    'username': '1097566154@qq.com',
    'password': 'wq15290884759.'
}
resp = requests.post(url,headers=headers,data=data)
print(resp.text)
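Besides form data, requests can also send a JSON body via the json parameter; a small sketch against httpbin:

import requests

resp = requests.post('http://httpbin.org/post', json={'key': 'value'})
print(resp.json()['json'])  # {'key': 'value'} -- httpbin echoes the JSON body back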

Requests' concise API means every HTTP request type is obvious. So what about the other types: PUT, DELETE, HEAD, and OPTIONS? They are just as simple:

>>> r = requests.put('http://httpbin.org/put', data = {'key':'value'})
>>> r = requests.delete('http://httpbin.org/delete')
>>> r = requests.head('http://httpbin.org/get')
>>> r = requests.options('http://httpbin.org/get')

Using proxies with requests

Just pass a proxies argument to the request method (get, post, etc.):

import requests

proxy = {
    'http': '111.77.197.127:9999'  # sample proxy from the original post; it may no longer be alive
}
url = 'http://www.httpbin.org/ip'
resp = requests.get(url, proxies=proxy)
print(resp.text)
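A slightly fuller sketch with both schemes and basic error handling; the proxy address below is a placeholder, not a working proxy:

import requests

proxies = {
    'http': 'http://127.0.0.1:8888',   # placeholder address -- substitute a live proxy
    'https': 'http://127.0.0.1:8888',
}
try:
    resp = requests.get('http://httpbin.org/ip', proxies=proxies, timeout=5)
    print(resp.text)  # httpbin reports the IP it saw, i.e. the proxy's IP
except requests.exceptions.RequestException as e:
    print('request failed:', e)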

If a response carries cookies, you can read them from the cookies attribute:

import requests

resp = requests.get('http://www.baidu.com/')
print(resp.cookies)
print(resp.cookies.get_dict())
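Cookies can also be sent with a request via the cookies parameter; httpbin echoes back what it received:

import requests

cookies = {'session_id': 'abc123'}  # example value, not a real session
resp = requests.get('http://httpbin.org/cookies', cookies=cookies)
print(resp.text)  # {"cookies": {"session_id": "abc123"}}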

Simulating login with a cookie

import requests
url = 'https://www.zhihu.com/hot'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
    'cookie':'_zap=59cde9c3-c5c0-4baa-b756-fa16b5e72b10; d_c0="APDi1NJcuQ6PTvP9qa1EKY6nlhVHc_zYWGM=|1545737641"; __gads=ID=237616e597ec37ad:T=1546339385:S=ALNI_Mbo2JturZesh38v7GzEeKjlADtQ5Q; _xsrf=pOd30ApWQ2jihUIfq94gn2UXxc0zEeay; q_c1=1767e338c3ab416692e624763646fc07|1554209209000|1545743740000; tst=h; __utma=51854390.247721793.1554359436.1554359436.1554359436.1; __utmc=51854390; __utmz=51854390.1554359436.1.1.utmcsr=zhihu.com|utmccn=(referral)|utmcmd=referral|utmcct=/hot; __utmv=51854390.100-1|2=registration_date=20180515=1^3=entry_date=20180515=1; l_n_c=1; l_cap_id="OWRiYjI0NzJhYzYwNDM3MmE2ZmIxMGIzYmQwYzgzN2I=|1554365239|875ac141458a2ebc478680d99b9219c461947071"; r_cap_id="MmZmNDFkYmIyM2YwNDAxZmJhNWU1NmFjOGRkNDNjYjc=|1554365239|54372ab1797cba8c4dd224ba1845dd7d3f851802"; cap_id="YzQwNGFlYWNmNjY3NDFhNGI4MGMyYjZjYjRhMzQ1ZmE=|1554365239|385cc25e3c4e3b0b68ad5747f623cf3ad2955c9f"; n_c=1; capsion_ticket="2|1:0|10:1554366287|14:capsion_ticket|44:MmE5YzNkYjgzODAyNDgzNzg5MTdjNmE3NjQyODllOGE=|40d3498bedab1b7ba1a247d9fc70dc0e4f9a4f394d095b0992a4c85e32fd29be"; z_c0="2|1:0|10:1554366318|4:z_c0|92:Mi4xOWpCeUNRQUFBQUFBOE9MVTBseTVEaVlBQUFCZ0FsVk5iZzJUWFFEWi1JMkxnQXlVUXh2SlhYb3NmWks3d1VwMXRB|81b45e01da4bc235c2e7e535d580a8cc07679b50dac9e02de2711e66c65460c6"; tgw_l7_route=578107ff0d4b4f191be329db6089ff48'
}
resp = requests.get(url,headers=headers)
print(resp.text)

Session: sharing cookies

To share cookies across requests with the requests library, use the Session object it provides;

Note: this session is not the session from web development; here it is simply a conversation object.

import requests

# login URL
post_url = 'https://i.meishi.cc/login.php?redirect=https%3A%2F%2Fwww.meishij.net%2F'
post_data = {
    'username': '1097566154@qq.com',
    'password': 'wq15290884759.'
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}

# log in
# create a session object; it stores cookies and reuses them on later requests
session = requests.Session()
# send the POST request carrying the login data
session.post(post_url, headers=headers, data=post_data)

'''With the login cookie now held by the session, visit the personal page'''
url = 'https://i.meishi.cc/cook.php?id=13686422'

resp = session.get(url)
print(resp.text)
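A session can also carry default headers so you don't have to repeat them on every call; a small sketch with a placeholder User-Agent:

import requests

session = requests.Session()
# headers set on the session are sent with every request it makes
session.headers.update({'User-Agent': 'my-crawler/0.1'})  # placeholder UA string
resp = session.get('http://httpbin.org/headers')
print(resp.text)  # the echoed headers include the session-level User-Agent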

Handling untrusted SSL certificates

Sites with trusted SSL certificates, such as https://www.baidu.com/, work with requests out of the box and return a normal response;

But if a site uses a self-signed certificate, browsers refuse to trust it and flag it as insecure, and scraping it raises an SSL error. The fix is to add the verify=False argument.

Example code:

resp = requests.get('https://inv-veri.chinatax.gov.cn/', verify=False)
print(resp.content.decode('utf-8'))
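Note that verify=False makes urllib3 emit an InsecureRequestWarning on every request; if you accept the risk, you can silence it explicitly:

import urllib3
import requests

# suppress the InsecureRequestWarning that verify=False triggers
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

resp = requests.get('https://inv-veri.chinatax.gov.cn/', verify=False)
print(resp.status_code)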

Reference: Requests 2.18.1 documentation (python-requests.org)
