Scraping web images with Python

Posted by scorpiovo on 2019-10-15

A friend and I have recently been studying web scraping. We each took one site and scraped its images; below is my code for the site I was responsible for. The write-up for his code is at

https://blog.csdn.net/qq_39305249/article/details/102628783

My multithreaded cosplay image scraper is at
https://blog.csdn.net/qq_45026221

import requests
from bs4 import BeautifulSoup
import time
import os


def get_main_urls(headers):
    # Walk all 233 list pages and append every gallery link to r.txt,
    # so later runs can read the links from disk instead of re-requesting them.
    for i in range(233):
        res = requests.get('https://www.mzitu.com/page/' + str(i + 1), headers=headers)
        soup = BeautifulSoup(res.text, 'lxml')
        items = soup.find(class_='postlist').find_all('li')
        for item in items:
            url = item.find('a').get('href')
            with open('r.txt', 'a') as f:
                f.write(url + ',')


def get_pics_urls(url, headers):
    # Download every picture of one gallery into images/<title>/.
    res2 = requests.get(url, headers=headers)
    soup2 = BeautifulSoup(res2.text, 'lxml')
    # The second-to-last pagination link holds the total page count.
    total = soup2.find(class_='pagenavi').find_all('a')[-2].find('span').string
    title = soup2.find(class_='main-title').string
    folder = 'images/' + title + '/'
    os.makedirs(folder, exist_ok=True)
    for i in range(int(total)):
        # Each page of the gallery (url/1, url/2, ...) shows one image.
        res3 = requests.get(url + '/' + str(i + 1), headers=headers)
        soup3 = BeautifulSoup(res3.text, 'lxml')
        pic_url = soup3.find('img').get('src')
        print('downloading......' + title + ' NO.' + str(i + 1))
        filename = folder + str(i + 1) + '.jpg'
        with open(filename, 'wb') as f:
            f.write(requests.get(pic_url, headers=headers).content)
    print('Current gallery finished.')


if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/77.0.3865.120 Safari/537.36',
               'Referer': 'https://www.mzitu.com/'
               }
    print("Program started at {}, please wait...".format(time.ctime()))
    # Uncomment the next line to regenerate r.txt (see the note below).
    # get_main_urls(headers)
    with open('r.txt', 'r') as f:
        # The file ends with a trailing comma, so drop the empty last entry.
        urls = [u for u in f.read().split(',') if u]
    for i, url in enumerate(urls, start=1):
        print('Downloading gallery No.' + str(i) + ' of 5574')
        get_pics_urls(url, headers)
One thing to note: this site blocks your IP after too many requests, so the scrape can break off halfway through. Are the images I worked so hard to collect just gone like that?
My workaround: first request the site once to save all of the gallery links into r.txt, then read them straight from that file for the later scraping and downloading. This cuts out a large number of requests, so the IP does not get blocked. The get_main_urls function in the code is what collects those urls into the file; it no longer needs to be run, just read r.txt to get the urls.
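Even with the cached link list, a long run can still die partway through. A minimal sketch of one way to make the script resumable, assuming the folder layout above (images/<title>/ with numbered .jpg files); the helper name is my own, not part of the original code:

import os


def already_done(folder, total):
    # Hypothetical helper, not in the original script: treat a gallery as
    # finished when its folder already holds `total` .jpg files from an
    # earlier run, so a restart can skip it.
    if not os.path.isdir(folder):
        return False
    return len([f for f in os.listdir(folder) if f.endswith('.jpg')]) >= total

get_pics_urls could call this right after computing folder and int(total) and return early, so restarting the script skips everything that finished before the IP ban hit.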
If you want to generate r.txt yourself, delete the # in front of the get_main_urls(headers) call in the main block, but in most cases the target site will notice the repeated requests and block your IP for a while.
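Whether regenerating r.txt or downloading the galleries themselves, pacing the requests helps. A rough sketch, not from the original post, of a small wrapper around requests.get that sleeps between attempts and retries a few times before giving up:

import time

import requests


def polite_get(url, headers, retries=3, delay=2):
    # Sleep before every attempt so the server sees fewer bursts,
    # and retry a few times instead of crashing mid-scrape.
    for attempt in range(retries):
        try:
            time.sleep(delay)
            res = requests.get(url, headers=headers, timeout=10)
            res.raise_for_status()
            return res
        except requests.RequestException:
            if attempt == retries - 1:
                raise

Swapping the bare requests.get calls for polite_get trades speed for a better chance of getting through all 5574 galleries in one run.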
The Baidu Cloud link for r.txt is attached:
Link: https://pan.baidu.com/s/1YMG-c9NJTm8b9Aq0BerCKw
Extraction code: iycf
