Scraping Taobao Product Information

Posted by 不問散人 on 2020-12-20

The Python course project I've been working on these past two days includes a web-scraping assignment that requires scraping Taobao, so today let's walk through scraping Taobao product information!

  • First, pin down the goal: search Taobao for a keyword and, from the returned pages, extract the product title, price, shipping location, number of buyers, and shop name. All of this sits directly in the page source.
  • OK, the goal is set, so let's go straight at it. Type any keyword into Taobao's search box and look at the URL, then flip through a few pages and watch how it changes. To make the differences easier to compare, here are the URLs for pages 1, 2, and 3 side by side:
  • https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8,
    https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=3&ntoffset=3&p4ppushleft=1%2C48&s=44,
    https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=0&ntoffset=6&p4ppushleft=1%2C48&s=88
  • So long, so messy, unreadable. What to do? Don't panic: feed them to a URL-decoding tool, http://tool.chinaz.com/tools/urlencode.aspx. Paste a URL in and you'll see that q= is followed by the URL-encoded search keyword. What about the rest of it? Delete everything that is identical across the three URLs! Don't ask how I know; if a parameter is the same on every page, it's optional, and optional means useless. The two offset parameters can usually be deleted without causing problems either. In the end only s is left: missing on page 1 (so it defaults to 0), 44 on page 2, 88 on page 3. So s controls pagination: s = 44 * (page - 1).
  • OK, now we can build the URLs: take the base https://s.taobao.com/search?, append q=<encoded keyword> and &s=(page - 1) * 44. urllib.parse.quote('string') handles the URL encoding. Let's start with 20 pages.
    key = '手套'
    key = parse.quote(key)
    url = 'https://s.taobao.com/search?q={}&s={}'
    page = 20
    for i in range(page):
        url_page = url.format(key, i * 44)
        print(url_page)
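Both conclusions from the URL analysis are easy to verify with urllib.parse (a quick side check, not part of the scraper itself):

```python
from urllib.parse import quote, unquote

# The q= parameter is just the URL-encoded keyword (here, simplified 羽绒服):
print(unquote('%E7%BE%BD%E7%BB%92%E6%9C%8D'))  # 羽绒服
print(quote('手套'))                            # %E6%89%8B%E5%A5%97

# And s is the result offset: page 1 gives s=0, page 2 gives s=44, page 3 gives s=88
for page in (1, 2, 3):
    print(page, 44 * (page - 1))
```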

Then, when we build a headers dict the normal way and fetch the page with get(), you'll find it blows up: the returned content is wrong. So what now? What about the assignment? "Programming by searching CSDN" is not just a joke, and that's how I learned that scraping Taobao needs a "fake login": you have to grab the full headers from a logged-in browser session (a User-Agent alone definitely won't cut it) and pass them in with requests.get(url, headers=headers). How do you get those headers? Right-click the page and open the browser's dev tools, go to Network, refresh with Ctrl+R, find the request starting with search? under All, right-click it and choose Copy as cURL (bash), then open https://curl.trillworks.com/, paste it in, copy the headers dict from the Python requests code on the right, and send the request again. Done!
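Even with copied headers, Taobao sometimes still bounces the request to a login or captcha page. At the time of writing, the genuine result page embedded its data in a script assigning a g_page_config variable, so a cheap sanity check (my own little helper, not part of the course assignment) is to test for it:

```python
def looks_blocked(html: str) -> bool:
    # The genuine search page embeds its data in a script that assigns
    # g_page_config; a login/captcha redirect does not, so its absence
    # is a cheap "did we get blocked?" test.
    return 'g_page_config' not in html


print(looks_blocked('<script>g_page_config = {"mods": {}};</script>'))  # False
print(looks_blocked('<html>please log in</html>'))                      # True
```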

  • Page source in hand, time to extract the data. Since the content lives inside a script tag: re to the rescue!
    I've thoughtfully pasted the regular expressions right here (whispering: these are adapted from ones my teacher wrote).
    title = re.findall(r'"raw_title":"(.*?)"', response)
    nick = re.findall(r'"nick":"(.*?)"', response)
    item_loc = re.findall(r'"item_loc":"(.*?)"', response)
    price = re.findall(r'"view_price":"(.*?)"', response)
    sales = re.findall(r'"view_sales":"(.*?)"', response)
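To see what these patterns return, here they are run against a made-up fragment shaped like the real response (the sample values are invented for illustration):

```python
import re

# A made-up fragment shaped like the JSON embedded in the search page
sample = ('"raw_title":"保暖針織手套","view_price":"208.00",'
          '"nick":"某旗艦店","item_loc":"江蘇 連雲港","view_sales":"100人付款"')

print(re.findall(r'"raw_title":"(.*?)"', sample))   # ['保暖針織手套']
print(re.findall(r'"view_price":"(.*?)"', sample))  # ['208.00']
print(re.findall(r'"nick":"(.*?)"', sample))        # ['某旗艦店']
```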

Each of these re.findall() calls returns a list:

  • title is the product title
  • nick is the shop name (this one comes back with one extra entry; I checked, and the last match is useless, so just slice it off)
  • item_loc is the location
  • price is the price
  • sales is the number of buyers
    Ahem, then write them out in matching order and you're done. We recommend saving to CSV here, dear!
    Full source code:
from urllib import parse
import requests
import re
import time
import csv
import os


def get_response(url):
    headers = {
        'authority': 's.taobao.com',
        'cache-control': 'max-age=0',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'accept-language': 'zh-CN,zh;q=0.9',
        # paste your own logged-in cookie here; the one below is the author's and will have expired
        'cookie': 'cna=jKMMGOupxlMCAWpbGwO3zyh4; tracknick=tb311115932; _cc_=URm48syIZQ%3D%3D; thw=cn; hng=CN%7Czh-CN%7CCNY%7C156; miid=935759921262504718; t=bd88fe30e6685a4312aa896a54838a7e; sgcookie=E100kQv1bRHxrwnulL8HT5z2wacaf40qkSLYMR8tOCmVIjE%2FxrR5nzhju3UySug2dFrigMAy3v%2FjkNElYj%2BDcqmgdA%3D%3D; uc3=nk2=F5RGNwnC%2FkUVLHU%3D&vt3=F8dCuf2OXoGHiuEl2D8%3D&id2=VyyUy7sStBYaoA%3D%3D&lg2=U%2BGCWk%2F75gdr5Q%3D%3D; lgc=tb311115932; uc4=nk4=0%40FY4NAq0PgYBeuIHFyHE%2F9QSZnG6juw%3D%3D&id4=0%40VXtbYhfspVba1o0MN1OuNaxcY%2BUP; enc=tJQ9f26IYMQmwsNzfEZi6fJNcflLvL6bdcU4yyus3rqfsM37Mpy1jvcSMZ%2BYSaE5vziMtC9svi%2B4JVMfCnIsWA%3D%3D; _samesite_flag_=true; cookie2=112f2a76112f88f183403c6a3c4b721f; _tb_token_=eeeb18eb59e1; tk_trace=oTRxOWSBNwn9dPyorMJE%2FoPdY8zfvmw%2Fq5v3iwJfzrr80CDMiLUbZX4jcwHeizGatsFqHolN1SmeHD692%2BvAq7YJ%2FbITqs68WMjdAhcxP7WLdArSe8thnE40E0eWE4GQTvQP9j5XSLFbjZAE7XgwagUcgW%2Fg6rXAuZaws1NrrZksnq%2BsYQUb%2FHT%2Fa1m%2Fctub0jBbjlmp8ZDJGSpGyPMgg561G3vjIRPVnkhRCyG9GgwteJUZAsyQIkeh7xtdyN%2BF50TIambWylXMZhQW7LQGZ48rHl3Q; lLtC1_=1; v=0; mt=ci=-1_0; _m_h5_tk=b0940eb947e1d7b861c7715aa847bfc7_1608386181566; _m_h5_tk_enc=6a732872976b4415231b3a5270e90d9c; xlly_s=1; alitrackid=www.taobao.com; lastalitrackid=www.taobao.com; JSESSIONID=136875559FEC7BCA3591450E7EE11104; uc1=cookie14=Uoe0ZebpXxPftA%3D%3D; tfstk=cgSFBiAIAkEUdZx7kHtrPz1rd-xdZBAkGcJ2-atXaR-zGpLhi7lJIRGJQLRYjef..; l=eBI8YSBIOXAWZRYCBOfaourza779sIRYSuPzaNbMiOCP9_fp5rvCWZJUVfT9CnGVh6SBR3-wPvUJBeYBqnY4n5U62j-la_Dmn; isg=BAsLX3b80AwyYAwAj8PO7RC0mq_1oB8iDqsYtX0I5sqhnCv-BXFHcGI-cpxyuXca',
    }
    response = requests.get(url, headers=headers).content.decode('utf-8')
    # "raw_title":"卡蒙手套女2020秋冬季新款運動保暖護手休閒針織觸屏防寒羊皮手套"
    # "view_price":"208.00"
    # "nick":"intersport旗艦店"
    # "item_loc":"江蘇 連雲港"
    # "view_sales":"0人付款"
    title = re.findall(r'"raw_title":"(.*?)"', response)
    nick = re.findall(r'"nick":"(.*?)"', response)[:-1]  # the last "nick" match is junk, drop it
    item_loc = re.findall(r'"item_loc":"(.*?)"', response)
    price = re.findall(r'"view_price":"(.*?)"', response)
    sales = re.findall(r'"view_sales":"(.*?)"', response)
    return [title, nick, item_loc, price, sales]


def tocsv(file, filename):
    # newline='' keeps csv from inserting blank rows on Windows
    with open(filename, 'a+', encoding='utf-8', newline='') as f:
        f.seek(0)
        write = csv.writer(f)
        if f.read() == '':
            # header order must match the row order below: title, shop, location, price, sales
            write.writerow(('標題', '店鋪', '地點', '價格', '付款人數'))

        for i in range(len(file[0])):
            write.writerow((file[0][i], file[1][i], file[2][i], file[3][i], file[4][i]))


if __name__ == '__main__':
    filename = 'taobao.csv'
    key = '手套'
    key = parse.quote(key)
    url = 'https://s.taobao.com/search?q={}&s={}'
    page = 20
    if os.path.exists(filename):
        os.remove(filename)
    for i in range(page):
        url_page = url.format(key, i * 44)
        print(url_page)
        res = get_response(url_page)
        time.sleep(1)
        tocsv(res, filename=filename)
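After a run, you can eyeball the first few rows of the output with a small helper (my own addition, assuming the scraper above produced taobao.csv):

```python
import csv

def preview_csv(path, n=3):
    # Return the first n rows of the CSV as lists, header included.
    with open(path, encoding='utf-8', newline='') as f:
        return [row for _, row in zip(range(n), csv.reader(f))]

# e.g. for row in preview_csv('taobao.csv'): print(row)
```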
