Python Web Scraping for Beginners: Hands-On Project (1)

Posted by Residual NS on 2020-02-18
  • Collecting static data
    For the first project, we will scrape job listings from Lagou. Without further ado, let's get started!

1. First, import the required libraries:

import requests          # HTTP requests
from lxml import etree   # HTML parsing with XPath
import pandas as pd      # tabular data handling and CSV export
from time import sleep   # pauses between requests
import random            # randomized delays

2. Check your cookie (it can be copied from the request headers in the browser's developer tools):
3. Set the request headers:

cookie = 'user_trace_token=20190329130619-9fcf5ee7-dcc5-4a9b-b82e-53a0eba6862c...LGRID=20190403124044-a4a8c961-55ca-11e9-bd16-5254005c3644'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3650.400 QQBrowser/10.4.3341.400',
    'Cookie': cookie  # pass the cookie variable itself, not the literal string 'cookie'
}
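
Before crawling, a quick sanity check that the headers and cookie are accepted can save time. The sketch below is not part of the original post; it simply requests one listing page and prints the status code and response size. Note that an anti-crawler page can still come back with status 200, so inspecting the response text is also worthwhile.

# Sanity check: request one listing page with the headers defined above.
# A 200 status alone does not guarantee the cookie is still valid.
test_url = 'https://www.lagou.com/zhaopin/jiqixuexi/2/?filterOption=3'
resp = requests.get(test_url, headers=headers)
print(resp.status_code, len(resp.text))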

4. Inspect the page structure and loop over the page numbers to collect the listings:

# Result lists that accumulate data across all pages
job_name, job_address, job_company, job_salary, job_des = [], [], [], [], []

for i in range(2, 8):
    sleep(random.randint(3, 10))  # random pause to reduce the risk of being blocked
    url = 'https://www.lagou.com/zhaopin/jiqixuexi/{}/?filterOption=3'.format(i)
    print('Scraping page {}...'.format(i), url)
    # Request the page and parse the HTML
    con = etree.HTML(requests.get(url=url, headers=headers).text)
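
If the crawl should survive the occasional failed request, the request-and-parse one-liner can be wrapped in a small helper. The fetch_html() name below is hypothetical and not from the original post; it just adds a timeout and a status check:

# Hypothetical helper (not in the original post): fetch a URL and return a parsed tree.
def fetch_html(url):
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()       # fail loudly on HTTP error codes
    return etree.HTML(resp.text)  # parse into an lxml element tree

# Inside the loop, `con = fetch_html(url)` could replace the line above.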

5. Use XPath expressions to extract each target field:

    # Use XPath expressions to extract each target field (still inside the page loop)
    job_name += con.xpath("//a[@class='position_link']/h3/text()")
    job_address += con.xpath("//span[@class='add']/em/text()")
    job_company += con.xpath("//div[@class='company_name']/a/text()")
    job_salary += con.xpath("//span[@class='money']/text()")
    job_links = con.xpath("//a[@class='position_link']/@href")

    # Follow each detail-page link and collect the job description
    for link in job_links:
        sleep(random.randint(3, 10))
        con2 = etree.HTML(requests.get(url=link, headers=headers).text)
        # string(.) concatenates all the text inside each <p> of the description block
        job_des.append([p.xpath('string(.)') for p in con2.xpath("//div[@class='job-detail']/p")])
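
The string(.) XPath function returns the concatenated text of an element and all of its descendants, which is what strips the inline tags out of each description paragraph. A small standalone demo (the HTML snippet below is just illustrative):

from lxml import etree

# Illustrative fragment resembling a Lagou job-detail block
snippet = etree.HTML('<div class="job-detail"><p>Familiar with <b>Python</b> and scikit-learn.</p></div>')
paragraphs = snippet.xpath("//div[@class='job-detail']/p")
print([p.xpath('string(.)') for p in paragraphs])
# ['Familiar with Python and scikit-learn.']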

6. Pack the data into a dictionary:

dataset = {'job_title': job_name, 'location': job_address, 'company': job_company, 'salary': job_salary, 'requirements': job_des}

# Convert to a DataFrame and save it as a CSV file
data = pd.DataFrame(dataset)
data.to_csv('machine_learning_LG_job.csv')
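
As a quick check, the file can be read back with pandas. If the CSV will later be opened in Excel, passing encoding='utf_8_sig' (and index=False) to to_csv is an optional tweak, not in the original post, that avoids garbled non-ASCII characters.

# Read the exported file back and preview the first rows
check = pd.read_csv('machine_learning_LG_job.csv')
print(check.shape)
print(check.head())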

7. The scraped results:
(screenshot of the scraped data)
