【Python Scraping 9】Real-World Python Web Scraping Examples

Posted by Wu_Being on 2017-02-17

  • Scraping Google's real search form
  • Scraping Facebook, a site that depends on JavaScript
  • Scraping Gap, a typical online store
  • Scraping the BMW official site and its map interface
#1. Scraping the Google search engine
# -*- coding: utf-8 -*-

import sys
import urllib
import urlparse
import lxml.html
from downloader import Downloader

def search(keyword):
    D = Downloader()
    url = 'https://www.google.com/search?q=' + urllib.quote_plus(keyword)
    html = D(url)
    tree = lxml.html.fromstring(html)
    links = []
    # each organic search result is an <a> inside <h3 class="r">
    for result in tree.cssselect('h3.r a'):
        # the href is a Google redirect; the real URL is in its 'q' parameter
        link = result.get('href')
        qs = urlparse.urlparse(link).query
        links.extend(urlparse.parse_qs(qs).get('q', []))
    return links

if __name__ == '__main__':
    try:
        keyword = sys.argv[1]
    except IndexError:
        keyword = 'test'
    print search(keyword)
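
Assuming the listing above is saved as, say, google_search.py (the filename is an assumption, not part of the series), it can be run from the command line:

$ python google_search.py "web scraping"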

Note: when extracting Google search results, be careful to throttle the crawl; if downloads come too fast, Google will start serving a CAPTCHA instead of results.
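
A minimal sketch of such throttling, reusing the search() function from the listing above (the 5-second delay is an assumed value, not a documented Google limit):

import time

def search_many(keywords, delay=5):
    # pause between queries to avoid triggering Google's CAPTCHA
    results = {}
    for keyword in keywords:
        results[keyword] = search(keyword)  # search() from the listing above
        time.sleep(delay)
    return results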
#2. Scraping Facebook and LinkedIn
When you view the source of Packt Publishing's Facebook page, the first few posts are visible, but later posts are only loaded via AJAX as the browser scrolls.

##2.1 Automating login to Facebook

This AJAX-loaded data is not easy to extract directly. The AJAX events could be reverse engineered, but different types of Facebook pages use different AJAX calls, so below we instead use Selenium to render the page and automate the Facebook login.

# -*- coding: utf-8 -*-

import sys
from selenium import webdriver

def facebook(username, password, url):
    driver = webdriver.Firefox()
    driver.get('https://www.facebook.com')
    driver.find_element_by_id('email').send_keys(username)
    driver.find_element_by_id('pass').send_keys(password)
    driver.find_element_by_id('login_form').submit()
    driver.implicitly_wait(30)
    # wait until the search box is available,
    # which means we have successfully logged in
    search = driver.find_element_by_id('q')
    # now logged in, so we can navigate to the page of interest
    driver.get(url)
    # add code to scrape data of interest here
    #driver.close()
    
if __name__ == '__main__':
    try:
        username = sys.argv[1]
        password = sys.argv[2]
        url = sys.argv[3]
    except IndexError:
        print 'Usage: %s <username> <password> <url>' % sys.argv[0]
    else:
        facebook(username, password, url)
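
Once logged in, driver.page_source holds the browser-rendered HTML, so the lxml techniques from earlier parts of this series apply unchanged. A minimal sketch of the scraping step (div.userContent is a hypothetical selector for post bodies and must be checked against the live Facebook markup):

import lxml.html

def scrape_posts(driver):
    # parse the rendered page, including AJAX-loaded content
    tree = lxml.html.fromstring(driver.page_source)
    # 'div.userContent' is a hypothetical selector; verify against the live page
    return [e.text_content() for e in tree.cssselect('div.userContent')]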

##2.2 Extracting data from the Facebook API
Facebook also exposes some data through an API. When the data you need is available there, you can fetch it directly; below we retrieve the data for the Packt Publishing page.

# -*- coding: utf-8 -*-

import sys
import json
import pprint
from downloader import Downloader

def graph(page_id):
    D = Downloader()
    # the Graph API returns the page's public data as JSON
    html = D('http://graph.facebook.com/' + page_id)
    return json.loads(html)

if __name__ == '__main__':
    try:
        page_id = sys.argv[1]
    except IndexError:
        page_id = 'PacktPub'
    pprint.pprint(graph(page_id))

Facebook developer documentation: https://developers.facebook.com/docs/graph-api. Most of these API calls are designed for Facebook applications interacting with authorized Facebook users, so to extract more detailed information such as user posts you still need a scraper.
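
The dict returned by graph() can then be explored like any parsed JSON. Which keys are present depends on the page and the Graph API version, so the ones below are illustrative rather than guaranteed:

>>> data = graph('PacktPub')
>>> data.keys()        # e.g. [u'id', u'about', u'likes', ...], varies by API version
>>> data.get('about')  # short page description, if that key is present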

##2.3 Automating login to LinkedIn

# -*- coding: utf-8 -*-

import sys
from selenium import webdriver

def search(username, password, keyword):
    driver = webdriver.Firefox()
    driver.get('https://www.linkedin.com/')
    driver.find_element_by_id('session_key-login').send_keys(username)
    driver.find_element_by_id('session_password-login').send_keys(password)
    driver.find_element_by_id('signin').click()
    driver.implicitly_wait(30)
    # once logged in, run the keyword search
    driver.find_element_by_id('main-search-box').send_keys(keyword)
    driver.find_element_by_class_name('search-button').click()
    # open the first search result
    driver.find_element_by_css_selector('ol#results li a').click()
    # Add code to scrape data of interest from LinkedIn page here
    #driver.close()
    
if __name__ == '__main__':
    try:
        username = sys.argv[1]
        password = sys.argv[2]
        keyword = sys.argv[3]
    except IndexError:
        print 'Usage: %s <username> <password> <keyword>' % sys.argv[0]
    else:
        search(username, password, keyword)

#3. Scraping the Gap online store
Gap has a well-structured website, and its Sitemap helps a crawler locate the newest content. From http://www.gap.com/robots.txt we can discover the sitemap location: Sitemap: http://www.gap.com/products/sitemap_index.xml

<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.gap.com/products/sitemap_1.xml</loc>
    <lastmod>2017-01-30</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.gap.com/products/sitemap_2.xml</loc>
    <lastmod>2017-01-30</lastmod>
  </sitemap>
</sitemapindex>

The sitemap index above only links to further sitemaps, and it is those sitemaps that hold the links to thousands of product categories, such as http://www.gap.com/products/blue-long-sleeve-shirts-for-men.jsp. Since there is a large amount of content to crawl, we will use the multithreaded crawler developed in part 4 of this series, which supports an optional callback parameter.

# -*- coding: utf-8 -*-

from lxml import etree
from threaded_crawler import threaded_crawler

def scrape_callback(url, html):
    if url.endswith('.xml'):
        # Parse the sitemap XML file
        tree = etree.fromstring(html)
        links = [e[0].text for e in tree]
        return links
    else:
        # Add scraping code here
        pass       

def main():
    sitemap = 'http://www.gap.com/products/sitemap_index.xml'
    threaded_crawler(sitemap, scrape_callback=scrape_callback)
    
if __name__ == '__main__':
    main() 

The callback first checks the extension of the downloaded URL. If the extension is .xml, the response is parsed with lxml's etree module and the links are extracted from it; otherwise the URL is assumed to be a category page (extracting the category data is not implemented in this example).
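
The category branch could be filled in with the same lxml approach used elsewhere in this series; a minimal sketch, where span.product-title is a hypothetical selector that would need checking against the real Gap markup:

import lxml.html

def scrape_category(url, html):
    # parse a product-category page rendered as HTML
    tree = lxml.html.fromstring(html)
    # 'span.product-title' is a hypothetical selector; inspect the live page first
    return [e.text_content().strip() for e in tree.cssselect('span.product-title')]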
#4. Scraping the BMW official site
The BMW official website includes a search tool for locating local dealers, at https://www.bmw.de/de/home.html?entryType=dlo
The tool takes a geographic location as input and displays nearby dealer locations on a map; for example, enter Berlin and click Look For.
Using the browser's developer tools, we find that the search triggers the following AJAX request:
https://c2b-services.bmw.com/c2b-localsearch/services/api/v3/clients/BMWDIGITAL_DLO/DE/pois?country=DE&category=BM&maxResults=99&language=en&lat=52.507537768880056&lng=13.425269635701511
maxResults defaults to 99, and we can increase it. The AJAX request returns data in JSONP format, where JSONP stands for "JSON with padding". The padding is usually a function to be called, with pure JSON data as that function's argument; in this example the call is to a function named callback. To parse the data with Python's json module, the padding must first be stripped off.

# -*- coding: utf-8 -*-

import json
import csv
from downloader import Downloader

def main():
    D = Downloader()
    url = 'https://c2b-services.bmw.com/c2b-localsearch/services/api/v3/clients/BMWDIGITAL_DLO/DE/pois?country=DE&category=BM&maxResults=%d&language=en&lat=52.507537768880056&lng=13.425269635701511'
    jsonp = D(url % 1000)  # response looks like: callback({"status": ..., "data": ...})
    # strip the JSONP padding, leaving pure JSON
    pure_json = jsonp[jsonp.index('(') + 1 : jsonp.rindex(')')]
    dealers = json.loads(pure_json)
    with open('bmw.csv', 'w') as fp:
        writer = csv.writer(fp)
        writer.writerow(['Name', 'Latitude', 'Longitude'])
        for dealer in dealers['data']['pois']:
            # Python 2's csv module needs byte strings, so encode as UTF-8
            name = dealer['name'].encode('utf-8')
            lat, lng = dealer['lat'], dealer['lng']
            writer.writerow([name, lat, lng])
    
if __name__ == '__main__':
    main() 

The returned structure can be explored interactively:

>>> dealers.keys()               # [u'status', u'count', u'data', ...]
>>> dealers['count']             # number of dealers returned
>>> dealers['data']['pois'][0]   # data for the first dealer

Wu_Being's blog notice: you are welcome to repost, but please credit the original post and link. Thanks!
【Python Scraping series】《【Python Scraping 9】Real-World Python Web Scraping Examples》http://blog.csdn.net/u014134180/article/details/55508272
Code for the Python Scraping series on GitHub: https://github.com/1040003585/WebScrapingWithPython
