Jianshu Unofficial Big Data (2)

Posted by weixin_34253539 on 2017-02-10

PS: This point matters: the "big data" in this series is not the currently trendy Big Data topic. An article I read a few days ago put it simply: once a dataset is more than one machine (or your current setup) can handle, you can start calling it big data; there is no fixed size threshold.
The crawler ran all night and has collected 1.7M+ records so far. This morning I decided the efficiency was not good enough, and since I cannot write a distributed crawler, I had to stop and revise the code. Attentive readers will expect an explanation of resumable crawling at this point (do we really have to start from scratch after an interruption?). What I have today is only a pseudo checkpoint resume, but it should give you a workable idea.

Crawling the hot and city collection URLs

import requests
from lxml import etree
import pymongo

client = pymongo.MongoClient('localhost', 27017)
jianshu = client['jianshu']
topic_urls = jianshu['topic_urls']

host_url = 'http://www.jianshu.com'
hot_urls = ['http://www.jianshu.com/recommendations/collections?page={}&order_by=hot'.format(str(i)) for i in range(1,40)]
city_urls = ['http://www.jianshu.com/recommendations/collections?page={}&order_by=city'.format(str(i)) for i in range(1,3)]

def get_channel_urls(url):
    # Grab each collection's URL, article count and follower count from one list page.
    html = requests.get(url)
    selector = etree.HTML(html.text)
    infos = selector.xpath('//div[@class="count"]')
    for info in infos:
        part_url = info.xpath('a/@href')[0]
        article_amounts = info.xpath('a/text()')[0]
        focus_amounts = info.xpath('text()')[0].split('·')[1]
        # print(part_url,article_amounts,focus_amounts)
        topic_urls.insert_one({'topicurl':host_url + part_url,'article_amounts':article_amounts,
                              'focus_amounts':focus_amounts})

# for hot_url in hot_urls:
#     get_channel_urls(hot_url)

for city_url in city_urls:
    get_channel_urls(city_url)

This code crawls the collection URLs and stores them in the topic_urls collection. The remaining crawling details are straightforward, so I will not go into them.
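
If you want to sanity-check what actually landed in MongoDB before moving on, you can count and sample the stored documents. A minimal sketch, assuming the same local MongoDB instance and the topic_urls collection created above (and a pymongo version that has count_documents):

import pymongo

client = pymongo.MongoClient('localhost', 27017)
topic_urls = client['jianshu']['topic_urls']

# Total number of collection URLs stored so far.
print(topic_urls.count_documents({}))

# Peek at a few documents to confirm the fields look right.
for doc in topic_urls.find().limit(5):
    print(doc['topicurl'], doc['article_amounts'], doc['focus_amounts'])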

Crawling article authors and their followers

import requests
from lxml import etree
import time
import pymongo

client = pymongo.MongoClient('localhost', 27017)
jianshu = client['jianshu']
author_urls = jianshu['author_urls']
author_infos = jianshu['author_infos']

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'Connection':'keep-alive'
}

def get_article_url(url,page):
    # Crawl one page of a collection (ordered by time added) and collect the article authors.
    link_view = '{}?order_by=added_at&page={}'.format(url,str(page))
    try:
        html = requests.get(link_view,headers=headers)
        selector = etree.HTML(html.text)
        infos = selector.xpath('//div[@class="name"]')
        for info in infos:
            author_name = info.xpath('a/text()')[0]
            authorurl = info.xpath('a/@href')[0]
            if 'http://www.jianshu.com'+ authorurl in [item['author_url'] for item in author_infos.find()]:
                # Author already stored: skip visiting, saving and follower crawling.
                pass
            else:
                # print('http://www.jianshu.com'+authorurl,author_name)
                author_infos.insert_one({'author_name':author_name,'author_url':'http://www.jianshu.com'+authorurl})
                get_reader_url(authorurl)
        time.sleep(2)
    except requests.exceptions.ConnectionError:
        pass

# get_article_url('http://www.jianshu.com/c/bDHhpK',2)
def get_reader_url(url):
    # Crawl up to 99 pages of an author's follower list and store every follower.
    link_views = ['http://www.jianshu.com/users/{}/followers?page={}'.format(url.split('/')[-1],str(i)) for i in range(1,100)]
    for link_view in link_views:
        try:
            html = requests.get(link_view,headers=headers)
            selector = etree.HTML(html.text)
            infos = selector.xpath('//li/div[@class="info"]')
            for info in infos:
                author_name = info.xpath('a/text()')[0]
                authorurl = info.xpath('a/@href')[0]
                # print(author_name,authorurl)
                author_infos.insert_one({'author_name': author_name, 'author_url': 'http://www.jianshu.com' + authorurl})
        except requests.exceptions.ConnectionError:
            pass
# get_reader_url('http://www.jianshu.com/u/7091a52ac9e5')

1 Jianshu is fairly crawler-friendly; adding a User-Agent request header was enough (but please do not crawl maliciously; be considerate of the site).
2 Two errors occurred along the way, and two try/except blocks fixed them. I had wondered beforehand whether this could fail: paging past the last page simply redirects you back (I tried it by hand), so I set a very large page cap and did not expect it to error out.
3 Since errors force a restart, I do not want duplicate data, and one user can publish many articles, I added a check in get_article_url. Roughly: if the crawled author URL is already in the user collection, skip visiting it, storing it, and crawling its followers. (A sketch of a more efficient, index-based version of this check follows this list.)
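
The membership test in get_article_url re-reads the whole user collection for every article, which gets slower as the table grows. A sketch of one alternative (not what the code above does), assuming the same author_infos collection: put a unique index on author_url and let MongoDB reject duplicates.

import pymongo

client = pymongo.MongoClient('localhost', 27017)
author_infos = client['jianshu']['author_infos']

# One-off setup: enforce uniqueness at the database level.
author_infos.create_index('author_url', unique=True)

def save_author(author_name, author_url):
    # Insert an author; return True if it was new, False if already stored.
    try:
        author_infos.insert_one({'author_name': author_name, 'author_url': author_url})
        return True
    except pymongo.errors.DuplicateKeyError:
        return False

# Inside get_article_url, the check-then-insert could then become:
# if save_author(author_name, 'http://www.jianshu.com' + authorurl):
#     get_reader_url(authorurl)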

Entry point

import sys
sys.path.append("..")
from multiprocessing import Pool
from channel_extract import topic_urls
from page_spider import get_article_url

db_topic_urls = [item['topicurl'] for item in topic_urls.find()]  # every collection URL stored earlier
shouye_url = ['http://www.jianshu.com/c/bDHhpK']  # the homepage collection, excluded below
x = set(db_topic_urls)
y = set(shouye_url)
rest_urls = x - y  # set subtraction: crawl everything except the homepage collection

def get_all_links_from(channel):
    for num in range(1,5000):
        get_article_url(channel,num)

if __name__ == '__main__':

    pool = Pool(processes=4)
    pool.map(get_all_links_from,rest_urls)

1 The crawler was still working through the homepage collection today (num used to be 17000, and the homepage has far too many articles). Since most homepage articles are pushed there from other collections anyway, I decided not to crawl it: to resume, I subtract one set from the other to drop the homepage link and crawl everything else.
2 Why call this a pseudo checkpoint resume? Because the next time the program fails you still have to start over (unless you change the code). Still, it gives you an approach: use set subtraction to crawl only what is left. (A sketch of a fuller checkpoint, which records crawl progress in MongoDB, follows below.)
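
For a checkpoint that survives a crash, one option is to record how far each channel got and read that back on restart. A sketch, assuming a new (hypothetical) crawl_progress collection and the page_spider module from the entry script:

import pymongo
from page_spider import get_article_url

client = pymongo.MongoClient('localhost', 27017)
crawl_progress = client['jianshu']['crawl_progress']  # hypothetical progress table

def get_all_links_from(channel):
    # Resume from the last page recorded for this channel (default: page 1).
    record = crawl_progress.find_one({'channel': channel})
    start_page = record['last_page'] if record else 1
    for num in range(start_page, 5000):
        get_article_url(channel, num)
        # Checkpoint after every page, so a crash only loses the current page.
        crawl_progress.update_one({'channel': channel},
                                  {'$set': {'last_page': num}},
                                  upsert=True)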
