1. Pre-scraping analysis of the Aquaman review data
Aquaman is out, and the word of mouth has exploded. For us, that means one more movie worth scraping and analyzing. Lovely~
Here is one review, quoted as a sample:
Just got out of the midnight premiere. Director Wan's films have always been great, whether Furious 7, Saw, or The Conjuring. The fights and the sound design are beyond reproach, truly stunning. All in all, DC wins one back ( ̄▽ ̄). It beats Justice League by more than a little (my personal feeling). Also, Amber Heard is genuinely gorgeous; director Wan casts well.
Honestly, this is the first time I've seen a movie this awesome. Even the scene transitions and effects are off the charts.
2. Scraping the Aquaman data
As before, the data comes from Maoyan's comments. For this part we bring out the big guns and scrape with scrapy; in ordinary cases a simple requests script would do.
Target URL:
http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=15&startTime=2018-12-11%2009%3A58%3A43
Key parameters:
url: http://m.maoyan.com/mmdb/comments/movie/249342.json
offset: 15
startTime: the start time, used to page through the comments
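Before wiring this into Scrapy, it is worth poking the endpoint with a plain requests call to confirm the JSON shape (a minimal sketch; the headers mirror the ones we configure for Scrapy below):

import requests

# Quick sanity check of the comment endpoint before building the spider.
# startTime=0 asks for the newest comments; offset pages within that window.
url = "http://m.maoyan.com/mmdb/comments/movie/249342.json"
params = {"_v_": "yes", "offset": 0, "startTime": 0}
headers = {
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
}
data = requests.get(url, params=params, headers=headers).json()
print(len(data["cmts"]))           # number of comments on this page
print(data["cmts"][0]["content"])  # text of the first comment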
The Scrapy code for scraping Maoyan is very simple; I just split it across a few .py files.
Haiwang.py
import scrapy
import json
from haiwang.items import HaiwangItem


class HaiwangSpider(scrapy.Spider):
    name = "Haiwang"
    allowed_domains = ["m.maoyan.com"]
    start_urls = ["http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0"]

    def parse(self, response):
        print(response.url)
        js_data = json.loads(response.text)

        item = HaiwangItem()
        for info in js_data["cmts"]:
            item["nickName"] = info["nickName"]
            # cityName is missing from some records, so fall back to ""
            item["cityName"] = info["cityName"] if "cityName" in info else ""
            item["content"] = info["content"]
            item["score"] = info["score"]
            item["startTime"] = info["startTime"]
            item["approve"] = info["approve"]
            item["reply"] = info["reply"]
            item["avatarurl"] = info["avatarurl"]
            yield item

        # Paginate by feeding the startTime of the last comment back into the URL
        yield scrapy.Request(
            "http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse,
        )
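One caveat of this startTime-based pagination: consecutive responses can overlap, so the same comment may be yielded twice. A minimal in-memory guard, assuming each record carries a unique id field (an assumption about the payload, not one of the fields we store):

# Hypothetical dedup helper; in practice the set would live on the spider.
seen_ids = set()

def dedupe(cmts):
    # Skip comment records whose (assumed) "id" has already been seen
    for info in cmts:
        if info.get("id") in seen_ids:
            continue
        seen_ids.add(info.get("id"))
        yield info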
settings.py
The request headers need to be configured here:
DEFAULT_REQUEST_HEADERS = {
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "X-Requested-With": "superagent"
}
A few crawl settings also need to be set:
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
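The comment above points at Scrapy's AutoThrottle extension; if a fixed one-second delay proves too blunt, these settings (real Scrapy options, values picked purely as an illustration) let Scrapy adapt the delay to server latency:

AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1   # initial delay in seconds
AUTOTHROTTLE_MAX_DELAY = 10    # upper bound when the server slows down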
Enable the item pipeline:
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "haiwang.pipelines.HaiwangPipeline": 300,
}
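The number 300 is the pipeline's order; when several pipelines are enabled, lower values run first, so a cleaning pipeline could be slotted in ahead of this one.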
items.py
Declare the fields you want to capture:
import scrapy


class HaiwangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    nickName = scrapy.Field()
    cityName = scrapy.Field()
    content = scrapy.Field()
    score = scrapy.Field()
    startTime = scrapy.Field()
    approve = scrapy.Field()
    reply = scrapy.Field()
    avatarurl = scrapy.Field()
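A HaiwangItem behaves like a dict, which is why the spider assigns fields with item["nickName"] = .... Unlike a plain dict, though, assigning an undeclared field raises KeyError, which catches typos early. A small illustration:

from haiwang.items import HaiwangItem

item = HaiwangItem()
item["nickName"] = "test"    # fine: the field is declared above
# item["nickname"] = "oops"  # raises KeyError: unsupported field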
pipelines.py
Store the data; here it is written to a CSV file.
import os
import csv


class HaiwangPipeline(object):
    def __init__(self):
        store_file = os.path.dirname(__file__) + "/spiders/haiwang.csv"
        # newline="" stops the csv module from inserting blank lines on Windows
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["nickName"],
                item["cityName"],
                item["content"],
                item["approve"],
                item["reply"],
                item["startTime"],
                item["avatarurl"],
                item["score"],
            ))
        except Exception as e:
            print(e.args)
        # A pipeline must hand the item back so any later pipelines can see it
        return item

    def close_spider(self, spider):
        self.file.close()
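Once the crawl has been running for a while, the CSV can be loaded back for the analysis step. A minimal sketch with pandas (assuming it is installed; the column list matches the writerow order above):

import pandas as pd

columns = ["nickName", "cityName", "content", "approve",
           "reply", "startTime", "avatarurl", "score"]
# Adjust the path to wherever your project wrote the file
df = pd.read_csv("haiwang/spiders/haiwang.csv", names=columns)
print(df["score"].describe())  # quick look at the rating distribution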
begin.py
Write a launcher script:
from scrapy import cmdline

cmdline.execute("scrapy crawl Haiwang".split())
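Equivalently, you can run scrapy crawl Haiwang from the project root in a terminal; begin.py just makes the crawl launchable from an IDE with one click.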
Off it goes. Done; now just wait for the data to roll in.