Python Crawler Beginner Tutorial 33-100: Scraping Movie Review Data with Scrapy

1. Pre-Scraping Analysis of the Aquaman Reviews

Aquaman is out, and the word of mouth has exploded. For us, that means one more movie whose data we can scrape and analyze. Lovely~

Here is one review, quoted for flavor:

Just got back from the midnight screening. James Wan's films have always been solid; Furious 7, Saw, and The Conjuring were all great. The fights and the sound design are beyond reproach, truly stunning. All in all, DC claws a point back ( ̄▽ ̄). Better than Justice League by more than a little (my personal feeling). Also, Amber Heard is genuinely beautiful; Wan picks his cast well.

Honestly, the first time I've seen a movie this awesome; even the scene transitions and effects are off the charts.

2. The Aquaman Case: Scraping the Data

The data is once again Maoyan's reviews. For this part we'll bring out a butcher's knife for a chicken and scrape with Scrapy; normally a quick requests script would be enough.

Target URL:

http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=15&startTime=2018-12-11%2009%3A58%3A43

Key parameters:

url: http://m.maoyan.com/mmdb/comments/movie/249342.json
offset: 15
startTime: the start-time cursor (pass the startTime of the last comment received to page back through older comments)
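Since plain requests would normally suffice, here is a minimal probe of the endpoint to inspect one page of the JSON before building the spider. This is just a sketch; the User-Agent string is the one reused from the Scrapy settings below, and the field names come from the JSON handled by the spider.

import requests

url = "http://m.maoyan.com/mmdb/comments/movie/249342.json"
params = {
    "_v_": "yes",
    "offset": 0,
    "startTime": 0,  # 0 = newest comments; pass the last comment's startTime to page back
}
headers = {"User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36"}

resp = requests.get(url, params=params, headers=headers)
data = resp.json()
for comment in data["cmts"]:  # the comment list lives under the "cmts" key
    print(comment["nickName"], comment["content"])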

The Scrapy code for scraping Maoyan is quite simple; I just split it across a few .py files.

Haiwang.py

import scrapy
import json
from haiwang.items import HaiwangItem


class HaiwangSpider(scrapy.Spider):
    name = "Haiwang"
    allowed_domains = ["m.maoyan.com"]
    start_urls = ["http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0"]

    def parse(self, response):
        print(response.url)
        body_data = response.body_as_unicode()  # response.text on newer Scrapy versions
        js_data = json.loads(body_data)

        for info in js_data["cmts"]:
            # Create a fresh item per comment; reusing one mutable item
            # across iterations is a classic Scrapy pitfall.
            item = HaiwangItem()
            item["nickName"] = info["nickName"]
            item["cityName"] = info["cityName"] if "cityName" in info else ""
            item["content"] = info["content"]
            item["score"] = info["score"]
            item["startTime"] = info["startTime"]
            item["approve"] = info["approve"]
            item["reply"] = info["reply"]
            item["avatarurl"] = info["avatarurl"]
            yield item

        # Page backwards: the startTime of the last comment becomes the
        # cursor for the next request.
        yield scrapy.Request(
            "http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse,
        )
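One caveat: parse() re-queues the next page unconditionally, so the spider never stops on its own (and an empty page would leave item unbound). A hypothetical guard, not part of the original code, could end the crawl once a page comes back empty:

import json
import scrapy


class HaiwangStopSpider(scrapy.Spider):
    """Hypothetical variant of HaiwangSpider that stops on an empty page."""
    name = "HaiwangStop"
    allowed_domains = ["m.maoyan.com"]
    start_urls = ["http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0"]

    def parse(self, response):
        js_data = json.loads(response.body_as_unicode())
        cmts = js_data.get("cmts") or []
        if not cmts:
            return  # empty page: we've walked past the oldest comment, stop crawling
        for info in cmts:
            yield {"content": info["content"], "startTime": info["startTime"]}
        yield scrapy.Request(
            "http://m.maoyan.com/mmdb/comments/movie/249342.json"
            "?_v_=yes&offset=0&startTime={}".format(cmts[-1]["startTime"]),
            callback=self.parse,
        )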

settings.py

Request headers need to be configured in the settings:

DEFAULT_REQUEST_HEADERS = {
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "X-Requested-With": "superagent",
}

A few crawl settings are needed as well:

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
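The comment above points at Scrapy's AutoThrottle extension; if you prefer an adaptive delay over a fixed DOWNLOAD_DELAY, it can be switched on in settings.py too (an optional addition, not part of the original config):

AUTOTHROTTLE_ENABLED = True            # adapt the delay to server response times
AUTOTHROTTLE_START_DELAY = 1           # initial delay in seconds
AUTOTHROTTLE_MAX_DELAY = 10            # ceiling for the adaptive delay
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average parallel requests per remote server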

Enable the pipeline:

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "haiwang.pipelines.HaiwangPipeline": 300,
}
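The 300 is the pipeline's order value: when several pipelines are enabled, the ones with lower numbers run first, and by convention the values range from 0 to 1000.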

items.py

Declare the fields you want to collect:

import scrapy


class HaiwangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    nickName = scrapy.Field()
    cityName = scrapy.Field()
    content = scrapy.Field()
    score = scrapy.Field()
    startTime = scrapy.Field()
    approve = scrapy.Field()
    reply = scrapy.Field()
    avatarurl = scrapy.Field()
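A scrapy.Item behaves like a dict, which is why the spider can fill it with item["nickName"] = ...; assigning to a field that was not declared here raises a KeyError, so typos in field names surface immediately.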

pipelines.py

Save the data; it is written to a CSV file:

import os
import csv


class HaiwangPipeline(object):
    def __init__(self):
        # Write next to the spiders package; append mode so reruns don't clobber data.
        store_file = os.path.dirname(__file__) + "/spiders/haiwang.csv"
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["nickName"],
                item["cityName"],
                item["content"],
                item["approve"],
                item["reply"],
                item["startTime"],
                item["avatarurl"],
                item["score"],
            ))
        except Exception as e:
            print(e.args)
        return item  # hand the item on to any later pipelines

    def close_spider(self, spider):
        # Called once when the spider finishes; release the file handle.
        self.file.close()
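Worth knowing: for plain CSV output you could skip the custom pipeline entirely and use Scrapy's built-in feed exports, e.g. scrapy crawl Haiwang -o haiwang.csv. The hand-written pipeline is mainly useful here for controlling the column order.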

begin.py

Write a small run script:

from scrapy import cmdline

cmdline.execute("scrapy crawl Haiwang".split())
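Run it from the project root (the directory that contains scrapy.cfg) with python begin.py; it is equivalent to typing scrapy crawl Haiwang on the command line.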

Done. Now just wait for the data to roll in.

https://www.jianshu.com/p/d10593f20686
