scrapy – middlewares (4)

I. Scrapy middlewares

Scrapy has two kinds of middleware: downloader middleware and spider middleware.
The code below is the boilerplate that Scrapy generates automatically when you create a project with the startproject command.
To use a middleware, you must activate it in the project settings.

SPIDER_MIDDLEWARES = {
   'czw.middlewares.CzwSpiderMiddleware': 543,
}
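Downloader middlewares are activated the same way, under the DOWNLOADER_MIDDLEWARES setting; a minimal sketch, assuming the same czw project name as above:

DOWNLOADER_MIDDLEWARES = {
   'czw.middlewares.CzwDownloaderMiddleware': 543,
}

The generated middlewares.py template looks like this: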
# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals


class CzwSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class CzwDownloaderMiddleware(object):
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

II. DownloaderMiddleware

The methods we usually override are:
process_request, process_response, process_exception

1. process_request(request, spider): called for each request that passes through this downloader middleware, before the request reaches the downloader.

It must return one of: None, a Request object, a Response object, or raise IgnoreRequest (a short sketch follows the list below).

  • Request object: stops the current chain; the returned request is handed back to the scheduler and scheduled as a new request.
  • Response object: stops the current chain and skips the process_request methods of the remaining middlewares; the response is returned to the spider through the engine (the installed process_response methods are still called on the way back).
  • None: Scrapy keeps calling the process_request methods of the other downloader middlewares until the appropriate download handler performs the request; once the response comes back, process_response is called.
  • raise IgnoreRequest: the exception is passed to the process_exception methods of the installed middlewares; if none of them handles it, the request is dropped and the exception is not logged.
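As a concrete illustration, here is a minimal process_request sketch; the RandomUserAgentMiddleware name and the USER_AGENTS list are assumptions, not part of the generated template:

import random

class RandomUserAgentMiddleware(object):
    # Hypothetical middleware: pick a User-Agent per request.
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    ]

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        # Returning None lets the remaining middlewares and the
        # downloader continue processing this request.
        return None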
2. process_response(request, response, spider): called when the downloader has finished the HTTP request and returns the response to the engine.

It must return one of: a Response object, a Request object, or raise IgnoreRequest (a short sketch follows the list below).

  • Request object: stops the current chain; the returned request is handed back to the scheduler and rescheduled.
  • Response object: processing continues, and the response is passed to the process_response methods of the other middlewares in the chain.
  • raise IgnoreRequest: the errback of the request is called; if nothing handles the exception, the request is dropped and the exception is not logged.
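A minimal process_response sketch; the RetryOnBlockMiddleware name and the status codes treated as "blocked" are assumptions:

class RetryOnBlockMiddleware(object):
    # Hypothetical middleware: re-queue requests that came back with
    # a status code suggesting we were blocked.
    def process_response(self, request, response, spider):
        if response.status in (403, 429):
            spider.logger.info('Got %s for %s, rescheduling'
                               % (response.status, request.url))
            # Returning a Request stops the chain and hands the copy
            # back to the scheduler; dont_filter bypasses the dupefilter.
            return request.replace(dont_filter=True)
        # Returning the response lets the remaining middlewares and
        # the spider process it normally.
        return response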
3. process_exception(request, exception, spider): called when a download handler or a process_request method (from another downloader middleware) raises an exception.

It must return one of: None, a Request object, or a Response object (a short sketch follows the list below).

  • None: Scrapy continues calling the process_exception methods of the other middlewares.
  • Request object: stops the current chain; the returned request is handed back to the scheduler and rescheduled.
  • Response object: the process_response chain starts, and no other process_exception methods are called.
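A minimal process_exception sketch; the ProxyFallbackMiddleware name and the backup proxy address are assumptions:

from twisted.internet.error import TimeoutError

class ProxyFallbackMiddleware(object):
    # Hypothetical middleware: on a download timeout, retry the
    # request through a backup proxy instead of failing.
    BACKUP_PROXY = 'http://127.0.0.1:8888'  # assumed address

    def process_exception(self, request, exception, spider):
        if isinstance(exception, TimeoutError):
            retry = request.replace(dont_filter=True)
            retry.meta['proxy'] = self.BACKUP_PROXY
            # Returning a Request stops the process_exception chain
            # and reschedules the copy.
            return retry
        # Returning None lets the other middlewares handle it.
        return None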
4. Built-in downloader middlewares (a proxy usage sketch follows this list):
  • CookiesMiddleware:
    Manages cookies for requests.
    COOKIES_ENABLED defaults to True.
    COOKIES_DEBUG defaults to False.
  • DefaultHeadersMiddleware:
    Sets the default request headers specified by the DEFAULT_REQUEST_HEADERS setting.
  • DownloadTimeoutMiddleware:
    Sets the download timeout for requests, specified by the DOWNLOAD_TIMEOUT setting.
  • HttpProxyMiddleware:
    Adds support for sending requests through an HTTP proxy. You enable a proxy for a request by setting the proxy key in its meta dict.
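A minimal sketch of using HttpProxyMiddleware via request meta; the spider name, URL, and proxy address are assumptions:

import scrapy

class ProxySpider(scrapy.Spider):
    name = 'proxy_demo'  # hypothetical spider

    def start_requests(self):
        # HttpProxyMiddleware reads the 'proxy' key from request.meta
        # and routes the download through that proxy.
        yield scrapy.Request(
            'http://httpbin.org/ip',
            meta={'proxy': 'http://127.0.0.1:8888'},
        )

    def parse(self, response):
        self.logger.info(response.text)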

III. SpiderMiddleware

process_spider_input, process_spider_output, process_spider_exception, process_start_requests

1. process_spider_input(response, spider): called for each response that passes through the spider middleware on its way into the spider.

It must return None or raise an exception (a short sketch follows the list below).

  • None: processing continues, and the process_spider_input methods of the other middlewares are called.
  • raise an exception: Scrapy skips the process_spider_input methods of the remaining middlewares and calls the request's errback if there is one; otherwise the process_spider_exception chain starts.
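A minimal process_spider_input sketch; the DropEmptyResponseMiddleware name is an assumption:

class DropEmptyResponseMiddleware(object):
    # Hypothetical middleware: refuse to hand empty responses to the
    # spider.
    def process_spider_input(self, response, spider):
        if not response.body:
            # Raising skips the remaining process_spider_input methods
            # and routes the error to the errback /
            # process_spider_exception chain.
            raise ValueError('Empty response: %s' % response.url)
        # Returning None passes the response on unchanged.
        return None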
2. process_spider_output(response, result, spider): called with the results returned from the spider, after it has processed the response.

It must return an iterable of Request or Item objects (a filtering sketch follows the parameter list).
Parameters:

  • response (Response object): the response that generated this output.
  • result (an iterable of Request or Item objects): the result returned by the spider.
  • spider (Spider object): the spider whose result is being processed.
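A minimal process_spider_output sketch; the RequireTitleMiddleware name and the 'title' field are assumptions:

class RequireTitleMiddleware(object):
    # Hypothetical middleware: drop scraped dict items that lack a
    # 'title' field; pass requests and complete items through.
    def process_spider_output(self, response, result, spider):
        for entry in result:
            if isinstance(entry, dict) and not entry.get('title'):
                spider.logger.debug('Dropped item without title from %s'
                                    % response.url)
                continue
            yield entry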
3. process_spider_exception(response, exception, spider): called when a spider or a process_spider_input() method (from another spider middleware) raises an exception.

It must return either None or an iterable of Request or Item objects (a short sketch follows the list below).

  • None: Scrapy continues calling the process_spider_exception methods of the other middlewares in the chain.
  • an iterable: the process_spider_output method chain takes over, and no other process_spider_exception methods are called.
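A minimal process_spider_exception sketch; the SwallowParseErrorsMiddleware name is an assumption:

class SwallowParseErrorsMiddleware(object):
    # Hypothetical middleware: log parse errors and emit nothing
    # instead of letting the exception kill the crawl.
    def process_spider_exception(self, response, exception, spider):
        spider.logger.warning('Parse error on %s: %r'
                              % (response.url, exception))
        # Returning an iterable (here: empty) resumes the
        # process_spider_output chain and stops further
        # process_spider_exception calls.
        return []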
4. process_start_requests(start_requests, spider)

This method is called with the start requests of the spider and works much like process_spider_output(), except that it has no associated response and must return only requests (not items).
It receives an iterable (the start_requests parameter) and must return another iterable of Request objects.
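A minimal process_start_requests sketch; the TagStartRequestsMiddleware name and the meta key are assumptions:

class TagStartRequestsMiddleware(object):
    # Hypothetical middleware: mark every start request so later
    # callbacks can recognise seed pages.
    def process_start_requests(self, start_requests, spider):
        for request in start_requests:
            request.meta['is_start_request'] = True
            yield request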

Reference: https://www.jianshu.com/p/05adc9d96bb5
