Recap of yesterday's content

Open the DianShang project written yesterday and look at items.py:

    import scrapy

    class AmazonItem(scrapy.Item):
        name = scrapy.Field()      # product name
        price = scrapy.Field()     # price
        delivery = scrapy.Field()  # delivery method

The class name AmazonItem can be anything you like. The three fields defined here correspond one-to-one with the three keys used in spiders\amazon.py:

    # Build the standardized item
    item = AmazonItem()  # calling the class gives you an empty, dict-like object
    # Add the key-value pairs
    item["name"] = name
    item["price"] = price
    item["delivery"] = delivery

Look at pipelines.py:

    class MongodbPipeline(object):
        def __init__(self, host, port, db, table):
            self.host = host
            self.port = port
            self.db = db
            self.table = table

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first uses getattr to check whether we defined from_crawler;
            if so, it calls it to create the instance
            """
            HOST = crawler.settings.get('HOST')
            PORT = crawler.settings.get('PORT')
            DB = crawler.settings.get('DB')
            TABLE = crawler.settings.get('TABLE')
            return cls(HOST, PORT, DB, TABLE)

If a from_crawler method exists, it is executed first; __init__ runs after it.

from_crawler must return an object. The cls(...) call is what actually runs __init__, and the four values it passes correspond one-to-one to __init__'s parameters.
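Conceptually, the instantiation Scrapy performs looks roughly like the sketch below (simplified pseudocode, not Scrapy's actual source; the function name build_pipeline is made up for illustration):

    def build_pipeline(pipeline_cls, crawler):
        # Scrapy first checks (getattr-style) whether the class defines from_crawler
        if hasattr(pipeline_cls, 'from_crawler'):
            return pipeline_cls.from_crawler(crawler)   # from_crawler runs first and calls cls(...) -> __init__
        return pipeline_cls()                           # otherwise the plain constructor is used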

pipelines.py can hold more than one pipeline, for example one for writing to a file.

Modify pipelines.py and add a FilePipeline, which writes the scraped data to a file:

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    from pymongo import MongoClient


    class MongodbPipeline(object):
        def __init__(self, host, port, db, table):
            self.host = host
            self.port = port
            self.db = db
            self.table = table

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first uses getattr to check whether we defined from_crawler;
            if so, it calls it to create the instance
            """
            HOST = crawler.settings.get('HOST')
            PORT = crawler.settings.get('PORT')
            DB = crawler.settings.get('DB')
            TABLE = crawler.settings.get('TABLE')
            return cls(HOST, PORT, DB, TABLE)

        def open_spider(self, spider):
            """
            Runs once, when the spider starts
            """
            # self.client = MongoClient('mongodb://%s:%s@%s:%s' % (self.user, self.pwd, self.host, self.port))
            self.client = MongoClient(host=self.host, port=self.port)

        def close_spider(self, spider):
            """
            Runs once, when the spider closes
            """
            self.client.close()

        def process_item(self, item, spider):
            # Persist the item
            d = dict(item)
            if all(d.values()):
                self.client[self.db][self.table].insert(d)
                print("Inserted one record")


    class FilePipeline(object):
        def __init__(self, file_path):
            self.file_path = file_path

        @classmethod
        def from_crawler(cls, crawler):
            """
            Scrapy first uses getattr to check whether we defined from_crawler;
            if so, it calls it to create the instance
            """
            file_path = crawler.settings.get('FILE_PATH')
            return cls(file_path)

        def open_spider(self, spider):
            """
            Runs once, when the spider starts
            """
            print('==============> the spider is starting')
            self.fileobj = open(self.file_path, 'w', encoding='utf-8')

        def close_spider(self, spider):
            """
            Runs once, when the spider closes
            """
            print('==============> the spider has finished')
            self.fileobj.close()

        def process_item(self, item, spider):
            # Persist the item
            print("items----->", item)
            # returning the item lets the following pipelines keep processing it
            d = dict(item)
            if all(d.values()):
                self.fileobj.write("%s\n" % str(d))
            return item
            # raising (raise DropItem(...)) would discard the item so later pipelines never see it

If you raise instead of returning, the item is discarded and later pipelines will not process it.
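A minimal sketch of what that drop branch could look like (the error message text is just illustrative):

    from scrapy.exceptions import DropItem

    def process_item(self, item, spider):
        d = dict(item)
        if not all(d.values()):
            # raising DropItem discards the item; later pipelines never see it
            raise DropItem("missing field in %r" % d)
        return item  # returned items continue on to the next pipeline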

The file path used by file_path has to be read from the settings.

Modify settings.py and add the FILE_PATH setting as the last line:

    FILE_PATH = 'pipe.txt'

This is a relative path; in practice it resolves to the project root directory.
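If you would rather not depend on the working directory, one option (a sketch; BASE_DIR is not part of the project yet) is to build an absolute path inside settings.py:

    import os

    BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # the directory that contains settings.py
    FILE_PATH = os.path.join(BASE_DIR, 'pipe.txt')          # absolute path, independent of where the crawl is started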

Modify settings.py and register the pipelines:

    ITEM_PIPELINES = {
        'DianShang.pipelines.MongodbPipeline': 300,
        'DianShang.pipelines.FilePipeline': 500,
    }

Modify pipelines.py again: MongodbPipeline's process_item must return the item. Because MongodbPipeline (priority 300) runs before FilePipeline (500), without the return FilePipeline would never receive the item.

    def process_item(self, item, spider):
        # Persist the item
        d = dict(item)
        if all(d.values()):
            self.client[self.db][self.table].insert(d)
            print("Inserted one record")
        return item

Run bin.py, then open pipe.txt; its contents look like this:

[Figure 1: contents of pipe.txt]

Modify spiders/amazon.py and add a close method. The name of this method must not be changed!

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy import Request              # import the Request class
    from DianShang.items import AmazonItem  # import the item


    class AmazonSpider(scrapy.Spider):
        name = 'amazon'
        allowed_domains = ['amazon.cn']
        # start_urls = ['http://amazon.cn/']
        # custom per-spider settings; note: the variable name must be custom_settings
        custom_settings = {
            'REQUEST_HEADERS': {
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36',
            }
        }

        def start_requests(self):
            r1 = Request(url="https://www.amazon.cn/s/ref=nb_sb_ss_i_3_6?field-keywords=iphone+x",
                         headers=self.settings.get('REQUEST_HEADERS'),)
            yield r1

        def parse(self, response):
            # links to the product detail pages
            detail_urls = response.xpath('//li[contains(@id,"result_")]/div/div[3]/div[1]/a/@href').extract()
            # print(detail_urls)
            for url in detail_urls:
                yield Request(url=url,
                              headers=self.settings.get('REQUEST_HEADERS'),  # request headers
                              callback=self.parse_detail,                    # callback
                              dont_filter=True                               # skip the dupe filter
                              )

        def parse_detail(self, response):  # extract the product details
            # product name, take the first match
            name = response.xpath('//*[@id="productTitle"]/text()').extract_first()
            if name:
                name = name.strip()
            # product price
            price = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first()
            # delivery method; *[1] takes the first child element, i.e. the b tag
            delivery = response.xpath('//*[@id="ddmMerchantMessage"]/*[1]/text()').extract_first()
            print(name, price, delivery)
            # build the standardized item
            item = AmazonItem()  # calling the class gives you an empty, dict-like object
            # add the key-value pairs
            item["name"] = name
            item["price"] = price
            item["delivery"] = delivery
            return item  # must be returned

        def close(self, reason):
            print("spider is closed")

This method is called once the spider has finished running, i.e. after all requests have been processed. It can print some log messages or do cleanup work.

1. Downloader middleware

    class MyDownMiddleware(object):
        def process_request(self, request, spider):
            """
            Called, via every downloader middleware's process_request, when a request is about to be downloaded
            :param request:
            :param spider:
            :return:
                None: continue to the following middlewares and download as usual
                Response object: stop running process_request and start running process_response
                Request object: stop the middleware chain and put the Request back on the scheduler
                raise IgnoreRequest: stop running process_request and start running process_exception
            """
            pass

        def process_response(self, request, response, spider):
            """
            Called with the downloaded response on its way back to the spider
            :param response:
            :param result:
            :param spider:
            :return:
                Response object: handed on to the other middlewares' process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            """
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            """
            Called when the download handler or process_request (a downloader middleware) raises an exception
            :param response:
            :param exception:
            :param spider:
            :return:
                None: hand the exception on to the following middlewares
                Response object: stop running the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            """
            return None
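To make the return-value rules above concrete, here is a small sketch: returning a Response from process_request skips the downloader entirely, while returning None lets the request continue down the chain (the URL check and stub body are made up for illustration):

    from scrapy.http import HtmlResponse

    class ShortCircuitMiddleware(object):
        def process_request(self, request, spider):
            if "do-not-download" in request.url:
                # returning a Response skips the downloader;
                # the middlewares' process_response methods still run on it
                return HtmlResponse(url=request.url, body=b"<html>stub</html>", encoding='utf-8')
            return None  # None: hand the request on to the next middleware / the downloader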

Downloader middleware is mainly used for things like rotating proxy IPs. For example, some sites only allow 3 downloads per minute; exceed that and your IP gets blocked.

At that point, sending more requests is pointless; you have to change the IP address.

The Scrapy architecture has 8 main steps. A problem may occur between step 4 and step 5, and the request then has to be retried.

[Figure 2: Scrapy architecture diagram]

The blue blocks in the middle are the middleware. If the IP switch is done inside the middleware, every request can go out from a different IP address.

For that you need an IP proxy pool: whenever a request passes through the middleware, it takes one IP address from the pool and attaches it to the request.

As long as the IP is different each time, those sites cannot block you.

Doing the IP switch in the middleware is the recommended approach. Why? Right now there is only one Amazon spider under spiders.

Suppose there is also a Taobao spider that needs IP rotation too. What then, implement IP switching inside every single spider?

That would just duplicate code. If the IP switch is done in the middleware instead, every spider gets a fresh IP automatically, no matter how many spiders there are.

So: when the same operation has to be applied to all requests in bulk, use middleware.

This is true not only for IP rotation, but also for cookie pools and account pools (buying a batch of real accounts).
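A sketch of what such a shared middleware could look like (the class name, proxy addresses and priority number are placeholders, not part of the DianShang project yet):

    # middlewares.py
    import random

    class RandomProxyMiddleware(object):
        PROXIES = [
            'http://111.11.228.75:80',
            'http://120.198.243.22:80',
        ]

        def process_request(self, request, spider):
            # every request from every spider passes through here,
            # so each one can go out through a different proxy
            request.meta['proxy'] = random.choice(self.PROXIES)

    # settings.py
    DOWNLOADER_MIDDLEWARES = {
        'DianShang.middlewares.RandomProxyMiddleware': 543,
    }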

In Django middleware, when a return HttpResponse or an exception is encountered, the response goes back the way it came.

In the Scrapy framework, however, the response returns from the innermost layer, and every middleware's process_response gets executed.

Look at the Request object in the blue blocks above: it does the wrapping for you, and the IP switch is wrapped into the request there.

If an error occurs, the request is handed back to the SCHEDULER, i.e. the scheduler.

Example:

Modify middlewares.py and add two downloader middlewares.

The steps are omitted here due to time constraints…

[Figure 3]

The project link is as follows:

https://github.com/jhao104/proxy_pool

For how to use it, please read the README.md first.

The steps are omitted here due to time constraints…
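As a rough idea of how such a pool can be plugged into a downloader middleware (a sketch only; the endpoint, port and response format are assumptions based on the project's README and may differ between versions, so check README.md first):

    # middlewares.py -- assumes the proxy_pool HTTP API is running locally
    import json
    from urllib.request import urlopen

    class ProxyPoolMiddleware(object):
        def process_request(self, request, spider):
            raw = urlopen("http://127.0.0.1:5010/get/").read().decode('utf-8')
            try:
                proxy = json.loads(raw).get("proxy")   # newer versions return JSON
            except ValueError:
                proxy = raw.strip()                    # older versions return the bare proxy string
            if proxy:
                request.meta['proxy'] = "http://%s" % proxy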

2. settings configuration

    # -*- coding: utf-8 -*-

    # Scrapy settings for step8_king project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

    # 1. Bot name
    BOT_NAME = 'step8_king'

    # 2. Spider module paths
    SPIDER_MODULES = ['step8_king.spiders']
    NEWSPIDER_MODULE = 'step8_king.spiders'

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # 3. Client User-Agent request header
    # USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

    # Obey robots.txt rules
    # 4. robots.txt compliance (False = ignore robots.txt)
    # ROBOTSTXT_OBEY = False

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # 5. Number of concurrent requests
    # CONCURRENT_REQUESTS = 4

    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # 6. Download delay, in seconds
    # DOWNLOAD_DELAY = 2

    # The download delay setting will honor only one of:
    # 7. Concurrent requests per domain; the download delay is also applied per domain
    # CONCURRENT_REQUESTS_PER_DOMAIN = 2
    # Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
    # and the download delay is applied per IP instead
    # CONCURRENT_REQUESTS_PER_IP = 3

    # Disable cookies (enabled by default)
    # 8. Whether cookies are enabled; cookies are handled via cookiejar
    # COOKIES_ENABLED = True
    # COOKIES_DEBUG = True

    # Disable Telnet Console (enabled by default)
    # 9. The Telnet console lets you inspect and control the running crawler...
    #    connect with: telnet <ip> <port>, then operate via commands
    # TELNETCONSOLE_ENABLED = True
    # TELNETCONSOLE_HOST = '127.0.0.1'
    # TELNETCONSOLE_PORT = [6023,]

    # 10. Default request headers
    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    #     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #     'Accept-Language': 'en',
    # }

    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    # 11. Item pipelines that process scraped items
    # ITEM_PIPELINES = {
    #     'step8_king.pipelines.JsonPipeline': 700,
    #     'step8_king.pipelines.FilePipeline': 500,
    # }

    # 12. Custom extensions, invoked via signals
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #     # 'step8_king.extensions.MyExtension': 500,
    # }

    # 13. Maximum crawl depth; the current depth can be read from meta; 0 means unlimited
    # DEPTH_LIMIT = 3

    # 14. Crawl order: 0 means depth-first, LIFO (default); 1 means breadth-first, FIFO
    # last in, first out: depth-first
    # DEPTH_PRIORITY = 0
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
    # first in, first out: breadth-first
    # DEPTH_PRIORITY = 1
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

    # 15. Scheduler queue
    # SCHEDULER = 'scrapy.core.scheduler.Scheduler'
    # from scrapy.core.scheduler import Scheduler

    # 16. URL deduplication
    # DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    """
    17. AutoThrottle algorithm
        from scrapy.contrib.throttle import AutoThrottle
        How the automatic throttling is computed:
        1. Get the minimum delay: DOWNLOAD_DELAY
        2. Get the maximum delay: AUTOTHROTTLE_MAX_DELAY
        3. Set the initial download delay: AUTOTHROTTLE_START_DELAY
        4. When a request finishes downloading, take its "connection" time (latency),
           i.e. the time from connecting until the response headers are received
        5. The target concurrency used in the calculation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
    """
    # enable auto throttling
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # AUTOTHROTTLE_MAX_DELAY = 10
    # The average number of requests Scrapy should be sending in parallel to each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    # AUTOTHROTTLE_DEBUG = True

    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    """
    18. HTTP caching
        Caches requests/responses that have already been made so they can be reused later
        from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
        from scrapy.extensions.httpcache import DummyPolicy
        from scrapy.extensions.httpcache import FilesystemCacheStorage
    """
    # whether caching is enabled
    # HTTPCACHE_ENABLED = True
    # caching policy: cache every request; later requests are answered straight from the cache
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
    # caching policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
    # cache expiration time
    # HTTPCACHE_EXPIRATION_SECS = 0
    # cache directory
    # HTTPCACHE_DIR = 'httpcache'
    # HTTP status codes excluded from caching
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    # cache storage backend
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    """
    19. Proxies; by default they are read from environment variables
        from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
        Option 1: use the defaults
            os.environ
            {
                http_proxy: http://root:woshiniba@192.168.11.11:9999/
                https_proxy: http://192.168.11.11:9999/
            }
        Option 2: use a custom downloader middleware
            def to_bytes(text, encoding=None, errors='strict'):
                if isinstance(text, bytes):
                    return text
                if not isinstance(text, six.string_types):
                    raise TypeError('to_bytes must receive a unicode, str or bytes '
                                    'object, got %s' % type(text).__name__)
                if encoding is None:
                    encoding = 'utf-8'
                return text.encode(encoding, errors)

            class ProxyMiddleware(object):
                def process_request(self, request, spider):
                    PROXIES = [
                        {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                        {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                        {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                        {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                        {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                        {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                    ]
                    proxy = random.choice(PROXIES)
                    if proxy['user_pass'] is not None:
                        request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                        encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                        request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                        print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                    else:
                        print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                        request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

            DOWNLOADER_MIDDLEWARES = {
                'step8_king.middlewares.ProxyMiddleware': 500,
            }
    """
    """
    20. HTTPS access
        There are two cases when crawling over HTTPS:
        1. The target site uses a trusted certificate (supported by default)
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
        2. The target site uses a custom certificate
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
            # https.py
            from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
            from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

            class MySSLFactory(ScrapyClientContextFactory):
                def getCertificateOptions(self):
                    from OpenSSL import crypto
                    v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                    v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                    return CertificateOptions(
                        privateKey=v1,   # pKey object
                        certificate=v2,  # X509 object
                        verify=False,
                        method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                    )
        Other:
            related classes
                scrapy.core.downloader.handlers.http.HttpDownloadHandler
                scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
                scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
            related settings
                DOWNLOADER_HTTPCLIENTFACTORY
                DOWNLOADER_CLIENTCONTEXTFACTORY
    """
    """
    21. Spider middleware
        class SpiderMiddleware(object):

            def process_spider_input(self, response, spider):
                '''
                Called for each response after the download finishes, before it is handed to parse
                :param response:
                :param spider:
                :return:
                '''
                pass

            def process_spider_output(self, response, result, spider):
                '''
                Called with the results the spider returns after processing a response
                :param response:
                :param result:
                :param spider:
                :return: must return an iterable of Request or Item objects
                '''
                return result

            def process_spider_exception(self, response, exception, spider):
                '''
                Called when an exception is raised
                :param response:
                :param exception:
                :param spider:
                :return: None to pass the exception on to later middlewares, or an iterable
                         of Request or Item objects handed to the scheduler or the pipelines
                '''
                return None

            def process_start_requests(self, start_requests, spider):
                '''
                Called when the spider starts
                :param start_requests:
                :param spider:
                :return: an iterable of Request objects
                '''
                return start_requests

        Built-in spider middlewares:
            'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
            'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
            'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
            'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
            'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    """
    # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
        # 'step8_king.middlewares.SpiderMiddleware': 543,
    }
    """
    22. Downloader middleware
        class DownMiddleware1(object):

            def process_request(self, request, spider):
                '''
                Called, via every downloader middleware's process_request, when a request is about to be downloaded
                :param request:
                :param spider:
                :return:
                    None: continue to the following middlewares and download as usual
                    Response object: stop running process_request and start running process_response
                    Request object: stop the middleware chain and put the Request back on the scheduler
                    raise IgnoreRequest: stop running process_request and start running process_exception
                '''
                pass

            def process_response(self, request, response, spider):
                '''
                Called with the downloaded response on its way back to the spider
                :param response:
                :param result:
                :param spider:
                :return:
                    Response object: handed on to the other middlewares' process_response
                    Request object: stop the middleware chain; the request is rescheduled for download
                    raise IgnoreRequest: Request.errback is called
                '''
                print('response1')
                return response

            def process_exception(self, request, exception, spider):
                '''
                Called when the download handler or process_request (a downloader middleware) raises an exception
                :param response:
                :param exception:
                :param spider:
                :return:
                    None: hand the exception on to the following middlewares
                    Response object: stop running the remaining process_exception methods
                    Request object: stop the middleware chain; the request is rescheduled for download
                '''
                return None

        Default downloader middlewares:
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
    """
    # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #     'step8_king.middlewares.DownMiddleware1': 100,
    #     'step8_king.middlewares.DownMiddleware2': 500,
    # }
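Item 16 above points DUPEFILTER_CLASS at a custom dedup class. A minimal sketch of what such a class needs to provide (the in-memory set is only illustrative; a real filter could use Redis, a file, etc.):

    # step8_king/duplication.py -- sketch of the custom dupefilter referenced above
    class RepeatUrl(object):
        def __init__(self):
            self.visited_urls = set()  # could be replaced by a shared store such as Redis

        @classmethod
        def from_settings(cls, settings):
            return cls()

        def request_seen(self, request):
            # returning True means "already seen"; the scheduler drops the request
            if request.url in self.visited_urls:
                return True
            self.visited_urls.add(request.url)
            return False

        def open(self):            # called when the spider opens
            pass

        def close(self, reason):   # called when the spider closes
            pass

        def log(self, request, spider):  # called when a duplicate request is filtered
            pass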

3. Amazon project

For the complete code, see:

Download the project code

To be continued…