Downloader Middlewares

Typical uses:

  • Rotating proxy IPs
  • Rotating Cookies
  • Rotating User-Agents
  • Automatic retries

Proxy Middleware

When a Scrapy project is created, a middlewares.py file is generated automatically; the plural "s" hints that this single file can hold multiple middlewares. Put simply, one class corresponds to one middleware.

    import random


    class ProxyMiddleware:
        def process_request(self, request, spider):
            # Pick a random proxy from the PROXIES list in settings.py;
            # spider.settings exposes the project settings at runtime
            proxy = random.choice(spider.settings.getlist('PROXIES'))
            request.meta['proxy'] = proxy

Of course, the corresponding options have to be set in settings.py:

    # ↓↓ add the proxy options
    PROXIES = [
        "http://112.87.69.135:9999",
        "https://125.123.153.131:3000",
    ]

    # ↓↓ enable the middlewares (uncomment this dict)
    DOWNLOADER_MIDDLEWARES = {
        # Generated with the project by default. The number is the order
        # value: middlewares with lower numbers sit closer to the engine,
        # so their process_request is called earlier.
        'projectName.middlewares.ProjectnameDownloaderMiddleware': 543,
        'projectName.middlewares.ProxyMiddleware': 544,
    }

You can check which IP address your requests go out with here; the same site also lets you append /10 to the URL to jump to page 10.

Alternatively, you can scrape the free-proxy sites and build a small pool of your own, then draw a random proxy from it. (If you have the budget, simply buy paid proxies; they die far less often.)

  • Write a small crawler that scrapes the major free-proxy sites, verifies each proxy, and saves the working ones to a database or Redis
  • In process_request, fetch one proxy at random from that store (see the sketch after this list)
  • Periodically re-verify the pool and purge the proxies that have died
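
A minimal sketch of the middle step, assuming the verified proxies are kept in a Redis set named proxy_pool (the key name and connection parameters here are assumptions, not part of the original setup):

    import redis


    class RedisProxyMiddleware:
        def __init__(self):
            # Connection details are placeholders; adjust to your deployment
            self.client = redis.StrictRedis(host='localhost', port=6379, db=0)

        def process_request(self, request, spider):
            # SRANDMEMBER returns a random member of the set, or None if empty
            proxy = self.client.srandmember('proxy_pool')
            if proxy:
                request.meta['proxy'] = proxy.decode('utf-8')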

User-Agent Middleware

Almost the same recipe as the proxy middleware:

    import random


    class UAMiddleware:
        def process_request(self, request, spider):
            # Pick a random User-Agent from USER_AGENT_LIST in settings.py
            ua = random.choice(spider.settings.getlist('USER_AGENT_LIST'))
            request.headers['User-Agent'] = ua

Then enable it in settings.py just like before. Visiting here shows the User-Agent you are sending, and the page-10 URL trick works the same way.
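
The activation entry follows the same pattern as the proxy middleware; a minimal sketch (the priority value 545 is an arbitrary choice):

    DOWNLOADER_MIDDLEWARES = {
        'projectName.middlewares.ProjectnameDownloaderMiddleware': 543,
        'projectName.middlewares.ProxyMiddleware': 544,
        'projectName.middlewares.UAMiddleware': 545,
    }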

A pool of commonly used User-Agents, for reference:

    USER_AGENT_LIST = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60',
        'Opera/8.0 (Windows NT 5.1; U; en)',
        'Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0',
        'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)',
    ]

Cookies Middleware

Cookies are mainly used to keep a session logged in. You can store a batch of Cookies in Redis and, in the middleware's process_request method, attach one to the request:

    request.cookies = cookies
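
Fleshed out, a minimal sketch, assuming each entry in a Redis list named cookies_pool is a JSON-encoded cookie dict (the key name and storage format are assumptions):

    import json
    import random

    import redis


    class CookiesMiddleware:
        def __init__(self):
            # Connection details are placeholders; adjust to your deployment
            self.client = redis.StrictRedis(host='localhost', port=6379, db=0)

        def process_request(self, request, spider):
            # Pull every stored cookie set and pick one at random
            raw = random.choice(self.client.lrange('cookies_pool', 0, -1))
            request.cookies = json.loads(raw)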

Integrating Selenium as a Middleware

Example code:

    import time

    from scrapy.http import HtmlResponse
    from selenium import webdriver


    class SeleniumMiddleware:
        def __init__(self):
            # Path to the chromedriver binary (selenium 3 style argument)
            self.driver = webdriver.Chrome('./chromedriver')

        def process_request(self, request, spider):
            # Only take over requests from the spider that needs JS rendering
            if spider.name == 'seleniumSpider':
                self.driver.get(request.url)
                time.sleep(2)  # crude wait for JavaScript to finish
                body = self.driver.page_source
                # Returning a Response here bypasses the regular downloader
                return HtmlResponse(self.driver.current_url, body=body,
                                    encoding='utf-8', request=request)
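
As with the other middlewares, this one takes effect only after it is added to DOWNLOADER_MIDDLEWARES. Because process_request returns an HtmlResponse, Scrapy skips the normal download for matching requests and hands the rendered page straight to the spider.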