I. Course Outline

  • Course content
    • Understanding Scrapy log output
    • Scrapy shell
    • Scrapy settings explanation and configuration
    • Scrapy CrawlSpider explanation

II. Class Notes

1. Understanding Scrapy log output


  2019-01-19 09:50:48 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: tencent)
  2019-01-19 09:50:48 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.9.0, Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0 ### modules and platform information the Scrapy framework depends on
  2019-01-19 09:50:48 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tencent', 'NEWSPIDER_MODULE': 'tencent.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['tencent.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'} ### which of the custom settings were applied
  2019-01-19 09:50:48 [scrapy.middleware] INFO: Enabled extensions: ### enabled extensions
  ['scrapy.extensions.corestats.CoreStats',
   'scrapy.extensions.telnet.TelnetConsole',
   'scrapy.extensions.logstats.LogStats']
  2019-01-19 09:50:48 [scrapy.middleware] INFO: Enabled downloader middlewares: ### enabled downloader middlewares
  ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
   'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
   'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
   'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
   'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
   'scrapy.downloadermiddlewares.retry.RetryMiddleware',
   'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
   'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
   'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
   'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
   'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
   'scrapy.downloadermiddlewares.stats.DownloaderStats']
  2019-01-19 09:50:48 [scrapy.middleware] INFO: Enabled spider middlewares: ### enabled spider middlewares
  ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
   'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
   'scrapy.spidermiddlewares.referer.RefererMiddleware',
   'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
   'scrapy.spidermiddlewares.depth.DepthMiddleware']
  2019-01-19 09:50:48 [scrapy.middleware] INFO: Enabled item pipelines: ### enabled item pipelines
  ['tencent.pipelines.TencentPipeline']
  2019-01-19 09:50:48 [scrapy.core.engine] INFO: Spider opened ### the spider starts crawling
  2019-01-19 09:50:48 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
  2019-01-19 09:50:48 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
  2019-01-19 09:50:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://hr.tencent.com/robots.txt> (referer: None) ### fetches the robots.txt file first
  2019-01-19 09:50:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://hr.tencent.com/position.php?&start=#a0> (referer: None) ### request to the start_url
  2019-01-19 09:50:51 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'hr.tencent.com': <GET https://hr.tencent.com/position.php?&start=> ### warning: requests yielded by the spider pass through the spider middlewares on their way to the engine; this URL falls outside allowed_domains, so OffsiteMiddleware filtered it out
  2019-01-19 09:50:51 [scrapy.core.engine] INFO: Closing spider (finished) ### the spider closes
  2019-01-19 09:50:51 [scrapy.statscollectors] INFO: Dumping Scrapy stats: ### statistics for this crawl
  {'downloader/request_bytes': 630,
   'downloader/request_count': 2,
   'downloader/request_method_count/GET': 2,
   'downloader/response_bytes': 4469,
   'downloader/response_count': 2,
   'downloader/response_status_count/200': 2,
   'finish_reason': 'finished',
   'finish_time': datetime.datetime(2019, 1, 19, 1, 50, 51, 558634),
   'log_count/DEBUG': 4,
   'log_count/INFO': 7,
   'offsite/domains': 1,
   'offsite/filtered': 12,
   'request_depth_max': 1,
   'response_received_count': 2,
   'scheduler/dequeued': 1,
   'scheduler/dequeued/memory': 1,
   'scheduler/enqueued': 1,
   'scheduler/enqueued/memory': 1,
   'start_time': datetime.datetime(2019, 1, 19, 1, 50, 48, 628465)}
  2019-01-19 09:50:51 [scrapy.core.engine] INFO: Spider closed (finished)
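
The amount of log output can be tuned in settings.py. A minimal sketch using the standard LOG_LEVEL and LOG_FILE settings (the file name 'tencent.log' is just an illustration):

  # settings.py
  # only print WARNING and above instead of the default INFO/DEBUG noise
  LOG_LEVEL = 'WARNING'
  # optionally write the log to a file instead of the console
  LOG_FILE = 'tencent.log'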

2. Scrapy shell

The Scrapy shell is an interactive console that lets us try out and debug code without starting a spider; it can also be used to test XPath expressions.

Usage:
scrapy shell https://www.baidu.com/

  1. response.url: the URL of the current response
  2. response.request.url: the URL of the request that produced the current response
  3. response.headers: the response headers
  4. response.body: the response body, i.e. the HTML, bytes by default
  5. response.request.headers: the request headers of the current response
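
A quick illustration of a shell session (the printed values are illustrative, not actual captured output):

  $ scrapy shell https://www.baidu.com/
  >>> response.url
  'https://www.baidu.com/'
  >>> response.status          # HTTP status code of the response
  200
  >>> response.xpath('//title/text()').extract_first()   # test an XPath expression interactively
  '百度一下,你就知道'
  >>> response.request.headers['User-Agent']             # headers of the request that was sent
  b'Scrapy/1.5.1 (+https://scrapy.org)'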

3. Scrapy settings explanation and configuration

Why do we need a settings file:

The settings file holds shared variables (for example the database host, account name, password, etc.)

It makes those values easy for you and others to change

By convention, setting names are written in all uppercase, e.g. SQL_HOST = '192.168.0.1'

Detailed description of the settings file: https://www.cnblogs.com/cnkai/p/7399573.html
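
A minimal sketch of defining a shared variable in settings.py and reading it back inside a spider (SQL_HOST, SQL_PORT and DemoSpider are made-up names for illustration):

  # settings.py
  SQL_HOST = '192.168.0.1'
  SQL_PORT = 3306

  # spiders/demo.py
  import scrapy

  class DemoSpider(scrapy.Spider):
      name = 'demo'
      start_urls = ['https://example.com/']

      def parse(self, response):
          # self.settings is available once the spider is bound to its crawler
          host = self.settings.get('SQL_HOST')
          port = self.settings.getint('SQL_PORT', 3306)
          self.logger.info('database target: %s:%s', host, port)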

4. Scrapy CrawlSpider explanation

In the earlier code we spent a lot of time hunting for the next-page URL or the detail-page URLs. Can this process be made simpler?

Idea:
1. Extract all the URLs contained in the li tags from the response
2. Automatically build the corresponding requests and send them to the engine

Goal: understand how CrawlSpider is used by building a spider with it

Command to generate a CrawlSpider: scrapy genspider -t crawl <spider_name> <domain>
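
Running that command produces roughly the following skeleton (details vary slightly between Scrapy versions; 'myspider' and 'example.com' stand for whatever name and domain you pass in):

  from scrapy.linkextractors import LinkExtractor
  from scrapy.spiders import CrawlSpider, Rule


  class MyspiderSpider(CrawlSpider):
      name = 'myspider'
      allowed_domains = ['example.com']
      start_urls = ['http://example.com/']

      rules = (
          Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
      )

      def parse_item(self, response):
          item = {}
          return item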

4.1 LinkExtractors (link extractors)

With a LinkExtractor the programmer no longer has to pick out the desired URLs and send the requests by hand. That work is delegated to the LinkExtractor: it finds every URL on the crawled pages that satisfies the rules, so the spider can follow them automatically (a standalone usage sketch follows the parameter list below).

  class scrapy.linkextractors.LinkExtractor(
      allow = (),
      deny = (),
      allow_domains = (),
      deny_domains = (),
      deny_extensions = None,
      restrict_xpaths = (),
      tags = ('a', 'area'),
      attrs = ('href',),
      canonicalize = True,
      unique = True,
      process_value = None
  )

Main parameters:

  • allow: allowed URLs. Every URL that matches this regular expression is extracted.
  • deny: denied URLs. Any URL that matches this regular expression is not extracted.
  • allow_domains: allowed domains. Only URLs under the domains listed here are extracted.
  • deny_domains: denied domains. URLs under the domains listed here are never extracted.
  • restrict_xpaths: restricting XPaths. Links are only extracted from the page regions selected by these XPaths; works together with allow to filter links.
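
A minimal standalone sketch of a LinkExtractor (the allow pattern and domain are illustrative; inside a CrawlSpider you normally wrap it in a Rule instead):

  from scrapy.linkextractors import LinkExtractor

  def extract_position_links(response):
      le = LinkExtractor(allow=r'position_detail\.php',
                         allow_domains=('hr.tencent.com',))
      # extract_links() returns scrapy.link.Link objects with .url and .text
      for link in le.extract_links(response):
          print(link.url, link.text)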

4.2 The Rule class

The class that defines the crawling rules of a CrawlSpider.

  class scrapy.spiders.Rule(
      link_extractor,
      callback = None,
      cb_kwargs = None,
      follow = None,
      process_links = None,
      process_request = None
  )

Main parameters:

  • link_extractor: a LinkExtractor object that defines which links to extract.
  • callback: the callback to run for URLs that match this rule. Because CrawlSpider uses parse internally for its own logic, never use parse as (or override it with) your own callback.
  • follow: whether links extracted from responses matched by this rule should themselves be followed.
  • process_links: a function that receives the links extracted by link_extractor; use it to filter out links you do not want to crawl.

Example: a CrawlSpider for the Sunshine Hotline site (sun0769.com):

  from scrapy.linkextractors import LinkExtractor
  from scrapy.spiders import CrawlSpider, Rule


  class YgSpider(CrawlSpider):
      name = 'yg'
      allowed_domains = ['sun0769.com']
      start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=0']

      rules = (
          # detail pages: hand them to parse_item
          Rule(LinkExtractor(allow=r'wz.sun0769.com/html/question/201811/\d+\.shtml'), callback='parse_item'),
          # list pages: keep following the pagination links
          Rule(LinkExtractor(allow=r'http:\/\/wz.sun0769.com/index.php/question/questionType\?type=4&page=\d+'), follow=True),
      )

      def parse_item(self, response):
          item = {}
          item['content'] = response.xpath('//div[@class="c1 text14_2"]//text()').extract()
          print(item)
4.3 Case study: crawling the WeChat mini-program community

http://www.wxapp-union.com/portal.php?mod=list&catid=2&page=1
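
A possible starting point for this case study is sketched below; the article URL pattern and the XPath selector are assumptions about the site layout, not verified values:

  from scrapy.linkextractors import LinkExtractor
  from scrapy.spiders import CrawlSpider, Rule


  class WxappSpider(CrawlSpider):
      name = 'wxapp'
      allowed_domains = ['wxapp-union.com']
      start_urls = ['http://www.wxapp-union.com/portal.php?mod=list&catid=2&page=1']

      rules = (
          # pagination links on the list pages: just follow them
          Rule(LinkExtractor(allow=r'mod=list&catid=2&page=\d+'), follow=True),
          # article detail pages (pattern is an assumption about the site)
          Rule(LinkExtractor(allow=r'article-\d+-1\.html'), callback='parse_detail'),
      )

      def parse_detail(self, response):
          yield {
              'title': response.xpath('//h1/text()').extract_first(),  # selector is an assumption
              'url': response.url,
          }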