This article uses the ip138 spider from the previous post as its example; the spider code itself will not be repeated here.
方法一 / Method 1: Modify the settings file
The easiest way to set header data for a spider is to edit the settings.py configuration file.
To change the User-Agent:
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1'
To change the full set of headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.9,en-US;q=0.5,en;q=0.3',
    'Accept-Encoding': 'gzip, deflate',
    'Content-Length': '0',
    'Connection': 'keep-alive',
}
方法二 / Method 2: Set headers in the spider
Besides settings.py, headers can also be set for an individual spider via custom_settings, for example:
class Ip138Spider(scrapy.Spider):
    custom_settings = {
        'DEFAULT_REQUEST_HEADERS': {
            'User-Agent': 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'zh-CN,zh;q=0.9,en-US;q=0.5,en;q=0.3',
            'Accept-Encoding': 'gzip, deflate',
            'Content-Length': '0',
            'Connection': 'keep-alive',
        }
    }
Headers can also be set per request, when the Request object is created:
# -*- coding: utf-8 -*-
import scrapy


class Ip138Spider(scrapy.Spider):
    name = 'ip138'
    allowed_domains = ['www.ip138.com', '2018.ip138.com']
    start_urls = ['http://2018.ip138.com/ic.asp']
    headers = {
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.9,en-US;q=0.5,en;q=0.3',
        'Accept-Encoding': 'gzip, deflate',
        'Content-Length': '0',
        'Connection': 'keep-alive',
    }

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, headers=self.headers, callback=self.parse)

    def parse(self, response):
        print("*" * 40)
        print("response text: %s" % response.text)
        print("response headers: %s" % response.headers)
        print("response meta: %s" % response.meta)
        print("request headers: %s" % response.request.headers)
        print("request cookies: %s" % response.request.cookies)
        print("request meta: %s" % response.request.meta)
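When headers are set in several places, the per-request headers take precedence over DEFAULT_REQUEST_HEADERS. The effect can be illustrated in plain Python (this is a dict-merge sketch with made-up values, not Scrapy internals):

```python
# Illustration only: per-request headers override the defaults,
# much like a later dict overrides an earlier one in a merge.
default_headers = {
    'User-Agent': 'DefaultAgent/1.0',   # hypothetical placeholder value
    'Accept': 'text/html',
}
request_headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)',
}

effective = {**default_headers, **request_headers}
print(effective['User-Agent'])  # the per-request User-Agent wins
```

Keys missing from the per-request dict (Accept here) still fall back to the defaults.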
方法三 / Method 3: Set headers in a middleware
Headers can also be set in a DownloaderMiddleware:
class TutorialDownloaderMiddleware(object):
    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', 'Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5')
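Note that setdefault only fills in the User-Agent when the request does not already carry one, so a value set in the spider survives the middleware. In plain-dict terms (hypothetical values for illustration):

```python
# setdefault semantics: the fallback is used only when the key is missing.
headers = {'User-Agent': 'SpiderSet/1.0'}          # already set by the spider
headers.setdefault('User-Agent', 'Fallback/1.0')   # ignored: key exists
headers.setdefault('Accept-Language', 'en-US')     # applied: key was missing
print(headers)
```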
Of course, we can also replace the headers wholesale:
from scrapy.http.headers import Headers

headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.9,en-US;q=0.5,en;q=0.3',
    'Accept-Encoding': 'gzip, deflate',
    'Content-Length': '0',
    'Connection': 'keep-alive',
}


class TutorialDownloaderMiddleware(object):
    def process_request(self, request, spider):
        request.headers = Headers(headers)
Enable the middleware in settings.py:
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'tutorial.middlewares.TutorialDownloaderMiddleware': 1,
}
Dynamic User-Agent
To rotate the User-Agent dynamically, configure a list of User-Agent strings in settings.py, import it in the middleware, and pick one at random for each request.
The implementation looks like this:
settings.py
USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
]
middlewares.py
import random

from tutorial.settings import USER_AGENT_LIST


class TutorialDownloaderMiddleware(object):
    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(USER_AGENT_LIST))
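The selection itself is just random.choice over the configured list; a minimal standalone check (with a two-entry stand-in for the list in settings.py) looks like this:

```python
import random

# Short stand-in for the USER_AGENT_LIST configured in settings.py.
USER_AGENT_LIST = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)',
]

# Each call draws uniformly from the list; every draw is a valid entry.
ua = random.choice(USER_AGENT_LIST)
print(ua)
```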
方法四 / Method 4: Use fake-useragent to rotate the User-Agent
Install fake-useragent:
pip install fake-useragent
The implementation looks like this:
middlewares.py
from fake_useragent import UserAgent


class RandomUserAgentMiddleware(object):
    def __init__(self, crawler):
        super(RandomUserAgentMiddleware, self).__init__()
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get('RANDOM_UA_TYPE', 'random')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        def get_ua():
            return getattr(self.ua, self.ua_type)

        request.headers.setdefault('User-Agent', get_ua())
Enable the middleware in settings.py:
RANDOM_UA_TYPE = 'random'
DOWNLOADER_MIDDLEWARES = {
    'tutorial.middlewares.RandomUserAgentMiddleware': 543,
}
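The RANDOM_UA_TYPE setting works because the middleware resolves it with getattr() against the UserAgent object. The dispatch mechanism can be sketched with a hypothetical stub class (StubUserAgent below is not the real fake_useragent API, just an attribute-bearing stand-in):

```python
# Hypothetical stand-in for fake_useragent.UserAgent: browser families are
# exposed as attributes, so the middleware can dispatch on a settings string
# via getattr() instead of hard-coding one family.
class StubUserAgent(object):
    random = 'StubAgent/random'
    chrome = 'StubAgent/chrome'
    firefox = 'StubAgent/firefox'


ua = StubUserAgent()
ua_type = 'chrome'  # plays the role of the RANDOM_UA_TYPE setting
selected = getattr(ua, ua_type)
print(selected)
```

Changing the setting string switches the attribute that is looked up, with no change to the middleware code.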
