1. The spider

    import scrapy

    class ProxyrandomSpider(scrapy.Spider):
        name = 'proxyRandom'

        def start_requests(self):
            # httpbin.org/get echoes the request back, so the response shows
            # which IP the request actually arrived from
            yield scrapy.Request('http://httpbin.org/get', callback=self.parse)

        def parse(self, response):
            print(response.text)
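
For reference, http://httpbin.org/get returns a small JSON document whose origin field reports the IP the request came from. With the middleware from step 2 enabled, origin should show one of the proxy IPs instead of your own (the values below are illustrative, not captured output):

    {
      "args": {},
      "headers": { ... },
      "origin": "117.88.177.0",
      "url": "http://httpbin.org/get"
    }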

2. Write the random-proxy middleware

    import random

    class IpRandomProxyMiddleware(object):
        # List of working proxy IPs to rotate through
        proxy = [
            '117.88.177.0:3000', '117.45.139.179:9006',
        ]

        def process_request(self, request, spider):
            # Pick a proxy at random for every outgoing request
            proxy = random.choice(self.proxy)
            request.meta['proxy'] = 'http://' + proxy
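
As a possible refinement (an assumption, not part of the original steps), the proxy list can be read from settings.py instead of being hardcoded, using Scrapy's standard from_crawler hook; PROXY_LIST below is a hypothetical setting name:

    import random

    class IpRandomProxyMiddleware(object):
        def __init__(self, proxies):
            self.proxy = proxies

        @classmethod
        def from_crawler(cls, crawler):
            # PROXY_LIST is a hypothetical key you would add to settings.py,
            # e.g. PROXY_LIST = ['117.88.177.0:3000', '117.45.139.179:9006']
            return cls(crawler.settings.getlist('PROXY_LIST'))

        def process_request(self, request, spider):
            # Same behavior as above: one random proxy per request
            request.meta['proxy'] = 'http://' + random.choice(self.proxy)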

3. Modify settings.py

    DOWNLOADER_MIDDLEWARES = {
        # Enable the custom middleware; the 'proxy.' prefix means the
        # Scrapy project itself is named proxy
        'proxy.middlewares.IpRandomProxyMiddleware': 200,
        # A value of None disables an entry; this turns off the
        # project-template ProxyDownloaderMiddleware
        'proxy.middlewares.ProxyDownloaderMiddleware': None,
    }
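
With all three pieces in place, running scrapy crawl proxyRandom should print httpbin's JSON response, and its origin field should match one of the two proxy IPs, confirming that requests are being routed through the middleware.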