tags: [Python, Web Scraping]
categories: [Python, Web Scraping]


Before we begin

Notes from learning web scraping: the code I used to crawl problem-statement text, the issues I ran into, and so on.

Reading a txt file

```python
import os
import re

def read_file_as_str(file_path):
    # Make sure the path points to an existing file
    if not os.path.isfile(file_path):
        raise FileNotFoundError(file_path + " does not exist")
    with open(file_path) as f:
        return f.read()

file = read_file_as_str('test.txt')
# Split on "number#" markers; drop the empty chunk before the first marker
re.split(r'\d+#', file)[1:]
```
```
['Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Timeout\n',
 'Calculate a+b\n',
 'Problems involving the computation of exact values of very large magnitude and precision are common. For example, the computation of the national debt is a taxing experience for many computer systems.\n\nThis problem requires that you write a program to compute the exact value of Rn where R is a real number ( 0.0 < R < 99.999 ) and n is an integer such that 0 < n <= 25.\n',
 "Businesses like to have memorable telephone numbers. One way to make a telephone number memorable is to have it spell a memorable word or phrase. For example, you can call the University of Waterloo by dialing the memorable TUT-GLOP. Sometimes only part of the number is used to spell a word. When you get back to your hotel tonight you can order a pizza from Gino's by dialing 310-GINO. Another way to make a telephone number memorable is to group the digits in a memorable way. You could order your pizza from Pizza Hut by calling their ``three tens'' number 3-10-10-10.\n\nThe standard form of a telephone number is seven decimal digits with a hyphen between the third and fourth digits (e.g. 888-1200). The keypad of a phone supplies the mapping of letters to numbers, as follows:\n\nA, B, and C map to 2\nD, E, and F map to 3\nG, H, and I map to 4\nJ, K, and L map to 5\nM, N, and O map to 6\nP, R, and S map to 7\nT, U, and V map to 8\nW, X, and Y map to 9\n\nThere is no mapping for Q or Z. Hyphens are not dialed, and can be added and removed as necessary. The standard form of TUT-GLOP is 888-4567, the standard form of 310-GINO is 310-4466, and the standard form of 3-10-10-10 is 310-1010.\n\nTwo telephone numbers are equivalent if they have the same standard form. (They dial the same number.)\n\nYour company is compiling a directory of telephone numbers from local businesses. As part of the quality control process you want to check that no two (or more) businesses in the directory have the same telephone number.\n",
 "How far can you make a stack of cards overhang a table? If you have one card, you can create a maximum overhang of half a card length. (We're assuming that the cards must be perpendicular to the table.) With two cards you can make the top card overhang the bottom one by half a card length, and the bottom one overhang the table by a third of a card length, for a total maximum overhang of 1/2 + 1/3 = 5/6 card lengths. In general you can make n cards overhang by 1/2 + 1/3 + 1/4 + ... + 1/(n + 1) card lengths, where the top card overhangs the second by 1/2, the second overhangs tha third by 1/3, the third overhangs the fourth by 1/4, etc., and the bottom card overhangs the table by 1/(n + 1). This is illustrated in the figure below.\n",
 "Larry graduated this year and finally has a job. He's making a lot of money, but somehow never seems to have enough. Larry has decided that he needs to grab hold of his financial portfolio and solve his financing problems. The first step is to figure out what's been going on with his money. Larry has his bank account statements and wants to see how much money he has. Help Larry by writing a program to take his closing balance from each of the past twelve months and calculate his average account balance.\n",
 'Fred Mapper is considering purchasing some land in Louisiana to build his house on. In the process of investigating the land, he learned that the state of Louisiana is actually shrinking by 50 square miles each year, due to erosion caused by the Mississippi River. Since Fred is hoping to live in this house the rest of his life, he needs to know if his land is going to be lost to erosion.\n\nAfter doing more research, Fred has learned that the land that is being lost forms a semicircle. This semicircle is part of a circle centered at (0,0), with the line that bisects the circle being the X axis. Locations below the X axis are in the water. The semicircle has an area of 0 at the beginning of year 1. (Semicircle illustrated in the Figure.)\n',
 "Some people believe that there are three cycles in a person's life that start the day he or she is born. These three cycles are the physical, emotional, and intellectual cycles, and they have periods of lengths 23, 28, and 33 days, respectively. There is one peak in each period of a cycle. At the peak of a cycle, a person performs at his or her best in the corresponding field (physical, emotional or mental). For example, if it is the mental curve, thought processes will be sharper and concentration will be easier.\nSince the three cycles have different periods, the peaks of the three cycles generally occur at different times. We would like to determine when a triple peak occurs (the peaks of all three cycles occur in the same day) for any person. For each cycle, you will be given the number of days from the beginning of the current year at which one of its peaks (not necessarily the first) occurs. You will also be given a date expressed as the number of days from the beginning of the current year. You task is to determine the number of days from the given date to the next triple peak. The given date is not counted. For example, if the given date is 10 and the next triple peak occurs on day 12, the answer is 2, not 3. If a triple peak occurs on the given date, you should give the number of days to the next occurrence of a triple peak.\n"]
```

Crawling problem statements from POJ

```python
import requests                # library for fetching web pages
from bs4 import BeautifulSoup  # library for parsing HTML

# Build the request headers
headers = {
    'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36",
}
url = 'http://tjj.changsha.gov.cn/tjxx/tjsj/tjgb/202010/t20201016_9060722.html'
response = requests.request("GET", url, headers=headers)  # fetch the page
response.encoding = response.apparent_encoding  # fixes mojibake when the page encoding is misdetected
soup = BeautifulSoup(response.text, 'html.parser')
bf = soup.find('div', class_='view TRS_UEDITOR trs_paper_default trs_web')
bf
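```

If only the visible text of the matched div is needed, BeautifulSoup's `get_text()` extracts it; a small follow-up, guarding for the case where the div is absent:

```python
# soup.find returns None when nothing matches, so guard before extracting
if bf is not None:
    print(bf.get_text(strip=True))
```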
```python
import requests
from bs4 import BeautifulSoup
import bs4
import os
import random  # needed for the random sleep below (missing in the original)
from time import sleep

url_list = []
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36'}  # request headers

def url_all():
    for i in range(1000, 4055):
        url = 'http://poj.org/problem?id=' + str(i)
        url_list.append(url)

def save_path():
    s_path = './text/'
    if not os.path.isdir(s_path):
        os.mkdir(s_path)
    return s_path

def save_text(urls, s_path):  # save the text description of every problem
    num = 1000
    for url in urls:
        response = requests.get(url, headers=headers)  # fetch the page
        response.encoding = response.apparent_encoding  # fixes mojibake when the page encoding is misdetected
        soup = BeautifulSoup(response.text, 'html.parser')
        try:
            bf = soup.find('div', class_='ptx')
            if isinstance(bf, bs4.element.Tag):
                text = str(num) + '#' + str(bf.text) + '\n'
                print('Problem ' + str(num) + ' crawled!')
                try:
                    file = open(s_path + 'poj_question_text.txt', 'a')
                    file.write(text)
                    file.close()
                except BaseException as a:
                    print(a)
        except BaseException as b:
            print(b)
        num += 1
        sleep(random.randint(0, 3))  # random pause to avoid hammering the server
    print('--------------- all pages crawled ---------------')

url_all()
save_text(url_list, save_path())
```
```
1000#Calculate a+b
This problem requires that you write a program to compute the exact value of Rn where R is a real number ( 0.0 < R < 99.999 ) and n is an integer such that 0 < n <= 25. perience for many computer systems.
Your company is compiling a directory of telephone numbers from local businesses. As part of the quality control process you want to check that no two (or more) businesses in the directory have the same telephone number. is 310-1010.LOP. Sometimes only part of the number is used to spell a word. When you get back to your hotel tonight you can order a pizza from Gino's by dialing 310-GINO. Another way to make a telephone number memorable is to group the digits in a memorable way. You could order your pizza from Pizza Hut by calling their ``three tens'' number 3-10-10-10.
1003#How far can you make a stack of cards overhang a table? If you have one card, you can create a maximum overhang of half a card length. (We're assuming that the cards must be perpendicular to the table.) With two cards you can make the top card overhang the bottom one by half a card length, and the bottom one overhang the table by a third of a card length, for a total maximum overhang of 1/2 + 1/3 = 5/6 card lengths. In general you can make n cards overhang by 1/2 + 1/3 + 1/4 + ... + 1/(n + 1) card lengths, where the top card overhangs the second by 1/2, the second overhangs tha third by 1/3, the third overhangs the fourth by 1/4, etc., and the bottom card overhangs the table by 1/(n + 1). This is illustrated in the figure below.
1004#Larry graduated this year and finally has a job. He's making a lot of money, but somehow never seems to have enough. Larry has decided that he needs to grab hold of his financial portfolio and solve his financing problems. The first step is to figure out what's been going on with his money. Larry has his bank account statements and wants to see how much money he has. Help Larry by writing a program to take his closing balance from each of the past twelve months and calculate his average account balance.
After doing more research, Fred has learned that the land that is being lost forms a semicircle. This semicircle is part of a circle centered at (0,0), with the line that bisects the circle being the X axis. Locations below the X axis are in the water. The semicircle has an area of 0 at the beginning of year 1. (Semicircle illustrated in the Figure.)f his land is going to be lost to erosion.
Since the three cycles have different periods, the peaks of the three cycles generally occur at different times. We would like to determine when a triple peak occurs (the peaks of all three cycles occur in the same day) for any person. For each cycle, you will be given the number of days from the beginning of the current year at which one of its peaks (not necessarily the first) occurs. You will also be given a date expressed as the number of days from the beginning of the current year. You task is to determine the number of days from the given date to the next triple peak. The given date is not counted. For example, if the given date is 10 and the next triple peak occurs on day 12, the answer is 2, not 3. If a triple peak occurs on the given date, you should give the number of days to the next occurrence of a triple peak.
You are responsible for cataloguing a sequence of DNA strings (sequences containing only the four letters A, C, G, and T). However, you want to catalog them, not in alphabetical order, but rather in order of ``sortedness'', from ``most sorted'' to ``least sorted''. All the strings are of the same length.easure is called the number of inversions in the sequence. The sequence ``AACEDGG'' has only one inversion (E and D)---it is nearly sorted---while the sequence ``ZWQM'' has 6 inversions (it is as unsorted as can be---exactly the reverse of sorted).
Help professor M. A. Ya and write a program for him to convert the dates from the Haab calendar to the Tzolkin calendar. hus, the first day was: b, 6 canac, 7 ahau, and again in the next period 8 imix, 9 ik, 10 akbal . . .r and the name of the day. They used 20 names: imix, ik, akbal, kan, chicchan, cimi, manik, lamat, muluk, ok, chuen, eb, ben, ix, mem, cib, caban, eznab, canac, ahau and 13 numbers; both in cycles. cumhu. Instead of having names, the days of the months were denoted by numbers starting from 0 to 19. The last month of Haab was called uayet and had 5 days denoted by numbers 0, 1, 2, 3, 4. The Maya believed that this month was unlucky, the court of justice was not in session, the trade stopped, people did not even sweep the floor.
Images contain 2 to 1,000,000,000 (109) pixels. All images are encoded using run length encoding (RLE). This is a sequence of pairs, containing pixel value (0-255) and run length (1-109). Input images have at most 1,000 of these pairs. Successive pairs have different pixel values. All lines in an image contain the same number of pixels.
To save money, the RPS would like to issue as few duplicate stamps as possible (given the constraint that they want to issue as many different types). Further, the RPS won't sell more than four stamps at a time.e RPS has been known to issue several stamps of the same denomination in order to please customers (these count as different types, even though they are the same denomination). The maximum number of different types of stamps issued at any time is twenty-five.
--------------- all pages crawled ---------------
```
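Several of the texts dumped from test.txt earlier were just 'Timeout', and the crawl loop above calls `requests.get` without a timeout, so one stuck connection can hang the whole run. A minimal retry sketch, assuming nothing beyond requests itself (the `fetch` helper and its parameters are my own, not part of the original code):

```python
import requests

def fetch(url, headers, retries=3, timeout=10):
    # Hypothetical helper: retry the GET a few times before giving up
    for attempt in range(retries):
        try:
            return requests.get(url, headers=headers, timeout=timeout)
        except requests.RequestException as e:
            print('attempt', attempt + 1, 'failed:', e)
    return None
```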

Appending text to a file

```python
# Open a file in append mode
fo = open("test.txt", "a")
fo.write("www.runoob.com!\nVery good site!\n")
# Close the opened file
fo.close()
```
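The same append can also be written with a context manager, which guarantees the file is closed even if the write raises:

```python
# Equivalent append using with, so close() happens automatically
with open("test.txt", "a") as fo:
    fo.write("www.runoob.com!\nVery good site!\n")
```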

Checking what your own IP is

```python
import requests

url = 'http://httpbin.org/get'  # this endpoint echoes back the caller's IP
headers = {'User-Agent': 'Mozilla/5.0'}
# Route the request through a proxy
proxies = {'http': 'http://122.4.48.145:9999', 'https': 'https://122.4.48.145:9999'}
html = requests.get(url, headers=headers, proxies=proxies, timeout=5).text
print(html)
```
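Since httpbin returns JSON, the reported address can also be read field by field instead of printing the raw body; the `origin` key holds the IP the server saw:

```python
import requests

proxies = {'http': 'http://122.4.48.145:9999', 'https': 'https://122.4.48.145:9999'}
resp = requests.get('http://httpbin.org/get', headers={'User-Agent': 'Mozilla/5.0'},
                    proxies=proxies, timeout=5)
print(resp.json()['origin'])  # the IP httpbin saw; it should match the proxy, not your own
```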

Generating a random User-Agent with fake_useragent

```python
from fake_useragent import UserAgent
ua = UserAgent()
ua.ie  # a random Internet Explorer User-Agent string
```
```
'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; Media Center PC 6.0; InfoPath.3; MS-RTC LM 8; Zune 4.7'
```
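In practice the usual pattern is `ua.random`, which draws a random string across browser families and goes straight into the request headers, as the later examples in this post do:

```python
from fake_useragent import UserAgent

ua = UserAgent()
headers = {'User-Agent': ua.random}  # a different UA string on each access
```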

Building your own proxy IP pool

Crawl IPs from Kuaidaili, test each one for usability, and build a proxy IP pool of your own that can be refreshed at any time for scraping.

```python
import requests
from lxml import etree
import time
import random
from fake_useragent import UserAgent

class GetProxyIP(object):
    def __init__(self):
        self.url = 'https://www.kuaidaili.com/free/inha/{}/'  # alternative: 'https://www.xicidaili.com/nn/'
        self.proxies = {
            'http': 'http://163.204.247.219:9999',
            'https': 'https://163.204.247.219:9999'}

    # Generate a random User-Agent
    def get_random_ua(self):
        ua = UserAgent()  # create a User-Agent object
        return ua.random

    # Pull candidate proxy IPs from the listing site
    def get_ip_file(self, url):
        headers = {'User-Agent': self.get_random_ua()}
        # Fetch the "domestic high-anonymity" proxy list page
        html = requests.get(url=url, headers=headers, timeout=5).content.decode('utf-8', 'ignore')
        parse_html = etree.HTML(html)
        # Base XPath: one tr node per listed proxy
        tr_list = parse_html.xpath('//*[@id="list"]/table/tbody/tr')
        for tr in tr_list:
            ip = tr.xpath('./td[1]/text()')[0]
            port = tr.xpath('./td[2]/text()')[0]
            # Check whether ip:port actually works
            self.test_proxy_ip(ip, port)

    # Test a crawled proxy IP and keep it if it responds
    def test_proxy_ip(self, ip, port):
        proxies = {
            'http': 'http://{}:{}'.format(ip, port),
            'https': 'https://{}:{}'.format(ip, port)}
        test_url = 'http://www.baidu.com/'
        try:
            res = requests.get(url=test_url, proxies=proxies, timeout=8)
            if res.status_code == 200:
                print(ip, ":", port, 'Success')
                with open('proxies.txt', 'a') as f:
                    f.write(ip + ':' + port + '\n')
        except Exception:
            print(ip, port, 'Failed')

    # Main entry point
    def main(self):
        for i in range(2000, 2050):
            url = self.url.format(i)
            self.get_ip_file(url)
            time.sleep(random.randint(5, 10))

spider = GetProxyIP()
spider.main()
```
```python
# Scratch version of the same steps, kept commented out for reference:
# import requests
# from lxml import etree
# url = 'https://www.kuaidaili.com/free/inha/{}/'
# headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36'}
# # Fetch the "domestic high-anonymity" list and locate the tbody node
# url = url.format(1)
# html = requests.get(url=url, headers=headers, timeout=5).content.decode('utf-8', 'ignore')
# parse_html = etree.HTML(html)
# # Base XPath: one tr node per listed proxy
# tr_list = parse_html.xpath('//*[@id="list"]/table/tbody/tr')
# print(type(tr_list))
# ip = tr_list[0].xpath('./td[1]/text()')[0]
# port = tr_list[0].xpath('./td[2]/text()')[0]
```

Testing whether a proxy IP works

```python
import requests

headers = {'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36"}  # request headers
proxies = {'http': 'http://10.10.10.10:8765', 'https': 'https://10.10.10.10:8765'}
url = 'http://www.baidu.com/'

def test_proxy():
    # A 200 through the proxy means the proxy is alive
    res = requests.get(url, headers=headers, proxies=proxies, timeout=5)
    if res.status_code == 200:
        print("OK")
        return True
    else:
        print(res.status_code)
        print("Error")
        return False
```

Taking an IP from the pool

Pick a random proxy IP from the file when writing a crawler, so that no single IP makes frequent requests and gets banned.

```python
import random
import requests

class BaiduSpider(object):
    def __init__(self):
        self.url = 'http://www.baidu.com/'
        self.headers = {'User-Agent': 'Mozilla/5.0'}
        self.flag = 1

    def get_proxies(self):
        with open('proxies.txt', 'r') as f:
            result = f.readlines()  # read every line into a list
        proxy_ip = random.choice(result)[:-1]  # pick one proxy at random, dropping the trailing newline
        L = proxy_ip.split(':')
        proxy_ip = {
            'http': 'http://{}:{}'.format(L[0], L[1]),
            'https': 'https://{}:{}'.format(L[0], L[1])
        }
        return proxy_ip

    def get_html(self):
        proxies = self.get_proxies()
        if self.flag <= 3:  # retry with a fresh proxy at most three times
            try:
                html = requests.get(url=self.url, proxies=proxies, headers=self.headers, timeout=5).text
                print(html)
            except Exception:
                print('Retry')
                self.flag += 1
                self.get_html()

spider = BaiduSpider()
spider.get_html()
```

Fetching paid proxy IPs

Write an interface that fetches proxies from a paid open-proxy API.

```python
# Fetch proxies from the open-proxy API
import requests
from fake_useragent import UserAgent

ua = UserAgent()  # create a User-Agent object
headers = {'User-Agent': ua.random}

def ip_test(ip):
    url = 'http://www.baidu.com/'
    ip_port = ip.split(':')
    proxies = {
        'http': 'http://{}:{}'.format(ip_port[0], ip_port[1]),
        'https': 'https://{}:{}'.format(ip_port[0], ip_port[1]),
    }
    try:
        res = requests.get(url=url, headers=headers, proxies=proxies, timeout=5)
        return res.status_code == 200
    except Exception:  # dead proxies raise instead of returning a status code
        return False

# Extract the proxy IPs
def get_ip_list():
    # Kuaidaili: https://www.kuaidaili.com/doc/product/dps/
    api_url = 'http://dev.kdlapi.com/api/getproxy/?orderid=946562662041898&num=100&protocol=1&method=2&an_an=1&an_ha=1&sep=2'
    html = requests.get(api_url).content.decode('utf-8', 'ignore')
    ip_port_list = html.split('\n')
    with open('proxy_ip.txt', 'a') as f:
        for ip in ip_port_list:
            if ip and ip_test(ip):  # skip empty trailing lines
                f.write(ip + '\n')

if __name__ == '__main__':
    get_ip_list()
```

Fetching private proxy IPs

The username and password are issued to you together with the API_URL; they are not your own account name and password. The format looks like this:

```python
proxies = {'protocol': 'protocol://username:password@IP:port'}

proxies = {'http': 'http://username:password@IP:port', 'https': 'https://username:password@IP:port'}

proxies = {'http': 'http://309435365:szayclhp@106.75.71.140:16816', 'https': 'https://309435365:szayclhp@106.75.71.140:16816'}
```

```python
# Fetch proxies from the private (authenticated) proxy API
import requests
from fake_useragent import UserAgent

ua = UserAgent()  # create a User-Agent object
headers = {'User-Agent': ua.random}

def ip_test(ip):
    url = 'https://blog.csdn.net/qq_34218078/article/details/90901602/'
    ip_port = ip.split(':')
    proxies = {
        'http': 'http://1786088386:b95djiha@{}:{}'.format(ip_port[0], ip_port[1]),
        'https': 'http://1786088386:b95djiha@{}:{}'.format(ip_port[0], ip_port[1]),
    }
    try:
        res = requests.get(url=url, headers=headers, proxies=proxies, timeout=5)
    except Exception:  # dead proxies raise instead of returning a status code
        return False
    if res.status_code == 200:
        print("OK")
        return True
    else:
        print(res.status_code)
        print("Error")
        return False

# Extract the proxy IPs
def get_ip_list():
    # Kuaidaili: https://www.kuaidaili.com/doc/product/dps/
    api_url = 'http://dps.kdlapi.com/api/getdps/?orderid=986603271748760&num=1000&signature=z4a5b2rpt062iejd6h7wvox16si0f7ct&pt=1&sep=2'
    html = requests.get(api_url).content.decode('utf-8', 'ignore')
    ip_port_list = html.split('\n')
    with open('proxy_ip.txt', 'a') as f:
        for ip in ip_port_list:
            if ip and ip_test(ip):  # skip empty trailing lines
                f.write(ip + '\n')

if __name__ == '__main__':
    get_ip_list()
```