# 1. Assignment 1

- Target site: http://www.ujxsw.com
- Scraping requirements:
  1. Create an account and add a few novels to your bookshelf.
  2. Scrape the HTML of your bookshelf page (with the requests module).
  3. Confirm that the titles of the novels you favorited can be found in the scraped HTML.
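
All three methods below authenticate by reusing a Cookie value copied out of an already logged-in browser. As a side note, `requests.Session` can capture those cookies automatically by performing the login in code. This is only a sketch: the login URL and form field names are assumptions for illustration, not taken from the actual site.

```python
import requests

session = requests.Session()
# Hypothetical login endpoint and form field names -- inspect the site's real
# login form before relying on these; they are placeholders for illustration.
session.post('http://www.ujxsw.com/login.php',
             data={'username': 'User', 'password': '***'})
# Any cookies set during login now live in the session, so later requests
# to the bookshelf page are sent already authenticated.
response = session.get('http://www.ujxsw.com/modules/article/bookcase.php')
```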
- Method 1: send the Cookie copied from the logged-in browser directly in the request headers.

```python
import requests

url = 'http://www.ujxsw.com/modules/article/bookcase.php'
headers = {
    'Cookie': 'Hm_lvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641658035;PHPSESSID=sg0lft68h5d6et1h4tj0lf4uh4;username=User;Hm_lpvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641658267',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36'
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
    # Save the bookshelf page locally
    with open('myBook.html', 'w', encoding='utf-8') as file:
        file.write(response.text)
```
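
Requirement 3 asks to confirm the favorited titles can be found in the scraped HTML. A minimal check is a substring search over the saved file; the title below is a placeholder, not a real entry:

```python
# Requirement 3 check: does a favorited title appear in the saved page?
with open('myBook.html', 'r', encoding='utf-8') as file:
    page = file.read()
title = '某本收藏的小说'  # placeholder -- substitute a title actually on your shelf
print('found' if title in page else 'not found')
```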
- Method 2: use `requests.cookies.RequestsCookieJar()`.
  1. Generate a random User-Agent with the fake_useragent module.
  2. Load the browser cookies into a `RequestsCookieJar`.
  3. Read out the favorited works.

```python
import requests
from lxml import etree
from fake_useragent import UserAgent

cookies = 'Hm_lvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641658035; PHPSESSID=rl88btlfdmjudf7qatb976eqf0; username=User; _identity-frontend=06fad69e01035a15a691d7db06c16fe9e7a3a1d33ee402d187fde58d76889571a%3A2%3A%7Bi%3A0%3Bs%3A18%3A%22_identity-frontend%22%3Bi%3A1%3Bs%3A19%3A%22%5B108353%2C%22%22%2C2592000%5D%22%3B%7D; Hm_lpvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641703779'
userAgent = UserAgent().random
headers = {'User-Agent': userAgent}
cookiesJar = requests.cookies.RequestsCookieJar()
for cookie in cookies.split(';'):
    # strip() removes the space after each '; ' so cookie names stay clean
    key, value = cookie.strip().split('=', 1)
    cookiesJar.set(key, value)
response = requests.get('http://www.ujxsw.com/modules/article/bookcase.php',
                        headers=headers, cookies=cookiesJar)
try:
    if response.status_code == 200:
        # Save my bookshelf page
        with open('myBook.html', 'w', encoding='utf-8') as file:
            file.write(response.text)
        # Extract the favorited books
        html = etree.HTML(response.text)
        ulTree = html.xpath('//div[@class="rec_rullist"]/ul')
        print('您收藏的书籍有:')
        for ul in ulTree:
            bookInfo = ul.xpath('li/a/text()')
            print(f'【{bookInfo[0]}】,作者:{bookInfo[2]}')
    else:
        print(response.reason)
except Exception as error:
    print(error)
```
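
Splitting the cookie string by hand works, but the standard library's `http.cookies.SimpleCookie` can do the parsing, and requests accepts a plain dict for its `cookies` parameter. A small equivalent sketch (cookie string shortened here):

```python
from http.cookies import SimpleCookie

import requests

raw = 'PHPSESSID=rl88btlfdmjudf7qatb976eqf0; username=User'  # shortened example
parsed = SimpleCookie()
parsed.load(raw)  # handles the '; ' separators and '=' splitting for us
cookie_dict = {key: morsel.value for key, morsel in parsed.items()}
response = requests.get('http://www.ujxsw.com/modules/article/bookcase.php',
                        cookies=cookie_dict)
```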

- Method 3: scrape with the requests_html module.

```python
from requests_html import HTMLSession, UserAgent

session = HTMLSession()
headers = {
    'User-Agent': UserAgent().random,
    'Cookie': 'Hm_lvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641658035; PHPSESSID=rl88btlfdmjudf7qatb976eqf0; username=User; _identity-frontend=06fad69e01035a15a691d7db06c16fe9e7a3a1d33ee402d187fde58d76889571a%3A2%3A%7Bi%3A0%3Bs%3A18%3A%22_identity-frontend%22%3Bi%3A1%3Bs%3A19%3A%22%5B108353%2C%22%22%2C2592000%5D%22%3B%7D; Hm_lpvt_ffafa5ae2f1ca7e65cb521c271c680c5=1641709972'
}
url = 'http://www.ujxsw.com/modules/article/bookcase.php'
response = session.get(url, headers=headers)
if response.status_code == 200:
    with open('my_books.html', 'w', encoding='utf-8') as file:
        file.write(response.text)
    html = response.html
    allUl = html.xpath('//*[@class="rec_rullist"]/ul')
    print('您收藏的书籍有:')
    for ul in allUl:
        # find('a') returns all <a> elements in the <ul>: title first, author fourth
        bookTitle = ul.find('a')[0].text
        bookAuthor = ul.find('a')[3].text
        print(f'【{bookTitle}】,作者:{bookAuthor}')
```
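
requests_html also supports CSS selectors through `find()`, so the XPath step above can be written without lxml syntax. A small variant, assuming the same page structure:

```python
# Same extraction with a CSS selector instead of XPath
for ul in response.html.find('.rec_rullist > ul'):
    links = ul.find('a')  # all <a> elements in the <ul>; title first, author fourth
    print(f'【{links[0].text}】,作者:{links[3].text}')
```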

# 2. Assignment 2

- Target site: https://www.dmzj.com
- Scraping requirements:
  1. Find the URL of any image from a comic you like on the site.
  2. Download the image and save it locally.

I went a bit further here and scraped several of the images on the page.
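
For the base requirement (a single image), a much smaller script is enough. A minimal sketch using the example image URL noted in the full script below; the Referer header is included because plain requests came back 403:

```python
import requests

# Single-image version of the assignment
img_url = 'https://images.dmzj.com/img/webpic/6/1001685461594000781.jpg'
headers = {'Referer': 'https://www.dmzj.com/'}  # the site blocks hotlinking
response = requests.get(img_url, headers=headers)
if response.status_code == 200:
    with open('comic.jpg', 'wb') as file:
        file.write(response.content)
```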

```python
"""
Scrape images from https://www.dmzj.com/
"""
import requests
from lxml import etree
import os

# Image URL that loads normally:      'https://images.dmzj.com/img/webpic/6/1001685461594000781.jpg'
# Image URL as it appears in the page: '//images.dmzj.com/img/webpic/6/1001685461594000781.jpg'
# The two differ by the protocol prefix: https


class SaveImg:
    url = 'https://www.dmzj.com/'

    # Collect the image URLs on the page
    def getImageUrls(self):
        response = requests.get(self.url)
        try:
            if response.status_code == 200:
                html = etree.HTML(response.text)
                liImg = html.xpath('//ul[@class="update_con"]/li/a/img/@src')
                liTitle = html.xpath('//ul[@class="update_con"]/li/a/@title')
                for i in range(0, len(liTitle)):
                    # Prepend the protocol to the scheme-relative src
                    imgUrl = 'https:' + liImg[i]
                    imgTitle = liTitle[i]
                    self.saveImg(imgTitle, imgUrl)
            else:
                print(response.reason)
        except Exception as error:
            print(error)

    # Download and save one image
    def saveImg(self, imgTitle, imgUrl):
        if not os.path.exists('./images'):
            os.mkdir('./images')
        # Keep the original image's file extension
        ext = imgUrl.split('/')[-1].split('.')[-1]
        # The request returned 403 without these headers -- the site appears to
        # use Referer-based hotlink protection, so set the Referer explicitly
        headers = {
            'Referer': 'https://www.dmzj.com/',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36'
        }
        response = requests.get(imgUrl, headers=headers)
        if response.status_code == 200:
            with open(f'./images/{imgTitle}.{ext}', 'wb') as file:
                file.write(response.content)
            print(f'图片:{imgTitle}已下载完成')


obj = SaveImg()
obj.getImageUrls()
```
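
Prepending `'https:'` covers the scheme-relative `src` values seen here, but `urllib.parse.urljoin` from the standard library normalizes every relative form (`//host/...`, `/path`, `path`) against a base URL. A drop-in sketch:

```python
from urllib.parse import urljoin

base = 'https://www.dmzj.com/'
src = '//images.dmzj.com/img/webpic/6/1001685461594000781.jpg'
# urljoin adopts the base URL's scheme for a scheme-relative reference
print(urljoin(base, src))
# -> https://images.dmzj.com/img/webpic/6/1001685461594000781.jpg
```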