| Parser | Usage | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Python standard library | BeautifulSoup(markup, "html.parser") | Built into Python, decent speed, tolerant of bad markup | Less tolerant in versions before Python 2.7.3 / 3.2.2 |
| lxml HTML parser | BeautifulSoup(markup, "lxml") | Very fast, tolerant of bad markup | Requires a C library |
| lxml XML parser | BeautifulSoup(markup, "xml") | Very fast, the only supported XML parser | Requires a C library |
| html5lib | BeautifulSoup(markup, "html5lib") | Most tolerant, parses pages the way a browser does, produces valid HTML5 | Very slow, depends on an external Python package |
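To make the differences in the table concrete, here is a small sketch of my own (loosely based on the examples in the Beautiful Soup documentation) that feeds the same invalid fragment to each parser. It assumes the packages from the Installation section below, plus html5lib, are installed; the exact output can vary slightly between parser versions.

```python
from bs4 import BeautifulSoup

broken = "<a></p>"  # invalid markup: the <a> tag is never closed

# lxml drops the stray </p>, closes <a>, and wraps everything in <html><body>
print(BeautifulSoup(broken, "lxml"))         # e.g. <html><body><a></a></body></html>

# html5lib parses like a browser: it keeps an empty <p> inside <a> and adds <head>
print(BeautifulSoup(broken, "html5lib"))     # e.g. <html><head></head><body><a><p></p></a></body></html>

# the standard library parser fixes the tag but adds no wrapper document
print(BeautifulSoup(broken, "html.parser"))  # e.g. <a></a>
```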
## Installation

```bash
pip install BeautifulSoup4
pip install lxml
```
## Basic Usage
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
# 'lxml' is the parser library; as the table above shows, it is the most commonly used one
print(soup.prettify())
# prettify() re-indents and pretty-prints the parsed document
print(soup.title.string)
# prints the text of the <title> tag
```
Result: the neatly indented document is printed first, followed by the title text, The Dormouse's story.
## Selecting Elements
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.title)
print(type(soup.title))
print(soup.head)
print(soup.p)
# only the first <p> tag is returned; even when there are several, you get just one
```
## Getting the Tag Name
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.title.name)
```
Result: title
## Getting Attributes
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.attrs['name'])
print(soup.p['name'])
```
Result: both lines print dromouse, the value of the name attribute.
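One detail worth knowing (my own aside, not from the original text): for multi-valued attributes such as class, Beautiful Soup returns a list instead of a string. A minimal sketch with a made-up snippet:

```python
from bs4 import BeautifulSoup

snippet = '<p class="title story" name="dromouse">text</p>'  # hypothetical markup for illustration
soup = BeautifulSoup(snippet, 'lxml')
print(soup.p['class'])  # ['title', 'story'] -- class is multi-valued, so a list comes back
print(soup.p['name'])   # dromouse           -- ordinary attributes stay plain strings
print(soup.p.attrs)     # the whole attribute dict
```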
## Getting Content
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.string)
```
The result is The Dormouse's story.
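.string only works when a tag has exactly one child string; as soon as the tag contains several children it returns None. A small sketch of mine showing the difference from get_text():

```python
from bs4 import BeautifulSoup

snippet = '<p><b>Bold</b> and plain text</p>'  # hypothetical markup: the <p> has several children
soup = BeautifulSoup(snippet, 'lxml')
print(soup.p.string)      # None -- more than one child, so .string gives up
print(soup.p.get_text())  # 'Bold and plain text' -- get_text() joins all the text pieces
```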
## Nested Selection
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.head.title.string)
```
The result is the same as above.
## Children and Descendants
### contents
Returns the direct child nodes as a list.
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.contents)
```
The result is a list of the <p> tag's direct children, text nodes and tags alike.
### children
Returns the children as an iterator rather than a list, so you iterate over it with a for loop; enumerate() is used below only to number the items.
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    print(i, child)
```
enumerate() simply numbers each child as it is printed; the output lists every direct child of the <p> tag in order.
### descendants
Outputs all descendants: children, grandchildren, and so on.
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):  # descendants here, not children
    print(i, child)
```
Note that soup.p.descendants is used here. Unlike children, it also yields nested nodes, such as the <span> inside the first <a> tag.
## Parents and Ancestors
### parent
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.a.parent)
```
The parent of the first <a> tag is the <p> tag, so the whole <p> element is printed: every node inside it (the same nodes you get when printing p's children), but this time wrapped in the <p> tag itself.
### parents
parents returns a generator, so here it is wrapped in list() and enumerate() to print every ancestor at once.
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(list(enumerate(soup.a.parents)))
```
- item 0 is the <p> tag
- item 1 is the <body> tag
- item 2 is the <html> tag
- item 3 is the whole document (its content looks the same as the <html> item)
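If you only care about the ancestors' names, a list comprehension (my own illustration) makes the chain above easier to read:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')                 # reuses the html string defined above
print([parent.name for parent in soup.a.parents])  # ['p', 'body', 'html', '[document]']
```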
## Siblings
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(list(enumerate(soup.a.next_siblings)))
print('----------')
print(list(enumerate(soup.a.previous_siblings)))
```
Output: the a tag here is the first <a> tag, so next_siblings yields every node after it inside the <p>, and previous_siblings every node before it.
![20170916150553288756709.png](http://7.feilongs.com/20170916150553288756709.png)
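There are also singular counterparts, next_sibling and previous_sibling, which return just the adjacent node; in pretty-printed HTML that is often a whitespace text node, so the find_next_sibling() method covered later is handy for skipping straight to the next tag. A small sketch of mine reusing the html string above:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(repr(soup.a.next_sibling))      # usually the whitespace text node right after the first <a>
print(soup.a.find_next_sibling('a'))  # the next <a> tag (id="link2"), skipping text nodes
```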
## Standard Selectors
> **find_all(name, attrs, recursive, text, kwargs)**
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('ul'))
print(type(soup.find_all('ul')[0]))
```
![20170916150553384088714.png](http://7.feilongs.com/20170916150553384088714.png)
> This outputs the two ul tags; each item in the list returned by find_all() is a Tag object.
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.find_all('ul'):
    print(ul.find_all('li'))
print(soup.find_all('li'))
```
The loop prints the li tags of each ul separately. With `print(soup.find_all('li'))`, all of the li tags are printed together in a single list.
![20170916150553565296810.png](http://7.feilongs.com/20170916150553565296810.png)
### The attrs argument
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))
The first ul carries both attributes: the query above matches its id="list-1", and the one below matches its name="elements".
print(soup.find_all(attrs={'name': 'elements'}))
```
The result is as follows:
![20170916150553613184901.png](http://7.feilongs.com/20170916150553613184901.png)
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))
print(soup.find_all(class_='element'))  # note the trailing underscore in class_
```
Because class is a Python keyword, it cannot be used directly as a keyword argument; write class_ with a trailing underscore instead. It is treated exactly the same as the class attribute.
The result is as follows:
![20170916150553645414050.png](http://7.feilongs.com/20170916150553645414050.png)
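Keyword filters such as id and class_ can also be combined with a tag name; a short sketch of mine reusing the html string above:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('li', class_='element'))  # tag name and class filter combined
print(soup.find_all('ul', id='list-1'))       # same idea with an id filter
```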
### The text argument
Selects by text content. It is mainly useful for checking whether certain text appears in the document; it is less useful for locating elements, because it returns the matching strings rather than the tags that contain them.
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text='Foo'))
```
The returned result is ['Foo', 'Foo'].
![20170916150553668590154.png](http://7.feilongs.com/20170916150553668590154.png)
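The text argument also accepts a compiled regular expression, which is usually more flexible than an exact string; a small sketch of mine reusing the html string above:

```python
import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text=re.compile('^F')))  # ['Foo', 'Foo'] -- every string starting with F
```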
## find(name, attrs, recursive, text, kwargs)
find takes the same arguments as find_all.
**find_all returns all matching elements; find returns only the first match.**
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find('ul'))
print(type(soup.find('ul')))
print(soup.find('page'))  # there is no <page> tag, so this prints None
```
The result is as follows:
![20170916150553700226461.png](http://7.feilongs.com/20170916150553700226461.png)
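Because find() returns None when nothing matches, it is safer to check the result before chaining attributes on it (my own note, not from the original):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')  # reuses the html string from the example above
page = soup.find('page')            # no <page> tag exists, so this is None
if page is not None:
    print(page.get_text())
else:
    print('no <page> tag found')    # chaining on None would raise AttributeError
```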
## Other Methods
These are used in the same way as find() and find_all(); they just search in different directions.
#### find_parents() and find_parent()
find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.
#### find_next_siblings() and find_next_sibling()
find_next_siblings() returns all following sibling nodes; find_next_sibling() returns the first following sibling.
#### find_previous_siblings() and find_previous_sibling()
find_previous_siblings() returns all preceding sibling nodes; find_previous_sibling() returns the first preceding sibling.
#### find_all_next() and find_next()
find_all_next() returns all matching nodes after the current node; find_next() returns the first matching node after it.
#### find_all_previous() and find_previous()
find_all_previous() returns all matching nodes before the current node; find_previous() returns the first matching node before it.
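A short sketch of mine showing a few of these methods in action, assuming the panel html string from the find_all examples above is still defined:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
li = soup.find('li')                      # the first <li class="element">Foo</li>
print(li.find_parent('ul')['id'])         # list-1 -- the <ul> that contains it
print(li.find_next_sibling('li').string)  # Bar    -- the sibling right after it
print(li.find_all_next('li')[-1].string)  # Bar    -- the very last <li> in the document
```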
## CSS Selectors
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.select('.panel .panel-heading'))
```
This selects elements with class panel-heading inside elements with class panel; the two class selectors are separated by a space.
```python
print(soup.select('ul li'))
```
This selects all li tags inside ul tags, again separated by a space.
```python
print(soup.select('#list-2 .element'))
```
This selects elements with class element inside the element whose id is list-2.
The result is as follows:
![20170916150553830083229.png](http://7.feilongs.com/20170916150553830083229.png)
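Just as find() is the single-result version of find_all(), select_one() returns only the first element matching a CSS selector (a small aside of mine, not covered in the original text):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')          # the panel document from above
print(soup.select_one('#list-2 .element'))  # only the first matching <li>, not a list
```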
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul.select('li'))
```
![2017091615055384813927.png](http://7.feilongs.com/2017091615055384813927.png)
### Getting Attributes
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <!-- the first ul -->
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <!-- the second ul -->
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])
```
> The two forms above are equivalent; you can index the tag directly, without going through attrs. The result is as follows:
![2017091615055386618335.png](http://7.feilongs.com/2017091615055386618335.png)
### Getting Content: get_text()
.string and .text work for this as well.
```python
html = '''
<div class="panel">
    <div class="panel-heading"><h4>Hello</h4></div>
    <div class="panel-body">
        <ul id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for li in soup.select('li'):
    print(li.get_text())  # get_text() returns the text inside the tag
```
The result is Foo, Bar, Jay, Foo and Bar, each printed on its own line.
## Summary
- Prefer the lxml parser; fall back to html.parser when necessary (for example, when the output comes out garbled).
- Tag selection (soup.title, soup.p, ...) is fast but offers only weak filtering.
- Use find() and find_all() to match a single result or multiple results.
- If you are comfortable with CSS selectors, use select().
- Remember the common ways of getting attribute values and text.