| Parser | Usage | Advantages | Disadvantages |
| ---- | ---- | ---- | ---- |
| Python standard library | `BeautifulSoup(markup, "html.parser")` | Built into Python; moderate speed; tolerant of malformed markup | Poor error tolerance in versions before Python 2.7.3 / 3.2.2 |
| lxml HTML parser | `BeautifulSoup(markup, "lxml")` | Very fast; tolerant of malformed markup | Requires the lxml C library to be installed |
| lxml XML parser | `BeautifulSoup(markup, "xml")` | Very fast; the only parser that supports XML | Requires the lxml C library to be installed |
| html5lib | `BeautifulSoup(markup, "html5lib")` | Best error tolerance; parses pages the way a browser does; produces valid HTML5 | Very slow; depends on an external Python package |

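
As a quick illustration (my sketch, not from the original post) of how the parser choice affects the result, here is the same broken fragment fed to three parsers; the exact output can vary with the installed versions:

```python
from bs4 import BeautifulSoup

broken = "<a></p>"  # deliberately invalid markup

print(BeautifulSoup(broken, "html.parser"))  # typically just <a></a>
print(BeautifulSoup(broken, "lxml"))         # typically wrapped as <html><body><a></a></body></html>
print(BeautifulSoup(broken, "html5lib"))     # typically adds <head></head> as well, like a browser would
```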

Installation

```
pip install beautifulsoup4
pip install lxml
```

Basic Usage

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')  # lxml is the parser; as the table above shows, it is the most commonly used one
print(soup.prettify())              # prettify() re-indents and formats the markup
print(soup.title.string)            # prints the text of the <title> tag
```

Result:

![20170913150531317891310.png](http://7.feilongs.com/20170913150531317891310.png)

Selecting Elements

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.title)
print(type(soup.title))
print(soup.head)
print(soup.p)  # only the first <p> tag is returned; even if there are several, this style gives just one
```

![20170913150531427965979.png](http://7.feilongs.com/20170913150531427965979.png)

Getting the Tag Name

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.title.name)
```

Result:

![2017091315053143599691.png](http://7.feilongs.com/2017091315053143599691.png)

Getting Attributes

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.p.attrs['name'])
print(soup.p['name'])
```

Result: the value of the `name` attribute is `dromouse`.

![2017091315053146482158.png](http://7.feilongs.com/2017091315053146482158.png)
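
One detail worth noting (my addition, not in the original post): single-valued attributes such as `name` come back as plain strings, while multi-valued attributes such as `class` come back as lists.

```python
print(soup.p.attrs['name'])  # 'dromouse', a plain string
print(soup.p['class'])       # ['title']; class is multi-valued, so BeautifulSoup returns a list
```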

Getting the Text Content

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.p.string)
```

The result is as follows:

![20170913150531479022073.png](http://7.feilongs.com/20170913150531479022073.png)
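
A caveat worth adding (my note, not from the original post): `.string` only returns text when the tag has exactly one child; if a tag contains several children, `.string` is `None` and `.get_text()` is the safer choice.

```python
print(soup.p.string)         # "The Dormouse's story"; this <p> has a single <b> child
print(soup.body.string)      # None, because <body> has several children and .string cannot pick one
print(soup.body.get_text())  # concatenates all of the text inside <body> instead
```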

Nested Selection

```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.head.title.string)
```

The result is the same as above.

![20170913150531493143317.png](http://7.feilongs.com/20170913150531493143317.png)

Children and Descendants

contents

`.contents` returns the direct child nodes as a list.

  1. html = """
  2. <html>
  3. <head>
  4. <title>The Dormouse's story</title>
  5. </head>
  6. <body>
  7. <p class="story">
  8. Once upon a time there were three little sisters; and their names were
  9. <a href="http://example.com/elsie" class="sister" id="link1">
  10. <span>Elsie</span>
  11. </a>
  12. <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  13. and
  14. <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
  15. and they lived at the bottom of a well.
  16. </p>
  17. <p class="story">...</p>
  18. """
  19. from bs4 import BeautifulSoup
  20. soup = BeautifulSoup(html, 'lxml')
  21. print(soup.p.contents)

The result is a list:

![20170916150552959888631.png](http://7.feilongs.com/20170916150552959888631.png)

children

`.children` returns the child nodes as an iterator, so it has to be consumed with a loop; `enumerate` is used below simply to number the children.

  1. html = """
  2. <html>
  3. <head>
  4. <title>The Dormouse's story</title>
  5. </head>
  6. <body>
  7. <p class="story">
  8. Once upon a time there were three little sisters; and their names were
  9. <a href="http://example.com/elsie" class="sister" id="link1">
  10. <span>Elsie</span>
  11. </a>
  12. <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  13. and
  14. <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
  15. and they lived at the bottom of a well.
  16. </p>
  17. <p class="story">...</p>
  18. """
  19. from bs4 import BeautifulSoup
  20. soup = BeautifulSoup(html, 'lxml')
  21. print(soup.p.children)
  22. for i, child in enumerate(soup.p.children):
  23. print(i, child)

`enumerate` is used here only to number the items. The result is as follows:

![20170916150553077396959.png](http://7.feilongs.com/20170916150553077396959.png)
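
Equivalently (my note), the iterator can be materialised with `list()`, which yields the same thing as `.contents`:

```python
print(list(soup.p.children) == soup.p.contents)  # True: .children is just an iterator over .contents
```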

descendants

`.descendants` yields every descendant node: children, their children, and so on.

  1. html = """
  2. <html>
  3. <head>
  4. <title>The Dormouse's story</title>
  5. </head>
  6. <body>
  7. <p class="story">
  8. Once upon a time there were three little sisters; and their names were
  9. <a href="http://example.com/elsie" class="sister" id="link1">
  10. <span>Elsie</span>
  11. </a>
  12. <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  13. and
  14. <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
  15. and they lived at the bottom of a well.
  16. </p>
  17. <p class="story">...</p>
  18. """
  19. from bs4 import BeautifulSoup
  20. soup = BeautifulSoup(html, 'lxml')
  21. print(soup.p.children)
  22. for i, child in enumerate(soup.p.descendants): #这里不是chidren
  23. print(i, child)

The result is as follows. Note that this uses `soup.p.descendants`, not `.children`.

![20170916150553094122499.png](http://7.feilongs.com/20170916150553094122499.png)
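
Note (my addition) that `.descendants` yields text nodes as well as tags, which is why the output above includes plain strings. To keep only the tags, filter on the node type:

```python
from bs4.element import Tag

tags_only = [node for node in soup.p.descendants if isinstance(node, Tag)]
print(tags_only)  # only the <a> and <span> tags, without the text nodes
```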

Parents and Ancestors

parent

  1. html = """
  2. <html>
  3. <head>
  4. <title>The Dormouse's story</title>
  5. </head>
  6. <body>
  7. <p class="story">
  8. Once upon a time there were three little sisters; and their names were
  9. <a href="http://example.com/elsie" class="sister" id="link1">
  10. <span>Elsie</span>
  11. </a> # 以这个的父节点,
  12. <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  13. and
  14. <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
  15. and they lived at the bottom of a well.
  16. </p>
  17. <p class="story">...</p>
  18. """
  19. from bs4 import BeautifulSoup
  20. soup = BeautifulSoup(html, 'lxml')
  21. print(soup.a.parent)

The parent of the (first) `<a>` tag is the `<p>` tag, so the whole `<p>` element is printed, including everything inside it, i.e. the `<a>` tag itself and all of its siblings. The content is the same as printing the children of `<p>`, except that the enclosing `<p>` tag itself is shown as well.

![20170916150553147433035.png](http://7.feilongs.com/20170916150553147433035.png)

parents

`.parents` returns a generator, so below it is wrapped with `enumerate` and `list()` in order to print it.

  1. html = """
  2. <html>
  3. <head>
  4. <title>The Dormouse's story</title>
  5. </head>
  6. <body>
  7. <p class="story">
  8. Once upon a time there were three little sisters; and their names were
  9. <a href="http://example.com/elsie" class="sister" id="link1">
  10. <span>Elsie</span>
  11. </a>
  12. <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  13. and
  14. <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
  15. and they lived at the bottom of a well.
  16. </p>
  17. <p class="story">...</p>
  18. """
  19. from bs4 import BeautifulSoup
  20. soup = BeautifulSoup(html, 'lxml')
  21. print(list(enumerate(soup.a.parents)))
The output enumerates four ancestors:

  1. the `<p>` tag
  2. the `<body>` tag
  3. the `<html>` tag
  4. the whole document (which prints the same content as the `<html>` tag)
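
If only the tag names of the ancestors are needed, a compact variant (my sketch, not from the original post) is:

```python
# Print just the name of each ancestor of the first <a> tag.
for i, parent in enumerate(soup.a.parents):
    print(i, parent.name)
# Expected output: 0 p, 1 body, 2 html, 3 [document]
```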

Sibling Nodes

```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(list(enumerate(soup.a.next_siblings)))      # siblings after the first <a> tag
print("----------------------------------------")
print(list(enumerate(soup.a.previous_siblings)))  # siblings before the first <a> tag
```
Output (the `<a>` here is the first `<a>` tag):
![20170916150553288756709.png](http://7.feilongs.com/20170916150553288756709.png)

## Standard Selectors (find_all)

> `find_all(name, attrs, recursive, text, **kwargs)`

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('ul'))           # a list of every <ul> tag
print(type(soup.find_all('ul')[0]))  # each element is a bs4.element.Tag
```

![20170916150553384088714.png](http://7.feilongs.com/20170916150553384088714.png)

> The output is the two `<ul>` tags.

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.find_all('ul'):
    print(ul.find_all('li'))  # the <li> tags of each <ul>, one list per <ul>

print(soup.find_all('li'))    # every <li> in the document in a single list
```

Looping over the `<ul>` tags keeps the two lists separate, one list of `<li>` tags per `<ul>`. Using `print(soup.find_all('li'))` instead puts every `<li>` of the document into one single list.
![20170916150553565296810.png](http://7.feilongs.com/20170916150553565296810.png)

### attrs

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))
```

The first `<ul>` carries two attributes: `id="list-1"`, used above, and `name="elements"`, used below.

```python
print(soup.find_all(attrs={'name': 'elements'}))
```

The result is as follows:
![20170916150553613184901.png](http://7.feilongs.com/20170916150553613184901.png)

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))
print(soup.find_all(class_='element'))
```

Because `class` is a reserved keyword in Python, it cannot be passed directly as a keyword argument; append an underscore and write `class_` instead. It is treated exactly like the HTML attribute `class`.

The result is as follows:
![20170916150553645414050.png](http://7.feilongs.com/20170916150553645414050.png)
## text: Selecting by Text Content

The `text` argument matches on the text content. It is mainly useful for checking whether certain text appears in the page; since it returns the matching strings rather than the tags, it is of limited use for locating elements.

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text='Foo'))  # the matching text strings, not the tags
```

The returned result:
![20170916150553668590154.png](http://7.feilongs.com/20170916150553668590154.png)

## `find(name, attrs, recursive, text, **kwargs)`

`find` takes exactly the same arguments as `find_all`.
**`find_all` returns all matching elements, while `find` returns only the first match (a single element).**

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.find('ul'))        # the first <ul> tag
print(type(soup.find('ul')))  # bs4.element.Tag
print(soup.find('page'))      # None, because there is no <page> tag
```

The result is as follows:
![20170916150553700226461.png](http://7.feilongs.com/20170916150553700226461.png)

## Other Methods

These work in the same way as the methods above; a short example follows the list.

#### find_parents() and find_parent()

find_parents() returns all ancestor nodes; find_parent() returns the direct parent.

#### find_next_siblings() and find_next_sibling()

find_next_siblings() returns all following sibling nodes; find_next_sibling() returns the first following sibling.

#### find_previous_siblings() and find_previous_sibling()

find_previous_siblings() returns all preceding sibling nodes; find_previous_sibling() returns the first preceding sibling.

#### find_all_next() and find_next()

find_all_next() returns all matching nodes after the current node; find_next() returns the first such node.

#### find_all_previous() and find_previous()

find_all_previous() returns all matching nodes before the current node; find_previous() returns the first such node.
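
Here is that example (my addition, not from the original post), exercising a few of these methods on the Dormouse document used earlier:

```python
from bs4 import BeautifulSoup

html = """
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
"""
soup = BeautifulSoup(html, 'lxml')

first_a = soup.find('a')
print(first_a.find_parent('p')['class'])     # ['story']: the nearest <p> ancestor
print(first_a.find_next_sibling('a')['id'])  # 'link2': the next <a> among the following siblings
print(first_a.find_all_next('a'))            # every <a> tag that appears after this one
```
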
## CSS Selectors

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.select('.panel .panel-heading'))
```

This selects elements with class `panel-heading` inside elements with class `panel`; the two class selectors are separated by a space.

```python
print(soup.select('ul li'))
```

This selects every `<li>` tag inside a `<ul>` tag, again with a space between the two selectors.

```python
print(soup.select('#list-2 .element'))
```

This selects elements with class `element` inside the element whose id is `list-2`.

The result is as follows:
![20170916150553830083229.png](http://7.feilongs.com/20170916150553830083229.png)
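
If only the first match is wanted, `select_one()` (my addition; available in recent BeautifulSoup 4 releases) returns a single element instead of a list:

```python
print(soup.select_one('.panel .panel-heading'))  # the first match only, or None if nothing matches
```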

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul.select('li'))  # select() can be called on a Tag as well as on the soup
```

![2017091615055384813927.png](http://7.feilongs.com/2017091615055384813927.png)

### Getting Attributes

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <!-- this is the first <ul> -->
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <!-- this is the second <ul> -->
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])
```

> The two forms above are equivalent; indexing the tag directly works without going through `attrs`. The result is as follows:

![2017091615055386618335.png](http://7.feilongs.com/2017091615055386618335.png)
### Getting Content: get_text()

`.string` and `.text` also work.

```python
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
for li in soup.select('li'):
    print(li.get_text())  # get_text() returns the text inside the tag
```

The result is as follows:
![20170916150553885917466.png](http://7.feilongs.com/20170916150553885917466.png)
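
For reference, a quick sketch (my addition) comparing the three ways of reading text from a tag:

```python
li = soup.select('li')[0]
print(li.get_text())  # 'Foo'
print(li.text)        # 'Foo'; .text is equivalent to get_text()
print(li.string)      # 'Foo'; works here because the <li> has a single text child
```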

Summary

- Prefer the lxml parser; fall back to html.parser when necessary (for example when the parsed markup comes out garbled).
- Tag-attribute selection (soup.title, soup.p, ...) is fast, but its filtering ability is weak.
- Use find() / find_all() to query for a single result or for multiple results.
- If you are comfortable with CSS selectors, use select().
- Remember the common ways of reading attribute values and text.