Beautiful Soup 4.9.0 documentation – Crummy
Beautiful Soup is a
Python library for pulling data out of HTML and XML files. It works
with your favorite parser to provide idiomatic ways of navigating,
searching, and modifying the parse tree. It commonly saves programmers
hours or days of work.
These instructions illustrate all major features of Beautiful Soup 4,
with examples. I show you what the library is good for, how it works,
how to use it, how to make it do what you want, and what to do when it
violates your expectations.
This document covers Beautiful Soup version 4.9.3. The examples in
this documentation should work the same way in Python 2.7 and Python
3.8.
You might be looking for the documentation for Beautiful Soup 3.
If so, you should know that Beautiful Soup 3 is no longer being
developed and that support for it will be dropped on or after December
31, 2020. If you want to learn about the differences between Beautiful
Soup 3 and Beautiful Soup 4, see Porting code to BS4.
This documentation has been translated into other languages by
Beautiful Soup users:
Chinese: 这篇文档当然还有中文版.
Japanese: このページは日本語で利用できます(外部リンク)
Korean: 이 문서는 한국어 번역도 가능합니다.
Brazilian Portuguese: Este documento também está disponível em Português do Brasil.
Russian: Эта документация доступна на русском языке.
Getting help¶
If you have questions about Beautiful Soup, or run into problems,
send mail to the discussion group. If
your problem involves parsing an HTML document, be sure to mention
what the diagnose() function says about
that document.
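As a sketch of what that looks like (assuming Beautiful Soup 4 is installed), diagnose() takes the markup itself and prints a report showing which parsers are available and how each one handles the document:

```python
# The diagnose() helper prints a report on a problem document:
# which parsers are installed, and what tree each one builds.
from bs4.diagnose import diagnose

diagnose("<p>Is this <b>markup broken?</p>")
```

The report it prints is what the discussion group will want to see.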
Here’s an HTML document I’ll be using as an example throughout this
document. It’s part of a story from Alice in Wonderland:
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
Running the “three sisters” document through Beautiful Soup gives us a
BeautifulSoup object, which represents the document as a nested
data structure:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

print(soup.prettify())
# <html>
#  <head>
#   <title>
#    The Dormouse's story
#   </title>
#  </head>
#  <body>
#   <p class="title">
#    <b>
#     The Dormouse's story
#    </b>
#   </p>
#   <p class="story">
#    Once upon a time there were three little sisters; and their names were
#    <a class="sister" href="http://example.com/elsie" id="link1">
#     Elsie
#    </a>
#    ,
#    <a class="sister" href="http://example.com/lacie" id="link2">
#     Lacie
#    </a>
#    and
#    <a class="sister" href="http://example.com/tillie" id="link3">
#     Tillie
#    </a>
#    ; and they lived at the bottom of a well.
#   </p>
#   <p class="story">
#    ...
#   </p>
#  </body>
# </html>
Here are some simple ways to navigate that data structure:

soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
One common task is extracting all the URLs found within a page's <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Another common task is extracting all the text from a page:

print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...
Does this look like what you need? If so, read on.
If you’re using a recent version of Debian or Ubuntu Linux, you can
install Beautiful Soup with the system package manager:
$ apt-get install python-bs4 (for Python 2)
$ apt-get install python3-bs4 (for Python 3)
Beautiful Soup 4 is published through PyPi, so if you can’t install it
with the system packager, you can install it with easy_install or
pip. The package name is beautifulsoup4, and the same package
works on Python 2 and Python 3. Make sure you use the right version of
pip or easy_install for your Python version (these may be named
pip3 and easy_install3 respectively if you’re using Python 3).
$ easy_install beautifulsoup4
$ pip install beautifulsoup4
(The BeautifulSoup package is not what you want. That's
the previous major release, Beautiful Soup 3. Lots of software uses
BS3, so it's still available, but if you're writing new code you
should install beautifulsoup4.)
If you don’t have easy_install or pip installed, you can
download the Beautiful Soup 4 source tarball and
install it with
$ python setup.py install
If all else fails, the license for Beautiful Soup allows you to
package the entire library with your application. You can download the
tarball, copy its bs4 directory into your application’s codebase,
and use Beautiful Soup without installing it at all.
I use Python 2.7 and Python 3.8 to develop Beautiful Soup, but it
should work with other recent versions.
Problems after installation¶
Beautiful Soup is packaged as Python 2 code. When you install it for
use with Python 3, it’s automatically converted to Python 3 code. If
you don’t install the package, the code won’t be converted. There have
also been reports on Windows machines of the wrong version being
installed.
If you get the ImportError “No module named HTMLParser”, your
problem is that you’re running the Python 2 version of the code under
Python 3.
If you get the ImportError "No module named html.parser", your
problem is that you're running the Python 3 version of the code under
Python 2.
In both cases, your best bet is to completely remove the Beautiful
Soup installation from your system (including any directory created
when you unzipped the tarball) and try the installation again.
If you get the SyntaxError "Invalid syntax" on the line
ROOT_TAG_NAME = u'[document]', you need to convert the Python 2
code to Python 3. You can do this either by installing the package:

$ python3 setup.py install
or by manually running Python’s 2to3 conversion script on the
bs4 directory:
$ 2to3-3.2 -w bs4
Installing a parser¶
Beautiful Soup supports the HTML parser included in Python’s standard
library, but it also supports a number of third-party Python parsers.
One is the lxml parser. Depending on your setup,
you might install lxml with one of these commands:
$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml
Another alternative is the pure-Python html5lib parser, which parses HTML the way a
web browser does. Depending on your setup, you might install html5lib
with one of these commands:
$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib
This table summarizes the advantages and disadvantages of each parser library:

Python's html.parser
  Typical usage: BeautifulSoup(markup, "html.parser")
  Advantages: Batteries included; decent speed; lenient (as of Python 2.7.3 and 3.2)
  Disadvantages: Not as fast as lxml, less lenient than html5lib

lxml's HTML parser
  Typical usage: BeautifulSoup(markup, "lxml")
  Advantages: Very fast; lenient
  Disadvantages: External C dependency

lxml's XML parser
  Typical usage: BeautifulSoup(markup, "lxml-xml") or BeautifulSoup(markup, "xml")
  Advantages: Very fast; the only currently supported XML parser
  Disadvantages: External C dependency

html5lib
  Typical usage: BeautifulSoup(markup, "html5lib")
  Advantages: Extremely lenient; parses pages the same way a web browser does; creates valid HTML5
  Disadvantages: Very slow; external Python dependency
If you can, I recommend you install and use lxml for speed. If you're
using a version of Python 2 earlier than 2.7.3, or a version of
Python 3 earlier than 3.2.2, it's essential that you install lxml or
html5lib. Python's built-in HTML parser is just not very good in
those old versions.
Note that if a document is invalid, different parsers will generate
different Beautiful Soup trees for it. See Differences
between parsers for details.
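As a minimal sketch of those differences (only the stdlib parser is exercised here; the lxml and html5lib results are shown as comments and assume those libraries are installed):

```python
from bs4 import BeautifulSoup

# Invalid markup: an <a> tag "closed" by a stray </p> tag.
broken = "<a></p>"

# Python's built-in parser simply ignores the stray </p>.
print(BeautifulSoup(broken, "html.parser"))
# <a></a>

# Other parsers, if installed, build different trees from the same input:
#   BeautifulSoup(broken, "lxml")     -> <html><body><a></a></body></html>
#   BeautifulSoup(broken, "html5lib") -> <html><head></head><body><a><p></p></a></body></html>
```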
To parse a document, pass it into the BeautifulSoup
constructor. You can pass in a string or an open filehandle:

from bs4 import BeautifulSoup

with open("index.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

soup = BeautifulSoup("<html>a web page</html>", 'html.parser')

First, the document is converted to Unicode, and HTML entities are
converted to Unicode characters:

print(BeautifulSoup("Sacr&eacute; bleu!", "html.parser"))
# Sacré bleu!
Beautiful Soup then parses the document using the best available
parser. It will use an HTML parser unless you specifically tell it to
use an XML parser. (See Parsing XML.)
Beautiful Soup transforms a complex HTML document into a complex tree
of Python objects. But you’ll only ever have to deal with about four
kinds of objects: Tag, NavigableString, BeautifulSoup,
and Comment.
Tag¶
A Tag object corresponds to an XML or HTML tag in the original document:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>
Tags have a lot of attributes and methods, and I’ll cover most of them
in Navigating the tree and Searching the tree. For now, the most
important features of a tag are its name and attributes.
Name¶
Every tag has a name, accessible as .name:

tag.name
# 'b'

If you change a tag's name, the change will be reflected in any HTML
markup generated by Beautiful Soup:

tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
Attributes¶
A tag may have any number of attributes. The tag <b id="boldest"> has
an attribute "id" whose value is "boldest". You can access a tag's
attributes by treating the tag like a dictionary:

tag = BeautifulSoup('<b id="boldest">bold</b>', 'html.parser').b
tag['id']
# 'boldest'

You can access that dictionary directly as .attrs:

tag.attrs
# {'id': 'boldest'}

You can add, remove, and modify a tag's attributes. Again, this is
done by treating the tag as a dictionary:

tag['id'] = 'verybold'
tag['another-attribute'] = 1
tag
# <b another-attribute="1" id="verybold">bold</b>

del tag['id']
del tag['another-attribute']
tag
# <b>bold</b>

tag['id']
# KeyError: 'id'
tag.get('id')
# None
Multi-valued attributes¶
HTML 4 defines a few attributes that can have multiple values. HTML 5
removes a couple of them, but defines a few more. The most common
multi-valued attribute is class (that is, a tag can have more than
one CSS class). Others include rel, rev, accept-charset,
headers, and accesskey. Beautiful Soup presents the value(s)
of a multi-valued attribute as a list:
css_soup = BeautifulSoup('<p class="body"></p>', 'html.parser')
css_soup.p['class']
# ['body']

css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.p['class']
# ['body', 'strikeout']
If an attribute looks like it has more than one value, but it’s not
a multi-valued attribute as defined by any version of the HTML
standard, Beautiful Soup will leave the attribute alone:
id_soup = BeautifulSoup('<p id="my id"></p>', 'html.parser')
id_soup.p['id']
# 'my id'
When you turn a tag back into a string, multiple attribute values are
consolidated:
rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>', 'html.parser')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>
You can disable this by passing multi_valued_attributes=None as a
keyword argument into the BeautifulSoup constructor:
no_list_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser', multi_valued_attributes=None)
no_list_soup.p['class']
# 'body strikeout'

You can use get_attribute_list to get a value that's always a
list, whether or not it's a multi-valued attribute:

id_soup.p.get_attribute_list('id')
# ["my id"]
If you parse a document as XML, there are no multi-valued attributes:
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# 'body strikeout'

Again, you can configure this using the multi_valued_attributes argument:

class_is_multi = { '*': 'class' }
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml', multi_valued_attributes=class_is_multi)
xml_soup.p['class']
# ['body', 'strikeout']

You probably won't need to do this, but if you do, use the defaults as
a guide. They implement the rules described in the HTML specification:

from bs4.builder import builder_registry
builder_registry.lookup('html').DEFAULT_CDATA_LIST_ATTRIBUTES
NavigableString¶
A string corresponds to a bit of text within a tag. Beautiful Soup
uses the NavigableString class to contain these bits of text:
tag.string
# 'Extremely bold'
type(tag.string)
# <class 'bs4.element.NavigableString'>
A NavigableString is just like a Python Unicode string, except
that it also supports some of the features described in Navigating
the tree and Searching the tree. You can convert a
NavigableString to a Unicode string with unicode() (in
Python 2) or str (in Python 3):
unicode_string = str(tag.string)
unicode_string
# 'Extremely bold'
type(unicode_string)
# <class 'str'>
You can’t edit a string in place, but you can replace one string with
another, using replace_with():
tag.string.replace_with("No longer bold")
tag
# <b class="boldest">No longer bold</b>
NavigableString supports most of the features described in
Navigating the tree and Searching the tree, but not all of
them. In particular, since a string can't contain anything (the way a
tag may contain a string or another tag), strings don't support the
.contents or .string attributes, or the find() method.
If you want to use a NavigableString outside of Beautiful Soup,
you should call unicode() on it to turn it into a normal Python
Unicode string. If you don’t, your string will carry around a
reference to the entire Beautiful Soup parse tree, even when you’re
done using Beautiful Soup. This is a big waste of memory.
BeautifulSoup¶
The BeautifulSoup object represents the parsed document as a
whole. For most purposes, you can treat it as a Tag
object. This means it supports most of the methods described in
Navigating the tree and Searching the tree.
You can also pass a BeautifulSoup object into one of the methods
defined in Modifying the tree, just as you would a Tag. This
lets you do things like combine two parsed documents:
doc = BeautifulSoup("<document><content/>INSERT FOOTER HERE</document>", "xml")
footer = BeautifulSoup("<footer>Here's the footer</footer>", "xml")
doc.find(text="INSERT FOOTER HERE").replace_with(footer)
# 'INSERT FOOTER HERE'
print(doc)
# <?xml version="1.0" encoding="utf-8"?>
# <document><content/><footer>Here's the footer</footer></document>
Since the BeautifulSoup object doesn't correspond to an actual
HTML or XML tag, it has no name and no attributes. But sometimes it's
useful to look at its .name, so it's been given the special .name
"[document]":

soup.name
# '[document]'
Here’s the “Three sisters” HTML document again:
html_doc = “””
I’ll use this as an example to show you how to move from one part of
a document to another.
Going down¶
Tags may contain strings and other tags. These elements are the tag’s
children. Beautiful Soup provides a lot of different attributes for
navigating and iterating over a tag’s children.
Note that Beautiful Soup strings don’t support any of these
attributes, because a string can’t have children.
Navigating using tag names¶
The simplest way to navigate the parse tree is to say the name of the
tag you want. If you want the <head> tag, just say soup.head:

soup.head
# <head><title>The Dormouse's story</title></head>

soup.title
# <title>The Dormouse's story</title>

You can use this trick again and again to zoom in on a certain part
of the parse tree. This code gets the first <b> tag beneath the <body> tag:

soup.body.b
# <b>The Dormouse's story</b>
Using a tag name as an attribute will give you only the first tag by that
name:

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

If you need to get all the <a> tags, or anything more complicated
than the first tag with a certain name, you'll need to use one of the
methods described in Searching the tree, such as find_all():

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

.contents and .children¶
A tag's children are available in a list called .contents:

head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>

head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# ['The Dormouse's story']

The BeautifulSoup object itself has children. In this case, the
<html> tag is the child of the BeautifulSoup object:

len(soup.contents)
# 1
soup.contents[0].name
# 'html'

A string does not have .contents, because it can't contain
anything:

text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'
Instead of getting them as a list, you can iterate over a tag's
children using the .children generator:

for child in title_tag.children:
    print(child)
# The Dormouse's story

.descendants¶
The .contents and .children attributes only consider a tag's
direct children. For instance, the <head> tag has a single direct
child: the <title> tag.

But the <title> tag itself has a child: the string "The Dormouse's
story". There's a sense in which that string is also a child of the
<head> tag. The .descendants attribute lets you iterate over all
of a tag's children, recursively: its direct children, the children of
its direct children, and so on:

for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story

The <head> tag has only one child, but it has two descendants: the
<title> tag and the <title> tag's child. The BeautifulSoup object
only has one direct child (the <html> tag), but it has a whole lot of
descendants:

len(list(soup.children))
# 1
len(list(soup.descendants))
# 26
.string¶

If a tag has only one child, and that child is a NavigableString,
the child is made available as .string:

title_tag.string
# 'The Dormouse's story'

If a tag's only child is another tag, and that tag has a .string,
then the parent tag is considered to have the same .string
as its child:

head_tag.contents
# [<title>The Dormouse's story</title>]
head_tag.string
# 'The Dormouse's story'

If a tag contains more than one thing, then it's not clear what
.string should refer to, so .string is defined to be
None:

print(soup.html.string)
# None

.strings and .stripped_strings¶
If there’s more than one thing inside a tag, you can still look at
just the strings. Use the. strings generator:
for string in soup.strings:
    print(repr(string))
# '\n'
# "The Dormouse's story"
# '\n'
# 'Once upon a time there were three little sisters; and their names were\n'
# 'Elsie'
# ',\n'
# 'Lacie'
# ' and\n'
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '...'
These strings tend to have a lot of extra whitespace, which you can
remove by using the. stripped_strings generator instead:
for string in soup.stripped_strings:
    print(repr(string))
# "The Dormouse's story"
# 'Once upon a time there were three little sisters; and their names were'
# 'Elsie'
# ','
# 'Lacie'
# 'and'
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
Here, strings consisting entirely of whitespace are ignored, and
whitespace at the beginning and end of strings is removed.
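One common use of .stripped_strings is collapsing a document into a single line of text; a small sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p> One </p><p>  Two\n</p>", "html.parser")

# Each text node is yielded with surrounding whitespace stripped,
# and whitespace-only nodes are skipped entirely.
print(" ".join(soup.stripped_strings))
# One Two
```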
Going up¶
Continuing the “family tree” analogy, every tag and every string has a
parent: the tag that contains it.
You can access an element's parent with the .parent attribute. In
the example "three sisters" document, the <head> tag is the parent
of the <title> tag:

title_tag = soup.title
title_tag
# <title>The Dormouse's story</title>
title_tag.parent
# <head><title>The Dormouse's story</title></head>

The title string itself has a parent: the <title> tag that contains
it:

title_tag.string.parent
# <title>The Dormouse's story</title>

The parent of a top-level tag like <html> is the BeautifulSoup object
itself:

html_tag = soup.html
type(html_tag.parent)
# <class 'bs4.BeautifulSoup'>
And the .parent of a BeautifulSoup object is defined as None:

print(soup.parent)
# None

.parents¶

You can iterate over all of an element's parents with
.parents. This example uses .parents to travel from an <a> tag
buried deep within the document, to the very top of the document:

link = soup.a
link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
for parent in link.parents:
    print(parent.name)
# p
# body
# html
# [document]
Going sideways¶
Consider a simple document like this:

sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>", 'html.parser')
print(sibling_soup.prettify())
# <a>
#  <b>
#   text1
#  </b>
#  <c>
#   text2
#  </c>
# </a>

The <b> tag and the <c> tag are at the same level: they're both direct
children of the same tag. We call them siblings. When a document is
pretty-printed, siblings show up at the same indentation level. You
can also use this relationship in the code you write.

.next_sibling and .previous_sibling¶

You can use .next_sibling and .previous_sibling to navigate
between page elements that are on the same level of the parse tree:

sibling_soup.b.next_sibling
# <c>text2</c>

sibling_soup.c.previous_sibling
# <b>text1</b>

The <b> tag has a .next_sibling, but no .previous_sibling,
because there's nothing before the <b> tag on the same level of the
tree. For the same reason, the <c> tag has a .previous_sibling
but no .next_sibling:

print(sibling_soup.b.previous_sibling)
# None
print(sibling_soup.c.next_sibling)
# None

The strings "text1" and "text2" are not siblings, because they don't
have the same parent:

sibling_soup.b.string
# 'text1'

print(sibling_soup.b.string.next_sibling)
# None
In real documents, the .next_sibling or .previous_sibling of a
tag will usually be a string containing whitespace. Going back to the
"three sisters" document:

# <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
# <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
# <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>

You might think that the .next_sibling of the first <a> tag would
be the second <a> tag. But actually, it's a string: the comma and
newline that separate the first <a> tag from the second:

link = soup.a
link.next_sibling
# ',\n '

The second <a> tag is actually the .next_sibling of the comma:

link.next_sibling.next_sibling
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>

.next_siblings and .previous_siblings¶
You can iterate over a tag's siblings with .next_siblings or
.previous_siblings:

for sibling in soup.a.next_siblings:
    print(repr(sibling))
# ',\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ' and\n'
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
# '; and they lived at the bottom of a well.'
# '\n'

for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
# ' and\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ',\n'
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
# 'Once upon a time there were three little sisters; and their names were\n'
Going back and forth¶
Take a look at the beginning of the "three sisters" document:

# <html><head><title>The Dormouse's story</title></head>
# <p class="title"><b>The Dormouse's story</b></p>

An HTML parser takes this string of characters and turns it into a
series of events: "open an <html> tag", "open a <head> tag", "open a
<title> tag", and so on. Beautiful Soup offers tools for reconstructing the
initial parse of the document.

.next_element and .previous_element¶

The .next_element attribute of a string or tag points to whatever
was parsed immediately afterwards. It might be the same as
.next_sibling, but it's usually drastically different.

Here's the final <a> tag in the "three sisters" document. Its
.next_sibling is a string: the conclusion of the sentence that was
interrupted by the start of the <a> tag:

last_a_tag = soup.find("a", id="link3")
last_a_tag
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

last_a_tag.next_sibling
# ';\nand they lived at the bottom of a well.'
But the .next_element of that <a> tag, the thing that was parsed
immediately after the <a> tag, is not the rest of that sentence:
it's the word "Tillie":

last_a_tag.next_element
# 'Tillie'

That's because in the original markup, the word "Tillie" appeared
before that semicolon. The parser encountered an <a> tag, then the
word "Tillie", then the closing </a> tag, then the semicolon and rest of
the sentence. The semicolon is on the same level as the <a> tag, but the
word "Tillie" was encountered first.

The .previous_element attribute is the exact opposite of
.next_element. It points to whatever element was parsed
immediately before this one:

last_a_tag.previous_element
# ' and\n'
last_a_tag.previous_element.next_element
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

.next_elements and .previous_elements¶
You should get the idea by now. You can use these iterators to move
forward or backward in the document as it was parsed:
for element in last_a_tag.next_elements:
    print(repr(element))
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '\n'
# <p class="story">...</p>
# '...'
# '\n'
Beautiful Soup defines a lot of methods for searching the parse tree,
but they’re all very similar. I’m going to spend a lot of time explaining
the two most popular methods: find() and find_all(). The other
methods take almost exactly the same arguments, so I’ll just cover
them briefly.
Once again, I’ll be using the “three sisters” document as an example:
By passing in a filter to an argument like find_all(), you can
zoom in on the parts of the document you’re interested in.
Kinds of filters¶
Before talking in detail about find_all() and similar methods, I
want to show examples of different filters you can pass into these
methods. These filters show up again and again, throughout the
search API. You can use them to filter based on a tag’s name,
on its attributes, on the text of a string, or on some combination of
these.
A string¶
The simplest filter is a string. Pass a string to a search method and
Beautiful Soup will perform a match against that exact string. This
code finds all the <b> tags in the document:

soup.find_all('b')
# [<b>The Dormouse's story</b>]
If you pass in a byte string, Beautiful Soup will assume the string is
encoded as UTF-8. You can avoid this by passing in a Unicode string instead.
A regular expression¶
If you pass in a regular expression object, Beautiful Soup will filter
against that regular expression using its search() method. This code
finds all the tags whose names start with the letter "b"; in this
case, the <body> tag and the <b> tag:

import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

This code finds all the tags whose names contain the letter 't':

for tag in soup.find_all(re.compile("t")):
    print(tag.name)
# html
# title
A list¶
If you pass in a list, Beautiful Soup will allow a string match
against any item in that list. This code finds all the <a> tags
and all the <b> tags:

soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
True¶
The value True matches everything it can. This code finds all
the tags in the document, but none of the text strings:
for tag in soup.find_all(True):
    print(tag.name)
# html
# head
# title
# body
# p
# b
# p
# a
# a
# a
# p
A function¶
If none of the other matches work for you, define a function that
takes an element as its only argument. The function should return
True if the argument matches, and False otherwise.
Here’s a function that returns True if a tag defines the “class”
attribute but doesn’t define the “id” attribute:
def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')
Pass this function into find_all() and you'll pick up all the <p>
tags:

soup.find_all(has_class_but_no_id)
# [<p class="title"><b>The Dormouse's story</b></p>,
#  <p class="story">Once upon a time there were…bottom of a well.</p>,
#  <p class="story">...</p>]

This function only picks up the <p> tags. It doesn't pick up the <a>
tags, because those tags define both "class" and "id".

soup.find_all("title")
# [<title>The Dormouse's story</title>]

soup.find_all("p", "title")
# [<p class="title"><b>The Dormouse's story</b></p>]

soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find_all(id="link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

soup.find(string=re.compile("sisters"))
# 'Once upon a time there were three little sisters; and their names were\n'
Some of these should look familiar, but others are new. What does it
mean to pass in a value for string, or id? Why does
find_all("p", "title") find a <p> tag with the CSS class "title"?
Let’s look at the arguments to find_all().
The name argument¶
Pass in a value for name and you’ll tell Beautiful Soup to only
consider tags with certain names. Text strings will be ignored, as
will tags whose names don't match.
This is the simplest usage:

soup.find_all("title")
# [<title>The Dormouse's story</title>]
Recall from Kinds of filters that the value to name can be a
string, a regular expression, a list, a function, or the value
True.
The keyword arguments¶
Any argument that’s not recognized will be turned into a filter on one
of a tag’s attributes. If you pass in a value for an argument called id,
Beautiful Soup will filter against each tag’s ‘id’ attribute:
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
If you pass in a value for href, Beautiful Soup will filter
against each tag’s ‘href’ attribute:
soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
You can filter an attribute based on a string, a regular
expression, a list, a function, or the value True.
This code finds all tags whose id attribute has a value,
regardless of what the value is:
soup.find_all(id=True)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
You can filter multiple attributes at once by passing in more than one
keyword argument:
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
Some attributes, like the data-* attributes in HTML 5, have names that
can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>', 'html.parser')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression
You can use these attributes in searches by putting them into a
dictionary and passing the dictionary into find_all() as the
attrs argument:
data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]
You can't use a keyword argument to search for HTML's 'name' element,
because Beautiful Soup uses the name argument to contain the name
of the tag itself. Instead, you can give a value to 'name' in the
attrs argument:

name_soup = BeautifulSoup('<input name="email"/>', 'html.parser')
name_soup.find_all(name="email")
# []
name_soup.find_all(attrs={"name": "email"})
# [<input name="email"/>]
Searching by CSS class¶
It’s very useful to search for a tag that has a certain CSS class, but
the name of the CSS attribute, “class”, is a reserved word in
Python. Using class as a keyword argument will give you a syntax
error. As of Beautiful Soup 4.1.2, you can search by CSS class using
the keyword argument class_:

soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

As with any keyword argument, you can pass class_ a string, a regular
expression, a function, or True:

soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
Remember that a single tag can have multiple
values for its “class” attribute. When you search for a tag that
matches a certain CSS class, you’re matching against any of its CSS
classes:
css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]

css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]

You can also search for the exact string value of the class attribute:

css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

But searching for variants of the string value won't work:

css_soup.find_all("p", class_="strikeout body")
# []

If you want to search for tags that match two or more CSS classes, you
should use a CSS selector:

css_soup.select("p.strikeout.body")
# [<p class="body strikeout"></p>]

In older versions of Beautiful Soup, which don't have the class_
shortcut, you can use the attrs trick mentioned above. Create a
dictionary whose value for "class" is the string (or regular
expression, or whatever) you want to search for:

soup.find_all("a", attrs={"class": "sister"})
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
The string argument¶
With string you can search for strings instead of tags. As with
name and the keyword arguments, you can pass in a string, a
regular expression, a list, a function, or the value True.
Here are some examples:
soup.find_all(string="Elsie")
# ['Elsie']

soup.find_all(string=["Tillie", "Elsie", "Lacie"])
# ['Elsie', 'Lacie', 'Tillie']

soup.find_all(string=re.compile("Dormouse"))
# ["The Dormouse's story", "The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
    """Return True if this string is the only child of its parent tag."""
    return (s == s.parent.string)

soup.find_all(string=is_the_only_string_within_a_tag)
# ["The Dormouse's story", "The Dormouse's story", 'Elsie', 'Lacie', 'Tillie', '...']
Although string is for finding strings, you can combine it with
arguments that find tags: Beautiful Soup will find all tags whose
.string matches your value for string. This code finds the <a>
tags whose .string is "Elsie":

soup.find_all("a", string="Elsie")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

The string argument is new in Beautiful Soup 4.4.0. In earlier
versions it was called text:

soup.find_all("a", text="Elsie")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
The limit argument¶
find_all() returns all the tags and strings that match your
filters. This can take a while if the document is large. If you don’t
need all the results, you can pass in a number for limit. This
works just like the LIMIT keyword in SQL. It tells Beautiful Soup to
stop gathering results after it’s found a certain number.
There are three links in the “three sisters” document, but this code
only finds the first two:
soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
The recursive argument¶
If you call mytag.find_all(), Beautiful Soup will examine all the
descendants of mytag: its children, its children's children, and
so on. If you only want Beautiful Soup to consider direct children,
you can pass in recursive=False.
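A minimal sketch of the difference: <title> is a descendant of <html>, but not a direct child of it (its parent is <head>):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<html><head><title>T</title></head></html>", "html.parser")

# By default, find_all() searches all descendants...
print(soup.html.find_all("title"))
# [<title>T</title>]

# ...but with recursive=False it checks only direct children,
# and <title> is a grandchild of <html>, not a child.
print(soup.html.find_all("title", recursive=False))
# []
```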
XML Parsing – BeautifulSoup
Although XML parsing can be done using a class that extends the
xml.sax.ContentHandler class, this requires some understanding
of classes and callback functions. This has been discussed in
the Classes lesson, and
we will not go further into that method here. Instead, we will
discuss an alternative method using the BeautifulSoup class from
the bs4 module.
BeautifulSoup
The BeautifulSoup class was actually created to parse HTML files.
However, the way that it parses HTML files involves coming up
with a complex tree consisting of Python objects. This type of
tree structure is applicable to XML files as well.
Therefore, the BeautifulSoup class can also be used to parse
XML files directly.
The installation of BeautifulSoup has already been discussed at the
end of the lesson on Setting up for
Python programming. So, this lesson assumes that you already
have BeautifulSoup’s bs4 module installed.
Documentation for using BeautifulSoup4 can be found
here.
Going through the Quick Start is highly recommended. The
rest of the documentation goes through a number of practical
examples, so that is worth looking through when you are trying
to figure out how to parse an HTML or XML document.
BeautifulSoup is a DOM-based tool
The xml.sax module is based on SAX parsing. That means that
the parser makes a single sequential pass through the file to parse
the XML file. None of the tags or contents between the tags are
saved by the parser. This lends itself to very fast parsing because
the parser keeps nothing in memory and just makes one pass through
the file.
By contrast, the BeautifulSoup class constructs a DOM (Document
Object Model) object. That means, the entire contents of the
XML file is stored in memory. This is a slower form of parsing
but allows making changes to the contents of the XML file. In
addition, BeautifulSoup uses mainly two kinds of objects to perform
XML parsing, so it is much easier to learn than SAX parsing with
the xml.sax module.
BeautifulSoup’s main objects: BeautifulSoup and tag
To do XML parsing with BeautifulSoup, there are only two main
objects that you need to be concerned with: BeautifulSoup
and tag. The BeautifulSoup object is the object
that holds the entire contents of the XML file in a tree-like
form. The tag object stores an HTML or XML tag. The
tag object has a number of attributes and methods that
make manipulating the XML file relatively easy.
Simple example of using BeautifulSoup for parsing XML
The best way to begin learning about how BeautifulSoup works
is to use a simple example. Here is an XML file, "books.xml",
that we can use to demonstrate BeautifulSoup:

<?xml version="1.0"?>
<books>
  <book>
    <title>The Cat in the Hat</title>
    <author>Dr. Seuss</author>
    <price>6.99</price>
  </book>
  <book>
    <title>Ender's Game</title>
    <author>Orson Scott Card</author>
    <price>8.99</price>
  </book>
  <book>
    <title>Prey</title>
    <author>Michael Crichton</author>
    <price>9.35</price>
  </book>
</books>
Here is a Python program, "books.py", that uses BeautifulSoup
to extract some information from "books.xml":

from bs4 import BeautifulSoup
infile = open("books.xml", "r")
contents = infile.read()
soup = BeautifulSoup(contents, 'xml')
titles = soup.find_all('title')
for title in titles:
    print(title.get_text())
Line 1 imports the BeautifulSoup class from the
bs4 module. Line 2 opens "books.xml" for reading and
stores the file handle as infile. Line 3 reads the
entire contents of infile and stores this as a single
string called contents.
Line 4 constructs a BeautifulSoup object from contents
and stores this as soup. Note the second argument
to the BeautifulSoup constructor is ‘xml’. That will cause
the BeautifulSoup object to be treated as an XML object instead
of an HTML object.
Line 5 uses the find_all() function of the BeautifulSoup
class to return a list of all the <title> elements
in soup. Lines 6 and 7 define a for loop to iterate over
the list of <title> elements. The get_text()
function is called on each <title> element in the list.
BeautifulSoup refers to an XML element as a tag. So,
the get_text() function is considered a function of
the tag object. The get_text() function is
used to obtain the contents of the XML element. In this
case, that would be the string that is the title of the book.
The output produced when this program is run is shown next:
$ python3 books.py
The Cat in the Hat
Ender’s Game
Prey
Let's modify "books.py" so that it makes use of the
<author> and <price> elements as well:

authors = soup.find_all('author')
prices = soup.find_all('price')
for i in range(0, len(titles)):
    print(titles[i].get_text(), "by", end=' ')
    print(authors[i].get_text(), end=' ')
    print("costs $" + prices[i].get_text())
Note that the for loop is an index-based for loop. This is
because we are processing more than one list at a time. The
output produced by running this program would be:
The Cat in the Hat by Dr. Seuss costs $6.99
Ender's Game by Orson Scott Card costs $8.99
Prey by Michael Crichton costs $9.35
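As an aside, the index-based pairing above can also be written with Python's zip(), which walks several sequences in step. A sketch, using the same book data as plain lists in place of the find_all() results:

```python
# Sketch: pairing parallel lists with zip() instead of an index-based loop.
# These lists stand in for the results of the three find_all() calls.
titles = ["The Cat in the Hat", "Ender's Game", "Prey"]
authors = ["Dr. Seuss", "Orson Scott Card", "Michael Crichton"]
prices = ["6.99", "8.99", "9.35"]

lines = []
for title, author, price in zip(titles, authors, prices):
    lines.append(f"{title} by {author} costs ${price}")

for line in lines:
    print(line)
```

zip() stops at the shortest list, which also guards against an IndexError if one find_all() returned fewer matches than the others.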
There are many more attributes that can be used with
BeautifulSoup tags (elements). A number of them
help to navigate the document tree that is
created when the BeautifulSoup object is constructed.
Although they are beyond the scope of this
course, attributes such as children, descendants,
and parent are worth studying if your XML parsing
needs are more complex.
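A minimal sketch of those navigation attributes, on a tiny inline document (the library/book tag names are illustrative; the built-in html.parser is used here so nothing beyond bs4 is required):

```python
# Sketch of the parent, children, and descendants navigation attributes.
from bs4 import BeautifulSoup

doc = "<library><book><title>Prey</title></book></library>"
soup = BeautifulSoup(doc, "html.parser")

book = soup.find("book")
print(book.parent.name)                 # library -- the enclosing tag
print([c.name for c in book.children])  # ['title'] -- direct children only
# descendants also yields text nodes, whose .name is None, so filter them:
print([d.name for d in book.descendants if d.name])  # ['title']
```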
How to Parse XML Files Using Python’s BeautifulSoup – Linux …
Data is literally everywhere, in all kinds of documents. But not all of it is useful, hence the need to parse documents to get the parts that are needed. XML documents are one such kind of document that holds data. They are very similar to HTML files, as they have almost the same kind of structure. Hence, you'll need to parse them to get vital information, just as you would when working with HTML.
There are two major aspects to parsing XML files. They are:
Finding Tags
Extracting from Tags
You’ll need to find the tag that holds the information you want, then extract that information. You’ll learn how to do both when working with XML files before the end of this article.
BeautifulSoup is one of the most used libraries when it comes to web scraping with Python. Since XML files are similar to HTML files, it is also capable of parsing them. To parse XML files using BeautifulSoup though, it’s best that you make use of Python’s lxml parser.
You can install both libraries using the pip installation tool, through the command below:

pip install beautifulsoup4 lxml
To confirm that both libraries are successfully installed, you can activate the interactive shell and try importing both. If no error pops up, then you are ready to go with the rest of the article.
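Besides trying the imports in the interactive shell, a sketch of checking from a script using the standard library's importlib (the helper name is my own, not part of bs4 or lxml):

```python
# Sketch: check whether the required libraries are importable from a script.
import importlib.util

def is_installed(module_name):
    """Return True if the named top-level module can be imported."""
    return importlib.util.find_spec(module_name) is not None

for module in ("bs4", "lxml"):
    print(module, "is installed" if is_installed(module) else "is NOT installed")
```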
Here’s an example:
$ python
Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20)
[MSC v.1916 64 bit (AMD64)] on win32
Type “help”, “copyright”, “credits” or “license” for more information.
>>> import bs4
>>> import lxml
>>>
Before moving on, you should create an XML file from the code snippet below. It's quite simple, and should suit the use cases you'll learn about in the rest of the article. Simply copy it, paste it into your editor, and save it; a name like "sample.xml" should suffice.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<root>
    The Tree
    <children>
        <child name="Jack">First</child>
        <child name="Rose">Second</child>
        <child name="Blue Ivy">
            Third
            <grandchildren>
                <data>One</data>
                <data>Two</data>
                <unique>Twins</unique>
            </grandchildren>
        </child>
        <child name="Jane">Fourth</child>
    </children>
</root>
Now, in your Python script, you'll need to read the XML file like a normal file, then pass it into BeautifulSoup. The remainder of this article will make use of the bs_content variable, so it's important that you take this step.
# Import BeautifulSoup
from bs4 import BeautifulSoup as bs

content = []
# Read the XML file
with open("sample.xml", "r") as file:
    # Read each line in the file; readlines() returns a list of lines
    content = file.readlines()
    # Combine the lines in the list into a single string
    content = "".join(content)

bs_content = bs(content, "lxml")
The code sample above imports BeautifulSoup, then it reads the XML file like a regular file. After that, it passes the content into the imported BeautifulSoup library as well as the parser of choice.
You’ll notice that the code doesn’t import lxml. It doesn’t have to as BeautifulSoup will choose the lxml parser as a result of passing “lxml” into the object.
Now, you can proceed with the rest of the article.
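Incidentally, the readlines()/join() pair above can be collapsed into a single read() call. A self-contained sketch (a temporary file stands in for sample.xml, and html.parser is used here only to keep the sketch free of the lxml dependency; the lxml choice from the article works the same way):

```python
# Sketch: reading the whole file with one read() call instead of
# readlines() followed by join(). A temp file stands in for sample.xml.
import os
import tempfile
from bs4 import BeautifulSoup as bs

xml = "<root><child name='Jack'>First</child></root>"

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml)
    path = f.name

with open(path, "r") as file:
    content = file.read()   # one call replaces readlines() + "".join()

bs_content = bs(content, "html.parser")
print(bs_content.find("child").get_text())  # First
os.remove(path)
```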
One of the most important stages of parsing XML files is searching for tags. There are various ways to go about this when using BeautifulSoup; so you need to know about a handful of them to have the best tools for the appropriate situation.
You can find tags in XML documents by:
Names
Relationships
Finding Tags By Names
There are two BeautifulSoup methods you can use when finding tags by names. However, the use cases differ; let’s take a look at them.
find
From personal experience, you'll use the find method more often than the other methods for finding tags in this article. The find method receives the name of the tag you want to get, and returns a BeautifulSoup object of the tag if it finds one; else, it returns None.
>>> result = bs_content.find("data")
>>> print(result)
<data>One</data>
>>> result = bs_content.find("unique")
>>> print(result)
<unique>Twins</unique>
>>> result = bs_content.find("father")
>>> print(result)
None
>>> result = bs_content.find("mother")
>>> print(result)
None
If you take a look at the example, you’ll see that the find method returns a tag if it matches the name, else it returns None. However, if you take a closer look at it, you’ll see it only returns a single tag.
For example, when find(“data”) was called, it only returned the first data tag, but didn’t return the other ones.
GOTCHA: The find method will only return the first tag that matches its query.
So how do you get to find other tags too? That leads us to the next method.
find_all
The find_all method is quite similar to the find method. The only difference is that it returns a list of tags that match its query. When it doesn’t find any tag, it simply returns an empty list. Hence, find_all will always return a list.
>>> result = bs_content.find_all("data")
>>> print(result)
[<data>One</data>, <data>Two</data>]
>>> result = bs_content.find_all("child")
>>> print(result)
[<child name="Jack">First</child>, <child name="Rose">Second</child>,
<child name="Blue Ivy">
    Third
    <grandchildren>
        <data>One</data>
        <data>Two</data>
        <unique>Twins</unique>
    </grandchildren>
</child>, <child name="Jane">Fourth</child>]
>>> result = bs_content.find_all("father")
>>> print(result)
[]
>>> result = bs_content.find_all("mother")
>>> print(result)
[]
Now that you know how to use the find and find_all methods, you can search for tags anywhere in the XML document. However, you can make your searches more powerful.
Here’s how:
Some tags may have the same name, but different attributes. For example, the child tags have a name attribute and different values. You can make specific searches based on those.
Have a look at this:
>>> result = bs_content.find("child", {"name": "Rose"})
>>> print(result)
<child name="Rose">Second</child>
>>> result = bs_content.find_all("child", {"name": "Rose"})
>>> print(result)
[<child name="Rose">Second</child>]
>>> result = bs_content.find("child", {"name": "Jack"})
>>> print(result)
<child name="Jack">First</child>
>>> result = bs_content.find_all("child", {"name": "Jack"})
>>> print(result)
[<child name="Jack">First</child>]
You’ll see that there is something different about the use of the find and find_all methods here: they both have a second parameter.
When you pass in a dictionary as a second parameter, the find and find_all methods further their search to get tags that have attributes and values that fit the provided key:value pair.
For example, despite using the find method in the first example, it returned the second child tag (instead of the first child tag), because that's the first tag that matches the query. The find_all method follows the same principle, except that it returns all the tags that match the query, not just the first.
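There is a practical reason the dictionary form matters for an attribute literally called "name": the first parameter of find() and find_all() is itself named "name" and selects the tag name, so a name="Rose" keyword would not filter on the attribute. A sketch (the item/id tags in the second half are illustrative):

```python
# Sketch: why the dictionary form is needed for an attribute called "name".
# find()/find_all() use their first parameter (itself called "name") for
# the TAG name, so it cannot double as an attribute filter.
from bs4 import BeautifulSoup

doc = '<root><child name="Jack">First</child><child name="Rose">Second</child></root>'
soup = BeautifulSoup(doc, "html.parser")

by_dict = soup.find("child", {"name": "Rose"})
print(by_dict.get_text())  # Second

# For attribute names that don't clash, a plain keyword argument works too:
doc2 = '<item id="a">x</item><item id="b">y</item>'
soup2 = BeautifulSoup(doc2, "html.parser")
print(soup2.find("item", id="b").get_text())  # y
```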
Finding Tags By Relationships
While less popular than searching by tag names, you can also search for tags by relationships. In the real sense though, it’s more of navigating than searching.
There are three key relationships in XML documents:
Parent: The tag in which the reference tag exists.
Children: The tags that exist in the reference tag.
Siblings: The tags that exist on the same level as the reference tag.
From the explanation above, you may infer that the reference tag is the most important factor in searching for tags by relationships. Hence, let’s look for the reference tag, and continue the article.
Take a look at this:
>>> third_child = bs_content.find("child", {"name": "Blue Ivy"})
>>> print(third_child)
<child name="Blue Ivy">
    Third
    <grandchildren>
        <data>One</data>
        <data>Two</data>
        <unique>Twins</unique>
    </grandchildren>
</child>
From the code sample above, the reference tag for the rest of this section will be the third child tag, stored in a third_child variable. In the subsections below, you’ll see how to search for tags based on their parent, sibling, and children relationship with the reference tag.
Finding Parents
To find the parent tag of a reference tag, you’ll make use of the parent attribute. Doing this returns the parent tag, as well as the tags under it. This behaviour is quite understandable, since the children tags are part of the parent tag.
>>> result = third_child.parent
>>> print(result)
<children>
    <child name="Jack">First</child>
    <child name="Rose">Second</child>
    <child name="Blue Ivy">
        Third
        <grandchildren>
            <data>One</data>
            <data>Two</data>
            <unique>Twins</unique>
        </grandchildren>
    </child>
    <child name="Jane">Fourth</child>
</children>
Finding Children
To find the children tags of a reference tag, you’ll make use of the children attribute. Doing this returns the children tags, as well as the sub-tags under each one of them. This behaviour is also understandable, as the children tags often have their own children tags too.
One thing you should note is that the children attribute returns the children tags as a generator. So if you need a list of the children tags, you’ll have to convert the generator to a list.
>>> result = list(third_child.children)
>>> print(result)
['\n Third\n ', <grandchildren>
<data>One</data>
<data>Two</data>
<unique>Twins</unique>
</grandchildren>, '\n']
If you take a closer look at the example above, you’ll notice that some values in the list are not tags. That’s something you need to watch out for.
GOTCHA: The children attribute doesn’t only return the children tags, it also returns the text in the reference tag.
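A common way around that gotcha is to keep only the entries that have a tag name; the text (NavigableString) entries report None for their .name attribute. A sketch on a small inline fragment (the parent tag name is illustrative):

```python
# Sketch: keeping only real tags from .children by skipping text entries,
# whose .name attribute is None.
from bs4 import BeautifulSoup

doc = """<parent>
  some text
  <data>One</data>
  <data>Two</data>
</parent>"""
soup = BeautifulSoup(doc, "html.parser")

parent = soup.find("parent")
tags_only = [c for c in parent.children if c.name is not None]
print([t.get_text() for t in tags_only])  # ['One', 'Two']
```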
Finding Siblings
The last in this section is finding tags that are siblings to the reference tag. For every reference tag, there may be sibling tags before and after it. The previous_siblings attribute will return the sibling tags before the reference tag, and the next_siblings attribute will return the sibling tags after it.
Just like the children attribute, the previous_siblings and next_siblings attributes will return generators. So you need to convert to a list if you need a list of siblings.
>>> previous_siblings = list(third_child.previous_siblings)
>>> print(previous_siblings)
['\n', <child name="Rose">Second</child>, '\n', <child name="Jack">First</child>, '\n']
>>> next_siblings = list(third_child.next_siblings)
>>> print(next_siblings)
['\n', <child name="Jane">Fourth</child>, '\n']
>>> print(previous_siblings + next_siblings)
['\n', <child name="Rose">Second</child>, '\n', <child name="Jack">First</child>, '\n',
'\n', <child name="Jane">Fourth</child>, '\n']
The first example shows the previous siblings (note that previous_siblings yields them nearest-first), the second shows the next siblings; both results are then combined to generate a list of all the siblings of the reference tag.
When parsing XML documents, a lot of the work lies in finding the right tags. However, when you find them, you may also want to extract certain information from those tags, and that’s what this section will teach you.
You’ll see how to extract the following:
Tag Attribute Values
Tag Text
Tag Content
Extracting Tag Attribute Values
Sometimes, you may have a reason to extract the values of attributes in a tag. In the following attribute-value pairing, for example: name="Rose", you may want to extract "Rose".
To do this, you can make use of the get method, or access the attribute's name using [] like an index, just as you would when working with a dictionary.
>>> result = third_child.get("name")
>>> print(result)
Blue Ivy
>>> result = third_child["name"]
>>> print(result)
Blue Ivy
Extracting Tag Text
When you want to access the text values of a tag, you can use the text or strings attribute. Both will return the text in a tag, and even the children tags. However, the text attribute will return them as a single string, concatenated; while the strings attribute will return them as a generator which you can convert to a list.
>>> third_child.text
'\n Third\n \nOne\nTwo\nTwins\n\n'
>>> result = list(third_child.strings)
>>> print(result)
['\n Third\n ', '\n', 'One', '\n', 'Two', '\n', 'Twins', '\n', '\n']
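BeautifulSoup also offers a stripped_strings attribute, which behaves like strings but skips whitespace-only entries and trims the rest. A sketch on a small inline fragment shaped like the Blue Ivy tag:

```python
# Sketch: stripped_strings is like strings, but it skips whitespace-only
# entries and strips leading/trailing whitespace from the remaining text.
from bs4 import BeautifulSoup

doc = """<child name="Blue Ivy">
  Third
  <data>One</data>
  <data>Two</data>
</child>"""
soup = BeautifulSoup(doc, "html.parser")

child = soup.find("child")
print(list(child.stripped_strings))  # ['Third', 'One', 'Two']
```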
Extracting Tag Content
Aside from extracting attribute values and tag text, you can also extract all of a tag's content. To do this, you can use the contents attribute; it is quite similar to the children attribute and will yield the same results. However, while the children attribute returns a generator, the contents attribute returns a list.
>>> result = third_child.contents
>>> print(result)
['\n Third\n ', <grandchildren>
<data>One</data>
<data>Two</data>
<unique>Twins</unique>
</grandchildren>, '\n']
Printing Beautifully
So far, you've seen some important methods and attributes that are useful when parsing XML documents using BeautifulSoup. But as you may have noticed, when you print the tags to the screen, the output has a somewhat cluttered look. While appearance may not have a direct impact on your productivity, nicer output can help you parse more effectively and make the work less tedious.
You have already seen examples of printing the normal way; the output of print(third_child) earlier in this article is one of them.
However, you can improve the appearance by using the prettify method. Simply call the prettify method on the tag while printing, and you'll get something visually pleasing:

>>> print(third_child.prettify())
<child name="Blue Ivy">
 Third
 <grandchildren>
  <data>
   One
  </data>
  <data>
   Two
  </data>
  <unique>
   Twins
  </unique>
 </grandchildren>
</child>
Conclusion
Parsing documents is an important aspect of sourcing data. XML documents are pretty popular, and hopefully you are now better equipped to take them on and extract the data you want.
From this article, you are now able to:
search for tags either by names, or relationships
extract data from tags
If you feel quite lost, and are pretty new to the BeautifulSoup library, you can check out the BeautifulSoup tutorial for beginners.
About the author
I love building software and am very proficient with Python and JavaScript. I'm very comfortable with the Linux terminal and interested in machine learning. In my spare time, I write prose, poetry and tech articles.