
Parsing XML and HTML with lxml

lxml provides a very simple and powerful API for parsing XML and HTML. It
supports one-step parsing as well as step-by-step parsing using an
event-driven API (currently only for XML).
Contents
Parsers
Parser options
Error log
Parsing HTML
Doctype information
The target parser interface
The feed parser interface
Incremental event parsing
Event types
Modifying the tree
Selective tag events
Comments and PIs
Events with custom targets
iterparse and iterwalk
iterwalk
Python unicode strings
Serialising to Unicode strings
The usual setup procedure:
>>> from lxml import etree
The following examples also use StringIO or BytesIO to show how to parse
from files and file-like objects. Both are available in the io module:
from io import StringIO, BytesIO
Parsers are represented by parser objects. There is support for parsing both
XML and (broken) HTML. Note that XHTML is best parsed as XML, parsing it with
the HTML parser can lead to unexpected results. Here is a simple example for
parsing XML from an in-memory string:
>>> xml = '<a xmlns="test"><b xmlns="test"/></a>'
>>> root = etree.fromstring(xml)
>>> etree.tostring(root)
b'<a xmlns="test"><b xmlns="test"/></a>'
To read from a file or file-like object, you can use the parse() function,
which returns an ElementTree object:
>>> tree = etree.parse(StringIO(xml))
>>> etree.tostring(tree.getroot())
b'<a xmlns="test"><b xmlns="test"/></a>'
Note how the parse() function reads from a file-like object here. If
parsing is done from a real file, it is more common (and also somewhat more
efficient) to pass a filename:
>>> tree = etree.parse("doc/test.xml")
lxml can parse from a local file, an HTTP URL or an FTP URL. It also
auto-detects and reads gzip-compressed XML files (.gz).
If you want to parse from memory and still provide a base URL for the document
(e.g. to support relative paths in an XInclude), you can pass the base_url
keyword argument:
>>> root = etree.fromstring(xml, base_url="http://where.it/is/from.xml")
The parsers accept a number of setup options as keyword arguments. The above
example is easily extended to clean up namespaces during parsing:
>>> parser = etree.XMLParser(ns_clean=True)
>>> tree = etree.parse(StringIO(xml), parser)
>>> etree.tostring(tree.getroot())
b'<a xmlns="test"><b/></a>'
The keyword arguments in the constructor are mainly based on the libxml2
parser configuration. A DTD will also be loaded if validation or attribute
default values are requested.
Available boolean keyword arguments:
attribute_defaults – read the DTD (if referenced by the document) and add
the default attributes from it
dtd_validation – validate while parsing (if a DTD was referenced)
load_dtd – load and parse the DTD while parsing (no validation is performed)
no_network – prevent network access when looking up external
documents (on by default)
ns_clean – try to clean up redundant namespace declarations
recover – try hard to parse through broken XML
remove_blank_text – discard blank text nodes between tags, also known as
ignorable whitespace. This is best used together with a DTD or schema
(which tells data and noise apart), otherwise a heuristic will be applied.
remove_comments – discard comments
remove_pis – discard processing instructions
strip_cdata – replace CDATA sections by normal text content (on by
default)
resolve_entities – replace entities by their text value (on by
default)
huge_tree – disable security restrictions and support very deep trees
and very long text content (only affects libxml2 2.7+)
compact – use compact storage for short text content (on by default)
collect_ids – collect XML IDs in a hash table while parsing (on by default).
Disabling this can substantially speed up parsing of documents with many
different IDs if the hash lookup is not used afterwards.
Other keyword arguments:
encoding – override the document encoding
target – a parser target object that will receive the parse events
(see The target parser interface)
schema – an XMLSchema to validate against (see validation)
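A couple of these options in combination, as a minimal sketch (the XML snippet here is made up for the illustration):

```python
from lxml import etree

# remove_blank_text drops ignorable whitespace between tags;
# remove_comments drops comment nodes while parsing.
parser = etree.XMLParser(remove_blank_text=True, remove_comments=True)
root = etree.XML("<root>\n  <child><!-- noise -->data</child>\n</root>", parser)
print(etree.tostring(root))
```
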
Parsers have an error_log property that lists the errors and
warnings of the last parser run:
>>> parser = etree.XMLParser()
>>> print(len(parser.error_log))
0
>>> tree = etree.XML("<root>\n</b>", parser)    # doctest: +ELLIPSIS
Traceback (most recent call last):
  ...
lxml.etree.XMLSyntaxError: Opening and ending tag mismatch: root line 1 and b, line 2, column 5...
>>> print(len(parser.error_log))
1
>>> error = parser.error_log[0]
>>> print(error.message)
Opening and ending tag mismatch: root line 1 and b
>>> print(error.line)
2
>>> print(error.column)
5
Each entry in the log has the following properties:
message: the message text
domain: the domain ID (see the lxml.etree.ErrorDomains class)
type: the message type ID (see the lxml.etree.ErrorTypes class)
level: the log level ID (see the lxml.etree.ErrorLevels class)
line: the line at which the message originated (if applicable)
column: the character column at which the message originated (if applicable)
filename: the name of the file in which the message originated (if applicable)
For convenience, there are also three properties that provide readable
names for the ID values:
domain_name
type_name
level_name
To filter for a specific kind of message, use the different
filter_*() methods on the error log.
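For example, filter_from_errors() returns only the entries at error level or above; a sketch (the broken XML snippet is made up for the example):

```python
from io import BytesIO
from lxml import etree

# recover=True keeps the parser from raising, so the error log of the
# run can be inspected afterwards.
parser = etree.XMLParser(recover=True)
tree = etree.parse(BytesIO(b"<root><a><b></a></root>"), parser)

# filter_from_errors() drops entries below error level.
for entry in parser.error_log.filter_from_errors():
    print(entry.level_name, entry.line, entry.message)
```
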
HTML parsing is similarly simple. The parsers have a recover
keyword argument that the HTMLParser sets by default. It lets libxml2
try its best to return a valid HTML tree with all content it can
manage to parse. It will not raise an exception on parser errors.
You should use libxml2 version 2.6.21 or newer to take advantage of
this feature.
>>> broken_html = "<html><head><title>test<body><h1>page title</h3>"
>>> parser = etree.HTMLParser()
>>> tree = etree.parse(StringIO(broken_html), parser)
>>> result = etree.tostring(tree.getroot(),
...                         pretty_print=True, method="html")
>>> print(result)
<html>
  <head>
    <title>test</title>
  </head>
  <body>
    <h1>page title</h1>
  </body>
</html>
lxml has an HTML() function, similar to the XML() shortcut known from
ElementTree:
>>> html = etree.HTML(broken_html)
>>> result = etree.tostring(html, pretty_print=True, method="html")
The support for parsing broken HTML depends entirely on libxml2’s recovery
algorithm. It is not the fault of lxml if you find documents that are so
heavily broken that the parser cannot handle them. There is also no guarantee
that the resulting tree will contain all data from the original document. The
parser may have to drop seriously broken parts when struggling to keep
parsing. Especially misplaced meta tags can suffer from this, which may lead
to encoding problems.
Note that the result is a valid HTML tree, but it may not be a
well-formed XML tree. For example, XML forbids double hyphens in
comments, which the HTML parser will happily accept in recovery mode.
Therefore, if your goal is to serialise an HTML document as an
XML/XHTML document after parsing, you may have to apply some manual
preprocessing first.
Also note that the HTML parser is meant to parse HTML documents. For
XHTML documents, use the XML parser, which is namespace aware.
The use of the libxml2 parsers makes some additional information available at
the API level. Currently, ElementTree objects can access the DOCTYPE
information provided by a parsed document, as well as the XML version and the
original encoding. Since lxml 3.5, the doctype references are mutable.
>>> pub_id = "-//W3C//DTD XHTML 1.0 Transitional//EN"
>>> sys_url = "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"
>>> doctype_string = '<!DOCTYPE html PUBLIC "%s" "%s">' % (pub_id, sys_url)
>>> xml_header = '<?xml version="1.0" encoding="ascii"?>'
>>> xhtml = xml_header + doctype_string + '<html><body></body></html>'
>>> tree = etree.parse(StringIO(xhtml))
>>> docinfo = tree.docinfo
>>> print(docinfo.public_id)
-//W3C//DTD XHTML 1.0 Transitional//EN
>>> print(docinfo.system_url)
http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd
>>> docinfo.doctype == doctype_string
True
>>> print(docinfo.xml_version)
1.0
>>> print(docinfo.encoding)
ascii
>>> docinfo.system_url = None
>>> docinfo.public_id = None
>>> print(etree.tostring(tree))
b'<!DOCTYPE html>\n<html><body></body></html>'
As in ElementTree, and similar to a SAX event handler, you can pass
a target object to the parser:
>>> class EchoTarget(object):
...     def start(self, tag, attrib):
...         print("start %s %r" % (tag, dict(attrib)))
...     def end(self, tag):
...         print("end %s" % tag)
...     def data(self, data):
...         print("data %r" % data)
...     def comment(self, text):
...         print("comment %s" % text)
...     def close(self):
...         print("close")
...         return "closed!"
>>> parser = etree.XMLParser(target=EchoTarget())
>>> result = etree.XML("<element>some<!--comment-->text</element>",
...                    parser)
start element {}
data u'some'
comment comment
data u'text'
end element
close
>>> print(result)
closed!
It is important for the close() method to reset the parser target
to a usable state, so that you can reuse the parser as often as you
like.
Starting with lxml 2.3, the close() method will also be called in
the error case. This diverges from the behaviour of ElementTree, but
allows target objects to clean up their state in all situations, so
that the parser can reuse them afterwards.
>>> class CollectorTarget(object):
...     def __init__(self):
...         self.events = []
...     def start(self, tag, attrib):
...         self.events.append("start %s %r" % (tag, dict(attrib)))
...     def end(self, tag):
...         self.events.append("end %s" % tag)
...     def data(self, data):
...         self.events.append("data %r" % data)
...     def comment(self, text):
...         self.events.append("comment %s" % text)
...     def close(self):
...         self.events.append("close")
...         return "closed!"
>>> parser = etree.XMLParser(target=CollectorTarget())
>>> result = etree.XML("<root>some<error>",
...                    parser)        # doctest: +ELLIPSIS
Traceback (most recent call last):
  ...
lxml.etree.XMLSyntaxError: Opening and ending tag mismatch...
>>> for event in parser.target.events:
...     print(event)
start root {}
data u'some'
start error {}
close
Note that the parser does not build a tree when using a parser
target. The result of the parser run is whatever the target object
returns from its close() method. If you want to return an XML
tree here, you have to create it programmatically in the target
object. An example for a parser target that builds a tree is the
TreeBuilder:
>>> parser = etree.XMLParser(target=etree.TreeBuilder())
>>> result = etree.XML("<element>some<!--comment-->text</element>",
...                    parser)
>>> print(result.tag)
element
>>> print(result[0].text)
comment
Since lxml 2.0, the parsers have a feed parser interface that is
compatible to the ElementTree parsers. You can use it to feed data
into the parser in a controlled step-by-step way.
In lxml.etree, you can use both interfaces to a parser at the same
time: the parse() or XML() functions, and the feed parser
interface. Both are independent and will not conflict (except if used
in conjunction with a parser target object as described above).
To start parsing with a feed parser, just call its feed() method
to feed it some data.
>>> parser = etree.XMLParser()
>>> for data in ('<?xml versio', 'n="1.0"?', '><roo', 't><a', '/></root>'):
...     parser.feed(data)
When you are done parsing, you must call the close() method to
retrieve the root Element of the parse result document, and to unlock the
parser:
>>> root = parser.close()
>>> print(root.tag)
root
>>> print(root[0].tag)
a
If you do not call close(), the parser will stay locked and
subsequent feeds will keep appending data, usually resulting in a non
well-formed document and an unexpected parser error. So make sure you
always close the parser after use, also in the exception case.
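The close-even-on-error advice can be sketched as follows; parse_chunks is a made-up helper name for the illustration:

```python
from lxml import etree

def parse_chunks(chunks):
    """Feed data chunks into a fresh parser, making sure it gets closed."""
    parser = etree.XMLParser()
    try:
        for chunk in chunks:
            parser.feed(chunk)
        return parser.close()      # returns the root Element
    except etree.XMLSyntaxError:
        try:
            parser.close()         # unlock the parser even on failure
        except etree.XMLSyntaxError:
            pass
        raise

root = parse_chunks(['<roo', 't><a/', '></root>'])
print(root.tag)  # root
```
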
Another way of achieving the same step-by-step parsing is by writing your own
file-like object that returns a chunk of data on each read() call. Where
the feed parser interface allows you to actively pass data chunks into the
parser, a file-like object passively responds to read() requests of the
parser itself. Depending on the data source, either way may be more natural.
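A sketch of that alternative: ChunkReader is a hypothetical helper, but any object with a read() method works as a parse() input.

```python
from lxml import etree

class ChunkReader:
    """Hypothetical file-like object that hands out data in small chunks,
    simulating a source that produces data piecewise."""
    def __init__(self, data, chunk_size=4):
        self._data = data
        self._pos = 0
        self._chunk_size = chunk_size

    def read(self, size=-1):
        # The parser calls read() repeatedly; returning b'' signals EOF.
        chunk = self._data[self._pos:self._pos + self._chunk_size]
        self._pos += self._chunk_size
        return chunk

tree = etree.parse(ChunkReader(b"<root><a/></root>"))
print(tree.getroot().tag)  # root
```
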
Note that the feed parser has its own error log called
feed_error_log. Errors in the feed parser do not show up in the
normal error_log and vice versa.
You can also combine the feed parser interface with the target parser:
>>> parser = etree.XMLParser(target=EchoTarget())
>>> parser.feed("<eleme")
>>> parser.feed("nt>some text</elem")
start element {}
data u'some text'
>>> parser.feed("ent>")
end element
>>> result = parser.close()
close
>>> print(result)
closed!
Again, this prevents the automatic creation of an XML tree and leaves
all the event handling to the target object. The close() method
of the parser forwards the return value of the target’s close()
method.
In Python 3.4, the xml.etree.ElementTree package gained an extension
to the feed parser interface that is implemented by the XMLPullParser
class. It additionally allows processing parse events after each
incremental parsing step, by calling the .read_events() method and
iterating over the result. This is most useful for non-blocking execution
environments where data chunks arrive one after the other and should be
processed as far as possible in each step.
The same feature is available in lxml 3.3. The basic usage is as follows:
>>> parser = etree.XMLPullParser(events=('start', 'end'))
>>> def print_events(parser):
...     for action, element in parser.read_events():
...         print('%s: %s' % (action, element.tag))
>>> parser.feed('<root>some text')
>>> print_events(parser)
start: root
>>> print_events(parser)    # well, no more events, as before ...
>>> parser.feed('<child><a />')
>>> print_events(parser)
start: child
start: a
end: a
>>> parser.feed('</child></roo')
>>> print_events(parser)
end: child
>>> parser.feed('t>')
>>> print_events(parser)
end: root
Just like the normal feed parser, the XMLPullParser builds a tree in
memory (and you should always call the close() method when done with
parsing):
>>> root = parser.close()
>>> etree.tostring(root)
b'<root>some text<child><a/></child></root>'
However, since the parser provides incremental access to that tree,
you can explicitly delete content that you no longer need once you
have processed it. Read the section on Modifying the tree below
to see what you can do here and what kind of modifications you should
avoid.
In lxml, it is enough to call the .read_events() method once, as
the iterator it returns can be reused when new events are available.
Also, as known from other iterators in lxml, you can pass a tag
argument that selects which parse events are returned by the
.read_events() iterator.
The parse events are tuples (event-type, object). The event types
supported by ElementTree and lxml.etree are the strings ‘start’, ‘end’,
‘start-ns’ and ‘end-ns’. The ‘start’ and ‘end’ events represent opening
and closing elements. They are accompanied by the respective Element
instance. By default, only ‘end’ events are generated, whereas the
example above requested the generation of both ‘start’ and ‘end’ events.
The ‘start-ns’ and ‘end-ns’ events notify about namespace declarations.
They do not come with Elements. Instead, the value of the ‘start-ns’
event is a tuple (prefix, namespaceURI) that designates the beginning
of a prefix-namespace mapping. The corresponding end-ns event does
not have a value (None). It is common practice to use a list as namespace
stack and pop the last entry on the ‘end-ns’ event.
>>> def print_events(events):
...     for action, obj in events:
...         if action in ('start', 'end'):
...             print("%s: %s" % (action, obj.tag))
...         elif action == 'start-ns':
...             print("%s: %s" % (action, obj))
...         else:
...             print(action)
>>> event_types = ("start", "end", "start-ns", "end-ns")
>>> parser = etree.XMLPullParser(event_types)
>>> events = parser.read_events()
>>> parser.feed('<root><element>')
>>> print_events(events)
start: root
start: element
>>> parser.feed('text</element><element>text</element>')
>>> print_events(events)
end: element
start: element
end: element
>>> parser.feed('<empty-element xmlns="http://testns/" />')
>>> print_events(events)
start-ns: ('', 'http://testns/')
start: {http://testns/}empty-element
end: {http://testns/}empty-element
end-ns
>>> parser.feed('</root>')
>>> print_events(events)
end: root
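The namespace-stack idiom mentioned above can be sketched like this (the prefix and namespace URI are made up for the example):

```python
from lxml import etree

parser = etree.XMLPullParser(events=('start-ns', 'end-ns'))
parser.feed('<root xmlns:p="http://example.org/p"><p:child/></root>')

ns_stack = []  # list used as a namespace stack
for action, obj in parser.read_events():
    if action == 'start-ns':
        ns_stack.append(obj)   # obj is a (prefix, namespaceURI) tuple
    else:                      # 'end-ns': obj is None
        ns_stack.pop()
parser.close()
print(ns_stack)  # [] : every declaration was popped again
```
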
You can modify the element and its descendants when handling the
‘end’ event. To save memory, for example, you can remove subtrees
that are no longer needed:
>>> parser = etree.XMLPullParser()
>>> events = parser.read_events()
>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
element: 0
child: 0
element: 1
>>> parser.feed('<empty-element xmlns="http://testns/" /></root>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
{http://testns/}empty-element: 0
root: 3
>>> root = parser.close()
>>> etree.tostring(root)
b'<root/>'
WARNING: During the ‘start’ event, any content of the element,
such as the descendants, following siblings or text, is not yet
available and should not be accessed. Only attributes are guaranteed
to be set. During the ‘end’ event, the element and its descendants
can be freely modified, but its following siblings should not be
accessed. During either of the two events, you must not modify or
move the ancestors (parents) of the current element. You should also
avoid moving or discarding the element itself. The golden rule is: do
not touch anything that will have to be touched again by the parser
later on.
If you have elements with a long list of children in your XML file and want
to save more memory during parsing, you can clean up the preceding siblings
of the current element:
>>> for event, element in parser.read_events():
...     # ... do something with the element
...     element.clear(keep_tail=True)   # clean up children
...     while element.getprevious() is not None:
...         del element.getparent()[0]  # clean up preceding siblings
The while loop deletes multiple siblings in a row. This is only necessary
if you skipped over some of them using the tag keyword argument.
Otherwise, a simple if should do. The more selective your tag is,
however, the more thought you will have to put into finding the right way to
clean up the elements that were skipped. Therefore, it is sometimes easier to
traverse all elements and do the tag selection by hand in the event handler
code.
As an extension over ElementTree, lxml.etree accepts a tag keyword
argument just like iterparse(tag). This restricts events to a
specific tag or namespace:
>>> parser = etree.XMLPullParser(tag="element")
>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> parser.feed('<empty-element xmlns="http://testns/" /></root>')
>>> for action, elem in parser.read_events():
...     print("%s: %s" % (action, elem.tag))
end: element
end: element
>>> event_types = ("start", "end")
>>> parser = etree.XMLPullParser(event_types, tag="{http://testns/}*")
You can combine the pull parser with a parser target. In that case,
it is the target’s responsibility to generate event values. Whatever
it returns from its start() and end() methods will be returned
by the pull parser as the second item of the parse events tuple.
>>> class Target(object):
...     def start(self, tag, attrib):
...         print('-> start(%s)' % tag)
...         return '>>START: %s<<' % tag
...     def end(self, tag):
...         print('-> end(%s)' % tag)
...         return '>>END: %s<<' % tag
...     def close(self):
...         print('-> close()')
...         return "CLOSED!"
>>> event_types = ('start', 'end')
>>> parser = etree.XMLPullParser(event_types, target=Target())
>>> parser.feed('<root><child1 /><child2 /></root>')
-> start(root)
-> start(child1)
-> end(child1)
-> start(child2)
-> end(child2)
-> end(root)
>>> for action, value in parser.read_events():
...     print('%s: %s' % (action, value))
start: >>START: root<<
start: >>START: child1<<
end: >>END: child1<<
start: >>START: child2<<
end: >>END: child2<<
end: >>END: root<<
>>> print(parser.close())
-> close()
CLOSED!
As you can see, the event values do not even have to be Element objects.
The target is generally free to decide how it wants to create an XML tree
or whatever else it wants to make of the parser callbacks. In many cases,
however, you will want to make your custom target inherit from the
TreeBuilder class in order to have it build a tree that you can process
normally. The start() and end() methods of TreeBuilder return
the Element object that was created, so you can override them and modify
the input or output according to your needs. Here is an example that
filters attributes before they are being added to the tree:
>>> class AttributeFilter(etree.TreeBuilder):
...     def start(self, tag, attrib):
...         attrib = dict(attrib)
...         if 'evil' in attrib:
...             del attrib['evil']
...         return super(AttributeFilter, self).start(tag, attrib)
>>> parser = etree.XMLPullParser(target=AttributeFilter())
>>> parser.feed('<root><child1 test="123" /><child2 evil="YES" /></root>')
>>> for action, element in parser.read_events():
...     print('%s: %s(%r)' % (action, element.tag, element.attrib))
end: child1({'test': '123'})
end: child2({})
end: root({})
As known from ElementTree, the iterparse() utility function
returns an iterator that generates parser events for an XML file (or
file-like object), while building the tree. You can think of it as
a blocking wrapper around the XMLPullParser that automatically and
incrementally reads data from the input file for you and provides a
single iterator for them:
>>> xml = '''
... <root>
...   <element key='value'>text</element>
...   <element>text</element>tail
...   <empty-element xmlns="http://testns/" />
... </root>
... '''
>>> context = etree.iterparse(StringIO(xml))
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
end: element
end: element
end: {http://testns/}empty-element
end: root
After parsing, the resulting tree is available through the root property
of the iterator:
>>> context.root.tag
'root'
The other event types can be activated with the events keyword argument:
>>> events = ("start", "end")
>>> context = etree.iterparse(StringIO(xml), events=events)
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: root
start: element
end: element
start: element
end: element
start: {http://testns/}empty-element
end: {http://testns/}empty-element
end: root
iterparse() also supports the tag argument for selective event
iteration and several other parameters that control the parser setup.
The tag argument can be a single tag or a sequence of tags.
You can also use it to parse HTML input by passing html=True.
For convenience, lxml also provides an iterwalk() function.
It behaves exactly like iterparse(), but works on Elements and
ElementTrees. Here is an example for a tree parsed by iterparse():
>>> f = StringIO(xml)
>>> context = etree.iterparse(
...     f, events=("start", "end"), tag="element")
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element
>>> root = context.root
And now we can take the resulting in-memory tree and iterate over it
using iterwalk() to get the exact same events without parsing the
input again:
>>> context = etree.iterwalk(
...     root, events=("start", "end"), tag="element")
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element
In order to avoid wasting time on uninteresting parts of the tree, the iterwalk
iterator can be instructed to skip over an entire subtree with its
.skip_subtree() method.
>>> root = etree.XML('''
... <root>
...   <a> <b /> </a>
...   <c />
... </root>
... ''')
>>> context = etree.iterwalk(root, events=("start", "end"))
>>> for action, elem in context:
...     if action == 'start' and elem.tag == 'a':
...         context.skip_subtree()  # ignore <a> and its descendants
...     elif elem.tag != 'root':
...         print("%s: %s" % (action, elem.tag))
start: c
end: c
Note that .skip_subtree() only has an effect when handling start or
start-ns events.
lxml.etree has broader support for Python unicode strings than the ElementTree
library. First of all, where ElementTree would raise an exception, the
parsers in lxml.etree can handle unicode strings straight away. This is most
helpful for XML snippets embedded in source code using the XML()
function:
>>> root = etree.XML( u'<test> \uf8d1 + \uf8d2 </test>' )
This requires, however, that unicode strings do not specify a conflicting
encoding themselves and thus lie about their real encoding:
>>> etree.XML( u'<?xml version="1.0" encoding="ASCII"?>\n' +
...            u'<test> \uf8d1 + \uf8d2 </test>' )
Traceback (most recent call last):
  ...
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
Similarly, you will get errors when you try the same with HTML data in a
unicode string that specifies a charset in a meta tag of the header. You
should generally avoid converting XML/HTML data to unicode before passing it
into the parsers. It is both slower and error prone.
To serialize the result, you would normally use the tostring()
module function, which serializes to plain ASCII by default or a
number of other byte encodings if asked for:
b’  + 
>>> string(root, encoding=’UTF-8′, xml_declaration=False)
b’ xefxa3x91 + xefxa3x92
As an extension, lxml.etree recognises the name 'unicode' as an argument
to the encoding parameter to build a Python unicode representation of a tree:
>>> etree.tostring(root, encoding='unicode')
u'<test> \uf8d1 + \uf8d2 </test>'
>>> el = etree.Element("test")
>>> etree.tostring(el, encoding='unicode')
u'<test/>'
>>> subel = etree.SubElement(el, "subtest")
>>> etree.tostring(el, encoding='unicode')
u'<test><subtest/></test>'
>>> tree = etree.ElementTree(el)
>>> etree.tostring(tree, encoding='unicode')
u'<test><subtest/></test>'
The result of tostring(encoding=’unicode’) can be treated like any
other Python unicode string and then passed back into the parsers.
However, if you want to save the result to a file or pass it over the
network, you should use write() or tostring() with a byte
encoding (typically UTF-8) to serialize the XML. The main reason is
that unicode strings returned by tostring(encoding=’unicode’) are
not byte streams and they never have an XML declaration to specify
their encoding. These strings are most likely not parsable by other
XML libraries.
For normal byte encodings, the tostring() function automatically
adds a declaration as needed that reflects the encoding of the
returned string. This makes it possible for other parsers to
correctly parse the XML byte stream. Note that using tostring()
with UTF-8 is also considerably faster in most cases.
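Following that advice, serialising with an explicit byte encoding might look like this (an in-memory buffer stands in for a real file in this sketch):

```python
from io import BytesIO
from lxml import etree

root = etree.fromstring('<doc><item>\u00e9</item></doc>')
tree = etree.ElementTree(root)

# Serialising with a byte encoding adds an XML declaration that
# states the encoding, so other parsers can decode the stream.
buf = BytesIO()
tree.write(buf, encoding='UTF-8', xml_declaration=True)
print(buf.getvalue().decode('utf-8'))
```
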
Parsing XML and HTML with lxml

Parsing XML and HTML with lxml

lxml provides a very simple and powerful API for parsing XML and HTML. It
supports one-step parsing as well as step-by-step parsing using an
event-driven API (currently only for XML).
Contents
Parsers
Parser options
Error log
Parsing HTML
Doctype information
The target parser interface
The feed parser interface
Incremental event parsing
Event types
Modifying the tree
Selective tag events
Comments and PIs
Events with custom targets
iterparse and iterwalk
iterwalk
Python unicode strings
Serialising to Unicode strings
The usual setup procedure:
>>> from lxml import etree
The following examples also use StringIO or BytesIO to show how to parse
from files and file-like objects. Both are available in the io module:
from io import StringIO, BytesIO
Parsers are represented by parser objects. There is support for parsing both
XML and (broken) HTML. Note that XHTML is best parsed as XML, parsing it with
the HTML parser can lead to unexpected results. Here is a simple example for
parsing XML from an in-memory string:
>>> xml = ‘
>>> root = omstring(xml)
>>> string(root)
b’
To read from a file or file-like object, you can use the parse() function,
which returns an ElementTree object:
>>> tree = (StringIO(xml))
>>> string(troot())
Note how the parse() function reads from a file-like object here. If
parsing is done from a real file, it is more common (and also somewhat more
efficient) to pass a filename:
>>> tree = (“doc/”)
lxml can parse from a local file, an HTTP URL or an FTP URL. It also
auto-detects and reads gzip-compressed XML files ().
If you want to parse from memory and still provide a base URL for the document
(e. g. to support relative paths in an XInclude), you can pass the base_url
keyword argument:
>>> root = omstring(xml, base_url=”)
The parsers accept a number of setup options as keyword arguments. The above
example is easily extended to clean up namespaces during parsing:
>>> parser = etree. XMLParser(ns_clean=True)
>>> tree = (StringIO(xml), parser)
b’
The keyword arguments in the constructor are mainly based on the libxml2
parser configuration. A DTD will also be loaded if validation or attribute
default values are requested.
Available boolean keyword arguments:
attribute_defaults – read the DTD (if referenced by the document) and add
the default attributes from it
dtd_validation – validate while parsing (if a DTD was referenced)
load_dtd – load and parse the DTD while parsing (no validation is performed)
no_network – prevent network access when looking up external
documents (on by default)
ns_clean – try to clean up redundant namespace declarations
recover – try hard to parse through broken XML
remove_blank_text – discard blank text nodes between tags, also known as
ignorable whitespace. This is best used together with a DTD or schema
(which tells data and noise apart), otherwise a heuristic will be applied.
remove_comments – discard comments
remove_pis – discard processing instructions
strip_cdata – replace CDATA sections by normal text content (on by
default)
resolve_entities – replace entities by their text value (on by
huge_tree – disable security restrictions and support very deep trees
and very long text content (only affects libxml2 2. 7+)
compact – use compact storage for short text content (on by default)
collect_ids – collect XML IDs in a hash table while parsing (on by default).
Disabling this can substantially speed up parsing of documents with many
different IDs if the hash lookup is not used afterwards.
Other keyword arguments:
encoding – override the document encoding
target – a parser target object that will receive the parse events
(see The target parser interface)
schema – an XMLSchema to validate against (see validation)
Parsers have an error_log property that lists the errors and
warnings of the last parser run:
>>> parser = etree. XMLParser()
>>> print(len(ror_log))
0
>>> tree = (“n
“, parser) # doctest: +ELLIPSIS
Traceback (most recent call last):…
Opening and ending tag mismatch: root line 1 and b, line 2, column 5…
1
>>> error = ror_log[0]
>>> print(ssage)
Opening and ending tag mismatch: root line 1 and b
>>> print()
2
5
Each entry in the log has the following properties:
message: the message text
domain: the domain ID (see the class)
type: the message type ID (see the class)
level: the log level ID (see the class)
line: the line at which the message originated (if applicable)
column: the character column at which the message originated (if applicable)
filename: the name of the file in which the message originated (if applicable)
For convenience, there are also three properties that provide readable
names for the ID values:
domain_name
type_name
level_name
To filter for a specific kind of message, use the different
filter_*() methods on the error log (see the
class).
HTML parsing is similarly simple. The parsers have a recover
keyword argument that the HTMLParser sets by default. It lets libxml2
try its best to return a valid HTML tree with all content it can
manage to parse. It will not raise an exception on parser errors.
You should use libxml2 version 2. 6. 21 or newer to take advantage of
this feature.
>>> broken_html = “test<body></p> <h1>page title</h3> <p>”<br /> >>> parser = MLParser()<br /> >>> tree = (StringIO(broken_html), parser)<br /> >>> result = string(troot(),… pretty_print=True, method=”html”)<br /> >>> print(result)<br /> <html><br /> <head><br /> <title>test

page title



Lxml has an HTML function, similar to the XML shortcut known from
ElementTree:
>>> html = (broken_html)
>>> result = string(html, pretty_print=True, method=”html”)
The support for parsing broken HTML depends entirely on libxml2’s recovery
algorithm. It is not the fault of lxml if you find documents that are so
heavily broken that the parser cannot handle them. There is also no guarantee
that the resulting tree will contain all data from the original document. The
parser may have to drop seriously broken parts when struggling to keep
parsing. Especially misplaced meta tags can suffer from this, which may lead
to encoding problems.
Note that the result is a valid HTML tree, but it may not be a
well-formed XML tree. For example, XML forbids double hyphens in
comments, which the HTML parser will happily accept in recovery mode.
Therefore, if your goal is to serialise an HTML document as an
XML/XHTML document after parsing, you may have to apply some manual
preprocessing first.
Also note that the HTML parser is meant to parse HTML documents. For
XHTML documents, use the XML parser, which is namespace aware.
The use of the libxml2 parsers makes some additional information available at
the API level. Currently, ElementTree objects can access the DOCTYPE
information provided by a parsed document, as well as the XML version and the
original encoding. Since lxml 3. 5, the doctype references are mutable.
>>> pub_id = “-//W3C//DTD XHTML 1. 0 Transitional//EN”
>>> sys_url = ”
>>> doctype_string = ‘‘% (pub_id, sys_url)
>>> xml_header = ‘
>>> xhtml = xml_header + doctype_string + ‘
>>> tree = (StringIO(xhtml))
>>> docinfo = cinfo
>>> print(lic_id)
-//W3C//DTD XHTML 1. 0 Transitional//EN
>>> print(stem_url)
>>> ctype == doctype_string
True
>>> print(docinfo. xml_version)
1. 0
>>> print(docinfo. encoding)
ascii
>>> stem_url = None
>>> lic_id = None
>>> print(string(tree))


As in ElementTree, and similar to a SAX event handler, you can pass
a target object to the parser:
>>> class EchoTarget(object):… def start(self, tag, attrib):… print(“start%s%r”% (tag, dict(attrib)))… def end(self, tag):… print(“end%s”% tag)… def data(self, data):… print(“data%r”% data)… def comment(self, text):… print(“comment%s”% text)… def close(self):… print(“close”)… return “closed! ”
>>> parser = etree. XMLParser(target = EchoTarget())
>>> result = (“sometext“,… parser)
start element {}
data u’some’
comment comment
data u’text’
end element
close
closed!
It is important for the () method to reset the parser target
to a usable state, so that you can reuse the parser as often as you
like:
Starting with lxml 2. 3, the () method will also be called in
the error case. This diverges from the behaviour of ElementTree, but
allows target objects to clean up their state in all situations, so
that the parser can reuse them afterwards.
>>> class CollectorTarget(object):… def __init__(self):… = []… (“start%s%r”% (tag, dict(attrib)))… (“end%s”% tag)… (“data%r”% data)… (“comment%s”% text)… (“close”)… XMLParser(target = CollectorTarget())
>>> result = (“some“,… parser) # doctest: +ELLIPSIS
Opening and ending tag mismatch…
>>> for event in… print(event)
Note that the parser does not build a tree when using a parser
target. The result of the parser run is whatever the target object
returns from its () method. If you want to return an XML
tree here, you have to create it programmatically in the target
object. An example for a parser target that builds a tree is the
TreeBuilder:
>>> parser = etree. XMLParser(target = eeBuilder())
element
>>> print(result[0])
comment
Since lxml 2. 0, the parsers have a feed parser interface that is
compatible to the ElementTree parsers. You can use it to feed data
into the parser in a controlled step-by-step way.
In, you can use both interfaces to a parser at the same
time: the parse() or XML() functions, and the feed parser
interface. Both are independent and will not conflict (except if used
in conjunction with a parser target object as described above).
To start parsing with a feed parser, just call its feed() method
to feed it some data.
>>> for data in (‘‘):… (data)
When you are done parsing, you must call the close() method to
retrieve the root Element of the parse result document, and to unlock the
parser:
>>> root = ()
root
>>> print(root[0])
a
If you do not call close(), the parser will stay locked and
subsequent feeds will keep appending data, usually resulting in a non
well-formed document and an unexpected parser error. So make sure you
always close the parser after use, also in the exception case.
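The advice above can be sketched as a small helper; the parse_chunks() function below is hypothetical and not part of lxml, it simply wraps feed()/close() so that close() also runs on the error path:

```python
from lxml import etree

def parse_chunks(chunks):
    """Feed data chunks into a fresh XMLParser, closing it in all cases.

    A hypothetical helper, not part of lxml: close() is also called on
    the error path so the parser is unlocked for reuse.
    """
    parser = etree.XMLParser()
    try:
        for chunk in chunks:
            parser.feed(chunk)
    except Exception:
        try:
            parser.close()               # unlock, discard partial state
        except etree.XMLSyntaxError:
            pass                         # close() re-reports feed errors
        raise
    return parser.close()                # returns the root Element

root = parse_chunks([b'<roo', b't><a/>', b'</root>'])
print(root.tag)   # prints: root
```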
Another way of achieving the same step-by-step parsing is by writing your own
file-like object that returns a chunk of data on each read() call. Where
the feed parser interface allows you to actively pass data chunks into the
parser, a file-like object passively responds to read() requests of the
parser itself. Depending on the data source, either way may be more natural.
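As an illustration, here is a minimal sketch of such a file-like object; the ChunkReader class is made up for this example and only implements the read() method that the parser calls:

```python
from lxml import etree

class ChunkReader:
    """A file-like object (hypothetical) that serves data in small chunks.

    Here the parser drives the process by calling read() repeatedly,
    instead of being fed data actively.
    """
    def __init__(self, data, chunk_size=4):
        self.data = data
        self.pos = 0
        self.chunk_size = chunk_size

    def read(self, size=-1):
        chunk = self.data[self.pos:self.pos + self.chunk_size]
        self.pos += self.chunk_size
        return chunk          # b'' at the end signals EOF to the parser

tree = etree.parse(ChunkReader(b'<root><a/><b/></root>'))
print(tree.getroot().tag)   # prints: root
```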
Note that the feed parser has its own error log called
feed_error_log. Errors in the feed parser do not show up in the
normal error_log and vice versa.
You can also combine the feed parser interface with the target parser:

>>> parser = etree.XMLParser(target=CollectorTarget())

>>> parser.feed("<eleme")
>>> parser.feed("nt>some text</elem")
>>> parser.feed("ent>")

>>> result = parser.close()
Again, this prevents the automatic creation of an XML tree and leaves
all the event handling to the target object. The close() method
of the parser forwards the return value of the target’s close()
method.
In Python 3.4, the xml.etree.ElementTree package gained an extension
to the feed parser interface that is implemented by the XMLPullParser
class. It additionally allows processing parse events after each
incremental parsing step, by calling the .read_events() method and
iterating over the result. This is most useful for non-blocking execution
environments where data chunks arrive one after the other and should be
processed as far as possible in each step.
The same feature is available in lxml 3.3. The basic usage is as follows:

>>> parser = etree.XMLPullParser(events=('start', 'end'))

>>> def print_events(parser):
...     for action, element in parser.read_events():
...         print('%s: %s' % (action, element.tag))

>>> parser.feed('<root>some text')
>>> print_events(parser)
start: root
>>> print_events(parser)    # well, no more events, as before

>>> parser.feed('<child><a />')
>>> print_events(parser)
start: child
start: a
end: a

>>> parser.feed('</child></roo')
>>> print_events(parser)
end: child
>>> parser.feed('t>')
>>> print_events(parser)
end: root
Just like the normal feed parser, the XMLPullParser builds a tree in
memory (and you should always call the close() method when done with
parsing):

>>> root = parser.close()
>>> etree.tostring(root)
b'<root>some text<child><a/></child></root>'
However, since the parser provides incremental access to that tree,
you can explicitly delete content that you no longer need once you
have processed it. Read the section on Modifying the tree below
to see what you can do here and what kind of modifications you should
avoid.
In lxml, it is enough to call the .read_events() method once, as
the iterator it returns can be reused when new events are available.
Also, as known from other iterators in lxml, you can pass a tag
argument that selects which parse events are returned by the
.read_events() iterator.
The parse events are tuples (event-type, object). The event types
supported by ElementTree and lxml.etree are the strings 'start', 'end',
'start-ns' and 'end-ns'. The 'start' and 'end' events represent opening
and closing elements. They are accompanied by the respective Element
instance. By default, only 'end' events are generated, whereas the
example above requested the generation of both 'start' and 'end' events.
The 'start-ns' and 'end-ns' events notify about namespace declarations.
They do not come with Elements. Instead, the value of the 'start-ns'
event is a tuple (prefix, namespaceURI) that designates the beginning
of a prefix-namespace mapping. The corresponding 'end-ns' event does
not have a value (None). It is common practice to use a list as a namespace
stack and pop the last entry on the 'end-ns' event.
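The namespace-stack practice can be sketched like this (a minimal example; the urn:example namespace below is made up for illustration):

```python
from lxml import etree

# Maintain a prefix -> namespace stack from 'start-ns'/'end-ns' events.
# The 'urn:example' namespace is made up for this sketch.
parser = etree.XMLPullParser(events=('start-ns', 'end-ns', 'end'))
parser.feed(b'<root xmlns:x="urn:example"><x:item/></root>')
parser.close()

ns_stack = []
seen = []
for action, obj in parser.read_events():
    if action == 'start-ns':
        ns_stack.append(obj)   # obj is a (prefix, namespaceURI) tuple
    elif action == 'end-ns':
        ns_stack.pop()         # obj is None for 'end-ns'
    else:
        seen.append(obj.tag)   # 'end' events carry the Element

assert ns_stack == []          # every declaration went out of scope
```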
>>> def print_events(events):
...     for action, obj in events:
...         if action in ('start', 'end'):
...             print("%s: %s" % (action, obj.tag))
...         elif action == 'start-ns':
...             print("%s: %s" % (action, obj))
...         else:
...             print(action)

>>> event_types = ("start", "end", "start-ns", "end-ns")
>>> parser = etree.XMLPullParser(event_types)
>>> events = parser.read_events()

>>> parser.feed('<root><element>')
>>> print_events(events)
start: root
start: element
>>> parser.feed('text</element><element>text</element>')
>>> print_events(events)
end: element
start: element
end: element
>>> parser.feed('<empty-element xmlns="testns/" />')
>>> print_events(events)
start-ns: ('', 'testns/')
start: {testns/}empty-element
end: {testns/}empty-element
end-ns
>>> parser.feed('</root>')
>>> print_events(events)
end: root
You can modify the element and its descendants when handling the
‘end’ event. To save memory, for example, you can remove subtrees
that are no longer needed:
>>> parser = etree.XMLPullParser()
>>> events = parser.read_events()

>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
element: 0
child: 0
element: 1
>>> parser.feed('<empty-element xmlns="testns/" /></root>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
{testns/}empty-element: 0
root: 3
>>> root = parser.close()
>>> etree.tostring(root)
b'<root/>'
WARNING: During the ‘start’ event, any content of the element,
such as the descendants, following siblings or text, is not yet
available and should not be accessed. Only attributes are guaranteed
to be set. During the ‘end’ event, the element and its descendants
can be freely modified, but its following siblings should not be
accessed. During either of the two events, you must not modify or
move the ancestors (parents) of the current element. You should also
avoid moving or discarding the element itself. The golden rule is: do
not touch anything that will have to be touched again by the parser
later on.
If you have elements with a long list of children in your XML file and want
to save more memory during parsing, you can clean up the preceding siblings
of the current element:
>>> for event, element in parser.read_events():
...     # ... do something with the element
...     element.clear(keep_tail=True)   # clean up children
...     while element.getprevious() is not None:
...         del element.getparent()[0]  # clean up preceding siblings
The while loop deletes multiple siblings in a row. This is only necessary
if you skipped over some of them using the tag keyword argument.
Otherwise, a simple if should do. The more selective your tag is,
however, the more thought you will have to put into finding the right way to
clean up the elements that were skipped. Therefore, it is sometimes easier to
traverse all elements and do the tag selection by hand in the event handler
code.
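For example, a sketch of doing the tag selection by hand in the handler; the item and other tag names below are made up for illustration:

```python
from io import BytesIO
from lxml import etree

# Made-up document: select 'item' elements by hand in the handler,
# so that every element (including the skipped ones) is still cleaned up.
xml = b'<root><item>1</item><other/><item>2</item></root>'

texts = []
for event, element in etree.iterparse(BytesIO(xml)):
    if element.tag == 'item':
        texts.append(element.text)   # process only the selected tag
    element.clear(keep_tail=True)    # but clean up every element

print(texts)   # prints: ['1', '2']
```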
As an extension over ElementTree, lxml.etree accepts a tag keyword
argument just like iterparse(tag). This restricts events to a
specific tag or namespace:
>>> parser = etree.XMLPullParser(tag="element")

>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element></root>')

>>> for action, elem in parser.read_events():
...     print("%s: %s" % (action, elem.tag))
end: element
end: element

>>> event_types = ("start", "end")
>>> parser = etree.XMLPullParser(event_types, tag="{testns/}*")
You can combine the pull parser with a parser target. In that case,
it is the target's responsibility to generate event values. Whatever
it returns from its start() and end() methods will be returned
by the pull parser as the second item of the parse events tuple.
>>> class Target(object):
...     def start(self, tag, attrib):
...         print('-> start(%s)' % tag)
...         return '>>START: %s<<' % tag
...     def end(self, tag):
...         print('-> end(%s)' % tag)
...         return '>>END: %s<<' % tag
...     def close(self):
...         print('-> close()')
...         return "CLOSED!"
>>> event_types = ('start', 'end')
>>> parser = etree.XMLPullParser(event_types, target=Target())

>>> parser.feed('<root><child1 /><child2 /></root>')
-> start(root)
-> start(child1)
-> end(child1)
-> start(child2)
-> end(child2)
-> end(root)

>>> for action, value in parser.read_events():
...     print('%s: %s' % (action, value))
start: >>START: root<<
start: >>START: child1<<
end: >>END: child1<<
start: >>START: child2<<
end: >>END: child2<<
end: >>END: root<<

>>> print(parser.close())
-> close()
CLOSED!
As you can see, the event values do not even have to be Element objects.
The target is generally free to decide how it wants to create an XML tree
or whatever else it wants to make of the parser callbacks. In many cases,
however, you will want to make your custom target inherit from the
TreeBuilder class in order to have it build a tree that you can process
normally. The start() and end() methods of TreeBuilder return
the Element object that was created, so you can override them and modify
the input or output according to your needs. Here is an example that
filters attributes before they are being added to the tree:
>>> class AttributeFilter(etree.TreeBuilder):
...     def start(self, tag, attrib):
...         attrib = dict(attrib)
...         if 'evil' in attrib:
...             del attrib['evil']
...         return super(AttributeFilter, self).start(tag, attrib)
>>> parser = etree.XMLPullParser(target=AttributeFilter())
>>> parser.feed('<root><child1 test="123" /><child2 evil="YES" /></root>')

>>> for action, element in parser.read_events():
...     print('%s: %s(%r)' % (action, element.tag, dict(element.attrib)))
end: child1({'test': '123'})
end: child2({})
end: root({})
As known from ElementTree, the iterparse() utility function
returns an iterator that generates parser events for an XML file (or
file-like object), while building the tree. You can think of it as
a blocking wrapper around the XMLPullParser that automatically and
incrementally reads data from the input file for you and provides a
single iterator for them:
>>> xml = '''
... <root>
...   <element key='value'>text</element>
...   <element>text</element>tail
...   <empty-element xmlns="testns/" />
... </root>
... '''
>>> context = etree.iterparse(StringIO(xml))
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
end: element
end: element
end: {testns/}empty-element
end: root
After parsing, the resulting tree is available through the root property
of the iterator:

>>> context.root.tag
'root'
The other event types can be activated with the events keyword argument:
>>> events = ("start", "end")
>>> context = etree.iterparse(StringIO(xml), events=events)
iterparse() also supports the tag argument for selective event
iteration and several other parameters that control the parser setup.
The tag argument can be a single tag or a sequence of tags.
You can also use it to parse HTML input by passing html=True.
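A quick sketch of the html=True option on a broken snippet; the tag names are made up, and lxml's recovering HTML parser is assumed to close the unterminated first paragraph itself:

```python
from io import BytesIO
from lxml import etree

# The recovering HTML parser closes the unterminated first <p> itself.
broken_html = b'<html><body><p>first<p>second</body></html>'

tags = []
for event, element in etree.iterparse(BytesIO(broken_html), html=True):
    tags.append(element.tag)   # default: 'end' events only

print(tags)
```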
For convenience, lxml also provides an iterwalk() function.
It behaves exactly like iterparse(), but works on Elements and
ElementTrees. Here is an example for a tree parsed by iterparse():
>>> f = StringIO(xml)
>>> context = etree.iterparse(
...     f, events=("start", "end"), tag="element")

>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element

>>> root = context.root
And now we can take the resulting in-memory tree and iterate over it
using iterwalk() to get the exact same events without parsing the
input again:
>>> context = etree.iterwalk(
...     root, events=("start", "end"), tag="element")

>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element
In order to avoid wasting time on uninteresting parts of the tree, the
iterwalk iterator can be instructed to skip over an entire subtree with
its .skip_subtree() method.
>>> root = etree.XML('''
... <root>
...   <a> <b /> </a>
...   <c />
... </root>
... ''')
>>> context = etree.iterwalk(root, events=("start", "end"))
>>> for action, elem in context:
...     if action == 'start' and elem.tag == 'a':
...         context.skip_subtree()  # ignore <a> and its whole subtree
...     elif elem.tag in ('b', 'c'):
...         print("%s: %s" % (action, elem.tag))
start: c
end: c
Note that .skip_subtree() only has an effect when handling 'start' or
'start-ns' events.
lxml.etree has broader support for Python unicode strings than the
ElementTree library. First of all, where ElementTree would raise an
exception, the parsers in lxml.etree can handle unicode strings straight
away. This is most helpful for XML snippets embedded in source code using
the XML() function:
>>> root = etree.XML(u'<test> \uf8d1 + \uf8d2 </test>')
This requires, however, that unicode strings do not specify a conflicting
encoding themselves and thus lie about their real encoding:

>>> etree.XML( u'<?xml version="1.0" encoding="ASCII"?>\n' +
...            u'<test> \uf8d1 + \uf8d2 </test>')
Traceback (most recent call last):
  ...
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
Similarly, you will get errors when you try the same with HTML data in a
unicode string that specifies a charset in a meta tag of the header. You
should generally avoid converting XML/HTML data to unicode before passing it
into the parsers. It is both slower and error prone.
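For instance, keeping the data as bytes side-steps the declaration problem entirely; a small sketch (the test element and its content are made up):

```python
from lxml import etree

# Raw bytes with an encoding declaration parse fine...
data = u'<?xml version="1.0" encoding="UTF-8"?><test>\u00e9</test>'.encode('utf-8')
root = etree.fromstring(data)
assert root.text == u'\u00e9'

# ...but the already-decoded string with the same declaration is rejected:
try:
    etree.fromstring(data.decode('utf-8'))
except ValueError as exc:
    print('rejected:', type(exc).__name__)   # prints: rejected: ValueError
```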
To serialize the result, you would normally use the tostring()
module function, which serializes to plain ASCII by default or a
number of other byte encodings if asked for:

>>> etree.tostring(root)
b'<test> &#63697; + &#63698; </test>'

>>> etree.tostring(root, encoding='UTF-8', xml_declaration=False)
b'<test> \xef\xa3\x91 + \xef\xa3\x92 </test>'
As an extension, lxml.etree recognises the name 'unicode' as an argument
to the encoding parameter to build a Python unicode representation of a tree:

>>> etree.tostring(root, encoding='unicode')
u'<test> \uf8d1 + \uf8d2 </test>'
>>> el = etree.Element("test")
>>> etree.tostring(el, encoding='unicode')
u'<test/>'

>>> subel = etree.SubElement(el, "subtest")
>>> etree.tostring(el, encoding='unicode')
u'<test><subtest/></test>'

>>> tree = etree.ElementTree(el)
>>> etree.tostring(tree, encoding='unicode')
u'<test><subtest/></test>'
The result of tostring(encoding=’unicode’) can be treated like any
other Python unicode string and then passed back into the parsers.
However, if you want to save the result to a file or pass it over the
network, you should use write() or tostring() with a byte
encoding (typically UTF-8) to serialize the XML. The main reason is
that unicode strings returned by tostring(encoding=’unicode’) are
not byte streams and they never have an XML declaration to specify
their encoding. These strings are most likely not parsable by other
XML libraries.
For normal byte encodings, the tostring() function automatically
adds a declaration as needed that reflects the encoding of the
returned string. This makes it possible for other parsers to
correctly parse the XML byte stream. Note that using tostring()
with UTF-8 is also considerably faster in most cases.