The XML/HTML lexer is now capable of processing most invalid XML/HTML
(that I can think of at least). This is achieved by inserting missing
closing tags (where needed) and/or ignoring excess closing tags. For
example, HTML such as this:
<a></a></p>
Results in the following tokens:
[:T_ELEM_START, nil, 1]
[:T_ELEM_NAME, 'a', 1]
[:T_ELEM_CLOSE, nil, 1]
In turn this HTML:
<a>
Results in these tokens:
[:T_ELEM_START, nil, 1]
[:T_ELEM_NAME, 'a', 1]
[:T_ELEM_CLOSE, nil, 1]
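You can inspect these tokens by running the lexer by hand. A sketch, assuming
Oga::XML::Lexer and its :html option are part of the public API:

Oga::XML::Lexer.new('<a></a></p>', :html => true).lex
# => the tokens listed above; the stray </p> doesn't produce any tokens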
Fixes #84
Previously a single Ragel machine was used for processing HTML
script and style tags. This had the unfortunate side-effect that the
following was not parsed correctly (while being valid HTML):
<script>
var foo = "</style>";
</script>
The same applied to style tags:
<style>
/* </script> */
</style>
By using separate machines we can work around the above issue. The
downside is that this can produce multiple T_TEXT nodes, which have to
be stitched back together in the parser.
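For example, a sketch (assuming Document#xpath and Node#text behave as
described):

document = Oga.parse_html('<script>var foo = "</style>";</script>')

# The </style> inside the script is lexed as plain text and the resulting
# T_TEXT nodes are merged back into a single text value:
document.xpath('script').first.text # => 'var foo = "</style>";'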
This adds lexing support for HTML/XML such as:
<foo bar="""></foo>
While technically invalid, some websites (e.g. yahoo.com) contain HTML
just like this.
The lexer handles this as follows (see the sketch after this list):
1. When we're in the "element_head" machine, do business as usual until
we bump into a "=".
2. Call (using Ragel's "fcall") the machine to use for processing the
attribute value (if any).
3. In this machine quoted strings are processed. The moment a string has
been processed the lexer jumps right back in to the "element_head"
machine. This ensures that any stray quotes are ignored instead of
being processed as extra attribute values (eventually leading to
parsing errors due to unbalanced quotes).
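A sketch of feeding such input straight to the lexer (again assuming
Oga::XML::Lexer is part of the public API):

Oga::XML::Lexer.new('<foo bar="""></foo>').lex
# The stray quote after the attribute value is ignored instead of being
# lexed as the start of a new string.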
Similar to comments (ea8b4aa92f) and CDATA
tags (8acc7fc743), processing instructions
are now lexed in separate chunks _with_ proper support for streaming
input.
Related issue: #93
Instead of using a single token (T_CDATA) for a CDATA tag the lexer now
uses 3 tokens:
1. T_CDATA_START
2. T_CDATA_BODY
3. T_CDATA_END
The T_CDATA_BODY token can occur multiple times and is turned into a
single value in the XML parser. This is similar to the way strings are
lexed.
By changing the way CDATA tags are lexed Oga can now lex CDATA tags
containing newlines when using an IO as input. For example, this would
previously fail:
Oga.parse_xml(StringIO.new("<![CDATA[\nfoo]]>"))
Because IO input is read per line, the data sent to the lexer would be
as follows:
"<![CDATA[\n"
"foo]]>"
Related issue: #93
The cache of Element#available_namespaces is flushed whenever
Element#register_namespace is called.
When this cache is flushed it's also recursively flushed for all child
elements. This makes calls to Element#register_namespace a bit more
expensive but in turn calls to Element#available_namespaces will be a
lot faster.
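A sketch, assuming register_namespace takes a prefix and a URI:

element.register_namespace('foo', 'http://example.com/ns') # flushes the cache

element.available_namespaces # re-computed once, then cached again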
The results of these methods are now cached until a Node is moved into
another NodeSet. This reduces the time spent in the
xpath/evaluator/big_xml_average_bench.rb benchmark from roughly 10
seconds to roughly 5 seconds per iteration.
In HTML the text of a script/style tag should be left untouched; no
entities should be converted. Doing so would break JavaScript such as the
following:
foo&&bar;
Such code is often the result of minifiers doing their dirty business.
This was broken by introducing the process of lazy decoding of XML/HTML
entities. The new setup works similar to how XML::Text#text decodes any
entities that may be present.
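For example, a sketch:

document = Oga.parse_html('<script>foo&&bar;</script>')

# The raw text is returned as-is, without any entity decoding:
document.xpath('script').first.text # => "foo&&bar;"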
Fixes #91
Removing this makes the process of parsing larger XML documents a bit faster.
The downside is that NodeSet#initialize will no longer filter out duplicate
nodes, though this is not something Oga itself relies upon.
Methods such as NodeSet#push still ignore elements that are already present.
Attribute handling in the SAX API was utterly broken, mainly due to me
overlooking it. There are now 2 new
callbacks to handle this properly:
* on_attribute: to handle a single attribute/value pair
* on_attributes: to handle a collection of attributes (as returned by
on_attribute)
By default on_attribute returns a Hash; on_attributes in turn merges all
attribute hashes into a single one. This ensures that on_element _actually_
receives the attributes as a Hash, instead of an Array with random
nil/XML::Attribute values.
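A minimal handler sketch (the exact on_attribute argument order is an
assumption):

class SomeHandler
  # Called once per attribute/value pair, returns a Hash by default.
  def on_attribute(name, namespace = nil, value = nil)
    {name => value}
  end

  # Merges the Hashes returned by on_attribute into a single Hash.
  def on_attributes(attributes)
    attributes.inject({}) { |hash, attribute| hash.merge(attribute) }
  end

  # Now actually receives the attributes as a Hash.
  def on_element(namespace, name, attributes = {})
  end
end

Oga.sax_parse_xml(SomeHandler.new, '<root class="foo" />')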
Instead of decoding entities in the lexer we'll do this whenever XML::Text#text
is called. This removes the overhead from the parsing phase and ensures the
process is only triggered when actually needed. Note that calling #to_xml and/or
the #inspect methods on a Text (or parent) instance will also trigger the entity
conversion process.
The new entity decoding API supports both regular entities (e.g. &amp;) as well
as codepoint based entities (both regular and hexadecimal codepoints).
To allow safe read-only access to Text instances from multiple threads a mutex
is used. This mutex ensures that only 1 thread can trigger the conversion
process.
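In practice this looks as follows (a sketch):

document = Oga.parse_xml('<p>foo &amp; bar</p>')
text     = document.children[0].children[0]

text.text   # => "foo & bar", decoded on the first call
text.to_xml # => "foo &amp; bar", encoded again upon serialization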
Fixes #68
This basically re-applies the technique used for HTML <script> tags. With this
extra addition I decided to rename/normalize a few things so it's easier to add
any extra tags in the future. One downside of this setup is that the following
will not be parsed by Oga:
<style>
</script>
</style>
The same applies to script tags containing a literal </style> tag. Since this
particular case is rather unlikely to occur I'm OK with not supporting it as it
_does_ simplify the lexer quite a bit.
Fixes #80
This keeps things consistent with the general testing guidelines in the Ruby
community. This in turn should hopefully make my life easier as I don't have to
tell people to use this rather odd style I was using before.
This changes the behaviour of after_element when parsing documents using the SAX
parsing API. Previously it would always receive a nil argument, which is kinda
pointless. This commit changes that by making sure it receives a namespace name
(if any) and the element name.
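A handler sketch using the new signature:

class SomeHandler
  # Previously the only argument would always be nil.
  def after_element(namespace, name)
    puts "Done processing #{name}"
  end
end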
This fixes #54.
This method can be used to compare two NodeSet instances. By using
XML::NodeSet#equal_nodes?() the need for exposing the "nodes" instance variable
is also removed.
When lexing multi-line strings everything used to work fine as long as the input
was read as a whole. However, when using an IO instance all hell would
break loose. Due to the lexer reading IO instances on a per line basis,
sometimes Ragel would end up setting "ts" to NULL. For example, the following
input would break the lexer:
<foo class="\nbar" />
Due to the input being read per line, the following data would be sent to the
lexer:
<foo class="\n
bar" />
This would result in different (or NULL) pointers being used for building a
string, in turn resulting in memory allocation errors.
To work around this the string lexing setup has been broken into separate
machines for single and double quoted strings. The tokens used have also been
changed so that instead of just "T_STRING" there are now the following tokens:
* T_STRING_SQUOTE
* T_STRING_DQUOTE
* T_STRING_BODY
A string can have multiple T_STRING_BODY tokens (= multi-line strings, only the
case for IO inputs). These strings are stitched back together by the parser.
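For the `<foo class="\nbar" />` example above this results in roughly the
following string tokens (illustrative):

[:T_STRING_DQUOTE, nil, 1]
[:T_STRING_BODY, "\n", 1]
[:T_STRING_BODY, 'bar', 2]
[:T_STRING_DQUOTE, nil, 2]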
This fixes #58.
When lexing XML entities such as &amp; and &lt; these sequences are now
converted into their "actual" forms. In turn, Oga::XML::Text#to_xml ensures they
are encoded when the method is called.
Performance wise this puts some strain on the lexer, as every T_TEXT/T_STRING
token now potentially has to have its content modified. In the benchmark
xml/lexer/string_average_bench.rb the average processing time is now about the
same as before the improvements made in
8db77c0a09. I was hoping that the lexer would
still be a bit faster, but alas this is not the case. Doing this in native code
would be a nightmare as C doesn't have a proper string replacement function. I'm
not old/sadistic enough to write one myself just yet.
This fixes #49.
This was a gimmick in the first place. It doesn't work well with IO instances
(= requires re-reading of the input), the code was too complex and it wasn't
that useful in the end. Let's just get rid of it.
This fixes #53.
Instead of using `namespace.name` let's just use `namespace_name`. This fixes the
problem of serializing attributes where the namespace prefix is "xmlns" as the
namespace for this isn't registered by default.
This fixes #47.
This API is a little bit dodgy (similar to Nokogiri's API) due to the use of
separate parser and handler classes. This is done to ensure that the return
values of callback methods (e.g. on_element) aren't used by Racc for building
AST trees. This also ensures that whatever variables are set by the handler
don't conflict with any variables of the parser.
This fixes #42.
While still a bit cryptic this is probably as good as we can get it. An example:
Oga.parse_xml("<namefoo:bar=\"10\"")
parser.rb:116:in `on_error': Unexpected string on line 1: (Racc::ParseError)
=> 1: <namefoo:bar="10"
This fixes #43.
When an XML element has no child nodes a self-closing tag is used. When parsing
documents/elements in HTML mode this is only done if the element is a
so-called "void element" (e.g. <link> tags).
This fixes #46.
When the default namespace is registered (using xmlns="...") Oga now properly
sets the namespace of the container and all child elements.
This fixes #44.
This was originally reported by @jrochkind and partially patched by @billdueber.
My patches are built upon the latter, but without the need of using Array#map,
Array#join, etc. They also contain a few style changes.
This fixes #32 and #33.
The methods XML::Element#add_attribute and XML::Element#set can be used to more
easily add attributes to elements. The first method simply adds an Attribute
instance and links it to the element. This allows for fine grained control over
what data the attribute should contain. The second method ("set") simply sets an
attribute based on a name and value, optionally creating the attribute if it
doesn't already exist.
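For example, a sketch based on the above description:

element = Oga::XML::Element.new(:name => 'p')

# Fine grained control using an explicit Attribute instance:
attribute = Oga::XML::Attribute.new(:name => 'class', :value => 'greeting')
element.add_attribute(attribute)

# Or simply set an attribute by name, creating it when needed:
element.set('id', 'intro')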
This ensures that Oga can lex the following properly:
<input value="" />
Previously Ragel would stop upon finding the empty string. This was caused due
to the string rules being declared as follows:
string_dquote = (dquote ^dquote+ dquote);
string_squote = (squote ^squote+ squote);
These rules only match strings _with_ content, not without. Since Ragel stops
consuming input the moment it finds unhandled data this resulted in incorrect
tokens being emitted.
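As a result, input containing empty strings can now be lexed without Ragel
bailing out halfway (sketch):

Oga::XML::Lexer.new('<input value="" />', :html => true).lex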
Previously this wouldn't display anything due to the IO object being exhausted.
To fix this the input has to be wound back to the start, which means re-reading
it. Sadly I can't think of a way around this that doesn't require buffering
lines while parsing them (which massively increases memory usage).
When an attribute is prefixed with "xml" the default namespace should be used
automatically. This namespace is not registered on the element level by default
since it's never registered manually; instead it's a "magic" namespace. This
also ensures we match the behaviour of libxml more closely, hopefully reducing
confusion.
The previous setup would consume too much. For example the following HTML:
<a><!--foo--><b><!--bar--></b></a>
would result in the following T_COMMENT token:
"foo--><b><!--bar"
The new setup requires the marking of a start position. I'm not a huge fan of
this but there doesn't appear to be a way around this.
Namespaces aren't scoped per document but instead per element, thus this method
doesn't make that much sense. This also fixes the remaining, failing XPath test.
This separates namespace handling into namespace names and namespace objects.
The namespace objects are retrieved from the element an attribute belongs to.
Once retrieved the namespace is cached, due to the overhead of retrieving
namespaces in large documents.
The old code used for generating Object#inspect values has been ripped out (for
the most part). The result is a non-indented but far more compact #inspect
output. The code for this is also simpler and doesn't break the signature of
Object#inspect.