When lexing multi-line strings everything used to work fine as long as the input
was read as a whole. However, when using an IO instance all hell would break
loose. Because the lexer reads IO instances on a per-line basis, Ragel would
sometimes end up setting "ts" to NULL. For example, the following input would
break the lexer:
<foo class="\nbar" />
Due to the input being read per line, the following data would be sent to the
lexer:
<foo class="\n
bar" />
This would result in different (or NULL) pointers being used for building a
string, in turn resulting in memory allocation errors.
To work around this the string lexing setup has been broken into separate
machines for single- and double-quoted strings. The tokens used have also been
changed so that instead of just "T_STRING" there are now the following tokens:
* T_STRING_SQUOTE
* T_STRING_DQUOTE
* T_STRING_BODY
A string can consist of multiple T_STRING_BODY tokens (for multi-line strings,
which only occur with IO inputs). These bodies are stitched back together by the
parser.
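For the input shown earlier, the IO case roughly produces the following token
stream (values shown for illustration):

    T_STRING_DQUOTE, T_STRING_BODY ("\n"), T_STRING_BODY ("bar"), T_STRING_DQUOTE

The parser then joins the T_STRING_BODY values back into a single string value.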
This fixes #58.
Processing this axis together with a predicate didn't work correctly: even if
the predicate returned false the node would still be matched (which should not
be the case).
Previously input such as "x > y" would result in the following token sequence:
T_IDENT, T_CHILD, T_IDENT
This commit changes this to the following:
T_IDENT, T_SPACE, T_CHILD, T_IDENT
This allows the parser to use T_SPACE as a terminal token, which in turn
prevents around 16 shift/reduce conflicts from arising.
This does mean that input such as " > y" or " x > y" is now invalid. However,
this can be solved by simply _not_ adding leading/trailing whitespace to CSS
queries.
The new setup will not involve a separate transformation stage; instead, the CSS
parser will directly emit an XPath AST. This reduces the overhead of
parsing/evaluating CSS selectors while also simplifying the code. The downside
is that I basically have to rewrite 80% of the parser.
This uses stricter (and more correct) rules in both the lexer and the parser.
The resulting AST has also received a small rework to make it more compact and
less confusing.
This includes support for the crazy 2n+1 syntax you can use with selectors such
as :nth-child().
CSS selectors: doing what XPath already does using an even crazier syntax,
because screw you.
When lexing XML, entities such as &amp; and &lt; are now converted into their
"actual" forms (& and <). In turn, Oga::XML::Text#to_xml ensures they are
encoded again when the method is called.
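For example, assuming a simple text node (the exact keyword arguments are an
assumption on my part):

    text = Oga::XML::Text.new(:text => 'x < y & z')

    text.text   # => "x < y & z"
    text.to_xml # => "x &lt; y &amp; z"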
Performance-wise this puts some strain on the lexer, as every T_TEXT/T_STRING
node now potentially has to have its content modified. In the benchmark
xml/lexer/string_average_bench.rb the average processing time is now about the
same as before the improvements made in
8db77c0a09. I was hoping that the lexer would
still be a bit faster, but alas this is not the case. Doing this in native code
would be a nightmare, as C doesn't have a proper string replacement function.
I'm not old/sadistic enough to write one myself just yet.
This fixes #49.
This was a gimmick in the first place. It doesn't work well with IO instances
(= requires re-reading the input), the code was too complex, and it wasn't that
useful in the end. Let's just get rid of it.
This fixes #53.
Instead of relying on String#count for counting newlines in text nodes, Oga now
does this in C/Java. String#count isn't exactly the fastest way of counting
characters. Performance was measured using
benchmark/xml/lexer/string_average_bench.rb. Before this patch the results were
as follows:
MRI: 0.529s
Rbx: 4.965s
JRuby: 0.622s
After this patch:
MRI: 0.424s
Rbx: 1.942s
JRuby: 0.665s => numbers vary a bit, seem roughly the same as before
The commands used for benchmarking:
$ rake clean # to make sure that C exts aren't shared between MRI/Rbx
$ rake generate
$ rake fixtures
$ ruby benchmark/xml/lexer/string_average_bench.rb
The big difference for Rbx is probably due to the implementation of String#count
not being super fast. Some changes were made to the method
(https://github.com/rubinius/rubinius/pull/3133), but these haven't been
released yet.
JRuby seems to perform in a similar way, so either it was already optimizing
things for me or I suck at writing well performing Java code.
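Expressed in Ruby for illustration, the native version roughly boils down to a
plain byte loop instead of a String#count call:

    text     = "foo\nbar\nbaz"
    newlines = 0

    text.each_byte do |byte|
      # 10 is the byte value of "\n".
      newlines += 1 if byte == 10
    end

    newlines # => 2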
This fixes #51.
This ensures we only call String#downcase if we can't find an all-lowercased
*and* all-uppercased version of the element name. This in turn can save as many
object allocations as there are HTML opening tags.
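A hypothetical sketch of the idea (the names and structure are illustrative,
not the actual Oga internals):

    require 'set'

    # The set contains both the lower- and uppercased versions of the names.
    HTML_ELEMENTS = Set.new(%w[a br div link] + %w[A BR DIV LINK])

    def known_html_element?(name)
      # Only allocate a downcased copy if the name isn't already present in
      # its current (all lower- or all uppercased) form.
      HTML_ELEMENTS.include?(name) || HTML_ELEMENTS.include?(name.downcase)
    end

    known_html_element?('DIV') # => true, without calling String#downcase
    known_html_element?('Div') # => true, falls back to String#downcase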
This fixes #52.
Instead of using `namespace.name` let's just use `namespace_name`. This fixes
the problem of serializing attributes whose namespace prefix is "xmlns", as the
namespace for this isn't registered by default.
This fixes #47.
This API is a little bit dodgy (similar to Nokogiri's API) due to the use of
separate parser and handler classes. This is done to ensure that the return
values of callback methods (e.g. on_element) aren't used by Racc for building
AST trees. This also ensures that whatever variables are set by the handler
don't conflict with any variables of the parser.
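A minimal usage sketch, assuming an entry point along the lines of
Oga.sax_parse_xml and an on_element callback (the exact signatures may differ):

    class ElementNames
      attr_reader :names

      def initialize
        @names = []
      end

      # Whatever this method returns is simply discarded, instead of being
      # used by Racc for building an AST.
      def on_element(namespace, name, attrs = {})
        @names << name
      end
    end

    handler = ElementNames.new

    Oga.sax_parse_xml(handler, '<foo><bar /></foo>')

    handler.names # => ["foo", "bar"]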
This fixes #42.
While still a bit cryptic, this is probably as good as we can get it. An example:
Oga.parse_xml("<namefoo:bar=\"10\"")
parser.rb:116:in `on_error': Unexpected string on line 1: (Racc::ParseError)
=> 1: <namefoo:bar="10"
This fixes #43.
When an XML element has no child nodes a self-closing tag is used. When parsing
documents/elements in HTML mode this is only done if the element is a so-called
"void element" (e.g. <link> tags).
This fixes #46.
When the default namespace is registered (using xmlns="...") Oga now properly
sets the namespace of the container and all child elements.
This fixes #44.
This was originally reported by @jrochkind and partially patched by @billdueber.
My patches are built upon the latter, but without the need to use Array#map,
Array#join, etc. They also contain a few style changes.
This fixes #32 and #33.
The methods XML::Element#add_attribute and XML::Element#set can be used to more
easily add attributes to elements. The first method simply adds an Attribute
instance and links it to the element, allowing fine-grained control over what
data the attribute should contain. The second method ("set") simply sets an
attribute based on a name and value, creating the attribute if it doesn't
already exist.
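A short sketch of both methods (the keyword arguments passed to Attribute are
an assumption on my part):

    element = Oga::XML::Element.new(:name => 'person')

    # Fine grained control: build and link the Attribute instance yourself.
    attribute = Oga::XML::Attribute.new(:name => 'status', :value => 'active')

    element.add_attribute(attribute)

    # Convenience: set an attribute by name and value, creating it if needed.
    element.set('age', '25')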
By using NodeSet#concat we can further reduce the number of object allocations.
This in turn greatly reduces the time it takes to query large documents using
descendant-or-self.
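Roughly the idea, as a sketch (not the actual evaluator code):

    document = Oga.parse_xml('<root><a /><b /></root>')
    result   = Oga::XML::NodeSet.new

    # Appending each child set in one go avoids allocating an intermediate
    # NodeSet for every step of the traversal.
    document.children.each do |node|
      result.concat(node.children)
    end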
Previously this wouldn't display anything due to the IO object being exhausted.
To fix this the input has to be wound back to the start, which means re-reading
it. Sadly I can't think of a way around this that doesn't require buffering
lines while parsing them (which massively increases memory usage).
This ensures the current context node is set correctly when using the "self"
axis inside a path that's inside a predicate, e.g.
foo/bar[baz/. = "something"]
Here the "self" axis should refer to foo/bar/baz, _not_ foo/bar.
The XPath number() function should also be capable of converting booleans to
numbers, something it previously was not able to do. In order to do this
reliably we can't rely on the string() function as this would make it impossible
to distinguish between literal string values and booleans. This is due to
true(), which returns a TrueClass, being converted to the string "true". This
string in turn can't be converted to a float.
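Per the XPath specification this means:

    number(true())  # => 1
    number(false()) # => 0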
When an attribute is prefixed with "xml" the default namespace should be used
automatically. This namespace is not registered on the element level by default,
since it's never registered manually; instead it's a "magic" namespace. This
also ensures we match the behaviour of libxml more closely, hopefully reducing
confusion.
When calling the string() XPath function, floats with zero decimals (10.0, 5.0,
etc.) should result in a string without any decimals. Ruby converts 10.0 to
"10.0" whereas XPath expects "10".
The particular case of string(10) having to return "10" instead of "10.0" will
be handled separately. Returning integers breaks behaviour/expectations
elsewhere.
This reverts commit 431a253000.
This allows filtering of nodes by indexes (e.g. using last() or a literal
number) and uses the correct index range (1 to N in XPath). The function
position() is not yet supported as this requires access to the current node,
which isn't passed down the call stack just yet.
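For example:

    foo/bar[1]      # the first bar node, as XPath indexes start at 1
    foo/bar[last()] # the last bar node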
Namespaces aren't scoped per document but per element, thus this method doesn't
make much sense. This also fixes the remaining failing XPath test.
The method XPath::Evaluator#node_matches? now has a special case to handle
"type-test" nodes. This in turn fixes a bunch of failing tests such as those for
the XPath query "parent::node()".
Unlike what I thought before, syntax such as "node()" is not a function call.
Instead it is a special node test, one that tests the *types* of nodes rather
than their names.
This separates namespace handling into namespace names and namespace objects.
The namespace objects are retrieved from the element an attribute belongs to.
Once retrieved the namespace is cached, due to the overhead of retrieving
namespaces in large documents.
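A hypothetical sketch of the caching (assuming a lookup along the lines of
available_namespaces on the element; not necessarily the exact Oga code):

    def namespace
      @namespace ||= element.available_namespaces[namespace_name]
    end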
The old code used for generating Object#inspect values has been ripped out (for
the most part). The result is a non-indented but far more compact #inspect
output. The code for this is also simpler and doesn't break the signature of
Object#inspect.
Oga won't be handling URIs any time soon. The rationale is that they serve zero
purpose when it comes to just parsing XML. Another goal of Oga is to make it
easy to modify and reserialize documents back to XML. If namespaces also stored
the URIs this would make that process more difficult.