When lexing input in HTML mode, the lexer has to treat _all_ content of a
<script> tag as plain text. This ensures that the lexer can process input such
as "x <y" and "// <foo>" correctly.
Fixes #70.
For JRuby this has little to no benefit as it uses strings for method names.
However, both MRI and Rubinius will perform a Symbol lookup whenever rb_intern()
is called. By doing this once for all callback names and caching the resulting
VALUE objects, the lexer timings can be reduced by about 25%. For the
benchmark benchmark/xml/lexer/string_average_bench.rb this means it runs in
around 500ms instead of 700ms.
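A minimal sketch of the caching pattern, assuming a callback named "on_text"
(the identifier and function names below are illustrative, not Oga's exact
code):

#include <ruby.h>

/* Cached once when the extension loads. */
static ID id_on_text;

void Init_liboga()
{
    /* rb_intern() performs a Symbol table lookup; doing it once at load
     * time means each emitted token no longer pays for that lookup. */
    id_on_text = rb_intern("on_text");
}

static void emit_text(VALUE self, VALUE text)
{
    /* The cached ID is reused for every callback dispatch. */
    rb_funcall(self, id_on_text, 1, text);
}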
When lexing multi-line strings everything used to work fine as long as the
input was read as a whole. However, when using an IO instance all hell would
break loose. Because the lexer reads IO instances on a per-line basis, Ragel
would sometimes end up setting "ts" to NULL. For example, the following input
would break the lexer:
<foo class="\nbar" />
Due to the input being read per line, the following data would be sent to the
lexer:
<foo class="\n
bar" />
This would result in different (or NULL) pointers being used for building a
string, in turn resulting in memory allocation errors.
To work around this the string lexing setup has been broken into separate
machines for single and double quoted strings. The tokens used have also been
changed so that instead of just "T_STRING" there are now the following tokens:
* T_STRING_SQUOTE
* T_STRING_DQUOTE
* T_STRING_BODY
A string can have multiple T_STRING_BODY tokens (i.e. a multi-line string,
which only occurs for IO inputs). These bodies are stitched back together by
the parser.
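For the <foo class="\nbar" /> input above this means the string portion
roughly comes through as the following tokens (illustrative; the exact
payloads may differ):

T_STRING_DQUOTE
T_STRING_BODY "\n"
T_STRING_BODY "bar"
T_STRING_DQUOTE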
This fixes #58.
Instead of storing "act" and "cs" as instance variables, they (along with some
other variables) are now stored in a struct. This struct is attached to a lexer
instance using the (crappy) Data_Get_Struct/Data_Wrap_Struct API.
I haven't bumped into any problems just yet. However, in theory all sorts of
evil could happen here. That's part of the problem with C: so much shit is
undefined behaviour that you can take a single step and fall into 15 holes at
the same time. In theory, because nobody bothered to actually specify it
properly.
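A minimal sketch of this setup (the struct layout and names are my own
assumptions, not Oga's exact code):

#include <ruby.h>
#include <stdlib.h>

typedef struct
{
    int act;
    int cs;
} LexerState;

static void lexer_free(void *ptr)
{
    free(ptr);
}

static VALUE lexer_allocate(VALUE klass)
{
    LexerState *state = calloc(1, sizeof(LexerState));

    /* No mark function is needed as the struct holds no VALUEs. */
    return Data_Wrap_Struct(klass, NULL, lexer_free, state);
}

static VALUE lexer_advance(VALUE self)
{
    LexerState *state;

    Data_Get_Struct(self, LexerState, state);

    /* state->cs and state->act are now available to the Ragel code. */
    return Qnil;
}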
Instead of relying on String#count for counting newlines in text nodes, Oga now
does this in C/Java, as String#count isn't exactly the fastest way of counting
characters. Performance was measured using
benchmark/xml/lexer/string_average_bench.rb. Before this patch the results were
as follows:
MRI: 0.529s
Rbx: 4.965s
JRuby: 0.622s
After this patch:
MRI: 0.424s
Rbx: 1.942s
JRuby: 0.665s => numbers vary a bit, seem roughly the same as before
The commands used for benchmarking:
$ rake clean # to make sure that C exts aren't shared between MRI/Rbx
$ rake generate
$ rake fixtures
$ ruby benchmark/xml/lexer/string_average_bench.rb
The big difference for Rbx is probably due to the implementation of String#count
not being super fast. Some changes were made to the method
(https://github.com/rubinius/rubinius/pull/3133), but these haven't been
released yet.
JRuby seems to perform in a similar way, so either it was already optimizing
things for me or I suck at writing well performing Java code.
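For reference, the C side boils down to a simple byte scan along these lines
(a sketch; the function name is mine, not Oga's):

#include <stddef.h>

static size_t count_newlines(const char *string, size_t length)
{
    size_t count = 0;
    size_t index;

    /* A plain loop over the bytes beats a generic character counting
     * method such as String#count for this particular case. */
    for ( index = 0; index < length; index++ )
    {
        if ( string[index] == '\n' ) count++;
    }

    return count;
}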
This fixes #51.
This ensures that Oga can lex the following properly:
<input value="" />
Previously Ragel would stop upon finding the empty string. This was caused by
the string rules being declared as follows:
string_dquote = (dquote ^dquote+ dquote);
string_squote = (squote ^squote+ squote);
These rules only match strings _with_ content, not without (note the + in the
body). Since Ragel stops consuming input the moment it finds unhandled data,
this resulted in incorrect tokens being emitted. Allowing the body to be empty
(e.g. ^dquote* instead of ^dquote+) takes care of the empty string case.
Thanks to some heavy rubberducking with @whitequark, the lexer is now a little
bit better at lexing T_TEXT nodes. For example, previously the following could
not be lexed properly:
"foo < bar"
There might still be some tweaking to do but we're getting there.
Using create_makefile('liboga/liboga') will compile liboga.so into
path-to-gem/lib/liboga/, which causes the require_relative in oga.rb to fail.
The right argument for create_makefile is therefore 'liboga', which places the
library at path-to-gem/lib/liboga.so.
The previous setup would consume too much. For example the following HTML:
<a><!--foo--><b><!--bar--></b></a>
would result in the following T_COMMENT token:
"foo--><b><!--bar"
The new setup requires the marking of a start position. I'm not a huge fan of
this, but there doesn't appear to be a way around it.
Instead of using a raw Hash Oga now uses the XML::Attribute class for storing
information about element attributes.
Attributes are stored as an Array of XML::Attribute instances. This allows the
attributes to be more easily modified. If they were stored as a Hash you'd not
only have to update the attributes themselves but also the Hash that contains
them.
While using an Array has a slight runtime cost, in most cases the number of
attributes is small enough that this doesn't really pose a problem. If webscale
performance is desired at some point in the future, Oga could most likely cache
the lookup of an attribute. This however is something for the future.
Using IO/StringIO objects one can parse large XML files without first having to
read the entire file into memory. This can potentially save a lot of memory at
the cost of a slightly slower runtime.
For IO-like instances the lexer will consume the input line by line. If a
String is given it's consumed as a whole instead. A small side effect of
reading the input line by line is that text such as "foo\nbar" will be lexed as
two tokens instead of one.
Fixes #19.
Instead of directly accessing the `data` instance variable, the C/Java code now
uses the method `read_data`. This is one of the various steps required to allow
Oga to read data from IO-like instances. It also means I can freely change the
name of the instance variable without also having to change the C/Java code.
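On the C side the difference is roughly the following (a sketch; the function
name is mine):

#include <ruby.h>

static VALUE get_input(VALUE self)
{
    /* Before: tied directly to the instance variable's name. */
    /* return rb_iv_get(self, "@data"); */

    /* After: goes through read_data, so the Ruby side can rename or
     * restructure the variable without touching the C code. */
    return rb_funcall(self, rb_intern("read_data"), 0);
}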
This moves the element-related rules to the element_head machine (where they
belong). This in turn makes it possible to lex ">" as a text node, which
previously was impossible.
After discussing this with @headius, I've decided to do this the manual way
anyway. Apparently the basic load service stuff is deprecated and not very
reliable.