Instead of using instance variables for ts, te, etc. we'll use local variables.
Grand wizard overlord @whitequark suggested that this would be quite a bit
faster, which turns out to be true. For example, the big XML lexer benchmark
would, prior to this commit, complete in about 9 - 9.3 seconds. With this
commit that hovers around 8.5 seconds.
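Roughly the idea is as follows (the names below are made up for illustration,
the actual change lives in the Ragel-generated lexer code):

class SlowCounter
  # Every read and write of @te goes through the instance variable table.
  def count(data)
    @te = 0
    data.each_byte { @te += 1 }
    @te
  end
end

class FastCounter
  # Locals are resolved at compile time, so the hot loop does less work.
  def count(data)
    te = 0
    data.each_byte { te += 1 }
    te
  end
end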
This adds the ability to more easily act upon specific node types and nestings
when using the pull parsing API.
A basic example of this API looks like the following (only including relevant
code):
parser.parse do |node|
  parser.on(:element, %w{people person}) do
    people << {:name => nil, :age => nil}
  end

  parser.on(:text, %w{people person name}) do
    people.last[:name] = node.text
  end

  parser.on(:text, %w{people person age}) do
    people.last[:age] = node.text.to_i
  end
end
This fixes #6.
When using Git to build the gemspec's file list the resulting Gem will contain
far too many useless files. For example, the profile/ and spec/ directories are
not needed when building Gems.
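As a sketch, the file list could simply filter those directories out (the name
and exact patterns below are illustrative, not the actual gemspec):

Gem::Specification.new do |spec|
  spec.name    = 'example'
  spec.version = '0.0.1'
  spec.summary = 'Example'
  spec.authors = ['Example']

  # Only ship the files that are actually needed at runtime.
  spec.files = `git ls-files`.split("\n").reject do |path|
    path.start_with?('profile/', 'spec/')
  end
end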
This makes it a bit easier to profile the memory usage of certain components
and plot the results using Gnuplot. In the past I would write one-off scripts
for this and throw them away, only to figure out I needed them again later on.
Profiling samples are written to profile/samples and can be plotted using
corresponding Gnuplot scripts found in profile/plot. The latter requires
Gnuplot to be installed.
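A sampling helper along these lines (hypothetical, the path and names are
illustrative and assume profile/samples exists) writes one RSS measurement per
line, which a Gnuplot script can then draw with something like
"plot 'lexer.dat' with lines":

# Hypothetical memory sampler: records the resident set size (in KB) of the
# current process after every iteration of the profiled block.
def profile_memory(path, iterations = 100)
  File.open(path, 'w') do |file|
    iterations.times do
      yield
      file.puts(`ps -o rss= -p #{Process.pid}`.strip)
    end
  end
end

profile_memory('profile/samples/lexer.dat', 10) do
  # code to profile goes here
end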
Tracking the names of nested elements makes it a lot easier to do contextual
pull parsing. Without this it's impossible to know what context the parser is
in at a given moment.
For memory reasons the parser currently only tracks the element names. In the
future it might also track extra information to make parsing easier.
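The underlying idea is nothing more than a stack of element names. A
stand-alone sketch (the method names here are illustrative, not the parser's
actual API):

# Keep a stack of element names so the current nesting is always available,
# e.g. %w{people person name} while inside <people><person><name>.
class NestingTracker
  def initialize
    @nesting = []
  end

  def on_element_start(name)
    @nesting.push(name)
  end

  def on_element_end
    @nesting.pop
  end

  def current_nesting
    @nesting.dup
  end
end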
This parser extends the regular DOM parser but delegates certain nodes to a
block instead of building a DOM tree.
The API is a bit raw in its current form but I'll extend it and make it a bit
more user-friendly in the following commits. In particular I want to make it
easier to figure out if a certain node is nested inside another node.
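Roughly the delegation looks like this (a heavily simplified sketch; the real
parser is generated by Racc and the names below are illustrative):

Element = Struct.new(:name)

class Parser
  # The regular parser builds and returns DOM nodes from its callbacks.
  def on_element(name)
    Element.new(name)
  end
end

class PullParser < Parser
  def parse(&block)
    @block = block
    # ... run the underlying parser, which invokes the callbacks below ...
  end

  # Hand each node to the block instead of keeping it in a DOM tree.
  def on_element(name)
    @block.call(super)

    nil
  end
end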
The AST layer is being removed because it doesn't really serve a useful
purpose. In particular when creating a streaming parser the AST nodes would
only introduce extra overhead.
As a result the parser will emit a DOM tree directly instead of first emitting
an AST.
To make running benchmarks easier we'll track the XML file in Git in its
compressed form. I also decreased the size of the XML file from ~50 MB to
~10 MB.
Profiling showed that calls to methods defined using `define_method` are
really, really slow. Before this commit the lexer would process 3000-4000
lines per second. With this commit that has been increased to around 10 000
lines per second.
Thanks to @headius for mentioning the (potential) overhead of define_method.
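To illustrate the difference (the names are made up, the actual change is in
the lexer's callback methods):

class SlowCallbacks
  # Callbacks generated with define_method, which showed up heavily in profiles.
  [:on_text, :on_element].each do |name|
    define_method(name) do |value|
      [name, value]
    end
  end
end

class FastCallbacks
  # The same callbacks written out with plain `def`, avoiding the define_method
  # dispatch overhead in the hot path.
  def on_text(value)
    [:on_text, value]
  end

  def on_element(value)
    [:on_element, value]
  end
end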
Instead of lexing the input as a raw String or as a set of codepoints it's
treated as a sequence of bytes. This removes the need for String#[] (replaced
by String#byteslice), which in turn reduces the amount of memory needed and
speeds up lexing.
Thanks to @headius and @apeiros for suggesting this and rubber ducking along!
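A small illustration of the difference (the offsets here are made up; in the
lexer they come from Ragel's ts/te):

input = "<name>Alice</name>"

ts = 6
te = 11

# Character based: String#[] works with character offsets, which requires extra
# bookkeeping (and extra objects) for multibyte input.
value_chars = input[ts...te] # => "Alice"

# Byte based: String#byteslice works directly with the byte offsets that Ragel
# produces, without unpacking the input into codepoints first.
value_bytes = input.byteslice(ts, te - ts) # => "Alice"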
After some digging I found out that Racc has a method called `yyparse`. Using
this method (and a custom callback method) you can `yield` tokens as a form of
input. This makes it a lot easier to feed tokens as a stream from the lexer.
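The relevant part looks roughly like this (a sketch only: the parser class is
generated by Racc from a grammar that is omitted here, and the lexer's advance
method is assumed to yield one token at a time):

require 'racc/parser'

# Racc's yyparse calls the given method on the given receiver and expects it to
# yield [TOKEN, VALUE] pairs, so tokens can be streamed straight from the lexer
# instead of being buffered up front.
class StreamingParser < Racc::Parser
  def initialize(lexer)
    @lexer = lexer
  end

  def parse
    yyparse(self, :yield_next_token)
  end

  private

  def yield_next_token
    @lexer.advance do |type, value|
      yield [type, value]
    end

    yield [false, false] # signals the end of the input to Racc
  end
end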
Sadly the current performance of the lexer is still total garbage. Most of the
memory usage also comes from using String#unpack, especially on large XML
inputs (e.g. 100 MB of XML). It looks like the resulting memory usage is about
10x the input size.
One option might be some kind of wrapper around String. This wrapper would have
a sliding window of, say, 1024 bytes. When you create it the first 1024 bytes
of the input would be unpacked. When seeking through the input this window
would move forward.
In theory this means that you'd only end up with 1024 Fixnum instances around
at any given time instead of "a very big number". I have to test how efficient
this is in practice.
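A rough sketch of that wrapper, exactly as described above (untested idea, not
something that's in the code yet):

# Unpack the input 1024 bytes at a time instead of all at once, so only a
# window's worth of Fixnum instances is live at any given moment.
class ByteWindow
  SIZE = 1024

  def initialize(string)
    @string = string
    @offset = 0
    @window = @string.byteslice(0, SIZE).unpack('C*')
  end

  # Returns the byte at the given absolute position, sliding the window forward
  # when the position falls outside of it.
  def [](position)
    unless position.between?(@offset, @offset + @window.length - 1)
      @offset = position
      @window = @string.byteslice(position, SIZE).unpack('C*')
    end

    @window[position - @offset]
  end
end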
The offending lines of code displayed in the error message are truncated to 80
characters. This should make reading the error messages less of a pain when
dealing with very long lines of HTML/XML.
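The truncation itself is nothing fancier than something along these lines
(illustrative, not the exact code):

# Cap the reported source line at 80 characters so error messages for very
# long (e.g. minified) lines stay readable.
def truncate_line(line, maximum = 80)
  line.length > maximum ? line[0, maximum] : line
end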