Profiling showed that calls to methods defined using `define_method` are
really, really slow. Before this commit the lexer would process 3000-4000
lines per second. With this commit that has been increased to around 10,000
lines per second.
Thanks to @headius for mentioning the (potential) overhead of define_method.
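A contrived illustration of the overhead (not the actual lexer code; the
class and method names are made up):

    require 'benchmark'

    class Example
      define_method(:defined_dynamically) { |x| x + 1 }

      def defined_statically(x)
        x + 1
      end
    end

    example = Example.new

    Benchmark.bm(13) do |bench|
      bench.report('define_method') do
        1_000_000.times { example.defined_dynamically(1) }
      end

      bench.report('def') do
        1_000_000.times { example.defined_statically(1) }
      end
    end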
Instead of lexing the input as a raw String or as a set of codepoints, it's
treated as a sequence of bytes. This removes the need for String#[] (replaced
by String#byteslice), which in turn reduces the amount of memory needed and
speeds up lexing.
Thanks to @headius and @apeiros for suggesting this and rubber ducking along!
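For illustration (simplified, not the actual lexer code), the difference
boils down to operating on raw byte offsets:

    input = '<root>foo</root>'

    input[0]              # => "<", character indexing via String#[]
    input.byteslice(0, 1) # => "<", raw byte offset via String#byteslice
    input.getbyte(0)      # => 60, the byte itself, no String allocated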
After some digging I found out that Racc has a method called `yyparse`. Using
this method (and a custom callback method) you can `yield` tokens as a form of
input. This makes it a lot easier to feed tokens as a stream from the lexer.
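A minimal sketch of this setup, assuming a Racc-generated parser; the lexer
API and method names here are illustrative:

    require 'racc/parser'

    class Parser < Racc::Parser
      def parse(lexer)
        @lexer = lexer

        # yyparse() calls each_token() and consumes every [type, value]
        # pair it yields, instead of repeatedly calling next_token().
        yyparse(self, :each_token)
      end

      def each_token
        while token = @lexer.advance
          yield token # e.g. [:T_ELEM_START, 'p']
        end

        yield [false, false] # signals the end of the input
      end
    end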
Sadly the current performance of the lexer is still total garbage. Most of the
memory usage also comes from using String#unpack, especially on large XML
inputs (e.g. 100 MB of XML). It looks like the resulting memory usage is about
10x the input size.
One option might be some kind of wrapper around String. This wrapper would have
a sliding window of, say, 1024 bytes. When you create it the first 1024 bytes
of the input would be unpacked. When seeking through the input this window
would move forward.
In theory this means that you'd only end up with 1024 Fixnum instances around
at any given time instead of "a very big number". I have to test how efficient
this is in practice.
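A minimal sketch of this idea, assuming a window of 1024 bytes; the class
name and API are hypothetical:

    class SlidingWindow
      WINDOW_SIZE = 1024

      def initialize(input)
        @input  = input
        @offset = 0
        @window = @input.byteslice(0, WINDOW_SIZE).unpack('C*')
      end

      # Returns the byte at the given absolute position, sliding the
      # window forward whenever the position falls outside of it.
      def [](position)
        unless position >= @offset && position < @offset + WINDOW_SIZE
          @offset = position
          @window = @input.byteslice(@offset, WINDOW_SIZE).unpack('C*')
        end

        @window[position - @offset]
      end
    end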
The offending lines of code displayed in the error message are truncated to 80
characters. This should make reading the error messages less of a pain when
dealing with very long lines of HTML/XML.
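A minimal sketch of the truncation; the method name is illustrative:

    def truncate_line(line, max = 80)
      line.length <= max ? line : line[0, max]
    end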
Instead of returning the tokens as a whole they are now streamed using
XML::Lexer#advance. This method returns the next token upon every call. It uses
a small buffer in case a particular block of text results in multiple tokens.
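Hypothetical usage of the streaming API (XML::Lexer#advance comes from the
text above, the token layout is an assumption):

    lexer = XML::Lexer.new('<p>Hello</p>')

    while token = lexer.advance
      type, value = token
      # handle one token at a time instead of materializing all of them
    end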
T_ELEM_OPEN has been renamed to T_ELEM_START, T_ELEM_CLOSE has been renamed to
T_ELEM_END. This keeps the token names consistent with the other ones (e.g.
T_COMMENT_START).
The latter returns an Enumerable which, on Ruby 1.9.3, doesn't have #length
available. Besides this, it's better to just return an Array since we'll
iterate over every character anyway.
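For illustration (which exact methods were involved is an assumption based
on the description above):

    'foo'.each_char      # => #<Enumerator ...>, no #length on Ruby 1.9.3
    'foo'.each_char.to_a # => ["f", "o", "o"], an Array with #length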
Grand wizard overlord @whitequark recommended this as it bypasses the need to
create an individual String instance for every character (at least until
needed). This becomes noticeable on large inputs (e.g. 100 MB of XML).
Previously these would result in the kernel OOM killing the process. Using
codepoints, memory usage increases by a "mere" 1-1.5 GB.
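For illustration of why codepoints are cheaper than per-character Strings:

    'ab'.chars.to_a      # => ["a", "b"], one String instance per character
    'ab'.codepoints.to_a # => [97, 98], Fixnums only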
This was a rather interesting turn of events. As it turned out, the
Ragel-generated lexer was extremely slow on large inputs. For example, lexing
benchmark/fixtures/hrs.html took around 10 seconds according to the benchmark
benchmark/lexer/bench_html_time.rb:
    Rehearsal --------------------------------------------------------
    lex HTML  10.870000   0.000000  10.870000 ( 10.877920)
    ---------------------------------------------- total: 10.870000sec

                   user     system      total        real
    lex HTML  10.440000   0.010000  10.450000 ( 10.449500)
The corresponding benchmark-ips benchmark (bench_html.rb) presented the
following results:
    Calculating -------------------------------------
                lex HTML         1 i/100ms
    -------------------------------------------------
                lex HTML        0.1 (±0.0%) i/s - 1 in 10.472534s
10 seconds for around 165 KB of HTML was not acceptable. I spent a good
amount of time profiling, even submitting a patch to Ragel
(https://github.com/athurston/ragel/pull/1). At some point I decided to give a
pure C lexer + FFI bindings a try (so it would also work on JRuby). Trying to
write C reminded me why I didn't want to do it in C in the first place.
Around 2AM I gave up and went to brush my teeth and head to bed. Then, a
miracle happened. More precisely, I actually gave my brain some time to think
away from the computer. I said to myself:
    What if I feed Ragel an Array of characters instead of an entire
    String? That way I bypass String#[] being expensive without having to
    change all of Ragel or use a different language.
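The change itself is small. A sketch of the idea (the surrounding Ragel
boilerplate is omitted and the names are illustrative):

    def lex(input)
      data = input.chars.to_a # one Array element per character
      p    = 0
      pe   = data.length

      # Ragel's generated code indexes data[p] from here on: a cheap
      # Array lookup instead of String#[] creating a new String each time.
    end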
The results of this change are rather interesting. With these changes the
benchmark bench_html_time.rb now gives back the following:
    Rehearsal --------------------------------------------------------
    lex HTML   0.550000   0.000000   0.550000 (  0.550649)
    ----------------------------------------------- total: 0.550000sec

                   user     system      total        real
    lex HTML   0.520000   0.000000   0.520000 (  0.520713)
The benchmark bench_html.rb in turn gives back this:
    Calculating -------------------------------------
                lex HTML         1 i/100ms
    -------------------------------------------------
                lex HTML        2.0 (±0.0%) i/s - 10 in 5.120905s
According to both benchmarks we now have a speedup of about 20 times without
having to make any further changes to Ragel or the lexer itself.
I love it when a plan comes together.
Instead of appending single characters to a String buffer, the lexer now
tracks a start and end position and slices the buffer out of the input when
needed. This is a lot faster than constantly appending to a String.
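A minimal sketch of the technique, assuming @data holds the input; the
method names are illustrative:

    def start_buffer(position)
      @buffer_start = position
    end

    # Slicing the input once when the token is emitted is much cheaper
    # than appending every character to a String while scanning.
    def emit_buffer(position, type)
      add_token(type, @data[@buffer_start...position])

      @buffer_start = nil
    end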
The error messages of the parser now contain surrounding lines of code instead
of only the offending line of code. This should make debugging a bit easier.
Line numbers are also shown for each line.
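A hypothetical sketch of building such a message; the window size and
formatting are assumptions:

    def error_context(lines, number, window = 3)
      start = [number - window, 0].max
      stop  = [number + window, lines.length - 1].min

      (start..stop).map do |index|
        "#{index + 1}: #{lines[index][0, 80]}"
      end.join("\n")
    end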