There are still some untested codepaths here, but this does seem to
work for nontrivial objects, so, woohoo. It's worth noting that this
is a recursive implementation (which seems silly after I hand-rolled
the non-recursive main parser). The thinking is that if your data is
nested deeply enough to exhaust the stack here, you probably shouldn't
be converting it directly to an object in the first place.
I may revisit this, though I am still not 100% certain how
straightforward it would be to make this nonrecursive with all the
weird comptime objects. Basically the "parse stack" would have to be
created at comptime.
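
To make the shape of the recursion concrete, here is a minimal sketch of
how a comptime-driven conversion tends to look. The Value union is a
hypothetical stand-in for the real document tree, the error names are
made up, and the exact @typeInfo field spellings vary a bit between Zig
versions; the point is just that converting a nested struct field
re-enters the same function, so recursion depth follows the nesting
depth of the target type:

```zig
const std = @import("std");

// Hypothetical parsed-value type standing in for the real document tree.
const Value = union(enum) {
    scalar: []const u8,
    map: std.StringHashMapUnmanaged(Value),
};

fn convert(comptime T: type, value: Value) !T {
    switch (@typeInfo(T)) {
        .Struct => |info| {
            if (value != .map) return error.TypeMismatch;
            var result: T = undefined;
            inline for (info.fields) |field| {
                const child = value.map.get(field.name) orelse return error.MissingField;
                // The recursive call: each level of nesting in T costs a stack frame.
                @field(result, field.name) = try convert(field.type, child);
            }
            return result;
        },
        .Int => {
            if (value != .scalar) return error.TypeMismatch;
            return std.fmt.parseInt(T, value.scalar, 0);
        },
        else => return error.UnsupportedType,
    }
}
```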
In practice, there are probably still things I missed here, and I
should audit this to make sure there aren't any egregious copy-paste
errors remaining. Also, it's pretty likely that the diagnostics
line_offset field isn't correct in most of these messages; more work
will be needed to keep it accurate.
The errors in the line buffer and tokenizer now have diagnostics. The
line number is trivial to keep track of due to the line buffer, but
the column index requires quite a bit of juggling, as we pass
successively trimmed-down buffers to the internals of the parser.
There will probably be some column index counting problems in the
future. Also, handling the diagnostics is a bit awkward, since it's a
mandatory out-parameter of the parse functions now. The user must
provide a valid diagnostics object that survives for the life of the
parser.
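
To make that contract concrete, here is a sketch of the usage shape,
with entirely made-up names (Diagnostics, Parser, and their fields are
illustrative, not the real API). The caller owns the diagnostics value,
hands the parser a pointer to it, and has to keep it alive for as long
as the parser might write into it:

```zig
const std = @import("std");

// Hypothetical stand-ins for the real types.
const Diagnostics = struct {
    line: usize = 0,
    column: usize = 0,
    message: []const u8 = "",
};

const Parser = struct {
    diagnostics: *Diagnostics,

    fn parse(self: *Parser, source: []const u8) !void {
        _ = source;
        // On failure, location info is recorded through the out-parameter.
        self.diagnostics.* = .{ .line = 3, .column = 7, .message = "unexpected character" };
        return error.ParseError;
    }
};

pub fn main() !void {
    var diagnostics: Diagnostics = .{};
    var parser = Parser{ .diagnostics = &diagnostics };
    parser.parse("key: [1, 2") catch |err| {
        std.debug.print("{s}: {s} (line {d}, column {d})\n", .{
            @errorName(err), diagnostics.message, diagnostics.line, diagnostics.column,
        });
    };
}
```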
Finally, the flow parser has been "integrated" with the main parser, in
the sense that they now share a stack. The bigger thing is that the parsing has
been decoupled from the tokenization, which will allow parsing
documents without loading them fully into memory first.
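
A sketch of the kind of usage that opens up, again under assumed names
(feed and finish are hypothetical, not the real interface): the caller
pushes the document through in fixed-size chunks, so the whole input
never has to be resident in memory at once.

```zig
fn parseFromReader(parser: anytype, reader: anytype) !void {
    var buffer: [4096]u8 = undefined;
    while (true) {
        const len = try reader.read(&buffer);
        if (len == 0) break;
        // Each chunk is tokenized and parsed incrementally as it arrives.
        try parser.feed(buffer[0..len]);
    }
    // Flush any partial line still sitting in the line buffer.
    try parser.finish();
}
```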
I've been calling this the streaming parser, but it's worth noting that
I am referring to streaming input, not streaming output. It would
certainly be possible to do streaming output, but I am not interested
in that at the moment (it would be the lowest-memory-overhead
approach, but it's a lot of work for little gain, and it is less
flexible for converting input to objects).