config: use std.StringArrayHashMap for the map type
As I was thinking about this, I realized that data serialization is much more of a bear than deserialization. Or, more accurately, making stable round-trip serialization a goal puts heavier demands on deserialization, including preserving input order. I think there may be a mountain hiding under this molehill, though, because the goals of having a format that is designed to be handwritten and also machine written are at odds with each other.

Right now, the parser does not preserve comments at all. But even if we did (they could easily become a special type of string), comment indentation is ignored. Comments are not directly a child of any other part of the document; they're awkward text that exists interspersed throughout it. With the current design, there are some essentially unsolvable problems, like comments interspersed throughout multiline strings. The string is processed into a single object in the output, so there can't be weird magic data interleaved with it, because it loses the concept of being interleaved entirely (this is a bigger issue for space strings, which don't even preserve a unique way to be reserialized; line strings at least contain a character, the newline, that can appear nowhere else but at a break in the string). Obviously this isn't technically impossible, but it would require a change to the way that values are modeled.

And even if we did take the approach of associating a comment with, say, the value that follows it (which I think is a reasonable thing to do, ignoring the interleaved comment situation described above), if software reads in data, changes it, and writes it back out, how do we account for deleted items? Does the comment get deleted with the item? Does it become a dangling comment that just gets shoved somewhere in the document? How are comments that come after everything else in the document handled? From a pure data perspective, it's fairly obvious why JSON omits comments: they're trivial to parse, but there's no strategy for emitting them that will always be correct, especially in a format that doesn't give a hoot about linebreaks. It may be interesting to look at fancy TOML (barf) parsers to see how they handle comments, though I assume the general technique is to store their row position in the original document and track when a line is added or removed.

Ultimately, I think the use case of a format to be written by humans and read by computers is still useful. That's my intended use case for this and why I started it, but its application as a configuration file format is probably hamstrung muchly by software not being able to write it back. On the other hand, there's a lot of successful software I use where the config files are not written directly by the software at all, so maybe it's entirely fine to declare this out of scope and not worry about it further. At the very least it's almost certainly less of an issue than erroring on carriage returns, or than the fact that certain keys are simply unrepresentable.

As a side note, I guess what they say about commit message length being inversely proportional to the change length is true. Hope you enjoyed the blog over this 5 character change.
commit a9d179acc1
parent a0107ab9fd
@@ -430,7 +430,7 @@ pub fn LineTokenizer(comptime Buffer: type) type {
 
 pub const Value = union(enum) {
     pub const String = std.ArrayList(u8);
-    pub const Map = std.StringHashMap(Value);
+    pub const Map = std.StringArrayHashMap(Value);
     pub const List = std.ArrayList(Value);
 
     pub const TagType = @typeInfo(Value).Union.tag_type.?;
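For illustration only (not part of the diff): a minimal sketch of what the swap buys, assuming a Zig std version with the managed std.StringArrayHashMap wrapper; the key names are made up. An array hash map stores its entries in a contiguous array, so iteration follows insertion order, which is exactly what a round-tripping serializer needs to preserve document order. A plain std.StringHashMap makes no ordering guarantee at all.

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Hypothetical document keys, inserted in the order they would
    // appear in the parsed input.
    var map = std.StringArrayHashMap(u32).init(allocator);
    defer map.deinit();

    try map.put("first", 1);
    try map.put("second", 2);
    try map.put("third", 3);

    // keys() returns a slice in insertion order, so an emitter can walk
    // it and write entries back out in the order they were read.
    for (map.keys()) |key| {
        std.debug.print("{s}\n", .{key});
    }
    // The same loop over a std.StringHashMap iterator could visit the
    // keys in any order, which breaks stable round-tripping.
}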