Glad that the author was able to get increased performance by batching, but I can guarantee you the author didn’t discover a log(n) way to parse JSON. JSON is at least context-free, depending on whether you consider unique keys a requirement, and any parser must at least read every byte of input, so parsing has a linear lower bound — certainly not log(n).
Author here. It’s not the algorithm itself that is log(n); it’s the way the validation behaves (strings are parsed to JSON beforehand): it looks similar to log(n). The more objects it validates in the same pass, the less time it spends per object in that pass.
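To illustrate the point (this is a toy cost model, not the author’s actual code, and the overhead/work parameters are made up): if each validation pass has a fixed setup cost that gets amortized over the batch, the per-object cost falls as the batch grows, which can look log(n)-like, even though total work stays linear.

```python
def cost_per_object(n, per_pass_overhead=100.0, per_object_work=1.0):
    """Amortized cost of validating one object in a batch of n.

    per_pass_overhead: hypothetical fixed cost paid once per pass
    per_object_work:   hypothetical cost paid for every object
    """
    return per_pass_overhead / n + per_object_work

# Amortized per-object cost shrinks toward the per-object floor
# as the batch grows, but total cost (n * cost_per_object(n)) is
# still linear in n plus a constant.
costs = [cost_per_object(n) for n in (1, 10, 100, 1000)]
assert costs == sorted(costs, reverse=True)
```

So both comments can be right at once: per-object time in a pass decreases with batch size, while the overall parse remains at least linear.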