1. 42
  1.  

  2. 5

    I don’t have much experience in this area, but I’d be interested in an overview of how different pieces of software handle the concurrent / multiplayer editing problem, like:

    So is the gist of it that OT relies on central servers and they all use OT rather than CRDT? That was not entirely clear to me.

    Looking at the xi analysis, it sounds like “IME” is specific to a desktop application using X (?), so it doesn’t apply to any of this web software.

    The rest of them seem like they do apply?

    Is the problem that you have to pay a “CRDT tax” for every piece of state in the application? I thought the same was true of OT. Don’t you have to express every piece of state within those constraints too?

    repl.it doesn’t seem to have a problem with syntax highlighting or brace matching (try it, it’s pretty slick). So is it just that they paid that tax with a lot of code or is there something else fundamentally different about xi vs. repl.it? Or maybe xi is going for a lot lower latency than web-based editors?

    recent thread about OT vs. CRDT that might be interesting: https://news.ycombinator.com/item?id=18191867

    (copy of HN comment)

    1. 6

      I responded on HN also, but the short answer is that you care about Input Method Editing in some form any time you’re not dealing with English.

      With brace matching, there’s no real reason for it to be slow, especially if you limit yourself to small documents. It’s much simpler to do it synchronously rather than deal with the potential for edits to conflict.

      1. 3

        OK, thanks for the clarification!

        On the second point, you’re saying that you can “cheat” for brace matching and do it synchronously instead of asynchronously? I guess you can cheat for any feature, but it increases latency and hurts user experience?

        1. 7

          I’m not sure I would consider it cheating, in fact this is a complex tradeoff.

          I would summarize the lesson learned as: asynchronous might be worth it for things that are actually slow. Otherwise, probably not. Brace matching is not an inherently slow task, though to do it precisely depends on having an accurate parser for at least a subset of the language. It’s also unbounded, which was my motivation for making it async. If you edit a 100MB file with brackets around the contents and need to find the match, that might take a while.
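          As a concrete illustration (a hypothetical sketch in Rust, not xi’s actual code), the synchronous version is just a forward scan with a depth counter, and its cost is exactly the distance to the match:

          ```rust
          /// Find the '}' matching the '{' at `open_idx` by a forward scan with
          /// a depth counter. Cost is linear in the distance to the match, which
          /// is unbounded: on a 100MB file the match may be megabytes away.
          fn find_matching_brace(text: &str, open_idx: usize) -> Option<usize> {
              let bytes = text.as_bytes();
              assert_eq!(bytes[open_idx], b'{');
              let mut depth = 0usize;
              for (i, &b) in bytes.iter().enumerate().skip(open_idx) {
                  match b {
                      b'{' => depth += 1,
                      b'}' => {
                          depth -= 1;
                          if depth == 0 {
                              return Some(i);
                          }
                      }
                      _ => {}
                  }
              }
              None // unbalanced: no matching brace in the rest of the buffer
          }
          ```

          (This is also imprecise in exactly the way mentioned above: without a real parser it will happily match a brace inside a string or comment.)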

          Syntax highlighting is somewhere in the middle. I think regex-based highlighting is somewhere in the ballpark of a couple thousand lines per second. This means, on a largish document, if you synchronously wait for re-highlighting, you will be facing noticeable lag. I consider that a problem that needs to be solved. But regex-based is almost certainly not the global best; I think parser combinator based approaches are promising and there’s an experiment on that in the xi-editor tree. But the advantage of regex-based is that there’s a huge library of syntax definitions for hundreds of different languages.

          Now, something that is actually slow is semantic analysis, like the various Language Server implementations. Those can and should be made to go as fast as possible (the Roslyn work is excellent), but at the end of the day you can’t guarantee that can be made to keep up with typing speed. So it is an absolute requirement for a code editor that it be able to handle language server annotations asynchronously, ideally without having artifacts like showing the annotation on the wrong span of text if it is being concurrently edited.
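          To make the “wrong span” problem concrete, here’s a hypothetical sketch (invented names, not xi’s actual code) of the basic fix: when an annotation computed against an older buffer state arrives, transform its span through the edits that happened in the meantime before displaying it:

          ```rust
          /// A span of text an async result (say, a language-server diagnostic)
          /// refers to, in byte offsets of the buffer it was computed against.
          #[derive(Debug, PartialEq)]
          struct Span { start: usize, end: usize }

          /// An insertion of `len` bytes at offset `at` (deletion is analogous).
          struct Insertion { at: usize, len: usize }

          /// Shift the span past an edit made after the annotation was computed,
          /// so it still points at the same text instead of the wrong bytes.
          fn transform(span: Span, edit: &Insertion) -> Span {
              let shift = |pos: usize| if pos >= edit.at { pos + edit.len } else { pos };
              Span { start: shift(span.start), end: shift(span.end) }
          }
          ```

          Apply that for each edit since the revision the annotation was computed against, and stale annotations land on the right text instead of drifting.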

          Hope that helps.

          1. 5

            Yes thanks, that framing helps a lot! I can see that an editor performs many computations on your constantly updated buffer, and some of them are a lot more expensive than others.

            Is it correct to say that “synchronous” in this context means: do the whole computation on a single editor buffer state, disregarding any updates you received while the computation is being done? After you do the computation, you can update the screen with text changes, but then that leads to user-perceived lag. And then you potentially need to compute the whole thing synchronously all over again?

            So in the brace match case, once you type {, you can’t get any updates until you’ve potentially parsed the whole file and found the ending }?

            On the other hand, asynchronous means that if you get an update while doing the computation, you can somehow take it into account? I am not clear on how that happens. I guess that is why you have to rewrite every single computation to work on the CRDT, because you don’t want to redo the entire thing from scratch if you receive an update in the middle, and you don’t want to cause lag?


            Though I’m still confused by the relationship between these two issues:

            1. Asynchronous vs. synchronous (if I have the definitions above right)

            2. Whole-file batch computation vs. incremental recomputation (of, say, syntax highlighting)

            I guess the question is: can’t you synchronously wait for an incremental computation rather than a whole-file computation?

            For example, I agree regex-based highlighting isn’t ideal. Have you seen Treesitter? It’s based on GLR parsing and does incremental syntax highlighting. Github is using it in the Atom editor and for a few server side things.

            This video is very well motivated:

            “Tree-sitter - a new parsing system for programming tools” by Max Brunsfeld

            One of the complaints, which I agree with, is that lexical syntax highlighting is not all that useful. For example, it would be nice if types and variables were consistently highlighted, but that requires parsing, not just lexing. He gives some good examples.

            So even though Treesitter uses a more powerful model (GLR, along with some “lexer hacks”), it can still do things incrementally – i.e. not re-highlighting the entire file when you insert a few characters. I don’t recall exactly how the algorithm works at the moment, but it’s well explained in the video. He gives a citation to someone’s Ph.D. thesis.

            Some more comments on treesitter [1].

            Basically, doesn’t having an incremental algorithm for syntax highlighting “offload” some of the work so you don’t have to do it on the CRDT to be fast? Or am I misunderstanding?

            It’s not exactly clear to me how you would do incremental brace matching, but it seems like Treesitter solves a “superset” of that problem.


            [1] I’m pretty impressed with Treesitter because it sounds like they have worked through many of the problems you hit in “real” languages. A lot of grammar-based approaches work in theory but not in practice, but it sounds like they did a lot of work to handle the real world situations.

            I have written a bunch on my blog about the limitations of grammar-based approaches, many of them linked from “How to Parse Shell Like a Programming Language”.

            [2] Other speculation: In the limit, I wonder if you could use something like the Skip language to, say, write a semantic analyzer?

            http://skiplang.com/

            It’s basically a programming language that enforces that every algorithm you write can be made incremental. As I understand it, while most of these kinds of things are limited functional languages, Skip goes to some lengths to give you an imperative style of programming, so you can express more things naturally. I haven’t used it but I peeked at the source code.

            1. 7

              Lemme try to answer; there isn’t really just one question in there.

              Yes, of course we’ve seen Treesitter. It’s interesting work for sure. There are other interesting incremental computing frameworks too, like Salsa from Rust-land.

              Incremental and asynchronous are two complementary approaches to the same problem of reducing latency. The syntax highlighting in xi-editor (see rope science 11) is both. To say it briefly, an incremental approach reduces the amount of work by avoiding recomputation of already known quantities, while async by itself doesn’t reduce work, but lets you get back to typing or scrolling or whatever before the work is actually done.
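              A hypothetical sketch of the incremental part (invented names, not the actual xi code; see rope science 11 for that): cache the lexer state at the start of each line, re-lex from the edited line, and stop as soon as a line’s start state comes out unchanged, since everything after it is already correct:

              ```rust
              /// Re-lex starting at the edited line, stopping once a line's start
              /// state is unchanged; returns how many lines were actually re-lexed.
              /// `lex_line` maps (state at line start, line text) -> state at line end.
              fn relex_from(
                  line_states: &mut Vec<i64>,
                  lines: &[&str],
                  edited: usize,
                  lex_line: impl Fn(i64, &str) -> i64,
              ) -> usize {
                  let mut relexed = 0;
                  let mut state = line_states[edited];
                  for i in edited..lines.len() {
                      state = lex_line(state, lines[i]);
                      relexed += 1;
                      if i + 1 < lines.len() {
                          if line_states[i + 1] == state {
                              break; // converged: the rest of the file is unaffected
                          }
                          line_states[i + 1] = state;
                      }
                  }
                  relexed
              }
              ```

              A typo inside one line converges after a single line; typing an unmatched `{` forces re-lexing to the end of the file. Async is the orthogonal piece: doing that loop off the critical path instead of before the next keystroke.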

              There’s a third approach, which is making the work so fast you don’t care. That’s what I’ve done for markdown parsing, and there’s no reason you couldn’t do a similar approach for syntax highlighting, autoindent, and all those other code-related tasks.

      2. 4

        So is the gist of it that OT relies on central servers and they all use OT rather than CRDT? That was not entirely clear to me.

        You can do OT p2p. In another lifetime I used to work on Groove, which was p2p and based on OT, but collaborative document editing was not its main focus. The article actually makes the less ambitious claim:

        To do OT well, you need to elect a centralized server, …

        [emphasis added]

        1. 1

          That’s correct, and there’s a much more nuanced discussion of exactly that point in the follow-up discussion in the GitHub issue.

        2. 3

          From what I’ve seen, Etherpad (or at least, etherpad-lite) has a central server and it just takes changes from each person and diffs them together. I think occasionally it decides there’s an authoritative revision and future revisions go based off that. It’s not actually terribly complicated. Of course, it also doesn’t scale that well and needs a central server, but that makes life a lot simpler than dealing with P2P stuff.
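          A hypothetical sketch (Rust, invented names, insertions only and a naive tie-break) of that central-server model: the server holds the authoritative document, each client change names the revision it was made against, and the server transforms it past any newer changes before applying it:

          ```rust
          /// The server's authoritative state: the document plus the ordered
          /// history of operations already applied.
          struct Server { doc: String, history: Vec<Op> }

          /// An insertion made by a client against revision `rev` (deletions
          /// would be handled analogously).
          struct Op { rev: usize, at: usize, text: String }

          impl Server {
              fn apply(&mut self, mut op: Op) {
                  // Transform the incoming op past every op it hasn't seen yet,
                  // i.e. everything applied since the revision it was based on.
                  for past in &self.history[op.rev..] {
                      if past.at <= op.at {
                          op.at += past.text.len();
                      }
                  }
                  self.doc.insert_str(op.at, &op.text);
                  op.rev = self.history.len();
                  self.history.push(op);
              }
          }
          ```

          The simplification the central server buys is that there is exactly one total order of operations to transform against; in a p2p setting each peer would have to reconcile a different history.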

        3. 1

          I wonder if using a tree structure (parsed AST) instead of text would make the problem simpler for working on source files. Concurrent edits within a leaf would be just as difficult, but maybe the fact that in other cases you have structural and possibly meta information might make it easier to do the right thing.

          1. 2

            Doing CRDT on tree structures is much harder than on linear arrays.