1. 6
  1. 3

    This guy is absolutely terrible at communicating his ideas.

    I agree that it would be nice to have rich content formats. As he puts it, “hypermedia” might be OK: it’s a medium that spans several dimensions of expression, and can thus be read in several ways.

    But his example is terrible, the execution is poor, his drawings are a joke, and even the argumentation is full of circumlocutions. It is obvious to me that this guy has no idea what he is talking about. He has no idea about the complexity of implementing what he is talking about (properly, I mean, not as a PoC).

    When we look at history and what finally happened, I contend that economic or political factors could sway computer science one way or another at a base level, but only for trivial stuff, like choosing one encoding over another because, for example, designing an ASIC capable of doing it in hardware costs less, even if the other encoding is better in pretty much every other way. But looking at the big picture, what won was simplicity, the least (overall) effort.

    This is, I think, what he means when he says that current content is in the format chosen by computer scientists. And this is true, because all technology evolves this way.

    His ideas are infeasible, IMO. I would love to receive an article one day that I could open in Mathematica to look at the graphs and play with the data, read the content in another interface, navigate the cited papers, and get more context about the authors, for example. But all of this is already possible. There is no need for a data structure specifically designed for it; that is an implementation detail, an abstraction leak. ZigZag is just a bad idea. It should not exist (and I can’t help but be extremely sceptical of the number of trademarks this guy uses, as if his ideas were worth anything).

    What I find weird, actually, is speaking about hypertext as if it were some kind of invention. The concept is just so trivial and so self-evident that no one invented it! What was invented was a proper grammar to describe the object and protocols to communicate about it. But the concept is trivially simple. The same goes for hypermedia, except that there the implementation is infeasible (in a standardized, content-agnostic way).

    1. 5

      This guy is absolutely terrible at communicating his ideas.

      Fair.

      The concept [of hypertext] is just so trivial and so self-evident

      This isn’t really accurate. Most people today don’t understand what hypertext is. (Just look at the comments on this thread!)

      The idea of navigable connections between ideas through some mechanism is trivial (assuming you’re familiar with the western cyclopedic tradition) & many people independently invented similar systems, but hypertext has a very specific set of rules that interact in a fairly nuanced way. (The web implements approximately one-half of one of these rules, which is the source of a lot of confusion.)

      He has no idea about the complexity of implementing what he is talking about

      He has a pretty clear idea of the complexity of implementing what he’s talking about, because he’s been in close communication with different teams of serious professional developers actually implementing versions of it for many years.

      It’s easier to implement a proper hypertext system than a modern web browser – but, where browsers have hundreds of developers, all of the implementations of Xanadu ideas since the mid-80s have (as far as I am aware) had teams of at most three people.

      His ideas are infeasible, IMO.

      They’ve been implemented. Implementations are being used internally.

      The core ideas are pretty straightforward to implement. (I’ve written open source implementations of them in my free time, after leaving the project.)

      The primary difficulty in implementing these things is poor public-facing documentation (because Ted wrote all the public-facing documentation, and he doesn’t separate technical ideas from rants & marketing material). This is why I wrote my own documentation.

      Once the concepts are understood, most of them can be implemented in an hour or two. (I know, because I did exactly that many times.)

      what won was simplicity, the least (overall) effort

      Take a look at any W3C standard and tell me, with a straight face, that simplicity won.

      What won was organic growth. In other words: instead of thinking carefully and seriously about how things should be designed, they went with their gut and used the design that came to mind most quickly. This gives them an edge in terms of communication: a stupid idea is much easier to communicate than a simple idea, because it will be as obvious to the person who hears it as it is to the person who says it. However, it’s a nightmare when it comes to maintainability, because poorly-thought-out designs are inflexible.

      In terms of the actual number of elements necessary & the actual amount of text required to explain it, hypertext is simpler than webtech. The effort in a hypertext or translit system is the fifteen minutes you spend thinking hard about how all the pieces fit together, while the effort in webtech is trying to figure out how to make a pile of mismatched pieces do something that shouldn’t be done in the first place a decade after you learned to use all of them.

    2. 3

      This is interesting, but another 20 years of software development continues to prove him wrong.

      The current dominant paradigm is flat, single-ordered lists, and search (perhaps augmented with tags like our dear lobste.rs here).

      This is even more of the bad stuff he’s railing against at the start of the article, but it’s the stuff that works, and there are innumerable other approaches dead or dying.

      I suspect that for UIs, less freedom is simpler (one button, one list, one query, one purpose, etc.), not the other way around.


      For developers, I think he was right, and it’s also what we’ve got today. It’s clearly preferable for developers to have a simple model to work against (Like URIs + JSON).

      apt-get install firefox (which unpacks to a resource identifier and a standardized, machine-readable package file) is quite probably as good as it gets. It’s a directed graph instead of an undirected graph like his zipper system, but undirected graphs require an unrealistic (and in my opinion probably harmful) amount of federation between producers of APIs and their consumers.

      1. 7

        When the pitch is “good computing is possible”, “bad computing has dominated” isn’t actually a great counterargument – particularly when the history of so much of it comes down to dumb luck, path dependence, tradeoffs between technical ability & marketing skills, and increasingly fast turnover and the dominance of increasingly inexperienced devs in the industry.

        If you’re trying to suggest that the way things shook out is actually ideal for users – I don’t know how to even start arguing against that. If you’re suggesting that it’s inevitable, then I can’t share that kind of cynicism because it would kill me.

        A better world is possible but nobody ever said it would be easy.

        1. 4

          Your comment is such a good expression of how I feel about the status quo! I was just having a similar discussion in another thread about source code, where I said “text is hugely limiting for working with source code”, and somebody objected with “but look at this grep-like tool, it’s totally enough for me”. I can understand when people raise practical objections to better tools (hard to get traction, hard to interface with existing systems etc.). What’s dispiriting is the refusal to even admit that better tools are possible.

          1. 2

            The mistake is believing that we’re anywhere close to status quo in software development. The tools and techniques used today are completely different from the tools we used 5 and 10 years ago, and are almost unrecognizable next to the tools and techniques used 40 and 50 years ago.

            Some stuff sticks around (keyboards are fast!), but other things change, and there is loads of innovative stuff going on all the time. With reference to visual programming: I recently spent a weekend playing with the Unreal 4 SDK’s block programming language (they call it Blueprints); it has fairly seamless C++ integration, and I was surprised by how nice it was for certain operations… You might also be interested in Scratch.

            Often, these systems are out there, already existing. Sometimes they’re not in the mainstream because of institutional momentum, but more often they’re not in the mainstream because they’re not good (the implementations or the ideas themselves).

            The proof of the pudding is in the eating.

            1. 4

              I don’t think I can agree with this. I’m pretty sure the “write code, compile, run” approach to writing code that is still in incredibly widespread use is over 40 years old. Smalltalk was developed in the 70s. Emacs was developed in the 70s. Turbo Pascal, which had an integrated compiler and editor, was released in the mid-80s (more than 30 years ago). CVS was developed in the mid-80s (more than 30 years ago). Borland Delphi and Microsoft Visual Studio, which were pretty much full-fledged IDEs, were released in the 90s (20 years ago). I could go on.

              What do we have now that’s qualitatively different from 20 years ago?

              1. 3

                Yup. Some very shallow things have changed but the big ideas in computing really all date to the 70s (and even the ‘radical’ ideas from the 70s still seem radical). I blame the churn: half of the industry has less than 10 years of experience, and degree programs don’t emphasize an in-depth understanding of the variety of ideas (focusing instead on the ‘royal road’ between Turing’s UTM paper and Java, while avoiding important but complicated side-quests into domains like computability).

                Somebody graduating with a CS degree today can be forgiven for thinking that the web is hypertext, because they didn’t really receive an education about it. Likewise, they can be forgiven for thinking (for example) that inheritance is a great way to do code reuse in large Java codebases – because they were taught this, despite the fact that everybody knows it isn’t true. And, because more than half their coworkers got fundamentally the same curriculum, they can stay blissfully unaware of all the possible (and actually existing) alternatives – and think that what they work with is anywhere from “all there is” to “the best possible system”.

                1. 1

                  I got your book of essays - interested in your thinking on these topics.

                  1. 1

                    Thanks!

                    There are more details in that, but I’m not sure whether or not they’ll be any more accessible than my explanation here.

                2. 3
                  • Most languages aren’t AOT compiled; there’s usually a JIT in place (if even that; Ruby and Python are run-time languages through and through). These languages did not exist 20 years ago, though their ancestors did (and died, and had some of the good bits resurrected; I use Clojure regularly, which is both modern and a throwback).

                  • Automated testing is very much the norm today; it was a fringe idea 10 years ago, something you were only crazy enough to do if you were building rockets or missiles or something.

                  • Packages and entire machines are regularly downloaded from the internet and executed in production. I had someone tell me that a docker image was the best way to distribute and run a desktop Linux application.

                  • Smartphones, and the old-as-new challenges of working around vendors locking them down.

                  • The year of the Linux desktop surely came sometime in the last or next 20 years.

                  • Near dominance of Linux in the cloud.

                  • Cloud computing and the tooling around it.

                  • The browser wars ended, though they started to heat up before the 20 year cutoff.

                  • The last days of Moore’s law and the 10 years it took most of the industry to realize the party was over.

                  • CUDA and, relatedly, the almost unbelievable advances in computer graphics. (Which we aren’t seeing in web/UI design; again, probably not for lack of trying, but maybe the right design hasn’t been struck.)

                  • Success with neural networks on some problem sets and their fledgling integration into other parts of the stack. Wondering when or if I’ll see an NN-based linter I can drop into Emacs.


                  I could go on too. QWERTY keyboards have been around for 150 years because they’re good enough and the alternatives aren’t better than having one standard. I don’t think that the fact that my computer has a QWERTY keyboard on it is an aberration or a failure, and not for lack of experimentation on my own part and on the part of others. Now if only we could do something about that caps lock key… Oh wait, I remapped it.


                  It’s easy to pick out the greatest hits in computer science from 20, 30, and 40 years ago. There’s a ton of survivorship bias, and you don’t point to all of those COBOL-alikes and stack-based languages which have all but vanished from the industry. If it seems like there’s no progress today, it’s only because it’s more difficult to pick the winners without the benefit of hindsight. There might be some innovation still buried that makes two-way linking better than one-way linking, but I don’t know what it is, and my opinion is that it doesn’t exist.

                  1. 3

                    Fair enough. Let me clarify my comment, which was narrowly focused on developer tools for no good reason.

                    There is no question that there have been massive advances in hardware, but I think the software is a lot more hit and miss.

                    In terms of advances on the software front, I would point to distributed storage in addition to cloud computing and machine learning. For end users, navigation and maps are finally really good too. There are probably hundreds of other specific examples like incredible technology for animated films.

                    I think my complaints are to do with the fact that most of the effort in the last 20 years seems to have been directed to reimplementing mainframes on top of the web. In many ways, there is churn without innovation. I do not see much change in software development either, as I mentioned in the previous comment (I don’t think automated testing counts), and it’s what I spend most of my time on so there’s an availability bias to my complaints. There is also very little progress in tools for information management and, for lack of a better word, “end user computing” (again, spreadsheets are very old news).

                    I think my perception is additionally coloured by the fact that we ended up with both smartphones and the web as channels for addictive consumption and advertising industry surveillance. It often feels like one step forward and ten back.

                    I hope this comment provides a more balanced perspective.

            2. 2

              In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

              Open source and the internet have given a ton of ideas a fair shake, including these ideas. Stuff is getting better (not worse). The two-way links thing is crummy, and you don’t have to take my word for it: you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

              1. 4

                In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.

                Dozens of people, and I’ve met or worked with approximately half of them. Post-web, the hypertext community is tiny. I can describe at length the problems preventing these implementations from becoming commercially successful, but none of them are that the underlying ideas are difficult or impractical.

                The two-way links thing is crummy, and you don’t have to take my word for it: you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.

                I wrote some of those systems, while working under the author of that paper. That’s how I formed my opinions.

                1. 1

                  That’s awesome. Maybe you can change my mind!

                  Directed graphs are more general than undirected graphs (you can implement two-way undirected graphs out of one-way arrows; you can’t go the other way around). Almost every level of the stack, from the tippy top of the application layer to the deepest depths of CPU caching and branch prediction, is implemented in terms of one-way arrows and abstractions; I find it difficult to believe that this is a mistake.


                  EDIT: I realized that ‘general’ in this case has a different meaning for a software developer than it does in mathematics, and here I was using the software developer’s perspective of “can be readily implemented using”. Mathematically, something is more general when it can be described with fewer terms or axioms. Undirected graphs are more general in the mathematical sense because you have to add arrowheads to an undirected graph to make a directed graph, but for the software developer it feels more obvious that you could get a “bidirected” graph by adding a backwards arrow to each forwards arrow. Implementing a directed graph from an undirected graph is difficult for a software developer because you have to figure out which way each arrow is supposed to go.
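
                  To make that concrete, here’s a rough Python sketch (toy code; the names are mine): adding the backwards arrows is mechanical, while orienting an undirected graph forces a per-edge decision.

                  ```python
                  from collections import defaultdict

                  # Toy sketch: derive a "bidirected" adjacency from one-way arrows.
                  def bidirect(directed_edges):
                      adj = defaultdict(set)
                      for a, b in directed_edges:
                          adj[a].add(b)
                          adj[b].add(a)  # add the backwards arrow for each forwards arrow
                      return adj

                  # Going the other way forces a decision for every edge {a, b}:
                  # which way should the arrow point?
                  def orient(undirected_edges, choose):
                      return [(a, b) if choose(a, b) else (b, a) for a, b in undirected_edges]

                  print(bidirect([("A", "B")]))                    # B reachable from A and vice versa
                  print(orient([("A", "B")], lambda a, b: a < b))  # [('A', 'B')]
                  ```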

                  1. 1

                    Bidirectional links are not undirected edges. The difference is not that direction is unknown – it’s that the edge is visible from whichever end of it you happen to be on.

                    (This is only hard on the web because HTML decided against linkbases in favor of embedded representations that must be mined by a third party in order to reverse them – which makes jump links a little bit easier to initially implement but screws over other forms of linking. The issue, essentially, is that with a naive host-centric way of performing jump links, no portion of the graph is actually known without mining.

                    Linkbases are literally the connection graph, and links are constructed from linkbases. In the XanaSpace/XanaduSpace model, you’ve got a bunch of arbitrary linkbases representing arbitrary subgraphs that are ‘resident’ – created by whoever and distributed however – and when a node intersects with one of the resident links, the connection is displayed and made navigable.

                    Also in this model a link might actually be a node in itself where it has multiple points on either side, or it might have zero end points on one side, but that’s a generalization & not necessarily interesting since it’s equivalent to all combinations of either end’s endsets.)

                    TL;DR: bidirectional links are not undirected links – merely links understood above the level of the contents of a single node.
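
                    If it helps, here’s a toy sketch in Python (my own naming, not project code) of what it means for an edge to live outside both documents and be visible from either end:

                    ```python
                    # A link is still a directed edge; it just lives in a linkbase,
                    # outside both documents, so it can be found from either end.
                    linkbase = [("doc_A", "doc_B"), ("doc_B", "doc_C")]

                    def links_touching(node):
                        """Every link visible from `node`, whether it is the source or the target."""
                        return [(src, dst) for (src, dst) in linkbase if node in (src, dst)]

                    print(links_touching("doc_B"))  # sees both links, without either document knowing
                    ```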

                    1. 1

                      OK then, how do you construct a graph out of a set of subgraphs? Is that construction also made of two-way links, thereby ensuring that every participant constructs the same graph?

                      1. 1

                        Participants are not guaranteed to construct the same graph, and the graphs aren’t guaranteed to even be fully connected. (The only difference between bidirectional links & jump links is that you can see both points.)

                        Instead, you get whatever collection of connected subgraphs are navigable from the linkbases you have resident (which are just lists of directed edges).
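
                        Roughly, in a toy Python sketch (my naming, not project code):

                        ```python
                        # Toy sketch: the navigable graph is just the union of whatever
                        # linkbases happen to be resident; it may well be disconnected.
                        def navigable(resident_linkbases):
                            edges = [edge for lb in resident_linkbases for edge in lb]
                            nodes = {n for edge in edges for n in edge}
                            return nodes, edges

                        print(navigable([[("A", "B")], [("C", "D"), ("D", "E")]]))
                        # two disconnected subgraphs: {A, B} and {C, D, E}
                        ```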

                        This particular kind of graph-theory analysis isn’t terribly meaningful for either the web or translit, since it’s the technical detail of how much work you have to do to get a link graph that differs, not the kind of graph itself. (Graph theory is useful for talking about ZigZag, but ZigZag is basically unrelated to translit / hypertext and is more like an everted tabular database.)

                        1. 1

                          I guess I’m trying to understand how this is better or different from what already exists. If it’s a curated list of one-way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now, time to throw one back and celebrate.

                          1. 1

                            I’m trying to understand how this is better or different from what already exists

                            Well, when the project started, none of what we have existed. This was the first attempt.

                            If it’s a curated list of one-way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now,

                            ‘Link’ doesn’t actually mean ‘URL’ in this sense. A link is an edge between two nodes – each of these nodes being a collection of positions within a document. So, a linkbase isn’t anything like a collection of URLs, but it’s a lot like a collection of pairs of URLs with an array of byte offsets & lengths affixed to each URL. (In fact, this is exactly what it is in the XanaSpace ODL model.) A URL by itself is only capable of creating a jump link, not a bidirectional link.

                            It’s not a matter of commenting on a URL, but of creating sharable lists of connections between sections of already-existing content. That’s the point of linking: that you can indicate a connection between two existing things without coordinating with any authors or owners.

                            URL-sharing sites like lobste.rs provide one quarter of that function: by coordinating with one site, you can share a URL to another site, but you don’t have control over either side beyond the level of an entire document (or, if you’re very lucky and the author put useful anchors, you can point to the beginning of a section on only the target side of the link).

                            1. 1

                              To take an example of a system which steps in the middle and does take greater control over both ends: Google’s AMP. I feel like it is one of the worst things anyone has ever tried to do to the internet in its entire existence.

                              Control-oriented systems like AMP, and to a lesser degree sharing sites like Imgur, Pinterest, Facebook, and soon (probably) Medium, represent existential threats to forums like lobste.rs.

                              So, in short, you’re really not selling me on why this two way links thing is better.

                              1. 2

                                We actually don’t have centralization like that in the system. (We sort of did in XU88 and XU92 but that stopped in the mid-80s.)

                                It’s not about controlling the ends. The edges are not part of the ends, and therefore the edges can be distributed and handled without permission from the ends.

                                Links are not part of a document. Links are an association between sections of documents. Therefore, it doesn’t make any sense to embed them in a document (and then require a big organization like Google to extract them and sell them back to you). Instead, people create connections between existing things & share them.

                                I’m having a hard time understanding what your understanding of bidirectional linking is, so let me get down to brass tacks & implementation details:

                                A link is a pair of spanpointers. A spanpointer is a document address, a byte offset from the beginning of the document, and a span length. Anyone can make one of these between any two things so long as you have the addresses. This doesn’t require control of either endpoint. It doesn’t require any third party to control anything either. I can write a link on a piece of paper and give it to you, and you can make the same link on your own computer, without any bits being transferred between our machines.
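
                                In code, that shape is tiny. A sketch in Python (the field names are mine, just to make it concrete):

                                ```python
                                from dataclasses import dataclass

                                @dataclass
                                class SpanPointer:
                                    document: str  # a permanent document address (e.g. a content hash)
                                    offset: int    # byte offset from the start of the document
                                    length: int    # span length in bytes

                                @dataclass
                                class Link:
                                    left: SpanPointer
                                    right: SpanPointer

                                # Anyone can write this down, even on paper, without touching either document:
                                link = Link(SpanPointer("address_of_doc_A", 120, 34),
                                            SpanPointer("address_of_doc_B", 980, 51))
                                ```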

                                We do not host the links. We do not host the endpoints. We don’t host anything. We let you see connections between documents.

                                Seeing connections between documents manifests in two ways:

                                1. transpointing windows – we draw a line between the sections that are linked together, and maybe color them the same color as the line
                                2. bidirectional navigation – since you can see the link from either side, you can go left instead of going right

                                It’s not about control, or centralization. Documents aren’t aware of their links.

                                The only requirement for bidirectional linking is that an address points to the same document forever. (This is a solved problem: ignore hosts & use content addressing, like IPFS.)

                                1. 1

                                  Wow, thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                  I still think we’ve got this, or could implement it on the existing web stack. I think any user could have implemented zig-zag links in a hierarchical Windows-style file structure since ’98, if not ’95. I think it’s informative that most users do not construct those links; who knows how many of us have tried it in the name of getting organized.

                                  I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                  I’m going to be thinking this over a lot more. A system like git stores the differences between documents instead of the documents themselves, so clearly there are places for other ways of relating documents to each other than what we’ve got, and they work!

                                  1. 3

                                    I should clarify: I’ve been describing bidirectional links in translit (aka hypertext or transliterature). ZigZag is actually a totally different (incompatible) system. The only similarity is that they’re both interactive methods of looking at associations between data invented by Ted Nelson.

                                    If we want to compare to existing stacks, transliterature is a kind of whole-document authoring and annotation thing like Word, while ZigZag is a personal database like Access – though in both cases the assumptions have been turned inside-out.

                                    You’re right that these things, once they’re understood, aren’t very difficult to implement. (I implemented open source versions of core data structures after leaving the project, specifically as demonstrations of this.)

                                    I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.

                                    Depending on how you chunk, a site like this has a whole host of items. I see a lot of characters, for instance. I see multiple buttons, and multiple jump links. We’ve sort of gotten used to a particular way of working with the web, so its inherent complexity is forgotten.

                                    thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.

                                    No problem! I feel like it’s my duty to explain Xanadu ideas because they’re explained so poorly elsewhere. I spent years trying to fully understand them from public documentation before I joined the project and got direct feedback, and I want to make it easier for other people to learn it than it was for me.

              2. 1

                I wouldn’t say so. What you have is more and more people using the same tools, so you will never get a “perfect” solution. Generally, nature doesn’t provide a perfect system, only “good enough to survive”. My partner and I are expecting a child at the moment, and the doctor has told us more than once: “This is not perfect, but nature doesn’t care about that. It just cares about good enough to get the job done.”

                Since I heard this statement, I see it everywhere. Also with computers. Our code and ways of working run a huge chunk of important systems, and somehow they work. Maybe they work because they are not perfect.

                I agree that things will change (“for the better”), but it will come in phases. There will be some bigger catastrophic event, and afterwards systems and tools will change and adapt. As long as everything sort of works, well, there is no big reason to change it (for the majority of people), since they can get the job done and then enjoy the sun, the beaches and human interaction.

                1. 1

                  Nobody’s complaining that we don’t have perfection here. We’re complaining about the remarkable absence of not-awful in projects by people who should know better.

              3. 3

                I think the best way to describe what we have is “Design by Pop Culture”. Our socio-economic system is a low-pass filter, distilling ideas until you can package them and sell them. Is it the best we got given those economic constraints? Maybe…

                But that’s like saying “Look, this is the best way to produce cotton, it’s the best we got” during the slave era… (slavery being a different socio-economic system)